Compare commits

43 Commits

Author SHA1 Message Date
Larissa Wandzura
e206ef01af cleaned up the table a bit 2025-12-03 14:11:06 -06:00
Larissa Wandzura
06bbf043c0 minor change 2025-11-21 15:37:53 -06:00
Larissa Wandzura
079100b081 ran prettier 2025-11-20 11:19:15 -06:00
Larissa Wandzura
8e11703e94 more updates based on feedback 2025-11-19 17:29:47 -06:00
Larissa Wandzura
03e9ffcc91 Updates based on David's feedback 2025-11-19 16:54:36 -06:00
Larissa Wandzura
d7c75b8343 Update docs/sources/datasources/concepts.md
Co-authored-by: David Harris <david.harris@grafana.com>
2025-11-19 14:22:08 -06:00
Larissa Wandzura
9ddd8a02c0 Update docs/sources/datasources/concepts.md
Co-authored-by: David Harris <david.harris@grafana.com>
2025-11-19 14:21:35 -06:00
Larissa Wandzura
1601995fda Update docs/sources/datasources/concepts.md
Co-authored-by: David Harris <david.harris@grafana.com>
2025-11-19 14:21:23 -06:00
Larissa Wandzura
af87c3d6f3 update based on feedback 2025-11-14 15:14:38 -06:00
Larissa Wandzura
9f9b82f5cf clarified plugin install answer 2025-11-14 14:59:12 -06:00
Larissa Wandzura
04a9888a96 removed Grafana version to avoid confusing users 2025-11-13 16:17:32 -06:00
Larissa Wandzura
07a758d84a updates based on feedback on draft 2025-11-13 16:12:35 -06:00
Larissa Wandzura
111af8b1a8 Update docs/sources/datasources/concepts.md
Co-authored-by: Anna Urbiztondo <anna.urbiztondo@grafana.com>
2025-11-12 13:20:22 -06:00
Larissa Wandzura
4c97e49fc5 cleaned up spelling and punctuation 2025-11-07 16:10:32 -06:00
Larissa Wandzura
10a291ec8b added new concepts doc 2025-11-07 16:04:10 -06:00
beejeebus
0e9fe9dc40 Register external datasource plugins on startup
Current code only registers core datasource k8s api groups.

Add external plugins.

Companion grafana-enterprise PR:

https://github.com/grafana/grafana-enterprise/pull/10125
2025-11-07 14:42:41 -05:00
Paul Marbach
90ddd922ad Chore: Cleanup panelMonitoring feature flag (#113530) 2025-11-07 14:04:42 -05:00
Moustafa Baiou
1e1adafeec Alerting: Add admission hooks for rules app (#113429)
This adds validating admission hooks to enforce the requirements on AlertRules and RecordingRules that are currently enforced through the provisioning service and storage mechanisms in preparation of a consistent validation in both legacy storage and unified storage. It also adds a mutating admission hook to the app to ensure that folder annotations and folder labels are kept in sync so we can perform label-selector lists.
2025-11-07 12:01:16 -05:00
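The mutating admission hook described above can be sketched as follows. This is a hypothetical stdlib-only illustration of the core idea — copying the folder reference from an annotation into a label so that label-selector list queries work — not the actual hook from the rules app; the key name and function signature are assumptions.

```go
package main

import "fmt"

// syncFolderLabel illustrates the mutating hook's core idea: keep a folder
// label in sync with the folder annotation so rules can be listed with a
// label selector. The key name "grafana.app/folder" is illustrative only.
func syncFolderLabel(annotations, labels map[string]string) map[string]string {
	const folderKey = "grafana.app/folder" // assumed key, for illustration
	if labels == nil {
		labels = map[string]string{}
	}
	if folder, ok := annotations[folderKey]; ok {
		labels[folderKey] = folder // mutate labels to match the annotation
	}
	return labels
}

func main() {
	ann := map[string]string{"grafana.app/folder": "team-a"}
	fmt.Println(syncFolderLabel(ann, nil))
}
```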
Paul Marbach
ecc9e9257e E2E: Prevent issue where certain times can cause test failures (#110196)
* E2E: Prevent issue where certain times can cause test failures

* re-enable first test
2025-11-07 11:34:11 -05:00
Paul Marbach
4fee8b34ad Suggestions: Refactor getPanelDataSummary into its own method (#113251)
* Suggestions: Refactor getPanelDataSummary into its own method

* restore order

* update some imports

* update codeowners
2025-11-07 11:33:13 -05:00
Roberto Jiménez Sánchez
02464c19b8 Provisioning: Add validation for Job specifications (#113590)
* Validate Job Specs

* Add comprehensive unit test coverage for job validator

- Added 8 new test cases to improve coverage from 88.9% to ~100%
- Tests for migrate action without options
- Tests for delete/move actions with resources (missing kind)
- Tests for move action with valid resources
- Tests for move/delete with both paths and resources
- Tests for move action with invalid source paths
- Tests for push action with valid paths

Now covers all validation paths including resource validation and
edge cases for all job action types.

* Add integration tests for job validation

Added comprehensive integration tests that verify the job validator properly
rejects invalid job specifications via the API:

- Test job without action (required field)
- Test job with invalid action
- Test pull job without pull options
- Test push job without push options
- Test push job with invalid branch name (consecutive dots)
- Test push job with path traversal attempt
- Test delete job without paths or resources
- Test delete job with invalid path (path traversal)
- Test move job without target path
- Test move job without paths or resources
- Test move job with invalid target path (path traversal)
- Test migrate job without migrate options
- Test valid pull job to ensure validation doesn't block legitimate requests

These tests verify that the admission controller properly validates job specs
before they are persisted, ensuring security (path traversal prevention) and
data integrity (required fields/options).

* Remove valid job test case from integration tests

Removed the positive test case as it's not necessary for validation testing.
The integration tests now focus solely on verifying that invalid job specs
are properly rejected by the admission controller.

* Fix movejob_test to expect validation error at creation time

Updated the 'move without target path' test to expect the job creation
to fail with a validation error, rather than expecting the job to be
created and then fail during execution.

This aligns with the new job validation logic which rejects invalid
job specs at the API admission control level (422 Unprocessable Entity)
before they can be persisted.

This is better behavior as it prevents invalid jobs from being created
in the first place, rather than allowing them to be created and then
failing during execution.

* Simplify action validation using slices.Contains

Replaced manual loop with slices.Contains for cleaner, more idiomatic Go code.
This reduces code complexity while maintaining the same validation logic.

- Added import for 'slices' package
- Replaced 8-line loop with 1-line slices.Contains call
- All unit tests pass

* Refactor job action validation in ValidateJob function

Removed the hardcoded valid actions array and simplified the validation logic. The function now directly appends an error for invalid actions, improving code clarity and maintainability. This change aligns with the recent updates to job validation, ensuring that invalid job specifications are properly handled.
2025-11-07 16:31:50 +00:00
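Two of the validation ideas in this PR — action validation via `slices.Contains` (replacing the manual loop) and path-traversal rejection — can be sketched roughly like this. The action list, names, and signatures are illustrative assumptions, not the actual provisioning code.

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// validActions is an assumed list for illustration; the real set lives in
// the provisioning job spec.
var validActions = []string{"pull", "push", "migrate", "delete", "move"}

// validateAction uses slices.Contains instead of a manual loop, as the
// commit describes.
func validateAction(action string) error {
	if !slices.Contains(validActions, action) {
		return fmt.Errorf("invalid action %q", action)
	}
	return nil
}

// isSafePath rejects paths containing a ".." segment, the kind of path
// traversal the admission-time validation guards against.
func isSafePath(p string) bool {
	for _, seg := range strings.Split(p, "/") {
		if seg == ".." {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validateAction("push"), isSafePath("dashboards/../../etc/passwd"))
}
```

Rejecting these specs at admission time returns 422 Unprocessable Entity before anything is persisted, which is the behavior the updated tests assert.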
Sven Grossmann
62129bb91f Search: Change copy to Search with Grafana Assistant (#113609) 2025-11-07 16:27:19 +00:00
Paul Marbach
3d8da61569 E2E: Improve ad-hoc filtering test (#113558)
* E2E: Improve ad-hoc filtering test

* remove unused import

* fix some table e2es after making getCell sync
2025-11-07 11:06:33 -05:00
Misi
d7d296df8e Fix: Return auth labels from /api/users/lookup (#113584)
* wip

* Return auth labels from /api/users/lookup

* Rename

* Address feedback

* Add more tests, fix tests

* Cleanup
2025-11-07 16:51:41 +01:00
Jean-Philippe Quéméner
305ed25896 fix(folders): add a circuit breaker to prevent infinite loops (#113596) 2025-11-07 14:32:17 +00:00
Yunwen Zheng
8b6cc211e9 Git Sync: Allow user disable push to configured branch (#113564)
* Git Sync: Allow user disable push to configured branch
2025-11-07 09:24:34 -05:00
Jean-Philippe Quéméner
1ca95cda4a fix(folders): prevent circular dependencies (#113595) 2025-11-07 14:19:55 +00:00
Alexa Vargas
e5ed003fb2 Dashboard Library: Add new "suggestedDashboards" feature toggle (#113591) 2025-11-07 13:38:59 +00:00
Jo
176b0f8b48 IAM: Refactor user org hooks to use MutateRequest API (#113392)
* update with mutation hooks

* add missing delete mutation
2025-11-07 14:36:53 +01:00
Juan Cabanas
33390a1483 LibraryPanels: Improve getAllLibraryElements filter performance (#113544) 2025-11-07 10:16:41 -03:00
Gabriel MABILLE
e90759e5af grafana-iam: enable dual writing for resource permissions (#112793)
* `grafana-iam`: enable dual writing for resource permissions

Co-authored-by: jguer <joao.guerreiro@grafana.com>

* copy paste mistake

* Reduce complexity

* nits to make the code easy to review

* Forgot to check the error

---------

Co-authored-by: jguer <joao.guerreiro@grafana.com>
2025-11-07 13:50:40 +01:00
Alex Khomenko
8cb5f5646a Provisioning: Fix miscellaneous issues with setting and displaying sync status (#113529)
* Provisioning: Preserve in progress job data

* Refactor code and cover more situations

* Fix linting

* Fix issue with remove path operation for started time

* Cleanup

* prettier

---------

Co-authored-by: Roberto Jimenez Sanchez <roberto.jimenez@grafana.com>
2025-11-07 12:27:25 +01:00
Seunghun Shin
c784de6ef5 Alerting: Add compressed periodic save for alert instances (#111803)
What is this feature?

This PR implements compressed periodic save for alert state storage, providing a more efficient alternative to regular periodic saves by grouping alert instances by rule UID and storing them using protobuf and snappy compression. When enabled via the state_compressed_periodic_save_enabled configuration option, the system groups alert instances by their alert rule, compresses each group using protobuf serialization and snappy compression, and processes all rules within a single database transaction at specified intervals instead of syncing after every alert evaluation cycle.

Why do we need this feature?

During discussions in PR #111357, we identified the need for a compressed approach to periodic alert state storage that could further reduce database load beyond the jitter mechanism. While the jitter feature distributes database operations over time, this compressed periodic save approach reduces the frequency of database operations by batching alert state updates at explicitly declared intervals rather than syncing after every alert evaluation cycle.
This approach provides several key benefits:

- Reduced Database Frequency: Instead of frequent sync operations tied to alert evaluation cycles, updates occur only at configured intervals
- Storage Efficiency: Rule-based grouping with protobuf and snappy compression significantly reduces storage requirements

The compressed periodic save complements the existing jitter mechanism by providing an alternative strategy focused on reducing overall database interaction frequency while maintaining data integrity through compression and batching.

Who is this feature for?

- Platform/Infrastructure teams managing large-scale Grafana deployments with high alert cardinality
- Organizations looking to optimize storage costs and database performance for alerting workloads
- Production environments with 1000+ alert rules where database write frequency is a concern
2025-11-07 11:51:48 +01:00
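The grouping-and-compression step described above can be sketched as below. This is a stdlib-only approximation: the PR uses protobuf serialization with snappy compression, while JSON and gzip stand in here so the sketch is self-contained; the type and field names are illustrative, not Grafana's actual types.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
	"fmt"
)

// AlertInstance is an illustrative stand-in for an alert state record.
type AlertInstance struct {
	RuleUID string            `json:"rule_uid"`
	Labels  map[string]string `json:"labels"`
	State   string            `json:"state"`
}

// compressByRule groups instances by rule UID and compresses each group
// into a single blob, so a periodic save can write one row per rule inside
// a single transaction instead of syncing after every evaluation cycle.
func compressByRule(instances []AlertInstance) (map[string][]byte, error) {
	groups := map[string][]AlertInstance{}
	for _, in := range instances {
		groups[in.RuleUID] = append(groups[in.RuleUID], in)
	}
	out := map[string][]byte{}
	for uid, group := range groups {
		raw, err := json.Marshal(group) // protobuf in the real implementation
		if err != nil {
			return nil, err
		}
		var buf bytes.Buffer
		zw := gzip.NewWriter(&buf) // snappy in the real implementation
		if _, err := zw.Write(raw); err != nil {
			return nil, err
		}
		if err := zw.Close(); err != nil {
			return nil, err
		}
		out[uid] = buf.Bytes() // one compressed blob per rule
	}
	return out, nil
}

func main() {
	blobs, err := compressByRule([]AlertInstance{
		{RuleUID: "r1", State: "Alerting"},
		{RuleUID: "r1", State: "Normal"},
		{RuleUID: "r2", State: "Normal"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(blobs)) // number of rule groups written per interval
}
```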
Jean-Philippe Quéméner
589435b7c2 fix(unified-storage): resource server tracing (#113582) 2025-11-07 11:51:32 +01:00
Gilles De Mey
b4d2d1eaf5 Alerting: Fix width of the code editor for Alertmanager configurations (#113541)
fix width of the code editor for Alertmanager configurations
2025-11-07 11:15:18 +01:00
Tobias Skarhed
36e28963d3 Scopes: Script for setting up gdev scope resources (#113448)
* Script for setting up gdev scope objects

* Script for setting up gdev scope objects

* Format

* Update codeowners

* Do a feature flag check

* Formatting

* Remove FF check, because creation is explicit anyways

* Formatting
2025-11-07 10:56:16 +01:00
Ida Štambuk
942b847952 CloudWatch: Add anomaly command to language support, add documentation for anomaly queries (#113311) 2025-11-07 09:54:24 +00:00
Elliot Kirk
488423abfc Icons: add hand pointer icon (#113255)
add hand pointer icon
2025-11-07 09:53:42 +00:00
Roberto Jiménez Sánchez
f75c853b90 Provisioning: Update slog-gokit to v0.1.5 to fix data race (#113455)
* Use fork of slog-gokit to fix data race

Replace github.com/tjhop/slog-gokit with fork that includes fix for data race in handler.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Update workspace

* Bump github.com/tjhop/slog-gokit to v0.1.5

* Update go.sum

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-07 09:47:53 +00:00
Ida Štambuk
4bbbd19049 CloudWatch: Make match exact toggle false by default (#113314) 2025-11-07 10:30:24 +01:00
Nathan Vērzemnieks
f4b23253b1 DataSources: Update SDKs in support of auth service (#112101)
* DataSources: Update SDKs for auth service

* Fix deprecated methods & types for new arrow-go version
2025-11-07 10:15:27 +01:00
Erik Sundell
06e1c83276 Chore: Bump plugin-e2e (#113578)
* bump plugin-e2e

* use plugin-e2e selector

* update lock file
2025-11-07 10:11:05 +01:00
Moustafa Baiou
54041155bd fix import path for annotation app 2025-11-06 19:33:12 -05:00
128 changed files with 5080 additions and 1842 deletions

.github/CODEOWNERS

@@ -227,6 +227,7 @@
/devenv/datasources.yaml @grafana/grafana-backend-group
/devenv/datasources_docker.yaml @grafana/grafana-backend-group
/devenv/dev-dashboards-without-uid/ @grafana/dashboards-squad
+/devenv/scopes/ @grafana/grafana-operator-experience-squad
/devenv/dev-dashboards/annotations @grafana/dataviz-squad
/devenv/dev-dashboards/migrations @grafana/dataviz-squad
@@ -253,7 +254,6 @@
/devenv/dev-dashboards/all-panels.json @grafana/dataviz-squad
/devenv/dev-dashboards/dashboards.go @grafana/dataviz-squad
/devenv/dev-dashboards/home.json @grafana/dataviz-squad
/devenv/dev-dashboards/datasource-elasticsearch/ @grafana/partner-datasources
/devenv/dev-dashboards/datasource-opentsdb/ @grafana/partner-datasources
/devenv/dev-dashboards/datasource-influxdb/ @grafana/partner-datasources
@@ -549,6 +549,7 @@ i18next.config.ts @grafana/grafana-frontend-platform
/packages/grafana-data/src/geo/ @grafana/dataviz-squad
/packages/grafana-data/src/monaco/ @grafana/partner-datasources
/packages/grafana-data/src/panel/ @grafana/dashboards-squad
+/packages/grafana-data/src/panel/suggestions/ @grafana/dataviz-squad
/packages/grafana-data/src/query/ @grafana/grafana-datasources-core-services
/packages/grafana-data/src/rbac/ @grafana/access-squad
/packages/grafana-data/src/table/ @grafana/dataviz-squad

go.mod

@@ -68,13 +68,13 @@ require (
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/at-wat/mqtt-go v0.19.4 // indirect
github.com/aws/aws-sdk-go v1.55.7 // indirect
-github.com/aws/aws-sdk-go-v2 v1.38.1 // indirect
-github.com/aws/aws-sdk-go-v2/credentials v1.18.6 // indirect
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4 // indirect
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4 // indirect
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0 // indirect
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4 // indirect
-github.com/aws/aws-sdk-go-v2/service/sts v1.38.0 // indirect
+github.com/aws/aws-sdk-go-v2 v1.39.1 // indirect
+github.com/aws/aws-sdk-go-v2/credentials v1.18.14 // indirect
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 // indirect
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 // indirect
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 // indirect
+github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 // indirect
github.com/aws/smithy-go v1.23.1 // indirect
github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df // indirect
github.com/benbjohnson/clock v1.3.5 // indirect
@@ -91,7 +91,6 @@ require (
github.com/cloudflare/circl v1.6.1 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
-github.com/cpuguy83/go-md2man/v2 v2.0.7 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/diegoholiveira/jsonlogic/v3 v3.7.4 // indirect
@@ -114,7 +113,7 @@ require (
github.com/go-jose/go-jose/v4 v4.1.2 // indirect
github.com/go-kit/log v0.2.1 // indirect
github.com/go-ldap/ldap/v3 v3.4.4 // indirect
-github.com/go-logfmt/logfmt v0.6.0 // indirect
+github.com/go-logfmt/logfmt v0.6.1 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/analysis v0.24.0 // indirect
@@ -159,7 +158,7 @@ require (
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 // indirect
-github.com/grafana/grafana-aws-sdk v1.2.0 // indirect
+github.com/grafana/grafana-aws-sdk v1.3.0 // indirect
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1 // indirect
github.com/grafana/grafana/apps/plugins v0.0.0 // indirect
github.com/grafana/grafana/apps/provisioning v0.0.0 // indirect
@@ -199,7 +198,6 @@ require (
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/lestrrat-go/strftime v1.0.4 // indirect
github.com/lib/pq v1.10.9 // indirect
-github.com/magefile/mage v1.15.0 // indirect
github.com/mailru/easyjson v0.9.0 // indirect
github.com/mattetti/filebuffer v1.0.1 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
@@ -252,7 +250,6 @@ require (
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rs/cors v1.11.1 // indirect
-github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c // indirect
@@ -265,11 +262,7 @@ require (
github.com/stretchr/objx v0.5.2 // indirect
github.com/tetratelabs/wazero v1.8.2 // indirect
github.com/thomaspoignant/go-feature-flag v1.42.0 // indirect
-github.com/tjhop/slog-gokit v0.1.3 // indirect
-github.com/unknwon/bra v0.0.0-20200517080246-1e3013ecaff8 // indirect
-github.com/unknwon/com v1.0.1 // indirect
-github.com/unknwon/log v0.0.0-20200308114134-929b1006e34a // indirect
-github.com/urfave/cli v1.22.17 // indirect
+github.com/tjhop/slog-gokit v0.1.5 // indirect
github.com/woodsbury/decimal128 v1.3.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/zeebo/xxh3 v1.0.2 // indirect
@@ -319,7 +312,6 @@ require (
google.golang.org/protobuf v1.36.10 // indirect
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
-gopkg.in/fsnotify/fsnotify.v1 v1.4.7 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/mail.v2 v2.3.1 // indirect

go.sum

@@ -173,42 +173,42 @@ github.com/aws/aws-sdk-go v1.17.7/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN
github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE=
github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
-github.com/aws/aws-sdk-go-v2 v1.38.1 h1:j7sc33amE74Rz0M/PoCpsZQ6OunLqys/m5antM0J+Z8=
-github.com/aws/aws-sdk-go-v2 v1.38.1/go.mod h1:9Q0OoGQoboYIAJyslFyF1f5K1Ryddop8gqMhWx/n4Wg=
+github.com/aws/aws-sdk-go-v2 v1.39.1 h1:fWZhGAwVRK/fAN2tmt7ilH4PPAE11rDj7HytrmbZ2FE=
+github.com/aws/aws-sdk-go-v2 v1.39.1/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
-github.com/aws/aws-sdk-go-v2/config v1.31.2 h1:NOaSZpVGEH2Np/c1toSeW0jooNl+9ALmsUTZ8YvkJR0=
-github.com/aws/aws-sdk-go-v2/config v1.31.2/go.mod h1:17ft42Yb2lF6OigqSYiDAiUcX4RIkEMY6XxEMJsrAes=
-github.com/aws/aws-sdk-go-v2/credentials v1.18.6 h1:AmmvNEYrru7sYNJnp3pf57lGbiarX4T9qU/6AZ9SucU=
-github.com/aws/aws-sdk-go-v2/credentials v1.18.6/go.mod h1:/jdQkh1iVPa01xndfECInp1v1Wnp70v3K4MvtlLGVEc=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4 h1:lpdMwTzmuDLkgW7086jE94HweHCqG+uOJwHf3LZs7T0=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4/go.mod h1:9xzb8/SV62W6gHQGC/8rrvgNXU6ZoYM3sAIJCIrXJxY=
+github.com/aws/aws-sdk-go-v2/config v1.31.10 h1:7LllDZAegXU3yk41mwM6KcPu0wmjKGQB1bg99bNdQm4=
+github.com/aws/aws-sdk-go-v2/config v1.31.10/go.mod h1:Ge6gzXPjqu4v0oHvgAwvGzYcK921GU0hQM25WF/Kl+8=
+github.com/aws/aws-sdk-go-v2/credentials v1.18.14 h1:TxkI7QI+sFkTItN/6cJuMZEIVMFXeu2dI1ZffkXngKI=
+github.com/aws/aws-sdk-go-v2/credentials v1.18.14/go.mod h1:12x4Uw/vijC11XkctTjy92TNCQ+UnNJkT7fzX0Yd93E=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 h1:gLD09eaJUdiszm7vd1btiQUYE0Hj+0I2b8AS+75z9AY=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8/go.mod h1:4RW3oMPt1POR74qVOC4SbubxAwdP4pCT0nSw3jycOU4=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 h1:cTXRdLkpBanlDwISl+5chq5ui1d1YWg4PWMR9c3kXyw=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84/go.mod h1:kwSy5X7tfIHN39uucmjQVs2LvDdXEjQucgQQEqCggEo=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4 h1:IdCLsiiIj5YJ3AFevsewURCPV+YWUlOW8JiPhoAy8vg=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4/go.mod h1:l4bdfCD7XyyZA9BolKBo1eLqgaJxl0/x91PL4Yqe0ao=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4 h1:j7vjtr1YIssWQOMeOWRbh3z8g2oY/xPjnZH2gLY4sGw=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4/go.mod h1:yDmJgqOiH4EA8Hndnv4KwAo8jCGTSnM5ASG1nBI+toA=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 h1:6bgAZgRyT4RoFWhxS+aoGMFyE0cD1bSzFnEEi4bFPGI=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8/go.mod h1:KcGkXFVU8U28qS4KvLEcPxytPZPBcRawaH2Pf/0jptE=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 h1:HhJYoES3zOz34yWEpGENqJvRVPqpmJyR3+AFg9ybhdY=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8/go.mod h1:JnA+hPWeYAVbDssp83tv+ysAG8lTfLVXvSsyKg/7xNA=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36/go.mod h1:gDhdAV6wL3PmPqBhiPbnlS447GoWs8HTTOYef9/9Inw=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0 h1:6+lZi2JeGKtCraAj1rpoZfKqnQ9SptseRZioejfUOLM=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0/go.mod h1:eb3gfbVIxIoGgJsi9pGne19dhCBpK6opTYpQqAmdy44=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 h1:oegbebPEMA/1Jny7kvwejowCaHz1FWZAQ94WXFNCyTM=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1/go.mod h1:kemo5Myr9ac0U9JfSjMo9yHLtw+pECEHsFtJ9tqCEI8=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 h1:nAP2GYbfh8dd2zGZqFRSMlq+/F6cMPBUuCsGAMkN074=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4/go.mod h1:LT10DsiGjLWh4GbjInf9LQejkYEhBgBCjLG5+lvk4EE=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4 h1:ueB2Te0NacDMnaC+68za9jLwkjzxGWm0KB5HTUHjLTI=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4/go.mod h1:nLEfLnVMmLvyIG58/6gsSA03F1voKGaCfHV7+lR8S7s=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 h1:M6JI2aGFEzYxsF6CXIuRBnkge9Wf9a2xU39rNeXgu10=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8/go.mod h1:Fw+MyTwlwjFsSTE31mH211Np+CUslml8mzc0AFEG09s=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 h1:qcLWgdhq45sDM9na4cvXax9dyLitn8EYBRl8Ak4XtG4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17/go.mod h1:M+jkjBFZ2J6DJrjMv2+vkBbuht6kxJYtJiwoVgX4p4U=
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0 h1:0reDqfEN+tB+sozj2r92Bep8MEwBZgtAXTND1Kk9OXg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0/go.mod h1:kUklwasNoCn5YpyAqC/97r6dzTA1SRKJfKq16SXeoDU=
-github.com/aws/aws-sdk-go-v2/service/sso v1.28.2 h1:ve9dYBB8CfJGTFqcQ3ZLAAb/KXWgYlgu/2R2TZL2Ko0=
-github.com/aws/aws-sdk-go-v2/service/sso v1.28.2/go.mod h1:n9bTZFZcBa9hGGqVz3i/a6+NG0zmZgtkB9qVVFDqPA8=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2 h1:pd9G9HQaM6UZAZh19pYOkpKSQkyQQ9ftnl/LttQOcGI=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2/go.mod h1:eknndR9rU8UpE/OmFpqU78V1EcXPKFTTm5l/buZYgvM=
-github.com/aws/aws-sdk-go-v2/service/sts v1.38.0 h1:iV1Ko4Em/lkJIsoKyGfc0nQySi+v0Udxr6Igq+y9JZc=
-github.com/aws/aws-sdk-go-v2/service/sts v1.38.0/go.mod h1:bEPcjW7IbolPfK67G1nilqWyoxYMSPrDiIQ3RdIdKgo=
+github.com/aws/aws-sdk-go-v2/service/sso v1.29.4 h1:FTdEN9dtWPB0EOURNtDPmwGp6GGvMqRJCAihkSl/1No=
+github.com/aws/aws-sdk-go-v2/service/sso v1.29.4/go.mod h1:mYubxV9Ff42fZH4kexj43gFPhgc/LyC7KqvUKt1watc=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 h1:I7ghctfGXrscr7r1Ga/mDqSJKm7Fkpl5Mwq79Z+rZqU=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0/go.mod h1:Zo9id81XP6jbayIFWNuDpA6lMBWhsVy+3ou2jLa4JnA=
+github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 h1:+LVB0xBqEgjQoqr9bGZbRzvg212B0f17JdflleJRNR4=
+github.com/aws/aws-sdk-go-v2/service/sts v1.38.5/go.mod h1:xoaxeqnnUaZjPjaICgIy5B+MHCSb/ZSOn4MvkFNOUA0=
github.com/aws/smithy-go v1.23.1 h1:sLvcH6dfAFwGkHLZ7dGiYF7aK6mg4CgKA/iDKjLDt9M=
github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27 h1:60m4tnanN1ctzIu4V3bfCNJ39BiOPSm1gHFlFjTkRE0=
@@ -334,10 +334,7 @@ github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03V
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
-github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
-github.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo=
-github.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cznic/b v0.0.0-20180115125044-35e9bbe41f07/go.mod h1:URriBxXwVq5ijiJ12C7iIZqlA69nTlI+LgI6/pwftG8=
github.com/cznic/fileutil v0.0.0-20180108211300-6a051e75936f/go.mod h1:8S58EK26zhXSxzv7NQFpnliaOQsmDUxvoQO3rt154Vg=
@@ -456,8 +453,8 @@ github.com/go-ldap/ldap/v3 v3.4.4/go.mod h1:fe1MsuN5eJJ1FeLT/LEBVdWfNWKh459R7aXg
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
-github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
-github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
+github.com/go-logfmt/logfmt v0.6.1 h1:4hvbpePJKnIzH1B+8OR/JPbTx37NktoI9LE2QZBBkvE=
+github.com/go-logfmt/logfmt v0.6.1/go.mod h1:EV2pOAQoZaT1ZXZbqDl5hrymndi4SY9ED9/z6CO0XAk=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
@@ -663,9 +660,6 @@ github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
-github.com/gopherjs/gopherjs v0.0.0-20181103185306-d547d1d9531e/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
-github.com/gopherjs/gopherjs v1.17.2 h1:fQnZVsXk8uxXIStYb0N4bGk7jeyTalG/wsZjQ25dO0g=
-github.com/gopherjs/gopherjs v1.17.2/go.mod h1:pRRIvn/QzFLrKfvEz3qUuEhtE/zLCWfreZ6J5gM2i+k=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.1/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
@@ -689,8 +683,8 @@ github.com/grafana/grafana-app-sdk v0.48.1 h1:bKJadWH18WCpJ+Zk8AezRFXCcZgGredRv+
github.com/grafana/grafana-app-sdk v0.48.1/go.mod h1:5LljCz+wvmGfkQ8ZKTOfserhtXNEF0cSFthoWShvN6c=
github.com/grafana/grafana-app-sdk/logging v0.48.1 h1:veM0X5LAPyN3KsDLglWjIofndbGuf7MqnrDuDN+F/Ng=
github.com/grafana/grafana-app-sdk/logging v0.48.1/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
-github.com/grafana/grafana-aws-sdk v1.2.0 h1:LLR4/g91WBuCRwm2cbWfCREq565+GxIFe08nqqIcIuw=
-github.com/grafana/grafana-aws-sdk v1.2.0/go.mod h1:bBo7qOmM3f61vO+2JxTolNUph1l2TmtzmWcU9/Im+8A=
+github.com/grafana/grafana-aws-sdk v1.3.0 h1:/bfJzP93rCel1GbWoRSq0oUo424MZXt8jAp2BK9w8tM=
+github.com/grafana/grafana-aws-sdk v1.3.0/go.mod h1:VGycF0JkCGKND2O5je1ucOqPJ0ZNhZYzV3c2bNBAaGk=
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1 h1:FFcEA01tW+SmuJIuDbHOdgUBL+d7DPrZ2N4zwzPhfGk=
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1/go.mod h1:Oi4anANlCuTCc66jCyqIzfVbgLXFll8Wja+Y4vfANlc=
github.com/grafana/grafana-plugin-sdk-go v0.281.0 h1:V8dGyatzcOLQeivFhBV2JWMwTSZH/clDnpfKG9p3dTA=
@@ -833,9 +827,6 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jszwedko/go-datemath v0.1.1-0.20230526204004-640a500621d6 h1:SwcnSwBR7X/5EHJQlXBockkJVIMRVt5yKaesBPMtyZQ=
github.com/jszwedko/go-datemath v0.1.1-0.20230526204004-640a500621d6/go.mod h1:WrYiIuiXUMIvTDAQw97C+9l0CnBmCcvosPjN3XDqS/o=
-github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
-github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
-github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8=
@@ -882,8 +873,6 @@ github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/madflojo/testcerts v1.4.0 h1:I09gN0C1ly9IgeVNcAqKk8RAKIJTe3QnFrrPBDyvzN4=
github.com/madflojo/testcerts v1.4.0/go.mod h1:MW8sh39gLnkKh4K0Nc55AyHEDl9l/FBLDUsQhpmkuo0=
-github.com/magefile/mage v1.15.0 h1:BvGheCMAsG3bWUDbZ8AyXXpCNwU9u5CB6sM+HNb9HYg=
-github.com/magefile/mage v1.15.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/magiconair/properties v1.8.6/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4=
github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
@@ -1115,8 +1104,6 @@ github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0t
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
-github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sagikazarmark/crypt v0.6.0/go.mod h1:U8+INwJo3nBv1m6A/8OBXAq7Jnpspk5AxSgDyEQcea8=
@@ -1132,7 +1119,6 @@ github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c h1:aqg5Vm5dwtvL+YgDpBcK1ITf3o96N/K7/wsRXQnUTEs=
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c/go.mod h1:owqhoLW1qZoYLZzLnBw+QkPP9WZnjlSWihhxAJC1+/M=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/vfsgen v0.0.0-20230704071429-0000e147ea92 h1:OfRzdxCzDhp+rsKWXuOO2I/quKMJ/+TQwVbIP/gltZg=
github.com/shurcooL/vfsgen v0.0.0-20230704071429-0000e147ea92/go.mod h1:7/OT02F6S6I7v6WXb+IjhMuZEYfH/RJ5RwEWnEo5BMg=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
@@ -1141,10 +1127,6 @@ github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6Mwd
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/smartystreets/assertions v0.0.0-20190116191733-b6c0e53d7304 h1:Jpy1PXuP99tXNrhbq2BaPz9B+jNAvH1JPQQpG/9GCXY=
github.com/smartystreets/assertions v0.0.0-20190116191733-b6c0e53d7304/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20181108003508-044398e4856c h1:Ho+uVpkel/udgjbwB5Lktg9BtvJSh2DT0Hi6LPSyI2w=
github.com/smartystreets/goconvey v0.0.0-20181108003508-044398e4856c/go.mod h1:XDJAKZRPZ1CvBcN2aX5YOUTYGHki24fSF0Iv48Ibg0s=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
@@ -1189,7 +1171,6 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.4.1/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0=
@@ -1202,8 +1183,8 @@ github.com/thejerf/slogassert v0.3.4/go.mod h1:0zn9ISLVKo1aPMTqcGfG1o6dWwt+Rk574
github.com/thomaspoignant/go-feature-flag v1.42.0 h1:C7embmOTzaLyRki+OoU2RvtVjJE9IrvgBA2C1mRN1lc=
github.com/thomaspoignant/go-feature-flag v1.42.0/go.mod h1:y0QiWH7chHWhGATb/+XqwAwErORmPSH2MUsQlCmmWlM=
github.com/tidwall/pretty v0.0.0-20180105212114-65a9db5fad51/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tjhop/slog-gokit v0.1.3 h1:6SdexP3UIeg93KLFeiM1Wp1caRwdTLgsD/THxBUy1+o=
github.com/tjhop/slog-gokit v0.1.3/go.mod h1:Bbu5v2748qpAWH7k6gse/kw3076IJf6owJmh7yArmJs=
github.com/tjhop/slog-gokit v0.1.5 h1:ayloIUi5EK2QYB8eY4DOPO95/mRtMW42lUkp3quJohc=
github.com/tjhop/slog-gokit v0.1.5/go.mod h1:yA48zAHvV+Sg4z4VRyeFyFUNNXd3JY5Zg84u3USICq0=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75 h1:6fotK7otjonDflCTK0BCfls4SPy3NcCVb5dqqmbRknE=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75/go.mod h1:KO6IkyS8Y3j8OdNO85qEYBsRPuteD+YciPomcXdrMnk=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
@@ -1213,16 +1194,6 @@ github.com/uber/jaeger-lib v2.4.1+incompatible h1:td4jdvLcExb4cBISKIpHuGoVXh+dVK
github.com/uber/jaeger-lib v2.4.1+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/unknwon/bra v0.0.0-20200517080246-1e3013ecaff8 h1:aVGB3YnaS/JNfOW3tiHIlmNmTDg618va+eT0mVomgyI=
github.com/unknwon/bra v0.0.0-20200517080246-1e3013ecaff8/go.mod h1:fVle4kNr08ydeohzYafr20oZzbAkhQT39gKK/pFQ5M4=
github.com/unknwon/com v1.0.1 h1:3d1LTxD+Lnf3soQiD4Cp/0BRB+Rsa/+RTvz8GMMzIXs=
github.com/unknwon/com v1.0.1/go.mod h1:tOOxU81rwgoCLoOVVPHb6T/wt8HZygqH5id+GNnlCXM=
github.com/unknwon/log v0.0.0-20150304194804-e617c87089d3/go.mod h1:1xEUf2abjfP92w2GZTV+GgaRxXErwRXcClbUwrNJffU=
github.com/unknwon/log v0.0.0-20200308114134-929b1006e34a h1:vcrhXnj9g9PIE+cmZgaPSwOyJ8MAQTRmsgGrB0x5rF4=
github.com/unknwon/log v0.0.0-20200308114134-929b1006e34a/go.mod h1:1xEUf2abjfP92w2GZTV+GgaRxXErwRXcClbUwrNJffU=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.17 h1:SYzXoiPfQjHBbkYxbew5prZHS1TOLT3ierW8SYLqtVQ=
github.com/urfave/cli v1.22.17/go.mod h1:b0ht0aqgH/6pBYzzxURyrM4xXNgsoT/n2ZzwQiEhNVo=
github.com/wk8/go-ordered-map v1.0.0 h1:BV7z+2PaK8LTSd/mWgY12HyMAo5CEgkHqbkVq2thqr8=
github.com/wk8/go-ordered-map/v2 v2.1.8 h1:5h/BUHu93oj4gIdvHHHGsScSTMijfx5PeYkE/fJgbpc=
github.com/wk8/go-ordered-map/v2 v2.1.8/go.mod h1:5nJHM5DyteebpVlHnWMV0rPz6Zp7+xBAnxjb1X5vnTw=
@@ -1528,7 +1499,6 @@ golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191020152052-9984515f0562/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1892,8 +1862,6 @@ gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/fsnotify/fsnotify.v1 v1.4.7 h1:XNNYLJHt73EyYiCZi6+xjupS9CpvmiDgjPTAjrBlQbo=
gopkg.in/fsnotify/fsnotify.v1 v1.4.7/go.mod h1:Fyux9zXlo4rWoMSIzpn9fDAYjalPqJ/K1qJ27s+7ltE=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=

View File

@@ -8,7 +8,16 @@ spec:
preferredVersion: v0alpha1
versions:
- kinds:
- conversion: false
- admission:
mutation:
operations:
- CREATE
- UPDATE
validation:
operations:
- CREATE
- UPDATE
conversion: false
kind: AlertRule
plural: AlertRules
schemas:
@@ -214,7 +223,16 @@ spec:
- spec.panelRef.dashboardUID
- spec.panelRef.panelID
- spec.notificationSettings.receiver
- conversion: false
- admission:
mutation:
operations:
- CREATE
- UPDATE
validation:
operations:
- CREATE
- UPDATE
conversion: false
kind: RecordingRule
plural: RecordingRules
schemas:

View File

@@ -5,6 +5,7 @@ go 1.25.3
require (
github.com/grafana/grafana-app-sdk v0.48.1
github.com/grafana/grafana-app-sdk/logging v0.48.1
github.com/prometheus/common v0.67.1
k8s.io/apimachinery v0.34.1
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912
)
@@ -49,7 +50,6 @@ require (
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.23.2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.67.1 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/puzpuzpuz/xsync/v2 v2.5.1 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect

View File

@@ -13,6 +13,18 @@ alertRulev0alpha1: alertRuleKind & {
schema: {
spec: v0alpha1.AlertRuleSpec
}
validation: {
operations: [
"CREATE",
"UPDATE",
]
}
mutation: {
operations: [
"CREATE",
"UPDATE",
]
}
selectableFields: [
"spec.title",
"spec.paused",

View File

@@ -13,6 +13,18 @@ recordingRulev0alpha1: recordingRuleKind & {
schema: {
spec: v0alpha1.RecordingRuleSpec
}
validation: {
operations: [
"CREATE",
"UPDATE",
]
}
mutation: {
operations: [
"CREATE",
"UPDATE",
]
}
selectableFields: [
"spec.title",
"spec.paused",

View File

@@ -3,6 +3,7 @@ package v0alpha1
import (
"fmt"
"slices"
"time"
)
func (o *AlertRule) GetProvenanceStatus() string {
@@ -48,4 +49,78 @@ func (s *AlertRuleSpec) ExecErrStateOrDefault() string {
return s.ExecErrState
}
// TODO: add duration clamping for the field types AlertRulePromDuration, AlertRulePromDurationWMillis, and the For and KeepFiringFor string pointers
func (d *AlertRulePromDuration) ToDuration() (time.Duration, error) {
return ToDuration(string(*d))
}
func (d *AlertRulePromDurationWMillis) ToDuration() (time.Duration, error) {
return ToDuration(string(*d))
}
func (d *AlertRulePromDuration) Clamp() error {
clampedDuration, err := ClampDuration(string(*d))
if err != nil {
return err
}
*d = AlertRulePromDuration(clampedDuration)
return nil
}
func (d *AlertRulePromDurationWMillis) Clamp() error {
clampedDuration, err := ClampDuration(string(*d))
if err != nil {
return err
}
*d = AlertRulePromDurationWMillis(clampedDuration)
return nil
}
func (spec *AlertRuleSpec) ClampDurations() error {
// clamp all duration fields
if err := spec.Trigger.Interval.Clamp(); err != nil {
return err
}
if spec.For != nil {
clamped, err := ClampDuration(*spec.For)
if err != nil {
return err
}
spec.For = &clamped
}
if spec.KeepFiringFor != nil {
clamped, err := ClampDuration(*spec.KeepFiringFor)
if err != nil {
return err
}
spec.KeepFiringFor = &clamped
}
if spec.NotificationSettings != nil {
if spec.NotificationSettings.GroupWait != nil {
if err := spec.NotificationSettings.GroupWait.Clamp(); err != nil {
return err
}
}
if spec.NotificationSettings.GroupInterval != nil {
if err := spec.NotificationSettings.GroupInterval.Clamp(); err != nil {
return err
}
}
if spec.NotificationSettings.RepeatInterval != nil {
if err := spec.NotificationSettings.RepeatInterval.Clamp(); err != nil {
return err
}
}
}
for k, expr := range spec.Expressions {
if expr.RelativeTimeRange != nil {
if err := expr.RelativeTimeRange.From.Clamp(); err != nil {
return err
}
if err := expr.RelativeTimeRange.To.Clamp(); err != nil {
return err
}
spec.Expressions[k] = expr
}
}
return nil
}

View File

@@ -1,10 +1,22 @@
package v0alpha1
import (
"fmt"
"time"
prom_model "github.com/prometheus/common/model"
)
const (
InternalPrefix = "grafana.com/"
GroupLabelKey = InternalPrefix + "group"
GroupIndexLabelKey = GroupLabelKey + "-index"
ProvenanceStatusAnnotationKey = InternalPrefix + "provenance"
// Copy of the max title length used in legacy validation path
AlertRuleMaxTitleLength = 190
// Annotation key used to store the folder UID on resources
FolderAnnotationKey = "grafana.app/folder"
FolderLabelKey = FolderAnnotationKey
)
const (
@@ -15,3 +27,20 @@ const (
var (
AcceptedProvenanceStatuses = []string{ProvenanceStatusNone, ProvenanceStatusAPI}
)
func ToDuration(s string) (time.Duration, error) {
promDuration, err := prom_model.ParseDuration(s)
if err != nil {
return 0, fmt.Errorf("invalid duration format: %w", err)
}
return time.Duration(promDuration), nil
}
// ClampDuration normalizes the string to Prometheus's canonical duration format, using the largest exact units (e.g., "60s" -> "1m")
func ClampDuration(s string) (string, error) {
promDuration, err := prom_model.ParseDuration(s)
if err != nil {
return "", fmt.Errorf("invalid duration format: %w", err)
}
return promDuration.String(), nil
}

View File

@@ -3,6 +3,7 @@ package v0alpha1
import (
"fmt"
"slices"
"time"
)
func (o *RecordingRule) GetProvenanceStatus() string {
@@ -27,4 +28,47 @@ func (o *RecordingRule) SetProvenanceStatus(status string) (err error) {
return
}
// TODO: add duration clamping for the field types RecordingRulePromDurationWMillis and RecordingRulePromDuration
func (d *RecordingRulePromDuration) ToDuration() (time.Duration, error) {
return ToDuration(string(*d))
}
func (d *RecordingRulePromDurationWMillis) ToDuration() (time.Duration, error) {
return ToDuration(string(*d))
}
func (d *RecordingRulePromDuration) Clamp() error {
clampedDuration, err := ClampDuration(string(*d))
if err != nil {
return err
}
*d = RecordingRulePromDuration(clampedDuration)
return nil
}
func (d *RecordingRulePromDurationWMillis) Clamp() error {
clampedDuration, err := ClampDuration(string(*d))
if err != nil {
return err
}
*d = RecordingRulePromDurationWMillis(clampedDuration)
return nil
}
func (spec *RecordingRuleSpec) ClampDurations() error {
// clamp all duration fields
if err := spec.Trigger.Interval.Clamp(); err != nil {
return err
}
for k, expr := range spec.Expressions {
if expr.RelativeTimeRange != nil {
if err := expr.RelativeTimeRange.From.Clamp(); err != nil {
return err
}
if err := expr.RelativeTimeRange.To.Clamp(); err != nil {
return err
}
spec.Expressions[k] = expr
}
}
return nil
}

View File

@@ -42,7 +42,21 @@ var appManifestData = app.ManifestData{
Plural: "AlertRules",
Scope: "Namespaced",
Conversion: false,
Schema: &versionSchemaAlertRulev0alpha1,
Admission: &app.AdmissionCapabilities{
Validation: &app.ValidationCapability{
Operations: []app.AdmissionOperation{
app.AdmissionOperationCreate,
app.AdmissionOperationUpdate,
},
},
Mutation: &app.MutationCapability{
Operations: []app.AdmissionOperation{
app.AdmissionOperationCreate,
app.AdmissionOperationUpdate,
},
},
},
Schema: &versionSchemaAlertRulev0alpha1,
SelectableFields: []string{
"spec.title",
"spec.paused",
@@ -57,7 +71,21 @@ var appManifestData = app.ManifestData{
Plural: "RecordingRules",
Scope: "Namespaced",
Conversion: false,
Schema: &versionSchemaRecordingRulev0alpha1,
Admission: &app.AdmissionCapabilities{
Validation: &app.ValidationCapability{
Operations: []app.AdmissionOperation{
app.AdmissionOperationCreate,
app.AdmissionOperationUpdate,
},
},
Mutation: &app.MutationCapability{
Operations: []app.AdmissionOperation{
app.AdmissionOperationCreate,
app.AdmissionOperationUpdate,
},
},
},
Schema: &versionSchemaRecordingRulev0alpha1,
SelectableFields: []string{
"spec.title",
"spec.paused",

View File

@@ -0,0 +1,45 @@
package alertrule
import (
"context"
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/simple"
v1 "github.com/grafana/grafana/apps/alerting/rules/pkg/apis/alerting/v0alpha1"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/config"
)
func NewMutator(cfg config.RuntimeConfig) *simple.Mutator {
return &simple.Mutator{
MutateFunc: func(ctx context.Context, req *app.AdmissionRequest) (*app.MutatingResponse, error) {
// Mutate folder label to match folder UID from annotation
r, ok := req.Object.(*v1.AlertRule)
if !ok || r == nil {
// Nothing to do or wrong type; no mutation
return nil, nil
}
// Read folder UID from annotation
folderUID := ""
if r.Annotations != nil {
folderUID = r.Annotations[v1.FolderAnnotationKey]
}
// Ensure labels map exists and set the folder label if folderUID is present
if folderUID != "" {
if r.Labels == nil {
r.Labels = make(map[string]string)
}
// Maintain folder metadata label for downstream systems (alertmanager grouping etc.)
r.Labels[v1.FolderLabelKey] = folderUID
}
// clamp all duration fields
if err := r.Spec.ClampDurations(); err != nil {
return nil, err
}
return &app.MutatingResponse{UpdatedObject: r}, nil
},
}
}
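
The mutator's folder handling reduces to copying one annotation into a label. A minimal sketch of just that map logic, detached from the SDK types (`syncFolderLabel` is a hypothetical helper; the constants mirror the `v0alpha1` values defined earlier in this diff):

```go
package main

import "fmt"

const (
	folderAnnotationKey = "grafana.app/folder" // matches v0alpha1.FolderAnnotationKey above
	folderLabelKey      = folderAnnotationKey  // matches v0alpha1.FolderLabelKey above
)

// syncFolderLabel mirrors the mutation above: copy the folder UID from
// the annotation into a label so downstream systems can select on it.
// Reading from a nil map is safe in Go, so only the write path needs
// the lazy allocation.
func syncFolderLabel(annotations, labels map[string]string) map[string]string {
	uid := annotations[folderAnnotationKey]
	if uid == "" {
		return labels
	}
	if labels == nil {
		labels = make(map[string]string)
	}
	labels[folderLabelKey] = uid
	return labels
}

func main() {
	labels := syncFolderLabel(map[string]string{folderAnnotationKey: "f1"}, nil)
	fmt.Println(labels[folderLabelKey]) // f1
}
```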

View File

@@ -0,0 +1,123 @@
package alertrule
import (
"context"
"fmt"
"slices"
"strconv"
"time"
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/grafana/grafana-app-sdk/simple"
model "github.com/grafana/grafana/apps/alerting/rules/pkg/apis/alerting/v0alpha1"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/config"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/util"
prom_model "github.com/prometheus/common/model"
)
func NewValidator(cfg config.RuntimeConfig) *simple.Validator {
return &simple.Validator{
ValidateFunc: func(ctx context.Context, req *app.AdmissionRequest) error {
// Cast to specific type
r, ok := req.Object.(*model.AlertRule)
if !ok {
return fmt.Errorf("object is not of type *v0alpha1.AlertRule")
}
// 1) Validate provenance status annotation
sourceProv := r.GetProvenanceStatus()
if !slices.Contains(model.AcceptedProvenanceStatuses, sourceProv) {
return fmt.Errorf("invalid provenance status: %s", sourceProv)
}
// 2) Validate group labels rules
group := r.Labels[model.GroupLabelKey]
groupIndexStr := r.Labels[model.GroupIndexLabelKey]
if req.Action == resource.AdmissionActionCreate {
if group != "" || groupIndexStr != "" {
return fmt.Errorf("cannot set group when creating alert rule")
}
}
if group != "" { // if group is set, group-index must be set and numeric
if groupIndexStr == "" {
return fmt.Errorf("%s must be set when %s is set", model.GroupIndexLabelKey, model.GroupLabelKey)
}
if _, err := strconv.Atoi(groupIndexStr); err != nil {
return fmt.Errorf("invalid %s: %w", model.GroupIndexLabelKey, err)
}
}
// 3) Validate folder is set and exists
// Read folder UID directly from annotations
folderUID := ""
if r.Annotations != nil {
folderUID = r.Annotations[model.FolderAnnotationKey]
}
if folderUID == "" {
return fmt.Errorf("folder is required")
}
if cfg.FolderValidator != nil {
ok, verr := cfg.FolderValidator(ctx, folderUID)
if verr != nil {
return fmt.Errorf("failed to validate folder: %w", verr)
}
if !ok {
return fmt.Errorf("folder does not exist: %s", folderUID)
}
}
// 4) Validate notification settings receiver if provided
if r.Spec.NotificationSettings != nil && r.Spec.NotificationSettings.Receiver != "" && cfg.NotificationSettingsValidator != nil {
ok, nerr := cfg.NotificationSettingsValidator(ctx, r.Spec.NotificationSettings.Receiver)
if nerr != nil {
return fmt.Errorf("failed to validate notification settings: %w", nerr)
}
if !ok {
return fmt.Errorf("invalid notification receiver: %s", r.Spec.NotificationSettings.Receiver)
}
}
// 5) Enforce max title length
if len(r.Spec.Title) > model.AlertRuleMaxTitleLength {
return fmt.Errorf("alert rule title is too long. Max length is %d", model.AlertRuleMaxTitleLength)
}
// 6) Validate evaluation interval against base interval
if err := util.ValidateInterval(cfg.BaseEvaluationInterval, &r.Spec.Trigger.Interval); err != nil {
return err
}
// 7) Disallow reserved/spec system label keys
if r.Spec.Labels != nil {
for key := range r.Spec.Labels {
if _, bad := cfg.ReservedLabelKeys[key]; bad {
return fmt.Errorf("label key is reserved and cannot be specified: %s", key)
}
}
}
// 8) For and KeepFiringFor must be >= 0 if set
if r.Spec.For != nil {
d, err := prom_model.ParseDuration(*r.Spec.For)
if err != nil {
return fmt.Errorf("invalid 'for' duration: %w", err)
}
if time.Duration(d) < 0 {
return fmt.Errorf("'for' cannot be less than 0")
}
}
if r.Spec.KeepFiringFor != nil {
d, err := prom_model.ParseDuration(*r.Spec.KeepFiringFor)
if err != nil {
return fmt.Errorf("invalid 'keepFiringFor' duration: %w", err)
}
if time.Duration(d) < 0 {
return fmt.Errorf("'keepFiringFor' cannot be less than 0")
}
}
return nil
},
}
}

View File

@@ -6,16 +6,29 @@ import (
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana-app-sdk/operator"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/grafana/grafana-app-sdk/simple"
"github.com/grafana/grafana/apps/alerting/rules/pkg/apis"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/alertrule"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/config"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/recordingrule"
)
func New(cfg app.Config) (app.App, error) {
managedKinds := make([]simple.AppManagedKind, 0)
runtimeCfg, ok := cfg.SpecificConfig.(config.RuntimeConfig)
if !ok {
return nil, config.ErrInvalidRuntimeConfig
}
for _, kinds := range apis.GetKinds() {
for _, kind := range kinds {
managedKinds = append(managedKinds, simple.AppManagedKind{Kind: kind})
managedKind := simple.AppManagedKind{
Kind: kind,
Validator: buildKindValidator(kind, runtimeCfg),
Mutator: buildKindMutator(kind, runtimeCfg),
}
managedKinds = append(managedKinds, managedKind)
}
}
@@ -44,3 +57,23 @@ func New(cfg app.Config) (app.App, error) {
return a, nil
}
func buildKindValidator(kind resource.Kind, cfg config.RuntimeConfig) *simple.Validator {
switch kind.Kind() {
case "AlertRule":
return alertrule.NewValidator(cfg)
case "RecordingRule":
return recordingrule.NewValidator(cfg)
}
return nil
}
func buildKindMutator(kind resource.Kind, cfg config.RuntimeConfig) *simple.Mutator {
switch kind.Kind() {
case "AlertRule":
return alertrule.NewMutator(cfg)
case "RecordingRule":
return recordingrule.NewMutator(cfg)
}
return nil
}

View File

@@ -0,0 +1,175 @@
package app_test
import (
"context"
"testing"
"time"
appsdk "github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/resource"
v1 "github.com/grafana/grafana/apps/alerting/rules/pkg/apis/alerting/v0alpha1"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/alertrule"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/config"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/recordingrule"
)
func makeDefaultRuntimeConfig() config.RuntimeConfig {
return config.RuntimeConfig{
FolderValidator: func(ctx context.Context, folderUID string) (bool, error) { return folderUID == "f1", nil },
BaseEvaluationInterval: 60 * time.Second,
ReservedLabelKeys: map[string]struct{}{"__reserved__": {}, "grafana_folder": {}},
NotificationSettingsValidator: func(ctx context.Context, receiver string) (bool, error) { return receiver == "notif-ok", nil },
}
}
func TestAlertRuleValidation_Success(t *testing.T) {
r := &v1.AlertRule{}
r.SetGroupVersionKind(v1.AlertRuleKind().GroupVersionKind())
r.Name = "uid-1"
r.Namespace = "ns1"
r.Annotations = map[string]string{v1.FolderAnnotationKey: "f1"}
r.Labels = map[string]string{}
r.Spec = v1.AlertRuleSpec{
Title: "ok",
Trigger: v1.AlertRuleIntervalTrigger{Interval: v1.AlertRulePromDuration("60s")},
Expressions: v1.AlertRuleExpressionMap{"A": v1.AlertRuleExpression{Model: map[string]any{"expr": "1"}, Source: boolPtr(true)}},
NoDataState: v1.DefaultNoDataState,
ExecErrState: v1.DefaultExecErrState,
NotificationSettings: &v1.AlertRuleV0alpha1SpecNotificationSettings{Receiver: "notif-ok"},
}
req := &appsdk.AdmissionRequest{Action: resource.AdmissionActionCreate, Object: r}
validator := alertrule.NewValidator(makeDefaultRuntimeConfig())
if err := validator.Validate(context.Background(), req); err != nil {
t.Fatalf("expected success, got error: %v", err)
}
}
func TestAlertRuleValidation_Errors(t *testing.T) {
mk := func(mut func(r *v1.AlertRule)) error {
r := baseAlertRule()
mut(r)
return alertrule.NewValidator(makeDefaultRuntimeConfig()).Validate(context.Background(), &appsdk.AdmissionRequest{Action: resource.AdmissionActionCreate, Object: r})
}
if err := mk(func(r *v1.AlertRule) { r.Annotations = nil }); err == nil {
t.Errorf("want folder required error")
}
if err := mk(func(r *v1.AlertRule) { r.Annotations[v1.FolderAnnotationKey] = "bad" }); err == nil {
t.Errorf("want folder not exist error")
}
if err := mk(func(r *v1.AlertRule) { r.Spec.Trigger.Interval = v1.AlertRulePromDuration("30s") }); err == nil {
t.Errorf("want base interval multiple error")
}
if err := mk(func(r *v1.AlertRule) {
r.Spec.NotificationSettings = &v1.AlertRuleV0alpha1SpecNotificationSettings{Receiver: "bad"}
}); err == nil {
t.Errorf("want invalid receiver error")
}
if err := mk(func(r *v1.AlertRule) { r.Labels[v1.GroupLabelKey] = "grp" }); err == nil {
t.Errorf("want group set on create error")
}
if err := mk(func(r *v1.AlertRule) { r.Spec.For = strPtr("-10s") }); err == nil {
t.Errorf("want for>=0 error")
}
if err := mk(func(r *v1.AlertRule) {
if r.Spec.Labels == nil {
r.Spec.Labels = map[string]v1.AlertRuleTemplateString{}
}
r.Spec.Labels["__reserved__"] = v1.AlertRuleTemplateString("x")
}); err == nil {
t.Errorf("want reserved label key error")
}
}
func baseAlertRule() *v1.AlertRule {
r := &v1.AlertRule{}
r.SetGroupVersionKind(v1.AlertRuleKind().GroupVersionKind())
r.Name = "uid-1"
r.Namespace = "ns1"
r.Annotations = map[string]string{v1.FolderAnnotationKey: "f1"}
r.Labels = map[string]string{}
r.Spec = v1.AlertRuleSpec{
Title: "ok",
Trigger: v1.AlertRuleIntervalTrigger{Interval: v1.AlertRulePromDuration("60s")},
Expressions: v1.AlertRuleExpressionMap{"A": v1.AlertRuleExpression{Model: map[string]any{"expr": "1"}, Source: boolPtr(true)}},
NoDataState: v1.DefaultNoDataState,
ExecErrState: v1.DefaultExecErrState,
}
return r
}
func TestRecordingRuleValidation_Success(t *testing.T) {
r := &v1.RecordingRule{}
r.SetGroupVersionKind(v1.RecordingRuleKind().GroupVersionKind())
r.Name = "uid-2"
r.Namespace = "ns1"
r.Annotations = map[string]string{v1.FolderAnnotationKey: "f1"}
r.Labels = map[string]string{}
r.Spec = v1.RecordingRuleSpec{
Title: "ok",
Trigger: v1.RecordingRuleIntervalTrigger{Interval: v1.RecordingRulePromDuration("60s")},
Expressions: v1.RecordingRuleExpressionMap{"A": v1.RecordingRuleExpression{Model: map[string]any{"expr": "1"}, Source: boolPtr(true)}},
Metric: "test_metric",
TargetDatasourceUID: "ds1",
}
req := &appsdk.AdmissionRequest{Action: resource.AdmissionActionCreate, Object: r}
validator := recordingrule.NewValidator(makeDefaultRuntimeConfig())
if err := validator.Validate(context.Background(), req); err != nil {
t.Fatalf("expected success, got error: %v", err)
}
}
func TestRecordingRuleValidation_Errors(t *testing.T) {
mk := func(mut func(r *v1.RecordingRule)) error {
r := baseRecordingRule()
mut(r)
return recordingrule.NewValidator(makeDefaultRuntimeConfig()).Validate(context.Background(), &appsdk.AdmissionRequest{Action: resource.AdmissionActionCreate, Object: r})
}
if err := mk(func(r *v1.RecordingRule) { r.Annotations = nil }); err == nil {
t.Errorf("want folder required error")
}
if err := mk(func(r *v1.RecordingRule) { r.Annotations[v1.FolderAnnotationKey] = "bad" }); err == nil {
t.Errorf("want folder not exist error")
}
if err := mk(func(r *v1.RecordingRule) { r.Spec.Trigger.Interval = v1.RecordingRulePromDuration("30s") }); err == nil {
t.Errorf("want base interval multiple error")
}
if err := mk(func(r *v1.RecordingRule) { r.Labels[v1.GroupLabelKey] = "grp" }); err == nil {
t.Errorf("want group set on create error")
}
if err := mk(func(r *v1.RecordingRule) { r.Spec.Metric = "" }); err == nil {
t.Errorf("want metric required error")
}
if err := mk(func(r *v1.RecordingRule) {
if r.Spec.Labels == nil {
r.Spec.Labels = map[string]v1.RecordingRuleTemplateString{}
}
r.Spec.Labels["__reserved__"] = v1.RecordingRuleTemplateString("x")
}); err == nil {
t.Errorf("want reserved label key error")
}
}
func baseRecordingRule() *v1.RecordingRule {
r := &v1.RecordingRule{}
r.SetGroupVersionKind(v1.RecordingRuleKind().GroupVersionKind())
r.Name = "uid-1"
r.Namespace = "ns1"
r.Annotations = map[string]string{v1.FolderAnnotationKey: "f1"}
r.Labels = map[string]string{}
r.Spec = v1.RecordingRuleSpec{
Title: "ok",
Trigger: v1.RecordingRuleIntervalTrigger{Interval: v1.RecordingRulePromDuration("60s")},
Expressions: v1.RecordingRuleExpressionMap{"A": v1.RecordingRuleExpression{Model: map[string]any{"expr": "1"}, Source: boolPtr(true)}},
Metric: "test_metric",
TargetDatasourceUID: "ds1",
}
return r
}
func boolPtr(b bool) *bool { return &b }
func strPtr(s string) *string { return &s }

View File

@@ -0,0 +1,22 @@
package config
import (
"context"
"errors"
"time"
)
var (
ErrInvalidRuntimeConfig = errors.New("invalid runtime config provided to alerting/rules app")
)
// RuntimeConfig holds configuration values needed at runtime by the alerting/rules app from the running Grafana instance.
type RuntimeConfig struct {
// function to check folder existence given its uid
FolderValidator func(ctx context.Context, folderUID string) (bool, error)
// base evaluation interval
BaseEvaluationInterval time.Duration
// set of strings which are illegal for label keys on rules
ReservedLabelKeys map[string]struct{}
NotificationSettingsValidator func(ctx context.Context, receiver string) (bool, error)
}

View File

@@ -0,0 +1,37 @@
package recordingrule
import (
"context"
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/simple"
v1 "github.com/grafana/grafana/apps/alerting/rules/pkg/apis/alerting/v0alpha1"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/config"
)
func NewMutator(cfg config.RuntimeConfig) *simple.Mutator {
return &simple.Mutator{
MutateFunc: func(ctx context.Context, req *app.AdmissionRequest) (*app.MutatingResponse, error) {
r, ok := req.Object.(*v1.RecordingRule)
if !ok || r == nil {
return nil, nil
}
folderUID := ""
if r.Annotations != nil {
folderUID = r.Annotations[v1.FolderAnnotationKey]
}
if folderUID != "" {
if r.Labels == nil {
r.Labels = make(map[string]string)
}
r.Labels[v1.FolderLabelKey] = folderUID
}
if err := r.Spec.ClampDurations(); err != nil {
return nil, err
}
return &app.MutatingResponse{UpdatedObject: r}, nil
},
}
}

View File

@@ -0,0 +1,95 @@
package recordingrule
import (
"context"
"fmt"
"slices"
"strconv"
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/grafana/grafana-app-sdk/simple"
model "github.com/grafana/grafana/apps/alerting/rules/pkg/apis/alerting/v0alpha1"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/config"
"github.com/grafana/grafana/apps/alerting/rules/pkg/app/util"
prom_model "github.com/prometheus/common/model"
)
func NewValidator(cfg config.RuntimeConfig) *simple.Validator {
return &simple.Validator{
		ValidateFunc: func(ctx context.Context, req *app.AdmissionRequest) error {
			// Cast to specific type
			r, ok := req.Object.(*model.RecordingRule)
			if !ok {
				return fmt.Errorf("object is not of type *model.RecordingRule")
			}
			sourceProv := r.GetProvenanceStatus()
			if !slices.Contains(model.AcceptedProvenanceStatuses, sourceProv) {
				return fmt.Errorf("invalid provenance status: %s", sourceProv)
			}
			group := r.Labels[model.GroupLabelKey]
			groupIndexStr := r.Labels[model.GroupIndexLabelKey]
			if req.Action == resource.AdmissionActionCreate {
				if group != "" || groupIndexStr != "" {
					return fmt.Errorf("cannot set group when creating recording rule")
				}
			}
			if group != "" {
				if groupIndexStr == "" {
					return fmt.Errorf("%s must be set when %s is set", model.GroupIndexLabelKey, model.GroupLabelKey)
				}
				if _, err := strconv.Atoi(groupIndexStr); err != nil {
					return fmt.Errorf("invalid %s: %w", model.GroupIndexLabelKey, err)
				}
			}
			folderUID := ""
			if r.Annotations != nil {
				folderUID = r.Annotations[model.FolderAnnotationKey]
			}
			if folderUID == "" {
				return fmt.Errorf("folder is required")
			}
			if cfg.FolderValidator != nil {
				ok, verr := cfg.FolderValidator(ctx, folderUID)
				if verr != nil {
					return fmt.Errorf("failed to validate folder: %w", verr)
				}
				if !ok {
					return fmt.Errorf("folder does not exist: %s", folderUID)
				}
			}
			if len(r.Spec.Title) > model.AlertRuleMaxTitleLength {
				return fmt.Errorf("recording rule title is too long. Max length is %d", model.AlertRuleMaxTitleLength)
			}
			if err := util.ValidateInterval(cfg.BaseEvaluationInterval, &r.Spec.Trigger.Interval); err != nil {
				return err
			}
			if r.Spec.Labels != nil {
				for key := range r.Spec.Labels {
					if _, bad := cfg.ReservedLabelKeys[key]; bad {
						return fmt.Errorf("label key is reserved and cannot be specified: %s", key)
					}
				}
			}
			if r.Spec.Metric == "" {
				return fmt.Errorf("metric must be specified")
			}
			metric := prom_model.LabelValue(r.Spec.Metric)
			if !metric.IsValid() {
				return fmt.Errorf("metric contains invalid characters")
			}
			if !prom_model.IsValidMetricName(metric) { // nolint:staticcheck
				return fmt.Errorf("invalid metric name")
			}
			return nil
		},
	}
}
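The final checks above reject anything that is not a valid Prometheus metric name. As a rough sketch of what that rule accepts, the legacy (non-UTF-8) metric-name grammar is `[a-zA-Z_:][a-zA-Z0-9_:]*`; the snippet below approximates `prom_model.IsValidMetricName` with a regexp using only the standard library. The helper name `isValidMetricName` is invented for this sketch, not part of the Grafana or Prometheus code above.

```go
package main

import (
	"fmt"
	"regexp"
)

// Approximation of the legacy Prometheus metric-name rule enforced by the
// admission hook: first character is a letter, underscore, or colon; the
// rest may also include digits.
var metricNameRe = regexp.MustCompile(`^[a-zA-Z_:][a-zA-Z0-9_:]*$`)

// isValidMetricName is a hypothetical stand-in for prom_model.IsValidMetricName.
func isValidMetricName(name string) bool {
	return name != "" && metricNameRe.MatchString(name)
}

func main() {
	fmt.Println(isValidMetricName("grafana_recording_rule_total")) // true
	fmt.Println(isValidMetricName("2xx_rate"))                     // false: starts with a digit
	fmt.Println(isValidMetricName("http requests"))                // false: contains a space
}
```

Note that newer Prometheus versions optionally allow full UTF-8 names, which is why the call in the hook carries a `nolint:staticcheck` for the deprecated legacy check.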

View File

@@ -0,0 +1,27 @@
package util

import (
	"fmt"
	"time"
)

type DurationLike interface {
	ToDuration() (time.Duration, error)
}

func ValidateInterval(baseInterval time.Duration, d DurationLike) error {
	interval, err := d.ToDuration()
	if err != nil {
		return fmt.Errorf("invalid trigger interval: %w", err)
	}
	// Ensure interval is positive and an integer multiple of BaseEvaluationInterval (if provided)
	if interval <= 0 {
		return fmt.Errorf("trigger interval must be greater than 0")
	}
	if baseInterval > 0 {
		if (interval % baseInterval) != 0 {
			return fmt.Errorf("trigger interval must be a multiple of base evaluation interval (%s)", baseInterval.String())
		}
	}
	return nil
}

View File

@@ -100,24 +100,24 @@ require (
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/at-wat/mqtt-go v0.19.4 // indirect
github.com/aws/aws-sdk-go v1.55.7 // indirect
github.com/aws/aws-sdk-go-v2 v1.38.1 // indirect
github.com/aws/aws-sdk-go-v2 v1.39.1 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 // indirect
github.com/aws/aws-sdk-go-v2/config v1.31.2 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.18.6 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4 // indirect
github.com/aws/aws-sdk-go-v2/config v1.31.10 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.18.14 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 // indirect
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.28.2 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.38.0 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.29.4 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 // indirect
github.com/aws/smithy-go v1.23.1 // indirect
github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27 // indirect
github.com/bahlo/generic-list-go v0.2.0 // indirect
@@ -151,7 +151,6 @@ require (
github.com/cockroachdb/apd/v3 v3.2.1 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.7 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dennwc/varint v1.0.0 // indirect
github.com/dgraph-io/badger/v4 v4.7.0 // indirect
@@ -182,7 +181,7 @@ require (
github.com/go-jose/go-jose/v4 v4.1.2 // indirect
github.com/go-kit/log v0.2.1 // indirect
github.com/go-ldap/ldap/v3 v3.4.4 // indirect
github.com/go-logfmt/logfmt v0.6.0 // indirect
github.com/go-logfmt/logfmt v0.6.1 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/analysis v0.24.0 // indirect
@@ -235,7 +234,7 @@ require (
github.com/grafana/authlib/types v0.0.0-20250926065801-df98203cff37 // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 // indirect
github.com/grafana/grafana-aws-sdk v1.2.0 // indirect
github.com/grafana/grafana-aws-sdk v1.3.0 // indirect
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1 // indirect
github.com/grafana/grafana-plugin-sdk-go v0.281.0 // indirect
github.com/grafana/grafana/apps/dashboard v0.0.0 // indirect
@@ -295,7 +294,6 @@ require (
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lestrrat-go/strftime v1.0.4 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/magefile/mage v1.15.0 // indirect
github.com/mailru/easyjson v0.9.0 // indirect
github.com/mattbaird/jsonpatch v0.0.0-20240118010651-0ba75a80ca38 // indirect
github.com/mattetti/filebuffer v1.0.1 // indirect
@@ -361,7 +359,6 @@ require (
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rs/cors v1.11.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sagikazarmark/locafero v0.11.0 // indirect
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 // indirect
github.com/sethvargo/go-retry v0.3.0 // indirect
@@ -382,13 +379,9 @@ require (
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tetratelabs/wazero v1.8.2 // indirect
github.com/thomaspoignant/go-feature-flag v1.42.0 // indirect
github.com/tjhop/slog-gokit v0.1.3 // indirect
github.com/tjhop/slog-gokit v0.1.5 // indirect
github.com/uber/jaeger-client-go v2.30.0+incompatible // indirect
github.com/uber/jaeger-lib v2.4.1+incompatible // indirect
github.com/unknwon/bra v0.0.0-20200517080246-1e3013ecaff8 // indirect
github.com/unknwon/com v1.0.1 // indirect
github.com/unknwon/log v0.0.0-20200308114134-929b1006e34a // indirect
github.com/urfave/cli v1.22.17 // indirect
github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
github.com/woodsbury/decimal128 v1.3.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
@@ -455,7 +448,6 @@ require (
google.golang.org/protobuf v1.36.10 // indirect
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/fsnotify/fsnotify.v1 v1.4.7 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/mail.v2 v2.3.1 // indirect

View File

@@ -237,22 +237,22 @@ github.com/aws/aws-sdk-go v1.17.7/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN
github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE=
github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/aws/aws-sdk-go-v2 v1.38.1 h1:j7sc33amE74Rz0M/PoCpsZQ6OunLqys/m5antM0J+Z8=
github.com/aws/aws-sdk-go-v2 v1.38.1/go.mod h1:9Q0OoGQoboYIAJyslFyF1f5K1Ryddop8gqMhWx/n4Wg=
github.com/aws/aws-sdk-go-v2 v1.39.1 h1:fWZhGAwVRK/fAN2tmt7ilH4PPAE11rDj7HytrmbZ2FE=
github.com/aws/aws-sdk-go-v2 v1.39.1/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
github.com/aws/aws-sdk-go-v2/config v1.31.2 h1:NOaSZpVGEH2Np/c1toSeW0jooNl+9ALmsUTZ8YvkJR0=
github.com/aws/aws-sdk-go-v2/config v1.31.2/go.mod h1:17ft42Yb2lF6OigqSYiDAiUcX4RIkEMY6XxEMJsrAes=
github.com/aws/aws-sdk-go-v2/credentials v1.18.6 h1:AmmvNEYrru7sYNJnp3pf57lGbiarX4T9qU/6AZ9SucU=
github.com/aws/aws-sdk-go-v2/credentials v1.18.6/go.mod h1:/jdQkh1iVPa01xndfECInp1v1Wnp70v3K4MvtlLGVEc=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4 h1:lpdMwTzmuDLkgW7086jE94HweHCqG+uOJwHf3LZs7T0=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4/go.mod h1:9xzb8/SV62W6gHQGC/8rrvgNXU6ZoYM3sAIJCIrXJxY=
github.com/aws/aws-sdk-go-v2/config v1.31.10 h1:7LllDZAegXU3yk41mwM6KcPu0wmjKGQB1bg99bNdQm4=
github.com/aws/aws-sdk-go-v2/config v1.31.10/go.mod h1:Ge6gzXPjqu4v0oHvgAwvGzYcK921GU0hQM25WF/Kl+8=
github.com/aws/aws-sdk-go-v2/credentials v1.18.14 h1:TxkI7QI+sFkTItN/6cJuMZEIVMFXeu2dI1ZffkXngKI=
github.com/aws/aws-sdk-go-v2/credentials v1.18.14/go.mod h1:12x4Uw/vijC11XkctTjy92TNCQ+UnNJkT7fzX0Yd93E=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 h1:gLD09eaJUdiszm7vd1btiQUYE0Hj+0I2b8AS+75z9AY=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8/go.mod h1:4RW3oMPt1POR74qVOC4SbubxAwdP4pCT0nSw3jycOU4=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 h1:cTXRdLkpBanlDwISl+5chq5ui1d1YWg4PWMR9c3kXyw=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84/go.mod h1:kwSy5X7tfIHN39uucmjQVs2LvDdXEjQucgQQEqCggEo=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4 h1:IdCLsiiIj5YJ3AFevsewURCPV+YWUlOW8JiPhoAy8vg=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4/go.mod h1:l4bdfCD7XyyZA9BolKBo1eLqgaJxl0/x91PL4Yqe0ao=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4 h1:j7vjtr1YIssWQOMeOWRbh3z8g2oY/xPjnZH2gLY4sGw=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4/go.mod h1:yDmJgqOiH4EA8Hndnv4KwAo8jCGTSnM5ASG1nBI+toA=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 h1:6bgAZgRyT4RoFWhxS+aoGMFyE0cD1bSzFnEEi4bFPGI=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8/go.mod h1:KcGkXFVU8U28qS4KvLEcPxytPZPBcRawaH2Pf/0jptE=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 h1:HhJYoES3zOz34yWEpGENqJvRVPqpmJyR3+AFg9ybhdY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8/go.mod h1:JnA+hPWeYAVbDssp83tv+ysAG8lTfLVXvSsyKg/7xNA=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
@@ -263,12 +263,12 @@ github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.51.0 h1:e5cbPZYTIY2nUEFie
github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.51.0/go.mod h1:UseIHRfrm7PqeZo6fcTb6FUCXzCnh1KJbQbmOfxArGM=
github.com/aws/aws-sdk-go-v2/service/ec2 v1.225.2 h1:IfMb3Ar8xEaWjgH/zeVHYD8izwJdQgRP5mKCTDt4GNk=
github.com/aws/aws-sdk-go-v2/service/ec2 v1.225.2/go.mod h1:35jGWx7ECvCwTsApqicFYzZ7JFEnBc6oHUuOQ3xIS54=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0 h1:6+lZi2JeGKtCraAj1rpoZfKqnQ9SptseRZioejfUOLM=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0/go.mod h1:eb3gfbVIxIoGgJsi9pGne19dhCBpK6opTYpQqAmdy44=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 h1:oegbebPEMA/1Jny7kvwejowCaHz1FWZAQ94WXFNCyTM=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1/go.mod h1:kemo5Myr9ac0U9JfSjMo9yHLtw+pECEHsFtJ9tqCEI8=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 h1:nAP2GYbfh8dd2zGZqFRSMlq+/F6cMPBUuCsGAMkN074=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4/go.mod h1:LT10DsiGjLWh4GbjInf9LQejkYEhBgBCjLG5+lvk4EE=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4 h1:ueB2Te0NacDMnaC+68za9jLwkjzxGWm0KB5HTUHjLTI=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4/go.mod h1:nLEfLnVMmLvyIG58/6gsSA03F1voKGaCfHV7+lR8S7s=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 h1:M6JI2aGFEzYxsF6CXIuRBnkge9Wf9a2xU39rNeXgu10=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8/go.mod h1:Fw+MyTwlwjFsSTE31mH211Np+CUslml8mzc0AFEG09s=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 h1:qcLWgdhq45sDM9na4cvXax9dyLitn8EYBRl8Ak4XtG4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17/go.mod h1:M+jkjBFZ2J6DJrjMv2+vkBbuht6kxJYtJiwoVgX4p4U=
github.com/aws/aws-sdk-go-v2/service/kms v1.41.2 h1:zJeUxFP7+XP52u23vrp4zMcVhShTWbNO8dHV6xCSvFo=
@@ -279,12 +279,12 @@ github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6 h1:Pwbxovp
github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6/go.mod h1:Z4xLt5mXspLKjBV92i165wAJ/3T6TIv4n7RtIS8pWV0=
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0 h1:0reDqfEN+tB+sozj2r92Bep8MEwBZgtAXTND1Kk9OXg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0/go.mod h1:kUklwasNoCn5YpyAqC/97r6dzTA1SRKJfKq16SXeoDU=
github.com/aws/aws-sdk-go-v2/service/sso v1.28.2 h1:ve9dYBB8CfJGTFqcQ3ZLAAb/KXWgYlgu/2R2TZL2Ko0=
github.com/aws/aws-sdk-go-v2/service/sso v1.28.2/go.mod h1:n9bTZFZcBa9hGGqVz3i/a6+NG0zmZgtkB9qVVFDqPA8=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2 h1:pd9G9HQaM6UZAZh19pYOkpKSQkyQQ9ftnl/LttQOcGI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2/go.mod h1:eknndR9rU8UpE/OmFpqU78V1EcXPKFTTm5l/buZYgvM=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.0 h1:iV1Ko4Em/lkJIsoKyGfc0nQySi+v0Udxr6Igq+y9JZc=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.0/go.mod h1:bEPcjW7IbolPfK67G1nilqWyoxYMSPrDiIQ3RdIdKgo=
github.com/aws/aws-sdk-go-v2/service/sso v1.29.4 h1:FTdEN9dtWPB0EOURNtDPmwGp6GGvMqRJCAihkSl/1No=
github.com/aws/aws-sdk-go-v2/service/sso v1.29.4/go.mod h1:mYubxV9Ff42fZH4kexj43gFPhgc/LyC7KqvUKt1watc=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 h1:I7ghctfGXrscr7r1Ga/mDqSJKm7Fkpl5Mwq79Z+rZqU=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0/go.mod h1:Zo9id81XP6jbayIFWNuDpA6lMBWhsVy+3ou2jLa4JnA=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 h1:+LVB0xBqEgjQoqr9bGZbRzvg212B0f17JdflleJRNR4=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.5/go.mod h1:xoaxeqnnUaZjPjaICgIy5B+MHCSb/ZSOn4MvkFNOUA0=
github.com/aws/smithy-go v1.23.1 h1:sLvcH6dfAFwGkHLZ7dGiYF7aK6mg4CgKA/iDKjLDt9M=
github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/axiomhq/hyperloglog v0.0.0-20191112132149-a4c4c47bc57f/go.mod h1:2stgcRjl6QmW+gU2h5E7BQXg4HU0gzxKWDuT5HviN9s=
@@ -444,8 +444,8 @@ github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA=
github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
github.com/cpuguy83/go-md2man v1.0.10 h1:BSKMNlYxDvnunlTymqtgONjNnaRV1sTpcovwwjF22jk=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/cpuguy83/go-md2man/v2 v2.0.7 h1:zbFlGlXEAKlwXpmvle3d8Oe3YnkKIK4xSRTd3sHPnBo=
github.com/cpuguy83/go-md2man/v2 v2.0.7/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
@@ -595,8 +595,8 @@ github.com/go-ldap/ldap/v3 v3.4.4/go.mod h1:fe1MsuN5eJJ1FeLT/LEBVdWfNWKh459R7aXg
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logfmt/logfmt v0.6.1 h1:4hvbpePJKnIzH1B+8OR/JPbTx37NktoI9LE2QZBBkvE=
github.com/go-logfmt/logfmt v0.6.1/go.mod h1:EV2pOAQoZaT1ZXZbqDl5hrymndi4SY9ED9/z6CO0XAk=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
@@ -826,9 +826,6 @@ github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/gopherjs/gopherjs v0.0.0-20181103185306-d547d1d9531e/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v1.17.2 h1:fQnZVsXk8uxXIStYb0N4bGk7jeyTalG/wsZjQ25dO0g=
github.com/gopherjs/gopherjs v1.17.2/go.mod h1:pRRIvn/QzFLrKfvEz3qUuEhtE/zLCWfreZ6J5gM2i+k=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.1/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
@@ -858,8 +855,8 @@ github.com/grafana/grafana-app-sdk v0.48.1 h1:bKJadWH18WCpJ+Zk8AezRFXCcZgGredRv+
github.com/grafana/grafana-app-sdk v0.48.1/go.mod h1:5LljCz+wvmGfkQ8ZKTOfserhtXNEF0cSFthoWShvN6c=
github.com/grafana/grafana-app-sdk/logging v0.48.1 h1:veM0X5LAPyN3KsDLglWjIofndbGuf7MqnrDuDN+F/Ng=
github.com/grafana/grafana-app-sdk/logging v0.48.1/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/grafana-aws-sdk v1.2.0 h1:LLR4/g91WBuCRwm2cbWfCREq565+GxIFe08nqqIcIuw=
github.com/grafana/grafana-aws-sdk v1.2.0/go.mod h1:bBo7qOmM3f61vO+2JxTolNUph1l2TmtzmWcU9/Im+8A=
github.com/grafana/grafana-aws-sdk v1.3.0 h1:/bfJzP93rCel1GbWoRSq0oUo424MZXt8jAp2BK9w8tM=
github.com/grafana/grafana-aws-sdk v1.3.0/go.mod h1:VGycF0JkCGKND2O5je1ucOqPJ0ZNhZYzV3c2bNBAaGk=
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1 h1:FFcEA01tW+SmuJIuDbHOdgUBL+d7DPrZ2N4zwzPhfGk=
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1/go.mod h1:Oi4anANlCuTCc66jCyqIzfVbgLXFll8Wja+Y4vfANlc=
github.com/grafana/grafana-cloud-migration-snapshot v1.9.0 h1:JOzchPgptwJdruYoed7x28lFDwhzs7kssResYsnC0iI=
@@ -1067,9 +1064,6 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jszwedko/go-datemath v0.1.1-0.20230526204004-640a500621d6 h1:SwcnSwBR7X/5EHJQlXBockkJVIMRVt5yKaesBPMtyZQ=
github.com/jszwedko/go-datemath v0.1.1-0.20230526204004-640a500621d6/go.mod h1:WrYiIuiXUMIvTDAQw97C+9l0CnBmCcvosPjN3XDqS/o=
github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
@@ -1124,8 +1118,6 @@ github.com/m3db/prometheus_remote_client_golang v0.4.4 h1:DsAIjVKoCp7Ym35tAOFL1O
github.com/m3db/prometheus_remote_client_golang v0.4.4/go.mod h1:wHfVbA3eAK6dQvKjCkHhusWYegCk3bDGkA15zymSHdc=
github.com/madflojo/testcerts v1.4.0 h1:I09gN0C1ly9IgeVNcAqKk8RAKIJTe3QnFrrPBDyvzN4=
github.com/madflojo/testcerts v1.4.0/go.mod h1:MW8sh39gLnkKh4K0Nc55AyHEDl9l/FBLDUsQhpmkuo0=
github.com/magefile/mage v1.15.0 h1:BvGheCMAsG3bWUDbZ8AyXXpCNwU9u5CB6sM+HNb9HYg=
github.com/magefile/mage v1.15.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.6/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
@@ -1426,8 +1418,8 @@ github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
github.com/russellhaering/goxmldsig v1.4.0 h1:8UcDh/xGyQiyrW+Fq5t8f+l2DLB1+zlhYzkPUJ7Qhys=
github.com/russellhaering/goxmldsig v1.4.0/go.mod h1:gM4MDENBQf7M+V824SGfyIUVFWydB7n0KkEubVJl+Tw=
github.com/russross/blackfriday v1.5.2 h1:HyvC0ARfnZBqnXwABFeSZHpKvJHJJfPz81GNueLj0oo=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
@@ -1458,7 +1450,6 @@ github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c h1:aqg5Vm5dwtvL+YgDpBcK1ITf3o96N/K7/wsRXQnUTEs=
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c/go.mod h1:owqhoLW1qZoYLZzLnBw+QkPP9WZnjlSWihhxAJC1+/M=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/vfsgen v0.0.0-20230704071429-0000e147ea92 h1:OfRzdxCzDhp+rsKWXuOO2I/quKMJ/+TQwVbIP/gltZg=
github.com/shurcooL/vfsgen v0.0.0-20230704071429-0000e147ea92/go.mod h1:7/OT02F6S6I7v6WXb+IjhMuZEYfH/RJ5RwEWnEo5BMg=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
@@ -1467,11 +1458,6 @@ github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6Mwd
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/smartystreets/assertions v0.0.0-20190116191733-b6c0e53d7304 h1:Jpy1PXuP99tXNrhbq2BaPz9B+jNAvH1JPQQpG/9GCXY=
github.com/smartystreets/assertions v0.0.0-20190116191733-b6c0e53d7304/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20181108003508-044398e4856c/go.mod h1:XDJAKZRPZ1CvBcN2aX5YOUTYGHki24fSF0Iv48Ibg0s=
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/sony/gobreaker v0.5.0 h1:dRCvqm0P490vZPmy7ppEk2qCnCieBooFJ+YoXGYB+yg=
@@ -1526,7 +1512,6 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.4.1/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0=
@@ -1541,8 +1526,8 @@ github.com/thejerf/slogassert v0.3.4/go.mod h1:0zn9ISLVKo1aPMTqcGfG1o6dWwt+Rk574
github.com/thomaspoignant/go-feature-flag v1.42.0 h1:C7embmOTzaLyRki+OoU2RvtVjJE9IrvgBA2C1mRN1lc=
github.com/thomaspoignant/go-feature-flag v1.42.0/go.mod h1:y0QiWH7chHWhGATb/+XqwAwErORmPSH2MUsQlCmmWlM=
github.com/tidwall/pretty v0.0.0-20180105212114-65a9db5fad51/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tjhop/slog-gokit v0.1.3 h1:6SdexP3UIeg93KLFeiM1Wp1caRwdTLgsD/THxBUy1+o=
github.com/tjhop/slog-gokit v0.1.3/go.mod h1:Bbu5v2748qpAWH7k6gse/kw3076IJf6owJmh7yArmJs=
github.com/tjhop/slog-gokit v0.1.5 h1:ayloIUi5EK2QYB8eY4DOPO95/mRtMW42lUkp3quJohc=
github.com/tjhop/slog-gokit v0.1.5/go.mod h1:yA48zAHvV+Sg4z4VRyeFyFUNNXd3JY5Zg84u3USICq0=
github.com/tklauser/go-sysconf v0.3.14 h1:g5vzr9iPFFz24v2KZXs/pvpvh8/V9Fw6vQK5ZZb78yU=
github.com/tklauser/go-sysconf v0.3.14/go.mod h1:1ym4lWMLUOhuBOPGtRcJm7tEGX4SCYNEEEtghGG/8uY=
github.com/tklauser/numcpus v0.8.0 h1:Mx4Wwe/FjZLeQsK/6kt2EOepwwSl7SmJrK5bV/dXYgY=
@@ -1559,16 +1544,7 @@ github.com/uber/jaeger-lib v2.4.1+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/unknwon/bra v0.0.0-20200517080246-1e3013ecaff8 h1:aVGB3YnaS/JNfOW3tiHIlmNmTDg618va+eT0mVomgyI=
github.com/unknwon/bra v0.0.0-20200517080246-1e3013ecaff8/go.mod h1:fVle4kNr08ydeohzYafr20oZzbAkhQT39gKK/pFQ5M4=
github.com/unknwon/com v1.0.1 h1:3d1LTxD+Lnf3soQiD4Cp/0BRB+Rsa/+RTvz8GMMzIXs=
github.com/unknwon/com v1.0.1/go.mod h1:tOOxU81rwgoCLoOVVPHb6T/wt8HZygqH5id+GNnlCXM=
github.com/unknwon/log v0.0.0-20150304194804-e617c87089d3/go.mod h1:1xEUf2abjfP92w2GZTV+GgaRxXErwRXcClbUwrNJffU=
github.com/unknwon/log v0.0.0-20200308114134-929b1006e34a h1:vcrhXnj9g9PIE+cmZgaPSwOyJ8MAQTRmsgGrB0x5rF4=
github.com/unknwon/log v0.0.0-20200308114134-929b1006e34a/go.mod h1:1xEUf2abjfP92w2GZTV+GgaRxXErwRXcClbUwrNJffU=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.17 h1:SYzXoiPfQjHBbkYxbew5prZHS1TOLT3ierW8SYLqtVQ=
github.com/urfave/cli v1.22.17/go.mod h1:b0ht0aqgH/6pBYzzxURyrM4xXNgsoT/n2ZzwQiEhNVo=
github.com/urfave/cli/v2 v2.27.7 h1:bH59vdhbjLv3LAvIu6gd0usJHgoTTPhCFib8qqOwXYU=
github.com/urfave/cli/v2 v2.27.7/go.mod h1:CyNAG/xg+iAOg0N4MPGZqVmv2rCoP267496AOXUZjA4=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
@@ -1916,7 +1892,6 @@ golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191020152052-9984515f0562/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -2293,8 +2268,6 @@ gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/fsnotify/fsnotify.v1 v1.4.7 h1:XNNYLJHt73EyYiCZi6+xjupS9CpvmiDgjPTAjrBlQbo=
gopkg.in/fsnotify/fsnotify.v1 v1.4.7/go.mod h1:Fyux9zXlo4rWoMSIzpn9fDAYjalPqJ/K1qJ27s+7ltE=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=

View File

@@ -0,0 +1,172 @@
package jobs

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/util/validation/field"

	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository/git"
	"github.com/grafana/grafana/apps/provisioning/pkg/safepath"
)

// ValidateJob performs validation on the Job specification and returns an error if validation fails
func ValidateJob(job *provisioning.Job) error {
	list := field.ErrorList{}

	// Validate action is specified
	if job.Spec.Action == "" {
		list = append(list, field.Required(field.NewPath("spec", "action"), "action must be specified"))
		return toError(job.Name, list) // Early return since we can't validate further without knowing the action
	}

	// Validate repository is specified
	if job.Spec.Repository == "" {
		list = append(list, field.Required(field.NewPath("spec", "repository"), "repository must be specified"))
	}

	// Validate action-specific options
	switch job.Spec.Action {
	case provisioning.JobActionPull:
		if job.Spec.Pull == nil {
			list = append(list, field.Required(field.NewPath("spec", "pull"), "pull options required for pull action"))
		}
		// Pull options are simple, just incremental bool - no further validation needed
	case provisioning.JobActionPush:
		if job.Spec.Push == nil {
			list = append(list, field.Required(field.NewPath("spec", "push"), "push options required for push action"))
		} else {
			list = append(list, validateExportJobOptions(job.Spec.Push)...)
		}
	case provisioning.JobActionPullRequest:
		if job.Spec.PullRequest == nil {
			list = append(list, field.Required(field.NewPath("spec", "pr"), "pull request options required for pr action"))
		}
		// PullRequest options are mostly informational - no strict validation needed
	case provisioning.JobActionMigrate:
		if job.Spec.Migrate == nil {
			list = append(list, field.Required(field.NewPath("spec", "migrate"), "migrate options required for migrate action"))
		}
		// Migrate options are simple - no further validation needed
	case provisioning.JobActionDelete:
		if job.Spec.Delete == nil {
			list = append(list, field.Required(field.NewPath("spec", "delete"), "delete options required for delete action"))
		} else {
			list = append(list, validateDeleteJobOptions(job.Spec.Delete)...)
		}
	case provisioning.JobActionMove:
		if job.Spec.Move == nil {
			list = append(list, field.Required(field.NewPath("spec", "move"), "move options required for move action"))
		} else {
			list = append(list, validateMoveJobOptions(job.Spec.Move)...)
		}
	default:
		list = append(list, field.Invalid(field.NewPath("spec", "action"), job.Spec.Action, "invalid action"))
	}

	return toError(job.Name, list)
}

// toError converts a field.ErrorList to an error, returning nil if the list is empty
func toError(name string, list field.ErrorList) error {
	if len(list) == 0 {
		return nil
	}
	return apierrors.NewInvalid(
		provisioning.JobResourceInfo.GroupVersionKind().GroupKind(),
		name, list)
}

// validateExportJobOptions validates export (push) job options
func validateExportJobOptions(opts *provisioning.ExportJobOptions) field.ErrorList {
	list := field.ErrorList{}

	// Validate branch name if specified
	if opts.Branch != "" {
		if !git.IsValidGitBranchName(opts.Branch) {
			list = append(list, field.Invalid(field.NewPath("spec", "push", "branch"), opts.Branch, "invalid git branch name"))
		}
	}

	// Validate path if specified
	if opts.Path != "" {
		if err := safepath.IsSafe(opts.Path); err != nil {
			list = append(list, field.Invalid(field.NewPath("spec", "push", "path"), opts.Path, err.Error()))
		}
	}

	return list
}

// validateDeleteJobOptions validates delete job options
func validateDeleteJobOptions(opts *provisioning.DeleteJobOptions) field.ErrorList {
	list := field.ErrorList{}

	// At least one of paths or resources must be specified
	if len(opts.Paths) == 0 && len(opts.Resources) == 0 {
		list = append(list, field.Required(field.NewPath("spec", "delete"), "at least one path or resource must be specified"))
		return list
	}

	// Validate paths
	for i, p := range opts.Paths {
		if err := safepath.IsSafe(p); err != nil {
			list = append(list, field.Invalid(field.NewPath("spec", "delete", "paths").Index(i), p, err.Error()))
		}
	}

	// Validate resources
	for i, r := range opts.Resources {
		if r.Name == "" {
			list = append(list, field.Required(field.NewPath("spec", "delete", "resources").Index(i).Child("name"), "resource name is required"))
		}
		if r.Kind == "" {
			list = append(list, field.Required(field.NewPath("spec", "delete", "resources").Index(i).Child("kind"), "resource kind is required"))
		}
	}

	return list
}

// validateMoveJobOptions validates move job options
func validateMoveJobOptions(opts *provisioning.MoveJobOptions) field.ErrorList {
	list := field.ErrorList{}

	// At least one of paths or resources must be specified
	if len(opts.Paths) == 0 && len(opts.Resources) == 0 {
		list = append(list, field.Required(field.NewPath("spec", "move"), "at least one path or resource must be specified"))
		return list
	}

	// Target path is required
	if opts.TargetPath == "" {
		list = append(list, field.Required(field.NewPath("spec", "move", "targetPath"), "target path is required"))
	} else {
		if err := safepath.IsSafe(opts.TargetPath); err != nil {
			list = append(list, field.Invalid(field.NewPath("spec", "move", "targetPath"), opts.TargetPath, err.Error()))
		}
	}

	// Validate source paths
	for i, p := range opts.Paths {
		if err := safepath.IsSafe(p); err != nil {
			list = append(list, field.Invalid(field.NewPath("spec", "move", "paths").Index(i), p, err.Error()))
		}
	}

	// Validate resources
	for i, r := range opts.Resources {
		if r.Name == "" {
			list = append(list, field.Required(field.NewPath("spec", "move", "resources").Index(i).Child("name"), "resource name is required"))
		}
		if r.Kind == "" {
			list = append(list, field.Required(field.NewPath("spec", "move", "resources").Index(i).Child("kind"), "resource kind is required"))
		}
	}

	return list
}


@@ -0,0 +1,593 @@
package jobs
import (
"testing"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
)
func TestValidateJob(t *testing.T) {
tests := []struct {
name string
job *provisioning.Job
wantErr bool
validateError func(t *testing.T, err error)
}{
{
name: "valid pull job",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPull,
Repository: "test-repo",
Pull: &provisioning.SyncJobOptions{
Incremental: true,
},
},
},
wantErr: false,
},
{
name: "missing action",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Repository: "test-repo",
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.action: Required value")
},
},
{
name: "invalid action",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobAction("invalid"),
Repository: "test-repo",
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.action: Invalid value")
},
},
{
name: "missing repository",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPull,
Pull: &provisioning.SyncJobOptions{
Incremental: true,
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.repository: Required value")
},
},
{
name: "pull action without pull options",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPull,
Repository: "test-repo",
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.pull: Required value")
},
},
{
name: "push action without push options",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push: Required value")
},
},
{
name: "valid push job with valid branch",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Branch: "main",
Message: "Test commit",
},
},
},
wantErr: false,
},
{
name: "push job with invalid branch name",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Branch: "feature..branch", // Invalid: contains consecutive dots
Message: "Test commit",
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.branch")
require.Contains(t, err.Error(), "invalid git branch name")
},
},
{
name: "push job with invalid path",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Path: "../../../etc/passwd", // Invalid: path traversal
Message: "Test commit",
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.path")
},
},
{
name: "delete action without options",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionDelete,
Repository: "test-repo",
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.delete: Required value")
},
},
{
name: "delete action without paths or resources",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionDelete,
Repository: "test-repo",
Delete: &provisioning.DeleteJobOptions{},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "at least one path or resource must be specified")
},
},
{
name: "valid delete action with paths",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionDelete,
Repository: "test-repo",
Delete: &provisioning.DeleteJobOptions{
Paths: []string{"dashboard.json", "folder/other.json"},
},
},
},
wantErr: false,
},
{
name: "valid delete action with resources",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionDelete,
Repository: "test-repo",
Delete: &provisioning.DeleteJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "my-dashboard",
Kind: "Dashboard",
},
},
},
},
},
wantErr: false,
},
{
name: "delete action with invalid path",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionDelete,
Repository: "test-repo",
Delete: &provisioning.DeleteJobOptions{
Paths: []string{"../../etc/passwd"}, // Invalid: path traversal
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.delete.paths[0]")
},
},
{
name: "delete action with resource missing name",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionDelete,
Repository: "test-repo",
Delete: &provisioning.DeleteJobOptions{
Resources: []provisioning.ResourceRef{
{
Kind: "Dashboard",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.delete.resources[0].name")
},
},
{
name: "move action without options",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.move: Required value")
},
},
{
name: "move action without paths or resources",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
Move: &provisioning.MoveJobOptions{
TargetPath: "new-location/",
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "at least one path or resource must be specified")
},
},
{
name: "move action without target path",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
Move: &provisioning.MoveJobOptions{
Paths: []string{"dashboard.json"},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.move.targetPath: Required value")
},
},
{
name: "valid move action",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
Move: &provisioning.MoveJobOptions{
Paths: []string{"old-location/dashboard.json"},
TargetPath: "new-location/",
},
},
},
wantErr: false,
},
{
name: "move action with invalid target path",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
Move: &provisioning.MoveJobOptions{
Paths: []string{"dashboard.json"},
TargetPath: "../../../etc/", // Invalid: path traversal
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.move.targetPath")
},
},
{
name: "valid migrate job",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Repository: "test-repo",
Migrate: &provisioning.MigrateJobOptions{
History: true,
Message: "Migrate from legacy",
},
},
},
wantErr: false,
},
{
name: "migrate action without migrate options",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Repository: "test-repo",
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.migrate: Required value")
},
},
{
name: "valid pr job",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPullRequest,
Repository: "test-repo",
PullRequest: &provisioning.PullRequestJobOptions{
PR: 123,
Ref: "refs/pull/123/head",
},
},
},
wantErr: false,
},
{
name: "delete action with resource missing kind",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionDelete,
Repository: "test-repo",
Delete: &provisioning.DeleteJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "my-dashboard",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.delete.resources[0].kind")
},
},
{
name: "move action with valid resources",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
Move: &provisioning.MoveJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "my-dashboard",
Kind: "Dashboard",
},
},
TargetPath: "new-location/",
},
},
},
wantErr: false,
},
{
name: "move action with resource missing kind",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
Move: &provisioning.MoveJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "my-dashboard",
},
},
TargetPath: "new-location/",
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.move.resources[0].kind")
},
},
{
name: "move action with both paths and resources",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
Move: &provisioning.MoveJobOptions{
Paths: []string{"dashboard.json"},
Resources: []provisioning.ResourceRef{
{
Name: "my-dashboard",
Kind: "Dashboard",
},
},
TargetPath: "new-location/",
},
},
},
wantErr: false,
},
{
name: "move action with invalid source path",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMove,
Repository: "test-repo",
Move: &provisioning.MoveJobOptions{
Paths: []string{"../invalid/path"},
TargetPath: "valid/target/",
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.move.paths[0]")
},
},
{
name: "delete action with both paths and resources",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionDelete,
Repository: "test-repo",
Delete: &provisioning.DeleteJobOptions{
Paths: []string{"dashboard.json"},
Resources: []provisioning.ResourceRef{
{
Name: "my-dashboard",
Kind: "Dashboard",
},
},
},
},
},
wantErr: false,
},
{
name: "push action with valid path",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Path: "some/valid/path",
Message: "Test commit",
},
},
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := ValidateJob(tt.job)
if tt.wantErr {
require.Error(t, err)
if tt.validateError != nil {
tt.validateError(t, err)
}
} else {
require.NoError(t, err)
}
})
}
}

devenv/scopes/README.md Normal file

@@ -0,0 +1,140 @@
# Scopes Provisioning Script
This script generates Scopes, ScopeNodes, and ScopeNavigations for Grafana development environments.
## Usage
### Create resources
```bash
# From devenv directory
./setup.sh scopes
# Or run directly
cd scopes
go run scopes.go
```
### Delete all gdev-prefixed resources
```bash
# From devenv directory
./setup.sh undev
# Or run directly
cd scopes
go run scopes.go -clean
```
**Note about caching**: The `/find/scope_navigations` endpoint used by the UI caches ScopeNavigation results for 15 minutes. After running cleanup, deleted resources may still appear in the UI until the cache expires. The resources are actually deleted (you can verify this with the `/scopenavigations` list endpoint), but the UI only refreshes after ~15 minutes or after restarting Grafana.
A browser **Empty Cache and Hard Reload** also helps.
## Configuration
The script reads from `scopes-config.yaml` by default. You can specify a different config file:
```bash
go run scopes.go -config=my-config.yaml
```
### Configuration Format
The configuration file uses YAML format with a natural tree structure. The indentation itself represents the hierarchy:
- **scopes**: Map of scope definitions (key is the scope name)
- **tree**: Tree structure of scope nodes where the YAML structure defines parent-child relationships
- **navigations**: Map of scope navigations linking URLs to scopes (key is the navigation name)
Example:
```yaml
scopes:
app1:
title: Application 1
filters:
- key: app
operator: equals
value: app1
tree:
environments:
title: Environments
nodeType: container
children:
production:
title: Production
nodeType: container
children:
app1-prod:
title: Application 1
nodeType: leaf
linkId: app1
linkType: scope
navigations:
# Link to a dashboard
app1-nav:
url: /d/86Js1xRmk
scope: app1
# Link to another dashboard
app2-nav:
url: /d/GlAqcPgmz
scope: app2
# Custom URLs
explore-nav:
url: /explore
scope: app1
```
### Tree Structure
The tree structure uses YAML's natural indentation to represent hierarchy:
- **Key**: Unique identifier for the node (will be prefixed with "gdev-")
- **title**: Display title
- **nodeType**: Either "container" (can have children) or "leaf" (selectable scope)
- **linkId**: References a scope name (if nodeType is "leaf")
- **linkType**: Usually "scope"
- **children**: Map of child nodes (nested structure follows YAML indentation)
### Node Types
- **container**: A category/grouping node that can contain other nodes
- **leaf**: A selectable node that links to a scope
### Navigations
Navigations link URLs to scopes. The `url` field should contain the full URL path (e.g., `/d/abc123` for dashboards or `/explore` for other pages).
To find dashboard UIDs from gdev dashboards:
```bash
# Find UIDs of all gdev dashboards
find devenv/dev-dashboards -name "*.json" -exec sh -c 'echo "$1:" && jq -r ".uid // .dashboard.uid // \"NO_UID\"" "$1"' _ {} \;
# Or for a specific dashboard
jq -r ".uid // .dashboard.uid" devenv/dev-dashboards/all-panels.json
```
## Environment Variables
- `GRAFANA_URL`: Grafana URL (default: http://localhost:3000)
- `GRAFANA_NAMESPACE`: Namespace (default: default)
- `GRAFANA_USER`: Grafana username (default: admin)
- `GRAFANA_PASSWORD`: Grafana password (default: admin)
## Command Line Flags
- `-url`: Grafana URL
- `-namespace`: Namespace
- `-config`: Config file path (default: scopes-config.yaml)
- `-user`: Grafana username
- `-password`: Grafana password
- `-clean`: Delete all gdev-prefixed resources
## Prefix
All resources are automatically prefixed with "gdev-" to avoid conflicts with production data.


@@ -0,0 +1,84 @@
scopes:
app1:
title: Application 1
filters:
- key: app
operator: equals
value: app1
app2:
title: Application 2
filters:
- key: app
operator: equals
value: app2
cluster1:
title: Cluster 1
filters:
- key: cluster
operator: equals
value: cluster1
tree:
gdev-scopes:
title: gdev-scopes
nodeType: container
children:
production:
title: Production
nodeType: container
children:
app1-prod:
title: Application 1
nodeType: leaf
linkId: app1
linkType: scope
app2-prod:
title: Application 2
nodeType: leaf
linkId: app2
linkType: scope
test-cases:
title: Test cases
nodeType: container
disableMultiSelect: true
children:
test-case-1:
title: Test case 1
nodeType: leaf
linkId: test-case-1
linkType: scope
test-case-2:
title: Test case 2
nodeType: leaf
linkId: test-case-2
linkType: scope
clusters:
title: Clusters
nodeType: container
linkId: cluster1
linkType: scope
children:
cluster1-node:
title: Cluster 1
nodeType: leaf
linkId: cluster1
linkType: scope
navigations:
# Example: Link to a dashboard
app1-nav:
url: /d/86Js1xRmk
scope: app1
# Example: Link to a dashboard with full URL (already has /d/)
app2-nav:
url: /d/GlAqcPgmz
scope: app2
# Example: Custom URL path
custom-nav:
url: /explore
scope: app1

devenv/scopes/scopes.go Normal file

@@ -0,0 +1,433 @@
//go:build ignore
// +build ignore

package main
import (
"bytes"
"encoding/json"
"flag"
"fmt"
"io"
"net/http"
"os"
"strings"
"gopkg.in/yaml.v3"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/grafana/grafana/apps/scope/pkg/apis/scope/v0alpha1"
)
const (
prefix = "gdev"
apiVersion = "scope.grafana.app/v0alpha1"
defaultURL = "http://localhost:3000"
defaultUser = "admin"
)
var (
grafanaURL = flag.String("url", getEnv("GRAFANA_URL", defaultURL), "Grafana URL")
namespace = flag.String("namespace", getEnv("GRAFANA_NAMESPACE", "default"), "Namespace")
configFile = flag.String("config", "scopes-config.yaml", "Config file path")
user = flag.String("user", getEnv("GRAFANA_USER", defaultUser), "Grafana username")
password = flag.String("password", getEnv("GRAFANA_PASSWORD", "admin"), "Grafana password")
cleanupFlag = flag.Bool("clean", false, "Delete all gdev-prefixed resources")
)
func getEnv(key, defaultValue string) string {
if value := os.Getenv(key); value != "" {
return value
}
return defaultValue
}
type Config struct {
Scopes map[string]ScopeConfig `yaml:"scopes"`
Tree map[string]TreeNode `yaml:"tree"`
Navigations map[string]NavigationConfig `yaml:"navigations"`
}
// ScopeConfig is used for YAML parsing - converts to v0alpha1.ScopeSpec
type ScopeConfig struct {
Title string `yaml:"title"`
Filters []ScopeFilterConfig `yaml:"filters"`
}
// ScopeFilterConfig is used for YAML parsing - converts to v0alpha1.ScopeFilter
type ScopeFilterConfig struct {
Key string `yaml:"key"`
Value string `yaml:"value"`
Values []string `yaml:"values,omitempty"`
Operator string `yaml:"operator"`
}
// TreeNode is used for YAML parsing - converts to v0alpha1.ScopeNodeSpec
type TreeNode struct {
Title string `yaml:"title"`
NodeType string `yaml:"nodeType"`
LinkID string `yaml:"linkId,omitempty"`
LinkType string `yaml:"linkType,omitempty"`
Children map[string]TreeNode `yaml:"children,omitempty"`
}
type NavigationConfig struct {
URL string `yaml:"url"` // URL path (e.g., /d/abc123 or /explore)
Scope string `yaml:"scope"`
}
// Helper function to convert ScopeFilterConfig to v0alpha1.ScopeFilter
func convertFilter(cfg ScopeFilterConfig) v0alpha1.ScopeFilter {
filter := v0alpha1.ScopeFilter{
Key: cfg.Key,
Value: cfg.Value,
Values: cfg.Values,
Operator: v0alpha1.FilterOperator(cfg.Operator),
}
return filter
}
// Helper function to convert ScopeConfig to v0alpha1.ScopeSpec
func convertScopeSpec(cfg ScopeConfig) v0alpha1.ScopeSpec {
filters := make([]v0alpha1.ScopeFilter, len(cfg.Filters))
for i, f := range cfg.Filters {
filters[i] = convertFilter(f)
}
return v0alpha1.ScopeSpec{
Title: cfg.Title,
Filters: filters,
}
}
type Client struct {
baseURL string
namespace string
httpClient *http.Client
auth string
}
func NewClient(baseURL, namespace, user, password string) *Client {
return &Client{
baseURL: baseURL,
namespace: namespace,
httpClient: &http.Client{},
auth: basicAuth(user, password),
}
}
func basicAuth(username, password string) string {
return fmt.Sprintf("%s:%s", username, password)
}
func (c *Client) makeRequest(method, endpoint string, body []byte) error {
url := fmt.Sprintf("%s/apis/%s/namespaces/%s%s", c.baseURL, apiVersion, c.namespace, endpoint)
var req *http.Request
var err error
if body != nil {
req, err = http.NewRequest(method, url, bytes.NewBuffer(body))
} else {
req, err = http.NewRequest(method, url, nil)
}
if err != nil {
return fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
parts := strings.SplitN(c.auth, ":", 2) // SplitN keeps any ":" inside the password intact
req.SetBasicAuth(parts[0], parts[1])
resp, err := c.httpClient.Do(req)
if err != nil {
return fmt.Errorf("request failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
bodyBytes, _ := io.ReadAll(resp.Body)
// Treat 404 as success for any request; for DELETE it means the resource was already deleted
if resp.StatusCode == 404 {
return nil
}
return fmt.Errorf("API request failed: HTTP %d - %s", resp.StatusCode, string(bodyBytes))
}
return nil
}
func (c *Client) createScope(name string, cfg ScopeConfig) error {
prefixedName := prefix + "-" + name
spec := convertScopeSpec(cfg)
resource := v0alpha1.Scope{
TypeMeta: metav1.TypeMeta{
APIVersion: apiVersion,
Kind: "Scope",
},
ObjectMeta: metav1.ObjectMeta{
Name: prefixedName,
},
Spec: spec,
}
body, err := json.Marshal(resource)
if err != nil {
return fmt.Errorf("failed to marshal scope: %w", err)
}
fmt.Printf("✓ Creating scope: %s\n", prefixedName)
return c.makeRequest("POST", "/scopes", body)
}
func (c *Client) createScopeNode(name string, node TreeNode, parentName string) error {
prefixedName := prefix + "-" + name
prefixedParent := ""
prefixedLinkID := ""
if parentName != "" {
prefixedParent = prefix + "-" + parentName
}
if node.LinkID != "" {
prefixedLinkID = prefix + "-" + node.LinkID
}
nodeType := v0alpha1.NodeType(node.NodeType)
if nodeType == "" {
nodeType = v0alpha1.NodeTypeContainer
}
linkType := v0alpha1.LinkType(node.LinkType)
if linkType == "" {
linkType = v0alpha1.LinkTypeScope
}
spec := v0alpha1.ScopeNodeSpec{
Title: node.Title,
NodeType: nodeType,
DisableMultiSelect: false,
}
if prefixedParent != "" {
spec.ParentName = prefixedParent
}
if prefixedLinkID != "" {
spec.LinkID = prefixedLinkID
spec.LinkType = linkType
}
resource := v0alpha1.ScopeNode{
TypeMeta: metav1.TypeMeta{
APIVersion: apiVersion,
Kind: "ScopeNode",
},
ObjectMeta: metav1.ObjectMeta{
Name: prefixedName,
},
Spec: spec,
}
body, err := json.Marshal(resource)
if err != nil {
return fmt.Errorf("failed to marshal scope node: %w", err)
}
fmt.Printf("✓ Creating scope node: %s\n", prefixedName)
return c.makeRequest("POST", "/scopenodes", body)
}
func (c *Client) createScopeNavigation(name string, nav NavigationConfig) error {
prefixedName := prefix + "-" + name
prefixedScope := prefix + "-" + nav.Scope
if nav.URL == "" {
return fmt.Errorf("navigation %s must have 'url' specified", name)
}
spec := v0alpha1.ScopeNavigationSpec{
URL: nav.URL,
Scope: prefixedScope,
}
resource := v0alpha1.ScopeNavigation{
TypeMeta: metav1.TypeMeta{
APIVersion: apiVersion,
Kind: "ScopeNavigation",
},
ObjectMeta: metav1.ObjectMeta{
Name: prefixedName,
},
Spec: spec,
}
body, err := json.Marshal(resource)
if err != nil {
return fmt.Errorf("failed to marshal scope navigation: %w", err)
}
fmt.Printf("✓ Creating scope navigation: %s\n", prefixedName)
return c.makeRequest("POST", "/scopenavigations", body)
}
func (c *Client) createTreeNodes(children map[string]TreeNode, parentName string) error {
for name, node := range children {
// Build full node name by appending to parent name
// This makes it easy to see the tree path from the node name
fullNodeName := name
if parentName != "" {
fullNodeName = parentName + "-" + name
}
// parentName here is the full parent name (already includes full path)
err := c.createScopeNode(fullNodeName, node, parentName)
if err != nil {
return err
}
if len(node.Children) > 0 {
// Pass fullNodeName as parent for children (will be prefixed with "gdev-" in createScopeNode)
if err := c.createTreeNodes(node.Children, fullNodeName); err != nil {
return err
}
}
}
return nil
}
func (c *Client) deleteResources() {
fmt.Println("Deleting all gdev-prefixed resources...")
// Delete scopes (silently handle errors if endpoints aren't available)
c.deleteResourceType("/scopes", "scope")
// Delete scope nodes
c.deleteResourceType("/scopenodes", "scope node")
// Delete scope navigations
c.deleteResourceType("/scopenavigations", "scope navigation")
fmt.Println("✓ Cleanup complete")
}
func (c *Client) deleteResourceType(endpoint, resourceType string) {
url := fmt.Sprintf("%s/apis/%s/namespaces/%s%s", c.baseURL, apiVersion, c.namespace, endpoint)
req, err := http.NewRequest("GET", url, nil)
if err != nil {
// Silently skip if we can't create request
return
}
req.Header.Set("Content-Type", "application/json")
parts := strings.SplitN(c.auth, ":", 2) // SplitN keeps any ":" inside the password intact
req.SetBasicAuth(parts[0], parts[1])
resp, err := c.httpClient.Do(req)
if err != nil {
// Silently skip if endpoint isn't available
return
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
// Silently skip if endpoint returns error (might not be available)
return
}
var listResponse struct {
Items []struct {
Metadata struct {
Name string `json:"name"`
} `json:"metadata"`
} `json:"items"`
}
bodyBytes, _ := io.ReadAll(resp.Body)
if err := json.Unmarshal(bodyBytes, &listResponse); err != nil {
// Silently skip if we can't decode response
return
}
if len(listResponse.Items) == 0 {
return
}
deletedCount := 0
for _, item := range listResponse.Items {
if strings.HasPrefix(item.Metadata.Name, prefix+"-") {
fmt.Printf(" Deleting %s: %s\n", resourceType, item.Metadata.Name)
deleteURL := fmt.Sprintf("%s/%s", endpoint, item.Metadata.Name)
if err := c.makeRequest("DELETE", deleteURL, nil); err == nil {
deletedCount++
} // deletion errors are silently skipped
}
}
if deletedCount > 0 {
fmt.Printf("  Deleted %d %s resource(s)\n", deletedCount, resourceType)
}
}
func main() {
flag.Parse()
client := NewClient(*grafanaURL, *namespace, *user, *password)
if *cleanupFlag {
// Cleanup should be silent if endpoints aren't available
client.deleteResources()
return
}
configData, err := os.ReadFile(*configFile)
if err != nil {
fmt.Fprintf(os.Stderr, "Error reading config file: %v\n", err)
os.Exit(1)
}
var config Config
if err := yaml.Unmarshal(configData, &config); err != nil {
fmt.Fprintf(os.Stderr, "Error parsing config file: %v\n", err)
os.Exit(1)
}
fmt.Printf("Loading configuration from: %s\n", *configFile)
fmt.Printf("Grafana URL: %s\n", *grafanaURL)
fmt.Printf("Namespace: %s\n", *namespace)
fmt.Printf("Prefix: %s\n\n", prefix)
// Create scopes
fmt.Println("Creating scopes...")
for name, scope := range config.Scopes {
if err := client.createScope(name, scope); err != nil {
fmt.Fprintf(os.Stderr, "Error creating scope %s: %v\n", name, err)
os.Exit(1)
}
}
fmt.Println()
// Create scope nodes (tree structure)
if len(config.Tree) > 0 {
fmt.Println("Creating scope nodes...")
if err := client.createTreeNodes(config.Tree, ""); err != nil {
fmt.Fprintf(os.Stderr, "Error creating scope nodes: %v\n", err)
os.Exit(1)
}
fmt.Println()
}
// Create scope navigations
if len(config.Navigations) > 0 {
fmt.Println("Creating scope navigations...")
for name, nav := range config.Navigations {
if err := client.createScopeNavigation(name, nav); err != nil {
fmt.Fprintf(os.Stderr, "Error creating scope navigation %s: %v\n", name, err)
os.Exit(1)
}
}
fmt.Println()
}
fmt.Println("✓ All resources created successfully!")
}


@@ -19,6 +19,13 @@ bulkFolders() {
ln -s -f ../../../devenv/bulk-folders/bulk-folders.yaml ../conf/provisioning/dashboards/bulk-folders.yaml
}
scopes() {
echo -e "\xE2\x9C\x94 Setting up scopes, scope nodes, and scope navigations"
cd scopes
go run scopes.go
cd ..
}
requiresJsonnet() {
if ! type "jsonnet" > /dev/null; then
echo "you need to install jsonnet to run this script"
@@ -49,6 +56,12 @@ undev() {
rm -rf bulk-folders/Bulk\ Folder*
echo -e " \xE2\x9C\x94 Reverting bulk-folders provisioning"
# Removing scopes, scope nodes, and scope navigations
cd scopes
go run scopes.go -clean
cd ..
echo -e " \xE2\x9C\x94 Deleting scopes, scope nodes, and scope navigations"
# Removing the symlinks
rm -f ../conf/provisioning/dashboards/custom.yaml
rm -f ../conf/provisioning/dashboards/bulk-folders.yaml
@@ -63,6 +76,7 @@ usage() {
echo " bulk-dashboards - provision 400 dashboards"
echo " bulk-folders [folders] [dashboards] - provision many folders with dashboards"
echo " bulk-folders - provision 200 folders with 3 dashboards in each"
echo " scopes - provision scopes, scope nodes, and scope navigations"
echo " no args - provision core datasources and dev dashboards"
echo " undev - removes any provisioning done by the setup.sh"
}
@@ -80,6 +94,8 @@ main() {
bulkDashboard
elif [[ $cmd == "bulk-folders" ]]; then
bulkFolders "$arg1"
elif [[ $cmd == "scopes" ]]; then
scopes
elif [[ $cmd == "undev" ]]; then
undev
else


@@ -68,7 +68,21 @@ You can change this behavior by disabling the `alertingSaveStateCompressed` feat
You can also reduce database load by writing states periodically instead of after every evaluation.
There are two approaches to periodic state saving:
#### Compressed periodic saves
You can combine compressed alert state storage with periodic saves by enabling both `alertingSaveStateCompressed` and `alertingSaveStatePeriodic` feature toggles together.
This approach groups all alert instances by rule UID and compresses them together for efficient storage.
When both feature toggles are enabled, Grafana saves compressed alert states at the interval specified by `state_periodic_save_interval`. In compressed mode, the `state_periodic_save_batch_size` setting is ignored because instances are grouped by rule UID rather than batched by size.
#### Batch-based periodic saves
Alternatively, you can use batch-based periodic saves without compression. This approach processes individual alert instances in batches of a specified size:
1. Enable the `alertingSaveStatePeriodic` feature toggle.
1. Disable the `alertingSaveStateCompressed` feature toggle.
@@ -77,7 +91,7 @@ By default, it saves the states every 5 minutes to the database and on each shut
can also be configured using the `state_periodic_save_interval` configuration flag. During this process, Grafana deletes all existing alert instances from the database and then writes the entire current set of instances back in batches in a single transaction.
Configure the size of each batch using the `state_periodic_save_batch_size` configuration option.
#### Jitter for periodic saves
##### Jitter for batch-based periodic saves
To further distribute database load, you can enable jitter for periodic state saves by setting `state_periodic_save_jitter_enabled = true`. When jitter is enabled, instead of saving all batches simultaneously, Grafana spreads the batch writes across a calculated time window of 85% of the save interval.
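The settings referenced above can be combined in `grafana.ini`. A minimal sketch, assuming these keys sit under the `[unified_alerting]` section alongside the other alert-state options (the values shown are illustrative, not recommendations):

```ini
[unified_alerting]
; Save alert state every 5 minutes instead of after every evaluation
state_periodic_save_interval = 5m
; Batch size for batch-based (uncompressed) periodic saves;
; ignored when the alertingSaveStateCompressed feature toggle is enabled
state_periodic_save_batch_size = 100
; Spread batch writes across roughly 85% of the save interval
state_periodic_save_jitter_enabled = true
```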


@@ -250,6 +250,19 @@ You can query CloudWatch Logs using three supported query language options:
1. Select a region.
1. Select **CloudWatch Logs** from the query type drop-down.
1. Select the Logs Mode depending on whether you want to query CloudWatch Logs Insights or Log Anomalies.
**Log Anomalies**
[Anomaly detection](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/LogsAnomalyDetection.html) uses machine learning and pattern recognition to establish baselines of typical log content.
The Log Anomalies query editor fetches the list of anomalies detected in your CloudWatch service. To query log anomalies in the editor, you must first create a log anomaly detector in the AWS CloudWatch console.
The log trend cell shows the number of occurrences of the pattern over the selected query time range.
The table shows 50 log anomalies at a time. To narrow down the list, you can filter anomalies by their ARN and suppressed state.
In addition, you can use the Logs Insights QL editor and the `anomaly` command together with the `patterns` command to define and display log anomalies in real time. See the [CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/LogsAnomalyDetection-Insights.html) documentation for more information.
**Logs Insights**
1. Select the query language you would like to use in the **Query Language** drop-down.
1. Click **Select log groups** and choose up to 20 log groups to query.
1. Use the main input area to write your logs query. Amazon CloudWatch only supports a subset of OpenSearch SQL and PPL commands. To find out more about the supported syntax, consult the [Amazon CloudWatch Logs documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_Languages.html).
@@ -258,7 +271,7 @@ You can query CloudWatch Logs using three supported query language options:
You must specify the region and log groups when querying with **Logs Insights QL** and **OpenSearch PPL**. **OpenSearch SQL** doesn't require log group selection. However, selecting log groups simplifies query writing by populating syntax suggestions with discovered log group fields.
{{< /admonition >}}
Click **CloudWatch Logs Insights** to interactively view, search, and analyze your log data in the CloudWatch Logs Insights console. If you're not logged in to the CloudWatch console, the link forwards you to the login page.
Click **View in CloudWatch console** to interactively view, search, and analyze your log data in the CloudWatch Logs Insights console. If you're not logged in to the CloudWatch console, the link forwards you to the login page.
### Query Log groups with OpenSearch SQL


@@ -0,0 +1,154 @@
---
aliases:
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Concepts
title: Data sources, plugins, and integrations
weight: 70
refs:
data-source-management:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
plugin-management:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/plugin-management/
- pattern: /docs/grafana-cloud
destination: /docs/grafana/<GRAFANA_VERSION>/administration/plugin-management/
---
# Data sources, plugins, and integrations
When working with Grafana, you'll encounter three key concepts: data sources, plugins, and integrations. Each is essential to building effective monitoring solutions, but they serve distinct purposes and are often confused with one another. This document clarifies what each concept is, when to use it, and how the three work together to create observability solutions in Grafana.
## Data sources
A data source is a connection to a specific database, monitoring system, service, or other external location that stores data, metrics, logs, or traces. Examples include Prometheus, InfluxDB, PostgreSQL, or CloudWatch. When you configure a data source in Grafana, you're telling it where to fetch data from, providing connection details, credentials, and endpoints. Data sources are the foundation for working with Grafana. Without them, Grafana has nothing to visualize. Once configured, you can query your Prometheus data source to display CPU metrics, or query CloudWatch to visualize AWS infrastructure performance.
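As a concrete example, in self-hosted Grafana a data source can be defined declaratively through file provisioning rather than through the UI. A minimal sketch; the name and URL are placeholders for your own environment:

```yaml
# provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus            # display name shown in Grafana
    type: prometheus            # the data source plugin to use
    access: proxy               # the Grafana backend proxies requests
    url: http://localhost:9090  # placeholder address of your Prometheus server
    isDefault: true
```

The same connection can also be created interactively under **Connections** > **Data sources**; either way, the result is a configured data source that dashboards can query.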
## Plugins
A plugin extends Grafana's core functionality. Plugins can add new data source types, visualization panels, or full-featured applications that integrate with Grafana. They make Grafana modular and extensible.
Plugins come in three types:
- **Data source plugins** connect Grafana to **external data sources**. You use this type of plugin when you want to access and work with data from an external source or third party. Examples include Prometheus, MSSQL, and Databricks.
- **Panel plugins** control how data appears in Grafana dashboards. Examples of panel plugins include pie chart, candlestick, and traffic light. Note that in some cases, panels don't rely on a data source at all. The **Text** panel can render static or templated content without querying data. Panels can also support user-driven actions. For example, the **Button** panel can trigger workflows or external calls.
- **App plugins** allow you to bundle data sources and panel plugins within a single package. They enable you to create custom pages within Grafana that can function like dashboards, providing dedicated spaces for documentation, sign-up forms, custom UI extensions, and integration with other services via HTTP. Cloud apps built as app plugins offer out-of-the-box observability solutions, such as Azure Cloud Native Monitoring and Redis Application, that provide more comprehensive monitoring capabilities than standalone integrations.
## Integrations
_Integrations are exclusive to Grafana Cloud._ An integration is a pre-packaged monitoring solution that bundles export/scrape configurations, pre-built dashboards, alert rules, and sometimes recording rules. Unlike standalone data sources, integrations handle the complete workflow: they configure how telemetry is collected and sent to Grafana Cloud's hosted databases, then provide ready-to-use dashboards and alerts. For example, a Kubernetes integration configures metric collection from your cluster, creates dashboards for monitoring, and sets up common alerts, all working together out of the box.
## When to use each
Use a data source when:
- You want to connect Grafana to a specific system (for example, Prometheus or MySQL).
- You're building custom dashboards with hand-picked metrics and visualizations.
- Your monitoring needs are unique or not covered by pre-packaged integrations.
Use a plugin when:
- You need to connect to a system Grafana doesn't support natively.
- You want to add new functionality (visualizations, workflows, or app-style extensions).
- You have specialized or industry-specific requirements (for example, IoT).
Use an integration when:
- You're using Grafana Cloud and want a quick, pre-built setup.
- You prefer minimal configuration with ready-to-use dashboards and alerts.
- You're new to observability and want to learn what good monitoring looks like.
## Relationships and interactions
How data sources, plugins, and integrations work together:
- Plugins extend what Grafana can do.
- Data sources define where Grafana reads data from.
- Integrations combine telemetry collection and pre-built content to create complete monitoring solutions.
Examples:
- Install the Databricks data source plugin. Configure the Databricks data source and run SQL queries against your Databricks workspace. Use the `Histogram` panel to visualize distributions in your query results, such as latency buckets, job durations, or model output scores.
- Install the Redis Application app plugin. This app provides a unified experience for monitoring Redis by working with your existing Redis data source. It adds custom pages for configuration and exploration, along with prebuilt dashboards, commands, and visualizations that help you analyze performance, memory usage, and key activity.
<!-- - Install the Azure Cloud Native Monitoring app plugin, which bundles the app and data source plugin types. It includes data source plugins for Azure Monitor and Log Analytics, panel plugins for visualizing Azure metrics, and a custom configuration page for managing authentication and subscriptions. -->
- If you're using Grafana Cloud, add the ClickHouse integration. This integration provides pre-built dashboards and alerts to monitor ClickHouse cluster metrics and logs, enabling you to visualize and analyze ClickHouse performance and health in real time.
## Frequently asked questions
**What's the difference between a data source and a data source plugin?**
A data source plugin is a **software component that enables Grafana to communicate** with specific types of databases or services, like Prometheus, MySQL, or InfluxDB. A data source is **an actual configured connection** to one of these databases, including the credentials, URL, and settings needed to retrieve data.
Think of it this way: You _install_ a plugin but _configure_ a data source.
**Do I need a plugin to use a data source?**
You must install the plugin before you configure or use the data source. Each data source plugin has its own versioning and lifecycle. Grafana includes built-in core data sources, which can be thought of as pre-installed plugins.
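For self-hosted Grafana, the install-then-configure split looks like this on the command line. A sketch only, assuming `grafana-cli` is on your PATH and Grafana runs under systemd; the plugin ID is an example:

```shell
# Install a data source plugin (the ID here is an example)
grafana-cli plugins install grafana-github-datasource

# Restart Grafana so the newly installed plugin is loaded
systemctl restart grafana-server
```

After the restart, the plugin is available, and you can then configure one or more data sources that use it.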
**Can I use integrations in self-hosted Grafana?**
No, integrations are exclusive to Grafana Cloud. In self-hosted Grafana, you can replicate similar setups manually using data sources and dashboards.
**Aren't integrations just pre-built dashboards?**
No, integrations are much more than dashboards. While dashboards are part of an integration, they're only one piece. Integrations typically include:
- Data collection setup (for example, pre-configured agents or exporters).
- Predefined metrics and queries tailored to the technology.
- Alerting rules and notifications to help detect common issues.
- Dashboards to visualize and explore that data.
**What's the difference between plugin types?**
A data source plugin in Grafana is a software component that enables Grafana to connect to and retrieve data from various external data sources. After you install the plugin, you can use it to configure one or more data sources. Each data source defines the actual connection details, like the server URL, authentication method, and query options.
A panel plugin in Grafana is an extension that allows you to add new and custom visualizations to your Grafana dashboards. While Grafana comes with several built-in panel types (like graphs, single stats, and tables), panel plugins extend this functionality by providing specialized ways to display data.
An app plugin in Grafana is a type of plugin that provides a comprehensive, integrated, and often out-of-the-box experience within Grafana. Unlike data source plugins, which connect to external data sources, or panel plugins, which provide new visualization types, app plugins can combine various functionalities to create a more complete experience.
**How do data sources and integrations differ in how they handle data?**
Data sources query data where it already lives. They connect Grafana to an external system or database, such as Prometheus, MySQL, or Elasticsearch, and fetch data on demand. You keep full control over your own data stores, schemas, and retention policies.
In contrast, integrations focus on getting data into Grafana Cloud's hosted backends. They ingest metrics, logs, and traces into systems like Mimir, Loki, or Tempo, using pre-configured agents and pipelines. Instead of querying an external database, Grafana queries its own managed storage, where the integration has placed the data.
## Summary reference
Use the following table to compare how data sources, plugins, and integrations differ in scope, purpose, and use. It highlights where each applies within Grafana, what problems it solves, and how they work together to build observability solutions.
| Concept | Where it applies | Purpose | What it includes | When to use it | Example |
| ---------------------- | ---------------------- | ---------------------------------------------------- | ----------------------------------------------------------- | ------------------------------------------------------- | ------------------------------------------ |
| **Data source** | Self-hosted and Cloud | Connect to external metrics, logs, or traces storage | Connection settings, auth, query config | Visualize data from a database or monitoring system | Prometheus, CloudWatch, PostgreSQL |
| **Plugin** | Self-hosted and Cloud | Extend Grafana with new capabilities | Three types: data source, panel, and app | Add connectivity or functionality not included by default | Plotly panel, MongoDB data source |
| **App plugin** | Self-hosted and Cloud | Bundle plugins with custom pages or UI | Data source + panel plugins + custom routes | Create a dedicated app-like experience | Azure Cloud Native Monitoring |
| **Panel plugin** | Self-hosted and Cloud | Add new visualization types | Custom panels and visualization logic | Display data beyond built-in visualizations | Pie chart, Candlestick, Geomap |
| **Data source plugin** | Self-hosted and Cloud | Connect to a new external system type | Connector code for querying that system | Access data from an unsupported backend | Databricks, MongoDB, MSSQL |
| **Integration** | Grafana Cloud only | Pre-packaged observability for a specific technology | Telemetry config, dashboards, alerts, recording rules | Get an out-of-the-box setup with minimal configuration | Kubernetes, Redis, NGINX |
For detailed documentation and how-to guides related to data sources, plugins, and integrations, refer to the following references:
**Data sources**:
- [Manage data sources](ref:data-source-management)
**Plugins**:
- [Plugin types and usage](https://grafana.com/developers/plugin-tools/key-concepts/plugin-types-usage)
- [App plugins](https://grafana.com/developers/plugin-tools/how-to-guides/app-plugins/)
- [Data source plugins](https://grafana.com/developers/plugin-tools/how-to-guides/data-source-plugins/)
- [Panel plugins](https://grafana.com/developers/plugin-tools/how-to-guides/panel-plugins/)
**Integrations**:
- [Grafana integrations](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/integrations/)
- [Install and manage integrations](https://grafana.com/docs/grafana-cloud/monitor-infrastructure/integrations/install-and-manage-integrations/)


@@ -40,7 +40,6 @@ Most [generally available](https://grafana.com/docs/release-life-cycle/#general-
| `transformationsRedesign` | Enables the transformations redesign | Yes |
| `awsAsyncQueryCaching` | Enable caching for async queries for Redshift and Athena. Requires that the datasource has caching and async query support enabled | Yes |
| `dashgpt` | Enable AI powered features in dashboards | Yes |
| `panelMonitoring` | Enables panel monitoring through logs and measurements | Yes |
| `formatString` | Enable format string transformer | Yes |
| `kubernetesDashboards` | Use the kubernetes API in the frontend for dashboards | Yes |
| `addFieldFromCalculationStatFunctions` | Add cumulative and window functions to the add field from calculation transformation | Yes |


@@ -1,16 +1,9 @@
import { Page, Locator } from '@playwright/test';
import { test, expect } from '@grafana/plugin-e2e';
import testDashboard from '../dashboards/AdHocFilterTest.json';
import { getCell } from '../panels-suite/table-utils';
// Helper function to get a specific cell in a table
const getCell = async (loc: Page | Locator, rowIdx: number, colIdx: number) =>
loc
.getByRole('row')
.nth(rowIdx)
.getByRole(rowIdx === 0 ? 'columnheader' : 'gridcell')
.nth(colIdx);
const fixture = require('../fixtures/prometheus-response.json');
test.describe(
'Dashboard with Table powered by Prometheus data source',
@@ -46,80 +39,90 @@ test.describe(
gotoDashboardPage,
selectors,
}) => {
// Handle query and query_range API calls
// Handle query and query_range API calls. Ideally, this would instead be directly tested against gdev-prometheus.
await page.route(/\/api\/ds\/query/, async (route) => {
const fixture = require('../fixtures/prometheus-response.json');
// during the test, we select the "inner_eval" slice to filter; this simulates the behavior
// of prometheus applying that filter and removing dataframes from the response.
if (route.request().postData()?.includes('{slice=\\\"inner_eval\\\"}')) {
fixture.results.A.frames.splice(1, 1);
const response = JSON.parse(JSON.stringify(fixture));
// This simulates the behavior of prometheus applying a filter and removing dataframes from the response where
// the label matches the selected filter. We check for either the slice being applied inline into the prometheus
// query or the adhoc filter being present in the request body of prometheus applying that filter and removing
// dataframes from the response.
const postData = route.request().postData();
const match =
postData?.match(/{slice=\\\"([\w_]+)\\\"}/) ??
postData?.match(/"adhocFilters":\[{"key":"slice","operator":"equals","value":"([\w_]+)"}\]/);
if (match) {
response.results.A.frames = response.results.A.frames.filter((frame) =>
frame.schema.fields.every((field) => !field.labels || field.labels.slice === match[1])
);
}
await route.fulfill({
status: 200,
contentType: 'application/json',
body: JSON.stringify(fixture),
body: JSON.stringify(response),
});
});
const dashboardPage = await gotoDashboardPage({ uid: dashboardUID });
const panel = dashboardPage.getByGrafanaSelector(
let panel = dashboardPage.getByGrafanaSelector(
selectors.components.Panels.Panel.title('Table powered by Prometheus')
);
await expect(panel).toBeVisible();
await expect(panel, 'panel is rendered').toBeVisible();
// Wait for the table to load completely
await expect(panel.locator('.rdg')).toBeVisible();
const table = panel.locator('.rdg');
await expect(table, 'table is rendered').toBeVisible();
// Get the first data cell in the third column (row 1, column 2)
const labelValueCell = await getCell(panel, 1, 1);
await expect(labelValueCell).toBeVisible();
const firstValue = (await getCell(table, 1, 1).textContent())!;
const secondValue = (await getCell(table, 2, 1).textContent())!;
expect(firstValue, `first cell is "${firstValue}"`).toBeTruthy();
expect(secondValue, `second cell is "${secondValue}"`).toBeTruthy();
expect(firstValue, 'first and second cell values are different').not.toBe(secondValue);
// Get the cell value before clicking the filter button
const labelValue = await labelValueCell.textContent();
expect(labelValue).toBeTruthy();
async function performTest(labelValue: string) {
// Confirm both cells are rendered before we proceed
const otherValue = labelValue === firstValue ? secondValue : firstValue;
await expect(table.getByText(labelValue), `"${labelValue}" is rendered`).toContainText(labelValue);
await expect(table.getByText(otherValue), `"${otherValue}" is rendered`).toContainText(otherValue);
const otherValueCell = await getCell(panel, 2, 1);
const otherValueLabel = await otherValueCell.textContent();
expect(otherValueLabel).toBeTruthy();
expect(otherValueLabel).not.toBe(labelValue);
// click the "Filter for value" button on the cell with the specified labelValue
await table.getByText(labelValue).hover();
table.getByText(labelValue).getByRole('button', { name: 'Filter for value' }).click();
// Hover over the first cell to trigger the appearance of filter actions
await labelValueCell.hover();
// Look for submenu items that contain the filtered value
// The adhoc filter should appear as a filter chip or within the variable controls
const submenuItems = dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.SubMenu.submenuItem);
await expect(submenuItems.filter({ hasText: labelValue }), `submenu contains "${labelValue}"`).toBeVisible();
await expect(
submenuItems.filter({ hasText: otherValue }),
`submenu does not contain "${otherValue}"`
).toBeHidden();
// Check if the "Filter for value" button appears on hover
const filterForValueButton = labelValueCell.getByRole('button', { name: 'Filter for value' });
await expect(filterForValueButton).toBeVisible();
// The URL parameter should contain the filter in format like: var-PromAdHoc=["columnName","=","value"]
const currentUrl = page.url();
const urlParams = new URLSearchParams(new URL(currentUrl).search);
const promAdHocParam = urlParams.get('var-PromAdHoc');
expect(promAdHocParam, `url contains "${labelValue}"`).toContain(labelValue);
expect(promAdHocParam, `url does not contain "${otherValue}"`).not.toContain(otherValue);
// Click on the "Filter for value" button
await filterForValueButton.click();
// finally, let's check that the table was updated and that the value was filtered out when the query was re-run
await expect(table.getByText(labelValue), `"${labelValue}" is still visible`).toHaveText(labelValue);
await expect(table.getByText(otherValue), `"${otherValue}" is filtered out`).toBeHidden();
// Check if the adhoc filter appears in the dashboard submenu
const submenuItems = dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.SubMenu.submenuItem);
await expect(submenuItems.first()).toBeVisible();
// Remove the adhoc filter by clicking the submenu item again
const filterChip = submenuItems.filter({ hasText: labelValue });
await filterChip.getByLabel(/Remove filter with key/).click();
await page.click('body', { position: { x: 0, y: 0 } }); // click outside to close the open menu from ad-hoc filters
// Look for submenu items that contain the filtered value
// The adhoc filter should appear as a filter chip or within the variable controls
const hasFilterValue = await submenuItems.filter({ hasText: labelValue! }).count();
expect(hasFilterValue).toBeGreaterThan(0);
// the "first" and "second" cells locators don't work here for some reason.
await expect(table.getByText(labelValue), `"${labelValue}" is still rendered`).toContainText(labelValue);
await expect(table.getByText(otherValue), `"${otherValue}" is rendered again`).toContainText(otherValue);
}
const hasOtherValue = await submenuItems.filter({ hasText: otherValueLabel! }).count();
expect(hasOtherValue).toBe(0);
// Check if the URL contains the var-PromAdHoc parameter with the filtered value
const currentUrl = page.url();
expect(currentUrl).toContain('var-PromAdHoc');
// The URL parameter should contain the filter in format like: var-PromAdHoc=["columnName","=","value"]
const urlParams = new URLSearchParams(new URL(currentUrl).search);
const promAdHocParam = urlParams.get('var-PromAdHoc');
expect(promAdHocParam).toBeTruthy();
expect(promAdHocParam).toContain(labelValue!);
expect(promAdHocParam).not.toContain(otherValueLabel!);
// finally, let's check that the table was updated and that the value was filtered out when the query was re-run
await expect(otherValueCell).toBeHidden();
await performTest(firstValue);
await performTest(secondValue);
});
}
);


@@ -17,7 +17,7 @@ test.describe(
tag: ['@dashboards'],
},
() => {
test.fixme('Tests dashboard time zone scenarios', async ({ page, gotoDashboardPage, selectors }) => {
test('Tests dashboard time zone scenarios', async ({ page, gotoDashboardPage, selectors }) => {
const dashboardPage = await gotoDashboardPage({ uid: TIMEZONE_DASHBOARD_UID });
const fromTimeZone = 'UTC';
@@ -106,12 +106,18 @@ test.describe(
zone: 'Browser',
});
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
const relativeTimeRow = dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
.first();
const timezoneRow = dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel in timezone'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
.first();
await expect(relativeTimeRow).toBeVisible();
// Today so far, still in Browser timezone
await setTimeRange(page, dashboardPage, selectors, {
@@ -119,19 +125,8 @@ test.describe(
to: 'now',
});
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel in timezone'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(relativeTimeRow).toBeVisible();
await expect(timezoneRow).toBeVisible();
// Test UTC timezone
await setTimeRange(page, dashboardPage, selectors, {
@@ -140,12 +135,7 @@ test.describe(
zone: 'Coordinated Universal Time',
});
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(relativeTimeRow).toBeVisible();
// Today so far, still in UTC timezone
await setTimeRange(page, dashboardPage, selectors, {
@@ -153,19 +143,8 @@ test.describe(
to: 'now',
});
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel in timezone'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(relativeTimeRow).toBeVisible();
await expect(timezoneRow).toBeVisible();
// Test Tokyo timezone
await setTimeRange(page, dashboardPage, selectors, {
@@ -174,12 +153,7 @@ test.describe(
zone: 'Asia/Tokyo',
});
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(relativeTimeRow).toBeVisible();
// Today so far, still in Tokyo timezone
await setTimeRange(page, dashboardPage, selectors, {
@@ -187,19 +161,8 @@ test.describe(
to: 'now',
});
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel in timezone'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(relativeTimeRow).toBeVisible();
await expect(timezoneRow).toBeVisible();
// Test LA timezone
await setTimeRange(page, dashboardPage, selectors, {
@@ -208,12 +171,7 @@ test.describe(
zone: 'America/Los Angeles',
});
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(relativeTimeRow).toBeVisible();
// Today so far, still in LA timezone
await setTimeRange(page, dashboardPage, selectors, {
@@ -221,19 +179,8 @@ test.describe(
to: 'now',
});
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel with relative time override'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(
dashboardPage
.getByGrafanaSelector(selectors.components.Panels.Panel.title('Panel in timezone'))
.locator('[role="row"]')
.filter({ hasText: '00:00:00' })
).toBeVisible();
await expect(relativeTimeRow).toBeVisible();
await expect(timezoneRow).toBeVisible();
});
}
);


@@ -65,11 +65,11 @@ test.describe('Panels test: Table - Kitchen Sink', { tag: ['@panels', '@table']
await expect(getCellHeight(page, 1, longTextColIdx)).resolves.toBeLessThan(100);
// test that hover overflow works.
const loremIpsumCell = await getCell(page, 1, longTextColIdx);
const loremIpsumCell = getCell(page, 1, longTextColIdx);
await loremIpsumCell.scrollIntoViewIfNeeded();
await loremIpsumCell.hover();
await expect(getCellHeight(page, 1, longTextColIdx)).resolves.toBeGreaterThan(100);
await (await getCell(page, 1, longTextColIdx + 1)).hover();
await getCell(page, 1, longTextColIdx + 1).hover();
await expect(getCellHeight(page, 1, longTextColIdx)).resolves.toBeLessThan(100);
// enable cell inspect, confirm that hover no longer triggers.
@@ -140,15 +140,15 @@ test.describe('Panels test: Table - Kitchen Sink', { tag: ['@panels', '@table']
).toBeVisible();
// click the "State" column header to sort it.
const stateColumnHeader = await getCell(page, 0, 1);
const stateColumnHeader = getCell(page, 0, 1);
await stateColumnHeader.getByText('Info').click();
await expect(stateColumnHeader).toHaveAttribute('aria-sort', 'ascending');
expect(getCell(page, 1, 1)).resolves.toContainText('down'); // down or down fast
await expect(getCell(page, 1, 1)).toContainText('down'); // down or down fast
await stateColumnHeader.getByText('Info').click();
await expect(stateColumnHeader).toHaveAttribute('aria-sort', 'descending');
expect(getCell(page, 1, 1)).resolves.toContainText('up'); // up or up fast
await expect(getCell(page, 1, 1)).toContainText('up'); // up or up fast
await stateColumnHeader.getByText('Info').click();
await expect(stateColumnHeader).not.toHaveAttribute('aria-sort');
@@ -171,7 +171,7 @@ test.describe('Panels test: Table - Kitchen Sink', { tag: ['@panels', '@table']
const stateColumnHeader = page.getByRole('columnheader').nth(infoColumnIdx);
// get the first value in the "State" column, filter it out, then check that it went away.
const firstStateValue = (await (await getCell(page, 1, infoColumnIdx)).textContent())!;
const firstStateValue = (await getCell(page, 1, infoColumnIdx).textContent())!;
await stateColumnHeader.getByTestId(selectors.components.Panels.Visualization.TableNG.Filters.HeaderButton).click();
const filterContainer = dashboardPage.getByGrafanaSelector(
selectors.components.Panels.Visualization.TableNG.Filters.Container
@@ -188,7 +188,7 @@ test.describe('Panels test: Table - Kitchen Sink', { tag: ['@panels', '@table']
await expect(filterContainer).not.toBeVisible();
// did it actually filter out our value?
await expect(getCell(page, 1, infoColumnIdx)).resolves.not.toHaveText(firstStateValue);
await expect(getCell(page, 1, infoColumnIdx)).not.toHaveText(firstStateValue);
});
test('Tests pagination, row height adjustment', async ({ gotoDashboardPage, selectors, page }) => {
@@ -289,7 +289,7 @@ test.describe('Panels test: Table - Kitchen Sink', { tag: ['@panels', '@table']
const dataLinkColIdx = await getColumnIdx(page, 'Data Link');
// Info column has a single DataLink by default.
const infoCell = await getCell(page, 1, infoColumnIdx);
const infoCell = getCell(page, 1, infoColumnIdx);
await expect(infoCell.locator('a')).toBeVisible();
expect(infoCell.locator('a')).toHaveAttribute('href');
expect(infoCell.locator('a')).not.toHaveAttribute('aria-haspopup');
@@ -306,7 +306,7 @@ test.describe('Panels test: Table - Kitchen Sink', { tag: ['@panels', '@table']
continue;
}
const cell = await getCell(page, 1, colIdx);
const cell = getCell(page, 1, colIdx);
await expect(cell.locator('a')).toBeVisible();
expect(cell.locator('a')).toHaveAttribute('href');
expect(cell.locator('a')).not.toHaveAttribute('aria-haspopup', 'menu');
@@ -319,7 +319,7 @@ test.describe('Panels test: Table - Kitchen Sink', { tag: ['@panels', '@table']
// loop thru the columns, click the links, observe that the tooltip appears, and close the tooltip.
for (let colIdx = 0; colIdx < colCount; colIdx++) {
-const cell = await getCell(page, 1, colIdx);
+const cell = getCell(page, 1, colIdx);
if (colIdx === infoColumnIdx) {
// the Info column should still have its single link.
expect(cell.locator('a')).not.toHaveAttribute('aria-haspopup', 'menu');
@@ -433,7 +433,7 @@ test.describe('Panels test: Table - Kitchen Sink', { tag: ['@panels', '@table']
await filterContainer.getByTitle('up', { exact: true }).locator('label').click();
await filterContainer.getByRole('button', { name: 'Ok' }).click();
-const cell = await getCell(page, 1, dataLinkColumnIdx);
+const cell = getCell(page, 1, dataLinkColumnIdx);
await expect(cell).toBeVisible();
await expect(cell).toHaveCSS('text-decoration', /line-through/);


@@ -1,6 +1,6 @@
import { Page, Locator } from '@playwright/test';
-export const getCell = async (loc: Page | Locator, rowIdx: number, colIdx: number) =>
+export const getCell = (loc: Page | Locator, rowIdx: number, colIdx: number) =>
loc
.getByRole('row')
.nth(rowIdx)
@@ -8,7 +8,7 @@ export const getCell = async (loc: Page | Locator, rowIdx: number, colIdx: numbe
.nth(colIdx);
export const getCellHeight = async (loc: Page | Locator, rowIdx: number, colIdx: number) => {
-const cell = await getCell(loc, rowIdx, colIdx);
+const cell = getCell(loc, rowIdx, colIdx);
return (await cell.boundingBox())?.height ?? 0;
};
@@ -18,7 +18,7 @@ export const getColumnIdx = async (loc: Page | Locator, columnName: string) => {
let result = -1;
const colCount = await loc.getByRole('columnheader').count();
for (let colIdx = 0; colIdx < colCount; colIdx++) {
-const cell = await getCell(loc, 0, colIdx);
+const cell = getCell(loc, 0, colIdx);
if ((await cell.textContent()) === columnName) {
result = colIdx;
break;


@@ -38,7 +38,7 @@ test.describe(
formatExpectError('Could not locate header elements in table panel')
).toContainText(['col1', 'col2']);
await expect(
-panelEditPage.panel.locator.getByRole('gridcell'),
+panelEditPage.panel.data,
formatExpectError('Could not locate headers in table panel')
).toContainText(['val1', 'val2', 'val3', 'val4']);
});
@@ -58,7 +58,7 @@ test.describe(
formatExpectError('Could not locate header elements in table panel')
).toContainText(['col1', 'col2']);
await expect(
-panelEditPage.panel.locator.getByRole('gridcell'),
+panelEditPage.panel.data,
formatExpectError('Could not locate data elements in table panel')
).toContainText(['val1', 'val2', 'val3', 'val4']);
});

go.mod

@@ -32,7 +32,7 @@ require (
github.com/apache/arrow-go/v18 v18.4.1 // @grafana/plugins-platform-backend
github.com/armon/go-radix v1.0.0 // @grafana/grafana-app-platform-squad
github.com/aws/aws-sdk-go v1.55.7 // @grafana/aws-datasources
-github.com/aws/aws-sdk-go-v2 v1.38.1 // @grafana/aws-datasources
+github.com/aws/aws-sdk-go-v2 v1.39.1 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.45.3 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.51.0 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/ec2 v1.225.2 // @grafana/aws-datasources
@@ -63,7 +63,7 @@ require (
github.com/go-jose/go-jose/v4 v4.1.2 // @grafana/identity-access-team
github.com/go-kit/log v0.2.1 // @grafana/grafana-backend-group
github.com/go-ldap/ldap/v3 v3.4.4 // @grafana/identity-access-team
-github.com/go-logfmt/logfmt v0.6.0 // @grafana/oss-big-tent
+github.com/go-logfmt/logfmt v0.6.1 // @grafana/oss-big-tent
github.com/go-openapi/loads v0.23.1 // @grafana/alerting-backend
github.com/go-openapi/runtime v0.28.0 // @grafana/alerting-backend
github.com/go-openapi/strfmt v0.24.0 // @grafana/alerting-backend
@@ -98,7 +98,7 @@ require (
github.com/grafana/grafana-api-golang-client v0.27.0 // @grafana/alerting-backend
github.com/grafana/grafana-app-sdk v0.48.1 // @grafana/grafana-app-platform-squad
github.com/grafana/grafana-app-sdk/logging v0.48.1 // @grafana/grafana-app-platform-squad
-github.com/grafana/grafana-aws-sdk v1.2.0 // @grafana/aws-datasources
+github.com/grafana/grafana-aws-sdk v1.3.0 // @grafana/aws-datasources
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1 // @grafana/partner-datasources
github.com/grafana/grafana-cloud-migration-snapshot v1.9.0 // @grafana/grafana-operator-experience-squad
github.com/grafana/grafana-google-sdk-go v0.4.2 // @grafana/partner-datasources
@@ -172,7 +172,7 @@ require (
github.com/stretchr/testify v1.11.1 // @grafana/grafana-backend-group
github.com/testcontainers/testcontainers-go v0.36.0 //@grafana/grafana-app-platform-squad
github.com/thomaspoignant/go-feature-flag v1.42.0 // @grafana/grafana-backend-group
-github.com/tjhop/slog-gokit v0.1.3 // @grafana/grafana-app-platform-squad
+github.com/tjhop/slog-gokit v0.1.5 // @grafana/grafana-app-platform-squad
github.com/ua-parser/uap-go v0.0.0-20250213224047-9c035f085b90 // @grafana/grafana-backend-group
github.com/urfave/cli v1.22.17 // indirect; @grafana/grafana-backend-group
github.com/urfave/cli/v2 v2.27.7 // @grafana/grafana-backend-group
@@ -237,6 +237,7 @@ require (
github.com/grafana/grafana/apps/alerting/alertenrichment v0.0.0 // @grafana/alerting-backend
github.com/grafana/grafana/apps/alerting/notifications v0.0.0 // @grafana/alerting-backend
github.com/grafana/grafana/apps/alerting/rules v0.0.0 // @grafana/alerting-backend
+github.com/grafana/grafana/apps/annotation v0.0.0 // @grafana/grafana-backend-services-squad
github.com/grafana/grafana/apps/correlations v0.0.0 // @grafana/datapro
github.com/grafana/grafana/apps/dashboard v0.0.0 // @grafana/grafana-app-platform-squad @grafana/dashboards-squad
github.com/grafana/grafana/apps/example v0.0.0-20251027162426-edef69fdc82b // @grafana/grafana-app-platform-squad
@@ -268,6 +269,7 @@ replace (
github.com/grafana/grafana/apps/alerting/alertenrichment => ./apps/alerting/alertenrichment
github.com/grafana/grafana/apps/alerting/notifications => ./apps/alerting/notifications
github.com/grafana/grafana/apps/alerting/rules => ./apps/alerting/rules
+github.com/grafana/grafana/apps/annotation => ./apps/annotation
github.com/grafana/grafana/apps/correlations => ./apps/correlations
github.com/grafana/grafana/apps/dashboard => ./apps/dashboard
github.com/grafana/grafana/apps/folder => ./apps/folder
@@ -332,23 +334,23 @@ require (
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/at-wat/mqtt-go v0.19.4 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 // indirect
-github.com/aws/aws-sdk-go-v2/config v1.31.2 // indirect
-github.com/aws/aws-sdk-go-v2/credentials v1.18.6 // indirect
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4 // indirect
+github.com/aws/aws-sdk-go-v2/config v1.31.10 // indirect
+github.com/aws/aws-sdk-go-v2/credentials v1.18.14 // indirect
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 // indirect
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 // indirect
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4 // indirect
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4 // indirect
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 // indirect
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 // indirect
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0 // indirect
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 // indirect
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4 // indirect
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 // indirect
github.com/aws/aws-sdk-go-v2/service/kms v1.41.2 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0 // indirect
-github.com/aws/aws-sdk-go-v2/service/sso v1.28.2 // indirect
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2 // indirect
-github.com/aws/aws-sdk-go-v2/service/sts v1.38.0 // indirect
+github.com/aws/aws-sdk-go-v2/service/sso v1.29.4 // indirect
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 // indirect
+github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 // indirect
github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27 // indirect
github.com/bahlo/generic-list-go v0.2.0 // indirect
github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df // indirect

go.sum

@@ -846,22 +846,22 @@ github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2z
github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE=
github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
-github.com/aws/aws-sdk-go-v2 v1.38.1 h1:j7sc33amE74Rz0M/PoCpsZQ6OunLqys/m5antM0J+Z8=
-github.com/aws/aws-sdk-go-v2 v1.38.1/go.mod h1:9Q0OoGQoboYIAJyslFyF1f5K1Ryddop8gqMhWx/n4Wg=
+github.com/aws/aws-sdk-go-v2 v1.39.1 h1:fWZhGAwVRK/fAN2tmt7ilH4PPAE11rDj7HytrmbZ2FE=
+github.com/aws/aws-sdk-go-v2 v1.39.1/go.mod h1:sDioUELIUO9Znk23YVmIk86/9DOpkbyyVb1i/gUNFXY=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
-github.com/aws/aws-sdk-go-v2/config v1.31.2 h1:NOaSZpVGEH2Np/c1toSeW0jooNl+9ALmsUTZ8YvkJR0=
-github.com/aws/aws-sdk-go-v2/config v1.31.2/go.mod h1:17ft42Yb2lF6OigqSYiDAiUcX4RIkEMY6XxEMJsrAes=
-github.com/aws/aws-sdk-go-v2/credentials v1.18.6 h1:AmmvNEYrru7sYNJnp3pf57lGbiarX4T9qU/6AZ9SucU=
-github.com/aws/aws-sdk-go-v2/credentials v1.18.6/go.mod h1:/jdQkh1iVPa01xndfECInp1v1Wnp70v3K4MvtlLGVEc=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4 h1:lpdMwTzmuDLkgW7086jE94HweHCqG+uOJwHf3LZs7T0=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4/go.mod h1:9xzb8/SV62W6gHQGC/8rrvgNXU6ZoYM3sAIJCIrXJxY=
+github.com/aws/aws-sdk-go-v2/config v1.31.10 h1:7LllDZAegXU3yk41mwM6KcPu0wmjKGQB1bg99bNdQm4=
+github.com/aws/aws-sdk-go-v2/config v1.31.10/go.mod h1:Ge6gzXPjqu4v0oHvgAwvGzYcK921GU0hQM25WF/Kl+8=
+github.com/aws/aws-sdk-go-v2/credentials v1.18.14 h1:TxkI7QI+sFkTItN/6cJuMZEIVMFXeu2dI1ZffkXngKI=
+github.com/aws/aws-sdk-go-v2/credentials v1.18.14/go.mod h1:12x4Uw/vijC11XkctTjy92TNCQ+UnNJkT7fzX0Yd93E=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8 h1:gLD09eaJUdiszm7vd1btiQUYE0Hj+0I2b8AS+75z9AY=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.8/go.mod h1:4RW3oMPt1POR74qVOC4SbubxAwdP4pCT0nSw3jycOU4=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84 h1:cTXRdLkpBanlDwISl+5chq5ui1d1YWg4PWMR9c3kXyw=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.84/go.mod h1:kwSy5X7tfIHN39uucmjQVs2LvDdXEjQucgQQEqCggEo=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4 h1:IdCLsiiIj5YJ3AFevsewURCPV+YWUlOW8JiPhoAy8vg=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4/go.mod h1:l4bdfCD7XyyZA9BolKBo1eLqgaJxl0/x91PL4Yqe0ao=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4 h1:j7vjtr1YIssWQOMeOWRbh3z8g2oY/xPjnZH2gLY4sGw=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4/go.mod h1:yDmJgqOiH4EA8Hndnv4KwAo8jCGTSnM5ASG1nBI+toA=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8 h1:6bgAZgRyT4RoFWhxS+aoGMFyE0cD1bSzFnEEi4bFPGI=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.8/go.mod h1:KcGkXFVU8U28qS4KvLEcPxytPZPBcRawaH2Pf/0jptE=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8 h1:HhJYoES3zOz34yWEpGENqJvRVPqpmJyR3+AFg9ybhdY=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.8/go.mod h1:JnA+hPWeYAVbDssp83tv+ysAG8lTfLVXvSsyKg/7xNA=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
@@ -872,12 +872,12 @@ github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.51.0 h1:e5cbPZYTIY2nUEFie
github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.51.0/go.mod h1:UseIHRfrm7PqeZo6fcTb6FUCXzCnh1KJbQbmOfxArGM=
github.com/aws/aws-sdk-go-v2/service/ec2 v1.225.2 h1:IfMb3Ar8xEaWjgH/zeVHYD8izwJdQgRP5mKCTDt4GNk=
github.com/aws/aws-sdk-go-v2/service/ec2 v1.225.2/go.mod h1:35jGWx7ECvCwTsApqicFYzZ7JFEnBc6oHUuOQ3xIS54=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0 h1:6+lZi2JeGKtCraAj1rpoZfKqnQ9SptseRZioejfUOLM=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0/go.mod h1:eb3gfbVIxIoGgJsi9pGne19dhCBpK6opTYpQqAmdy44=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1 h1:oegbebPEMA/1Jny7kvwejowCaHz1FWZAQ94WXFNCyTM=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.1/go.mod h1:kemo5Myr9ac0U9JfSjMo9yHLtw+pECEHsFtJ9tqCEI8=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 h1:nAP2GYbfh8dd2zGZqFRSMlq+/F6cMPBUuCsGAMkN074=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4/go.mod h1:LT10DsiGjLWh4GbjInf9LQejkYEhBgBCjLG5+lvk4EE=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4 h1:ueB2Te0NacDMnaC+68za9jLwkjzxGWm0KB5HTUHjLTI=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4/go.mod h1:nLEfLnVMmLvyIG58/6gsSA03F1voKGaCfHV7+lR8S7s=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 h1:M6JI2aGFEzYxsF6CXIuRBnkge9Wf9a2xU39rNeXgu10=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8/go.mod h1:Fw+MyTwlwjFsSTE31mH211Np+CUslml8mzc0AFEG09s=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 h1:qcLWgdhq45sDM9na4cvXax9dyLitn8EYBRl8Ak4XtG4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17/go.mod h1:M+jkjBFZ2J6DJrjMv2+vkBbuht6kxJYtJiwoVgX4p4U=
github.com/aws/aws-sdk-go-v2/service/kms v1.41.2 h1:zJeUxFP7+XP52u23vrp4zMcVhShTWbNO8dHV6xCSvFo=
@@ -888,12 +888,12 @@ github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6 h1:Pwbxovp
github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6/go.mod h1:Z4xLt5mXspLKjBV92i165wAJ/3T6TIv4n7RtIS8pWV0=
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0 h1:0reDqfEN+tB+sozj2r92Bep8MEwBZgtAXTND1Kk9OXg=
github.com/aws/aws-sdk-go-v2/service/s3 v1.84.0/go.mod h1:kUklwasNoCn5YpyAqC/97r6dzTA1SRKJfKq16SXeoDU=
-github.com/aws/aws-sdk-go-v2/service/sso v1.28.2 h1:ve9dYBB8CfJGTFqcQ3ZLAAb/KXWgYlgu/2R2TZL2Ko0=
-github.com/aws/aws-sdk-go-v2/service/sso v1.28.2/go.mod h1:n9bTZFZcBa9hGGqVz3i/a6+NG0zmZgtkB9qVVFDqPA8=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2 h1:pd9G9HQaM6UZAZh19pYOkpKSQkyQQ9ftnl/LttQOcGI=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2/go.mod h1:eknndR9rU8UpE/OmFpqU78V1EcXPKFTTm5l/buZYgvM=
-github.com/aws/aws-sdk-go-v2/service/sts v1.38.0 h1:iV1Ko4Em/lkJIsoKyGfc0nQySi+v0Udxr6Igq+y9JZc=
-github.com/aws/aws-sdk-go-v2/service/sts v1.38.0/go.mod h1:bEPcjW7IbolPfK67G1nilqWyoxYMSPrDiIQ3RdIdKgo=
+github.com/aws/aws-sdk-go-v2/service/sso v1.29.4 h1:FTdEN9dtWPB0EOURNtDPmwGp6GGvMqRJCAihkSl/1No=
+github.com/aws/aws-sdk-go-v2/service/sso v1.29.4/go.mod h1:mYubxV9Ff42fZH4kexj43gFPhgc/LyC7KqvUKt1watc=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0 h1:I7ghctfGXrscr7r1Ga/mDqSJKm7Fkpl5Mwq79Z+rZqU=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.0/go.mod h1:Zo9id81XP6jbayIFWNuDpA6lMBWhsVy+3ou2jLa4JnA=
+github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 h1:+LVB0xBqEgjQoqr9bGZbRzvg212B0f17JdflleJRNR4=
+github.com/aws/aws-sdk-go-v2/service/sts v1.38.5/go.mod h1:xoaxeqnnUaZjPjaICgIy5B+MHCSb/ZSOn4MvkFNOUA0=
github.com/aws/smithy-go v1.23.1 h1:sLvcH6dfAFwGkHLZ7dGiYF7aK6mg4CgKA/iDKjLDt9M=
github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/axiomhq/hyperloglog v0.0.0-20191112132149-a4c4c47bc57f/go.mod h1:2stgcRjl6QmW+gU2h5E7BQXg4HU0gzxKWDuT5HviN9s=
@@ -1248,8 +1248,8 @@ github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
-github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
-github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
+github.com/go-logfmt/logfmt v0.6.1 h1:4hvbpePJKnIzH1B+8OR/JPbTx37NktoI9LE2QZBBkvE=
+github.com/go-logfmt/logfmt v0.6.1/go.mod h1:EV2pOAQoZaT1ZXZbqDl5hrymndi4SY9ED9/z6CO0XAk=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
@@ -1637,8 +1637,8 @@ github.com/grafana/grafana-app-sdk v0.48.1 h1:bKJadWH18WCpJ+Zk8AezRFXCcZgGredRv+
github.com/grafana/grafana-app-sdk v0.48.1/go.mod h1:5LljCz+wvmGfkQ8ZKTOfserhtXNEF0cSFthoWShvN6c=
github.com/grafana/grafana-app-sdk/logging v0.48.1 h1:veM0X5LAPyN3KsDLglWjIofndbGuf7MqnrDuDN+F/Ng=
github.com/grafana/grafana-app-sdk/logging v0.48.1/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
-github.com/grafana/grafana-aws-sdk v1.2.0 h1:LLR4/g91WBuCRwm2cbWfCREq565+GxIFe08nqqIcIuw=
-github.com/grafana/grafana-aws-sdk v1.2.0/go.mod h1:bBo7qOmM3f61vO+2JxTolNUph1l2TmtzmWcU9/Im+8A=
+github.com/grafana/grafana-aws-sdk v1.3.0 h1:/bfJzP93rCel1GbWoRSq0oUo424MZXt8jAp2BK9w8tM=
+github.com/grafana/grafana-aws-sdk v1.3.0/go.mod h1:VGycF0JkCGKND2O5je1ucOqPJ0ZNhZYzV3c2bNBAaGk=
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1 h1:FFcEA01tW+SmuJIuDbHOdgUBL+d7DPrZ2N4zwzPhfGk=
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1/go.mod h1:Oi4anANlCuTCc66jCyqIzfVbgLXFll8Wja+Y4vfANlc=
github.com/grafana/grafana-cloud-migration-snapshot v1.9.0 h1:JOzchPgptwJdruYoed7x28lFDwhzs7kssResYsnC0iI=
@@ -2499,8 +2499,8 @@ github.com/thomaspoignant/go-feature-flag v1.42.0 h1:C7embmOTzaLyRki+OoU2RvtVjJE
github.com/thomaspoignant/go-feature-flag v1.42.0/go.mod h1:y0QiWH7chHWhGATb/+XqwAwErORmPSH2MUsQlCmmWlM=
github.com/tidwall/pretty v0.0.0-20180105212114-65a9db5fad51/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
-github.com/tjhop/slog-gokit v0.1.3 h1:6SdexP3UIeg93KLFeiM1Wp1caRwdTLgsD/THxBUy1+o=
-github.com/tjhop/slog-gokit v0.1.3/go.mod h1:Bbu5v2748qpAWH7k6gse/kw3076IJf6owJmh7yArmJs=
+github.com/tjhop/slog-gokit v0.1.5 h1:ayloIUi5EK2QYB8eY4DOPO95/mRtMW42lUkp3quJohc=
+github.com/tjhop/slog-gokit v0.1.5/go.mod h1:yA48zAHvV+Sg4z4VRyeFyFUNNXd3JY5Zg84u3USICq0=
github.com/tklauser/go-sysconf v0.3.14 h1:g5vzr9iPFFz24v2KZXs/pvpvh8/V9Fw6vQK5ZZb78yU=
github.com/tklauser/go-sysconf v0.3.14/go.mod h1:1ym4lWMLUOhuBOPGtRcJm7tEGX4SCYNEEEtghGG/8uY=
github.com/tklauser/numcpus v0.8.0 h1:Mx4Wwe/FjZLeQsK/6kt2EOepwwSl7SmJrK5bV/dXYgY=


@@ -336,6 +336,8 @@ github.com/MicahParks/keyfunc/v2 v2.1.0/go.mod h1:rW42fi+xgLJ2FRRXAfNx9ZA8WpD4Oe
github.com/Microsoft/go-winio v0.4.21/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
github.com/Microsoft/hcsshim v0.11.5/go.mod h1:MV8xMfmECjl5HdO7U/3/hFVnkmSBjAjmA09d4bExKcU=
+github.com/MissingRoberto/slog-gokit v0.0.0-20251105092822-783f72952ce4 h1:gTtFbl79tuZSeJuSO7kXSbmXSvKSa/PoUXda1tuz0O8=
+github.com/MissingRoberto/slog-gokit v0.0.0-20251105092822-783f72952ce4/go.mod h1:yA48zAHvV+Sg4z4VRyeFyFUNNXd3JY5Zg84u3USICq0=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
github.com/OneOfOne/xxhash v1.2.8/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
github.com/PuerkitoBio/goquery v1.10.3 h1:pFYcNSqHxBD06Fpj/KsbStFRsgRATgnf3LeXiUkhzPo=
@@ -408,44 +410,75 @@ github.com/aws/aws-lambda-go v1.47.0/go.mod h1:dpMpZgvWx5vuQJfBt0zqBha60q7Dd7Rfg
github.com/aws/aws-msk-iam-sasl-signer-go v1.0.1 h1:nMp7diZObd4XEVUR0pEvn7/E13JIgManMX79Q6quV6E=
github.com/aws/aws-msk-iam-sasl-signer-go v1.0.1/go.mod h1:MVYeeOhILFFemC/XlYTClvBjYZrg/EPd3ts885KrNTI=
github.com/aws/aws-sdk-go-v2 v1.36.5/go.mod h1:EYrzvCCN9CMUTa5+6lf6MM4tq3Zjp8UhSGR/cBsjai0=
github.com/aws/aws-sdk-go-v2 v1.38.1/go.mod h1:9Q0OoGQoboYIAJyslFyF1f5K1Ryddop8gqMhWx/n4Wg=
github.com/aws/aws-sdk-go-v2/config v1.29.17/go.mod h1:9P4wwACpbeXs9Pm9w1QTh6BwWwJjwYvJ1iCt5QbCXh8=
github.com/aws/aws-sdk-go-v2/config v1.31.2/go.mod h1:17ft42Yb2lF6OigqSYiDAiUcX4RIkEMY6XxEMJsrAes=
github.com/aws/aws-sdk-go-v2/credentials v1.17.70/go.mod h1:M+lWhhmomVGgtuPOhO85u4pEa3SmssPTdcYpP/5J/xc=
github.com/aws/aws-sdk-go-v2/credentials v1.18.6/go.mod h1:/jdQkh1iVPa01xndfECInp1v1Wnp70v3K4MvtlLGVEc=
github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue v1.19.5 h1:oUEqVqonG3xuarrsze1KVJ30KagNYDemikTbdu8KlN8=
github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue v1.19.5/go.mod h1:VNM08cHlOsIbSHRqb6D/M2L4kKXfJv3A2/f0GNbOQSc=
github.com/aws/aws-sdk-go-v2/feature/dynamodb/expression v1.7.87 h1:oDPArGgCrG/4aTi86ij3S2PB59XXkTSKYVNQlmqRHXQ=
github.com/aws/aws-sdk-go-v2/feature/dynamodb/expression v1.7.87/go.mod h1:ZeQC4gVarhdcWeM1c90DyBLaBCNhEeAbKUXwVI/byvw=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32/go.mod h1:h4Sg6FQdexC1yYG9RDnOvLbW1a/P986++/Y/a+GyEM8=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.4/go.mod h1:9xzb8/SV62W6gHQGC/8rrvgNXU6ZoYM3sAIJCIrXJxY=
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.69/go.mod h1:GJj8mmO6YT6EqgduWocwhMoxTLFitkhIrK+owzrYL2I=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36/go.mod h1:Q1lnJArKRXkenyog6+Y+zr7WDpk4e6XlR6gs20bbeNo=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.4/go.mod h1:l4bdfCD7XyyZA9BolKBo1eLqgaJxl0/x91PL4Yqe0ao=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36/go.mod h1:UdyGa7Q91id/sdyHPwth+043HhmP6yP9MBHgbZM0xo8=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.4/go.mod h1:yDmJgqOiH4EA8Hndnv4KwAo8jCGTSnM5ASG1nBI+toA=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34/go.mod h1:zf7Vcd1ViW7cPqYWEHLHJkS50X0JS2IKz9Cgaj6ugrs=
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.44.0 h1:A99gjqZDbdhjtjJVZrmVzVKO2+p3MSg35bDWtbMQVxw=
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.44.0/go.mod h1:mWB0GE1bqcVSvpW7OtFA0sKuHk52+IqtnsYU2jUfYAs=
github.com/aws/aws-sdk-go-v2/service/dynamodbstreams v1.26.0 h1:0wOCTKrmwkyC8Bk76hYH/B4IJn5MGt6gMkSXc0A2uyc=
github.com/aws/aws-sdk-go-v2/service/dynamodbstreams v1.26.0/go.mod h1:He/RikglWUczbkV+fkdpcV/3GdL/rTRNVy7VaUiezMo=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4/go.mod h1:/xFi9KtvBXP97ppCz1TAEvU1Uf66qvid89rbem3wCzQ=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.0/go.mod h1:eb3gfbVIxIoGgJsi9pGne19dhCBpK6opTYpQqAmdy44=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.0/go.mod h1:iu6FSzgt+M2/x3Dk8zhycdIcHjEFb36IS8HVUVFoMg0=
github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.10.17 h1:x187MqiHwBGjMGAed8Y8K1VGuCtFvQvXb24r+bwmSdo=
github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.10.17/go.mod h1:mC9qMbA6e1pwEq6X3zDGtZRXMG2YaElJkbJlMVHLs5I=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17/go.mod h1:ygpklyoaypuyDvOM5ujWGrYWpAK3h7ugnmKCU/76Ys4=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.4/go.mod h1:nLEfLnVMmLvyIG58/6gsSA03F1voKGaCfHV7+lR8S7s=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15/go.mod h1:ZH34PJUc8ApjBIfgQCFvkWcUDBtl/WTD+uiYHjd8igA=
github.com/aws/aws-sdk-go-v2/service/kinesis v1.33.0 h1:JPXkrQk5OS/+Q81fKH97Ll/Vmmy0p9vwHhxw+V+tVjg=
github.com/aws/aws-sdk-go-v2/service/kinesis v1.33.0/go.mod h1:dJngkoVMrq0K7QvRkdRZYM4NUp6cdWa2GBdpm8zoY8U=
github.com/aws/aws-sdk-go-v2/service/kms v1.35.3 h1:UPTdlTOwWUX49fVi7cymEN6hDqCwe3LNv1vi7TXUutk=
github.com/aws/aws-sdk-go-v2/service/kms v1.35.3/go.mod h1:gjDP16zn+WWalyaUqwCCioQ8gU8lzttCCc9jYsiQI/8=
github.com/aws/aws-sdk-go-v2/service/kms v1.38.1/go.mod h1:cQn6tAF77Di6m4huxovNM7NVAozWTZLsDRp9t8Z/WYk=
github.com/aws/aws-sdk-go-v2/service/s3 v1.78.2/go.mod h1:U5SNqwhXB3Xe6F47kXvWihPl/ilGaEDe8HD/50Z9wxc=
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.32.4 h1:NgRFYyFpiMD62y4VPXh4DosPFbZd4vdMVBWKk0VmWXc=
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.32.4/go.mod h1:TKKN7IQoM7uTnyuFm9bm9cw5P//ZYTl4m3htBWQ1G/c=
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.2 h1:vlYXbindmagyVA3RS2SPd47eKZ00GZZQcr+etTviHtc=
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.2/go.mod h1:yGhDiLKguA3iFJYxbrQkQiNzuy+ddxesSZYWVeeEH5Q=
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.7 h1:d+mnMa4JbJlooSbYQfrJpit/YINaB30JEVgrhtjZneA=
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.35.7/go.mod h1:1X1NotbcGHH7PCQJ98PsExSxsJj/VWzz8MfFz43+02M=
github.com/aws/aws-sdk-go-v2/service/sns v1.31.3 h1:eSTEdxkfle2G98FE+Xl3db/XAXXVTJPNQo9K/Ar8oAI=
github.com/aws/aws-sdk-go-v2/service/sns v1.31.3/go.mod h1:1dn0delSO3J69THuty5iwP0US2Glt0mx2qBBlI13pvw=
github.com/aws/aws-sdk-go-v2/service/sns v1.34.2 h1:PajtbJ/5bEo6iUAIGMYnK8ljqg2F1h4mMCGh1acjN30=
github.com/aws/aws-sdk-go-v2/service/sns v1.34.2/go.mod h1:PJtxxMdj747j8DeZENRTTYAz/lx/pADn/U0k7YNNiUY=
github.com/aws/aws-sdk-go-v2/service/sns v1.34.7 h1:OBuZE9Wt8h2imuRktu+WfjiTGrnYdCIJg8IX92aalHE=
github.com/aws/aws-sdk-go-v2/service/sns v1.34.7/go.mod h1:4WYoZAhHt+dWYpoOQUgkUKfuQbE6Gg/hW4oXE0pKS9U=
github.com/aws/aws-sdk-go-v2/service/sqs v1.34.3 h1:Vjqy5BZCOIsn4Pj8xzyqgGmsSqzz7y/WXbN3RgOoVrc=
github.com/aws/aws-sdk-go-v2/service/sqs v1.34.3/go.mod h1:L0enV3GCRd5iG9B64W35C4/hwsCB00Ib+DKVGTadKHI=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.3 h1:j5BchjfDoS7K26vPdyJlyxBIIBGDflq3qjjJKBDlbcI=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.3/go.mod h1:Bar4MrRxeqdn6XIh8JGfiXuFRmyrrsZNTJotxEJmWW0=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.8 h1:80dpSqWMwx2dAm30Ib7J6ucz1ZHfiv5OCRwN/EnCOXQ=
github.com/aws/aws-sdk-go-v2/service/sqs v1.38.8/go.mod h1:IzNt/udsXlETCdvBOL0nmyMe2t9cGmXmZgsdoZGYYhI=
github.com/aws/aws-sdk-go-v2/service/ssm v1.52.4 h1:hgSBvRT7JEWx2+vEGI9/Ld5rZtl7M5lu8PqdvOmbRHw=
github.com/aws/aws-sdk-go-v2/service/ssm v1.52.4/go.mod h1:v7NIzEFIHBiicOMaMTuEmbnzGnqW0d+6ulNALul6fYE=
github.com/aws/aws-sdk-go-v2/service/ssm v1.58.0 h1:zQz6Q5uaC8s9734DV9UDAm2q1TEEfOvEejDBSulOapI=
github.com/aws/aws-sdk-go-v2/service/ssm v1.58.0/go.mod h1:PUWUl5MDiYNQkUHN9Pyd9kgtA/YhbxnSnHP+yQqzrM8=
github.com/aws/aws-sdk-go-v2/service/ssm v1.60.1 h1:OwMzNDe5VVTXD4kGmeK/FtqAITiV8Mw4TCa8IyNO0as=
github.com/aws/aws-sdk-go-v2/service/ssm v1.60.1/go.mod h1:IyVabkWrs8SNdOEZLyFFcW9bUltV4G6OQS0s6H20PHg=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5/go.mod h1:b7SiVprpU+iGazDUqvRSLf5XmCdn+JtT1on7uNL6Ipc=
github.com/aws/aws-sdk-go-v2/service/sso v1.28.2/go.mod h1:n9bTZFZcBa9hGGqVz3i/a6+NG0zmZgtkB9qVVFDqPA8=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3/go.mod h1:vq/GQR1gOFLquZMSrxUK/cpvKCNVYibNyJ1m7JrU88E=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.33.2/go.mod h1:eknndR9rU8UpE/OmFpqU78V1EcXPKFTTm5l/buZYgvM=
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0/go.mod h1:7ph2tGpfQvwzgistp2+zga9f+bCjlQJPkPUmMgDSD7w=
github.com/aws/aws-sdk-go-v2/service/sts v1.38.0/go.mod h1:bEPcjW7IbolPfK67G1nilqWyoxYMSPrDiIQ3RdIdKgo=
github.com/aws/smithy-go v1.22.4/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/aws/smithy-go v1.22.5 h1:P9ATCXPMb2mPjYBgueqJNCA5S9UfktsW0tTxi+a7eqw=
github.com/aws/smithy-go v1.22.5/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/aws/smithy-go v1.23.0/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/awslabs/aws-lambda-go-api-proxy v0.16.2 h1:CJyGEyO1CIwOnXTU40urf0mchf6t3voxpvUDikOU9LY=
github.com/awslabs/aws-lambda-go-api-proxy v0.16.2/go.mod h1:vxxjwBHe/KbgFeNlAP/Tvp4SsVRL3WQamcWRxqVh0z0=
github.com/aymerick/douceur v0.2.0 h1:Mv+mAeH1Q+n9Fr+oyamOlAkUNPWPlA8PPGR0QAaYuPk=
@@ -550,7 +583,6 @@ github.com/couchbase/ghistogram v0.1.0 h1:b95QcQTCzjTUocDXp/uMgSNQi8oj1tGwnJ4bOD
github.com/couchbase/ghistogram v0.1.0/go.mod h1:s1Jhy76zqfEecpNWJfWUiKZookAFaiGOEoyzgHt9i7k=
github.com/couchbase/moss v0.2.0 h1:VCYrMzFwEryyhRSeI+/b3tRBSeTpi/8gn5Kf6dxqn+o=
github.com/couchbase/moss v0.2.0/go.mod h1:9MaHIaRuy9pvLPUJxB8sh8OrLfyDczECVL37grCIubs=
-github.com/cpuguy83/go-md2man v1.0.10 h1:BSKMNlYxDvnunlTymqtgONjNnaRV1sTpcovwwjF22jk=
github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
@@ -706,6 +738,7 @@ github.com/go-json-experiment/json v0.0.0-20250211171154-1ae217ad3535/go.mod h1:
github.com/go-kit/kit v0.12.0 h1:e4o3o3IsBfAKQh5Qbbiqyfu97Ku7jrO/JbohvztANh4=
github.com/go-kit/kit v0.12.0/go.mod h1:lHd+EkCZPIwYItmGDDRdhinkzX2A1sj+M9biaEaizzs=
github.com/go-latex/latex v0.0.0-20210823091927-c0d11ff05a81 h1:6zl3BbBhdnMkpSj2YY30qV3gDcVBGtFgVsV3+/i+mKQ=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-openapi/analysis v0.23.0/go.mod h1:9mz9ZWaSlV8TvjQHLl2mUW2PbZtemkE8yA5v22ohupo=
github.com/go-openapi/errors v0.22.0/go.mod h1:J3DmZScxCDufmIMsdOuDHxJbdOGC0xtUynjIx092vXE=
@@ -805,10 +838,9 @@ github.com/grafana/dskit v0.0.0-20250818234656-8ff9c6532e85/go.mod h1:kImsvJ1xnm
github.com/grafana/go-gelf/v2 v2.0.1 h1:BOChP0h/jLeD+7F9mL7tq10xVkDG15he3T1zHuQaWak=
github.com/grafana/go-gelf/v2 v2.0.1/go.mod h1:lexHie0xzYGwCgiRGcvZ723bSNyNI8ZRD4s0CLobh90=
github.com/grafana/go-mysql-server v0.20.1-0.20251027172658-317a8d46ffa4/go.mod h1:EeYR0apo+8j2Dyxmn2ghkPlirO2S5mT1xHBrA+Efys8=
github.com/grafana/grafana-app-sdk v0.40.2/go.mod h1:BbNXPNki3mtbkWxYqJsyA1Cj9AShSyaY33z8WkyfVv0=
github.com/grafana/grafana-app-sdk/logging v0.40.2/go.mod h1:otUD9XpJD7A5sCLb8mcs9hIXGdeV6lnhzVwe747g4RU=
github.com/grafana/gomemcache v0.0.0-20250228145437-da7b95fd2ac1/go.mod h1:j/s0jkda4UXTemDs7Pgw/vMT06alWc42CHisvYac0qw=
github.com/grafana/grafana-app-sdk v0.40.1/go.mod h1:4P8h7VB6KcDjX9bAoBQc6IP8iNylxe6bSXLR9gA39gM=
github.com/grafana/grafana-app-sdk v0.40.2/go.mod h1:BbNXPNki3mtbkWxYqJsyA1Cj9AShSyaY33z8WkyfVv0=
github.com/grafana/grafana-app-sdk v0.41.0 h1:SYHN3U7B1myRKY3UZZDkFsue9TDmAOap0UrQVTqtYBU=
github.com/grafana/grafana-app-sdk v0.41.0/go.mod h1:Wg/3vEZfok1hhIWiHaaJm+FwkosfO98o8KbeLFEnZpY=
github.com/grafana/grafana-app-sdk v0.46.0/go.mod h1:LCTrqR1SwBS13XGVYveBmM7giJDDjzuXK+M9VzPuPWc=
@@ -818,6 +850,7 @@ github.com/grafana/grafana-app-sdk/logging v0.39.0 h1:3GgN5+dUZYqq74Q+GT9/ET+yo+
github.com/grafana/grafana-app-sdk/logging v0.39.0/go.mod h1:WhDENSnaGHtyVVwZGVnAR7YLvh2xlLDYR3D7E6h7XVk=
github.com/grafana/grafana-app-sdk/logging v0.39.1/go.mod h1:WhDENSnaGHtyVVwZGVnAR7YLvh2xlLDYR3D7E6h7XVk=
github.com/grafana/grafana-app-sdk/logging v0.40.0/go.mod h1:otUD9XpJD7A5sCLb8mcs9hIXGdeV6lnhzVwe747g4RU=
github.com/grafana/grafana-app-sdk/logging v0.40.2/go.mod h1:otUD9XpJD7A5sCLb8mcs9hIXGdeV6lnhzVwe747g4RU=
github.com/grafana/grafana-app-sdk/logging v0.43.0/go.mod h1:0xrjKSGY5z+NLGuGsXQpxiCHR4Smu79i/CbAfdkaB1M=
github.com/grafana/grafana-app-sdk/logging v0.43.1/go.mod h1:0xrjKSGY5z+NLGuGsXQpxiCHR4Smu79i/CbAfdkaB1M=
github.com/grafana/grafana-app-sdk/logging v0.43.2/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
@@ -825,10 +858,56 @@ github.com/grafana/grafana-app-sdk/logging v0.45.0/go.mod h1:Gh/nBWnspK3oDNWtiM5
github.com/grafana/grafana-app-sdk/logging v0.46.0/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/grafana-app-sdk/logging v0.48.0 h1:xolkQxBlA2LQF4hprKIAeu+zUem1DigYZ6XC1TOhFJE=
github.com/grafana/grafana-app-sdk/logging v0.48.0/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/grafana-app-sdk/plugin v0.41.0 h1:ShUvGpAVzM3UxcsfwS6l/lwW4ytDeTbCQXf8w2P8Yp8=
github.com/grafana/grafana-app-sdk/plugin v0.41.0/go.mod h1:YIhimVfAqtOp3kdhxOanaSZjypVKh/bYxf9wfFfhDm0=
github.com/grafana/grafana-aws-sdk v0.38.2 h1:TzQD0OpWsNjtldi5G5TLDlBRk8OyDf+B5ujcoAu4Dp0=
github.com/grafana/grafana-aws-sdk v0.38.2/go.mod h1:j3vi+cXYHEFqjhBGrI6/lw1TNM+dl0Y3f0cSnDOPy+s=
github.com/grafana/grafana-aws-sdk v1.0.2 h1:98eBuHYFmgvH0xO9kKf4RBsEsgQRp8EOA/9yhDIpkss=
github.com/grafana/grafana-aws-sdk v1.0.2/go.mod h1:hO7q7yWV+t6dmiyJjMa3IbuYnYkBua+G/IAlOPVIYKE=
github.com/grafana/grafana-aws-sdk v1.1.0/go.mod h1:7e+47EdHynteYWGoT5Ere9KeOXQObsk8F0vkOLQ1tz8=
github.com/grafana/grafana-aws-sdk v1.2.0/go.mod h1:bBo7qOmM3f61vO+2JxTolNUph1l2TmtzmWcU9/Im+8A=
github.com/grafana/grafana-azure-sdk-go/v2 v2.1.6/go.mod h1:V7y2BmsWxS3A9Ohebwn4OiSfJJqi//4JQydQ8fHTduo=
github.com/grafana/grafana-azure-sdk-go/v2 v2.2.0/go.mod h1:H9sVh9A4yg5egMGZeh0mifxT1Q/uqwKe1LBjBJU6pN8=
github.com/grafana/grafana-plugin-sdk-go v0.263.0/go.mod h1:U43Cnrj/9DNYyvFcNdeUWNjMXTKNB0jcTcQGpWKd2gw=
github.com/grafana/grafana-plugin-sdk-go v0.267.0/go.mod h1:OuwS4c/JYgn0rr/w5zhJBpLo4gKm/vw15RsfpYAvK9Q=
github.com/grafana/grafana-plugin-sdk-go v0.269.1/go.mod h1:yv2KbO4mlr9WuDK2f+2gHAMTwwLmLuqaEnrPXTRU+OI=
github.com/grafana/grafana-plugin-sdk-go v0.275.0/go.mod h1:mO9LJqdXDh5JpO/xIdPAeg5LdThgQ06Y/SLpXDWKw2c=
github.com/grafana/grafana-plugin-sdk-go v0.277.0/go.mod h1:mAUWg68w5+1f5TLDqagIr8sWr1RT9h7ufJl5NMcWJAU=
github.com/grafana/grafana-plugin-sdk-go v0.278.0/go.mod h1:+8NXT/XUJ/89GV6FxGQ366NZ3nU+cAXDMd0OUESF9H4=
github.com/grafana/grafana-plugin-sdk-go v0.279.0/go.mod h1:/7oGN6Z7DGTGaLHhgIYrRr6Wvmdsb3BLw5hL4Kbjy88=
github.com/grafana/grafana-plugin-sdk-go v0.280.0/go.mod h1:Z15Wiq3c4I0tzHYrLYpOqrO8u3+2RJ+HN2Q9uiZTILA=
github.com/grafana/grafana/apps/advisor v0.0.0-20250123151950-b066a6313173/go.mod h1:goSDiy3jtC2cp8wjpPZdUHRENcoSUHae1/Px/MDfddA=
github.com/grafana/grafana/apps/advisor v0.0.0-20250220154326-6e5de80ef295/go.mod h1:9I1dKV3Dqr0NPR9Af0WJGxOytp5/6W3JLiNChOz8r+c=
github.com/grafana/grafana/apps/alerting/notifications v0.0.0-20250121113133-e747350fee2d/go.mod h1:AvleS6icyPmcBjihtx5jYEvdzLmHGBp66NuE0AMR57A=
github.com/grafana/grafana/apps/alerting/notifications v0.0.0-20250416173722-ec17e0e4ce03/go.mod h1:oemrhKvFxxc5m32xKHPxInEHAObH0/hPPyHUiBUZ1Cc=
github.com/grafana/grafana/apps/alerting/notifications v0.0.0-20250506052906-7a2fc797fb4a/go.mod h1:VkX53kBiqIMHBoGgeEDJnzm5Nwcmv/726tuZuT5SvJY=
github.com/grafana/grafana/apps/alerting/rules v0.0.0-20250731223157-26b18dda3364/go.mod h1:wi4njPm5mJ8IpK13h57be8sWoxOhqr1UQOwmXhRM9Gk=
github.com/grafana/grafana/apps/dashboard v0.0.0-20250616135341-59c2f154336b/go.mod h1:OIlvNnUufYDhBXa4xK4CyzPI2C69ZJkHy5+aFDyPtXw=
github.com/grafana/grafana/apps/dashboard v0.0.0-20250616145019-8d27f12428cb/go.mod h1:OIlvNnUufYDhBXa4xK4CyzPI2C69ZJkHy5+aFDyPtXw=
github.com/grafana/grafana/apps/dashboard v0.0.0-20250627191313-2f1a6ae1712b/go.mod h1:eR8wca74ADgxBrvX0uNpdB1qnPaGx/KhCm4Xj8oqHfQ=
github.com/grafana/grafana/apps/investigation v0.0.0-20250121113133-e747350fee2d/go.mod h1:HQprw3MmiYj5OUV9CZnkwA1FKDZBmYACuAB3oDvUOmI=
github.com/grafana/grafana/apps/playlist v0.0.0-20250121113133-e747350fee2d/go.mod h1:DjJe5osrW/BKrzN9hAAOSElNWutj1bcriExa7iDP7kA=
github.com/grafana/grafana/apps/preferences v0.0.0-20250805113453-4b17c24d67ff h1:JDT0Mcfpi3c525xzeli+v5dR9pf5HhdFjr8djRdhs10=
github.com/grafana/grafana/apps/preferences v0.0.0-20250805113453-4b17c24d67ff/go.mod h1:NQlHMO5fHhjexw71wVjv522532NRvFg5F4tcjUEktjs=
github.com/grafana/grafana/apps/preferences v0.0.0-20250805120145-0c5a00302924 h1:uGXX6gCF1q2ytIL0w1X3UAKgF/UZ7eDDAgOaSqLOeW8=
github.com/grafana/grafana/apps/preferences v0.0.0-20250805120145-0c5a00302924/go.mod h1:NQlHMO5fHhjexw71wVjv522532NRvFg5F4tcjUEktjs=
github.com/grafana/grafana/apps/preferences v0.0.0-20250805123034-066163d71001 h1:y2AHkdji2I+zXv8rsSC8OjWEzJJjqW5OlmCsZR5+RuU=
github.com/grafana/grafana/apps/preferences v0.0.0-20250805123034-066163d71001/go.mod h1:NQlHMO5fHhjexw71wVjv522532NRvFg5F4tcjUEktjs=
github.com/grafana/grafana/pkg/aggregator v0.0.0-20250121113133-e747350fee2d/go.mod h1:1sq0guad+G4SUTlBgx7SXfhnzy7D86K/LcVOtiQCiMA=
github.com/grafana/grafana/pkg/semconv v0.0.0-20250121113133-e747350fee2d/go.mod h1:tfLnBpPYgwrBMRz4EXqPCZJyCjEG4Ev37FSlXnocJ2c=
github.com/grafana/grafana/pkg/storage/unified/apistore v0.0.0-20250121113133-e747350fee2d/go.mod h1:CXpwZ3Mkw6xVlGKc0SqUxqXCP3Uv182q6qAQnLaLxRg=
github.com/grafana/grafana/pkg/storage/unified/apistore v0.0.0-20250514132646-acbc7b54ed9e/go.mod h1:xrKQcxQxz+IUF90ybtfENFeEXtlj9nAsX/3Fw0KEIeQ=
github.com/grafana/nanogit v0.0.0-20250616082354-5e94194d02ed h1:59JF1WhHLT+lNX89Tm1OzOEySMVMASAhaPbsRjtp8Kc=
github.com/grafana/nanogit v0.0.0-20250616082354-5e94194d02ed/go.mod h1:OIAAKNgG5fpuJQRNO1lUSj9nc18Xl3O7M8fjIlBO1cI=
github.com/grafana/nanogit v0.0.0-20250619160700-ebf70d342aa5 h1:MAQ2B0cu0V1S91ZjVa7NomNZFjaR2SmdtvdwhqBtyhU=
github.com/grafana/nanogit v0.0.0-20250619160700-ebf70d342aa5/go.mod h1:tN93IZUaAmnSWgL0IgnKdLv6DNeIhTJGvl1wvQMrWco=
github.com/grafana/nanogit v0.0.0-20250723104447-68f58f5ecec0/go.mod h1:ToqLjIdvV3AZQa3K6e5m9hy/nsGaUByc2dWQlctB9iA=
github.com/grafana/prometheus-alertmanager v0.25.1-0.20240930132144-b5e64e81e8d3 h1:6D2gGAwyQBElSrp3E+9lSr7k8gLuP3Aiy20rweLWeBw=
github.com/grafana/prometheus-alertmanager v0.25.1-0.20240930132144-b5e64e81e8d3/go.mod h1:YeND+6FDA7OuFgDzYODN8kfPhXLCehcpxe4T9mdnpCY=
github.com/grafana/prometheus-alertmanager v0.25.1-0.20250331083058-4563aec7a975 h1:4/BZkGObFWZf4cLbE2Vqg/1VTz67Q0AJ7LHspWLKJoQ=
github.com/grafana/prometheus-alertmanager v0.25.1-0.20250331083058-4563aec7a975/go.mod h1:FGdGvhI40Dq+CTQaSzK9evuve774cgOUdGfVO04OXkw=
github.com/grafana/prometheus-alertmanager v0.25.1-0.20250604130045-92c8f6389b36 h1:AjZ58JRw1ZieFH/SdsddF5BXtsDKt5kSrKNPWrzYz3Y=
github.com/grafana/prometheus-alertmanager v0.25.1-0.20250604130045-92c8f6389b36/go.mod h1:O/QP1BCm0HHIzbKvgMzqb5sSyH88rzkFk84F4TfJjBU=
github.com/grafana/pyroscope-go/godeltaprof v0.1.8/go.mod h1:2+l7K7twW49Ct4wFluZD3tZ6e0SjanjcUUBPVD/UuGU=
github.com/grafana/sqlds/v4 v4.2.4/go.mod h1:BQRjUG8rOqrBI4NAaeoWrIMuoNgfi8bdhCJ+5cgEfLU=
github.com/grafana/tail v0.0.0-20230510142333-77b18831edf0 h1:bjh0PVYSVVFxzINqPFYJmAmJNrWPgnVjuSdYJGHmtFU=
@@ -1346,6 +1425,7 @@ github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
github.com/tinylib/msgp v1.1.8 h1:FCXC1xanKO4I8plpHGH2P7koL/RzZs12l/+r7vakfm0=
github.com/tinylib/msgp v1.1.8/go.mod h1:qkpG+2ldGg4xRFmx+jfTvZPxfGFhi64BcnL9vkCm/Tw=
github.com/tjhop/slog-gokit v0.1.3/go.mod h1:Bbu5v2748qpAWH7k6gse/kw3076IJf6owJmh7yArmJs=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/trivago/tgo v1.0.7 h1:uaWH/XIy9aWYWpjm2CU3RpcqZXmX2ysQ9/Go+d9gyrM=

View File

@@ -92,7 +92,7 @@
"@emotion/eslint-plugin": "11.12.0",
"@grafana/eslint-config": "8.2.0",
"@grafana/eslint-plugin": "link:./packages/grafana-eslint-rules",
"@grafana/plugin-e2e": "2.1.7",
"@grafana/plugin-e2e": "^3.0.1",
"@grafana/test-utils": "workspace:*",
"@manypkg/get-packages": "^3.0.0",
"@npmcli/package-json": "^6.0.0",

View File

@@ -435,6 +435,7 @@ export {
isStandardFieldProp,
type OptionDefaults,
} from './panel/getPanelOptionsWithDefaults';
export { type PanelDataSummary, getPanelDataSummary } from './panel/suggestions/getPanelDataSummary';
export { createFieldConfigRegistry } from './panel/registryFactories';
export { type QueryRunner, type QueryRunnerOptions } from './types/queryRunner';
export { type GroupingToMatrixTransformerOptions } from './transformations/transformers/groupingToMatrix';
@@ -651,7 +652,6 @@ export {
type AngularPanelMenuItem,
type PanelPluginDataSupport,
type VisualizationSuggestion,
type PanelDataSummary,
type VisualizationSuggestionsSupplier,
VizOrientation,
VisualizationSuggestionScore,

View File

@@ -0,0 +1,94 @@
import { createDataFrame } from '../../dataframe/processDataFrame';
import { FieldType } from '../../types/dataFrame';
import { getPanelDataSummary } from './getPanelDataSummary';
describe('getPanelDataSummary', () => {
describe('when called with no dataframes', () => {
it('should return summary with zero counts', () => {
const summary = getPanelDataSummary();
expect(summary.rowCountTotal).toBe(0);
expect(summary.rowCountMax).toBe(0);
expect(summary.fieldCount).toBe(0);
expect(summary.frameCount).toBe(0);
expect(summary.hasData).toBe(false);
expect(summary.fieldCountByType(FieldType.time)).toBe(0);
expect(summary.fieldCountByType(FieldType.number)).toBe(0);
expect(summary.fieldCountByType(FieldType.string)).toBe(0);
expect(summary.fieldCountByType(FieldType.boolean)).toBe(0);
expect(summary.hasFieldType(FieldType.time)).toBe(false);
expect(summary.hasFieldType(FieldType.number)).toBe(false);
expect(summary.hasFieldType(FieldType.string)).toBe(false);
expect(summary.hasFieldType(FieldType.boolean)).toBe(false);
});
});
describe('when called with a single dataframe', () => {
it('should return correct summary', () => {
const frames = [
createDataFrame({
fields: [
{ name: 'time', type: FieldType.time, values: [1, 2, 3] },
{ name: 'value', type: FieldType.number, values: [10, 20, 30] },
],
}),
];
const summary = getPanelDataSummary(frames);
expect(summary.rowCountTotal).toBe(3);
expect(summary.rowCountMax).toBe(3);
expect(summary.fieldCount).toBe(2);
expect(summary.frameCount).toBe(1);
expect(summary.hasData).toBe(true);
expect(summary.fieldCountByType(FieldType.time)).toBe(1);
expect(summary.fieldCountByType(FieldType.number)).toBe(1);
expect(summary.fieldCountByType(FieldType.string)).toBe(0);
expect(summary.fieldCountByType(FieldType.boolean)).toBe(0);
expect(summary.hasFieldType(FieldType.time)).toBe(true);
expect(summary.hasFieldType(FieldType.number)).toBe(true);
expect(summary.hasFieldType(FieldType.string)).toBe(false);
expect(summary.hasFieldType(FieldType.boolean)).toBe(false);
});
});
describe('when called with multiple dataframes', () => {
it('should return correct summary', () => {
const frames = [
createDataFrame({
fields: [
{ name: 'time', type: FieldType.time, values: [1, 2, 3] },
{ name: 'value', type: FieldType.number, values: [10, 20, 30] },
],
}),
createDataFrame({
fields: [
{ name: 'category', type: FieldType.string, values: ['A', 'B'] },
{ name: 'amount', type: FieldType.number, values: [100, 200] },
],
}),
];
const summary = getPanelDataSummary(frames);
expect(summary.rowCountTotal).toBe(5);
expect(summary.rowCountMax).toBe(3);
expect(summary.fieldCount).toBe(4);
expect(summary.frameCount).toBe(2);
expect(summary.hasData).toBe(true);
expect(summary.fieldCountByType(FieldType.time)).toBe(1);
expect(summary.fieldCountByType(FieldType.number)).toBe(2);
expect(summary.fieldCountByType(FieldType.string)).toBe(1);
expect(summary.fieldCountByType(FieldType.boolean)).toBe(0);
expect(summary.hasFieldType(FieldType.time)).toBe(true);
expect(summary.hasFieldType(FieldType.number)).toBe(true);
expect(summary.hasFieldType(FieldType.string)).toBe(true);
expect(summary.hasFieldType(FieldType.boolean)).toBe(false);
});
});
});

View File

@@ -0,0 +1,82 @@
import { PreferredVisualisationType } from '../../types/data';
import { DataFrame, FieldType } from '../../types/dataFrame';
/**
* @alpha
*/
export interface PanelDataSummary {
hasData?: boolean;
rowCountTotal: number;
rowCountMax: number;
frameCount: number;
fieldCount: number;
fieldCountByType: (type: FieldType) => number;
hasFieldType: (type: FieldType) => boolean;
/** The first frame that sets this value */
preferredVisualisationType?: PreferredVisualisationType;
/* --- DEPRECATED FIELDS BELOW --- */
/** @deprecated use PanelDataSummary.fieldCountByType(FieldType.number) */
numberFieldCount: number;
/** @deprecated use PanelDataSummary.fieldCountByType(FieldType.time) */
timeFieldCount: number;
/** @deprecated use PanelDataSummary.fieldCountByType(FieldType.string) */
stringFieldCount: number;
/** @deprecated use PanelDataSummary.hasFieldType(FieldType.number) */
hasNumberField?: boolean;
/** @deprecated use PanelDataSummary.hasFieldType(FieldType.time) */
hasTimeField?: boolean;
/** @deprecated use PanelDataSummary.hasFieldType(FieldType.string) */
hasStringField?: boolean;
}
/**
* @alpha
* Given a list of dataframes, summarizes attributes of those frames for features like suggestions.
* @param frames - dataframes to summarize
* @returns summary of the dataframes
*/
export function getPanelDataSummary(frames: DataFrame[] = []): PanelDataSummary {
let rowCountTotal = 0;
let rowCountMax = 0;
let fieldCount = 0;
const countByType: Partial<Record<FieldType, number>> = {};
let preferredVisualisationType: PreferredVisualisationType | undefined;
for (const frame of frames) {
rowCountTotal += frame.length;
if (frame.meta?.preferredVisualisationType) {
preferredVisualisationType = frame.meta.preferredVisualisationType;
}
for (const field of frame.fields) {
fieldCount++;
countByType[field.type] = (countByType[field.type] || 0) + 1;
}
if (frame.length > rowCountMax) {
rowCountMax = frame.length;
}
}
const fieldCountByType = (f: FieldType) => countByType[f] ?? 0;
return {
rowCountTotal,
rowCountMax,
fieldCount,
preferredVisualisationType,
frameCount: frames.length,
hasData: rowCountTotal > 0,
hasFieldType: (f: FieldType) => fieldCountByType(f) > 0,
fieldCountByType,
// deprecated
numberFieldCount: fieldCountByType(FieldType.number),
timeFieldCount: fieldCountByType(FieldType.time),
stringFieldCount: fieldCountByType(FieldType.string),
hasTimeField: fieldCountByType(FieldType.time) > 0,
hasNumberField: fieldCountByType(FieldType.number) > 0,
hasStringField: fieldCountByType(FieldType.string) > 0,
};
}
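For readers reviewing the refactor above, the counting logic can be sketched standalone. This is a minimal illustration with simplified stand-in types (`Frame` and `summarize` are illustrative names, not the actual `@grafana/data` API):

```typescript
// Simplified stand-ins for the real DataFrame/FieldType types.
type FieldType = 'time' | 'number' | 'string' | 'boolean';

interface Frame {
  length: number; // row count of the frame
  fields: Array<{ name: string; type: FieldType }>;
}

// Mirrors the shape of getPanelDataSummary: totals, a per-type counter map,
// and closures that read from that map.
function summarize(frames: Frame[] = []) {
  let rowCountTotal = 0;
  let rowCountMax = 0;
  let fieldCount = 0;
  const countByType: Partial<Record<FieldType, number>> = {};

  for (const frame of frames) {
    rowCountTotal += frame.length;
    rowCountMax = Math.max(rowCountMax, frame.length);
    for (const field of frame.fields) {
      fieldCount++;
      countByType[field.type] = (countByType[field.type] ?? 0) + 1;
    }
  }

  const fieldCountByType = (t: FieldType) => countByType[t] ?? 0;
  return {
    rowCountTotal,
    rowCountMax,
    fieldCount,
    frameCount: frames.length,
    hasData: rowCountTotal > 0,
    fieldCountByType,
    hasFieldType: (t: FieldType) => fieldCountByType(t) > 0,
  };
}
```

The closures over `countByType` are what let the deprecated scalar fields (`numberFieldCount`, etc.) be derived rather than tracked separately.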

View File

@@ -248,11 +248,6 @@ export interface FeatureToggles {
*/
externalServiceAccounts?: boolean;
/**
* Enables panel monitoring through logs and measurements
* @default true
*/
panelMonitoring?: boolean;
/**
* Enables native HTTP Histograms
*/
enableNativeHTTPHistogram?: boolean;
@@ -565,10 +560,14 @@ export interface FeatureToggles {
*/
queryLibrary?: boolean;
/**
* Enable suggested dashboards when creating new dashboards
* Enable dashboard library experiments that are production ready
*/
dashboardLibrary?: boolean;
/**
* Enable suggested dashboards when creating new dashboards
*/
suggestedDashboards?: boolean;
/**
* Sets the logs table as default visualisation in logs explore
*/
logsExploreTableDefaultVisualization?: boolean;

View File

@@ -162,6 +162,7 @@ export const availableIconsIndex = {
globe: true,
grafana: true,
'graph-bar': true,
'hand-pointer': true,
heart: true,
'heart-rate': true,
'heart-break': true,

View File

@@ -2,14 +2,15 @@ import { defaultsDeep } from 'lodash';
import { EventBus } from '../events/types';
import { StandardEditorProps } from '../field/standardFieldConfigEditorRegistry';
import { PanelDataSummary, getPanelDataSummary } from '../panel/suggestions/getPanelDataSummary';
import { Registry } from '../utils/Registry';
import { OptionsEditorItem } from './OptionsUIRegistryBuilder';
import { ScopedVars } from './ScopedVars';
import { AlertStateInfo } from './alerts';
import { PanelModel } from './dashboard';
import { LoadingState, PreferredVisualisationType } from './data';
import { DataFrame, FieldType } from './dataFrame';
import { LoadingState } from './data';
import { DataFrame } from './dataFrame';
import { DataQueryError, DataQueryRequest, DataQueryTimings } from './datasource';
import { FieldConfigSource } from './fieldOverrides';
import { IconName } from './icon';
@@ -258,25 +259,6 @@ export enum VisualizationSuggestionScore {
OK = 50,
}
/**
* @alpha
*/
export interface PanelDataSummary {
hasData?: boolean;
rowCountTotal: number;
rowCountMax: number;
frameCount: number;
fieldCount: number;
numberFieldCount: number;
timeFieldCount: number;
stringFieldCount: number;
hasNumberField?: boolean;
hasTimeField?: boolean;
hasStringField?: boolean;
/** The first frame that set's this value */
preferredVisualisationType?: PreferredVisualisationType;
}
/**
* @alpha
*/
@@ -293,68 +275,13 @@ export class VisualizationSuggestionsBuilder {
constructor(data?: PanelData, panel?: PanelModel) {
this.data = data;
this.panel = panel;
this.dataSummary = this.computeDataSummary();
this.dataSummary = getPanelDataSummary(this.data?.series);
}
getListAppender<TOptions, TFieldConfig>(defaults: VisualizationSuggestion<TOptions, TFieldConfig>) {
return new VisualizationSuggestionsListAppender<TOptions, TFieldConfig>(this.list, defaults);
}
private computeDataSummary() {
const frames = this.data?.series || [];
let numberFieldCount = 0;
let timeFieldCount = 0;
let stringFieldCount = 0;
let rowCountTotal = 0;
let rowCountMax = 0;
let fieldCount = 0;
let preferredVisualisationType: PreferredVisualisationType | undefined;
for (const frame of frames) {
rowCountTotal += frame.length;
if (frame.meta?.preferredVisualisationType) {
preferredVisualisationType = frame.meta.preferredVisualisationType;
}
for (const field of frame.fields) {
fieldCount++;
switch (field.type) {
case FieldType.number:
numberFieldCount += 1;
break;
case FieldType.time:
timeFieldCount += 1;
break;
case FieldType.string:
stringFieldCount += 1;
break;
}
}
if (frame.length > rowCountMax) {
rowCountMax = frame.length;
}
}
return {
numberFieldCount,
timeFieldCount,
stringFieldCount,
rowCountTotal,
rowCountMax,
fieldCount,
preferredVisualisationType,
frameCount: frames.length,
hasData: rowCountTotal > 0,
hasTimeField: timeFieldCount > 0,
hasNumberField: numberFieldCount > 0,
hasStringField: stringFieldCount > 0,
};
}
getList() {
return this.list;
}

View File

@@ -247,7 +247,7 @@ export interface CloudWatchLogsQuery extends common.DataQuery {
*/
logGroups?: Array<LogGroup>;
/**
* Whether a query is a Logs Insights or Logs Anomalies query
* Whether a query is a Logs Insights or Log Anomalies query
*/
logsMode?: LogsMode;
/**
@@ -275,7 +275,7 @@ export const defaultCloudWatchLogsQuery: Partial<CloudWatchLogsQuery> = {
};
/**
* Shape of a Cloudwatch Logs Anomalies query
* Shape of a Cloudwatch Log Anomalies query
*/
export interface CloudWatchLogsAnomaliesQuery extends common.DataQuery {
/**
@@ -284,7 +284,7 @@ export interface CloudWatchLogsAnomaliesQuery extends common.DataQuery {
anomalyDetectionARN?: string;
id: string;
/**
* Whether a query is a Logs Insights or Logs Anomalies query
* Whether a query is a Logs Insights or Log Anomalies query
*/
logsMode?: LogsMode;
/**

View File

@@ -273,7 +273,7 @@ func setupSimpleHTTPServer(features featuremgmt.FeatureToggles) *HTTPServer {
AccessControl: acimpl.ProvideAccessControl(featuremgmt.WithFeatures()),
annotationsRepo: annotationstest.NewFakeAnnotationsRepo(),
authInfoService: &authinfotest.FakeService{
ExpectedLabels: map[int64]string{int64(1): login.GetAuthProviderLabel(login.LDAPAuthModule)},
ExpectedRecentlyUsedLabel: map[int64]string{int64(1): login.GetAuthProviderLabel(login.LDAPAuthModule)},
},
tracer: tracing.InitializeTracerForTest(),
}

View File

@@ -314,7 +314,7 @@ func (hs *HTTPServer) searchOrgUsersHelper(c *contextmodel.ReqContext, query *or
filteredUsers = append(filteredUsers, user)
}
modules, err := hs.authInfoService.GetUserLabels(c.Req.Context(), login.GetUserLabelsQuery{
modules, err := hs.authInfoService.GetUsersRecentlyUsedLabel(c.Req.Context(), login.GetUserLabelsQuery{
UserIDs: authLabelsUserIDs,
})

View File

@@ -115,6 +115,7 @@ func (hs *HTTPServer) GetUserByLoginOrEmail(c *contextmodel.ReqContext) response
}
return response.Error(http.StatusInternalServerError, "Failed to get user", err)
}
result := user.UserProfileDTO{
ID: usr.ID,
UID: usr.UID,
@@ -128,6 +129,11 @@ func (hs *HTTPServer) GetUserByLoginOrEmail(c *contextmodel.ReqContext) response
UpdatedAt: usr.Updated,
CreatedAt: usr.Created,
}
// Populate AuthLabels using all historically used auth modules ordered by most recent.
if modules, err := hs.authInfoService.GetUserAuthModuleLabels(c.Req.Context(), usr.ID); err == nil {
result.AuthLabels = modules
}
return response.JSON(http.StatusOK, &result)
}
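The change above populates `AuthLabels` from all historically used auth modules, ordered by most recent use. A hedged sketch of that ordering (the `AuthInfo` shape and `recentAuthLabels` name are illustrative, not the Go service's API):

```typescript
// One record per auth module the user has ever signed in with.
interface AuthInfo {
  module: string;
  lastUsed: number; // epoch millis of the most recent login via this module
}

// Newest-first ordering of module labels, as described in the handler comment.
function recentAuthLabels(infos: AuthInfo[]): string[] {
  return [...infos]
    .sort((a, b) => b.lastUsed - a.lastUsed)
    .map((i) => i.module);
}
```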

View File

@@ -185,6 +185,44 @@ func TestIntegrationUserAPIEndpoint_userLoggedIn(t *testing.T) {
require.NoError(t, err)
}, mock)
// Multiple historical auth labels should appear ordered by recency
loggedInUserScenario(t, "When calling GET returns with multiple auth labels", "/api/users/lookup", "/api/users/lookup", func(sc *scenarioContext) {
createUserCmd := user.CreateUserCommand{
Email: fmt.Sprint("multi", "@test.com"),
Name: "multi",
Login: "multi",
IsAdmin: true,
}
orgSvc, err := orgimpl.ProvideService(sqlStore, sc.cfg, quotatest.New(false, nil))
require.NoError(t, err)
userSvc, err := userimpl.ProvideService(
sqlStore, orgSvc, sc.cfg, nil, nil, tracing.InitializeTracerForTest(),
quotatest.New(false, nil), supportbundlestest.NewFakeBundleService(),
)
require.NoError(t, err)
usr, err := userSvc.Create(context.Background(), &createUserCmd)
require.Nil(t, err)
sc.handlerFunc = hs.GetUserByLoginOrEmail
userMock := usertest.NewUserServiceFake()
userMock.ExpectedUser = &user.User{ID: usr.ID, Email: usr.Email, Login: usr.Login, Name: usr.Name}
sc.userService = userMock
hs.userService = userMock
fakeAuth := &authinfotest.FakeService{ExpectedAuthModuleLabels: []string{login.GetAuthProviderLabel(login.OktaAuthModule), login.GetAuthProviderLabel(login.LDAPAuthModule), login.GetAuthProviderLabel(login.SAMLAuthModule)}}
hs.authInfoService = fakeAuth
sc.fakeReqWithParams("GET", sc.url, map[string]string{"loginOrEmail": usr.Email}).exec()
var resp user.UserProfileDTO
require.Equal(t, http.StatusOK, sc.resp.Code)
err = json.Unmarshal(sc.resp.Body.Bytes(), &resp)
require.NoError(t, err)
expected := []string{login.GetAuthProviderLabel(login.OktaAuthModule), login.GetAuthProviderLabel(login.LDAPAuthModule), login.GetAuthProviderLabel(login.SAMLAuthModule)}
require.Equal(t, expected, resp.AuthLabels)
}, mock)
loggedInUserScenario(t, "When calling GET on", "/api/users", "/api/users", func(sc *scenarioContext) {
userMock.ExpectedSearchUsers = mockResult

View File

@@ -3,10 +3,8 @@ package datasource
import (
"context"
"encoding/json"
"errors"
"fmt"
"maps"
"path/filepath"
"github.com/prometheus/client_golang/prometheus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -23,7 +21,6 @@ import (
datasourceV0 "github.com/grafana/grafana/pkg/apis/datasource/v0alpha1"
queryV0 "github.com/grafana/grafana/pkg/apis/query/v0alpha1"
grafanaregistry "github.com/grafana/grafana/pkg/apiserver/registry/generic"
"github.com/grafana/grafana/pkg/configprovider"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/plugins/manager/sources"
"github.com/grafana/grafana/pkg/promlib/models"
@@ -31,7 +28,6 @@ import (
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/apiserver/builder"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/tsdb/grafana-testdata-datasource/kinds"
)
@@ -53,7 +49,6 @@ type DataSourceAPIBuilder struct {
}
func RegisterAPIService(
cfgProvider configprovider.ConfigProvider,
features featuremgmt.FeatureToggles,
apiRegistrar builder.APIRegistrar,
pluginClient plugins.Client, // access to everything
@@ -61,6 +56,7 @@ func RegisterAPIService(
contextProvider PluginContextWrapper,
accessControl accesscontrol.AccessControl,
reg prometheus.Registerer,
pluginSources sources.Registry,
) (*DataSourceAPIBuilder, error) {
// We want to expose just a limited set of plugins
//nolint:staticcheck // not yet migrated to OpenFeature
@@ -75,13 +71,9 @@ func RegisterAPIService(
var err error
var builder *DataSourceAPIBuilder
cfg, err := cfgProvider.Get(context.Background())
pluginJSONs, err := getDatasourcePlugins(pluginSources)
if err != nil {
return nil, err
}
pluginJSONs, err := getCorePlugins(cfg)
if err != nil {
return nil, err
return nil, fmt.Errorf("error getting list of datasource plugins: %s", err)
}
ids := []string{
@@ -299,21 +291,29 @@ func (b *DataSourceAPIBuilder) GetOpenAPIDefinitions() openapi.GetOpenAPIDefinit
}
}
func getCorePlugins(cfg *setting.Cfg) ([]plugins.JSONData, error) {
coreDataSourcesPath := filepath.Join(cfg.StaticRootPath, "app", "plugins", "datasource")
coreDataSourcesSrc := sources.NewLocalSource(
plugins.ClassCore,
[]string{coreDataSourcesPath},
)
func getDatasourcePlugins(pluginSources sources.Registry) ([]plugins.JSONData, error) {
var pluginJSONs []plugins.JSONData
res, err := coreDataSourcesSrc.Discover(context.Background())
if err != nil {
return nil, errors.New("failed to load core data source plugins")
}
// It's possible that the same plugin will be found in different sources.
// Registering the same plugin twice in the API is Probably A Bad Thing,
// so this map keeps track of uniques, so we can skip duplicates.
var uniquePlugins = map[string]bool{}
pluginJSONs := make([]plugins.JSONData, 0, len(res))
for _, p := range res {
pluginJSONs = append(pluginJSONs, p.Primary.JSONData)
for _, pluginSource := range pluginSources.List(context.Background()) {
res, err := pluginSource.Discover(context.Background())
if err != nil {
return nil, err
}
for _, p := range res {
if p.Primary.JSONData.Type == plugins.TypeDataSource {
if _, found := uniquePlugins[p.Primary.JSONData.ID]; found {
backend.Logger.Info("Found duplicate plugin when registering API groups", "pluginID", p.Primary.JSONData.ID)
continue
}
uniquePlugins[p.Primary.JSONData.ID] = true
pluginJSONs = append(pluginJSONs, p.Primary.JSONData)
}
}
}
return pluginJSONs, nil
}
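The de-duplication in `getDatasourcePlugins` is a keep-first-occurrence pattern: filter by plugin type, then track seen IDs across sources. A standalone TypeScript sketch of the same idea (names are illustrative, not the Go API):

```typescript
interface PluginJSON {
  id: string;
  type: string; // e.g. 'datasource', 'app', 'panel'
}

// Flatten multiple plugin sources, keeping only datasource plugins and
// only the first occurrence of each plugin ID.
function dedupeDatasources(sources: PluginJSON[][]): PluginJSON[] {
  const seen = new Set<string>();
  const out: PluginJSON[] = [];
  for (const source of sources) {
    for (const p of source) {
      if (p.type !== 'datasource') continue; // skip non-datasource plugins
      if (seen.has(p.id)) continue; // same plugin found in a later source
      seen.add(p.id);
      out.push(p);
    }
  }
  return out;
}
```

Keeping the first occurrence means source ordering decides which copy wins when the same plugin appears in more than one source.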

View File

@@ -120,6 +120,14 @@ func validateOnUpdate(ctx context.Context,
return err
}
// Check that the folder being moved is not an ancestor of the target parent.
// This prevents circular references (e.g., moving A under B when B is already under A).
for _, ancestor := range info.Items {
if ancestor.Name == obj.Name {
return fmt.Errorf("cannot move folder under its own descendant: this would create a circular reference")
}
}
// if by moving a folder we exceed the max depth, return an error
if len(info.Items) > maxDepth+1 {
return folder.ErrMaximumDepthReached.Errorf("maximum folder depth reached")
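The new validation walks the target parent's ancestor chain and rejects the move if the folder being moved appears anywhere in it. A standalone sketch of that check (TypeScript for illustration; `FolderInfo` and `validateMove` are hypothetical names mirroring the ancestor items the Go code iterates):

```typescript
// One entry per folder in the ancestor chain of the *target* parent,
// e.g. moving "parent" under "child" yields: child -> parent -> root.
interface FolderInfo {
  name: string;
  parent?: string;
}

// Returns an error message if moving `folderName` under a target whose
// ancestor chain is `ancestors` would create a cycle; otherwise null.
function validateMove(folderName: string, ancestors: FolderInfo[]): string | null {
  for (const ancestor of ancestors) {
    if (ancestor.name === folderName) {
      return 'cannot move folder under its own descendant';
    }
  }
  return null;
}
```

Because the chain is queried from the target parent upward, a single linear scan covers both the direct-child and grandchild cases exercised in the tests below.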

View File

@@ -264,6 +264,71 @@ func TestValidateUpdate(t *testing.T) {
maxDepth: folder.MaxNestedFolderDepth,
expectedErr: "[folder.maximum-depth-reached]",
},
{
name: "error when moving folder under its own descendant (direct child)",
folder: &folders.Folder{
ObjectMeta: metav1.ObjectMeta{
Name: "parent",
Annotations: map[string]string{
utils.AnnoKeyFolder: "child",
},
},
Spec: folders.FolderSpec{
Title: "parent folder",
},
},
old: &folders.Folder{
ObjectMeta: metav1.ObjectMeta{
Name: "parent",
},
Spec: folders.FolderSpec{
Title: "parent folder",
},
},
// When querying parents of "child", we get the chain: child -> parent -> root
// This means "parent" is an ancestor of "child", so we can't move "parent" under "child"
parents: &folders.FolderInfoList{
Items: []folders.FolderInfo{
{Name: "child", Parent: "parent"},
{Name: "parent", Parent: folder.GeneralFolderUID},
{Name: folder.GeneralFolderUID},
},
},
expectedErr: "cannot move folder under its own descendant",
},
{
name: "error when moving folder under its grandchild",
folder: &folders.Folder{
ObjectMeta: metav1.ObjectMeta{
Name: "grandparent",
Annotations: map[string]string{
utils.AnnoKeyFolder: "grandchild",
},
},
Spec: folders.FolderSpec{
Title: "grandparent folder",
},
},
old: &folders.Folder{
ObjectMeta: metav1.ObjectMeta{
Name: "grandparent",
},
Spec: folders.FolderSpec{
Title: "grandparent folder",
},
},
// When querying parents of "grandchild", we get: grandchild -> child -> grandparent -> root
// This means "grandparent" is in the ancestry, so we can't move it under "grandchild"
parents: &folders.FolderInfoList{
Items: []folders.FolderInfo{
{Name: "grandchild", Parent: "child"},
{Name: "child", Parent: "grandparent"},
{Name: "grandparent", Parent: folder.GeneralFolderUID},
{Name: folder.GeneralFolderUID},
},
},
expectedErr: "cannot move folder under its own descendant",
},
}
for _, tt := range tests {

View File

@@ -350,23 +350,59 @@ func (b *IdentityAccessManagementAPIBuilder) UpdateAPIGroupInfo(apiGroupInfo *ge
}
//nolint:staticcheck // not yet migrated to OpenFeature
if b.features.IsEnabledGlobally(featuremgmt.FlagKubernetesAuthzResourcePermissionApis) {
resourcePermissionStore, err := NewLocalStore(iamv0.ResourcePermissionInfo, apiGroupInfo.Scheme, opts.OptsGetter, b.reg, b.accessClient, b.resourcePermissionsStorage)
if err != nil {
if err := b.UpdateResourcePermissionsAPIGroup(apiGroupInfo, opts, storage, b.enableDualWriter, enableZanzanaSync); err != nil {
return err
}
if enableZanzanaSync {
b.logger.Info("Enabling AfterCreate, BeginUpdate, and AfterDelete hooks for ResourcePermission to sync to Zanzana")
resourcePermissionStore.AfterCreate = b.AfterResourcePermissionCreate
resourcePermissionStore.BeginUpdate = b.BeginResourcePermissionUpdate
resourcePermissionStore.AfterDelete = b.AfterResourcePermissionDelete
}
storage[iamv0.ResourcePermissionInfo.StoragePath()] = resourcePermissionStore
}
apiGroupInfo.VersionedResourcesStorageMap[legacyiamv0.VERSION] = storage
return nil
}
func (b *IdentityAccessManagementAPIBuilder) UpdateResourcePermissionsAPIGroup(
apiGroupInfo *genericapiserver.APIGroupInfo,
opts builder.APIGroupOptions,
storage map[string]rest.Storage,
enableDualWriter bool,
enableZanzanaSync bool,
) error {
var store rest.Storage
// Create the legacy store first
legacyStore, err := NewLocalStore(iamv0.ResourcePermissionInfo, apiGroupInfo.Scheme, opts.OptsGetter, b.reg, b.accessClient, b.resourcePermissionsStorage)
if err != nil {
return err
}
// Register the hooks for Zanzana sync
// FIXME: The hooks are registered on the legacy store
// Once we fully migrate to unified storage, we can move these hooks to the unified store
if enableZanzanaSync {
b.logger.Info("Enabling AfterCreate, BeginUpdate, and AfterDelete hooks for ResourcePermission to sync to Zanzana")
legacyStore.AfterCreate = b.AfterResourcePermissionCreate
legacyStore.BeginUpdate = b.BeginResourcePermissionUpdate
legacyStore.AfterDelete = b.AfterResourcePermissionDelete
}
// Set the default store to the legacy store
store = legacyStore
if enableDualWriter {
// Create the dual write store (UniStore + LegacyStore)
uniStore, err := grafanaregistry.NewRegistryStore(apiGroupInfo.Scheme, iamv0.ResourcePermissionInfo, opts.OptsGetter)
if err != nil {
return err
}
store, err = opts.DualWriteBuilder(iamv0.ResourcePermissionInfo.GroupResource(), legacyStore, uniStore)
if err != nil {
return err
}
}
storage[iamv0.ResourcePermissionInfo.StoragePath()] = store
return nil
}
func (b *IdentityAccessManagementAPIBuilder) GetOpenAPIDefinitions() common.GetOpenAPIDefinitions {
return func(rc common.ReferenceCallback) map[string]common.OpenAPIDefinition {
dst := legacyiamv0.GetOpenAPIDefinitions(rc)

View File

@@ -16,8 +16,9 @@ import (
type FakeZanzanaClient struct {
zanzana.Client
writeCallback func(context.Context, *v1.WriteRequest) error
readCallback func(context.Context, *v1.ReadRequest) (*v1.ReadResponse, error)
writeCallback func(context.Context, *v1.WriteRequest) error
readCallback func(context.Context, *v1.ReadRequest) (*v1.ReadResponse, error)
mutateCallback func(context.Context, *v1.MutateRequest) error
}
// Read implements zanzana.Client.
@@ -33,6 +34,14 @@ func (f *FakeZanzanaClient) Write(ctx context.Context, req *v1.WriteRequest) err
return f.writeCallback(ctx, req)
}
// Mutate implements zanzana.Client.
func (f *FakeZanzanaClient) Mutate(ctx context.Context, req *v1.MutateRequest) error {
if f.mutateCallback != nil {
return f.mutateCallback(ctx, req)
}
return nil
}
func requireTuplesMatch(t *testing.T, actual []*v1.TupleKey, expected []*v1.TupleKey, msgAndArgs ...interface{}) {
t.Helper()
for _, exp := range expected {


@@ -10,27 +10,8 @@ import (
iamv0 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
v1 "github.com/grafana/grafana/pkg/services/authz/proto/v1"
"github.com/grafana/grafana/pkg/services/authz/zanzana"
)
// createUserBasicRoleTuple creates a tuple for a user's basic role assignment
func createUserBasicRoleTuple(userUID, orgRole string) *v1.TupleKey {
if orgRole == "" {
return nil
}
basicRole := zanzana.TranslateBasicRole(orgRole)
if basicRole == "" {
return nil
}
return &v1.TupleKey{
User: zanzana.NewTupleEntry(zanzana.TypeUser, userUID, ""),
Relation: zanzana.RelationAssignee,
Object: zanzana.NewTupleEntry(zanzana.TypeRole, basicRole, ""),
}
}
// AfterUserCreate is a post-create hook that writes the user's basic role assignment to Zanzana (openFGA)
func (b *IdentityAccessManagementAPIBuilder) AfterUserCreate(obj runtime.Object, _ *metav1.CreateOptions) {
if b.zClient == nil {
@@ -43,24 +24,24 @@ func (b *IdentityAccessManagementAPIBuilder) AfterUserCreate(obj runtime.Object,
return
}
resourceType := "user"
operation := "create"
// Skip if user has no role assigned
if user.Spec.Role == "" {
b.logger.Debug("user has no role assigned, skipping basic role sync",
"namespace", user.Namespace,
"userUID", user.Name,
"name", user.Name,
)
return
}
resourceType := "user"
operation := "create"
// Grab a ticket to write to Zanzana
wait := time.Now()
b.zTickets <- true
hooksWaitHistogram.WithLabelValues(resourceType, operation).Observe(time.Since(wait).Seconds())
go func(u *iamv0.User) {
go func(namespace, subjectName, role, resourceType, operation string) {
start := time.Now()
status := "success"
@@ -70,44 +51,38 @@ func (b *IdentityAccessManagementAPIBuilder) AfterUserCreate(obj runtime.Object,
hooksOperationCounter.WithLabelValues(resourceType, operation, status).Inc()
}()
tuple := createUserBasicRoleTuple(u.Name, u.Spec.Role)
if tuple == nil {
b.logger.Warn("failed to create user basic role tuple",
"namespace", u.Namespace,
"userUID", u.Name,
"role", u.Spec.Role,
)
status = "failure"
return
}
b.logger.Debug("writing user basic role to zanzana",
"namespace", u.Namespace,
"userUID", u.Name,
"role", u.Spec.Role,
"namespace", namespace,
"name", subjectName,
"role", role,
)
ctx, cancel := context.WithTimeout(context.Background(), defaultWriteTimeout)
defer cancel()
err := b.zClient.Write(ctx, &v1.WriteRequest{
Namespace: u.Namespace,
Writes: &v1.WriteRequestWrites{
TupleKeys: []*v1.TupleKey{tuple},
err := b.zClient.Mutate(ctx, &v1.MutateRequest{
Namespace: namespace,
Operations: []*v1.MutateOperation{
{
Operation: &v1.MutateOperation_UpdateUserOrgRole{
UpdateUserOrgRole: &v1.UpdateUserOrgRoleOperation{User: subjectName, Role: role},
},
},
},
})
if err != nil {
status = "failure"
b.logger.Error("failed to write user basic role to zanzana",
"err", err,
"namespace", u.Namespace,
"userUID", u.Name,
"role", u.Spec.Role,
"namespace", namespace,
"name", subjectName,
"role", role,
)
} else {
hooksTuplesCounter.WithLabelValues(resourceType, operation, "write").Inc()
}
}(user.DeepCopy())
}(user.Namespace, user.Name, user.Spec.Role, resourceType, operation)
}
// BeginUserUpdate is a pre-update hook that gets called on user updates
@@ -142,7 +117,7 @@ func (b *IdentityAccessManagementAPIBuilder) BeginUserUpdate(ctx context.Context
b.zTickets <- true
hooksWaitHistogram.WithLabelValues("user", "update").Observe(time.Since(wait).Seconds())
go func(old, new *iamv0.User) {
go func(namespace, subjectName, oldRole, newRole string) {
start := time.Now()
status := "success"
@@ -153,72 +128,40 @@ func (b *IdentityAccessManagementAPIBuilder) BeginUserUpdate(ctx context.Context
}()
b.logger.Debug("updating user basic role in zanzana",
"namespace", new.Namespace,
"userUID", new.Name,
"oldRole", old.Spec.Role,
"newRole", new.Spec.Role,
"namespace", namespace,
"name", subjectName,
"oldRole", oldRole,
"newRole", newRole,
)
ctx, cancel := context.WithTimeout(context.Background(), defaultWriteTimeout)
defer cancel()
req := &v1.WriteRequest{
Namespace: new.Namespace,
err := b.zClient.Mutate(ctx, &v1.MutateRequest{
Namespace: namespace,
Operations: []*v1.MutateOperation{
{
Operation: &v1.MutateOperation_UpdateUserOrgRole{
UpdateUserOrgRole: &v1.UpdateUserOrgRoleOperation{User: subjectName, Role: newRole},
},
}, {
Operation: &v1.MutateOperation_DeleteUserOrgRole{
DeleteUserOrgRole: &v1.DeleteUserOrgRoleOperation{User: subjectName, Role: oldRole},
},
},
},
})
if err != nil {
status = "failure"
b.logger.Error("failed to update user basic role in zanzana",
"err", err,
"namespace", namespace,
"name", subjectName,
"role", newRole,
"oldRole", oldRole,
)
}
// Delete old role tuple if it existed
if old.Spec.Role != "" {
oldTuple := createUserBasicRoleTuple(old.Name, old.Spec.Role)
if oldTuple != nil {
deleteTuple := tupleToTupleKeyWithoutCondition(oldTuple)
req.Deletes = &v1.WriteRequestDeletes{
TupleKeys: []*v1.TupleKeyWithoutCondition{deleteTuple},
}
b.logger.Debug("deleting old user basic role from zanzana",
"namespace", new.Namespace,
"userUID", new.Name,
"role", old.Spec.Role,
)
}
}
// Write new role tuple if it exists
if new.Spec.Role != "" {
newTuple := createUserBasicRoleTuple(new.Name, new.Spec.Role)
if newTuple != nil {
req.Writes = &v1.WriteRequestWrites{
TupleKeys: []*v1.TupleKey{newTuple},
}
b.logger.Debug("writing new user basic role to zanzana",
"namespace", new.Namespace,
"userUID", new.Name,
"role", new.Spec.Role,
)
}
}
// Only make the request if there are deletes or writes
if (req.Deletes != nil && len(req.Deletes.TupleKeys) > 0) || (req.Writes != nil && len(req.Writes.TupleKeys) > 0) {
err := b.zClient.Write(ctx, req)
if err != nil {
status = "failure"
b.logger.Error("failed to update user basic role in zanzana",
"err", err,
"namespace", new.Namespace,
"userUID", new.Name,
)
} else {
if req.Deletes != nil && len(req.Deletes.TupleKeys) > 0 {
hooksTuplesCounter.WithLabelValues("user", "update", "delete").Inc()
}
if req.Writes != nil && len(req.Writes.TupleKeys) > 0 {
hooksTuplesCounter.WithLabelValues("user", "update", "write").Inc()
}
}
} else {
b.logger.Debug("no tuples to update in zanzana", "namespace", new.Namespace)
}
}(oldUser.DeepCopy(), newUser.DeepCopy())
}(oldUser.Namespace, oldUser.Name, oldUser.Spec.Role, newUser.Spec.Role)
}, nil
}
@@ -241,7 +184,7 @@ func (b *IdentityAccessManagementAPIBuilder) AfterUserDelete(obj runtime.Object,
if user.Spec.Role == "" {
b.logger.Debug("user had no role assigned, skipping basic role sync",
"namespace", user.Namespace,
"userUID", user.Name,
"name", user.Name,
)
return
}
@@ -250,7 +193,7 @@ func (b *IdentityAccessManagementAPIBuilder) AfterUserDelete(obj runtime.Object,
b.zTickets <- true
hooksWaitHistogram.WithLabelValues(resourceType, operation).Observe(time.Since(wait).Seconds())
go func(u *iamv0.User) {
go func(namespace, subjectName, role string) {
start := time.Now()
status := "success"
@@ -260,44 +203,36 @@ func (b *IdentityAccessManagementAPIBuilder) AfterUserDelete(obj runtime.Object,
hooksOperationCounter.WithLabelValues(resourceType, operation, status).Inc()
}()
tuple := createUserBasicRoleTuple(u.Name, u.Spec.Role)
if tuple == nil {
b.logger.Warn("failed to create user basic role tuple for deletion",
"namespace", u.Namespace,
"userUID", u.Name,
"role", u.Spec.Role,
)
status = "failure"
return
}
deleteTuple := tupleToTupleKeyWithoutCondition(tuple)
b.logger.Debug("deleting user basic role from zanzana",
"namespace", u.Namespace,
"userUID", u.Name,
"role", u.Spec.Role,
"namespace", namespace,
"name", subjectName,
"role", role,
)
ctx, cancel := context.WithTimeout(context.Background(), defaultWriteTimeout)
defer cancel()
err := b.zClient.Write(ctx, &v1.WriteRequest{
Namespace: u.Namespace,
Deletes: &v1.WriteRequestDeletes{
TupleKeys: []*v1.TupleKeyWithoutCondition{deleteTuple},
err := b.zClient.Mutate(ctx, &v1.MutateRequest{
Namespace: namespace,
Operations: []*v1.MutateOperation{
{
Operation: &v1.MutateOperation_DeleteUserOrgRole{
DeleteUserOrgRole: &v1.DeleteUserOrgRoleOperation{User: subjectName, Role: role},
},
},
},
})
if err != nil {
status = "failure"
b.logger.Error("failed to delete user basic role from zanzana",
"err", err,
"namespace", u.Namespace,
"userUID", u.Name,
"role", u.Spec.Role,
"namespace", namespace,
"name", subjectName,
"role", role,
)
} else {
hooksTuplesCounter.WithLabelValues(resourceType, operation, "delete").Inc()
}
}(user.DeepCopy())
}(user.Namespace, user.Name, user.Spec.Role)
}
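The hooks above throttle Zanzana writes with a buffered channel used as a counting semaphore: `b.zTickets <- true` blocks once the channel is full, and each goroutine releases its slot when it finishes. A minimal, self-contained sketch of that pattern (names like `runWithTickets` and `maxInFlight` are illustrative, not from the Grafana code):

```go
package main

import "fmt"

// runWithTickets limits the number of in-flight goroutines with a buffered
// channel used as a counting semaphore, the same idea as zTickets above.
func runWithTickets(maxInFlight int, jobs []string, work func(string) string) []string {
	tickets := make(chan bool, maxInFlight) // a send blocks once maxInFlight slots are taken
	results := make(chan string, len(jobs)) // buffered so workers never block on delivery
	for _, j := range jobs {
		tickets <- true // acquire a slot; this is where a wait histogram would observe
		go func(job string) {
			defer func() { <-tickets }() // release the slot when done
			results <- work(job)
		}(j)
	}
	out := make([]string, 0, len(jobs))
	for range jobs {
		out = append(out, <-results)
	}
	return out
}

func main() {
	out := runWithTickets(2, []string{"a", "b", "c"}, func(s string) string { return s + "!" })
	fmt.Println(len(out)) // prints: 3
}
```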


@@ -33,21 +33,22 @@ func TestAfterUserCreate(t *testing.T) {
},
}
testAdminRole := func(ctx context.Context, req *v1.WriteRequest) error {
testAdminRole := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.NotNil(t, req.Writes)
require.Len(t, req.Writes.TupleKeys, 1)
require.Equal(t, "org-1", req.Namespace)
require.Len(t, req.Operations, 1)
tuple := req.Writes.TupleKeys[0]
require.Equal(t, "user:df2p421det1q8c", tuple.User)
require.Equal(t, "assignee", tuple.Relation)
require.Equal(t, "role:basic_admin", tuple.Object)
op := req.Operations[0]
require.NotNil(t, op)
updateOp := op.GetUpdateUserOrgRole()
require.NotNil(t, updateOp)
require.Equal(t, "df2p421det1q8c", updateOp.User)
require.Equal(t, "Admin", updateOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testAdminRole}
b.zClient = &FakeZanzanaClient{mutateCallback: testAdminRole}
b.AfterUserCreate(&user, nil)
wg.Wait()
})
@@ -64,21 +65,22 @@ func TestAfterUserCreate(t *testing.T) {
},
}
testEditorRole := func(ctx context.Context, req *v1.WriteRequest) error {
testEditorRole := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.NotNil(t, req.Writes)
require.Len(t, req.Writes.TupleKeys, 1)
require.Equal(t, "org-2", req.Namespace)
require.Len(t, req.Operations, 1)
tuple := req.Writes.TupleKeys[0]
require.Equal(t, "user:user123", tuple.User)
require.Equal(t, "assignee", tuple.Relation)
require.Equal(t, "role:basic_editor", tuple.Object)
op := req.Operations[0]
require.NotNil(t, op)
updateOp := op.GetUpdateUserOrgRole()
require.NotNil(t, updateOp)
require.Equal(t, "user123", updateOp.User)
require.Equal(t, "Editor", updateOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testEditorRole}
b.zClient = &FakeZanzanaClient{mutateCallback: testEditorRole}
b.AfterUserCreate(&user, nil)
wg.Wait()
})
@@ -95,21 +97,22 @@ func TestAfterUserCreate(t *testing.T) {
},
}
testViewerRole := func(ctx context.Context, req *v1.WriteRequest) error {
testViewerRole := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.NotNil(t, req.Writes)
require.Len(t, req.Writes.TupleKeys, 1)
require.Equal(t, "org-3", req.Namespace)
require.Len(t, req.Operations, 1)
tuple := req.Writes.TupleKeys[0]
require.Equal(t, "user:viewer456", tuple.User)
require.Equal(t, "assignee", tuple.Relation)
require.Equal(t, "role:basic_viewer", tuple.Object)
op := req.Operations[0]
require.NotNil(t, op)
updateOp := op.GetUpdateUserOrgRole()
require.NotNil(t, updateOp)
require.Equal(t, "viewer456", updateOp.User)
require.Equal(t, "Viewer", updateOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testViewerRole}
b.zClient = &FakeZanzanaClient{mutateCallback: testViewerRole}
b.AfterUserCreate(&user, nil)
wg.Wait()
})
@@ -184,31 +187,28 @@ func TestBeginUserUpdate(t *testing.T) {
},
}
testRoleChange := func(ctx context.Context, req *v1.WriteRequest) error {
testRoleChange := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.Equal(t, "org-1", req.Namespace)
require.Len(t, req.Operations, 2)
// Should delete old role
require.NotNil(t, req.Deletes)
require.Len(t, req.Deletes.TupleKeys, 1)
deleteTuple := req.Deletes.TupleKeys[0]
require.Equal(t, "user:testuser", deleteTuple.User)
require.Equal(t, "assignee", deleteTuple.Relation)
require.Equal(t, "role:basic_viewer", deleteTuple.Object)
// First operation should be UpdateUserOrgRole with new role
updateOp := req.Operations[0].GetUpdateUserOrgRole()
require.NotNil(t, updateOp)
require.Equal(t, "testuser", updateOp.User)
require.Equal(t, "Admin", updateOp.Role)
// Should write new role
require.NotNil(t, req.Writes)
require.Len(t, req.Writes.TupleKeys, 1)
writeTuple := req.Writes.TupleKeys[0]
require.Equal(t, "user:testuser", writeTuple.User)
require.Equal(t, "assignee", writeTuple.Relation)
require.Equal(t, "role:basic_admin", writeTuple.Object)
// Second operation should be DeleteUserOrgRole with old role
deleteOp := req.Operations[1].GetDeleteUserOrgRole()
require.NotNil(t, deleteOp)
require.Equal(t, "testuser", deleteOp.User)
require.Equal(t, "Viewer", deleteOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testRoleChange}
b.zClient = &FakeZanzanaClient{mutateCallback: testRoleChange}
finishFunc, err := b.BeginUserUpdate(context.Background(), &newUser, &oldUser, nil)
require.NoError(t, err)
@@ -218,7 +218,7 @@ func TestBeginUserUpdate(t *testing.T) {
wg.Wait()
})
t.Run("should delete old role when new role is empty", func(t *testing.T) {
t.Run("should update role when new role is empty", func(t *testing.T) {
wg.Add(1)
oldUser := iamv0.User{
ObjectMeta: metav1.ObjectMeta{
@@ -240,26 +240,28 @@ func TestBeginUserUpdate(t *testing.T) {
},
}
testRemoveRole := func(ctx context.Context, req *v1.WriteRequest) error {
testRemoveRole := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.Equal(t, "org-2", req.Namespace)
require.Len(t, req.Operations, 2)
// Should delete old role
require.NotNil(t, req.Deletes)
require.Len(t, req.Deletes.TupleKeys, 1)
deleteTuple := req.Deletes.TupleKeys[0]
require.Equal(t, "user:testuser2", deleteTuple.User)
require.Equal(t, "assignee", deleteTuple.Relation)
require.Equal(t, "role:basic_editor", deleteTuple.Object)
// First operation should be UpdateUserOrgRole with empty role
updateOp := req.Operations[0].GetUpdateUserOrgRole()
require.NotNil(t, updateOp)
require.Equal(t, "testuser2", updateOp.User)
require.Equal(t, "", updateOp.Role)
// Should not write new role
require.Nil(t, req.Writes)
// Second operation should be DeleteUserOrgRole with old role
deleteOp := req.Operations[1].GetDeleteUserOrgRole()
require.NotNil(t, deleteOp)
require.Equal(t, "testuser2", deleteOp.User)
require.Equal(t, "Editor", deleteOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testRemoveRole}
b.zClient = &FakeZanzanaClient{mutateCallback: testRemoveRole}
finishFunc, err := b.BeginUserUpdate(context.Background(), &newUser, &oldUser, nil)
require.NoError(t, err)
@@ -291,26 +293,28 @@ func TestBeginUserUpdate(t *testing.T) {
},
}
testAddRole := func(ctx context.Context, req *v1.WriteRequest) error {
testAddRole := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.Equal(t, "org-3", req.Namespace)
require.Len(t, req.Operations, 2)
// Should not delete old role (was empty)
require.Nil(t, req.Deletes)
// First operation should be UpdateUserOrgRole with new role
updateOp := req.Operations[0].GetUpdateUserOrgRole()
require.NotNil(t, updateOp)
require.Equal(t, "testuser3", updateOp.User)
require.Equal(t, "Admin", updateOp.Role)
// Should write new role
require.NotNil(t, req.Writes)
require.Len(t, req.Writes.TupleKeys, 1)
writeTuple := req.Writes.TupleKeys[0]
require.Equal(t, "user:testuser3", writeTuple.User)
require.Equal(t, "assignee", writeTuple.Relation)
require.Equal(t, "role:basic_admin", writeTuple.Object)
// Second operation should be DeleteUserOrgRole with empty old role
deleteOp := req.Operations[1].GetDeleteUserOrgRole()
require.NotNil(t, deleteOp)
require.Equal(t, "testuser3", deleteOp.User)
require.Equal(t, "", deleteOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testAddRole}
b.zClient = &FakeZanzanaClient{mutateCallback: testAddRole}
finishFunc, err := b.BeginUserUpdate(context.Background(), &newUser, &oldUser, nil)
require.NoError(t, err)
@@ -368,12 +372,12 @@ func TestBeginUserUpdate(t *testing.T) {
}
callCount := 0
testNoCall := func(ctx context.Context, req *v1.WriteRequest) error {
testNoCall := func(ctx context.Context, req *v1.MutateRequest) error {
callCount++
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testNoCall}
b.zClient = &FakeZanzanaClient{mutateCallback: testNoCall}
finishFunc, err := b.BeginUserUpdate(context.Background(), &newUser, &oldUser, nil)
require.NoError(t, err)
@@ -437,25 +441,23 @@ func TestAfterUserDelete(t *testing.T) {
},
}
testDeleteAdmin := func(ctx context.Context, req *v1.WriteRequest) error {
testDeleteAdmin := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.Equal(t, "org-1", req.Namespace)
require.Len(t, req.Operations, 1)
// Should have deletes but no writes
require.NotNil(t, req.Deletes)
require.Len(t, req.Deletes.TupleKeys, 1)
require.Nil(t, req.Writes)
deleteTuple := req.Deletes.TupleKeys[0]
require.Equal(t, "user:df2p421det1q8c", deleteTuple.User)
require.Equal(t, "assignee", deleteTuple.Relation)
require.Equal(t, "role:basic_admin", deleteTuple.Object)
op := req.Operations[0]
require.NotNil(t, op)
deleteOp := op.GetDeleteUserOrgRole()
require.NotNil(t, deleteOp)
require.Equal(t, "df2p421det1q8c", deleteOp.User)
require.Equal(t, "Admin", deleteOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testDeleteAdmin}
b.zClient = &FakeZanzanaClient{mutateCallback: testDeleteAdmin}
b.AfterUserDelete(&user, nil)
wg.Wait()
})
@@ -472,22 +474,23 @@ func TestAfterUserDelete(t *testing.T) {
},
}
testDeleteEditor := func(ctx context.Context, req *v1.WriteRequest) error {
testDeleteEditor := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.Equal(t, "org-2", req.Namespace)
require.Len(t, req.Operations, 1)
require.NotNil(t, req.Deletes)
require.Len(t, req.Deletes.TupleKeys, 1)
deleteTuple := req.Deletes.TupleKeys[0]
require.Equal(t, "user:editor123", deleteTuple.User)
require.Equal(t, "assignee", deleteTuple.Relation)
require.Equal(t, "role:basic_editor", deleteTuple.Object)
op := req.Operations[0]
require.NotNil(t, op)
deleteOp := op.GetDeleteUserOrgRole()
require.NotNil(t, deleteOp)
require.Equal(t, "editor123", deleteOp.User)
require.Equal(t, "Editor", deleteOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testDeleteEditor}
b.zClient = &FakeZanzanaClient{mutateCallback: testDeleteEditor}
b.AfterUserDelete(&user, nil)
wg.Wait()
})
@@ -504,22 +507,23 @@ func TestAfterUserDelete(t *testing.T) {
},
}
testDeleteViewer := func(ctx context.Context, req *v1.WriteRequest) error {
testDeleteViewer := func(ctx context.Context, req *v1.MutateRequest) error {
defer wg.Done()
require.NotNil(t, req)
require.Equal(t, "org-3", req.Namespace)
require.Len(t, req.Operations, 1)
require.NotNil(t, req.Deletes)
require.Len(t, req.Deletes.TupleKeys, 1)
deleteTuple := req.Deletes.TupleKeys[0]
require.Equal(t, "user:viewer456", deleteTuple.User)
require.Equal(t, "assignee", deleteTuple.Relation)
require.Equal(t, "role:basic_viewer", deleteTuple.Object)
op := req.Operations[0]
require.NotNil(t, op)
deleteOp := op.GetDeleteUserOrgRole()
require.NotNil(t, deleteOp)
require.Equal(t, "viewer456", deleteOp.User)
require.Equal(t, "Viewer", deleteOp.Role)
return nil
}
b.zClient = &FakeZanzanaClient{writeCallback: testDeleteViewer}
b.zClient = &FakeZanzanaClient{mutateCallback: testDeleteViewer}
b.AfterUserDelete(&user, nil)
wg.Wait()
})
@@ -563,47 +567,3 @@ func TestAfterUserDelete(t *testing.T) {
// If we get here without panic, the test passes
})
}
func TestCreateUserBasicRoleTuple(t *testing.T) {
t.Run("should create tuple for Admin role", func(t *testing.T) {
tuple := createUserBasicRoleTuple("user123", "Admin")
require.NotNil(t, tuple)
require.Equal(t, "user:user123", tuple.User)
require.Equal(t, "assignee", tuple.Relation)
require.Equal(t, "role:basic_admin", tuple.Object)
})
t.Run("should create tuple for Editor role", func(t *testing.T) {
tuple := createUserBasicRoleTuple("user456", "Editor")
require.NotNil(t, tuple)
require.Equal(t, "user:user456", tuple.User)
require.Equal(t, "assignee", tuple.Relation)
require.Equal(t, "role:basic_editor", tuple.Object)
})
t.Run("should create tuple for Viewer role", func(t *testing.T) {
tuple := createUserBasicRoleTuple("user789", "Viewer")
require.NotNil(t, tuple)
require.Equal(t, "user:user789", tuple.User)
require.Equal(t, "assignee", tuple.Relation)
require.Equal(t, "role:basic_viewer", tuple.Object)
})
t.Run("should create tuple for None role", func(t *testing.T) {
tuple := createUserBasicRoleTuple("user000", "None")
require.NotNil(t, tuple)
require.Equal(t, "user:user000", tuple.User)
require.Equal(t, "assignee", tuple.Relation)
require.Equal(t, "role:basic_none", tuple.Object)
})
t.Run("should return nil for empty role", func(t *testing.T) {
tuple := createUserBasicRoleTuple("user123", "")
require.Nil(t, tuple)
})
t.Run("should return nil for invalid role", func(t *testing.T) {
tuple := createUserBasicRoleTuple("user123", "InvalidRole")
require.Nil(t, tuple)
})
}


@@ -404,32 +404,47 @@ func (rc *RepositoryController) addSyncJob(ctx context.Context, obj *provisionin
return nil
}
func (rc *RepositoryController) determineSyncStatus(obj *provisioning.Repository, syncOptions *provisioning.SyncJobOptions, healthStatus provisioning.HealthStatus) *provisioning.SyncStatus {
func (rc *RepositoryController) determineSyncStatusOps(obj *provisioning.Repository, syncOptions *provisioning.SyncJobOptions, healthStatus provisioning.HealthStatus) []map[string]interface{} {
const unhealthyMessage = "Repository is unhealthy"
hasUnhealthyMessage := len(obj.Status.Sync.Message) > 0 && obj.Status.Sync.Message[0] == unhealthyMessage
var patchOperations []map[string]interface{}
switch {
case syncOptions != nil:
return &provisioning.SyncStatus{
State: provisioning.JobStatePending,
LastRef: obj.Status.Sync.LastRef,
Started: time.Now().UnixMilli(),
}
// We will try to trigger a new sync job if we have sync options
patchOperations = append(patchOperations, map[string]interface{}{
"op": "replace",
"path": "/status/sync/state",
"value": provisioning.JobStatePending,
})
patchOperations = append(patchOperations, map[string]interface{}{
"op": "replace",
"path": "/status/sync/started",
"value": int64(0),
})
case healthStatus.Healthy && hasUnhealthyMessage: // if the repository is healthy and the message is set, clear it
// FIXME: is this the clearest way to do this? Should we introduce another status or way of handling more
// specific errors?
return &provisioning.SyncStatus{
LastRef: obj.Status.Sync.LastRef,
}
patchOperations = append(patchOperations, map[string]interface{}{
"op": "replace",
"path": "/status/sync/message",
"value": []string{},
})
case !healthStatus.Healthy && !hasUnhealthyMessage: // if the repository is unhealthy and the message is not already set, set it
return &provisioning.SyncStatus{
State: provisioning.JobStateError,
Message: []string{unhealthyMessage},
LastRef: obj.Status.Sync.LastRef,
}
default:
return nil
patchOperations = append(patchOperations, map[string]interface{}{
"op": "replace",
"path": "/status/sync/state",
"value": provisioning.JobStateError,
})
patchOperations = append(patchOperations, map[string]interface{}{
"op": "replace",
"path": "/status/sync/message",
"value": []string{unhealthyMessage},
})
}
return patchOperations
}
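The granular operations built above follow RFC 6902 JSON Patch: each entry replaces a single field under `/status/sync` rather than swapping out the whole object, so fields that are not mentioned keep their stored values. A minimal sketch of how such a patch body marshals (the `buildSyncPatch` helper and the plain `"pending"` state string are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildSyncPatch mirrors the shape used above: one replace operation per
// field under /status/sync, serialized as an RFC 6902 patch document.
func buildSyncPatch(state string) ([]byte, error) {
	ops := []map[string]interface{}{
		{"op": "replace", "path": "/status/sync/state", "value": state},
		{"op": "replace", "path": "/status/sync/started", "value": int64(0)},
	}
	return json.Marshal(ops)
}

func main() {
	b, err := buildSyncPatch("pending")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```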
//nolint:gocyclo
@@ -509,13 +524,7 @@ func (rc *RepositoryController) process(item *queueItem) error {
// determine the sync strategy and sync status to apply
syncOptions := rc.determineSyncStrategy(ctx, obj, repo, shouldResync, healthStatus)
if syncStatus := rc.determineSyncStatus(obj, syncOptions, healthStatus); syncStatus != nil {
patchOperations = append(patchOperations, map[string]interface{}{
"op": "replace",
"path": "/status/sync",
"value": syncStatus,
})
}
patchOperations = append(patchOperations, rc.determineSyncStatusOps(obj, syncOptions, healthStatus)...)
// Apply all patch operations
if len(patchOperations) > 0 {
@@ -525,6 +534,8 @@ func (rc *RepositoryController) process(item *queueItem) error {
}
}
// QUESTION: should we trigger the sync job after we have applied all patch operations or before?
// Is there a risk of a race condition here?
// Trigger sync job after we have applied all patch operations
if syncOptions != nil {
if err := rc.addSyncJob(ctx, obj, syncOptions); err != nil {


@@ -132,28 +132,36 @@ func (c *jobsConnector) Connect(
}
spec.Repository = name
// If a sync job is being created, we should update its status to pending.
job, err := c.jobs.GetJobQueue().Insert(ctx, cfg.Namespace, spec)
if err != nil {
responder.Error(err)
return
}
// For pull jobs, update the sync status:
// patch the sync status 'state' to 'pending', and reset the 'started' field, leaving other fields unchanged.
// Intentionally maintain the previous job name until the job is picked up.
if spec.Pull != nil {
err = c.statusPatcherProvider.GetStatusPatcher().Patch(ctx, cfg, map[string]interface{}{
"op": "replace",
"path": "/status/sync",
"value": &provisioning.SyncStatus{
State: provisioning.JobStatePending,
LastRef: cfg.Status.Sync.LastRef,
Started: time.Now().UnixMilli(),
err = c.statusPatcherProvider.GetStatusPatcher().Patch(ctx, cfg,
map[string]interface{}{
"op": "replace",
"path": "/status/sync/state",
"value": provisioning.JobStatePending,
},
})
map[string]interface{}{
// Use "replace" instead of "remove" since "remove" fails if the path does not exist (RFC 6902).
// "started" field uses "omitempty", so it may be missing in the JSON.
"op": "replace",
"path": "/status/sync/started",
"value": int64(0),
},
)
if err != nil {
responder.Error(err)
return
}
}
job, err := c.jobs.GetJobQueue().Insert(ctx, cfg.Namespace, spec)
if err != nil {
responder.Error(err)
return
}
responder.Object(http.StatusAccepted, job)
}), 30*time.Second), nil
}


@@ -110,25 +110,35 @@ func (r *SyncWorker) Process(ctx context.Context, repo repository.Repository, jo
}
syncStatus := job.Status.ToSyncStatus(job.Name)
// Preserve last ref as we use replace operation
// Preserve last ref
lastRef := repo.Config().Status.Sync.LastRef
syncStatus.LastRef = lastRef
if syncStatus.State == "" {
syncStatus.State = provisioning.JobStateWorking
}
// Ensure the sync state is set to 'working' if not already set or still pending.
// FIXME: This should not be needed as the progress recorder should have set it to 'working' by now.
syncStatus.State = provisioning.JobStateWorking
// Update sync status at start using JSON patch
// Update sync status at start using granular JSON patch operations
// Only patch fields that are actually being set to avoid overwriting with zero values
patchOperations := []map[string]interface{}{
{
"op": "replace",
"path": "/status/sync",
"value": syncStatus,
"path": "/status/sync/state",
"value": syncStatus.State,
},
{
"op": "replace",
"path": "/status/sync/job",
"value": syncStatus.JobID,
},
{
"op": "replace",
"path": "/status/sync/started",
"value": syncStatus.Started,
},
}
progress.SetMessage(ctx, "update sync status at start")
statusCtx, statusSpan := r.tracer.Start(ctx, "provisioning.sync.update_start_status")
if err := r.patchStatus(statusCtx, cfg, patchOperations...); err != nil {
statusSpan.End()
@@ -174,14 +184,13 @@ func (r *SyncWorker) Process(ctx context.Context, repo repository.Repository, jo
}
syncSpan.End()
// Create sync status and set hash if successful
if syncStatus.State == provisioning.JobStateSuccess {
if syncStatus.State != provisioning.JobStateError {
syncStatus.LastRef = currentRef
} else {
// Preserve the original lastRef on error
syncStatus.LastRef = lastRef
}
// Update final status using JSON patch
progress.SetMessage(ctx, "update status and stats")
patchOperations = []map[string]interface{}{
{


@@ -115,17 +115,18 @@ func TestSyncWorker_Process(t *testing.T) {
rw.MockRepository.On("Config").Return(repoConfig)
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.MatchedBy(func(patch map[string]interface{}) bool {
if patch["op"] != "replace" || patch["path"] != "/status/sync" {
return false
}
if patch["value"].(provisioning.SyncStatus).LastRef != "existing-ref" || patch["value"].(provisioning.SyncStatus).JobID != "test-job" {
return false
}
return true
})).Return(errors.New("failed to patch status"))
// Expect granular patches for state, job, and started fields
rpf.On("Execute", mock.Anything, repoConfig,
mock.MatchedBy(func(patch map[string]interface{}) bool {
return patch["op"] == "replace" && patch["path"] == "/status/sync/state"
}),
mock.MatchedBy(func(patch map[string]interface{}) bool {
return patch["op"] == "replace" && patch["path"] == "/status/sync/job"
}),
mock.MatchedBy(func(patch map[string]interface{}) bool {
return patch["op"] == "replace" && patch["path"] == "/status/sync/started"
}),
).Return(errors.New("failed to patch status"))
},
expectedError: "update repo with job status at start: failed to patch status",
},
@@ -151,9 +152,9 @@ func TestSyncWorker_Process(t *testing.T) {
// Storage is migrated
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status update succeeds
// Initial status update succeeds - expect granular patches
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything).Return(nil).Once()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Repository resources creation fails
rrf.On("Client", mock.Anything, mock.Anything).Return(nil, errors.New("failed to create repository resources client"))
@@ -188,9 +189,9 @@ func TestSyncWorker_Process(t *testing.T) {
// Storage is migrated
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status update succeeds
// Initial status update succeeds - expect granular patches
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything).Return(nil).Once()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Repository resources creation succeeds
rrf.On("Client", mock.Anything, mock.Anything).Return(&resources.MockRepositoryResources{}, nil)
@@ -224,9 +225,9 @@ func TestSyncWorker_Process(t *testing.T) {
// Storage is migrated
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status update
// Initial status update - expect granular patches
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything).Return(nil)
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Setup resources and clients
mockRepoResources := resources.NewMockRepositoryResources(t)
@@ -254,7 +255,7 @@ func TestSyncWorker_Process(t *testing.T) {
}
syncStatus := patch["value"].(provisioning.SyncStatus)
return syncStatus.LastRef == "new-ref" && syncStatus.State == provisioning.JobStateSuccess
})).Return(nil)
})).Return(nil).Once()
},
expectedError: "",
},
@@ -277,9 +278,9 @@ func TestSyncWorker_Process(t *testing.T) {
// Storage is migrated
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status update
// Initial status update - expect granular patches
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything).Return(nil)
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Setup resources and clients
mockRepoResources := resources.NewMockRepositoryResources(t)
@@ -308,7 +309,7 @@ func TestSyncWorker_Process(t *testing.T) {
patch["path"] == "/status/sync" &&
syncStatus.LastRef == "existing-ref" && // LastRef should not change on failure
syncStatus.State == provisioning.JobStateError
})).Return(nil)
})).Return(nil).Once()
},
expectedError: "sync operation failed",
},
@@ -334,7 +335,9 @@ func TestSyncWorker_Process(t *testing.T) {
pr.On("SetMessage", mock.Anything, mock.Anything).Return()
pr.On("StrictMaxErrors", 20).Return()
pr.On("Complete", mock.Anything, mock.Anything).Return(provisioning.JobStatus{State: provisioning.JobStateSuccess})
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything).Return(nil)
// Initial patch with granular updates, final patch with full sync status
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
s.On("Sync", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return("new-ref", nil)
},
expectedError: "",
@@ -355,10 +358,13 @@ func TestSyncWorker_Process(t *testing.T) {
mockRepoResources.On("Stats", mock.Anything).Return(nil, nil)
rrf.On("Client", mock.Anything, mock.Anything).Return(mockRepoResources, nil)
// Verify only sync status is patched
// Initial patch with granular updates
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Verify only sync status is patched for final update
rpf.On("Execute", mock.Anything, mock.Anything, mock.MatchedBy(func(patch map[string]interface{}) bool {
return patch["path"] == "/status/sync"
})).Return(nil)
})).Return(nil).Once()
// Simple mocks for other calls
mockClients := resources.NewMockResourceClients(t)
@@ -381,7 +387,8 @@ func TestSyncWorker_Process(t *testing.T) {
}
rw.MockRepository.On("Config").Return(repoConfig)
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Initial patch with granular updates
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
mockRepoResources := resources.NewMockRepositoryResources(t)
stats := &provisioning.ResourceStats{
@@ -468,10 +475,13 @@ func TestSyncWorker_Process(t *testing.T) {
mockRepoResources.On("Stats", mock.Anything).Return(stats, nil)
rrf.On("Client", mock.Anything, mock.Anything).Return(mockRepoResources, nil)
// Initial patch with granular updates
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Verify only sync status is patched (multiple stats should be ignored)
rpf.On("Execute", mock.Anything, mock.Anything, mock.MatchedBy(func(patch map[string]interface{}) bool {
return patch["path"] == "/status/sync"
})).Return(nil)
})).Return(nil).Once()
// Simple mocks for other calls
mockClients := resources.NewMockResourceClients(t)
@@ -495,8 +505,8 @@ func TestSyncWorker_Process(t *testing.T) {
rw.MockRepository.On("Config").Return(repoConfig)
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status patch succeeds
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Initial status patch succeeds - expect granular patches
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
// Setup resources and clients
mockRepoResources := resources.NewMockRepositoryResources(t)
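The updated expectations above lean on two testify ideas: `mock.MatchedBy`, which matches an argument by predicate rather than by value, and `.Once()`, which caps how many times an expectation may be consumed. A minimal dependency-free sketch of both (not testify itself; `expectation` and `patch` are illustrative names):

```go
package main

import "fmt"

// patch mirrors the map[string]interface{} payload asserted on above.
type patch = map[string]interface{}

// expectation is a toy version of a testify expectation: a predicate
// matcher (like mock.MatchedBy) plus a call budget (like .Once()).
type expectation struct {
	match     func(patch) bool
	remaining int // .Once() => 1
}

func (e *expectation) call(arg patch) error {
	if !e.match(arg) {
		return fmt.Errorf("argument did not match expectation")
	}
	if e.remaining == 0 {
		return fmt.Errorf("expectation already consumed")
	}
	e.remaining--
	return nil
}

func main() {
	exp := &expectation{
		// Predicate mirroring the tests: only /status/sync patches match.
		match:     func(p patch) bool { return p["path"] == "/status/sync" },
		remaining: 1,
	}
	fmt.Println(exp.call(patch{"path": "/status/sync"})) // <nil>
	fmt.Println(exp.call(patch{"path": "/status/sync"})) // expectation already consumed
}
```

Adding `.Once()` to each `Execute` expectation is what lets the tests distinguish the initial granular patch (five arguments) from the final full sync-status patch (three arguments) without one expectation silently absorbing both calls.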


@@ -35,6 +35,7 @@ import (
clientset "github.com/grafana/grafana/apps/provisioning/pkg/generated/clientset/versioned"
client "github.com/grafana/grafana/apps/provisioning/pkg/generated/clientset/versioned/typed/provisioning/v0alpha1"
informers "github.com/grafana/grafana/apps/provisioning/pkg/generated/informers/externalversions"
jobsvalidation "github.com/grafana/grafana/apps/provisioning/pkg/jobs"
"github.com/grafana/grafana/apps/provisioning/pkg/loki"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/apimachinery/identity"
@@ -576,10 +577,10 @@ func (b *APIBuilder) Validate(ctx context.Context, a admission.Attributes, o adm
return nil
}
// FIXME: Do nothing for Jobs for now
_, ok = obj.(*provisioning.Job)
// Validate Jobs
job, ok := obj.(*provisioning.Job)
if ok {
return nil
return jobsvalidation.ValidateJob(job)
}
repo, err := b.asRepository(ctx, obj, a.GetOldObject())
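The hunk above replaces the "do nothing for Jobs" FIXME with real admission validation: type-assert the object, and when it is a Job, delegate to the validator instead of returning nil. A hedged sketch of that pattern with stand-in types (`Job` and `validateJob` here are illustrative, not the real provisioning API):

```go
package main

import (
	"errors"
	"fmt"
)

// Job is a hypothetical stand-in for *provisioning.Job.
type Job struct{ Action string }

// validateJob stands in for jobsvalidation.ValidateJob.
func validateJob(j *Job) error {
	if j.Action == "" {
		return errors.New("job action is required")
	}
	return nil
}

// validate mirrors the admission hook: Jobs now get their own validator
// instead of being waved through; other kinds fall through.
func validate(obj interface{}) error {
	if job, ok := obj.(*Job); ok {
		return validateJob(job)
	}
	return nil // non-Job objects continue to repository validation
}

func main() {
	fmt.Println(validate(&Job{Action: "sync"})) // <nil>
	fmt.Println(validate(&Job{}))               // job action is required
}
```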


@@ -128,6 +128,10 @@ func convertToK8sResource(
return nil, fmt.Errorf("failed to get metadata: %w", err)
}
meta.SetFolder(rule.NamespaceUID)
// Keep metadata label in sync with folder annotation for downstream consumers
if rule.NamespaceUID != "" {
k8sRule.Labels[model.FolderLabelKey] = rule.NamespaceUID
}
if rule.UpdatedBy != nil {
meta.SetUpdatedBy(string(*rule.UpdatedBy))
k8sRule.SetUpdatedBy(string(*rule.UpdatedBy))
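One thing to watch with the added label write: in Go, assigning into a nil map panics, so this pattern is only safe if `k8sRule.Labels` is guaranteed to be initialized earlier. A defensive sketch of the same idea (`folderLabelKey` is an assumed key name for illustration, standing in for `model.FolderLabelKey`):

```go
package main

import "fmt"

const folderLabelKey = "grafana_folder" // assumed key name for illustration

// setFolderLabel initializes the map before writing, since assigning
// into a nil map panics in Go.
func setFolderLabel(labels map[string]string, folderUID string) map[string]string {
	if folderUID == "" {
		return labels // nothing to sync
	}
	if labels == nil {
		labels = map[string]string{}
	}
	labels[folderLabelKey] = folderUID
	return labels
}

func main() {
	labels := setFolderLabel(nil, "folder-123")
	fmt.Println(labels[folderLabelKey]) // folder-123
}
```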


@@ -76,6 +76,10 @@ func convertToK8sResource(
return nil, fmt.Errorf("failed to get metadata: %w", err)
}
meta.SetFolder(rule.NamespaceUID)
// Keep metadata label in sync with folder annotation for downstream consumers
if rule.NamespaceUID != "" {
k8sRule.Labels[model.FolderLabelKey] = rule.NamespaceUID
}
if rule.UpdatedBy != nil {
meta.SetUpdatedBy(string(*rule.UpdatedBy))
k8sRule.SetUpdatedBy(string(*rule.UpdatedBy))


@@ -104,7 +104,7 @@ func (s *legacyStorage) Get(ctx context.Context, name string, _ *metav1.GetOptio
return obj, err
}
func (s *legacyStorage) Create(ctx context.Context, obj runtime.Object, _ rest.ValidateObjectFunc, _ *metav1.CreateOptions) (runtime.Object, error) {
func (s *legacyStorage) Create(ctx context.Context, obj runtime.Object, createValidation rest.ValidateObjectFunc, _ *metav1.CreateOptions) (runtime.Object, error) {
info, err := request.NamespaceInfoFrom(ctx, true)
if err != nil {
return nil, err
@@ -114,6 +114,11 @@ func (s *legacyStorage) Create(ctx context.Context, obj runtime.Object, _ rest.V
if err != nil {
return nil, err
}
if createValidation != nil {
if err := createValidation(ctx, obj); err != nil {
return nil, err
}
}
p, ok := obj.(*model.RecordingRule)
if !ok {


@@ -14,13 +14,17 @@ import (
"github.com/grafana/grafana/apps/alerting/rules/pkg/apis"
rulesApp "github.com/grafana/grafana/apps/alerting/rules/pkg/app"
rulesAppConfig "github.com/grafana/grafana/apps/alerting/rules/pkg/app/config"
"github.com/grafana/grafana/pkg/apimachinery/identity"
grafanarest "github.com/grafana/grafana/pkg/apiserver/rest"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/registry/apps/alerting/rules/alertrule"
"github.com/grafana/grafana/pkg/registry/apps/alerting/rules/recordingrule"
"github.com/grafana/grafana/pkg/services/apiserver/appinstaller"
"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
reqns "github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
"github.com/grafana/grafana/pkg/services/ngalert"
ngmodels "github.com/grafana/grafana/pkg/services/ngalert/models"
"github.com/grafana/grafana/pkg/services/ngalert/notifier"
"github.com/grafana/grafana/pkg/setting"
)
@@ -50,11 +54,66 @@ func RegisterAppInstaller(
ng: ng,
}
provider := simple.NewAppProvider(apis.LocalManifest(), nil, rulesApp.New)
appSpecificConfig := rulesAppConfig.RuntimeConfig{
// Validate folder existence using the folder service
FolderValidator: func(ctx context.Context, folderUID string) (bool, error) {
if folderUID == "" {
return false, nil
}
orgID, err := reqns.OrgIDForList(ctx)
user, _ := identity.GetRequester(ctx)
if (err != nil || orgID < 1) && user != nil {
orgID = user.GetOrgID()
}
if user == nil || orgID < 1 {
// If we can't resolve identity/org in this context, don't block creation based on existence
return true, nil
}
// Use the RuleStore to check namespace (folder) visibility
_, err = ng.Api.RuleStore.GetNamespaceByUID(ctx, folderUID, orgID, user)
if err != nil {
return false, nil
}
return true, nil
},
BaseEvaluationInterval: ng.Cfg.UnifiedAlerting.BaseInterval,
ReservedLabelKeys: ngmodels.LabelsUserCannotSpecify,
// Validate that the configured notification receiver exists in the Alertmanager config
NotificationSettingsValidator: func(ctx context.Context, receiver string) (bool, error) {
if receiver == "" {
return false, nil
}
orgID, err := reqns.OrgIDForList(ctx)
if err != nil || orgID < 1 {
if user, _ := identity.GetRequester(ctx); user != nil {
orgID = user.GetOrgID()
}
}
if orgID < 1 {
// Without org context, skip validation rather than block
return true, nil
}
provider := notifier.NewCachedNotificationSettingsValidationService(ng.Api.AlertingStore)
vd, err := provider.Validator(ctx, orgID)
if err != nil {
log.New("alerting.rules.app").Error("failed to create notification settings validator", "error", err)
// If we cannot build a validator, don't block admission
return true, nil
}
// Only validate receiver presence; construct minimal settings
if err := vd.Validate(ngmodels.NotificationSettings{Receiver: receiver}); err != nil {
return false, nil
}
return true, nil
},
}
provider := simple.NewAppProvider(apis.LocalManifest(), appSpecificConfig, rulesApp.New)
appConfig := app.Config{
KubeConfig: restclient.Config{}, // this will be overridden by the installer's InitializeApp method
ManifestData: *apis.LocalManifest().ManifestData,
KubeConfig: restclient.Config{}, // this will be overridden by the installer's InitializeApp method
ManifestData: *apis.LocalManifest().ManifestData,
SpecificConfig: appSpecificConfig,
}
i, err := appsdkapiserver.NewDefaultAppInstaller(provider, appConfig, &apis.GoTypeAssociator{})
@@ -81,7 +140,7 @@ func (a *AlertingRulesAppInstaller) GetAuthorizer() authorizer.Authorizer {
}
func (a *AlertingRulesAppInstaller) GetLegacyStorage(gvr schema.GroupVersionResource) grafanarest.Storage {
namespacer := request.GetNamespaceMapper(a.cfg)
namespacer := reqns.GetNamespaceMapper(a.cfg)
switch gvr {
case recordingrule.ResourceInfo.GroupVersionResource():
return recordingrule.NewStorage(*a.ng.Api.AlertRules, namespacer)


@@ -847,7 +847,7 @@ func Initialize(ctx context.Context, cfg *setting.Cfg, opts Options, apiOpts api
apiService := api4.ProvideService(cfg, routeRegisterImpl, accessControl, userService, authinfoimplService, ossGroups, identitySynchronizer, orgService, ldapImpl, userAuthTokenService, bundleregistryService)
dashboardsAPIBuilder := dashboard.RegisterAPIService(cfg, featureToggles, apiserverService, dashboardService, dashboardProvisioningService, service15, dashboardServiceImpl, dashboardPermissionsService, accessControl, accessClient, provisioningServiceImpl, dashboardsStore, registerer, sqlStore, tracingService, resourceClient, dualwriteService, sortService, quotaService, libraryPanelService, eventualRestConfigProvider, userService, libraryElementService, publicDashboardServiceImpl)
snapshotsAPIBuilder := dashboardsnapshot.RegisterAPIService(serviceImpl, apiserverService, cfg, featureToggles, sqlStore, registerer)
dataSourceAPIBuilder, err := datasource.RegisterAPIService(configProvider, featureToggles, apiserverService, middlewareHandler, scopedPluginDatasourceProvider, plugincontextProvider, accessControl, registerer)
dataSourceAPIBuilder, err := datasource.RegisterAPIService(featureToggles, apiserverService, middlewareHandler, scopedPluginDatasourceProvider, plugincontextProvider, accessControl, registerer, sourcesService)
if err != nil {
return nil, err
}
@@ -1485,7 +1485,7 @@ func InitializeForTest(ctx context.Context, t sqlutil.ITestDB, testingT interfac
apiService := api4.ProvideService(cfg, routeRegisterImpl, accessControl, userService, authinfoimplService, ossGroups, identitySynchronizer, orgService, ldapImpl, userAuthTokenService, bundleregistryService)
dashboardsAPIBuilder := dashboard.RegisterAPIService(cfg, featureToggles, apiserverService, dashboardService, dashboardProvisioningService, service15, dashboardServiceImpl, dashboardPermissionsService, accessControl, accessClient, provisioningServiceImpl, dashboardsStore, registerer, sqlStore, tracingService, resourceClient, dualwriteService, sortService, quotaService, libraryPanelService, eventualRestConfigProvider, userService, libraryElementService, publicDashboardServiceImpl)
snapshotsAPIBuilder := dashboardsnapshot.RegisterAPIService(serviceImpl, apiserverService, cfg, featureToggles, sqlStore, registerer)
dataSourceAPIBuilder, err := datasource.RegisterAPIService(configProvider, featureToggles, apiserverService, middlewareHandler, scopedPluginDatasourceProvider, plugincontextProvider, accessControl, registerer)
dataSourceAPIBuilder, err := datasource.RegisterAPIService(featureToggles, apiserverService, middlewareHandler, scopedPluginDatasourceProvider, plugincontextProvider, accessControl, registerer, sourcesService)
if err != nil {
return nil, err
}


@@ -405,14 +405,6 @@ var (
Stage: FeatureStagePublicPreview,
Owner: identityAccessTeam,
},
{
Name: "panelMonitoring",
Description: "Enables panel monitoring through logs and measurements",
Stage: FeatureStageGeneralAvailability,
Expression: "true", // enabled by default
Owner: grafanaDatavizSquad,
FrontendOnly: true,
},
{
Name: "enableNativeHTTPHistogram",
Description: "Enables native HTTP Histograms",
@@ -971,6 +963,13 @@ var (
},
{
Name: "dashboardLibrary",
Description: "Enable dashboard library experiments that are production ready",
Stage: FeatureStageExperimental,
Owner: grafanaSharingSquad,
FrontendOnly: false,
},
{
Name: "suggestedDashboards",
Description: "Enable suggested dashboards when creating new dashboards",
Stage: FeatureStageExperimental,
Owner: grafanaSharingSquad,


@@ -52,7 +52,6 @@ reportingRetries,preview,@grafana/grafana-operator-experience-squad,false,true,f
sseGroupByDatasource,experimental,@grafana/observability-metrics,false,false,false
lokiRunQueriesInParallel,privatePreview,@grafana/observability-logs,false,false,false
externalServiceAccounts,preview,@grafana/identity-access-team,false,false,false
panelMonitoring,GA,@grafana/dataviz-squad,false,false,true
enableNativeHTTPHistogram,experimental,@grafana/grafana-backend-services-squad,false,true,false
disableClassicHTTPHistogram,experimental,@grafana/grafana-backend-services-squad,false,true,false
formatString,GA,@grafana/dataviz-squad,false,false,true
@@ -127,6 +126,7 @@ disableNumericMetricsSortingInExpressions,experimental,@grafana/oss-big-tent,fal
grafanaManagedRecordingRules,experimental,@grafana/alerting-squad,false,false,false
queryLibrary,preview,@grafana/sharing-squad,false,false,false
dashboardLibrary,experimental,@grafana/sharing-squad,false,false,false
suggestedDashboards,experimental,@grafana/sharing-squad,false,false,false
logsExploreTableDefaultVisualization,experimental,@grafana/observability-logs,false,false,true
alertingListViewV2,privatePreview,@grafana/alerting-squad,false,false,true
alertingDisableSendAlertsExternal,experimental,@grafana/alerting-squad,false,false,false


@@ -219,10 +219,6 @@ const (
// Automatic service account and token setup for plugins
FlagExternalServiceAccounts = "externalServiceAccounts"
// FlagPanelMonitoring
// Enables panel monitoring through logs and measurements
FlagPanelMonitoring = "panelMonitoring"
// FlagEnableNativeHTTPHistogram
// Enables native HTTP Histograms
FlagEnableNativeHTTPHistogram = "enableNativeHTTPHistogram"
@@ -516,9 +512,13 @@ const (
FlagQueryLibrary = "queryLibrary"
// FlagDashboardLibrary
// Enable suggested dashboards when creating new dashboards
// Enable dashboard library experiments that are production ready
FlagDashboardLibrary = "dashboardLibrary"
// FlagSuggestedDashboards
// Enable suggested dashboards when creating new dashboards
FlagSuggestedDashboards = "suggestedDashboards"
// FlagLogsExploreTableDefaultVisualization
// Sets the logs table as default visualisation in logs explore
FlagLogsExploreTableDefaultVisualization = "logsExploreTableDefaultVisualization"


@@ -1082,14 +1082,14 @@
{
"metadata": {
"name": "dashboardLibrary",
"resourceVersion": "1760051989635",
"resourceVersion": "1762521182817",
"creationTimestamp": "2025-09-26T16:02:12Z",
"annotations": {
"grafana.app/updatedTimestamp": "2025-10-09 23:19:49.635811 +0000 UTC"
"grafana.app/updatedTimestamp": "2025-11-07 13:13:02.817210943 +0000 UTC"
}
},
"spec": {
"description": "Enable suggested dashboards when creating new dashboards",
"description": "Enable dashboard library experiments that are production ready",
"stage": "experimental",
"codeowner": "@grafana/sharing-squad"
}
@@ -2905,7 +2905,8 @@
"metadata": {
"name": "panelMonitoring",
"resourceVersion": "1753448760331",
"creationTimestamp": "2023-10-09T05:19:08Z"
"creationTimestamp": "2023-10-09T05:19:08Z",
"deletionTimestamp": "2025-11-06T15:46:51Z"
},
"spec": {
"description": "Enables panel monitoring through logs and measurements",
@@ -3779,6 +3780,18 @@
"codeowner": "@grafana/search-and-storage"
}
},
{
"metadata": {
"name": "suggestedDashboards",
"resourceVersion": "1762521182817",
"creationTimestamp": "2025-11-07T13:13:02Z"
},
"spec": {
"description": "Enable suggested dashboards when creating new dashboards",
"stage": "experimental",
"codeowner": "@grafana/sharing-squad"
}
},
{
"metadata": {
"name": "tableNextGen",


@@ -388,7 +388,9 @@ func (ss *FolderUnifiedStoreImpl) GetFolders(ctx context.Context, q folder.GetFo
}
if (q.WithFullpath || q.WithFullpathUIDs) && f.Fullpath == "" {
buildFolderFullPaths(f, relations, folderMap)
if err := buildFolderFullPaths(f, relations, folderMap); err != nil {
return nil, err
}
}
hits = append(hits, f)
@@ -559,15 +561,21 @@ func computeFullPath(parents []*folder.Folder) (string, string) {
return strings.Join(fullpath, "/"), strings.Join(fullpathUIDs, "/")
}
func buildFolderFullPaths(f *folder.Folder, relations map[string]string, folderMap map[string]*folder.Folder) {
func buildFolderFullPaths(f *folder.Folder, relations map[string]string, folderMap map[string]*folder.Folder) error {
titles := make([]string, 0)
uids := make([]string, 0)
titles = append(titles, f.Title)
uids = append(uids, f.UID)
i := 0
currentUID := f.UID
for currentUID != "" {
// This is just a circuit breaker to prevent infinite loops. We should never reach this limit.
if i > 1000 {
return fmt.Errorf("folder depth exceeds the maximum allowed depth; you might have a circular reference")
}
i++
parentUID, exists := relations[currentUID]
if !exists {
break
@@ -588,6 +596,7 @@ func buildFolderFullPaths(f *folder.Folder, relations map[string]string, folderM
f.Fullpath = strings.Join(util.Reverse(titles), "/")
f.FullpathUIDs = strings.Join(util.Reverse(uids), "/")
return nil
}
func shouldSkipFolder(f *folder.Folder, filterUIDs map[string]struct{}) bool {
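The circuit breaker above exists because walking `relations` (child UID to parent UID) terminates only if the chain reaches a root; a circular reference would otherwise loop forever. A self-contained sketch of the bounded walk (`fullPath` is an illustrative simplification of `buildFolderFullPaths`):

```go
package main

import (
	"fmt"
	"strings"
)

const maxFolderDepth = 1000 // circuit breaker, matching the limit above

// fullPath walks parent relations, bailing out if a cycle would otherwise
// loop forever. relations maps child UID -> parent UID.
func fullPath(uid string, titles map[string]string, relations map[string]string) (string, error) {
	parts := []string{}
	for depth := 0; uid != ""; depth++ {
		if depth > maxFolderDepth {
			return "", fmt.Errorf("folder depth exceeds the maximum allowed depth; possible circular reference")
		}
		parts = append([]string{titles[uid]}, parts...)
		uid = relations[uid] // missing key yields "", which ends the loop
	}
	return strings.Join(parts, "/"), nil
}

func main() {
	titles := map[string]string{"a": "Root", "b": "Child"}
	relations := map[string]string{"b": "a"}
	p, err := fullPath("b", titles, relations)
	fmt.Println(p, err) // Root/Child <nil>

	relations["a"] = "b" // introduce a cycle
	_, err = fullPath("b", titles, relations)
	fmt.Println(err != nil) // true
}
```

A depth counter is a cheaper guard than tracking visited UIDs in a set; since legitimate folder trees are shallow, any walk past 1000 levels can safely be treated as corrupt data.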


@@ -881,7 +881,7 @@ func TestBuildFolderFullPaths(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
buildFolderFullPaths(tt.args.f, tt.args.relations, tt.args.folderMap)
require.NoError(t, buildFolderFullPaths(tt.args.f, tt.args.relations, tt.args.folderMap))
require.Equal(t, tt.want.Fullpath, tt.args.f.Fullpath, "BuildFolderFullPaths() = %v, want %v", tt.args.f.Fullpath, tt.want.Fullpath)
require.Equal(t, tt.want.FullpathUIDs, tt.args.f.FullpathUIDs, "BuildFolderFullPaths() = %v, want %v", tt.args.f.FullpathUIDs, tt.want.FullpathUIDs)
require.Equal(t, tt.want.Title, tt.args.f.Title, "BuildFolderFullPaths() = %v, want %v", tt.args.f.Title, tt.want.Title)


@@ -473,17 +473,22 @@ func (l *LibraryElementService) getAllLibraryElements(c context.Context, signedI
if err != nil {
return err
}
// Every signed in user can see the general folder. The general folder might have "general" or the empty string as its UID.
var folderUIDS = []string{"general", ""}
folderMap := map[string]string{}
// Using a map for O(1) lookup instead of O(n) slice iteration
folderUIDSet := make(map[string]bool, len(fs)+2)
folderUIDSet["general"] = true
folderUIDSet[""] = true
folderMap := make(map[string]string, len(fs))
for _, f := range fs {
folderUIDS = append(folderUIDS, f.UID)
folderUIDSet[f.UID] = true
folderMap[f.UID] = f.Title
}
// if the user is not an admin, we need to filter out elements that are not in folders the user can see
for _, element := range elements {
if !signedInUser.HasRole(org.RoleAdmin) {
if !contains(folderUIDS, element.FolderUID) {
if !folderUIDSet[element.FolderUID] {
continue
}
}
@@ -522,10 +527,11 @@ func (l *LibraryElementService) getAllLibraryElements(c context.Context, signedI
})
}
var libraryElements []model.LibraryElement
var libraryElements []model.LibraryElementWithMeta
countBuilder := db.SQLBuilder{}
if folderFilter.includeGeneralFolder {
countBuilder.Write(selectLibraryElementDTOWithMeta)
countBuilder.Write(", '' as folder_uid ")
countBuilder.Write(getFromLibraryElementDTOWithMeta(l.SQLStore.GetDialect()))
countBuilder.Write(` WHERE le.org_id=? AND le.folder_id=0`, signedInUser.GetOrgID())
writeKindSQL(query, &countBuilder)
@@ -537,6 +543,7 @@ func (l *LibraryElementService) getAllLibraryElements(c context.Context, signedI
countBuilder.Write(" ")
}
countBuilder.Write(selectLibraryElementDTOWithMeta)
countBuilder.Write(", le.folder_uid as folder_uid ")
countBuilder.Write(getFromLibraryElementDTOWithMeta(l.SQLStore.GetDialect()))
countBuilder.Write(` WHERE le.org_id=? AND le.folder_id<>0`, signedInUser.GetOrgID())
writeKindSQL(query, &countBuilder)
@@ -550,8 +557,19 @@ func (l *LibraryElementService) getAllLibraryElements(c context.Context, signedI
return err
}
// Apply the same folder permission filtering to the count for non-admin users
totalCount := int64(len(libraryElements))
if !signedInUser.HasRole(org.RoleAdmin) {
totalCount = 0
for _, element := range libraryElements {
if folderUIDSet[element.FolderUID] {
totalCount++
}
}
}
result = model.LibraryElementSearchResult{
TotalCount: int64(len(libraryElements)),
TotalCount: totalCount,
Elements: retDTOs,
Page: query.Page,
PerPage: query.PerPage,
@@ -878,15 +896,6 @@ func (l *LibraryElementService) deleteLibraryElementsInFolderUID(c context.Conte
})
}
func contains(slice []string, element string) bool {
for _, item := range slice {
if item == element {
return true
}
}
return false
}
func getFoldersWithMatchingTitles(c context.Context, l *LibraryElementService, signedInUser identity.Requester, query model.SearchLibraryElementsQuery) ([]string, error) {
if len(strings.TrimSpace(query.SearchString)) <= 0 {
return nil, nil
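The refactor above swaps the removed slice-scanning `contains` helper for a map-based set, turning each per-element membership check from O(n) to O(1), and reuses the same set to filter `TotalCount` for non-admins. A self-contained sketch of the set construction and count filtering (`accessibleCount` is an illustrative condensation, not the real service method):

```go
package main

import "fmt"

// accessibleCount builds the folder-UID set once, so each per-element
// check is O(1) instead of scanning a slice.
func accessibleCount(elementFolders []string, visibleFolders []string) int {
	set := make(map[string]bool, len(visibleFolders)+2)
	// Every signed-in user can see the general folder ("general" or "").
	set["general"] = true
	set[""] = true
	for _, uid := range visibleFolders {
		set[uid] = true
	}
	n := 0
	for _, uid := range elementFolders {
		if set[uid] {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(accessibleCount([]string{"", "f1", "f2"}, []string{"f1"})) // 2
}
```

Applying the same set to the count query is what fixes the mismatch the new tests pin down: previously `TotalCount` reflected all elements while `Elements` was permission-filtered.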


@@ -1370,4 +1370,161 @@ func TestIntegration_GetAllLibraryElements(t *testing.T) {
require.NotEmpty(t, element.UID, "Should have a UID")
require.Equal(t, int64(0), element.Meta.ConnectedDashboards, "Should have no connected dashboards")
})
// Non-admin user permission tests
scenarioWithPanel(t, "When a non-admin user has folders but none of the library elements are in those folders, it should return empty result",
func(t *testing.T, sc scenarioContext) {
// Create library panels in the scenario folder
// nolint:staticcheck
command := getCreatePanelCommand(sc.folder.ID, sc.folder.UID, "Text - Library Panel2")
sc.reqContext.Req.Body = mockRequestBody(command)
resp := sc.service.createHandler(sc.reqContext)
require.Equal(t, 200, resp.Status())
// Create a different folder that the non-admin user has access to (but has no panels)
differentFolder := &folder.Folder{
ID: 2,
OrgID: 1,
UID: "uid_for_DifferentFolder",
Title: "DifferentFolder",
}
// Change user to non-admin and set their accessible folders to only the different folder
// This simulates a user who can see a folder but that folder doesn't contain any of the library elements
sc.reqContext.OrgRole = org.RoleViewer
sc.folderSvc.ExpectedFolders = []*folder.Folder{differentFolder}
sc.folderSvc.AddFolder(differentFolder)
resp = sc.service.getAllHandler(sc.reqContext)
require.Equal(t, 200, resp.Status())
var result libraryElementsSearch
err := json.Unmarshal(resp.Body(), &result)
require.NoError(t, err)
// TotalCount should be 0 for non-admin users since they can't access the folders with panels
require.Equal(t, int64(0), result.Result.TotalCount, "TotalCount should be 0 since user has no access to folders with panels")
require.Equal(t, 0, len(result.Result.Elements), "Elements should be empty since user has no access to folders with panels")
require.Equal(t, 1, result.Result.Page, "Should be on page 1")
require.Equal(t, 100, result.Result.PerPage, "Should have perPage 100")
})
scenarioWithPanel(t, "When a non-admin user has folders and some library elements are in those folders, it should return only accessible elements",
func(t *testing.T, sc scenarioContext) {
// Create a second folder that the non-admin user will have access to
accessibleFolder := &folder.Folder{
ID: 2,
OrgID: 1,
UID: "uid_for_AccessibleFolder",
Title: "AccessibleFolder",
}
// Create a library panel in the accessible folder (need to add it to fake service first)
sc.folderSvc.ExpectedFolder = accessibleFolder
sc.folderSvc.AddFolder(accessibleFolder)
// nolint:staticcheck
command := getCreatePanelCommand(accessibleFolder.ID, accessibleFolder.UID, "Accessible Panel")
sc.reqContext.Req.Body = mockRequestBody(command)
resp := sc.service.createHandler(sc.reqContext)
require.Equal(t, 200, resp.Status())
// Create another panel in a folder the user won't have access to
inaccessibleFolder := &folder.Folder{
ID: 3,
OrgID: 1,
UID: "uid_for_InaccessibleFolder",
Title: "InaccessibleFolder",
}
sc.folderSvc.ExpectedFolder = inaccessibleFolder
sc.folderSvc.AddFolder(inaccessibleFolder)
// nolint:staticcheck
command = getCreatePanelCommand(inaccessibleFolder.ID, inaccessibleFolder.UID, "Inaccessible Panel")
sc.reqContext.Req.Body = mockRequestBody(command)
resp = sc.service.createHandler(sc.reqContext)
require.Equal(t, 200, resp.Status())
// Change user to non-admin and set their accessible folders to only the accessible folder and scenario folder
// This will filter out the inaccessible folder
sc.reqContext.OrgRole = org.RoleViewer
sc.folderSvc.ExpectedFolders = []*folder.Folder{sc.folder, accessibleFolder}
resp = sc.service.getAllHandler(sc.reqContext)
require.Equal(t, 200, resp.Status())
var result libraryElementsSearch
err := json.Unmarshal(resp.Body(), &result)
require.NoError(t, err)
// TotalCount should match the number of accessible elements (2) for non-admin users
require.Equal(t, int64(2), result.Result.TotalCount, "TotalCount should be 2 (only accessible panels)")
require.Equal(t, 2, len(result.Result.Elements), "Elements should contain only 2 accessible panels")
require.Equal(t, 1, result.Result.Page, "Should be on page 1")
require.Equal(t, 100, result.Result.PerPage, "Should have perPage 100")
// Verify the returned panels are from accessible folders only
folderUIDs := make(map[string]bool)
for _, element := range result.Result.Elements {
folderUIDs[element.FolderUID] = true
require.Contains(t, []string{sc.folder.UID, accessibleFolder.UID}, element.FolderUID, "Element should be in accessible folder")
require.NotEqual(t, inaccessibleFolder.UID, element.FolderUID, "Element should not be from inaccessible folder")
}
require.True(t, folderUIDs[sc.folder.UID], "Should include panel from scenario folder")
require.True(t, folderUIDs[accessibleFolder.UID], "Should include panel from accessible folder")
})
scenarioWithPanel(t, "When a non-admin user has access to all folders containing library elements, it should return all elements",
func(t *testing.T, sc scenarioContext) {
// Create a second folder that the non-admin user will have access to
folder2 := &folder.Folder{
ID: 2,
OrgID: 1,
UID: "uid_for_Folder2",
Title: "Folder2",
}
sc.folderSvc.ExpectedFolder = folder2
sc.folderSvc.AddFolder(folder2)
// Create a library panel in folder2
// nolint:staticcheck
command := getCreatePanelCommand(folder2.ID, folder2.UID, "Panel in Folder2")
sc.reqContext.Req.Body = mockRequestBody(command)
resp := sc.service.createHandler(sc.reqContext)
require.Equal(t, 200, resp.Status())
// Create another panel in the original scenario folder
sc.folderSvc.ExpectedFolder = sc.folder
// nolint:staticcheck
command = getCreatePanelCommand(sc.folder.ID, sc.folder.UID, "Panel in ScenarioFolder")
sc.reqContext.Req.Body = mockRequestBody(command)
resp = sc.service.createHandler(sc.reqContext)
require.Equal(t, 200, resp.Status())
// Change user to non-admin and set their accessible folders to include all folders with panels
sc.reqContext.OrgRole = org.RoleViewer
sc.folderSvc.ExpectedFolders = []*folder.Folder{sc.folder, folder2}
resp = sc.service.getAllHandler(sc.reqContext)
require.Equal(t, 200, resp.Status())
var result libraryElementsSearch
err := json.Unmarshal(resp.Body(), &result)
require.NoError(t, err)
// Should return all 3 panels (1 from initial setup + 2 created in this test)
require.Equal(t, int64(3), result.Result.TotalCount, "Should return all 3 panels")
require.Equal(t, 3, len(result.Result.Elements), "Should have 3 elements")
require.Equal(t, 1, result.Result.Page, "Should be on page 1")
require.Equal(t, 100, result.Result.PerPage, "Should have perPage 100")
// Verify all panels are from the accessible folders
folderUIDs := make(map[string]int)
for _, element := range result.Result.Elements {
folderUIDs[element.FolderUID]++
require.Contains(t, []string{sc.folder.UID, folder2.UID}, element.FolderUID, "All elements should be in accessible folders")
require.Equal(t, int64(model.PanelElement), element.Kind, "Should be a panel element")
require.Equal(t, "text", element.Type, "Should be text panel")
}
require.Equal(t, 2, folderUIDs[sc.folder.UID], "Should have 2 panels in scenario folder")
require.Equal(t, 1, folderUIDs[folder2.UID], "Should have 1 panel in folder2")
})
}


@@ -8,15 +8,18 @@ import (
//go:generate mockery --name AuthInfoService --structname MockAuthInfoService --outpkg authinfotest --filename auth_info_service_mock.go --output ./authinfotest/
type AuthInfoService interface {
GetAuthInfo(ctx context.Context, query *GetAuthInfoQuery) (*UserAuth, error)
GetUserLabels(ctx context.Context, query GetUserLabelsQuery) (map[int64]string, error)
GetUsersRecentlyUsedLabel(ctx context.Context, query GetUserLabelsQuery) (map[int64]string, error)
GetUserAuthModuleLabels(ctx context.Context, userID int64) ([]string, error)
SetAuthInfo(ctx context.Context, cmd *SetAuthInfoCommand) error
UpdateAuthInfo(ctx context.Context, cmd *UpdateAuthInfoCommand) error
DeleteUserAuthInfo(ctx context.Context, userID int64) error
}
//go:generate mockery --name Store --structname MockAuthInfoStore --outpkg authinfotest --filename auth_info_store_mock.go --output ./authinfotest/
type Store interface {
GetAuthInfo(ctx context.Context, query *GetAuthInfoQuery) (*UserAuth, error)
GetUserLabels(ctx context.Context, query GetUserLabelsQuery) (map[int64]string, error)
GetUsersRecentlyUsedLabel(ctx context.Context, query GetUserLabelsQuery) (map[int64]string, error)
GetUserAuthModules(ctx context.Context, userID int64) ([]string, error)
SetAuthInfo(ctx context.Context, cmd *SetAuthInfoCommand) error
UpdateAuthInfo(ctx context.Context, cmd *UpdateAuthInfoCommand) error
DeleteUserAuthInfo(ctx context.Context, userID int64) error


@@ -67,11 +67,28 @@ func (s *Service) GetAuthInfo(ctx context.Context, query *login.GetAuthInfoQuery
return authInfo, nil
}
func (s *Service) GetUserLabels(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
// GetUserAuthModuleLabels returns all auth modules for a user, ordered by most recent first.
func (s *Service) GetUserAuthModuleLabels(ctx context.Context, userID int64) ([]string, error) {
modules, err := s.authInfoStore.GetUserAuthModules(ctx, userID)
if err != nil {
return nil, err
}
result := make([]string, 0, len(modules))
// modules should be unique and should not contain empty strings
for _, m := range modules {
label := login.GetAuthProviderLabel(m)
result = append(result, label)
}
return result, nil
}
func (s *Service) GetUsersRecentlyUsedLabel(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
if len(query.UserIDs) == 0 {
return map[int64]string{}, nil
}
return s.authInfoStore.GetUserLabels(ctx, query)
return s.authInfoStore.GetUsersRecentlyUsedLabel(ctx, query)
}
func (s *Service) setAuthInfoInCache(ctx context.Context, query *login.GetAuthInfoQuery, info *login.UserAuth) error {


@@ -0,0 +1,31 @@
package authinfoimpl
import (
"context"
"testing"
"github.com/grafana/grafana/pkg/services/login"
"github.com/grafana/grafana/pkg/services/login/authinfotest"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestAuthInfoService_GetUserAuthModuleLabels(t *testing.T) {
store := authinfotest.NewMockAuthInfoStore(t)
userID := int64(42)
// Input modules from store (order matters, uniqueness assumed)
modules := []string{login.OktaAuthModule, login.LDAPAuthModule, login.SAMLAuthModule}
store.On("GetUserAuthModules", mock.Anything, userID).Return(modules, nil)
svc := ProvideService(store, nil, nil)
actual, err := svc.GetUserAuthModuleLabels(context.Background(), userID)
require.NoError(t, err)
expected := []string{login.GetAuthProviderLabel(login.OktaAuthModule), login.GetAuthProviderLabel(login.LDAPAuthModule), login.GetAuthProviderLabel(login.SAMLAuthModule)}
// Verify labels mapped and order preserved
require.Equal(t, expected, actual)
}


@@ -82,7 +82,7 @@ func (s *Store) GetAuthInfo(ctx context.Context, query *login.GetAuthInfoQuery)
return userAuth, nil
}
func (s *Store) GetUserLabels(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
func (s *Store) GetUsersRecentlyUsedLabel(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
userAuths := []login.UserAuth{}
params := make([]interface{}, 0, len(query.UserIDs))
for _, id := range query.UserIDs {
@@ -105,6 +105,29 @@ func (s *Store) GetUserLabels(ctx context.Context, query login.GetUserLabelsQuer
return labelMap, nil
}
// GetUserAuthModules returns all auth modules a user has used, ordered by most recently used first.
func (s *Store) GetUserAuthModules(ctx context.Context, userID int64) ([]string, error) {
rows := make([]struct {
AuthModule string `xorm:"auth_module"`
}, 0)
err := s.sqlStore.WithDbSession(ctx, func(sess *db.Session) error {
return sess.Table("user_auth").Where("user_id = ?", userID).Desc("created").Cols("auth_module").Find(&rows)
})
if err != nil {
return nil, err
}
modules := make([]string, 0, len(rows))
seen := make(map[string]struct{}, len(rows))
for _, r := range rows {
if _, ok := seen[r.AuthModule]; ok {
continue
}
seen[r.AuthModule] = struct{}{}
modules = append(modules, r.AuthModule)
}
return modules, nil
}
func (s *Store) SetAuthInfo(ctx context.Context, cmd *login.SetAuthInfoCommand) error {
authUser := &login.UserAuth{
UserId: cmd.UserId,


@@ -45,7 +45,7 @@ func TestIntegrationAuthInfoStore(t *testing.T) {
UserId: 2,
}))
labels, err := store.GetUserLabels(ctx, login.GetUserLabelsQuery{UserIDs: []int64{1, 2}})
labels, err := store.GetUsersRecentlyUsedLabel(ctx, login.GetUserLabelsQuery{UserIDs: []int64{1, 2}})
require.NoError(t, err)
require.Len(t, labels, 2)


@@ -1,17 +1,163 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify
// Code generated by mockery v2.53.5. DO NOT EDIT.
package authinfotest
import (
"context"
context "context"
"github.com/grafana/grafana/pkg/services/login"
"github.com/grafana/grafana/pkg/services/user"
login "github.com/grafana/grafana/pkg/services/login"
mock "github.com/stretchr/testify/mock"
)
// MockAuthInfoService is an autogenerated mock type for the AuthInfoService type
type MockAuthInfoService struct {
mock.Mock
}
// DeleteUserAuthInfo provides a mock function with given fields: ctx, userID
func (_m *MockAuthInfoService) DeleteUserAuthInfo(ctx context.Context, userID int64) error {
ret := _m.Called(ctx, userID)
if len(ret) == 0 {
panic("no return value specified for DeleteUserAuthInfo")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, int64) error); ok {
r0 = rf(ctx, userID)
} else {
r0 = ret.Error(0)
}
return r0
}
// GetAuthInfo provides a mock function with given fields: ctx, query
func (_m *MockAuthInfoService) GetAuthInfo(ctx context.Context, query *login.GetAuthInfoQuery) (*login.UserAuth, error) {
ret := _m.Called(ctx, query)
if len(ret) == 0 {
panic("no return value specified for GetAuthInfo")
}
var r0 *login.UserAuth
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *login.GetAuthInfoQuery) (*login.UserAuth, error)); ok {
return rf(ctx, query)
}
if rf, ok := ret.Get(0).(func(context.Context, *login.GetAuthInfoQuery) *login.UserAuth); ok {
r0 = rf(ctx, query)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*login.UserAuth)
}
}
if rf, ok := ret.Get(1).(func(context.Context, *login.GetAuthInfoQuery) error); ok {
r1 = rf(ctx, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetUserAuthModuleLabels provides a mock function with given fields: ctx, userID
func (_m *MockAuthInfoService) GetUserAuthModuleLabels(ctx context.Context, userID int64) ([]string, error) {
ret := _m.Called(ctx, userID)
if len(ret) == 0 {
panic("no return value specified for GetUserAuthModuleLabels")
}
var r0 []string
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, int64) ([]string, error)); ok {
return rf(ctx, userID)
}
if rf, ok := ret.Get(0).(func(context.Context, int64) []string); ok {
r0 = rf(ctx, userID)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]string)
}
}
if rf, ok := ret.Get(1).(func(context.Context, int64) error); ok {
r1 = rf(ctx, userID)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetUsersRecentlyUsedLabel provides a mock function with given fields: ctx, query
func (_m *MockAuthInfoService) GetUsersRecentlyUsedLabel(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
ret := _m.Called(ctx, query)
if len(ret) == 0 {
panic("no return value specified for GetUsersRecentlyUsedLabel")
}
var r0 map[int64]string
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, login.GetUserLabelsQuery) (map[int64]string, error)); ok {
return rf(ctx, query)
}
if rf, ok := ret.Get(0).(func(context.Context, login.GetUserLabelsQuery) map[int64]string); ok {
r0 = rf(ctx, query)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(map[int64]string)
}
}
if rf, ok := ret.Get(1).(func(context.Context, login.GetUserLabelsQuery) error); ok {
r1 = rf(ctx, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// SetAuthInfo provides a mock function with given fields: ctx, cmd
func (_m *MockAuthInfoService) SetAuthInfo(ctx context.Context, cmd *login.SetAuthInfoCommand) error {
ret := _m.Called(ctx, cmd)
if len(ret) == 0 {
panic("no return value specified for SetAuthInfo")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *login.SetAuthInfoCommand) error); ok {
r0 = rf(ctx, cmd)
} else {
r0 = ret.Error(0)
}
return r0
}
// UpdateAuthInfo provides a mock function with given fields: ctx, cmd
func (_m *MockAuthInfoService) UpdateAuthInfo(ctx context.Context, cmd *login.UpdateAuthInfoCommand) error {
ret := _m.Called(ctx, cmd)
if len(ret) == 0 {
panic("no return value specified for UpdateAuthInfo")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *login.UpdateAuthInfoCommand) error); ok {
r0 = rf(ctx, cmd)
} else {
r0 = ret.Error(0)
}
return r0
}
// NewMockAuthInfoService creates a new instance of MockAuthInfoService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockAuthInfoService(t interface {
@@ -25,741 +171,3 @@ func NewMockAuthInfoService(t interface {
return mock
}
// MockAuthInfoService is an autogenerated mock type for the AuthInfoService type
type MockAuthInfoService struct {
mock.Mock
}
type MockAuthInfoService_Expecter struct {
mock *mock.Mock
}
func (_m *MockAuthInfoService) EXPECT() *MockAuthInfoService_Expecter {
return &MockAuthInfoService_Expecter{mock: &_m.Mock}
}
// DeleteUserAuthInfo provides a mock function for the type MockAuthInfoService
func (_mock *MockAuthInfoService) DeleteUserAuthInfo(ctx context.Context, userID int64) error {
ret := _mock.Called(ctx, userID)
if len(ret) == 0 {
panic("no return value specified for DeleteUserAuthInfo")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(context.Context, int64) error); ok {
r0 = returnFunc(ctx, userID)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockAuthInfoService_DeleteUserAuthInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'DeleteUserAuthInfo'
type MockAuthInfoService_DeleteUserAuthInfo_Call struct {
*mock.Call
}
// DeleteUserAuthInfo is a helper method to define mock.On call
// - ctx context.Context
// - userID int64
func (_e *MockAuthInfoService_Expecter) DeleteUserAuthInfo(ctx interface{}, userID interface{}) *MockAuthInfoService_DeleteUserAuthInfo_Call {
return &MockAuthInfoService_DeleteUserAuthInfo_Call{Call: _e.mock.On("DeleteUserAuthInfo", ctx, userID)}
}
func (_c *MockAuthInfoService_DeleteUserAuthInfo_Call) Run(run func(ctx context.Context, userID int64)) *MockAuthInfoService_DeleteUserAuthInfo_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 int64
if args[1] != nil {
arg1 = args[1].(int64)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockAuthInfoService_DeleteUserAuthInfo_Call) Return(err error) *MockAuthInfoService_DeleteUserAuthInfo_Call {
_c.Call.Return(err)
return _c
}
func (_c *MockAuthInfoService_DeleteUserAuthInfo_Call) RunAndReturn(run func(ctx context.Context, userID int64) error) *MockAuthInfoService_DeleteUserAuthInfo_Call {
_c.Call.Return(run)
return _c
}
// GetAuthInfo provides a mock function for the type MockAuthInfoService
func (_mock *MockAuthInfoService) GetAuthInfo(ctx context.Context, query *login.GetAuthInfoQuery) (*login.UserAuth, error) {
ret := _mock.Called(ctx, query)
if len(ret) == 0 {
panic("no return value specified for GetAuthInfo")
}
var r0 *login.UserAuth
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context, *login.GetAuthInfoQuery) (*login.UserAuth, error)); ok {
return returnFunc(ctx, query)
}
if returnFunc, ok := ret.Get(0).(func(context.Context, *login.GetAuthInfoQuery) *login.UserAuth); ok {
r0 = returnFunc(ctx, query)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*login.UserAuth)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context, *login.GetAuthInfoQuery) error); ok {
r1 = returnFunc(ctx, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// MockAuthInfoService_GetAuthInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetAuthInfo'
type MockAuthInfoService_GetAuthInfo_Call struct {
*mock.Call
}
// GetAuthInfo is a helper method to define mock.On call
// - ctx context.Context
// - query *login.GetAuthInfoQuery
func (_e *MockAuthInfoService_Expecter) GetAuthInfo(ctx interface{}, query interface{}) *MockAuthInfoService_GetAuthInfo_Call {
return &MockAuthInfoService_GetAuthInfo_Call{Call: _e.mock.On("GetAuthInfo", ctx, query)}
}
func (_c *MockAuthInfoService_GetAuthInfo_Call) Run(run func(ctx context.Context, query *login.GetAuthInfoQuery)) *MockAuthInfoService_GetAuthInfo_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 *login.GetAuthInfoQuery
if args[1] != nil {
arg1 = args[1].(*login.GetAuthInfoQuery)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockAuthInfoService_GetAuthInfo_Call) Return(userAuth *login.UserAuth, err error) *MockAuthInfoService_GetAuthInfo_Call {
_c.Call.Return(userAuth, err)
return _c
}
func (_c *MockAuthInfoService_GetAuthInfo_Call) RunAndReturn(run func(ctx context.Context, query *login.GetAuthInfoQuery) (*login.UserAuth, error)) *MockAuthInfoService_GetAuthInfo_Call {
_c.Call.Return(run)
return _c
}
// GetUserLabels provides a mock function for the type MockAuthInfoService
func (_mock *MockAuthInfoService) GetUserLabels(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
ret := _mock.Called(ctx, query)
if len(ret) == 0 {
panic("no return value specified for GetUserLabels")
}
var r0 map[int64]string
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context, login.GetUserLabelsQuery) (map[int64]string, error)); ok {
return returnFunc(ctx, query)
}
if returnFunc, ok := ret.Get(0).(func(context.Context, login.GetUserLabelsQuery) map[int64]string); ok {
r0 = returnFunc(ctx, query)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(map[int64]string)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context, login.GetUserLabelsQuery) error); ok {
r1 = returnFunc(ctx, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// MockAuthInfoService_GetUserLabels_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetUserLabels'
type MockAuthInfoService_GetUserLabels_Call struct {
*mock.Call
}
// GetUserLabels is a helper method to define mock.On call
// - ctx context.Context
// - query login.GetUserLabelsQuery
func (_e *MockAuthInfoService_Expecter) GetUserLabels(ctx interface{}, query interface{}) *MockAuthInfoService_GetUserLabels_Call {
return &MockAuthInfoService_GetUserLabels_Call{Call: _e.mock.On("GetUserLabels", ctx, query)}
}
func (_c *MockAuthInfoService_GetUserLabels_Call) Run(run func(ctx context.Context, query login.GetUserLabelsQuery)) *MockAuthInfoService_GetUserLabels_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 login.GetUserLabelsQuery
if args[1] != nil {
arg1 = args[1].(login.GetUserLabelsQuery)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockAuthInfoService_GetUserLabels_Call) Return(int64ToString map[int64]string, err error) *MockAuthInfoService_GetUserLabels_Call {
_c.Call.Return(int64ToString, err)
return _c
}
func (_c *MockAuthInfoService_GetUserLabels_Call) RunAndReturn(run func(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error)) *MockAuthInfoService_GetUserLabels_Call {
_c.Call.Return(run)
return _c
}
// SetAuthInfo provides a mock function for the type MockAuthInfoService
func (_mock *MockAuthInfoService) SetAuthInfo(ctx context.Context, cmd *login.SetAuthInfoCommand) error {
ret := _mock.Called(ctx, cmd)
if len(ret) == 0 {
panic("no return value specified for SetAuthInfo")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(context.Context, *login.SetAuthInfoCommand) error); ok {
r0 = returnFunc(ctx, cmd)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockAuthInfoService_SetAuthInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetAuthInfo'
type MockAuthInfoService_SetAuthInfo_Call struct {
*mock.Call
}
// SetAuthInfo is a helper method to define mock.On call
// - ctx context.Context
// - cmd *login.SetAuthInfoCommand
func (_e *MockAuthInfoService_Expecter) SetAuthInfo(ctx interface{}, cmd interface{}) *MockAuthInfoService_SetAuthInfo_Call {
return &MockAuthInfoService_SetAuthInfo_Call{Call: _e.mock.On("SetAuthInfo", ctx, cmd)}
}
func (_c *MockAuthInfoService_SetAuthInfo_Call) Run(run func(ctx context.Context, cmd *login.SetAuthInfoCommand)) *MockAuthInfoService_SetAuthInfo_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 *login.SetAuthInfoCommand
if args[1] != nil {
arg1 = args[1].(*login.SetAuthInfoCommand)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockAuthInfoService_SetAuthInfo_Call) Return(err error) *MockAuthInfoService_SetAuthInfo_Call {
_c.Call.Return(err)
return _c
}
func (_c *MockAuthInfoService_SetAuthInfo_Call) RunAndReturn(run func(ctx context.Context, cmd *login.SetAuthInfoCommand) error) *MockAuthInfoService_SetAuthInfo_Call {
_c.Call.Return(run)
return _c
}
// UpdateAuthInfo provides a mock function for the type MockAuthInfoService
func (_mock *MockAuthInfoService) UpdateAuthInfo(ctx context.Context, cmd *login.UpdateAuthInfoCommand) error {
ret := _mock.Called(ctx, cmd)
if len(ret) == 0 {
panic("no return value specified for UpdateAuthInfo")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(context.Context, *login.UpdateAuthInfoCommand) error); ok {
r0 = returnFunc(ctx, cmd)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockAuthInfoService_UpdateAuthInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'UpdateAuthInfo'
type MockAuthInfoService_UpdateAuthInfo_Call struct {
*mock.Call
}
// UpdateAuthInfo is a helper method to define mock.On call
// - ctx context.Context
// - cmd *login.UpdateAuthInfoCommand
func (_e *MockAuthInfoService_Expecter) UpdateAuthInfo(ctx interface{}, cmd interface{}) *MockAuthInfoService_UpdateAuthInfo_Call {
return &MockAuthInfoService_UpdateAuthInfo_Call{Call: _e.mock.On("UpdateAuthInfo", ctx, cmd)}
}
func (_c *MockAuthInfoService_UpdateAuthInfo_Call) Run(run func(ctx context.Context, cmd *login.UpdateAuthInfoCommand)) *MockAuthInfoService_UpdateAuthInfo_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 *login.UpdateAuthInfoCommand
if args[1] != nil {
arg1 = args[1].(*login.UpdateAuthInfoCommand)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockAuthInfoService_UpdateAuthInfo_Call) Return(err error) *MockAuthInfoService_UpdateAuthInfo_Call {
_c.Call.Return(err)
return _c
}
func (_c *MockAuthInfoService_UpdateAuthInfo_Call) RunAndReturn(run func(ctx context.Context, cmd *login.UpdateAuthInfoCommand) error) *MockAuthInfoService_UpdateAuthInfo_Call {
_c.Call.Return(run)
return _c
}
// NewMockStore creates a new instance of MockStore. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockStore(t interface {
mock.TestingT
Cleanup(func())
}) *MockStore {
mock := &MockStore{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// MockStore is an autogenerated mock type for the Store type
type MockStore struct {
mock.Mock
}
type MockStore_Expecter struct {
mock *mock.Mock
}
func (_m *MockStore) EXPECT() *MockStore_Expecter {
return &MockStore_Expecter{mock: &_m.Mock}
}
// DeleteUserAuthInfo provides a mock function for the type MockStore
func (_mock *MockStore) DeleteUserAuthInfo(ctx context.Context, userID int64) error {
ret := _mock.Called(ctx, userID)
if len(ret) == 0 {
panic("no return value specified for DeleteUserAuthInfo")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(context.Context, int64) error); ok {
r0 = returnFunc(ctx, userID)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockStore_DeleteUserAuthInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'DeleteUserAuthInfo'
type MockStore_DeleteUserAuthInfo_Call struct {
*mock.Call
}
// DeleteUserAuthInfo is a helper method to define mock.On call
// - ctx context.Context
// - userID int64
func (_e *MockStore_Expecter) DeleteUserAuthInfo(ctx interface{}, userID interface{}) *MockStore_DeleteUserAuthInfo_Call {
return &MockStore_DeleteUserAuthInfo_Call{Call: _e.mock.On("DeleteUserAuthInfo", ctx, userID)}
}
func (_c *MockStore_DeleteUserAuthInfo_Call) Run(run func(ctx context.Context, userID int64)) *MockStore_DeleteUserAuthInfo_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 int64
if args[1] != nil {
arg1 = args[1].(int64)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockStore_DeleteUserAuthInfo_Call) Return(err error) *MockStore_DeleteUserAuthInfo_Call {
_c.Call.Return(err)
return _c
}
func (_c *MockStore_DeleteUserAuthInfo_Call) RunAndReturn(run func(ctx context.Context, userID int64) error) *MockStore_DeleteUserAuthInfo_Call {
_c.Call.Return(run)
return _c
}
// GetAuthInfo provides a mock function for the type MockStore
func (_mock *MockStore) GetAuthInfo(ctx context.Context, query *login.GetAuthInfoQuery) (*login.UserAuth, error) {
ret := _mock.Called(ctx, query)
if len(ret) == 0 {
panic("no return value specified for GetAuthInfo")
}
var r0 *login.UserAuth
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context, *login.GetAuthInfoQuery) (*login.UserAuth, error)); ok {
return returnFunc(ctx, query)
}
if returnFunc, ok := ret.Get(0).(func(context.Context, *login.GetAuthInfoQuery) *login.UserAuth); ok {
r0 = returnFunc(ctx, query)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*login.UserAuth)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context, *login.GetAuthInfoQuery) error); ok {
r1 = returnFunc(ctx, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// MockStore_GetAuthInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetAuthInfo'
type MockStore_GetAuthInfo_Call struct {
*mock.Call
}
// GetAuthInfo is a helper method to define mock.On call
// - ctx context.Context
// - query *login.GetAuthInfoQuery
func (_e *MockStore_Expecter) GetAuthInfo(ctx interface{}, query interface{}) *MockStore_GetAuthInfo_Call {
return &MockStore_GetAuthInfo_Call{Call: _e.mock.On("GetAuthInfo", ctx, query)}
}
func (_c *MockStore_GetAuthInfo_Call) Run(run func(ctx context.Context, query *login.GetAuthInfoQuery)) *MockStore_GetAuthInfo_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 *login.GetAuthInfoQuery
if args[1] != nil {
arg1 = args[1].(*login.GetAuthInfoQuery)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockStore_GetAuthInfo_Call) Return(userAuth *login.UserAuth, err error) *MockStore_GetAuthInfo_Call {
_c.Call.Return(userAuth, err)
return _c
}
func (_c *MockStore_GetAuthInfo_Call) RunAndReturn(run func(ctx context.Context, query *login.GetAuthInfoQuery) (*login.UserAuth, error)) *MockStore_GetAuthInfo_Call {
_c.Call.Return(run)
return _c
}
// GetUserLabels provides a mock function for the type MockStore
func (_mock *MockStore) GetUserLabels(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
ret := _mock.Called(ctx, query)
if len(ret) == 0 {
panic("no return value specified for GetUserLabels")
}
var r0 map[int64]string
var r1 error
if returnFunc, ok := ret.Get(0).(func(context.Context, login.GetUserLabelsQuery) (map[int64]string, error)); ok {
return returnFunc(ctx, query)
}
if returnFunc, ok := ret.Get(0).(func(context.Context, login.GetUserLabelsQuery) map[int64]string); ok {
r0 = returnFunc(ctx, query)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(map[int64]string)
}
}
if returnFunc, ok := ret.Get(1).(func(context.Context, login.GetUserLabelsQuery) error); ok {
r1 = returnFunc(ctx, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// MockStore_GetUserLabels_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetUserLabels'
type MockStore_GetUserLabels_Call struct {
*mock.Call
}
// GetUserLabels is a helper method to define mock.On call
// - ctx context.Context
// - query login.GetUserLabelsQuery
func (_e *MockStore_Expecter) GetUserLabels(ctx interface{}, query interface{}) *MockStore_GetUserLabels_Call {
return &MockStore_GetUserLabels_Call{Call: _e.mock.On("GetUserLabels", ctx, query)}
}
func (_c *MockStore_GetUserLabels_Call) Run(run func(ctx context.Context, query login.GetUserLabelsQuery)) *MockStore_GetUserLabels_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 login.GetUserLabelsQuery
if args[1] != nil {
arg1 = args[1].(login.GetUserLabelsQuery)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockStore_GetUserLabels_Call) Return(int64ToString map[int64]string, err error) *MockStore_GetUserLabels_Call {
_c.Call.Return(int64ToString, err)
return _c
}
func (_c *MockStore_GetUserLabels_Call) RunAndReturn(run func(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error)) *MockStore_GetUserLabels_Call {
_c.Call.Return(run)
return _c
}
// SetAuthInfo provides a mock function for the type MockStore
func (_mock *MockStore) SetAuthInfo(ctx context.Context, cmd *login.SetAuthInfoCommand) error {
ret := _mock.Called(ctx, cmd)
if len(ret) == 0 {
panic("no return value specified for SetAuthInfo")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(context.Context, *login.SetAuthInfoCommand) error); ok {
r0 = returnFunc(ctx, cmd)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockStore_SetAuthInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SetAuthInfo'
type MockStore_SetAuthInfo_Call struct {
*mock.Call
}
// SetAuthInfo is a helper method to define mock.On call
// - ctx context.Context
// - cmd *login.SetAuthInfoCommand
func (_e *MockStore_Expecter) SetAuthInfo(ctx interface{}, cmd interface{}) *MockStore_SetAuthInfo_Call {
return &MockStore_SetAuthInfo_Call{Call: _e.mock.On("SetAuthInfo", ctx, cmd)}
}
func (_c *MockStore_SetAuthInfo_Call) Run(run func(ctx context.Context, cmd *login.SetAuthInfoCommand)) *MockStore_SetAuthInfo_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 *login.SetAuthInfoCommand
if args[1] != nil {
arg1 = args[1].(*login.SetAuthInfoCommand)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockStore_SetAuthInfo_Call) Return(err error) *MockStore_SetAuthInfo_Call {
_c.Call.Return(err)
return _c
}
func (_c *MockStore_SetAuthInfo_Call) RunAndReturn(run func(ctx context.Context, cmd *login.SetAuthInfoCommand) error) *MockStore_SetAuthInfo_Call {
_c.Call.Return(run)
return _c
}
// UpdateAuthInfo provides a mock function for the type MockStore
func (_mock *MockStore) UpdateAuthInfo(ctx context.Context, cmd *login.UpdateAuthInfoCommand) error {
ret := _mock.Called(ctx, cmd)
if len(ret) == 0 {
panic("no return value specified for UpdateAuthInfo")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(context.Context, *login.UpdateAuthInfoCommand) error); ok {
r0 = returnFunc(ctx, cmd)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockStore_UpdateAuthInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'UpdateAuthInfo'
type MockStore_UpdateAuthInfo_Call struct {
*mock.Call
}
// UpdateAuthInfo is a helper method to define mock.On call
// - ctx context.Context
// - cmd *login.UpdateAuthInfoCommand
func (_e *MockStore_Expecter) UpdateAuthInfo(ctx interface{}, cmd interface{}) *MockStore_UpdateAuthInfo_Call {
return &MockStore_UpdateAuthInfo_Call{Call: _e.mock.On("UpdateAuthInfo", ctx, cmd)}
}
func (_c *MockStore_UpdateAuthInfo_Call) Run(run func(ctx context.Context, cmd *login.UpdateAuthInfoCommand)) *MockStore_UpdateAuthInfo_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 context.Context
if args[0] != nil {
arg0 = args[0].(context.Context)
}
var arg1 *login.UpdateAuthInfoCommand
if args[1] != nil {
arg1 = args[1].(*login.UpdateAuthInfoCommand)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockStore_UpdateAuthInfo_Call) Return(err error) *MockStore_UpdateAuthInfo_Call {
_c.Call.Return(err)
return _c
}
func (_c *MockStore_UpdateAuthInfo_Call) RunAndReturn(run func(ctx context.Context, cmd *login.UpdateAuthInfoCommand) error) *MockStore_UpdateAuthInfo_Call {
_c.Call.Return(run)
return _c
}
// NewMockUserProtectionService creates a new instance of MockUserProtectionService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockUserProtectionService(t interface {
mock.TestingT
Cleanup(func())
}) *MockUserProtectionService {
mock := &MockUserProtectionService{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
// MockUserProtectionService is an autogenerated mock type for the UserProtectionService type
type MockUserProtectionService struct {
mock.Mock
}
type MockUserProtectionService_Expecter struct {
mock *mock.Mock
}
func (_m *MockUserProtectionService) EXPECT() *MockUserProtectionService_Expecter {
return &MockUserProtectionService_Expecter{mock: &_m.Mock}
}
// AllowUserMapping provides a mock function for the type MockUserProtectionService
func (_mock *MockUserProtectionService) AllowUserMapping(user1 *user.User, authModule string) error {
ret := _mock.Called(user1, authModule)
if len(ret) == 0 {
panic("no return value specified for AllowUserMapping")
}
var r0 error
if returnFunc, ok := ret.Get(0).(func(*user.User, string) error); ok {
r0 = returnFunc(user1, authModule)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockUserProtectionService_AllowUserMapping_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'AllowUserMapping'
type MockUserProtectionService_AllowUserMapping_Call struct {
*mock.Call
}
// AllowUserMapping is a helper method to define mock.On call
// - user1 *user.User
// - authModule string
func (_e *MockUserProtectionService_Expecter) AllowUserMapping(user1 interface{}, authModule interface{}) *MockUserProtectionService_AllowUserMapping_Call {
return &MockUserProtectionService_AllowUserMapping_Call{Call: _e.mock.On("AllowUserMapping", user1, authModule)}
}
func (_c *MockUserProtectionService_AllowUserMapping_Call) Run(run func(user1 *user.User, authModule string)) *MockUserProtectionService_AllowUserMapping_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *user.User
if args[0] != nil {
arg0 = args[0].(*user.User)
}
var arg1 string
if args[1] != nil {
arg1 = args[1].(string)
}
run(
arg0,
arg1,
)
})
return _c
}
func (_c *MockUserProtectionService_AllowUserMapping_Call) Return(err error) *MockUserProtectionService_AllowUserMapping_Call {
_c.Call.Return(err)
return _c
}
func (_c *MockUserProtectionService_AllowUserMapping_Call) RunAndReturn(run func(user1 *user.User, authModule string) error) *MockUserProtectionService_AllowUserMapping_Call {
_c.Call.Return(run)
return _c
}


@@ -0,0 +1,173 @@
// Code generated by mockery v2.53.5. DO NOT EDIT.
package authinfotest
import (
context "context"
login "github.com/grafana/grafana/pkg/services/login"
mock "github.com/stretchr/testify/mock"
)
// MockAuthInfoStore is an autogenerated mock type for the Store type
type MockAuthInfoStore struct {
mock.Mock
}
// DeleteUserAuthInfo provides a mock function with given fields: ctx, userID
func (_m *MockAuthInfoStore) DeleteUserAuthInfo(ctx context.Context, userID int64) error {
ret := _m.Called(ctx, userID)
if len(ret) == 0 {
panic("no return value specified for DeleteUserAuthInfo")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, int64) error); ok {
r0 = rf(ctx, userID)
} else {
r0 = ret.Error(0)
}
return r0
}
// GetAuthInfo provides a mock function with given fields: ctx, query
func (_m *MockAuthInfoStore) GetAuthInfo(ctx context.Context, query *login.GetAuthInfoQuery) (*login.UserAuth, error) {
ret := _m.Called(ctx, query)
if len(ret) == 0 {
panic("no return value specified for GetAuthInfo")
}
var r0 *login.UserAuth
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *login.GetAuthInfoQuery) (*login.UserAuth, error)); ok {
return rf(ctx, query)
}
if rf, ok := ret.Get(0).(func(context.Context, *login.GetAuthInfoQuery) *login.UserAuth); ok {
r0 = rf(ctx, query)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*login.UserAuth)
}
}
if rf, ok := ret.Get(1).(func(context.Context, *login.GetAuthInfoQuery) error); ok {
r1 = rf(ctx, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetUserAuthModules provides a mock function with given fields: ctx, userID
func (_m *MockAuthInfoStore) GetUserAuthModules(ctx context.Context, userID int64) ([]string, error) {
ret := _m.Called(ctx, userID)
if len(ret) == 0 {
panic("no return value specified for GetUserAuthModules")
}
var r0 []string
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, int64) ([]string, error)); ok {
return rf(ctx, userID)
}
if rf, ok := ret.Get(0).(func(context.Context, int64) []string); ok {
r0 = rf(ctx, userID)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]string)
}
}
if rf, ok := ret.Get(1).(func(context.Context, int64) error); ok {
r1 = rf(ctx, userID)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetUsersRecentlyUsedLabel provides a mock function with given fields: ctx, query
func (_m *MockAuthInfoStore) GetUsersRecentlyUsedLabel(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
ret := _m.Called(ctx, query)
if len(ret) == 0 {
panic("no return value specified for GetUsersRecentlyUsedLabel")
}
var r0 map[int64]string
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, login.GetUserLabelsQuery) (map[int64]string, error)); ok {
return rf(ctx, query)
}
if rf, ok := ret.Get(0).(func(context.Context, login.GetUserLabelsQuery) map[int64]string); ok {
r0 = rf(ctx, query)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(map[int64]string)
}
}
if rf, ok := ret.Get(1).(func(context.Context, login.GetUserLabelsQuery) error); ok {
r1 = rf(ctx, query)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// SetAuthInfo provides a mock function with given fields: ctx, cmd
func (_m *MockAuthInfoStore) SetAuthInfo(ctx context.Context, cmd *login.SetAuthInfoCommand) error {
ret := _m.Called(ctx, cmd)
if len(ret) == 0 {
panic("no return value specified for SetAuthInfo")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *login.SetAuthInfoCommand) error); ok {
r0 = rf(ctx, cmd)
} else {
r0 = ret.Error(0)
}
return r0
}
// UpdateAuthInfo provides a mock function with given fields: ctx, cmd
func (_m *MockAuthInfoStore) UpdateAuthInfo(ctx context.Context, cmd *login.UpdateAuthInfoCommand) error {
ret := _m.Called(ctx, cmd)
if len(ret) == 0 {
panic("no return value specified for UpdateAuthInfo")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *login.UpdateAuthInfoCommand) error); ok {
r0 = rf(ctx, cmd)
} else {
r0 = ret.Error(0)
}
return r0
}
// NewMockAuthInfoStore creates a new instance of MockAuthInfoStore. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockAuthInfoStore(t interface {
mock.TestingT
Cleanup(func())
}) *MockAuthInfoStore {
mock := &MockAuthInfoStore{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -8,11 +8,12 @@ import (
type FakeService struct {
login.AuthInfoService
LatestUserID int64
ExpectedUserAuth *login.UserAuth
ExpectedExternalUser *login.ExternalUserInfo
ExpectedError error
ExpectedLabels map[int64]string
LatestUserID int64
ExpectedUserAuth *login.UserAuth
ExpectedExternalUser *login.ExternalUserInfo
ExpectedError error
ExpectedRecentlyUsedLabel map[int64]string
ExpectedAuthModuleLabels []string
SetAuthInfoFn func(ctx context.Context, cmd *login.SetAuthInfoCommand) error
UpdateAuthInfoFn func(ctx context.Context, cmd *login.UpdateAuthInfoCommand) error
@@ -24,8 +25,12 @@ func (a *FakeService) GetAuthInfo(ctx context.Context, query *login.GetAuthInfoQ
return a.ExpectedUserAuth, a.ExpectedError
}
func (a *FakeService) GetUserLabels(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
return a.ExpectedLabels, a.ExpectedError
func (a *FakeService) GetUsersRecentlyUsedLabel(ctx context.Context, query login.GetUserLabelsQuery) (map[int64]string, error) {
return a.ExpectedRecentlyUsedLabel, a.ExpectedError
}
func (a *FakeService) GetUserAuthModuleLabels(ctx context.Context, userID int64) ([]string, error) {
return a.ExpectedAuthModuleLabels, a.ExpectedError
}
func (a *FakeService) SetAuthInfo(ctx context.Context, cmd *login.SetAuthInfoCommand) error {

View File

@@ -504,13 +504,6 @@ func initInstanceStore(sqlStore db.DB, logger log.Logger, featureToggles feature
if featureToggles.IsEnabledGlobally(featuremgmt.FlagAlertingSaveStateCompressed) {
logger.Info("Using protobuf-based alert instance store")
instanceStore = protoInstanceStore
// If FlagAlertingSaveStateCompressed is enabled, ProtoInstanceDBStore is used,
// which functions differently from InstanceDBStore. FlagAlertingSaveStatePeriodic is
// not applicable to ProtoInstanceDBStore, so a warning is logged if it is set.
//nolint:staticcheck // not yet migrated to OpenFeature
if featureToggles.IsEnabledGlobally(featuremgmt.FlagAlertingSaveStatePeriodic) {
logger.Warn("alertingSaveStatePeriodic is not used when alertingSaveStateCompressed feature flag enabled")
}
} else {
logger.Info("Using simple database alert instance store")
instanceStore = simpleInstanceStore
@@ -525,7 +518,15 @@ func initStatePersister(uaCfg setting.UnifiedAlertingSettings, cfg state.Manager
//nolint:staticcheck // not yet migrated to OpenFeature
if featureToggles.IsEnabledGlobally(featuremgmt.FlagAlertingSaveStateCompressed) {
logger.Info("Using rule state persister")
statePersister = state.NewSyncRuleStatePersisiter(logger, cfg)
if featureToggles.IsEnabledGlobally(featuremgmt.FlagAlertingSaveStatePeriodic) {
logger.Info("Compressed storage with periodic save enabled")
ticker := clock.New().Ticker(cfg.StatePeriodicSaveInterval)
statePersister = state.NewSyncRuleStatePersisiter(logger, ticker, cfg)
} else {
logger.Info("Compressed storage FullSync disabled")
statePersister = state.NewSyncRuleStatePersisiter(logger, nil, cfg)
}
} else if featureToggles.IsEnabledGlobally(featuremgmt.FlagAlertingSaveStatePeriodic) {
logger.Info("Using periodic state persister")
ticker := clock.New().Ticker(uaCfg.StatePeriodicSaveInterval)

View File

@@ -436,7 +436,9 @@ func TestInitStatePersister(t *testing.T) {
ua := setting.UnifiedAlertingSettings{
StatePeriodicSaveInterval: 1 * time.Minute,
}
cfg := state.ManagerCfg{}
cfg := state.ManagerCfg{
StatePeriodicSaveInterval: 1 * time.Minute,
}
tests := []struct {
name string

View File

@@ -8,6 +8,10 @@ import (
history_model "github.com/grafana/grafana/pkg/services/ngalert/state/historian/model"
)
type AlertInstancesProvider interface {
GetAlertInstances() []models.AlertInstance
}
// InstanceStore represents the ability to fetch and write alert instances.
type InstanceStore interface {
InstanceReader

View File

@@ -12,10 +12,6 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
)
type AlertInstancesProvider interface {
GetAlertInstances() []models.AlertInstance
}
type AsyncStatePersister struct {
log log.Logger
batchSize int

View File

@@ -4,6 +4,7 @@ import (
"context"
"time"
"github.com/benbjohnson/clock"
"go.opentelemetry.io/otel/trace"
"github.com/grafana/grafana/pkg/infra/log"
@@ -11,22 +12,63 @@ import (
)
type SyncRuleStatePersister struct {
log log.Logger
store InstanceStore
log log.Logger
store InstanceStore
ticker *clock.Ticker
}
func NewSyncRuleStatePersisiter(log log.Logger, cfg ManagerCfg) StatePersister {
func NewSyncRuleStatePersisiter(log log.Logger, ticker *clock.Ticker, cfg ManagerCfg) StatePersister {
return &SyncRuleStatePersister{
log: log,
store: cfg.InstanceStore,
log: log,
store: cfg.InstanceStore,
ticker: ticker,
}
}
func (a *SyncRuleStatePersister) Async(_ context.Context, _ AlertInstancesProvider) {
a.log.Debug("Async: No-Op")
func (a *SyncRuleStatePersister) Async(ctx context.Context, instancesProvider AlertInstancesProvider) {
if a.ticker == nil {
return
}
for {
select {
case <-a.ticker.C:
if err := a.fullSync(ctx, instancesProvider); err != nil {
a.log.Error("Failed to do a full compressed state sync to database", "err", err)
}
case <-ctx.Done():
a.log.Info("Scheduler is shutting down, doing a final state sync.")
if err := a.fullSync(context.Background(), instancesProvider); err != nil {
a.log.Error("Failed to do a full compressed state sync to database", "err", err)
}
a.ticker.Stop()
a.log.Info("Compressed state async worker is shut down.")
return
}
}
}
func (a *SyncRuleStatePersister) fullSync(ctx context.Context, instancesProvider AlertInstancesProvider) error {
startTime := time.Now()
a.log.Debug("Full compressed state sync start")
instances := instancesProvider.GetAlertInstances()
// batchSize is set to 0 because compressed storage groups instances by ruleUID, not by batch size
err := a.store.FullSync(ctx, instances, 0, nil)
if err != nil {
a.log.Error("Full compressed state sync failed", "duration", time.Since(startTime), "instances", len(instances))
return err
}
a.log.Debug("Full compressed state sync done", "duration", time.Since(startTime), "instances", len(instances))
return nil
}
func (a *SyncRuleStatePersister) Sync(ctx context.Context, span trace.Span, ruleKey models.AlertRuleKeyWithGroup, states StateTransitions) {
if a.ticker != nil {
a.log.Debug("Skip immediate save, using periodic save instead")
return
}
if a.store == nil || len(states) == 0 {
return
}

View File

@@ -9,6 +9,7 @@ import (
"time"
"github.com/golang/snappy"
"github.com/grafana/grafana/pkg/services/sqlstore"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/timestamppb"
@@ -98,28 +99,13 @@ func (st ProtoInstanceDBStore) SaveAlertInstancesForRule(ctx context.Context, ke
logger := st.Logger.FromContext(ctx)
logger.Debug("SaveAlertInstancesForRule called", "rule_uid", key.UID, "org_id", key.OrgID, "instances", len(instances))
alert_instances_proto := make([]*pb.AlertInstance, len(instances))
for i, instance := range instances {
alert_instances_proto[i] = alertInstanceModelToProto(instance)
}
compressedAlertInstances, err := compressAlertInstances(alert_instances_proto)
compressedAlertInstances, err := convertAndCompressAlertInstances(instances)
if err != nil {
return fmt.Errorf("failed to compress alert instances: %w", err)
}
return st.SQLStore.WithTransactionalDbSession(ctx, func(sess *db.Session) error {
params := []any{key.OrgID, key.UID, compressedAlertInstances, time.Now()}
upsertSQL := st.SQLStore.GetDialect().UpsertSQL(
"alert_rule_state",
[]string{"org_id", "rule_uid"},
[]string{"org_id", "rule_uid", "data", "updated_at"},
)
_, err = sess.SQL(upsertSQL, params...).Query()
return err
return st.SQLStore.WithTransactionalDbSession(ctx, func(sess *sqlstore.DBSession) error {
return st.upsertCompressedAlertInstances(sess, key.OrgID, key.UID, compressedAlertInstances, time.Now())
})
}
@@ -134,9 +120,68 @@ func (st ProtoInstanceDBStore) DeleteAlertInstancesByRule(ctx context.Context, k
}
func (st ProtoInstanceDBStore) FullSync(ctx context.Context, instances []models.AlertInstance, batchSize int, jitterFunc func(int) time.Duration) error {
if len(instances) == 0 {
return nil
}
logger := st.Logger.FromContext(ctx)
logger.Error("FullSync called and not implemented")
return errors.New("fullsync is not implemented for proto instance database store")
logger.Debug("FullSync called", "total_instances", len(instances))
ruleGroups := make(map[models.AlertRuleKeyWithGroup][]models.AlertInstance)
for _, instance := range instances {
ruleKey := models.AlertRuleKeyWithGroup{
AlertRuleKey: models.AlertRuleKey{
OrgID: instance.RuleOrgID,
UID: instance.RuleUID,
},
RuleGroup: "",
}
ruleGroups[ruleKey] = append(ruleGroups[ruleKey], instance)
}
type preparedRule struct {
ruleKey models.AlertRuleKeyWithGroup
compressedData []byte
}
preparedRules := make([]preparedRule, 0, len(ruleGroups))
for ruleKey, ruleInstances := range ruleGroups {
// Convert and compress instances
compressedAlertInstances, err := convertAndCompressAlertInstances(ruleInstances)
if err != nil {
logger.Error("Failed to compress instances for rule", "rule_uid", ruleKey.UID, "error", err)
continue
}
preparedRules = append(preparedRules, preparedRule{
ruleKey: ruleKey,
compressedData: compressedAlertInstances,
})
logger.Debug("Prepared rule for sync", "rule_uid", ruleKey.UID, "org_id", ruleKey.OrgID, "instances", len(ruleInstances))
}
return st.SQLStore.WithTransactionalDbSession(ctx, func(sess *sqlstore.DBSession) error {
syncTimestamp := time.Now()
logger.Debug("Starting FullSync transaction", "rules_count", len(preparedRules), "timestamp", syncTimestamp)
// First we delete all records from the table
if _, err := sess.Exec("DELETE FROM alert_rule_state"); err != nil {
return fmt.Errorf("failed to delete alert_rule_state: %w", err)
}
for i, prepared := range preparedRules {
logger.Debug("Executing UPSERT for rule", "rule_uid", prepared.ruleKey.UID, "org_id", prepared.ruleKey.OrgID, "rule_index", i+1, "total_rules", len(preparedRules))
// Execute UPSERT with pre-compressed data using helper method
if err := st.upsertCompressedAlertInstances(sess, prepared.ruleKey.OrgID, prepared.ruleKey.UID, prepared.compressedData, syncTimestamp); err != nil {
return fmt.Errorf("failed to save instances for rule %s: %w", prepared.ruleKey.UID, err)
}
}
logger.Debug("FullSync transaction completed successfully", "rules_synced", len(preparedRules))
return nil
})
}
func alertInstanceModelToProto(modelInstance models.AlertInstance) *pb.AlertInstance {
@@ -155,6 +200,30 @@ func alertInstanceModelToProto(modelInstance models.AlertInstance) *pb.AlertInst
}
}
// convertAndCompressAlertInstances converts model instances to protobuf and compresses them
func convertAndCompressAlertInstances(instances []models.AlertInstance) ([]byte, error) {
alertInstancesProto := make([]*pb.AlertInstance, len(instances))
for i, instance := range instances {
alertInstancesProto[i] = alertInstanceModelToProto(instance)
}
return compressAlertInstances(alertInstancesProto)
}
// upsertCompressedAlertInstances performs upsert operation for compressed alert instances
func (st ProtoInstanceDBStore) upsertCompressedAlertInstances(sess *sqlstore.DBSession, orgID int64, ruleUID string, compressedData []byte, timestamp time.Time) error {
upsertSQL := st.SQLStore.GetDialect().UpsertSQL(
"alert_rule_state",
[]string{"org_id", "rule_uid"},
[]string{"org_id", "rule_uid", "data", "updated_at"},
)
params := []any{orgID, ruleUID, compressedData, timestamp}
_, err := sess.SQL(upsertSQL, params...).Query()
return err
}
func compressAlertInstances(instances []*pb.AlertInstance) ([]byte, error) {
mProto, err := proto.Marshal(&pb.AlertInstances{Instances: instances})
if err != nil {

View File

@@ -174,6 +174,166 @@ func TestCompressAndDecompressAlertInstances(t *testing.T) {
require.EqualExportedValues(t, alertInstances[1], decompressedInstances[1])
}
func TestConvertAndCompressAlertInstances(t *testing.T) {
now := time.Now()
modelInstances := []models.AlertInstance{
{
AlertInstanceKey: models.AlertInstanceKey{
RuleUID: "rule-uid-1",
RuleOrgID: 1,
LabelsHash: "hash-1",
},
Labels: map[string]string{"label-1": "value-1"},
CurrentState: models.InstanceStateFiring,
CurrentStateSince: now,
CurrentStateEnd: now.Add(time.Hour),
CurrentReason: "reason-1",
LastEvalTime: now.Add(-time.Minute),
LastSentAt: &now,
FiredAt: &now,
ResolvedAt: nil,
ResultFingerprint: "fingerprint-1",
},
{
AlertInstanceKey: models.AlertInstanceKey{
RuleUID: "rule-uid-1",
RuleOrgID: 1,
LabelsHash: "hash-2",
},
Labels: map[string]string{"label-2": "value-2"},
CurrentState: models.InstanceStateNormal,
CurrentStateSince: now,
CurrentStateEnd: now.Add(time.Hour),
CurrentReason: "reason-2",
LastEvalTime: now.Add(-time.Minute),
LastSentAt: nil,
FiredAt: nil,
ResolvedAt: &now,
ResultFingerprint: "fingerprint-2",
},
}
compressedData, err := convertAndCompressAlertInstances(modelInstances)
require.NoError(t, err)
require.NotEmpty(t, compressedData)
// Verify we can decompress and get back the same data
decompressedInstances, err := decompressAlertInstances(compressedData)
require.NoError(t, err)
require.Len(t, decompressedInstances, 2)
// Convert back to model to compare
for i, protoInstance := range decompressedInstances {
modelInstance := alertInstanceProtoToModel("rule-uid-1", 1, protoInstance)
require.Equal(t, modelInstances[i].Labels, modelInstance.Labels)
require.Equal(t, modelInstances[i].CurrentState, modelInstance.CurrentState)
require.Equal(t, modelInstances[i].LabelsHash, modelInstance.LabelsHash)
require.Equal(t, modelInstances[i].ResultFingerprint, modelInstance.ResultFingerprint)
}
}
func TestConvertAndCompressAlertInstances_EmptyInput(t *testing.T) {
emptyInstances := []models.AlertInstance{}
compressedData, err := convertAndCompressAlertInstances(emptyInstances)
require.NoError(t, err)
decompressedInstances, err := decompressAlertInstances(compressedData)
require.NoError(t, err)
require.Empty(t, decompressedInstances)
}
func TestFullSyncGroupingLogic(t *testing.T) {
now := time.Now()
// Test instances from multiple rules to verify grouping logic
instances := []models.AlertInstance{
{
AlertInstanceKey: models.AlertInstanceKey{
RuleUID: "rule-1",
RuleOrgID: 1,
LabelsHash: "hash-1-1",
},
Labels: models.InstanceLabels{"rule1": "instance1"},
CurrentState: models.InstanceStateFiring,
CurrentStateSince: now,
CurrentStateEnd: now.Add(time.Hour),
CurrentReason: "test reason 1",
LastEvalTime: now.Add(-time.Minute),
ResultFingerprint: "fingerprint-1-1",
},
{
AlertInstanceKey: models.AlertInstanceKey{
RuleUID: "rule-1",
RuleOrgID: 1,
LabelsHash: "hash-1-2",
},
Labels: models.InstanceLabels{"rule1": "instance2"},
CurrentState: models.InstanceStateNormal,
CurrentStateSince: now,
CurrentStateEnd: now.Add(time.Hour),
CurrentReason: "test reason 2",
LastEvalTime: now.Add(-time.Minute),
ResultFingerprint: "fingerprint-1-2",
},
{
AlertInstanceKey: models.AlertInstanceKey{
RuleUID: "rule-2",
RuleOrgID: 1,
LabelsHash: "hash-2-1",
},
Labels: models.InstanceLabels{"rule2": "instance1"},
CurrentState: models.InstanceStatePending,
CurrentStateSince: now,
CurrentStateEnd: now.Add(time.Hour),
CurrentReason: "test reason 3",
LastEvalTime: now.Add(-time.Minute),
ResultFingerprint: "fingerprint-2-1",
},
}
// Test the grouping logic that FullSync uses internally
ruleGroups := make(map[models.AlertRuleKeyWithGroup][]models.AlertInstance)
for _, instance := range instances {
ruleKey := models.AlertRuleKeyWithGroup{
AlertRuleKey: models.AlertRuleKey{
OrgID: instance.RuleOrgID,
UID: instance.RuleUID,
},
RuleGroup: "",
}
ruleGroups[ruleKey] = append(ruleGroups[ruleKey], instance)
}
// Verify grouping worked correctly
require.Len(t, ruleGroups, 2, "Should have 2 rule groups")
rule1Key := models.AlertRuleKeyWithGroup{
AlertRuleKey: models.AlertRuleKey{OrgID: 1, UID: "rule-1"},
RuleGroup: "",
}
rule2Key := models.AlertRuleKeyWithGroup{
AlertRuleKey: models.AlertRuleKey{OrgID: 1, UID: "rule-2"},
RuleGroup: "",
}
require.Len(t, ruleGroups[rule1Key], 2, "Rule 1 should have 2 instances")
require.Len(t, ruleGroups[rule2Key], 1, "Rule 2 should have 1 instance")
// Test compression for each group
for ruleKey, ruleInstances := range ruleGroups {
compressedData, err := convertAndCompressAlertInstances(ruleInstances)
require.NoError(t, err, "Compression should succeed for rule %s", ruleKey.UID)
require.NotEmpty(t, compressedData, "Compressed data should not be empty for rule %s", ruleKey.UID)
// Verify decompression works
decompressedInstances, err := decompressAlertInstances(compressedData)
require.NoError(t, err, "Decompression should succeed for rule %s", ruleKey.UID)
require.Len(t, decompressedInstances, len(ruleInstances), "Should have same number of instances after decompression for rule %s", ruleKey.UID)
}
}
func toProtoTimestampPtr(tm *time.Time) *timestamppb.Timestamp {
if tm == nil {
return nil

View File

@@ -114,9 +114,7 @@ func (w *parquetWriter) Close() error {
// writes the current buffer to parquet and re-inits the arrow buffer
func (w *parquetWriter) flush() error {
w.logger.Info("flush", "count", w.rv.Len())
//TODO: fix deprecation warning
//nolint:staticcheck
rec := array.NewRecord(w.schema, []arrow.Array{
rec := array.NewRecordBatch(w.schema, []arrow.Array{
w.rv.NewArray(),
w.namespace.NewArray(),
w.group.NewArray(),

View File

@@ -110,7 +110,7 @@ func NewBulkSettings(md metadata.MD) (BulkSettings, error) {
// All requests must be to the same NAMESPACE/GROUP/RESOURCE
func (s *server) BulkProcess(stream resourcepb.BulkStore_BulkProcessServer) error {
ctx := stream.Context()
ctx, span := s.tracer.Start(ctx, "resource.server.BulkProcess")
ctx, span := tracer.Start(ctx, "resource.server.BulkProcess")
defer span.End()
sendAndClose := func(rsp *resourcepb.BulkResponse) error {

View File

@@ -127,11 +127,8 @@ type SearchBackend interface {
GetOpenIndexes() []NamespacedResource
}
const tracingPrexfixSearch = "unified_search."
// This supports indexing+search regardless of implementation
type searchSupport struct {
tracer trace.Tracer
log *slog.Logger
storage StorageBackend
search SearchBackend
@@ -163,14 +160,11 @@ var (
_ resourcepb.ManagedObjectIndexServer = (*searchSupport)(nil)
)
func newSearchSupport(opts SearchOptions, storage StorageBackend, access types.AccessClient, blob BlobSupport, tracer trace.Tracer, indexMetrics *BleveIndexMetrics, ownsIndexFn func(key NamespacedResource) (bool, error)) (support *searchSupport, err error) {
func newSearchSupport(opts SearchOptions, storage StorageBackend, access types.AccessClient, blob BlobSupport, indexMetrics *BleveIndexMetrics, ownsIndexFn func(key NamespacedResource) (bool, error)) (support *searchSupport, err error) {
// No backend search support
if opts.Backend == nil {
return nil, nil
}
if tracer == nil {
return nil, fmt.Errorf("missing tracer")
}
if opts.InitWorkerThreads < 1 {
opts.InitWorkerThreads = 1
@@ -188,7 +182,6 @@ func newSearchSupport(opts SearchOptions, storage StorageBackend, access types.A
support = &searchSupport{
access: access,
tracer: tracer,
storage: storage,
search: opts.Backend,
log: slog.Default().With("logger", "resource-search"),
@@ -341,7 +334,7 @@ func (s *searchSupport) CountManagedObjects(ctx context.Context, req *resourcepb
// Search implements ResourceIndexServer.
func (s *searchSupport) Search(ctx context.Context, req *resourcepb.ResourceSearchRequest) (*resourcepb.ResourceSearchResponse, error) {
ctx, span := s.tracer.Start(ctx, tracingPrexfixSearch+"Search")
ctx, span := tracer.Start(ctx, "resource.searchSupport.Search")
defer span.End()
if req.Options.Key.Namespace == "" || req.Options.Key.Group == "" || req.Options.Key.Resource == "" {
@@ -499,7 +492,7 @@ func (s *searchSupport) buildIndexes(ctx context.Context) (int, error) {
func (s *searchSupport) init(ctx context.Context) error {
origCtx := ctx
ctx, span := s.tracer.Start(ctx, tracingPrexfixSearch+"Init")
ctx, span := tracer.Start(ctx, "resource.searchSupport.init")
defer span.End()
start := time.Now().Unix()
@@ -632,7 +625,7 @@ func (s *searchSupport) runIndexRebuilder(ctx context.Context) {
}
func (s *searchSupport) rebuildIndex(ctx context.Context, req rebuildRequest) {
ctx, span := s.tracer.Start(ctx, tracingPrexfixSearch+"RebuildIndex")
ctx, span := tracer.Start(ctx, "resource.searchSupport.rebuildIndex")
defer span.End()
l := s.log.With("namespace", req.Namespace, "group", req.Group, "resource", req.Resource)
@@ -731,7 +724,7 @@ func (s *searchSupport) getOrCreateIndex(ctx context.Context, key NamespacedReso
return nil, fmt.Errorf("search is not configured properly (missing unifiedStorageSearch feature toggle?)")
}
ctx, span := s.tracer.Start(ctx, tracingPrexfixSearch+"GetOrCreateIndex")
ctx, span := tracer.Start(ctx, "resource.searchSupport.getOrCreateIndex")
defer span.End()
span.SetAttributes(
attribute.String("namespace", key.Namespace),
@@ -808,7 +801,7 @@ func (s *searchSupport) getOrCreateIndex(ctx context.Context, key NamespacedReso
}
func (s *searchSupport) build(ctx context.Context, nsr NamespacedResource, size int64, indexBuildReason string, rebuild bool) (ResourceIndex, error) {
ctx, span := s.tracer.Start(ctx, tracingPrexfixSearch+"Build")
ctx, span := tracer.Start(ctx, "resource.searchSupport.build")
defer span.End()
span.SetAttributes(

View File

@@ -13,7 +13,6 @@ import (
"github.com/grafana/authlib/types"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/otel/trace/noop"
dashboardv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
@@ -211,7 +210,7 @@ func TestSearchGetOrCreateIndex(t *testing.T) {
InitMinCount: 1, // set min count to default for this test
}
support, err := newSearchSupport(opts, storage, nil, nil, noop.NewTracerProvider().Tracer("test"), nil, nil)
support, err := newSearchSupport(opts, storage, nil, nil, nil, nil)
require.NoError(t, err)
require.NotNil(t, support)
@@ -267,7 +266,7 @@ func TestSearchGetOrCreateIndexWithIndexUpdate(t *testing.T) {
}
// Enable searchAfterWrite
support, err := newSearchSupport(opts, storage, nil, nil, noop.NewTracerProvider().Tracer("test"), nil, nil)
support, err := newSearchSupport(opts, storage, nil, nil, nil, nil)
require.NoError(t, err)
require.NotNil(t, support)
@@ -316,7 +315,7 @@ func TestSearchGetOrCreateIndexWithCancellation(t *testing.T) {
InitMinCount: 1, // set min count to default for this test
}
support, err := newSearchSupport(opts, storage, nil, nil, noop.NewTracerProvider().Tracer("test"), nil, nil)
support, err := newSearchSupport(opts, storage, nil, nil, nil, nil)
require.NoError(t, err)
require.NotNil(t, support)
@@ -594,7 +593,7 @@ func TestFindIndexesForRebuild(t *testing.T) {
MinBuildVersion: semver.MustParse("5.5.5"),
}
support, err := newSearchSupport(opts, storage, nil, nil, noop.NewTracerProvider().Tracer("test"), nil, nil)
support, err := newSearchSupport(opts, storage, nil, nil, nil, nil)
require.NoError(t, err)
require.NotNil(t, support)
@@ -665,7 +664,7 @@ func TestRebuildIndexes(t *testing.T) {
Resources: supplier,
}
support, err := newSearchSupport(opts, storage, nil, nil, noop.NewTracerProvider().Tracer("test"), nil, nil)
support, err := newSearchSupport(opts, storage, nil, nil, nil, nil)
require.NoError(t, err)
require.NotNil(t, support)

View File

@@ -14,8 +14,7 @@ import (
"github.com/Masterminds/semver"
"github.com/google/uuid"
"github.com/prometheus/client_golang/prometheus"
"go.opentelemetry.io/otel/trace"
"go.opentelemetry.io/otel/trace/noop"
"go.opentelemetry.io/otel"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -30,6 +29,8 @@ import (
"github.com/grafana/grafana/pkg/util/scheduler"
)
var tracer = otel.Tracer("github.com/grafana/grafana/pkg/storage/unified/resource")
// ResourceServer implements all gRPC services
type ResourceServer interface {
resourcepb.ResourceStoreServer
@@ -210,9 +211,6 @@ type SearchOptions struct {
}
type ResourceServerOptions struct {
// OTel tracer
Tracer trace.Tracer
// Real storage backend
Backend StorageBackend
@@ -259,10 +257,6 @@ type ResourceServerOptions struct {
}
func NewResourceServer(opts ResourceServerOptions) (*server, error) {
if opts.Tracer == nil {
opts.Tracer = noop.NewTracerProvider().Tracer("resource-server")
}
if opts.Backend == nil {
return nil, fmt.Errorf("missing Backend implementation")
}
@@ -314,8 +308,8 @@ func NewResourceServer(opts ResourceServerOptions) (*server, error) {
}
blobstore, err = NewCDKBlobSupport(ctx, CDKBlobSupportOptions{
Tracer: opts.Tracer,
Bucket: NewInstrumentedBucket(bucket, opts.Reg, opts.Tracer),
Tracer: tracer,
Bucket: NewInstrumentedBucket(bucket, opts.Reg, tracer),
})
if err != nil {
return nil, err
@@ -331,7 +325,6 @@ func NewResourceServer(opts ResourceServerOptions) (*server, error) {
// Make this cancelable
ctx, cancel := context.WithCancel(context.Background())
s := &server{
tracer: opts.Tracer,
log: logger,
backend: opts.Backend,
blob: blobstore,
@@ -355,7 +348,7 @@ func NewResourceServer(opts ResourceServerOptions) (*server, error) {
if opts.Search.Resources != nil {
var err error
s.search, err = newSearchSupport(opts.Search, s.backend, s.access, s.blob, opts.Tracer, opts.IndexMetrics, opts.OwnsIndexFn)
s.search, err = newSearchSupport(opts.Search, s.backend, s.access, s.blob, opts.IndexMetrics, opts.OwnsIndexFn)
if err != nil {
return nil, err
}
@@ -373,7 +366,6 @@ func NewResourceServer(opts ResourceServerOptions) (*server, error) {
var _ ResourceServer = &server{}
type server struct {
tracer trace.Tracer
log *slog.Logger
backend StorageBackend
blob BlobSupport
@@ -651,7 +643,7 @@ func (s *server) checkFolderMovePermissions(ctx context.Context, user claims.Aut
}
func (s *server) Create(ctx context.Context, req *resourcepb.CreateRequest) (*resourcepb.CreateResponse, error) {
ctx, span := s.tracer.Start(ctx, "storage_server.Create")
ctx, span := tracer.Start(ctx, "resource.server.Create")
defer span.End()
if r := verifyRequestKey(req.Key); r != nil {
@@ -738,7 +730,7 @@ func (s *server) sleepAfterSuccessfulWriteOperation(res responseWithErrorResult,
}
func (s *server) Update(ctx context.Context, req *resourcepb.UpdateRequest) (*resourcepb.UpdateResponse, error) {
ctx, span := s.tracer.Start(ctx, "storage_server.Update")
ctx, span := tracer.Start(ctx, "resource.server.Update")
defer span.End()
rsp := &resourcepb.UpdateResponse{}
@@ -812,7 +804,7 @@ func (s *server) update(ctx context.Context, user claims.AuthInfo, req *resource
}
func (s *server) Delete(ctx context.Context, req *resourcepb.DeleteRequest) (*resourcepb.DeleteResponse, error) {
ctx, span := s.tracer.Start(ctx, "storage_server.Delete")
ctx, span := tracer.Start(ctx, "resource.server.Delete")
defer span.End()
rsp := &resourcepb.DeleteResponse{}
@@ -983,7 +975,7 @@ func (s *server) read(ctx context.Context, user claims.AuthInfo, req *resourcepb
}
func (s *server) List(ctx context.Context, req *resourcepb.ListRequest) (*resourcepb.ListResponse, error) {
ctx, span := s.tracer.Start(ctx, "storage_server.List")
ctx, span := tracer.Start(ctx, "resource.server.List")
defer span.End()
// The history + trash queries do not yet support additional filters

View File

@@ -62,7 +62,6 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
}
serverOptions := resource.ResourceServerOptions{
Tracer: opts.Tracer,
Blob: resource.BlobConfig{
URL: apiserverCfg.Key("blob_url").MustString(""),
},

View File

@@ -461,7 +461,7 @@ func TestIntegrationCRUD(t *testing.T) {
}
created, err := adminClient.Create(ctx, alertRule, v1.CreateOptions{})
require.ErrorContains(t, err, "invalid alert rule")
require.ErrorContains(t, err, "trigger interval must be a multiple of base evaluation interval")
require.Nil(t, created)
})
}
@@ -564,3 +564,148 @@ func TestIntegrationBasicAPI(t *testing.T) {
t.Logf("Got error: %s", err)
})
}
func TestIntegrationFolderLabelSyncAndValidation(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)
ctx := context.Background()
helper := common.GetTestHelper(t)
client := common.NewAlertRuleClient(t, helper.Org1.Admin)
// Prepare two folders for label sync update scenario
common.CreateTestFolder(t, helper, "test-folder-a")
common.CreateTestFolder(t, helper, "test-folder-b")
baseGen := ngmodels.RuleGen.With(
ngmodels.RuleMuts.WithUniqueUID(),
ngmodels.RuleMuts.WithUniqueTitle(),
ngmodels.RuleMuts.WithNamespaceUID("test-folder-a"),
ngmodels.RuleMuts.WithGroupName("test-group"),
ngmodels.RuleMuts.WithIntervalMatching(time.Duration(10)*time.Second),
)
t.Run("should keep folder label in sync with folder annotation on create and update", func(t *testing.T) {
rule := baseGen.Generate()
alertRule := &v0alpha1.AlertRule{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
Annotations: map[string]string{
v0alpha1.FolderAnnotationKey: "test-folder-a",
},
},
Spec: v0alpha1.AlertRuleSpec{
Title: rule.Title,
Expressions: v0alpha1.AlertRuleExpressionMap{
"A": {
QueryType: util.Pointer(rule.Data[0].QueryType),
DatasourceUID: util.Pointer(v0alpha1.AlertRuleDatasourceUID(rule.Data[0].DatasourceUID)),
Model: rule.Data[0].Model,
Source: util.Pointer(true),
RelativeTimeRange: &v0alpha1.AlertRuleRelativeTimeRange{
From: v0alpha1.AlertRulePromDurationWMillis("5m"),
To: v0alpha1.AlertRulePromDurationWMillis("0s"),
},
},
},
Trigger: v0alpha1.AlertRuleIntervalTrigger{
Interval: v0alpha1.AlertRulePromDuration(fmt.Sprintf("%ds", rule.IntervalSeconds)),
},
NoDataState: string(rule.NoDataState),
ExecErrState: string(rule.ExecErrState),
},
}
created, err := client.Create(ctx, alertRule, v1.CreateOptions{})
require.NoError(t, err)
defer func() { _ = client.Delete(ctx, created.Name, v1.DeleteOptions{}) }()
// On create, metadata.labels[v0alpha1.FolderLabelKey] should mirror annotation
require.Equal(t, "test-folder-a", created.Labels[v0alpha1.FolderLabelKey])
// Update annotation to point to a different folder and ensure label follows
updated := created.Copy().(*v0alpha1.AlertRule)
if updated.Annotations == nil {
updated.Annotations = map[string]string{}
}
updated.Annotations[v0alpha1.FolderAnnotationKey] = "test-folder-b"
after, err := client.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, "test-folder-b", after.Annotations[v0alpha1.FolderAnnotationKey])
require.Equal(t, "test-folder-b", after.Labels[v0alpha1.FolderLabelKey])
})
t.Run("should fail to create rule without folder annotation", func(t *testing.T) {
rule := baseGen.Generate()
alertRule := &v0alpha1.AlertRule{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
Annotations: map[string]string{}, // missing grafana.app/folder
},
Spec: v0alpha1.AlertRuleSpec{
Title: rule.Title,
Expressions: v0alpha1.AlertRuleExpressionMap{
"A": {
QueryType: util.Pointer(rule.Data[0].QueryType),
DatasourceUID: util.Pointer(v0alpha1.AlertRuleDatasourceUID(rule.Data[0].DatasourceUID)),
Model: rule.Data[0].Model,
Source: util.Pointer(true),
RelativeTimeRange: &v0alpha1.AlertRuleRelativeTimeRange{
From: v0alpha1.AlertRulePromDurationWMillis("5m"),
To: v0alpha1.AlertRulePromDurationWMillis("0s"),
},
},
},
Trigger: v0alpha1.AlertRuleIntervalTrigger{
Interval: v0alpha1.AlertRulePromDuration("10s"),
},
NoDataState: "NoData",
ExecErrState: "Error",
},
}
created, err := client.Create(ctx, alertRule, v1.CreateOptions{})
require.Error(t, err)
require.Nil(t, created)
})
t.Run("should fail to create rule with group labels preset", func(t *testing.T) {
rule := baseGen.Generate()
alertRule := &v0alpha1.AlertRule{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
Annotations: map[string]string{
v0alpha1.FolderAnnotationKey: "test-folder-a",
},
Labels: map[string]string{
v0alpha1.GroupLabelKey: "some-group",
v0alpha1.GroupIndexLabelKey: "0",
},
},
Spec: v0alpha1.AlertRuleSpec{
Title: rule.Title,
Expressions: v0alpha1.AlertRuleExpressionMap{
"A": {
QueryType: util.Pointer(rule.Data[0].QueryType),
DatasourceUID: util.Pointer(v0alpha1.AlertRuleDatasourceUID(rule.Data[0].DatasourceUID)),
Model: rule.Data[0].Model,
Source: util.Pointer(true),
RelativeTimeRange: &v0alpha1.AlertRuleRelativeTimeRange{
From: v0alpha1.AlertRulePromDurationWMillis("5m"),
To: v0alpha1.AlertRulePromDurationWMillis("0s"),
},
},
},
Trigger: v0alpha1.AlertRuleIntervalTrigger{Interval: v0alpha1.AlertRulePromDuration("10s")},
NoDataState: "NoData",
ExecErrState: "Error",
},
}
created, err := client.Create(ctx, alertRule, v1.CreateOptions{})
require.Error(t, err)
require.Nil(t, created)
})
}
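The tests above assert that `metadata.labels[v0alpha1.FolderLabelKey]` mirrors the folder annotation on both create and update. The core of that sync behavior can be sketched as a small map transform (the key constants here are assumed for illustration; the real names are `v0alpha1.FolderAnnotationKey` and `v0alpha1.FolderLabelKey`, and the real logic runs in an admission hook):

```go
package main

import "fmt"

// Assumed key values for this sketch only.
const (
	folderAnnotationKey = "grafana.app/folder"
	folderLabelKey      = "grafana.app/folder"
)

// syncFolderLabel copies the folder annotation into the folder label,
// the behavior the test above asserts on create and update.
// A minimal sketch, not the actual admission-hook implementation.
func syncFolderLabel(annotations, labels map[string]string) map[string]string {
	if labels == nil {
		labels = map[string]string{}
	}
	if folder, ok := annotations[folderAnnotationKey]; ok {
		labels[folderLabelKey] = folder
	}
	return labels
}

func main() {
	ann := map[string]string{folderAnnotationKey: "test-folder-a"}
	fmt.Println(syncFolderLabel(ann, nil)[folderLabelKey]) // test-folder-a
}
```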


@@ -454,7 +454,7 @@ func TestIntegrationCRUD(t *testing.T) {
}
created, err := adminClient.Create(ctx, recordingRule, v1.CreateOptions{})
-require.ErrorContains(t, err, "invalid alert rule")
+require.ErrorContains(t, err, "trigger interval must be a multiple of base evaluation interval")
require.Nil(t, created)
})
}
@@ -557,3 +557,139 @@ func TestIntegrationBasicAPI(t *testing.T) {
t.Logf("Got error: %s", err)
})
}
func TestIntegrationFolderLabelSyncAndValidation(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)
ctx := context.Background()
helper := common.GetTestHelper(t)
client := common.NewRecordingRuleClient(t, helper.Org1.Admin)
// Prepare two folders for label sync update scenario
common.CreateTestFolder(t, helper, "test-folder-a")
common.CreateTestFolder(t, helper, "test-folder-b")
baseGen := ngmodels.RuleGen.With(
ngmodels.RuleMuts.WithUniqueUID(),
ngmodels.RuleMuts.WithUniqueTitle(),
ngmodels.RuleMuts.WithNamespaceUID("test-folder-a"),
ngmodels.RuleMuts.WithGroupName("test-group"),
ngmodels.RuleMuts.WithAllRecordingRules(),
ngmodels.RuleMuts.WithIntervalMatching(time.Duration(10)*time.Second),
)
t.Run("should keep folder label in sync with folder annotation on create and update", func(t *testing.T) {
rule := baseGen.Generate()
recordingRule := &v0alpha1.RecordingRule{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
Annotations: map[string]string{
v0alpha1.FolderAnnotationKey: "test-folder-a",
},
},
Spec: v0alpha1.RecordingRuleSpec{
Title: rule.Title,
Metric: rule.Record.Metric,
Expressions: v0alpha1.RecordingRuleExpressionMap{
"A": {
QueryType: util.Pointer(rule.Data[0].QueryType),
DatasourceUID: util.Pointer(v0alpha1.RecordingRuleDatasourceUID(rule.Data[0].DatasourceUID)),
Model: rule.Data[0].Model,
Source: util.Pointer(true),
RelativeTimeRange: &v0alpha1.RecordingRuleRelativeTimeRange{
From: v0alpha1.RecordingRulePromDurationWMillis("5m"),
To: v0alpha1.RecordingRulePromDurationWMillis("0s"),
},
},
},
Trigger: v0alpha1.RecordingRuleIntervalTrigger{Interval: v0alpha1.RecordingRulePromDuration("10s")},
},
}
created, err := client.Create(ctx, recordingRule, v1.CreateOptions{})
require.NoError(t, err)
defer func() { _ = client.Delete(ctx, created.Name, v1.DeleteOptions{}) }()
// On create, metadata.labels[v0alpha1.FolderLabelKey] should mirror annotation
require.Equal(t, "test-folder-a", created.Labels[v0alpha1.FolderLabelKey])
updated := created.Copy().(*v0alpha1.RecordingRule)
if updated.Annotations == nil {
updated.Annotations = map[string]string{}
}
updated.Annotations[v0alpha1.FolderAnnotationKey] = "test-folder-b"
after, err := client.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, "test-folder-b", after.Annotations[v0alpha1.FolderAnnotationKey])
require.Equal(t, "test-folder-b", after.Labels[v0alpha1.FolderLabelKey])
})
t.Run("should fail to create recording rule without folder annotation", func(t *testing.T) {
rule := baseGen.Generate()
recordingRule := &v0alpha1.RecordingRule{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
Annotations: map[string]string{},
},
Spec: v0alpha1.RecordingRuleSpec{
Title: rule.Title,
Metric: rule.Record.Metric,
Expressions: v0alpha1.RecordingRuleExpressionMap{
"A": {
QueryType: util.Pointer(rule.Data[0].QueryType),
DatasourceUID: util.Pointer(v0alpha1.RecordingRuleDatasourceUID(rule.Data[0].DatasourceUID)),
Model: rule.Data[0].Model,
Source: util.Pointer(true),
RelativeTimeRange: &v0alpha1.RecordingRuleRelativeTimeRange{
From: v0alpha1.RecordingRulePromDurationWMillis("5m"),
To: v0alpha1.RecordingRulePromDurationWMillis("0s"),
},
},
},
Trigger: v0alpha1.RecordingRuleIntervalTrigger{Interval: v0alpha1.RecordingRulePromDuration("10s")},
},
}
created, err := client.Create(ctx, recordingRule, v1.CreateOptions{})
require.Error(t, err)
require.Nil(t, created)
})
t.Run("should fail to create rule with group labels preset", func(t *testing.T) {
rule := baseGen.Generate()
recordingRule := &v0alpha1.RecordingRule{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
Annotations: map[string]string{
v0alpha1.FolderAnnotationKey: "test-folder-a",
},
Labels: map[string]string{
v0alpha1.GroupLabelKey: "some-group",
v0alpha1.GroupIndexLabelKey: "0",
},
},
Spec: v0alpha1.RecordingRuleSpec{
Title: rule.Title,
Metric: rule.Record.Metric,
Expressions: v0alpha1.RecordingRuleExpressionMap{
"A": {
QueryType: util.Pointer(rule.Data[0].QueryType),
DatasourceUID: util.Pointer(v0alpha1.RecordingRuleDatasourceUID(rule.Data[0].DatasourceUID)),
Model: rule.Data[0].Model,
Source: util.Pointer(true),
RelativeTimeRange: &v0alpha1.RecordingRuleRelativeTimeRange{
From: v0alpha1.RecordingRulePromDurationWMillis("5m"),
To: v0alpha1.RecordingRulePromDurationWMillis("0s"),
},
},
},
Trigger: v0alpha1.RecordingRuleIntervalTrigger{Interval: v0alpha1.RecordingRulePromDuration("10s")},
},
}
created, err := client.Create(ctx, recordingRule, v1.CreateOptions{})
require.Error(t, err)
require.Nil(t, created)
})
}


@@ -0,0 +1,182 @@
package provisioning
import (
"context"
"fmt"
"net/http"
"testing"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/pkg/util/testutil"
)
func TestIntegrationProvisioning_JobValidation(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)
helper := runGrafana(t)
ctx := context.Background()
// Create a test repository first
const repo = "job-validation-test-repo"
testRepo := TestRepo{
Name: repo,
Target: "instance",
Copies: map[string]string{},
ExpectedDashboards: 0,
ExpectedFolders: 0,
}
helper.CreateRepo(t, testRepo)
tests := []struct {
name string
jobSpec map[string]interface{}
expectedErr string
}{
{
name: "job without action",
jobSpec: map[string]interface{}{
"repository": repo,
},
expectedErr: "spec.action: Required value: action must be specified",
},
{
name: "job with invalid action",
jobSpec: map[string]interface{}{
"action": "invalid-action",
"repository": repo,
},
expectedErr: "spec.action: Invalid value: \"invalid-action\": invalid action",
},
{
name: "pull job without pull options",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionPull),
"repository": repo,
},
expectedErr: "spec.pull: Required value: pull options required for pull action",
},
{
name: "push job without push options",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionPush),
"repository": repo,
},
expectedErr: "spec.push: Required value: push options required for push action",
},
{
name: "push job with invalid branch name",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionPush),
"repository": repo,
"push": map[string]interface{}{
"branch": "feature..branch", // Invalid: consecutive dots
"message": "Test commit",
},
},
expectedErr: "spec.push.branch: Invalid value: \"feature..branch\": invalid git branch name",
},
{
name: "push job with path traversal",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionPush),
"repository": repo,
"push": map[string]interface{}{
"path": "../../etc/passwd", // Invalid: path traversal
"message": "Test commit",
},
},
expectedErr: "spec.push.path: Invalid value: \"../../etc/passwd\"",
},
{
name: "delete job without paths or resources",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionDelete),
"repository": repo,
"delete": map[string]interface{}{},
},
expectedErr: "spec.delete: Required value: at least one path or resource must be specified",
},
{
name: "delete job with invalid path",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionDelete),
"repository": repo,
"delete": map[string]interface{}{
"paths": []string{"../invalid/path"},
},
},
expectedErr: "spec.delete.paths[0]: Invalid value: \"../invalid/path\"",
},
{
name: "move job without target path",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionMove),
"repository": repo,
"move": map[string]interface{}{
"paths": []string{"dashboard.json"},
},
},
expectedErr: "spec.move.targetPath: Required value: target path is required",
},
{
name: "move job without paths or resources",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionMove),
"repository": repo,
"move": map[string]interface{}{
"targetPath": "new-location/",
},
},
expectedErr: "spec.move: Required value: at least one path or resource must be specified",
},
{
name: "move job with invalid target path",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionMove),
"repository": repo,
"move": map[string]interface{}{
"paths": []string{"dashboard.json"},
"targetPath": "../../../etc/", // Invalid: path traversal
},
},
expectedErr: "spec.move.targetPath: Invalid value: \"../../../etc/\"",
},
{
name: "migrate job without migrate options",
jobSpec: map[string]interface{}{
"action": string(provisioning.JobActionMigrate),
"repository": repo,
},
expectedErr: "spec.migrate: Required value: migrate options required for migrate action",
},
}
for i, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Create the job object directly
jobObj := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": "provisioning.grafana.app/v0alpha1",
"kind": "Job",
"metadata": map[string]interface{}{
"name": fmt.Sprintf("test-job-validation-%d", i),
"namespace": "default",
},
"spec": tt.jobSpec,
},
}
// Try to create the job - should fail with validation error
_, err := helper.Jobs.Resource.Create(ctx, jobObj, metav1.CreateOptions{})
require.Error(t, err, "expected validation error for invalid job spec")
// Verify it's a validation error with correct status code
statusError := helper.RequireApiErrorStatus(err, metav1.StatusReasonInvalid, http.StatusUnprocessableEntity)
require.Contains(t, statusError.Message, tt.expectedErr, "error message should contain expected validation message")
})
}
}
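Several of the cases above reject path traversal (`"../../etc/passwd"`, `"../invalid/path"`, `"../../../etc/"`) and malformed git branch names (`"feature..branch"`). A self-contained sketch of checks with that shape (helper names and exact rule sets are assumptions; the real provisioning validators may differ):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// isSafeRepoPath rejects paths that escape the repository root,
// like the traversal cases rejected in the table above.
func isSafeRepoPath(p string) bool {
	cleaned := path.Clean(p)
	return !path.IsAbs(cleaned) && cleaned != ".." && !strings.HasPrefix(cleaned, "../")
}

// isValidBranchName applies a small subset of git's check-ref-format
// rules, enough to reject names with consecutive dots.
func isValidBranchName(name string) bool {
	if name == "" || strings.Contains(name, "..") {
		return false
	}
	if strings.HasPrefix(name, "/") || strings.HasSuffix(name, "/") || strings.HasSuffix(name, ".lock") {
		return false
	}
	return true
}

func main() {
	fmt.Println(isSafeRepoPath("../../etc/passwd")) // false
	fmt.Println(isSafeRepoPath("dashboard.json"))   // true
	fmt.Println(isValidBranchName("feature..branch")) // false
}
```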

Some files were not shown because too many files have changed in this diff.