Compare commits

..

19 Commits

Author SHA1 Message Date
Roberto Jimenez Sanchez
30219176e7 More improvements 2025-12-16 12:43:44 +01:00
Roberto Jimenez Sanchez
fa86386564 Commit some test changes 2025-12-16 10:43:38 +01:00
Roberto Jimenez Sanchez
755edef944 More changes 2025-12-15 17:49:11 +01:00
Roberto Jimenez Sanchez
f7bb66ea21 Some improvements 2025-12-15 17:22:03 +01:00
Roberto Jimenez Sanchez
909b9b6bc1 Move tests 2025-12-15 17:07:30 +01:00
Roberto Jimenez Sanchez
f0ea97d105 Remove invalid changes 2025-12-15 17:06:18 +01:00
Roberto Jimenez Sanchez
48f415e24b Remove some tests 2025-12-15 17:04:31 +01:00
Roberto Jimenez Sanchez
f954464825 Merge remote-tracking branch 'origin/main' into bugfix/files-authorization 2025-12-15 16:13:48 +01:00
Roberto Jimenez Sanchez
ca4b78f8ef Refactor provisioning tests to assert success for file operations on configured branches
Updated test cases in files_test.go to reflect the expected behavior of file deletion and movement operations on configured branches, changing assertions from error checks to success checks. This aligns with the recent changes in the provisioning logic that allow these operations to succeed instead of returning MethodNotAllowed.
2025-12-15 15:27:06 +01:00
Roberto Jimenez Sanchez
2e9d0a626e Merge remote-tracking branch 'origin/main' into bugfix/files-authorization 2025-12-15 15:26:31 +01:00
Roberto Jimenez Sanchez
af2c12228f Merge remote-tracking branch 'origin/main' into bugfix/files-authorization 2025-12-15 15:24:41 +01:00
Roberto Jimenez Sanchez
50ff5b976c Revert "Some fixes"
This reverts commit c73f9600d7.
2025-12-15 15:24:31 +01:00
Roberto Jimenez Sanchez
c73f9600d7 Some fixes 2025-12-15 13:37:09 +01:00
Roberto Jimenez Sanchez
1fbfa4d7fa Merge branch 'bugfix/deprecate-single-move-delete' into bugfix/files-authorization 2025-12-15 13:28:46 +01:00
Roberto Jimenez Sanchez
c6831199a2 Merge remote-tracking branch 'origin/main' into bugfix/deprecate-single-move-delete 2025-12-15 13:28:11 +01:00
Roberto Jimenez Sanchez
09e546a1f3 Provisioning: Add authorization integration tests for files endpoint
Adds comprehensive integration tests to verify authorization works correctly
for files endpoint operations. These endpoints are called by authenticated
users (not the provisioning service), so proper authorization is critical.

## Tests Added

### TestIntegrationProvisioning_FilesAuthorization
Tests authorization for different user roles (admin, editor, viewer):
- **GET operations**: All roles should be able to read files
- **POST operations** (create): Admin and editor can create, viewer cannot
- **PUT operations** (update): Admin and editor can update, viewer cannot
- **DELETE operations**: Admin and editor can delete, viewer cannot
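The role matrix above can be sketched as a table-driven check. This is a minimal illustration, not the actual test code; the `allowed` helper is hypothetical:

```go
package main

import "fmt"

// allowed reports whether a role may perform the given HTTP method against
// the files endpoint, per the matrix above: GET is open to every role,
// writes (POST/PUT/DELETE) are restricted to admin and editor.
func allowed(role, method string) bool {
	if method == "GET" {
		return true
	}
	switch role {
	case "admin", "editor":
		return true
	default: // viewer and anything else
		return false
	}
}

func main() {
	cases := []struct {
		role, method string
		want         bool
	}{
		{"admin", "POST", true},
		{"editor", "PUT", true},
		{"viewer", "DELETE", false},
		{"viewer", "GET", true},
	}
	for _, tc := range cases {
		fmt.Printf("%s %s -> %v (want %v)\n",
			tc.role, tc.method, allowed(tc.role, tc.method), tc.want)
	}
}
```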

### TestIntegrationProvisioning_FilesAuthorizationConfiguredBranch
Tests that single file/folder operations are properly blocked on the
configured branch (returns 405 MethodNotAllowed):
- DELETE on configured branch → MethodNotAllowed
- MOVE on configured branch → MethodNotAllowed
- DELETE/MOVE on non-configured branches → Authorization checked first

### TestIntegrationProvisioning_ProvisioningServiceIdentity
Verifies that the provisioning service itself (sync controller) can create
and update resources via the internal workflow, not via files endpoints.

## Test Results

✅ **POST (create) works correctly** - Proper authorization enforcement
✅ **Viewer role properly denied** - Access checker working for write ops
⚠️ **GET operations failing** - Access checker denying even admins (test env issue)
⚠️ **Branch operations** - Local repos don't support branches

## Key Findings

1. **Files endpoints are for users, not provisioning service**
   - Authenticated users call GET/POST/PUT/DELETE
   - Provisioning service uses internal sync workflow

2. **Authorization is resource-type based**
   - Uses access checker, not simple role checks
   - Properly validates permissions on dashboards, folders, etc.

3. **Test environment needs access checker configuration**
   - Current test setup doesn't grant access for test users
   - Need to investigate access checker setup in tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-15 13:27:00 +01:00
Roberto Jimenez Sanchez
3b56643aa2 Provisioning: Homogeneous authorization for file operations
Refactors authorization logic in dualwriter.go to ensure consistent and
secure validation across all file operations (create, update, delete, move).

## Key Changes

### 1. Homogeneous Authorization Flow
- All operations follow the same authorization pattern
- Simple validation checks (configured branch, path validation) happen BEFORE
  external service calls for performance
- Authorization checks happen consistently across all operations
- Provisioning service operates with admin-level privileges for resource types

### 2. Existing Resource Ownership Validation
- **CREATE**: Checks if resource UID already exists and validates permission
  to overwrite
- **UPDATE**: Validates permission on target resource
- **DELETE**: Validates permission on existing resource to prevent unauthorized
  deletion of resources owned by other repositories
- **MOVE**: When UID changes, validates permission to delete any existing
  resource with the new UID

### 3. Simplified Authorization Model
- Removed role-based authorization checks (editor/admin)
- Provisioning service is treated as admin-level for all operations
- Focus on resource-type level permissions via access checker
- Prevents cross-repository resource conflicts

### 4. Performance Optimization
- Simple checks (isConfiguredBranch, path validation) before external calls
- Avoids unnecessary authorization service calls when operation will be rejected
  based on simple rules

## Authorization Order

1. Parse and validate request
2. Check simple validation rules (configured branch check, etc.)
3. Authorize via external access checker
4. Check existing resource ownership (prevents cross-repo conflicts)
5. Execute operation

This ensures both good performance and comprehensive authorization.
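The five-step order can be sketched as follows. The function and type names (`authorize`, `request`, the checker and ownership callbacks) are hypothetical stand-ins for the external access checker and the existing-resource lookup in dualwriter.go, not the real signatures:

```go
package main

import (
	"errors"
	"fmt"
)

var errMethodNotAllowed = errors.New("405 method not allowed")

type request struct {
	op     string // "create", "update", "delete", "move"
	branch string
}

// authorize follows the order above: cheap local validation first, then the
// external access checker, then existing-resource ownership, then execution.
func authorize(req request, configuredBranch string,
	check func(request) error, owned func(request) error) error {
	// 1-2. Parse and simple validation rules: single delete/move on the
	// configured branch is rejected before any external call is made.
	if (req.op == "delete" || req.op == "move") && req.branch == configuredBranch {
		return errMethodNotAllowed
	}
	// 3. External access checker.
	if err := check(req); err != nil {
		return err
	}
	// 4. Existing resource ownership (prevents cross-repo conflicts).
	if err := owned(req); err != nil {
		return err
	}
	// 5. Execute the operation (elided in this sketch).
	return nil
}

func main() {
	noop := func(request) error { return nil }
	err := authorize(request{op: "delete", branch: "main"}, "main", noop, noop)
	fmt.Println(err) // rejected in step 2, before the checker is ever called
}
```

The point of the ordering is visible in `main`: the rejection happens without invoking either callback, which is exactly the performance property the commit describes.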

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-15 13:13:47 +01:00
Roberto Jimenez Sanchez
0250b37a4b fix: remove trailing whitespace in test file 2025-12-15 12:30:45 +01:00
Roberto Jimenez Sanchez
848c84204a Provisioning: Deprecate single file/folder move and delete on configured branch
Reject individual file and folder move/delete operations on the configured
branch via the single files endpoints (HTTP 405 MethodNotAllowed). Users
must use the bulk operations API (jobs API) instead.

Motivation:
- Reconciliation for these operations is not reliable as it must be
  recursive and cannot run synchronously since it could take a long time
- Simplifies authorization logic - fewer operations to secure and validate
- Reduces complexity and surface area for potential bugs
- Bulk operations via jobs API provide better control and observability

Operations on non-configured branches (e.g., creating PRs) continue to work
as before since they don't update the Grafana database.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-15 12:28:00 +01:00
187 changed files with 1324 additions and 5953 deletions

.github/CODEOWNERS vendored

@@ -208,7 +208,7 @@
/pkg/tests/apis/shorturl @grafana/sharing-squad
/pkg/tests/api/correlations/ @grafana/datapro
/pkg/tsdb/grafanads/ @grafana/grafana-backend-group
/pkg/tsdb/opentsdb/ @grafana/oss-big-tent
/pkg/tsdb/opentsdb/ @grafana/partner-datasources
/pkg/util/ @grafana/grafana-backend-group
/pkg/web/ @grafana/grafana-backend-group
@@ -260,7 +260,7 @@
/devenv/dev-dashboards/dashboards.go @grafana/dataviz-squad
/devenv/dev-dashboards/home.json @grafana/dataviz-squad
/devenv/dev-dashboards/datasource-elasticsearch/ @grafana/partner-datasources
/devenv/dev-dashboards/datasource-opentsdb/ @grafana/oss-big-tent
/devenv/dev-dashboards/datasource-opentsdb/ @grafana/partner-datasources
/devenv/dev-dashboards/datasource-influxdb/ @grafana/partner-datasources
/devenv/dev-dashboards/datasource-mssql/ @grafana/partner-datasources
/devenv/dev-dashboards/datasource-loki/ @grafana/plugins-platform-frontend
@@ -307,7 +307,7 @@
/devenv/docker/blocks/mysql_exporter/ @grafana/oss-big-tent
/devenv/docker/blocks/mysql_opendata/ @grafana/oss-big-tent
/devenv/docker/blocks/mysql_tests/ @grafana/oss-big-tent
/devenv/docker/blocks/opentsdb/ @grafana/oss-big-tent
/devenv/docker/blocks/opentsdb/ @grafana/partner-datasources
/devenv/docker/blocks/postgres/ @grafana/oss-big-tent
/devenv/docker/blocks/postgres_tests/ @grafana/oss-big-tent
/devenv/docker/blocks/prometheus/ @grafana/oss-big-tent
@@ -1101,7 +1101,7 @@ eslint-suppressions.json @grafanabot
/public/app/plugins/datasource/mixed/ @grafana/dashboards-squad
/public/app/plugins/datasource/mssql/ @grafana/partner-datasources
/public/app/plugins/datasource/mysql/ @grafana/oss-big-tent
/public/app/plugins/datasource/opentsdb/ @grafana/oss-big-tent
/public/app/plugins/datasource/opentsdb/ @grafana/partner-datasources
/public/app/plugins/datasource/grafana-postgresql-datasource/ @grafana/oss-big-tent
/public/app/plugins/datasource/prometheus/ @grafana/oss-big-tent
/public/app/plugins/datasource/cloud-monitoring/ @grafana/partner-datasources


@@ -111,13 +111,12 @@ jobs:
ownerRepo: 'grafana/grafana-enterprise'
from: ${{ needs.setup.outputs.release_branch }}
to: ${{ needs.create_next_release_branch_enterprise.outputs.branch }}
# Removed this for now since it doesn't work
# post_changelog_on_forum:
# needs: setup
# uses: grafana/grafana/.github/workflows/community-release.yml@main
# with:
# version: ${{ needs.setup.outputs.version }}
# dry_run: ${{ needs.setup.outputs.dry_run == 'true' }}
post_changelog_on_forum:
needs: setup
uses: grafana/grafana/.github/workflows/community-release.yml@main
with:
version: ${{ needs.setup.outputs.version }}
dry_run: ${{ needs.setup.outputs.dry_run == 'true' }}
create_github_release:
# a github release requires a git tag
# The github-release action retrieves the changelog using the /repos/grafana/grafana/contents/CHANGELOG.md API


@@ -3,7 +3,7 @@
# Others can set up the YAML LSP manually, which supports schemas: https://github.com/redhat-developer/yaml-language-server
# $schema: https://golangci-lint.run/jsonschema/golangci.jsonschema.json
version: '2'
version: "2"
run:
timeout: 15m
concurrency: 10
@@ -83,16 +83,6 @@ linters:
deny:
- pkg: github.com/grafana/grafana/pkg
desc: apps/playlist is not allowed to import grafana core
apps-dashboard:
list-mode: lax
files:
- ./apps/dashboard/*
- ./apps/dashboard/**/*
allow:
- github.com/grafana/grafana/pkg/apimachinery
deny:
- pkg: github.com/grafana/grafana/pkg
desc: apps/dashboard is not allowed to import grafana core
apps-secret:
list-mode: lax
files:
@@ -291,16 +281,16 @@ linters:
text: G306
- linters:
- gosec
text: '401'
text: "401"
- linters:
- gosec
text: '402'
text: "402"
- linters:
- gosec
text: '501'
text: "501"
- linters:
- gosec
text: '404'
text: "404"
- linters:
- errorlint
text: non-wrapping format verb for fmt.Errorf


@@ -15,7 +15,6 @@ require (
github.com/stretchr/testify v1.11.1
k8s.io/apimachinery v0.34.2
k8s.io/apiserver v0.34.2
k8s.io/client-go v0.34.2
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912
)
@@ -44,7 +43,6 @@ replace github.com/grafana/grafana/apps/plugins => ../plugins
replace github.com/prometheus/alertmanager => github.com/grafana/prometheus-alertmanager v0.25.1-0.20250911094103-5456b6e45604
require (
cel.dev/expr v0.24.0 // indirect
cloud.google.com/go/compute/metadata v0.9.0 // indirect
dario.cat/mergo v1.0.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
@@ -57,7 +55,6 @@ require (
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Masterminds/sprig/v3 v3.3.0 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/ProtonMail/go-crypto v1.1.6 // indirect
github.com/VividCortex/mysqlerr v0.0.0-20170204212430-6c6b55f8796f // indirect
github.com/alecthomas/units v0.0.0-20240927000941-0f3dac36c52b // indirect
@@ -88,7 +85,6 @@ require (
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cheekybits/genny v1.0.0 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
@@ -105,7 +101,6 @@ require (
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/gchaincl/sqlhooks v1.3.0 // indirect
github.com/getkin/kin-openapi v0.133.0 // indirect
@@ -149,13 +144,12 @@ require (
github.com/golang-migrate/migrate/v4 v4.7.0 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/cel-go v0.26.1 // indirect
github.com/google/flatbuffers v25.2.10+incompatible // indirect
github.com/google/gnostic-models v0.7.0 // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/google/wire v0.7.0 // indirect
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 // indirect
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // indirect
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 // indirect
@@ -168,7 +162,6 @@ require (
github.com/grafana/sqlds/v4 v4.2.7 // indirect
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.1.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.3 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.1-0.20191002090509-6af20e3a5340 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-hclog v1.6.3 // indirect
@@ -183,7 +176,6 @@ require (
github.com/hashicorp/memberlist v0.5.2 // indirect
github.com/hashicorp/yamux v0.1.2 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jaegertracing/jaeger-idl v0.5.0 // indirect
github.com/jessevdk/go-flags v1.6.1 // indirect
github.com/jmespath-community/go-jmespath v1.1.1 // indirect
@@ -256,9 +248,7 @@ require (
github.com/shurcooL/vfsgen v0.0.0-20230704071429-0000e147ea92 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/cobra v1.10.1 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/stoewer/go-strcase v1.3.1 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/tetratelabs/wazero v1.8.2 // indirect
github.com/thomaspoignant/go-feature-flag v1.42.0 // indirect
@@ -266,9 +256,6 @@ require (
github.com/woodsbury/decimal128 v1.3.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/zeebo/xxh3 v1.0.2 // indirect
go.etcd.io/etcd/api/v3 v3.6.4 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.6.4 // indirect
go.etcd.io/etcd/client/v3 v3.6.4 // indirect
go.mongodb.org/mongo-driver v1.17.4 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.63.0 // indirect
@@ -287,8 +274,6 @@ require (
go.opentelemetry.io/proto/otlp v1.9.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/mock v0.6.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.1 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.45.0 // indirect
@@ -312,26 +297,23 @@ require (
google.golang.org/grpc v1.77.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect
gopkg.in/alexcesaro/quotedprintable.v3 v3.0.0-20150716171945-2caba252f4dc // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/mail.v2 v2.3.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/src-d/go-errors.v1 v1.0.0 // indirect
gopkg.in/telebot.v3 v3.3.8 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/api v0.34.2 // indirect
k8s.io/apiextensions-apiserver v0.34.2 // indirect
k8s.io/client-go v0.34.2 // indirect
k8s.io/component-base v0.34.2 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kms v0.34.2 // indirect
k8s.io/utils v0.0.0-20250604170112-4c0f3b243397 // indirect
modernc.org/libc v1.66.10 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.40.1 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2 // indirect
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v6 v6.3.1 // indirect


@@ -282,7 +282,6 @@ github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03V
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cznic/b v0.0.0-20180115125044-35e9bbe41f07/go.mod h1:URriBxXwVq5ijiJ12C7iIZqlA69nTlI+LgI6/pwftG8=
github.com/cznic/fileutil v0.0.0-20180108211300-6a051e75936f/go.mod h1:8S58EK26zhXSxzv7NQFpnliaOQsmDUxvoQO3rt154Vg=
@@ -407,8 +406,6 @@ github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ=
github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg=
github.com/go-openapi/analysis v0.24.0 h1:vE/VFFkICKyYuTWYnplQ+aVr45vlG6NcZKC7BdIXhsA=
github.com/go-openapi/analysis v0.24.0/go.mod h1:GLyoJA+bvmGGaHgpfeDh8ldpGo69fAJg7eeMDMRCIrw=
github.com/go-openapi/errors v0.22.3 h1:k6Hxa5Jg1TUyZnOwV2Lh81j8ayNw5VVYLvKrp4zFKFs=
@@ -609,10 +606,8 @@ github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2z
github.com/gorilla/mux v1.7.1/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=
@@ -754,8 +749,6 @@ github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGw
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
github.com/jonboulle/clockwork v0.5.0 h1:Hyh9A8u51kptdkR+cqRpT1EebBwTn1oK9YfGYbdFz6I=
github.com/jonboulle/clockwork v0.5.0/go.mod h1:3mZlmanh0g2NDKO5TWZVJAfofYk64M7XN3SzBPjZF60=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
@@ -986,7 +979,6 @@ github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSg
github.com/pressly/goose/v3 v3.26.0 h1:KJakav68jdH0WDvoAcj8+n61WqOIaPGgH0bJWS6jpmM=
github.com/pressly/goose/v3 v3.26.0/go.mod h1:4hC1KrritdCxtuFsqgs1R4AU5bWtTAf+cnWvfhf2DNY=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
@@ -1004,7 +996,6 @@ github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6T
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
@@ -1019,7 +1010,6 @@ github.com/prometheus/common/sigv4 v0.1.0/go.mod h1:2Jkxxk9yYvCkE5G1sQT7GuEXm57J
github.com/prometheus/exporter-toolkit v0.14.0 h1:NMlswfibpcZZ+H0sZBiTjrA3/aBFHkNZqE+iCj5EmRg=
github.com/prometheus/exporter-toolkit v0.14.0/go.mod h1:Gu5LnVvt7Nr/oqTBUC23WILZepW0nffNo10XdhQcwWA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
@@ -1046,7 +1036,6 @@ github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0t
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/cors v1.11.1 h1:eU3gRzXLRK57F5rKMGMZURNdIG4EoAmX8k94r9wXWHA=
github.com/rs/cors v1.11.1/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sagikazarmark/crypt v0.6.0/go.mod h1:U8+INwJo3nBv1m6A/8OBXAq7Jnpspk5AxSgDyEQcea8=
github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc=
@@ -1069,8 +1058,6 @@ github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6Mwd
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8/go.mod h1:3n1Cwaq1E1/1lhQhtRK2ts/ZwZEhjcQeJQ1RuC6Q/8U=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
@@ -1084,7 +1071,6 @@ github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.13.0/go.mod h1:Icm2xNL3/8uyh/wFuB1jI7TiTNKp8632Nwegu+zgdYw=
@@ -1110,7 +1096,6 @@ github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.7.5/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
@@ -1127,8 +1112,6 @@ github.com/thomaspoignant/go-feature-flag v1.42.0/go.mod h1:y0QiWH7chHWhGATb/+Xq
github.com/tidwall/pretty v0.0.0-20180105212114-65a9db5fad51/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tjhop/slog-gokit v0.1.5 h1:ayloIUi5EK2QYB8eY4DOPO95/mRtMW42lUkp3quJohc=
github.com/tjhop/slog-gokit v0.1.5/go.mod h1:yA48zAHvV+Sg4z4VRyeFyFUNNXd3JY5Zg84u3USICq0=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75 h1:6fotK7otjonDflCTK0BCfls4SPy3NcCVb5dqqmbRknE=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75/go.mod h1:KO6IkyS8Y3j8OdNO85qEYBsRPuteD+YciPomcXdrMnk=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/uber/jaeger-client-go v2.30.0+incompatible h1:D6wyKGCecFaSRUpo8lCVbaOOb6ThwMmTEbhRwtKR97o=
github.com/uber/jaeger-client-go v2.30.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
@@ -1146,8 +1129,6 @@ github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcY
github.com/xanzy/go-gitlab v0.15.0/go.mod h1:8zdQa/ri1dfn8eS3Ir1SyfvOKlw7WBJ8DVThkpGiXrs=
github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I=
github.com/xdg/stringprep v1.0.0/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510 h1:S2dVYn90KE98chqDkyE9Z4N61UnQd+KOfgp5Iu53llk=
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -1158,8 +1139,6 @@ github.com/zeebo/assert v1.3.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
gitlab.com/nyarla/go-crypt v0.0.0-20160106005555-d9a5dc2b789b/go.mod h1:T3BPAOm2cqquPa0MKWeNkmOM5RQsRhkrwMWonFMN7fE=
go.etcd.io/bbolt v1.4.2 h1:IrUHp260R8c+zYx/Tm8QZr04CX+qWS5PGfPdevhdm1I=
go.etcd.io/bbolt v1.4.2/go.mod h1:Is8rSHO/b4f3XigBC0lL0+4FwAQv3HXEEIgFMuKHceM=
go.etcd.io/etcd/api/v3 v3.5.4/go.mod h1:5GB2vv4A4AOn3yk7MftYGHkUfGtDHnEraIjym4dYz5A=
go.etcd.io/etcd/api/v3 v3.6.4 h1:7F6N7toCKcV72QmoUKa23yYLiiljMrT4xCeBL9BmXdo=
go.etcd.io/etcd/api/v3 v3.6.4/go.mod h1:eFhhvfR8Px1P6SEuLT600v+vrhdDTdcfMzmnxVXXSbk=
@@ -1170,12 +1149,6 @@ go.etcd.io/etcd/client/v2 v2.305.4/go.mod h1:Ud+VUwIi9/uQHOMA+4ekToJ12lTxlv0zB/+
go.etcd.io/etcd/client/v3 v3.5.4/go.mod h1:ZaRkVgBZC+L+dLCjTcF1hRXpgZXQPOvnA/Ak/gq3kiY=
go.etcd.io/etcd/client/v3 v3.6.4 h1:YOMrCfMhRzY8NgtzUsHl8hC2EBSnuqbR3dh84Uryl7A=
go.etcd.io/etcd/client/v3 v3.6.4/go.mod h1:jaNNHCyg2FdALyKWnd7hxZXZxZANb0+KGY+YQaEMISo=
go.etcd.io/etcd/pkg/v3 v3.6.4 h1:fy8bmXIec1Q35/jRZ0KOes8vuFxbvdN0aAFqmEfJZWA=
go.etcd.io/etcd/pkg/v3 v3.6.4/go.mod h1:kKcYWP8gHuBRcteyv6MXWSN0+bVMnfgqiHueIZnKMtE=
go.etcd.io/etcd/server/v3 v3.6.4 h1:LsCA7CzjVt+8WGrdsnh6RhC0XqCsLkBly3ve5rTxMAU=
go.etcd.io/etcd/server/v3 v3.6.4/go.mod h1:aYCL/h43yiONOv0QIR82kH/2xZ7m+IWYjzRmyQfnCAg=
go.etcd.io/raft/v3 v3.6.0 h1:5NtvbDVYpnfZWcIHgGRk9DyzkBIXOi8j+DDp1IcnUWQ=
go.etcd.io/raft/v3 v3.6.0/go.mod h1:nLvLevg6+xrVtHUmVaTcTz603gQPHfh7kUAwV6YpfGo=
go.mongodb.org/mongo-driver v1.1.0/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.17.4 h1:jUorfmVzljjr0FLzYQsGP8cgN/qzzxlY9Vh0C9KFXVw=
go.mongodb.org/mongo-driver v1.17.4/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
@@ -1328,7 +1301,6 @@ golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181108082009-03003ca0c849/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1740,7 +1712,6 @@ google.golang.org/genproto/googleapis/api v0.0.0-20251111163417-95abcf5c77ba/go.
google.golang.org/genproto/googleapis/rpc v0.0.0-20251111163417-95abcf5c77ba h1:UKgtfRM7Yh93Sya0Fo8ZzhDP4qBckrrxEr2oF5UIVb8=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251111163417-95abcf5c77ba/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.18.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=


@@ -8,24 +8,18 @@ import (
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/k8s"
appsdkapiserver "github.com/grafana/grafana-app-sdk/k8s/apiserver"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana-app-sdk/operator"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/grafana/grafana-app-sdk/simple"
advisorapi "github.com/grafana/grafana/apps/advisor/pkg/apis"
advisorv0alpha1 "github.com/grafana/grafana/apps/advisor/pkg/apis/advisor/v0alpha1"
"github.com/grafana/grafana/apps/advisor/pkg/app/checkregistry"
"github.com/grafana/grafana/apps/advisor/pkg/app/checks"
"github.com/grafana/grafana/apps/advisor/pkg/app/checkscheduler"
"github.com/grafana/grafana/apps/advisor/pkg/app/checktyperegisterer"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/setting"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apiserver/pkg/authorization/authorizer"
"k8s.io/client-go/rest"
)
func New(cfg app.Config) (app.App, error) {
@@ -194,45 +188,3 @@ func GetKinds() map[schema.GroupVersion][]resource.Kind {
},
}
}
func ProvideAppInstaller(
authorizer authorizer.Authorizer,
checkRegistry checkregistry.CheckService,
cfg *setting.Cfg,
orgService org.Service,
) (*AdvisorAppInstaller, error) {
provider := simple.NewAppProvider(advisorapi.LocalManifest(), nil, New)
pluginConfig := cfg.PluginSettings["grafana-advisor-app"]
specificConfig := checkregistry.AdvisorAppConfig{
CheckRegistry: checkRegistry,
PluginConfig: pluginConfig,
StackID: cfg.StackID,
OrgService: orgService,
}
appCfg := app.Config{
KubeConfig: rest.Config{},
ManifestData: *advisorapi.LocalManifest().ManifestData,
SpecificConfig: specificConfig,
}
defaultInstaller, err := appsdkapiserver.NewDefaultAppInstaller(provider, appCfg, advisorapi.NewGoTypeAssociator())
if err != nil {
return nil, err
}
installer := &AdvisorAppInstaller{
AppInstaller: defaultInstaller,
authorizer: authorizer,
}
return installer, nil
}
type AdvisorAppInstaller struct {
appsdkapiserver.AppInstaller
authorizer authorizer.Authorizer
}
func (a *AdvisorAppInstaller) GetAuthorizer() authorizer.Authorizer {
return a.authorizer
}
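The `AdvisorAppInstaller` above swaps in a custom authorizer by embedding the default installer and shadowing a single method. That is plain Go interface embedding; a stdlib-only sketch (all names here are illustrative stand-ins, not the actual app-sdk types):

```go
package main

import "fmt"

// Installer is a hypothetical stand-in for appsdkapiserver.AppInstaller.
type Installer interface {
	Name() string
	AuthorizerName() string
}

type defaultInstaller struct{}

func (defaultInstaller) Name() string           { return "advisor" }
func (defaultInstaller) AuthorizerName() string { return "default" }

// customInstaller embeds the default installer and shadows only
// AuthorizerName, mirroring how AdvisorAppInstaller overrides
// GetAuthorizer while delegating everything else.
type customInstaller struct {
	Installer
	authz string
}

func (c customInstaller) AuthorizerName() string { return c.authz }

func main() {
	var i Installer = customInstaller{Installer: defaultInstaller{}, authz: "advisor-admin"}
	fmt.Println(i.Name(), i.AuthorizerName()) // prints "advisor advisor-admin"
}
```

The embedded interface promotes every method not explicitly redefined, so the wrapper only has to carry the one field it actually customizes.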

View File

@@ -0,0 +1,47 @@
package app
import (
"context"
claims "github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"k8s.io/apiserver/pkg/authorization/authorizer"
)
func GetAuthorizer() authorizer.Authorizer {
return authorizer.AuthorizerFunc(func(
ctx context.Context, attr authorizer.Attributes,
) (authorized authorizer.Decision, reason string, err error) {
if !attr.IsResourceRequest() {
return authorizer.DecisionNoOpinion, "", nil
}
// Check for service identity
if identity.IsServiceIdentity(ctx) {
return authorizer.DecisionAllow, "", nil
}
// Check for access policy identity
info, ok := claims.AuthInfoFrom(ctx)
if ok && claims.IsIdentityType(info.GetIdentityType(), claims.TypeAccessPolicy) {
// For access policy identities, we need to use ResourceAuthorizer
// This requires an AccessClient, which should be provided by the API server
// For now, we'll use the default ResourceAuthorizer from the API server
// This will be set up by the API server's authorization chain
return authorizer.DecisionNoOpinion, "", nil
}
// For regular Grafana users, check if they are admin
u, err := identity.GetRequester(ctx)
if err != nil {
return authorizer.DecisionDeny, "valid user is required", err
}
// check if the user is an admin
if u.HasRole(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "forbidden", nil
})
}
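The decision chain in `GetAuthorizer` (non-resource requests get no opinion, service identities are allowed, access-policy identities defer to the resource authorizer chain, regular users must be admins) can be reduced to a standalone function. This is a sketch with a hand-rolled `Decision` type and an `Identity` struct standing in for the Grafana/authlib context helpers, which a real implementation would import:

```go
package main

import "fmt"

// Decision mirrors the three-valued k8s authorizer decision.
type Decision int

const (
	DecisionDeny Decision = iota
	DecisionAllow
	DecisionNoOpinion
)

// Identity is a hypothetical stand-in for caller info derived from ctx.
type Identity struct {
	IsService      bool
	IsAccessPolicy bool
	IsAdmin        bool
}

// authorize follows the same ordering as the authorizer above.
func authorize(isResourceRequest bool, id Identity) (Decision, string) {
	if !isResourceRequest {
		return DecisionNoOpinion, ""
	}
	if id.IsService {
		return DecisionAllow, ""
	}
	if id.IsAccessPolicy {
		// Defer to the API server's resource-authorizer chain.
		return DecisionNoOpinion, ""
	}
	if id.IsAdmin {
		return DecisionAllow, ""
	}
	return DecisionDeny, "forbidden"
}

func main() {
	d, reason := authorize(true, Identity{IsAdmin: false})
	fmt.Println(d == DecisionDeny, reason) // prints "true forbidden"
}
```

The ordering matters: each branch short-circuits, so a service identity is never evaluated against the admin-role check.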

View File

@@ -0,0 +1,91 @@
package app
import (
"context"
"testing"
claims "github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/stretchr/testify/assert"
"k8s.io/apiserver/pkg/authorization/authorizer"
)
func TestGetAuthorizer(t *testing.T) {
tests := []struct {
name string
ctx context.Context
attr authorizer.Attributes
expectedDecision authorizer.Decision
expectedReason string
expectedErr error
}{
{
name: "non-resource request",
ctx: context.TODO(),
attr: &mockAttributes{resourceRequest: false},
expectedDecision: authorizer.DecisionNoOpinion,
expectedReason: "",
expectedErr: nil,
},
{
name: "user is admin",
ctx: identity.WithRequester(context.TODO(), &mockUser{isGrafanaAdmin: true}),
attr: &mockAttributes{resourceRequest: true},
expectedDecision: authorizer.DecisionAllow,
expectedReason: "",
expectedErr: nil,
},
{
name: "user is not admin",
ctx: identity.WithRequester(context.TODO(), &mockUser{isGrafanaAdmin: false}),
attr: &mockAttributes{resourceRequest: true},
expectedDecision: authorizer.DecisionDeny,
expectedReason: "forbidden",
expectedErr: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
auth := GetAuthorizer()
decision, reason, err := auth.Authorize(tt.ctx, tt.attr)
assert.Equal(t, tt.expectedDecision, decision)
assert.Equal(t, tt.expectedReason, reason)
assert.Equal(t, tt.expectedErr, err)
})
}
}
type mockAttributes struct {
authorizer.Attributes
resourceRequest bool
}
func (m *mockAttributes) IsResourceRequest() bool {
return m.resourceRequest
}
// Implement other methods of authorizer.Attributes as needed
type mockUser struct {
identity.Requester
isGrafanaAdmin bool
}
func (m *mockUser) GetIsGrafanaAdmin() bool {
return m.isGrafanaAdmin
}
func (m *mockUser) HasRole(role identity.RoleType) bool {
return role == identity.RoleAdmin && m.isGrafanaAdmin
}
func (m *mockUser) GetUID() string {
return "test-uid"
}
func (m *mockUser) GetIdentityType() claims.IdentityType {
return claims.TypeUser
}
// Implement other methods of identity.Requester as needed

View File

@@ -4,7 +4,7 @@ go 1.25.5
require (
github.com/go-kit/log v0.2.1
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk/logging v0.48.3

View File

@@ -216,10 +216,12 @@ github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 h1:jSojuc7njleS3UOz223WDlXOinmuLAIPI0z2vtq8EgI=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4/go.mod h1:VahT+GtfQIM+o8ht2StR6J9g+Ef+C2Vokh5uuSmOD/4=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=

View File

@@ -9,7 +9,6 @@ require (
github.com/grafana/grafana-app-sdk/logging v0.48.3
github.com/grafana/grafana-plugin-sdk-go v0.284.0
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250514132646-acbc7b54ed9e
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/prometheus/client_golang v1.23.2
github.com/stretchr/testify v1.11.1
k8s.io/apimachinery v0.34.2
@@ -58,6 +57,7 @@ require (
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-plugin v1.7.0 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/yamux v0.1.2 // indirect
github.com/jaegertracing/jaeger-idl v0.5.0 // indirect
github.com/josharian/intern v1.0.0 // indirect

View File

@@ -530,7 +530,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
"collapse": false,
"collapse": true,
"hideHeader": true,
"layout": {
"kind": "GridLayout",

View File

@@ -546,7 +546,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
"collapse": false,
"collapse": true,
"hideHeader": true,
"layout": {
"kind": "GridLayout",

View File

@@ -548,7 +548,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
"collapse": false,
"collapse": true,
"hideHeader": true,
"layout": {
"kind": "GridLayout",

View File

@@ -574,7 +574,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
"collapse": false,
"collapse": true,
"hideHeader": true,
"layout": {
"kind": "GridLayout",

View File

@@ -1663,7 +1663,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
"collapse": false,
"collapse": true,
"hideHeader": true,
"layout": {
"kind": "GridLayout",

View File

@@ -1727,7 +1727,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
"collapse": false,
"collapse": true,
"hideHeader": true,
"layout": {
"kind": "GridLayout",

View File

@@ -328,7 +328,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
"collapse": false,
"collapse": true,
"hideHeader": true,
"layout": {
"kind": "GridLayout",

View File

@@ -335,7 +335,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
"collapse": false,
"collapse": true,
"hideHeader": true,
"layout": {
"kind": "GridLayout",

View File

@@ -501,9 +501,11 @@ func convertToRowsLayout(ctx context.Context, panels []interface{}, dsIndexProvi
if currentRow != nil {
// If currentRow is a hidden-header row (panels before first explicit row),
// it should not be collapsed because it will disappear and be visible only in edit mode
// set its collapse to match the first explicit row's collapsed value
// This matches frontend behavior: collapse: panel.collapsed
if currentRow.Spec.HideHeader != nil && *currentRow.Spec.HideHeader {
currentRow.Spec.Collapse = &[]bool{false}[0]
rowCollapsed := getBoolField(panelMap, "collapsed", false)
currentRow.Spec.Collapse = &rowCollapsed
}
// Flush current row to layout
rows = append(rows, *currentRow)

View File

@@ -5,11 +5,12 @@ import (
"sync"
"time"
"github.com/hashicorp/golang-lru/v2/expirable"
"k8s.io/apiserver/pkg/endpoints/request"
"github.com/grafana/authlib/types"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/hashicorp/golang-lru/v2/expirable"
k8srequest "k8s.io/apiserver/pkg/endpoints/request"
"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
)
const defaultCacheSize = 1000
@@ -31,15 +32,17 @@ type cachedProvider[T any] struct {
fetch func(context.Context) T
cache *expirable.LRU[string, T] // LRU cache: namespace to cache entry
inFlight sync.Map // map[string]*sync.Mutex - per-namespace fetch locks
logger log.Logger
}
// newCachedProvider creates a new cachedProvider.
// The fetch function should be able to handle context with different namespaces.
// A non-positive size turns the LRU mechanism off (cache of unlimited size).
// A non-positive cacheTTL disables TTL expiration.
func newCachedProvider[T any](fetch func(context.Context) T, size int, cacheTTL time.Duration) *cachedProvider[T] {
func newCachedProvider[T any](fetch func(context.Context) T, size int, cacheTTL time.Duration, logger log.Logger) *cachedProvider[T] {
cacheProvider := &cachedProvider[T]{
fetch: fetch,
fetch: fetch,
logger: logger,
}
cacheProvider.cache = expirable.NewLRU(size, func(key string, value T) {
cacheProvider.inFlight.Delete(key)
@@ -50,13 +53,14 @@ func newCachedProvider[T any](fetch func(context.Context) T, size int, cacheTTL
// Get returns the cached value if it's still valid, otherwise calls fetch and caches the result.
func (p *cachedProvider[T]) Get(ctx context.Context) T {
// Get namespace info from ctx
namespace, ok := request.NamespaceFrom(ctx)
if !ok {
nsInfo, err := request.NamespaceInfoFrom(ctx, true)
if err != nil {
// No namespace, fall back to direct fetch call without caching
logging.FromContext(ctx).Warn("Unable to get namespace info from context, skipping cache")
p.logger.Warn("Unable to get namespace info from context, skipping cache", "error", err)
return p.fetch(ctx)
}
namespace := nsInfo.Value
// Fast path: check if cache is still valid
if entry, ok := p.cache.Get(namespace); ok {
return entry
@@ -77,7 +81,7 @@ func (p *cachedProvider[T]) Get(ctx context.Context) T {
}
// Fetch outside the main lock - only this namespace is blocked
logging.FromContext(ctx).Debug("cache miss or expired, fetching new value", "namespace", namespace)
p.logger.Debug("cache miss or expired, fetching new value", "namespace", namespace)
value := p.fetch(ctx)
// Update the cache for this namespace
@@ -89,12 +93,12 @@ func (p *cachedProvider[T]) Get(ctx context.Context) T {
// Preload loads data into the cache for the given namespaces.
func (p *cachedProvider[T]) Preload(ctx context.Context, nsInfos []types.NamespaceInfo) {
// Build the cache using a context with the namespace
logging.FromContext(ctx).Info("preloading cache", "nsInfos", len(nsInfos))
p.logger.Info("preloading cache", "nsInfos", len(nsInfos))
startedAt := time.Now()
defer func() {
logging.FromContext(ctx).Info("finished preloading cache", "nsInfos", len(nsInfos), "elapsed", time.Since(startedAt))
p.logger.Info("finished preloading cache", "nsInfos", len(nsInfos), "elapsed", time.Since(startedAt))
}()
for _, nsInfo := range nsInfos {
p.cache.Add(nsInfo.Value, p.fetch(request.WithNamespace(ctx, nsInfo.Value)))
p.cache.Add(nsInfo.Value, p.fetch(k8srequest.WithNamespace(ctx, nsInfo.Value)))
}
}
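The per-namespace single-flight pattern `cachedProvider` relies on — a shared cache plus one lock per namespace so concurrent fetches for the same namespace serialize while different namespaces proceed in parallel — can be sketched with only the standard library. This drops the TTL/LRU eviction of the real `expirable.LRU` and uses illustrative names:

```go
package main

import (
	"fmt"
	"sync"
)

// cached memoizes fetch results per namespace key.
type cached[T any] struct {
	fetch    func(ns string) T
	mu       sync.RWMutex
	values   map[string]T
	inFlight sync.Map // map[string]*sync.Mutex: one fetch lock per namespace
}

func newCached[T any](fetch func(ns string) T) *cached[T] {
	return &cached[T]{fetch: fetch, values: map[string]T{}}
}

func (c *cached[T]) Get(ns string) T {
	// Fast path: cache hit under the read lock.
	c.mu.RLock()
	v, ok := c.values[ns]
	c.mu.RUnlock()
	if ok {
		return v
	}
	// Serialize fetches for this namespace only.
	lockAny, _ := c.inFlight.LoadOrStore(ns, &sync.Mutex{})
	lock := lockAny.(*sync.Mutex)
	lock.Lock()
	defer lock.Unlock()
	// Re-check: another goroutine may have filled the entry meanwhile.
	c.mu.RLock()
	v, ok = c.values[ns]
	c.mu.RUnlock()
	if ok {
		return v
	}
	v = c.fetch(ns)
	c.mu.Lock()
	c.values[ns] = v
	c.mu.Unlock()
	return v
}

func main() {
	calls := 0
	c := newCached(func(ns string) int { calls++; return len(ns) })
	c.Get("default")
	c.Get("default") // served from cache, no second fetch
	fmt.Println(calls) // prints "1"
}
```

The double-check after acquiring the per-namespace lock is what prevents the thundering-herd refetch that a bare cache-miss-then-fetch would allow.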

View File

@@ -8,11 +8,11 @@ import (
"testing"
"time"
authlib "github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apiserver/pkg/endpoints/request"
authlib "github.com/grafana/authlib/types"
)
// testProvider tracks how many times get() is called
@@ -44,7 +44,7 @@ func TestCachedProvider_CacheHit(t *testing.T) {
underlying := newTestProvider(datasources)
// Test newCachedProvider directly instead of the wrapper
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
// Use "default" namespace (org 1) - this is the standard Grafana namespace format
ctx := request.WithNamespace(context.Background(), "default")
@@ -69,7 +69,7 @@ func TestCachedProvider_NamespaceIsolation(t *testing.T) {
}
underlying := newTestProvider(datasources)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
// Use "default" (org 1) and "org-2" (org 2) - standard Grafana namespace formats
ctx1 := request.WithNamespace(context.Background(), "default")
@@ -102,7 +102,7 @@ func TestCachedProvider_NoNamespaceFallback(t *testing.T) {
}
underlying := newTestProvider(datasources)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
// Context without namespace - should fall back to direct provider call
ctx := context.Background()
@@ -123,7 +123,7 @@ func TestCachedProvider_ConcurrentAccess(t *testing.T) {
}
underlying := newTestProvider(datasources)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
// Use "default" namespace (org 1)
ctx := request.WithNamespace(context.Background(), "default")
@@ -155,7 +155,7 @@ func TestCachedProvider_ConcurrentNamespaces(t *testing.T) {
}
underlying := newTestProvider(datasources)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
var wg sync.WaitGroup
numOrgs := 10
@@ -198,7 +198,7 @@ func TestCachedProvider_CorrectDataPerNamespace(t *testing.T) {
"org-2": {{UID: "org2-ds", Type: "loki", Name: "Org2 DS", Default: true}},
},
}
cached := newCachedProvider(underlying.Index, defaultCacheSize, time.Minute)
cached := newCachedProvider(underlying.Index, defaultCacheSize, time.Minute, log.New("test"))
// Use valid namespace formats
ctx1 := request.WithNamespace(context.Background(), "default")
@@ -228,7 +228,7 @@ func TestCachedProvider_PreloadMultipleNamespaces(t *testing.T) {
"org-3": {{UID: "org3-ds", Type: "tempo", Name: "Org3 DS", Default: true}},
},
}
cached := newCachedProvider(underlying.Index, defaultCacheSize, time.Minute)
cached := newCachedProvider(underlying.Index, defaultCacheSize, time.Minute, log.New("test"))
// Preload multiple namespaces
nsInfos := []authlib.NamespaceInfo{
@@ -346,7 +346,7 @@ func TestCachedProvider_TTLExpiration(t *testing.T) {
underlying := newTestProvider(datasources)
// Use a very short TTL for testing
shortTTL := 50 * time.Millisecond
cached := newCachedProvider(underlying.get, defaultCacheSize, shortTTL)
cached := newCachedProvider(underlying.get, defaultCacheSize, shortTTL, log.New("test"))
ctx := request.WithNamespace(context.Background(), "default")
@@ -379,7 +379,7 @@ func TestCachedProvider_ParallelNamespacesFetch(t *testing.T) {
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
},
}
cached := newCachedProvider(provider.get, defaultCacheSize, time.Minute)
cached := newCachedProvider(provider.get, defaultCacheSize, time.Minute, log.New("test"))
numNamespaces := 5
var wg sync.WaitGroup
@@ -421,7 +421,7 @@ func TestCachedProvider_SameNamespaceSerialFetch(t *testing.T) {
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
},
}
cached := newCachedProvider(provider.get, defaultCacheSize, time.Minute)
cached := newCachedProvider(provider.get, defaultCacheSize, time.Minute, log.New("test"))
numGoroutines := 10
var wg sync.WaitGroup

View File

@@ -3,6 +3,8 @@ package schemaversion
import (
"context"
"time"
"github.com/grafana/grafana/pkg/infra/log"
)
// Shared utility functions for datasource migrations across different schema versions.
@@ -34,7 +36,7 @@ func WrapIndexProviderWithCache(provider DataSourceIndexProvider, cacheTTL time.
return provider
}
return &cachedIndexProvider{
newCachedProvider[*DatasourceIndex](provider.Index, defaultCacheSize, cacheTTL),
newCachedProvider[*DatasourceIndex](provider.Index, defaultCacheSize, cacheTTL, log.New("schemaversion.dsindexprovider")),
}
}
@@ -44,7 +46,7 @@ func WrapLibraryElementProviderWithCache(provider LibraryElementIndexProvider, c
return provider
}
return &cachedLibraryElementProvider{
newCachedProvider[[]LibraryElementInfo](provider.GetLibraryElementInfo, defaultCacheSize, cacheTTL),
newCachedProvider[[]LibraryElementInfo](provider.GetLibraryElementInfo, defaultCacheSize, cacheTTL, log.New("schemaversion.leindexprovider")),
}
}

View File

@@ -75,9 +75,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": true,
"spotlight": false
"spotlight": false,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -154,9 +154,9 @@
"effects": {
"barGlow": false,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": false
"spotlight": false,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -233,9 +233,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": false
"spotlight": false,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -312,9 +312,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -391,9 +391,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -470,9 +470,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": false,
"spotlight": true
"spotlight": true,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -549,9 +549,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": false,
"spotlight": true
"spotlight": true,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -641,9 +641,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -720,9 +720,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -799,9 +799,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -878,9 +878,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -974,9 +974,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1053,9 +1053,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1132,9 +1132,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": true
},
"orientation": "auto",
"reduceOptions": {
@@ -1211,9 +1211,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1290,9 +1290,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1386,9 +1386,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": true
},
"orientation": "auto",
"reduceOptions": {
@@ -1469,9 +1469,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": true
},
"orientation": "auto",
"reduceOptions": {
@@ -1552,9 +1552,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": true
},
"orientation": "auto",
"reduceOptions": {
@@ -1643,9 +1643,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": true
},
"glow": "both",
"orientation": "auto",
@@ -1727,9 +1727,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": true
},
"glow": "both",
"orientation": "auto",
@@ -1825,9 +1825,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": true
},
"glow": "both",
"orientation": "auto",
@@ -1910,9 +1910,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": true
},
"glow": "both",
"orientation": "auto",
@@ -1994,9 +1994,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": true
},
"glow": "both",
"orientation": "auto",
@@ -2078,9 +2078,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true
"spotlight": true,
"gradient": true
},
"glow": "both",
"orientation": "auto",
@@ -2172,9 +2172,7 @@
},
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
@@ -2240,9 +2238,7 @@
},
"orientation": "auto",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
@@ -2279,4 +2275,4 @@
"title": "Panel tests - Gauge (new)",
"uid": "panel-tests-gauge-new",
"weekStart": ""
}
}

View File

@@ -955,9 +955,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false
"spotlight": false,
"gradient": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1162,4 +1162,4 @@
"title": "Panel tests - Old gauge to new",
"uid": "panel-tests-old-gauge-to-new",
"weekStart": ""
}
}

View File

@@ -221,7 +221,7 @@ require (
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 // indirect
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // indirect
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect

View File

@@ -817,8 +817,8 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=

View File

@@ -74,7 +74,7 @@ require (
github.com/google/gnostic-models v0.7.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 // indirect
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // indirect
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect

View File

@@ -174,8 +174,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=

View File

@@ -1327,10 +1327,6 @@ alertmanager_max_silences_count =
# Maximum silence size in bytes. Default: 0 (no limit).
alertmanager_max_silence_size_bytes =
# Maximum size of the expanded template output in bytes. Default: 10485760 (0 - no limit).
# The result of template expansion will be truncated to the limit.
alertmanager_max_template_output_bytes =
# Redis server address or addresses. It can be a single Redis address if using Redis standalone,
# or a list of comma-separated addresses if using Redis Cluster/Sentinel.
ha_redis_address =

View File

@@ -44,7 +44,7 @@ refs:
destination: /docs/grafana-cloud/alerting-and-irm/oncall/user-and-team-management/#available-grafana-oncall-rbac-roles--granted-actions
---
# Grafana RBAC role definitions
# RBAC role definitions
{{< admonition type="note" >}}
Available in [Grafana Enterprise](/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise/) and [Grafana Cloud](/docs/grafana-cloud).
@@ -59,7 +59,7 @@ The following tables list permissions associated with basic and fixed roles. Thi
| Grafana Admin | `basic_grafana_admin` |
| `fixed:authentication.config:writer`<br>`fixed:general.auth.config:writer`<br>`fixed:ldap:writer`<br>`fixed:licensing:writer`<br>`fixed:migrationassistant:migrator`<br>`fixed:org.users:writer`<br>`fixed:organization:maintainer`<br>`fixed:plugins:maintainer`<br>`fixed:provisioning:writer`<br>`fixed:roles:writer`<br>`fixed:settings:reader`<br>`fixed:settings:writer`<br>`fixed:stats:reader`<br>`fixed:support.bundles:writer`<br>`fixed:usagestats:reader`<br>`fixed:users:writer` | Default [Grafana server administrator](/docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/#grafana-server-administrators) assignments. |
| Admin | `basic_admin` | All roles assigned to Editor and `fixed:reports:writer` <br>`fixed:datasources:writer`<br>`fixed:organization:writer`<br>`fixed:datasources.permissions:writer`<br>`fixed:teams:writer`<br>`fixed:dashboards:writer`<br>`fixed:dashboards.permissions:writer`<br>`fixed:dashboards.public:writer`<br>`fixed:folders:writer`<br>`fixed:folders.permissions:writer`<br>`fixed:alerting:writer`<br>`fixed:alerting.provisioning.secrets:reader`<br>`fixed:alerting.provisioning:writer`<br>`fixed:datasources.caching:writer`<br>`fixed:plugins:writer`<br>`fixed:library.panels:writer` | Default [Grafana organization administrator](ref:rbac-basic-roles) assignments. |
| Editor | `basic_editor` | All roles assigned to Viewer and `fixed:datasources:explorer` <br>`fixed:dashboards:creator`<br>`fixed:folders:creator`<br>`fixed:annotations:writer`<br>`fixed:alerting:writer`<br>`fixed:library.panels:creator`<br>`fixed:library.panels:general.writer`<br>`fixed:alerting.provisioning.provenance:writer` | Default [Editor](ref:rbac-basic-roles) assignments. |
| Editor | `basic_editor` | All roles assigned to Viewer and `fixed:datasources:explorer` <br>`fixed:dashboards:creator`<br>`fixed:folders:creator`<br>`fixed:annotations:writer`<br>`fixed:alerting:writer`<br>`fixed:library.panels:creator`<br>`fixed:library.panels:general.writer`<br>`fixed:alerting.provisioning.status:writer` | Default [Editor](ref:rbac-basic-roles) assignments. |
| Viewer | `basic_viewer` | `fixed:datasources.id:reader`<br>`fixed:organization:reader`<br>`fixed:annotations:reader`<br>`fixed:annotations.dashboard:writer`<br>`fixed:alerting:reader`<br>`fixed:plugins.app:reader`<br>`fixed:dashboards.insights:reader`<br>`fixed:datasources.insights:reader`<br>`fixed:library.panels:general.reader`<br>`fixed:folders.general:reader`<br>`fixed:datasources.builtin:reader` | Default [Viewer](ref:rbac-basic-roles) assignments. |
| No Basic Role | n/a | | Default [No Basic Role](ref:rbac-basic-roles) assignments. |
{{< admonition type="note" >}}
These UUIDs won't be available if your instance was created before Grafana v10.2.
To learn how to use the roles API to determine the role UUIDs, refer to [Manage RBAC roles](ref:rbac-manage-rbac-roles).
{{< /admonition >}}
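If you script against the roles API, it's often convenient to build a lookup from role names to UUIDs. The following sketch assumes an illustrative response shape (a JSON array of objects with `uid` and `name` fields, based on the role names and UUIDs documented in the table below); it is not a verbatim server payload.

```python
import json

# Illustrative roles API response; the `uid` and `name` fields are
# assumptions for this sketch, using values from the table below.
sample_response = json.loads("""
[
  {"uid": "fixed_O2oP1_uBFozI2i93klAkcvEWR30", "name": "fixed:alerting:reader"},
  {"uid": "fixed_Sgr67JTOhjQGFlzYRahOe45TdWM", "name": "fixed:dashboards:reader"}
]
""")

def role_uids(roles):
    """Map each role name to its UUID so scripts can reference roles by UID."""
    return {role["name"]: role["uid"] for role in roles}

uids = role_uids(sample_response)
print(uids["fixed:dashboards:reader"])  # fixed_Sgr67JTOhjQGFlzYRahOe45TdWM
```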
| Fixed role | UUID | Permissions | Description |
| ----------------------------------------------- | ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fixed:alerting:reader` | `fixed_O2oP1_uBFozI2i93klAkcvEWR30` | All permissions from `fixed:alerting.rules:reader` <br>`fixed:alerting.instances:reader`<br>`fixed:alerting.notifications:reader` | Read-only permissions for all Grafana, Mimir, Loki and Alertmanager alert rules\*, alerts, contact points, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting:writer` | `fixed_-PAZgSJsDlRD8NUg-PFSeH_BkJY` | All permissions from `fixed:alerting.rules:writer` <br>`fixed:alerting.instances:writer`<br>`fixed:alerting.notifications:writer` | Create, update, and delete Grafana, Mimir, Loki and Alertmanager alert rules\*, silences, contact points, templates, mute timings, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting.instances:reader` | `fixed_ut5fVS-Ulh_ejFoskFhJT_rYg0Y` | `alert.instances:read` for organization scope <br> `alert.instances.external:read` for scope `datasources:*` | Read all alerts and silences in the organization produced by Grafana Alerts and Mimir and Loki alerts and silences.[\*](#alerting-roles) |
| `fixed:alerting.instances:writer` | `fixed_pKOBJE346uyqMLdgWbk1NsQfEl0` | All permissions from `fixed:alerting.instances:reader` and<br> `alert.instances:create`<br>`alert.instances:write` for organization scope <br> `alert.instances.external:write` for scope `datasources:*` | Create, update and expire all silences in the organization produced by Grafana, Mimir, and Loki.[\*](#alerting-roles) |
| `fixed:alerting.notifications:reader` | `fixed_hmBn0lX5h1RZXB9Vaot420EEdA0` | `alert.notifications:read` for organization scope<br>`alert.notifications.external:read` for scope `datasources:*` | Read all Grafana and Alertmanager contact points, templates, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting.notifications:writer`           | `fixed_XplK6HPNxf9AP5IGTdB5Iun4tJc` | All permissions from `fixed:alerting.notifications:reader` and<br>`alert.notifications:write` for organization scope<br>`alert.notifications.external:write` for scope `datasources:*` | Create, update, and delete contact points, templates, mute timings, and notification policies for Grafana and external Alertmanager.[\*](#alerting-roles) |
| `fixed:alerting.provisioning:writer`            | `fixed_y7pFjdEkxpx5ETdcxPvp0AgRuUo` | `alert.provisioning:read` and `alert.provisioning:write` | Create, update, and delete Grafana alert rules, notification policies, contact points, templates, and so on via the provisioning API.[\*](#alerting-roles) |
| `fixed:alerting.provisioning.secrets:reader`    | `fixed_9fmzXXZZG-Od0Amy2ofEG8Uk--c` | `alert.provisioning:read` and `alert.provisioning.secrets:read` | Read-only permissions for the provisioning API that also allow exporting resources with decrypted secrets.[\*](#alerting-roles) |
| `fixed:alerting.provisioning.provenance:writer` | `fixed_eAxlzfkTuobvKEgXHveFMBZrOj8` | `alert.provisioning.provenance:write` | Set the provenance status on alert rules, notification policies, contact points, and so on. Use together with the regular writer roles.[\*](#alerting-roles) |
| `fixed:alerting.rules:reader`                   | `fixed_fRGKL_vAqUsmUWq5EYKnOha9DcA` | `alert.rule:read`, `alert.silences:read` for scope `folders:*` <br> `alert.rules.external:read` for scope `datasources:*` <br> `alert.notifications.time-intervals:read` <br> `alert.notifications.receivers:list` | Read all Grafana, Mimir, and Loki alert rules\* and rule-specific silences.[\*](#alerting-roles) |
| `fixed:alerting.rules:writer`                   | `fixed_YJJGwAalUwDZPrXSyFH8GfYBXAc` | All permissions from `fixed:alerting.rules:reader` and <br> `alert.rule:create` <br> `alert.rule:write` <br> `alert.rule:delete` <br> `alert.silences:create` <br> `alert.silences:write` for scope `folders:*` <br> `alert.rules.external:write` for scope `datasources:*` | Create, update, and delete all Grafana, Mimir, and Loki alert rules\*, and manage rule-specific silences.[\*](#alerting-roles) |
| `fixed:annotations:reader` | `fixed_hpZnoizrfAJsrceNcNQqWYV-xNU` | `annotations:read` for scopes `annotations:type:*` | Read all annotations and annotation tags. |
| `fixed:annotations:writer`                      | `fixed_ZVW-Aa9Tzle6J4s2aUFcq1StKWE` | All permissions from `fixed:annotations:reader` <br>`annotations:write` <br>`annotations:create`<br> `annotations:delete` for scope `annotations:type:*` | Read, create, update, and delete all annotations and annotation tags. |
| `fixed:annotations.dashboard:writer`            | `fixed_8A775xenXeKaJk4Cr7bchP9yXOA` | `annotations:write` <br>`annotations:create`<br> `annotations:delete` for scope `annotations:type:dashboard` | Create, update, and delete dashboard annotations and annotation tags. |
| `fixed:authentication.config:writer` | `fixed_0rYhZ2Qnzs8AdB1nX7gexk3fHDw` | `settings:read` for scope `settings:auth.saml:*` <br> `settings:write` for scope `settings:auth.saml:*` | Read and update authentication and SAML settings. |
| `fixed:general.auth.config:writer` | `fixed_QFxIT_FGtBqbIVJIwx1bLgI5z6c` | `settings:read` for scope `settings:auth:oauth_allow_insecure_email_lookup` <br> `settings:write` for scope `settings:auth:oauth_allow_insecure_email_lookup` | Read and update the Grafana instance's general authentication configuration settings. |
| `fixed:dashboards:creator` | `fixed_ZorKUcEPCM01A1fPakEzGBUyU64` | `dashboards:create`<br>`folders:read` | Create dashboards. |
| `fixed:dashboards:reader` | `fixed_Sgr67JTOhjQGFlzYRahOe45TdWM` | `dashboards:read` | Read all dashboards. |
| `fixed:dashboards:writer` | `fixed_OK2YOQGIoI1G031hVzJB6rAJQAs` | All permissions from `fixed:dashboards:reader` and <br>`dashboards:write`<br>`dashboards:delete`<br>`dashboards:create`<br>`dashboards.permissions:read`<br>`dashboards.permissions:write` | Read, create, update, and delete all dashboards. |
| `fixed:dashboards.insights:reader` | `fixed_JlBJ2_gizP8zhgaeGE2rjyZe2Rs` | `dashboards.insights:read` | Read dashboard insights data and see presence indicators. |
| `fixed:dashboards.permissions:reader` | `fixed_f17oxuXW_58LL8mYJsm4T_mCeIw` | `dashboards.permissions:read` | Read all dashboard permissions. |
| `fixed:dashboards.permissions:writer` | `fixed_CcznxhWX_Yqn8uWMXMQ-b5iFW9k` | All permissions from `fixed:dashboards.permissions:reader` and <br>`dashboards.permissions:write` | Read and update all dashboard permissions. |
| `fixed:dashboards.public:writer` | `fixed_f_GHHRBciaqESXfGz2oCcooqHxs` | `dashboards.public:write` | Create, update, delete or pause a shared dashboard. |
| `fixed:datasources:creator` | `fixed_XX8jHREgUt-wo1A-rPXIiFlX6Zw` | `datasources:create` | Create data sources. |
| `fixed:datasources:explorer`                    | `fixed_qDzW9mzx9yM91T5Bi8dHUM2muTw` | `datasources:explore` | Enable the Explore feature. Data source permissions still apply; you can only query data sources for which you have query permissions. |
| `fixed:datasources:reader` | `fixed_C2x8IxkiBc1KZVjyYH775T9jNMQ` | `datasources:read`<br>`datasources:query` | Read and query data sources. |
| `fixed:datasources:writer` | `fixed_q8HXq8kjjA5IlHHgBJlKlUyaNik` | All permissions from `fixed:datasources:reader` and <br>`datasources:create`<br>`datasources:write`<br>`datasources:delete` | Read, query, create, delete, or update a data source. |
| `fixed:datasources.builtin:reader` | `fixed_q8HXq8kjjA5IlHHgBJlKlUyaNik` | `datasources:read` and `datasources:query` scoped to `datasources:uid:grafana` | An internal role used to grant Viewers access to the builtin example data source in Grafana. |
| `fixed:datasources.caching:reader` | `fixed_D2ddpGxJYlw0mbsTS1ek9fj0kj4` | `datasources.caching:read` | Read data source query caching settings. |
| `fixed:datasources.caching:writer` | `fixed_JtFjHr7jd7hSqUYcktKvRvIOGRE` | `datasources.caching:read`<br>`datasources.caching:write` | Enable, disable, or update query caching settings. |
| `fixed:datasources.id:reader` | `fixed_entg--fHmDqWY2-69N0ocawK0Os` | `datasources.id:read` | Read the ID of a data source based on its name. |
| `fixed:datasources.insights:reader` | `fixed_EBZ3NwlfecNPp2p0XcZRC1nfEYk` | `datasources.insights:read` | Read data source insights data. |
| `fixed:datasources.permissions:reader` | `fixed_ErYA-cTN3yn4h4GxaVPcawRhiOY` | `datasources.permissions:read` | Read data source permissions. |
| `fixed:datasources.permissions:writer` | `fixed_aiQh9YDfLOKjQhYasF9_SFUjQiw` | All permissions from `fixed:datasources.permissions:reader` and <br>`datasources.permissions:write` | Create, read, or delete permissions of a data source. |
| `fixed:folders:creator` | `fixed_gGLRbZGAGB6n9uECqSh_W382RlQ` | `folders:create` | Create folders in the root level. |
| `fixed:folders:reader` | `fixed_yeW-5QPeo-i5PZUIUXMlAA97GnQ` | `folders:read`<br>`dashboards:read` | Read all folders and dashboards. |
| `fixed:folders:writer` | `fixed_wJXLoTzgE7jVuz90dryYoiogL0o` | All permissions from `fixed:dashboards:writer` and <br>`folders:read`<br>`folders:write`<br>`folders:create`<br>`folders:delete`<br>`folders.permissions:read`<br>`folders.permissions:write` | Read, update, and delete all folders and dashboards. Create folders and subfolders. |
| `fixed:folders.general:reader` | `fixed_rSASbkg8DvpG_gTX5s41d7uxRvI` | `folders:read` scoped to `folders:uid:general` | An internal role used to correctly display access to the folder tree for Viewer role. |
| `fixed:folders.permissions:reader` | `fixed_E06l4cx0JFm47EeLBE4nmv3pnSo` | `folders.permissions:read` | Read all folder permissions. |
| `fixed:folders.permissions:writer` | `fixed_3GAgpQ_hWG8o7-lwNb86_VB37eI` | All permissions from `fixed:folders.permissions:reader` and <br>`folders.permissions:write` | Read and update all folder permissions. |
| `fixed:ldap:reader` | `fixed_lMcOPwSkxKY-qCK8NMJc5k6izLE` | `ldap.user:read`<br>`ldap.status:read` | Read the LDAP configuration and LDAP status information. |
| `fixed:ldap:writer` | `fixed_p6AvnU4GCQyIh7-hbwI-bk3GYnU` | All permissions from `fixed:ldap:reader` and <br>`ldap.user:sync`<br>`ldap.config:reload` | Read and update the LDAP configuration, and read LDAP status information. |
| `fixed:library.panels:creator`                  | `fixed_6eX6ItfegCIY5zLmPqTDW8ZV7KY` | `library.panels:create`<br>`folders:read` | Create library panels at the root level. |
| `fixed:library.panels:general.reader` | `fixed_ct0DghiBWR_2BiQm3EvNPDVmpio` | `library.panels:read` | Read all library panels at the root level. |
| `fixed:library.panels:general.writer` | `fixed_DgprkmqfN_1EhZ2v1_d1fYG8LzI` | All permissions from `fixed:library.panels:general.reader` plus<br>`library.panels:create`<br>`library.panels:delete`<br>`library.panels:write` | Create, read, write or delete all library panels and their permissions at the root level. |
| `fixed:library.panels:reader` | `fixed_tvTr9CnZ6La5vvUO_U_X1LPnhUs` | `library.panels:read` | Read all library panels. |
| `fixed:library.panels:writer` | `fixed_JTljAr21LWLTXCkgfBC4H0lhBC8` | All permissions from `fixed:library.panels:reader` plus<br>`library.panels:create`<br>`library.panels:delete`<br>`library.panels:write` | Create, read, write or delete all library panels and their permissions. |
| `fixed:licensing:reader` | `fixed_OADpuXvNEylO2Kelu3GIuBXEAYE` | `licensing:read`<br>`licensing.reports:read` | Read licensing information and licensing reports. |
| `fixed:licensing:writer` | `fixed_gzbz3rJpQMdaKHt-E4q0PVaKMoE` | All permissions from `fixed:licensing:reader` and <br>`licensing:write`<br>`licensing:delete` | Read licensing information and licensing reports, update and delete the license token. |
| `fixed:migrationassistant:migrator` | `fixed_LLk2p7TRuBztOAksTQb1Klc8YTk` | `migrationassistant:migrate` | Execute on-prem to cloud migrations through the Migration Assistant. |
| `fixed:org.users:reader` | `fixed_oCqNwlVHLOpw7-jAlwp4HzYqwGY` | `org.users:read` | Read users within a single organization. |
| `fixed:org.users:writer` | `fixed_VERj5nayasjgf_Yh0sWqqCkxWlw` | All permissions from `fixed:org.users:reader` and <br>`org.users:add`<br>`org.users:remove`<br>`org.users:write` | Within a single organization, add a user, invite a new user, read information about a user and their role, remove a user from that organization, or change the role of a user. |
| `fixed:organization:maintainer` | `fixed_CMm-uuBaPUBf4r8XG3jIvxo55bg` | All permissions from `fixed:organization:reader` and <br> `orgs:write`<br>`orgs:create`<br>`orgs:delete`<br>`orgs.quotas:write` | Create, read, write, or delete an organization. Read or write its quotas. This role needs to be assigned globally. |
| `fixed:organization:reader` | `fixed_0SZPJlTHdNEe8zO91zv7Zwiwa2w` | `orgs:read`<br>`orgs.quotas:read` | Read an organization and its quotas. |
| `fixed:organization:writer` | `fixed_Y4jGqDd8w1yCrPwlik8z5Iu8-3M` | All permissions from `fixed:organization:reader` and <br> `orgs:write`<br>`orgs.preferences:read`<br>`orgs.preferences:write` | Read an organization, its quotas, or its preferences. Update organization properties, or its preferences. |
| `fixed:plugins:maintainer` | `fixed_yEOKidBcWgbm74x-nTa3lW5lOyY` | `plugins:install` | Install and uninstall plugins. Needs to be assigned globally. |
| `fixed:plugins:writer` | `fixed_MRYpGk7kpNNwt2VoVOXFiPnQziE` | `plugins:write` | Enable and disable plugins and edit plugins' settings. |
| `fixed:plugins.app:reader` | `fixed_AcZRiNYx7NueYkUqzw1o2OGGUAA` | `plugins.app:access` | Access application plugins (still enforcing the organization role). |
| `fixed:provisioning:writer` | `fixed_bgk1FCyR6OEDwhgirZlQgu5LlCA` | `provisioning:reload` | Reload provisioning. |
| `fixed:reports:reader` | `fixed_72_8LU_0ukfm6BdblOw8Z9q-GQ8` | `reports:read`<br>`reports:send`<br>`reports.settings:read` | Read all reports and shared report settings. |
| `fixed:reports:writer` | `fixed_jBW3_7g1EWOjGVBYeVRwtFxhUNw` | All permissions from `fixed:reports:reader` and <br>`reports:create`<br>`reports:write`<br>`reports:delete`<br>`reports.settings:write` | Create, read, update, or delete all reports and shared report settings. |
| `fixed:roles:reader`                            | `fixed_GkfG-1NSwEGb4hpK3-E3qHyNltc` | `roles:read`<br>`teams.roles:read`<br>`users.roles:read`<br>`users.permissions:read` | Read all access control roles, and the roles and permissions assigned to users and teams. |
| `fixed:roles:resetter` | `fixed_WgPpC3qJRmVpVTJavFNwfS5RuzQ` | `roles:write` with scope `permissions:type:escalate` | Reset basic roles to their default. |
| `fixed:roles:writer`                            | `fixed_W5aFaw8isAM27x_eWfElBhZ0iOc` | All permissions from `fixed:roles:reader` and <br>`roles:write`<br>`roles:delete`<br>`teams.roles:add`<br>`teams.roles:remove`<br>`users.roles:add`<br>`users.roles:remove` | Create, read, update, or delete all roles, and assign or unassign roles to users and teams. |
| `fixed:serviceaccounts:creator` | `fixed_Ikw60fckA0MyiiZ73BawSfOULy4` | `serviceaccounts:create` | Create Grafana service accounts. |
| `fixed:serviceaccounts:reader` | `fixed_QFjJAZ88iawMLInYOxPA1DB1w6I` | `serviceaccounts:read` | Read Grafana service accounts. |
| `fixed:serviceaccounts:writer` | `fixed_iBvUNUEZBZ7PUW0vdkN5iojc2sk` | `serviceaccounts:read`<br>`serviceaccounts:create`<br>`serviceaccounts:write`<br>`serviceaccounts:delete`<br>`serviceaccounts.permissions:read`<br>`serviceaccounts.permissions:write` | Create, update, read and delete all Grafana service accounts and manage service account permissions. |
| `fixed:settings:reader` | `fixed_0LaUt1x6PP8hsZzEBhqPQZFUd8Q` | `settings:read` | Read Grafana instance settings. |
| `fixed:settings:writer` | `fixed_joIHDgMrGg790hMhUufVzcU4j44` | All permissions from `fixed:settings:reader` and<br>`settings:write` | Read and update Grafana instance settings. |
| `fixed:stats:reader` | `fixed_OnRCXxZVINWpcKvTF5A1gecJ7pA` | `server.stats:read` | Read Grafana instance statistics. |
| `fixed:support.bundles:reader` | `fixed_gcPjI3PTUJwRx-GJZwDhNa7zbos` | `support.bundles:read` | List and download support bundles. |
| `fixed:support.bundles:writer` | `fixed_dTgCv9Wxrp_WHAhwHYIgeboxKpE` | `support.bundles:read`<br>`support.bundles:create`<br>`support.bundles:delete` | Create, delete, list and download support bundles. |
| `fixed:teams:creator` | `fixed_nzVQoNSDSn0fg1MDgO6XnZX2RZI` | `teams:create`<br>`org.users:read` | Create a team and list organization users (required to manage the created team). |
| `fixed:teams:read` | `fixed_Z8pB0GQlrqRt8IZBCJQxPWvJPgQ` | `teams:read` | List all teams. |
| `fixed:teams:writer` | `fixed_xw1T0579h620MOYi4L96GUs7fZY` | `teams:create`<br>`teams:delete`<br>`teams:read`<br>`teams:write`<br>`teams.permissions:read`<br>`teams.permissions:write` | Create, read, update and delete teams and manage team memberships. |
| `fixed:usagestats:reader` | `fixed_eAM0azEvnWFCJAjNkUKnGL_1-bU` | `server.usagestats.report:read` | View usage statistics report. |
| `fixed:users:reader` | `fixed_buZastUG3reWyQpPemcWjGqPAd0` | `users:read`<br>`users.quotas:read`<br>`users.authtoken:read` | Read all users and their information, such as team memberships, authentication tokens, and quotas. |
| `fixed:users:writer`                            | `fixed_wjzgHHo_Ux25DJuELn_oiAdB_yM` | All permissions from `fixed:users:reader` and <br>`users:write`<br>`users:create`<br>`users:delete`<br>`users:enable`<br>`users:disable`<br>`users.password:write`<br>`users.permissions:write`<br>`users:logout`<br>`users.authtoken:write`<br>`users.quotas:write` | Read and update all attributes and settings for all users in Grafana: update or read user information, create, enable, or disable a user, make a user a Grafana administrator, sign out a user, update a user's authentication token, or update quotas for all users. |
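Many permissions in the table above are granted "for scope" expressions such as `folders:*` or `datasources:uid:grafana`, where a trailing `:*` acts as a wildcard over everything under that prefix. The following is a simplified, illustrative model of that wildcard convention, not Grafana's actual implementation:

```python
def scope_matches(granted: str, requested: str) -> bool:
    """Illustrative RBAC scope check: a grant of '*' covers everything,
    and a grant ending in ':*' covers any requested scope under that
    prefix (for example, 'folders:*' covers 'folders:uid:general').
    Simplified model only; not Grafana's implementation."""
    if granted == "*":
        return True
    if granted.endswith(":*"):
        # Strip the '*' but keep the trailing ':' so 'folders:*'
        # matches 'folders:uid:general' but not 'foldersx:...'.
        return requested.startswith(granted[:-1])
    return granted == requested

print(scope_matches("folders:*", "folders:uid:general"))       # True
print(scope_matches("datasources:*", "dashboards:uid:abc"))    # False
```

Under this model, `fixed:folders.general:reader` (scoped to `folders:uid:general`) matches only that exact scope, while `fixed:alerting.rules:reader` (scoped to `folders:*`) matches every folder.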
| Fixed role | UUID | Permissions | Description |
| -------------------------------------------- | ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fixed:alerting:reader` | `fixed_O2oP1_uBFozI2i93klAkcvEWR30` | All permissions from `fixed:alerting.rules:reader` <br>`fixed:alerting.instances:reader`<br>`fixed:alerting.notifications:reader` | Read-only permissions for all Grafana, Mimir, Loki and Alertmanager alert rules\*, alerts, contact points, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting:writer` | `fixed_-PAZgSJsDlRD8NUg-PFSeH_BkJY` | All permissions from `fixed:alerting.rules:writer` <br>`fixed:alerting.instances:writer`<br>`fixed:alerting.notifications:writer` | Create, update, and delete Grafana, Mimir, Loki and Alertmanager alert rules\*, silences, contact points, templates, mute timings, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting.instances:reader` | `fixed_ut5fVS-Ulh_ejFoskFhJT_rYg0Y` | `alert.instances:read` for organization scope <br> `alert.instances.external:read` for scope `datasources:*` | Read all alerts and silences in the organization produced by Grafana Alerts and Mimir and Loki alerts and silences.[\*](#alerting-roles) |
| `fixed:alerting.instances:writer` | `fixed_pKOBJE346uyqMLdgWbk1NsQfEl0` | All permissions from `fixed:alerting.instances:reader` and<br> `alert.instances:create`<br>`alert.instances:write` for organization scope <br> `alert.instances.external:write` for scope `datasources:*` | Create, update and expire all silences in the organization produced by Grafana, Mimir, and Loki.[\*](#alerting-roles) |
| `fixed:alerting.notifications:reader` | `fixed_hmBn0lX5h1RZXB9Vaot420EEdA0` | `alert.notifications:read` for organization scope<br>`alert.notifications.external:read` for scope `datasources:*` | Read all Grafana and Alertmanager contact points, templates, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting.notifications:writer` | `fixed_XplK6HPNxf9AP5IGTdB5Iun4tJc` | All permissions from `fixed:alerting.notifications:reader` and<br>`alert.notifications:write`for organization scope<br>`alert.notifications.external:read` for scope `datasources:*` | Create, update, and delete contact points, templates, mute timings and notification policies for Grafana and external Alertmanager.[\*](#alerting-roles) |
| `fixed:alerting.provisioning:writer` | `fixed_y7pFjdEkxpx5ETdcxPvp0AgRuUo` | `alert.provisioning:read` and `alert.provisioning:write` | Create, update and delete Grafana alert rules, notification policies, contact points, templates, etc via provisioning API. [\*](#alerting-roles) |
| `fixed:alerting.provisioning.secrets:reader` | `fixed_9fmzXXZZG-Od0Amy2ofEG8Uk--c` | `alert.provisioning:read` and `alert.provisioning.secrets:read` | Read-only permissions for Provisioning API and let export resources with decrypted secrets [\*](#alerting-roles) |
| `fixed:alerting.provisioning.status:writer` | `fixed_eAxlzfkTuobvKEgXHveFMBZrOj8` | `alert.provisioning.provenance:write` | Set provenance status to alert rules, notification policies, contact points, etc. Should be used together with regular writer roles. [\*](#alerting-roles) |
| `fixed:alerting.rules:reader` | `fixed_fRGKL_vAqUsmUWq5EYKnOha9DcA` | `alert.rule:read`, `alert.silences:read` for scope `folders:*` <br> `alert.rules.external:read` for scope `datasources:*` <br> `alert.notifications.time-intervals:read` <br> `alert.notifications.receivers:list` | Read all\* Grafana, Mimir, and Loki alert rules.[\*](#alerting-roles) and read rule-specific silences |
| `fixed:alerting.rules:writer` | `fixed_YJJGwAalUwDZPrXSyFH8GfYBXAc` | All permissions from `fixed:alerting.rules:reader` and <br> `alert.rule:create` <br> `alert.rule:write` <br> `alert.rule:delete` <br> `alert.silences:create` <br> `alert.silences:write` for scope `folders:*` <br> `alert.rules.external:write` for scope `datasources:*` | Create, update, and delete all\* Grafana, Mimir, and Loki alert rules.[\*](#alerting-roles) and manage rule-specific silences |
| `fixed:annotations:reader` | `fixed_hpZnoizrfAJsrceNcNQqWYV-xNU` | `annotations:read` for scopes `annotations:type:*` | Read all annotations and annotation tags. |
| `fixed:annotations:writer` | `fixed_ZVW-Aa9Tzle6J4s2aUFcq1StKWE` | All permissions from `fixed:annotations:reader` <br>`annotations:write` <br>`annotations.create`<br> `annotations:delete` for scope `annotations:type:*` | Read, create, update and delete all annotations and annotation tags. |
| `fixed:annotations.dashboard:writer` | `fixed_8A775xenXeKaJk4Cr7bchP9yXOA` | `annotations:write` <br>`annotations.create`<br> `annotations:delete` for scope `annotations:type:dashboard` | Create, update and delete dashboard annotations and annotation tags. |
| `fixed:authentication.config:writer` | `fixed_0rYhZ2Qnzs8AdB1nX7gexk3fHDw` | `settings:read` for scope `settings:auth.saml:*` <br> `settings:write` for scope `settings:auth.saml:*` | Read and update authentication and SAML settings. |
| `fixed:general.auth.config:writer` | `fixed_QFxIT_FGtBqbIVJIwx1bLgI5z6c` | `settings:read` for scope `settings:auth:oauth_allow_insecure_email_lookup` <br> `settings:write` for scope `settings:auth:oauth_allow_insecure_email_lookup` | Read and update the Grafana instance's general authentication configuration settings. |
| `fixed:dashboards:creator` | `fixed_ZorKUcEPCM01A1fPakEzGBUyU64` | `dashboards:create`<br>`folders:read` | Create dashboards. |
| `fixed:dashboards:reader` | `fixed_Sgr67JTOhjQGFlzYRahOe45TdWM` | `dashboards:read` | Read all dashboards. |
| `fixed:dashboards:writer` | `fixed_OK2YOQGIoI1G031hVzJB6rAJQAs` | All permissions from `fixed:dashboards:reader` and <br>`dashboards:write`<br>`dashboards:delete`<br>`dashboards:create`<br>`dashboards.permissions:read`<br>`dashboards.permissions:write` | Read, create, update, and delete all dashboards. |
| `fixed:dashboards.insights:reader` | `fixed_JlBJ2_gizP8zhgaeGE2rjyZe2Rs` | `dashboards.insights:read` | Read dashboard insights data and see presence indicators. |
| `fixed:dashboards.permissions:reader` | `fixed_f17oxuXW_58LL8mYJsm4T_mCeIw` | `dashboards.permissions:read` | Read all dashboard permissions. |
| `fixed:dashboards.permissions:writer` | `fixed_CcznxhWX_Yqn8uWMXMQ-b5iFW9k` | All permissions from `fixed:dashboards.permissions:reader` and <br>`dashboards.permissions:write` | Read and update all dashboard permissions. |
| `fixed:dashboards.public:writer` | `fixed_f_GHHRBciaqESXfGz2oCcooqHxs` | `dashboards.public:write` | Create, update, delete or pause a shared dashboard. |
| `fixed:datasources:creator` | `fixed_XX8jHREgUt-wo1A-rPXIiFlX6Zw` | `datasources:create` | Create data sources. |
| `fixed:datasources:explorer` | `fixed_qDzW9mzx9yM91T5Bi8dHUM2muTw` | `datasources:explore` | Enable the Explore feature. Data source permissions still apply, you can only query data sources for which you have query permissions. |
| `fixed:datasources:reader` | `fixed_C2x8IxkiBc1KZVjyYH775T9jNMQ` | `datasources:read`<br>`datasources:query` | Read and query data sources. |
| `fixed:datasources:writer` | `fixed_q8HXq8kjjA5IlHHgBJlKlUyaNik` | All permissions from `fixed:datasources:reader` and <br>`datasources:create`<br>`datasources:write`<br>`datasources:delete` | Read, query, create, delete, or update a data source. |
| `fixed:datasources.builtin:reader` | `fixed_q8HXq8kjjA5IlHHgBJlKlUyaNik` | `datasources:read` and `datasources:query` scoped to `datasources:uid:grafana` | An internal role used to grant Viewers access to the builtin example data source in Grafana. |
| `fixed:datasources.caching:reader` | `fixed_D2ddpGxJYlw0mbsTS1ek9fj0kj4` | `datasources.caching:read` | Read data source query caching settings. |
| `fixed:datasources.caching:writer` | `fixed_JtFjHr7jd7hSqUYcktKvRvIOGRE` | `datasources.caching:read`<br>`datasources.caching:write` | Enable, disable, or update query caching settings. |
| `fixed:datasources.id:reader` | `fixed_entg--fHmDqWY2-69N0ocawK0Os` | `datasources.id:read` | Read the ID of a data source based on its name. |
| `fixed:datasources.insights:reader` | `fixed_EBZ3NwlfecNPp2p0XcZRC1nfEYk` | `datasources.insights:read` | Read data source insights data. |
| `fixed:datasources.permissions:reader` | `fixed_ErYA-cTN3yn4h4GxaVPcawRhiOY` | `datasources.permissions:read` | Read data source permissions. |
| `fixed:datasources.permissions:writer` | `fixed_aiQh9YDfLOKjQhYasF9_SFUjQiw` | All permissions from `fixed:datasources.permissions:reader` and <br>`datasources.permissions:write` | Create, read, or delete permissions of a data source. |
| `fixed:folders:creator` | `fixed_gGLRbZGAGB6n9uECqSh_W382RlQ` | `folders:create` | Create folders in the root level. |
| `fixed:folders:reader` | `fixed_yeW-5QPeo-i5PZUIUXMlAA97GnQ` | `folders:read`<br>`dashboards:read` | Read all folders and dashboards. |
| `fixed:folders:writer` | `fixed_wJXLoTzgE7jVuz90dryYoiogL0o` | All permissions from `fixed:dashboards:writer` and <br>`folders:read`<br>`folders:write`<br>`folders:create`<br>`folders:delete`<br>`folders.permissions:read`<br>`folders.permissions:write` | Read, update, and delete all folders and dashboards. Create folders and subfolders. |
| `fixed:folders.general:reader` | `fixed_rSASbkg8DvpG_gTX5s41d7uxRvI` | `folders:read` scoped to `folders:uid:general` | An internal role used to correctly display access to the folder tree for Viewer role. |
| `fixed:folders.permissions:reader` | `fixed_E06l4cx0JFm47EeLBE4nmv3pnSo` | `folders.permissions:read` | Read all folder permissions. |
| `fixed:folders.permissions:writer` | `fixed_3GAgpQ_hWG8o7-lwNb86_VB37eI` | All permissions from `fixed:folders.permissions:reader` and <br>`folders.permissions:write` | Read and update all folder permissions. |
| `fixed:ldap:reader` | `fixed_lMcOPwSkxKY-qCK8NMJc5k6izLE` | `ldap.user:read`<br>`ldap.status:read` | Read the LDAP configuration and LDAP status information. |
| `fixed:ldap:writer` | `fixed_p6AvnU4GCQyIh7-hbwI-bk3GYnU` | All permissions from `fixed:ldap:reader` and <br>`ldap.user:sync`<br>`ldap.config:reload` | Read and update the LDAP configuration, and read LDAP status information. |
| `fixed:library.panels:creator` | `fixed_6eX6ItfegCIY5zLmPqTDW8ZV7KY` | `library.panels:create`<br>`folders:read` | Create library panel at the root level. |
| `fixed:library.panels:general.reader` | `fixed_ct0DghiBWR_2BiQm3EvNPDVmpio` | `library.panels:read` | Read all library panels at the root level. |
| `fixed:library.panels:general.writer` | `fixed_DgprkmqfN_1EhZ2v1_d1fYG8LzI` | All permissions from `fixed:library.panels:general.reader` plus<br>`library.panels:create`<br>`library.panels:delete`<br>`library.panels:write` | Create, read, write or delete all library panels and their permissions at the root level. |
| `fixed:library.panels:reader` | `fixed_tvTr9CnZ6La5vvUO_U_X1LPnhUs` | `library.panels:read` | Read all library panels. |
| `fixed:library.panels:writer` | `fixed_JTljAr21LWLTXCkgfBC4H0lhBC8` | All permissions from `fixed:library.panels:reader` plus<br>`library.panels:create`<br>`library.panels:delete`<br>`library.panels:write` | Create, read, write or delete all library panels and their permissions. |
| `fixed:licensing:reader` | `fixed_OADpuXvNEylO2Kelu3GIuBXEAYE` | `licensing:read`<br>`licensing.reports:read` | Read licensing information and licensing reports. |
| `fixed:licensing:writer` | `fixed_gzbz3rJpQMdaKHt-E4q0PVaKMoE` | All permissions from `fixed:licensing:reader` and <br>`licensing:write`<br>`licensing:delete` | Read licensing information and licensing reports, update and delete the license token. |
| `fixed:migrationassistant:migrator` | `fixed_LLk2p7TRuBztOAksTQb1Klc8YTk` | `migrationassistant:migrate` | Execute on-prem to cloud migrations through the Migration Assistant. |
| `fixed:org.users:reader` | `fixed_oCqNwlVHLOpw7-jAlwp4HzYqwGY` | `org.users:read` | Read users within a single organization. |
| `fixed:org.users:writer` | `fixed_VERj5nayasjgf_Yh0sWqqCkxWlw` | All permissions from `fixed:org.users:reader` and <br>`org.users:add`<br>`org.users:remove`<br>`org.users:write` | Within a single organization, add a user, invite a new user, read information about a user and their role, remove a user from that organization, or change the role of a user. |
| `fixed:organization:maintainer` | `fixed_CMm-uuBaPUBf4r8XG3jIvxo55bg` | All permissions from `fixed:organization:reader` and <br> `orgs:write`<br>`orgs:create`<br>`orgs:delete`<br>`orgs.quotas:write` | Create, read, write, or delete an organization. Read or write its quotas. This role needs to be assigned globally. |
| `fixed:organization:reader` | `fixed_0SZPJlTHdNEe8zO91zv7Zwiwa2w` | `orgs:read`<br>`orgs.quotas:read` | Read an organization and its quotas. |
| `fixed:organization:writer` | `fixed_Y4jGqDd8w1yCrPwlik8z5Iu8-3M` | All permissions from `fixed:organization:reader` and <br> `orgs:write`<br>`orgs.preferences:read`<br>`orgs.preferences:write` | Read an organization, its quotas, or its preferences. Update organization properties, or its preferences. |
| `fixed:plugins:maintainer` | `fixed_yEOKidBcWgbm74x-nTa3lW5lOyY` | `plugins:install` | Install and uninstall plugins. Needs to be assigned globally. |
| `fixed:plugins:writer` | `fixed_MRYpGk7kpNNwt2VoVOXFiPnQziE` | `plugins:write` | Enable and disable plugins and edit plugins' settings. |
| `fixed:plugins.app:reader` | `fixed_AcZRiNYx7NueYkUqzw1o2OGGUAA` | `plugins.app:access` | Access application plugins (still enforcing the organization role). |
| `fixed:provisioning:writer` | `fixed_bgk1FCyR6OEDwhgirZlQgu5LlCA` | `provisioning:reload` | Reload provisioning. |
| `fixed:reports:reader` | `fixed_72_8LU_0ukfm6BdblOw8Z9q-GQ8` | `reports:read`<br>`reports:send`<br>`reports.settings:read` | Read all reports and shared report settings. |
| `fixed:reports:writer` | `fixed_jBW3_7g1EWOjGVBYeVRwtFxhUNw` | All permissions from `fixed:reports:reader` and <br>`reports:create`<br>`reports:write`<br>`reports:delete`<br>`reports.settings:write` | Create, read, update, or delete all reports and shared report settings. |
| `fixed:roles:reader` | `fixed_GkfG-1NSwEGb4hpK3-E3qHyNltc` | `roles:read`<br>`teams.roles:read`<br>`users.roles:read`<br>`users.permissions:read` | Read all access control roles, as well as the roles and permissions assigned to users and teams. |
| `fixed:roles:resetter` | `fixed_WgPpC3qJRmVpVTJavFNwfS5RuzQ` | `roles:write` with scope `permissions:type:escalate` | Reset basic roles to their default. |
| `fixed:roles:writer` | `fixed_W5aFaw8isAM27x_eWfElBhZ0iOc` | All permissions from `fixed:roles:reader` and <br>`roles:write`<br>`roles:delete`<br>`teams.roles:add`<br>`teams.roles:remove`<br>`users.roles:add`<br>`users.roles:remove` | Create, read, update, or delete all roles, and assign or unassign roles for users and teams. |
| `fixed:serviceaccounts:creator` | `fixed_Ikw60fckA0MyiiZ73BawSfOULy4` | `serviceaccounts:create` | Create Grafana service accounts. |
| `fixed:serviceaccounts:reader` | `fixed_QFjJAZ88iawMLInYOxPA1DB1w6I` | `serviceaccounts:read` | Read Grafana service accounts. |
| `fixed:serviceaccounts:writer` | `fixed_iBvUNUEZBZ7PUW0vdkN5iojc2sk` | `serviceaccounts:read`<br>`serviceaccounts:create`<br>`serviceaccounts:write`<br>`serviceaccounts:delete`<br>`serviceaccounts.permissions:read`<br>`serviceaccounts.permissions:write` | Create, update, read and delete all Grafana service accounts and manage service account permissions. |
| `fixed:settings:reader` | `fixed_0LaUt1x6PP8hsZzEBhqPQZFUd8Q` | `settings:read` | Read Grafana instance settings. |
| `fixed:settings:writer` | `fixed_joIHDgMrGg790hMhUufVzcU4j44` | All permissions from `fixed:settings:reader` and<br>`settings:write` | Read and update Grafana instance settings. |
| `fixed:stats:reader` | `fixed_OnRCXxZVINWpcKvTF5A1gecJ7pA` | `server.stats:read` | Read Grafana instance statistics. |
| `fixed:support.bundles:reader` | `fixed_gcPjI3PTUJwRx-GJZwDhNa7zbos` | `support.bundles:read` | List and download support bundles. |
| `fixed:support.bundles:writer` | `fixed_dTgCv9Wxrp_WHAhwHYIgeboxKpE` | `support.bundles:read`<br>`support.bundles:create`<br>`support.bundles:delete` | Create, delete, list and download support bundles. |
| `fixed:teams:creator` | `fixed_nzVQoNSDSn0fg1MDgO6XnZX2RZI` | `teams:create`<br>`org.users:read` | Create a team and list organization users (required to manage the created team). |
| `fixed:teams:read` | `fixed_Z8pB0GQlrqRt8IZBCJQxPWvJPgQ` | `teams:read` | List all teams. |
| `fixed:teams:writer` | `fixed_xw1T0579h620MOYi4L96GUs7fZY` | `teams:create`<br>`teams:delete`<br>`teams:read`<br>`teams:write`<br>`teams.permissions:read`<br>`teams.permissions:write` | Create, read, update and delete teams and manage team memberships. |
| `fixed:usagestats:reader` | `fixed_eAM0azEvnWFCJAjNkUKnGL_1-bU` | `server.usagestats.report:read` | View usage statistics report. |
| `fixed:users:reader` | `fixed_buZastUG3reWyQpPemcWjGqPAd0` | `users:read`<br>`users.quotas:read`<br>`users.authtoken:read` | Read all users and their information, such as team memberships, authentication tokens, and quotas. |
| `fixed:users:writer` | `fixed_wjzgHHo_Ux25DJuELn_oiAdB_yM` | All permissions from `fixed:users:reader` and <br>`users:write`<br>`users:create`<br>`users:delete`<br>`users:enable`<br>`users:disable`<br>`users.password:write`<br>`users.permissions:write`<br>`users:logout`<br>`users.authtoken:write`<br>`users.quotas:write` | Read and update all attributes and settings for all users in Grafana: update user information, read user information, create, enable, or disable a user, make a user a Grafana administrator, sign out a user, update a user's authentication token, or update quotas for all users. |
### Alerting roles
Access to Grafana alert rules is an intersection of many permissions:
- Permission to read a folder. For example, the fixed role `fixed:folders:reader` includes the action `folders:read` and a folder scope `folders:id:`.
- Permission to query **all** data sources that a given alert rule uses. If a user cannot query a given data source, they cannot see any alert rules that query that data source.
There is one exception: the role `fixed:alerting.provisioning:writer` does not require the user to have any additional permissions, and provides access to all aspects of the alerting configuration through the provisioning API.
For more information about the permissions required to access alert rules, refer to [Create a custom role to access alerts in a folder](ref:plan-rbac-rollout-strategy-create-a-custom-role-to-access-alerts-in-a-folder).
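The intersection rule above can be illustrated with a toy sketch. This is not Grafana's code; the folder UIDs, data source UIDs, and data structures below are invented for illustration only:

```python
# Toy model of the permission intersection, NOT Grafana's implementation:
# the UIDs and data structures below are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class User:
    # action -> set of scopes granted for that action
    permissions: dict = field(default_factory=dict)


def can_view_rule(user: User, folder_uid: str, datasource_uids: list) -> bool:
    """A user sees a rule only if every required permission is present."""
    folder_scope = f"folders:uid:{folder_uid}"
    if folder_scope not in user.permissions.get("folders:read", set()):
        return False
    if folder_scope not in user.permissions.get("alert.rules:read", set()):
        return False
    # The user must be able to query *all* data sources the rule uses.
    queryable = user.permissions.get("datasources:query", set())
    return all(f"datasources:uid:{uid}" in queryable for uid in datasource_uids)


viewer = User(permissions={
    "folders:read": {"folders:uid:alerts"},
    "alert.rules:read": {"folders:uid:alerts"},
    "datasources:query": {"datasources:uid:prom"},
})
print(can_view_rule(viewer, "alerts", ["prom"]))          # True
print(can_view_rule(viewer, "alerts", ["prom", "loki"]))  # False: one data source missing
```

Note that in the second call a single missing `datasources:query` scope hides the rule entirely, even though the folder and rule permissions are granted.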
#### Alerting basic roles
The following table lists the default RBAC alerting role assignments to the basic roles:
| Basic role | Associated fixed roles | Description |
| ---------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
| Admin | `fixed:alerting:writer`<br>`fixed:alerting.provisioning.secrets:reader`<br>`fixed:alerting.provisioning:writer` | Default [Grafana organization administrator](ref:rbac-basic-roles) assignments. |
| Editor | `fixed:alerting:writer`<br>`fixed:alerting.provisioning.provenance:writer` | Default [Editor](ref:rbac-basic-roles) assignments. |
| Viewer | `fixed:alerting:reader` | Default [Viewer](ref:rbac-basic-roles) assignments. |
### Grafana OnCall roles
If you are using [Grafana OnCall](ref:oncall), you can try out the integration between Grafana OnCall and RBAC.


The following steps describe a basic configuration:

```ini
# The URL of the Loki server
loki_remote_url = http://localhost:3100

[feature_toggles]
enable = alertingCentralAlertHistory
```
1. **Configure the Loki data source in Grafana**


# Configure RBAC
[Role-based access control (RBAC)](/docs/grafana/latest/administration/roles-and-permissions/access-control/plan-rbac-rollout-strategy/) for Grafana Enterprise and Grafana Cloud provides a standardized way of granting, changing, and revoking access, so that users can view and modify Grafana resources.
A user is any individual who can log in to Grafana. Each user is associated with a role that includes permissions. Permissions determine the tasks a user can perform in the system.
Each permission contains one or more actions and a scope.
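Scopes such as `folders:uid:*` act as wildcards over concrete resource identifiers. A minimal sketch of prefix-style wildcard matching follows; it illustrates the general idea only and is not Grafana's actual scope matcher:

```python
# Minimal sketch of wildcard scope matching, NOT Grafana's actual matcher.
def scope_matches(granted: str, requested: str) -> bool:
    if granted == "*" or granted == requested:
        return True
    if granted.endswith(":*"):
        # "folders:uid:*" matches anything starting with "folders:uid:"
        return requested.startswith(granted[:-1])
    return False


print(scope_matches("folders:uid:*", "folders:uid:abc123"))  # True
print(scope_matches("folders:uid:abc", "folders:uid:xyz"))   # False
```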
## Role types
Grafana has three types of roles for managing access:
- **Basic roles**: Admin, Editor, Viewer, and No basic role. These are assigned to users and provide default access levels.
- **Fixed roles**: Predefined groups of permissions for specific use cases. Basic roles automatically include certain fixed roles.
- **Custom roles**: User-defined roles that combine specific permissions for granular access control.
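In Grafana Enterprise and Grafana Cloud, custom roles can also be defined declaratively through RBAC provisioning files. The following is a minimal sketch with an illustrative file path, role name, and permission set; check the exact schema for your Grafana version:

```yaml
# provisioning/access-control/custom-roles.yaml (illustrative path and values)
apiVersion: 2
roles:
  - name: 'custom:reports:editor'
    description: 'Create and read reports'
    version: 1
    orgId: 1
    permissions:
      - action: 'reports:read'
      - action: 'reports:create'
```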
## Basic role permissions
The following table summarizes the default alerting permissions for each basic role.
| Capability | Admin | Editor | Viewer |
| ----------------------------------------- | :---: | :----: | :----: |
| View alert rules | ✓ | ✓ | ✓ |
| Create, edit, and delete alert rules | ✓ | ✓ | |
| View silences | ✓ | ✓ | ✓ |
| Create, edit, and expire silences | ✓ | ✓ | |
| View contact points and templates | ✓ | ✓ | ✓ |
| Create, edit, and delete contact points | ✓ | ✓ | |
| View notification policies | ✓ | ✓ | ✓ |
| Create, edit, and delete policies | ✓ | ✓ | |
| View mute timings | ✓ | ✓ | ✓ |
| Create, edit, and delete timing intervals | ✓ | ✓ | |
| Access provisioning API | ✓ | ✓ | |
| Export with decrypted secrets | ✓ | | |
{{< admonition type="note" >}}
Access to alert rules also requires permission to read the folder containing the rules and permission to query the data sources used in the rules.
{{< /admonition >}}
## Permissions
Grafana Alerting has the following permissions, organized by resource type.
### Alert rules
Permissions for managing Grafana-managed alert rules.
| Action | Applicable scope | Description |
| -------------------- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `alert.rules:create` | `folders:*`<br>`folders:uid:*` | Create Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder and `datasources:query` in the scope of data sources the user can query. |
| `alert.rules:read` | `folders:*`<br>`folders:uid:*` | Read Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. |
| `alert.rules:write` | `folders:*`<br>`folders:uid:*` | Update Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. To allow query modifications add `datasources:query` in the scope of data sources the user can query. |
| `alert.rules:delete` | `folders:*`<br>`folders:uid:*` | Delete Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. |
### External alert rules
Permissions for managing alert rules in external data sources that support alerting.
| Action | Applicable scope | Description |
| ---------------------------- | -------------------------------------- | ---------------------------------------------------------------------------------------------- |
| `alert.rules.external:read` | `datasources:*`<br>`datasources:uid:*` | Read alert rules in data sources that support alerting (Prometheus, Mimir, and Loki). |
| `alert.rules.external:write` | `datasources:*`<br>`datasources:uid:*` | Create, update, and delete alert rules in data sources that support alerting (Mimir and Loki). |
### Alert instances and silences
Permissions for managing alert instances and silences in Grafana.
| Action | Applicable scope | Description |
| ------------------------ | ------------------------------ | ------------------------------------------------------------------------------------ |
| `alert.instances:read` | n/a | Read alerts and silences in the current organization. |
| `alert.instances:create` | n/a | Create silences in the current organization. |
| `alert.instances:write` | n/a | Update and expire silences in the current organization. |
| `alert.silences:read` | `folders:*`<br>`folders:uid:*` | Read all general silences and rule-specific silences in a folder and its subfolders. |
| `alert.silences:create` | `folders:*`<br>`folders:uid:*` | Create rule-specific silences in a folder and its subfolders. |
| `alert.silences:write` | `folders:*`<br>`folders:uid:*` | Update and expire rule-specific silences in a folder and its subfolders. |
### External alert instances
Permissions for managing alert instances in external data sources.
| Action | Applicable scope | Description |
| -------------------------------- | -------------------------------------- | ----------------------------------------------------------------- |
| `alert.instances.external:read` | `datasources:*`<br>`datasources:uid:*` | Read alerts and silences in data sources that support alerting. |
| `alert.instances.external:write` | `datasources:*`<br>`datasources:uid:*` | Manage alerts and silences in data sources that support alerting. |
### Contact points
Permissions for managing contact points (notification receivers).
| Action | Applicable scope | Description |
| -------------------------------------------- | ---------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| `alert.notifications.receivers:list` | n/a | List contact points in the current organization. |
| `alert.notifications.receivers:read` | `receivers:*`<br>`receivers:uid:*` | Read contact points. |
| `alert.notifications.receivers.secrets:read` | `receivers:*`<br>`receivers:uid:*` | Export contact points with decrypted secrets. |
| `alert.notifications.receivers:create` | n/a | Create new contact points. The creator is automatically granted full access to the created contact point. |
| `alert.notifications.receivers:write` | `receivers:*`<br>`receivers:uid:*` | Update existing contact points. |
| `alert.notifications.receivers:delete` | `receivers:*`<br>`receivers:uid:*` | Update and delete existing contact points. |
| `alert.notifications.receivers:test` | `receivers:*`<br>`receivers:uid:*` | Test contact points to verify their configuration. |
| `receivers.permissions:read` | `receivers:*`<br>`receivers:uid:*` | Read permissions for contact points. |
| `receivers.permissions:write` | `receivers:*`<br>`receivers:uid:*` | Manage permissions for contact points. |
### Notification policies
Permissions for managing notification policies (routing rules).
| Action | Applicable scope | Description |
| ---------------------------------- | ---------------- | ----------------------------------------------------- |
| `alert.notifications.routes:read` | n/a | Read notification policies. |
| `alert.notifications.routes:write` | n/a | Create new, update, and delete notification policies. |
### Time intervals
Permissions for managing mute time intervals.
| Action | Applicable scope | Description |
| ------------------------------------------- | ---------------- | -------------------------------------------------- |
| `alert.notifications.time-intervals:read` | n/a | Read mute time intervals. |
| `alert.notifications.time-intervals:write` | n/a | Create new or update existing mute time intervals. |
| `alert.notifications.time-intervals:delete` | n/a | Delete existing time intervals. |
### Templates
Permissions for managing notification templates.
| Action | Applicable scope | Description |
| ------------------------------------------ | ---------------- | ------------------------------------------------------------------------------- |
| `alert.notifications.templates:read` | n/a | Read templates. |
| `alert.notifications.templates:write` | n/a | Create new or update existing templates. |
| `alert.notifications.templates:delete` | n/a | Delete existing templates. |
| `alert.notifications.templates.test:write` | n/a | Test templates with custom payloads (preview and payload editor functionality). |
### General notifications
Legacy permissions for managing all notification resources.
| Action | Applicable scope | Description |
| --------------------------- | ---------------- | -------------------------------------------------------------------------------------------------------- |
| `alert.notifications:read` | n/a | Read all templates, contact points, notification policies, and mute timings in the current organization. |
| `alert.notifications:write` | n/a | Manage templates, contact points, notification policies, and mute timings in the current organization. |
### External notifications
Permissions for managing notification resources in external data sources.
| Action | Applicable scope | Description |
| ------------------------------------ | -------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| `alert.notifications.external:read` | `datasources:*`<br>`datasources:uid:*` | Read templates, contact points, notification policies, and mute timings in data sources that support alerting. |
| `alert.notifications.external:write` | `datasources:*`<br>`datasources:uid:*` | Manage templates, contact points, notification policies, and mute timings in data sources that support alerting. |
### Provisioning
Permissions for managing alerting resources via the provisioning API.
| Action | Applicable scope | Description |
| ---------------------------------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `alert.provisioning:read` | n/a | Read all Grafana alert rules, notification policies, and other alerting resources via the provisioning API. Permissions to folders and data sources are not required. |
| `alert.provisioning.secrets:read` | n/a | Same as `alert.provisioning:read` plus ability to export resources with decrypted secrets. |
| `alert.provisioning:write` | n/a | Update all Grafana alert rules, notification policies, and other alerting resources via the provisioning API. Permissions to folders and data sources are not required. |
| `alert.rules.provisioning:read` | n/a | Read Grafana alert rules via provisioning API. More specific than `alert.provisioning:read`. |
| `alert.rules.provisioning:write` | n/a | Create, update, and delete Grafana alert rules via provisioning API. More specific than `alert.provisioning:write`. |
| `alert.notifications.provisioning:read` | n/a | Read notification resources (contact points, notification policies, templates, time intervals) via provisioning API. More specific than `alert.provisioning:read`. |
| `alert.notifications.provisioning:write` | n/a | Create, update, and delete notification resources via provisioning API. More specific than `alert.provisioning:write`. |
| `alert.provisioning.provenance:write` | n/a | Set provisioning status for alerting resources. Cannot be used alone; requires the user to have permissions to access the resources. |
To help plan your RBAC rollout strategy, refer to [Plan your RBAC rollout strategy](https://grafana.com/docs/grafana/next/administration/roles-and-permissions/access-control/plan-rbac-rollout-strategy/).
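As part of a rollout, fixed roles are often assigned to teams rather than to individual users. The following is a hedged sketch of a role-assignment provisioning file; the team name is hypothetical and the exact keys may vary between Grafana versions:

```yaml
# provisioning/access-control/assignments.yaml (illustrative)
apiVersion: 2
teams:
  - name: 'SRE Team' # hypothetical team; must already exist
    orgId: 1
    roles:
      - name: 'fixed:alerting:writer'
        global: true
```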


# Manage access using folders or data sources
You can extend the access provided by a role to alert rules and rule-specific silences by assigning permissions to individual folders or data sources.


Details of the fixed roles and the access they provide for Grafana Alerting are listed below.
| Full read-only access: `fixed:alerting:reader` | All permissions from `fixed:alerting.rules:reader` <br>`fixed:alerting.instances:reader`<br>`fixed:alerting.notifications:reader` | Read alert rules, alert instances, silences, contact points, and notification policies in Grafana and external providers. |
| Read via Provisioning API + Export Secrets: `fixed:alerting.provisioning.secrets:reader` | `alert.provisioning:read` and `alert.provisioning.secrets:read` | Read alert rules, alert instances, silences, contact points, and notification policies using the provisioning API and use export with decrypted secrets. |
| Access to alert rules provisioning API: `fixed:alerting.provisioning:writer` | `alert.provisioning:read` and `alert.provisioning:write` | Manage all alert rules, notification policies, contact points, templates, in the organization using the provisioning API. |
| Set provisioning status: `fixed:alerting.provisioning.status:writer` | `alert.provisioning.provenance:write` | Set provisioning status for Alerting resources. Should be used together with other regular roles (Notifications Writer and/or Rules Writer). |
| Contact Point Reader: `fixed:alerting.receivers:reader` | `alert.notifications.receivers:read` for scope `receivers:*` | Read all contact points. |
| Contact Point Creator: `fixed:alerting.receivers:creator` | `alert.notifications.receivers:create` | Create a new contact point. The user is automatically granted full access to the created contact point. |
| Contact Point Writer: `fixed:alerting.receivers:writer` | `alert.notifications.receivers:read`, `alert.notifications.receivers:write`, `alert.notifications.receivers:delete` for scope `receivers:*` and <br> `alert.notifications.receivers:create` | Create a new contact point and manage all existing contact points. |
| Templates Writer: `fixed:alerting.templates:writer` | `alert.notifications.templates:read`, `alert.notifications.templates:write`, `alert.notifications.templates:delete`, `alert.notifications.templates.test:write` | Create new and manage existing notification templates. Test templates with custom payloads. |
| Time Intervals Reader: `fixed:alerting.time-intervals:reader` | `alert.notifications.time-intervals:read` | Read all time intervals. |
| Time Intervals Writer: `fixed:alerting.time-intervals:writer` | `alert.notifications.time-intervals:read`, `alert.notifications.time-intervals:write`, `alert.notifications.time-intervals:delete` | Create new and manage existing time intervals. |
| Notification Policies Reader: `fixed:alerting.routes:reader` | `alert.notifications.routes:read` | Read all notification policies. |
| Notification Policies Writer: `fixed:alerting.routes:writer` | `alert.notifications.routes:read`<br>`alert.notifications.routes:write` | Create new and manage existing notification policies. |
## Create custom roles


# Configure roles and permissions
This guide explains how to configure roles and permissions for Grafana Alerting for Grafana OSS users. You'll learn how to manage access using roles, folder permissions, and contact point permissions.
A user is any individual who can log in to Grafana. Each user is associated with a role that includes permissions. Permissions determine the tasks a user can perform in the system. For example, the Admin role includes permissions for an administrator to create and delete users.
For more information, refer to [Organization roles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/#organization-roles).
## Manage access using roles
Grafana OSS has three roles: Admin, Editor, and Viewer.
The following table describes the access each role provides for Grafana Alerting.
| Role   | Access |
| ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Admin  | Write access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences), and provisioning, as well as the ability to assign roles. |
| Editor | Write access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences), and provisioning. |
| Viewer | Read access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences). |
## Assign roles
To assign roles, an admin completes the following steps:
1. Navigate to **Administration** > **Users and access** > **Users, Teams, or Service Accounts**.
1. Search for the user, team, or service account you want to add a role for.
Refer to the following table for details on the additional access provided by folder permissions.
You can't use folders to customize access to notification resources.
{{< /admonition >}}
To manage folder permissions, complete the following steps:
1. In the left-side menu, click **Dashboards**.
1. Hover your mouse cursor over a folder and click **Go to folder**.
1. Click **Manage permissions** from the Folder actions menu.
1. Update or add permissions as required.
## Manage access to contact points
## Manage access using contact point permissions
Extend or limit the access provided by a role to contact points by assigning permissions to individual contact points.
### Before you begin
Extend or limit the access provided by a role to contact points by assigning permissions to individual contact points.
This allows different users, teams, or service accounts to have customized access to read or modify specific contact points.
Refer to the following table for details on the additional access provided by contact point permissions.
| Contact point permission | Additional Access |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
| View                     | View and export the contact point, as well as select it on the Alert rule edit page                                                           |
| Edit | Update or delete the contact point |
| Admin                    | Same additional access as Edit, plus the ability to manage permissions for the contact point. The user also needs permissions to read users and teams. |
| Folder permission | Additional Access |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| View              | View and export the contact point, as well as select it on the Alert rule edit page                                                           |
| Edit | Update or delete the contact point |
| Admin             | Same additional access as Edit, plus the ability to manage permissions for the contact point. The user also needs permissions to read users and teams. |
### Assign contact point permissions
### Steps
To manage contact point permissions, complete the following steps:
To assign contact point permissions, complete the following steps:
1. In the left-side menu, click **Contact points**.
1. Hover your mouse cursor over a contact point and click **More**.

View File

@@ -1776,13 +1776,6 @@ Specify the frequency of polling for Alertmanager configuration changes. The def
The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), for example, 30s or 1m.
#### `alertmanager_max_template_output_bytes`
Maximum size in bytes that the expanded result of any single template expression (e.g. {{ .CommonAnnotations.description }}, {{ .ExternalURL }}, etc.) may reach during notification rendering.
The limit is checked after template execution for each templated field, but before the value is inserted into the final notification payload sent to the receiver.
If exceeded, the notification will contain output truncated up to the limit and a warning will be logged.
The default value is 10,485,760 bytes (10 MB).
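For illustration only, a hedged sketch of how this limit might be lowered in `custom.ini`. The `[unified_alerting]` section name is an assumption; this excerpt doesn't show which section the option belongs to:

```ini
; Hypothetical placement; confirm the correct section for your Grafana version.
[unified_alerting]
; Cap the expanded output of any single template expression at 1 MiB
; instead of the 10,485,760-byte default.
alertmanager_max_template_output_bytes = 1048576
```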
#### `ha_redis_address`
Redis server address or addresses. It can be a single Redis address if using Redis standalone,

View File

@@ -43,36 +43,24 @@ If the data source doesn't support loading the full range logs volume, the logs
The following sections provide detailed explanations on how to visualize and interact with individual logs in Explore.
### Infinite scroll
### Logs navigation
<!-- vale Grafana.GoogleWill = NO -->
Logs navigation, located at the right side of the log lines, can be used to easily request additional logs by clicking **Older logs** at the bottom of the navigation. This is especially useful when you reach the line limit and want to see more logs. Each request run from the navigation displays in the navigation as a separate page. Every page shows the `from` and `to` timestamps of the incoming log lines. You can revisit previous results by clicking each page. Explore caches the last five requests run from the logs navigation, so you're not re-running the same queries when clicking the pages, saving time and resources.
When you reach the bottom of the list of logs, you will see the message `Scroll to load more`. If you continue scrolling and the displayed logs are within the selected time interval, Grafana will load more logs. When the sort order is "newest first" you receive older logs, and when the sort order is "oldest first" you get newer logs.
<!-- vale Grafana.GoogleWill = YES -->
![Navigate logs in Explore](/static/img/docs/explore/navigate-logs-8-0.png)
### Visualization options
You have the option to customize the display of logs and choose which columns to show. Following is a list of available options.
<!-- vale Grafana.Spelling = NO -->
| Option | Description |
| ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Expand / Collapse | Expand or collapse the controls toolbar. |
| Scroll to bottom | Jump to the bottom of the logs table. |
| Oldest Logs First / Newest logs first | Sort direction (ascending or descending). |
| Search logs / Close search | Click to open/close the client side string search of the displayed logs result. |
| Deduplication | **None** does not perform any deduplication, **Exact** matches are done on the whole line except for date fields. **Numbers** matches are done on the line after stripping out numbers such as durations, IP addresses, and so on. **Signature** is the most aggressive deduplication as it strips all letters and numbers and matches on the remaining whitespace and punctuation. |
| Filter levels                         | Filter logs in display by log level: All levels, Info, Debug, Warning, Error. |
| Set Timestamp format | Hide timestamps (disabled), Show milliseconds timestamps, Show nanoseconds timestamps. |
| Set line wrap | Disable line wrapping, Enable line wrapping, Enable line wrapping and prettify JSON. |
| Enable highlighting | Plain text, Highlight text. |
| Font size | Small font (default), Large font. |
| Unescaped newlines | Only displayed if the logs contain unescaped new lines. Click to unescape and display as new lines. |
| Download logs | Plain text (txt), JavaScript Object Notation (JSON), Comma-separated values (CSV) |
<!-- vale Grafana.Spelling = YES -->
| Option | Description |
| ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Time** | Shows or hides the time column. This is the timestamp associated with the log line as reported from the data source. |
| **Unique labels** | Shows or hides the unique labels column that includes only non-common labels. All common labels are displayed above. |
| **Wrap lines** | Set this to `true` if you want the display to use line wrapping. If set to `false`, it will result in horizontal scrolling. |
| **Prettify JSON** | Set this to `true` to pretty print all JSON logs. This setting does not affect logs in any format other than JSON. |
| **Deduplication** | Log data can be very repetitive. Explore hides duplicate log lines using a few different deduplication algorithms. **Exact** matches are done on the whole line except for date fields. **Numbers** matches are done on the line after stripping out numbers such as durations, IP addresses, and so on. **Signature** is the most aggressive deduplication as it strips all letters and numbers and matches on the remaining whitespace and punctuation. |
| **Display results order** | You can change the order of received logs from the default descending order (newest first) to ascending order (oldest first). |
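As a rough sketch (not Grafana's actual implementation), the **Signature** deduplication strategy described in the tables above can be approximated by stripping letters and digits from each line and comparing the remaining punctuation and whitespace between consecutive lines:

```typescript
// Sketch of signature-based dedup: a line is treated as a duplicate when,
// after removing all letters and digits, its punctuation/whitespace
// "signature" matches the previous line's signature.
function dedupBySignature(lines: string[]): string[] {
  const out: string[] = [];
  let prevSig: string | null = null;
  for (const line of lines) {
    const sig = line.replace(/[A-Za-z0-9]/g, '');
    if (sig !== prevSig) {
      out.push(line);
    }
    prevSig = sig;
  }
  return out;
}

const logs = [
  'GET /api/health 200 5ms',
  'GET /api/health 200 7ms',
  'level=error msg="boom"',
];
console.log(dedupBySignature(logs)); // keeps the first and third lines
```

The function and sample data here are illustrative names only; the real Explore code may group and compare lines differently.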
### Download log lines
@@ -155,31 +143,16 @@ Click the **eye icon** to select a subset of fields to visualize in the logs lis
Each field has a **stats icon**, which displays ad-hoc statistics in relation to all displayed logs.
For data sources that support log types, such as Loki, instead of a single view containing all fields, fields will be displayed grouped by their type: Indexed Labels, Parsed fields, and Structured Metadata.
#### Links
Grafana provides data links or correlations, allowing you to convert any part of a log message into an internal or external link. These links enable you to navigate to related data or external resources, offering a seamless and convenient way to explore additional information.
{{< figure src="/static/img/docs/explore/data-link-9-4.png" max-width="800px" caption="Data link in Explore" >}}
#### Log details modes
There are two modes available to view log details:
- **Inline**: The default. Displays log details below the log line.
- **Sidebar**: Displays log details in a sidebar view.
No matter which display mode you are currently viewing, you can change it by clicking the mode control icon.
### Log context
Log context is a feature that displays additional lines of context surrounding a log entry that matches a specific search query. This helps in understanding the context of the log entry and is similar to the `-C` parameter in the `grep` command.
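The `grep` analogy can be reproduced directly on the command line; this standalone example (the sample file path and contents are made up) prints two lines of context around each match, much like the log context view does around a selected log entry:

```shell
# Show 2 lines of context before and after each matching line,
# similar to Grafana's log context view around a log entry.
printf 'line1\nline2\nERROR boom\nline4\nline5\n' > /tmp/sample.log
grep -C 2 'ERROR' /tmp/sample.log
```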
If you're using Loki for your logs, to modify your log context queries, you can use the Loki log context query editor at the top of the table. You can activate this editor by clicking the menu for the log line, and selecting **Show context**. Within the **Log Context** view, you have the option to modify your search by removing one or more label filters from the log stream. If your original query used a parser, you can refine your search by leveraging extracted label filters.
Change the **Context time window** option to look for logs within a specific time interval around your log line.
Toggle **Wrap lines** if you encounter long lines of text that make it difficult to read and analyze the context around log entries. By enabling this toggle, Grafana automatically wraps long lines of text to fit within the visible width of the viewer, making the log entries easier to read and understand.
Click **Open in split view** to execute the context query for a log entry in a split screen in the Explore view. Clicking this button opens a new Explore pane with the context query displayed alongside the log entry, making it easier to analyze and understand the surrounding context.

View File

@@ -31,7 +31,7 @@ refs:
_Logs_ are structured records of events or messages generated by a system or application&mdash;that is, a series of text records with status updates from your system or app. They generally include timestamps, messages, and context information like the severity of the logged event.
The logs visualization displays these records from data sources that support logs, such as Elastic, Influx, and Loki. The logs visualization shows, by default, the timestamp, a colored string representing the log status, the log line body, as well as collapsible log events that help you analyze the information generated.
The logs visualization displays these records from data sources that support logs, such as Elastic, Influx, and Loki. The logs visualization has colored indicators of log status, as well as collapsible log events that help you analyze the information generated.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-logs-v12.3.png" max-width="750px" alt="Logs visualization" >}}
@@ -100,16 +100,16 @@ Use these settings to refine your visualization:
| Option | Description |
| --------------- | --------------- |
| Show timestamps | Show or hide the time column. This is the timestamp associated with the log line as reported from the data source. |
| Time | Show or hide the time column. This is the timestamp associated with the log line as reported from the data source. |
| Unique labels | Show or hide the unique labels column, which shows only non-common labels. |
| Common labels | Show or hide the common labels. |
| Wrap lines | Turn line wrapping on or off. |
| Prettify JSON | Toggle the switch on to pretty print all JSON logs. This setting does not affect logs in any format other than JSON. |
| Enable highlighting | Use a predefined syntax coloring grammar to highlight relevant parts of the log lines |
| Enable logs highlighting | Experimental. Use a predefined coloring scheme to highlight relevant parts of the log lines. Subtle colors are added to the log lines to improve readability and help with identifying important information faster. |
| Enable log details | Toggle the switch on to see an extendable area with log details including labels and detected fields. Each field or label has a stats icon to display ad-hoc statistics in relation to all displayed logs. The default setting is on. |
| Log Details panel mode | Choose to display the log details in a sidebar panel or inline, below the log line. |
| Enable infinite scrolling | Request more results by scrolling to the bottom of the logs list. |
| Show controls | Display controls to jump to the last or first log line, and filters by log level |
| Font size | Select between the default font size and small font size. |
| Log details panel mode | Choose to display the log details in a sidebar panel or inline, below the log line. The default mode depends on viewport size: the default mode for smaller viewports is inline, while for larger ones, it's sidebar. You can also change mode dynamically in the panel by clicking the mode control. |
| Enable infinite scrolling | Request more results by scrolling to the bottom of the logs list. When you reach the bottom of the list of logs, if you continue scrolling and the displayed logs are within the selected time interval, you can request to load more logs. When the sort order is **Newest first**, you receive older logs, and when the sort order is **Oldest first** you get newer logs. |
| Show controls | Display controls to jump to the last or first log line, and filter by log level. |
| Font size | Select between the **Default** and **Small** font sizes. |
| Deduplication | Hide log messages that are duplicates of others shown, according to your selected criteria. Choose from: <ul><li>**Exact** - Ignoring ISO datetimes.</li><li>**Numerical** - Ignoring only those that differ by numbers such as IPs or latencies.</li><li>**Signatures** - Removing successive lines with identical punctuation and white space.</li></ul> |
| Order | Set whether to show results **Newest first** or **Oldest first**. |

View File

@@ -1817,7 +1817,7 @@
},
"public/app/features/dashboard-scene/edit-pane/DashboardEditPaneSplitter.tsx": {
"react-hooks/rules-of-hooks": {
"count": 5
"count": 4
}
},
"public/app/features/dashboard-scene/inspect/HelpWizard/HelpWizard.tsx": {
@@ -2910,6 +2910,11 @@
"count": 1
}
},
"public/app/features/plugins/admin/components/PluginDetailsPage.tsx": {
"@typescript-eslint/consistent-type-assertions": {
"count": 1
}
},
"public/app/features/plugins/admin/helpers.ts": {
"no-restricted-syntax": {
"count": 2

go.mod
View File

@@ -87,7 +87,7 @@ require (
github.com/googleapis/gax-go/v2 v2.15.0 // @grafana/grafana-backend-group
github.com/gorilla/mux v1.8.1 // @grafana/grafana-backend-group
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // @grafana/grafana-app-platform-squad
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 // @grafana/alerting-backend
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // @grafana/alerting-backend
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // @grafana/identity-access-team
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // @grafana/identity-access-team
github.com/grafana/dataplane/examples v0.0.1 // @grafana/observability-metrics

go.sum
View File

@@ -1613,8 +1613,8 @@ github.com/gorilla/sessions v1.2.1 h1:DHd3rPN5lE3Ts3D8rKkQ8x/0kqfeNmBAaiSi+o7Fsg
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=

View File

@@ -1185,20 +1185,10 @@ export interface FeatureToggles {
*/
onlyStoreActionSets?: boolean;
/**
* Show insights for plugins in the plugin details page
* @default false
*/
pluginInsights?: boolean;
/**
* Enables a new panel time settings drawer
*/
panelTimeSettings?: boolean;
/**
* Enables the raw DSL query editor in the Elasticsearch data source
* @default false
*/
elasticsearchRawDSLQuery?: boolean;
/**
* Enables app platform API for annotations
* @default false
*/

View File

@@ -273,7 +273,7 @@ export interface DataSourceWithSupplementaryQueriesSupport<TQuery extends DataQu
/**
* Returns supplementary query types that data source supports.
*/
getSupportedSupplementaryQueryTypes(dsRequest?: DataQueryRequest<DataQuery>): SupplementaryQueryType[];
getSupportedSupplementaryQueryTypes(): SupplementaryQueryType[];
/**
* Returns a supplementary query to be used to fetch supplementary data based on the provided type and original query.
* If the provided query is not suitable for the provided supplementary query type, undefined should be returned.
@@ -283,8 +283,7 @@ export interface DataSourceWithSupplementaryQueriesSupport<TQuery extends DataQu
export const hasSupplementaryQuerySupport = <TQuery extends DataQuery>(
datasource: DataSourceApi | (DataSourceApi & DataSourceWithSupplementaryQueriesSupport<TQuery>),
type: SupplementaryQueryType,
dsRequest?: DataQueryRequest<DataQuery>
type: SupplementaryQueryType
): datasource is DataSourceApi & DataSourceWithSupplementaryQueriesSupport<TQuery> => {
if (!datasource) {
return false;
@@ -294,7 +293,7 @@ export const hasSupplementaryQuerySupport = <TQuery extends DataQuery>(
('getDataProvider' in datasource || 'getSupplementaryRequest' in datasource) &&
'getSupplementaryQuery' in datasource &&
'getSupportedSupplementaryQueryTypes' in datasource &&
datasource.getSupportedSupplementaryQueryTypes(dsRequest).includes(type)
datasource.getSupportedSupplementaryQueryTypes().includes(type)
);
};
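As a minimal, self-contained sketch of the structural type-guard pattern used above (the type names and mock object below are simplified stand-ins, not Grafana's real interfaces):

```typescript
// A type guard narrows an unknown datasource-like object to one that
// declares which supplementary query types it supports, using an `in`
// check before calling the method, just like the guard above.
type SupplementaryQueryType = 'LogsVolume' | 'LogsSample';

interface SupplementaryQuerySupport {
  getSupportedSupplementaryQueryTypes(): SupplementaryQueryType[];
}

function supportsType(ds: object, type: SupplementaryQueryType): ds is SupplementaryQuerySupport {
  return (
    'getSupportedSupplementaryQueryTypes' in ds &&
    (ds as SupplementaryQuerySupport).getSupportedSupplementaryQueryTypes().includes(type)
  );
}

const mockDs = {
  getSupportedSupplementaryQueryTypes: (): SupplementaryQueryType[] => ['LogsVolume'],
};

console.log(supportsType(mockDs, 'LogsVolume')); // true
console.log(supportsType({}, 'LogsSample')); // false
```

Inside the `if` branch guarded by `supportsType`, TypeScript narrows the object so the method can be called without casts, which is the point of removing the unused `dsRequest` parameter from the signature.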

View File

@@ -35,10 +35,6 @@ export interface TraceToMetricsData extends DataSourceJsonData {
interface Props extends DataSourcePluginOptionsEditorProps<TraceToMetricsData> {}
export function TraceToMetricsSettings({ options, onOptionsChange }: Props) {
const supportedDataSourceTypes = [
'prometheus',
'victoriametrics-metrics-datasource', // external
];
const styles = useStyles2(getStyles);
return (
@@ -51,10 +47,10 @@ export function TraceToMetricsSettings({ options, onOptionsChange }: Props) {
>
<DataSourcePicker
inputId="trace-to-metrics-data-source-picker"
pluginId="prometheus"
current={options.jsonData.tracesToMetrics?.datasourceUid}
noDefault={true}
width={40}
filter={(ds) => supportedDataSourceTypes.includes(ds.type)}
onChange={(ds: DataSourceInstanceSettings) =>
updateDatasourcePluginJsonDataOption({ onOptionsChange, options }, 'tracesToMetrics', {
...options.jsonData.tracesToMetrics,

View File

@@ -387,10 +387,6 @@ export interface ElasticsearchDataQuery extends common.DataQuery {
* List of bucket aggregations
*/
bucketAggs?: Array<BucketAggregation>;
/**
* Editor type
*/
editorType?: string;
/**
* List of metric aggregations
*/
@@ -399,10 +395,6 @@ export interface ElasticsearchDataQuery extends common.DataQuery {
* Lucene query
*/
query?: string;
/**
* Raw DSL query
*/
rawDSLQuery?: string;
/**
* Name of time field
*/

View File

@@ -1,6 +1,6 @@
import { Chance } from 'chance';
import { DashboardsTreeItem, DashboardViewItem, ManagerKind, UIDashboardViewItem } from '../types/browse-dashboards';
import { DashboardsTreeItem, DashboardViewItem, UIDashboardViewItem } from '../types/browse-dashboards';
function wellFormedEmptyFolder(
seed = 1,
@@ -64,14 +64,13 @@ function wellFormedFolder(
}
export function treeViewersCanEdit() {
const [, { folderA, folderC, folderD }] = wellFormedTree();
const [, { folderA, folderC }] = wellFormedTree();
return [
[folderA, folderC, folderD],
[folderA, folderC],
{
folderA,
folderC,
folderD,
},
] as const;
}
@@ -91,8 +90,6 @@ export function wellFormedTree() {
const folderB = wellFormedFolder(seed++);
const folderB_empty = wellFormedEmptyFolder(seed++);
const folderC = wellFormedFolder(seed++);
// folderD is marked as managed by repo (git-synced) for testing disabled folder behavior
const folderD = wellFormedFolder(seed++, {}, { managedBy: ManagerKind.Repo });
const dashbdD = wellFormedDashboard(seed++);
const dashbdE = wellFormedDashboard(seed++);
@@ -110,7 +107,6 @@ export function wellFormedTree() {
folderB,
folderB_empty,
folderC,
folderD,
dashbdD,
dashbdE,
],
@@ -127,7 +123,6 @@ export function wellFormedTree() {
folderB,
folderB_empty,
folderC,
folderD,
dashbdD,
dashbdE,
},

View File

@@ -4,7 +4,6 @@ import { HttpResponse, http } from 'msw';
import { treeViewersCanEdit, wellFormedTree } from '../../../fixtures/folders';
const [mockTree, { folderB }] = wellFormedTree();
// folderD is included in mockTree and will be returned by the handlers with managedBy: 'repo'
const [mockTreeThatViewersCanEdit] = treeViewersCanEdit();
const collator = new Intl.Collator();
@@ -49,7 +48,6 @@ const listFoldersHandler = () =>
id: random.integer({ min: 1, max: 1000 }),
uid: folder.item.uid,
title: folder.item.kind === 'folder' ? folder.item.title : "invalid - this shouldn't happen",
...('managedBy' in folder.item && folder.item.managedBy ? { managedBy: folder.item.managedBy } : {}),
};
})
.sort((a, b) => collator.compare(a.title, b.title)) // API always sorts by title
@@ -78,7 +76,6 @@ const getFolderHandler = () =>
uid: folder?.item.uid,
...additionalProperties,
...(accessControlQueryParam ? { accessControl: mockAccessControl } : {}),
...('managedBy' in folder.item && folder.item.managedBy ? { managedBy: folder.item.managedBy } : {}),
});
});

View File

@@ -5,7 +5,6 @@ import { wellFormedTree } from '../../../../fixtures/folders';
import { getErrorResponse } from '../../../helpers';
const [mockTree, { folderB }] = wellFormedTree();
// folderD is included in mockTree and will be returned by the handlers with managedBy: 'repo'
const baseResponse = {
kind: 'Folder',
@@ -25,7 +24,7 @@ const folderToAppPlatform = (folder: (typeof mockTree)[number]['item'], id?: num
// TODO: Generalise annotations in fixture data
'grafana.app/createdBy': 'user:1',
'grafana.app/updatedBy': 'user:2',
'grafana.app/managedBy': 'managedBy' in folder ? folder.managedBy : 'user',
'grafana.app/managedBy': 'user',
'grafana.app/updatedTimestamp': '2024-01-01T00:00:00Z',
'grafana.app/folder': folder.kind === 'folder' ? folder.parentUID : undefined,
},

View File

@@ -3,7 +3,7 @@
// @grafana/schema?
// New package @grafana/core? @grafana/types?
export enum ManagerKind {
enum ManagerKind {
Repo = 'repo',
Terraform = 'terraform',
Kubectl = 'kubectl',

View File

@@ -97,13 +97,7 @@ export const Dropdown = React.memo(({ children, overlay, placement, offset, root
see https://github.com/jsx-eslint/eslint-plugin-jsx-a11y/blob/main/docs/rules/no-static-element-interactions.md#case-the-event-handler-is-only-being-used-to-capture-bubbled-events
*/}
{/* eslint-disable-next-line jsx-a11y/no-static-element-interactions, jsx-a11y/click-events-have-key-events */}
<div
ref={refs.setFloating}
style={floatingStyles}
onClick={onOverlayClicked}
onKeyDown={handleKeys}
{...getFloatingProps()}
>
<div ref={refs.setFloating} style={floatingStyles} onClick={onOverlayClicked} onKeyDown={handleKeys}>
<CSSTransition
nodeRef={transitionRef}
appear={true}

View File

@@ -112,15 +112,17 @@ func TestGetHomeDashboard(t *testing.T) {
}
func newTestLive(t *testing.T) *live.GrafanaLive {
features := featuremgmt.WithFeatures()
cfg := setting.NewCfg()
cfg.AppURL = "http://localhost:3000/"
gLive, err := live.ProvideService(cfg,
gLive, err := live.ProvideService(nil, cfg,
routing.NewRouteRegister(),
nil, nil, nil, nil,
nil,
&usagestats.UsageStatsMock{T: t},
featuremgmt.WithFeatures(),
&dashboards.FakeDashboardService{}, nil)
features, acimpl.ProvideAccessControl(features),
&dashboards.FakeDashboardService{},
nil, nil)
require.NoError(t, err)
return gLive
}

View File

@@ -638,7 +638,7 @@ func (hs *HTTPServer) addMiddlewaresAndStaticRoutes() {
m := hs.web
m.Use(requestmeta.SetupRequestMetadata())
m.Use(middleware.RequestTracing(hs.tracer, middleware.ShouldTraceWithExceptions))
m.Use(middleware.RequestTracing(hs.tracer, middleware.SkipTracingPaths))
m.Use(middleware.RequestMetrics(hs.Features, hs.Cfg, hs.promRegister))
m.UseMiddleware(hs.LoggerMiddleware.Middleware())

View File

@@ -294,7 +294,6 @@ func (hs *HTTPServer) SearchOrgUsersWithPaging(c *contextmodel.ReqContext) respo
}
func (hs *HTTPServer) searchOrgUsersHelper(c *contextmodel.ReqContext, query *org.SearchOrgUsersQuery) (*org.SearchOrgUsersQueryResult, error) {
query.ExcludeHiddenUsers = true
result, err := hs.orgService.SearchOrgUsers(c.Req.Context(), query)
if err != nil {
return nil, err
@@ -304,6 +303,9 @@ func (hs *HTTPServer) searchOrgUsersHelper(c *contextmodel.ReqContext, query *or
userIDs := map[string]bool{}
authLabelsUserIDs := make([]int64, 0, len(result.OrgUsers))
for _, user := range result.OrgUsers {
if dtos.IsHiddenUser(user.Login, c.SignedInUser, hs.Cfg) {
continue
}
user.AvatarURL = dtos.GetGravatarUrl(hs.Cfg, user.Email)
userIDs[fmt.Sprint(user.UserID)] = true

View File

@@ -171,16 +171,11 @@ func TestIntegrationOrgUsersAPIEndpoint_userLoggedIn(t *testing.T) {
orgService.ExpectedSearchOrgUsersResult = &org.SearchOrgUsersQueryResult{
OrgUsers: []*org.OrgUserDTO{
{Login: testUserLogin, Email: "testUser@grafana.com"},
{Login: "user1", Email: "user1@grafana.com"},
{Login: "user2", Email: "user2@grafana.com"},
},
}
orgService.SearchOrgUsersFn = func(ctx context.Context, query *org.SearchOrgUsersQuery) (*org.SearchOrgUsersQueryResult, error) {
require.True(t, query.ExcludeHiddenUsers)
return orgService.ExpectedSearchOrgUsersResult, nil
}
defer func() { orgService.SearchOrgUsersFn = nil }()
sc.handlerFunc = hs.GetOrgUsersForCurrentOrg
sc.fakeReqWithParams("GET", sc.url, map[string]string{}).exec()
@@ -196,18 +191,6 @@ func TestIntegrationOrgUsersAPIEndpoint_userLoggedIn(t *testing.T) {
loggedInUserScenarioWithRole(t, "When calling GET as an admin on", "GET", "api/org/users/lookup",
"api/org/users/lookup", org.RoleAdmin, func(sc *scenarioContext) {
orgService.ExpectedSearchOrgUsersResult = &org.SearchOrgUsersQueryResult{
OrgUsers: []*org.OrgUserDTO{
{Login: testUserLogin, Email: "testUser@grafana.com"},
{Login: "user2", Email: "user2@grafana.com"},
},
}
orgService.SearchOrgUsersFn = func(ctx context.Context, query *org.SearchOrgUsersQuery) (*org.SearchOrgUsersQueryResult, error) {
require.True(t, query.ExcludeHiddenUsers)
return orgService.ExpectedSearchOrgUsersResult, nil
}
defer func() { orgService.SearchOrgUsersFn = nil }()
sc.handlerFunc = hs.GetOrgUsersForCurrentOrgLookup
sc.fakeReqWithParams("GET", sc.url, map[string]string{}).exec()

View File

@@ -162,7 +162,6 @@ var serviceIdentityTokenPermissions = []string{
"collections.grafana.app:*", // user stars
"plugins.grafana.app:*",
"historian.alerting.grafana.app:*",
"advisor.grafana.app:*",
// Secrets Manager uses a custom verb for secret decryption, and its authorizer does not allow wildcard permissions.
"secret.grafana.app/securevalues:decrypt",

View File

@@ -10,9 +10,6 @@ import (
"github.com/grafana/grafana/pkg/expr"
)
// Get results as raw protobuf
const PROTOBUF_CONTENT_TYPE = "application/vnd.grafana.pluginv2.QueryDataResponse"
// Generic query request with shared time across all values
// Copied from: https://github.com/grafana/grafana/blob/main/pkg/api/dtos/models.go#L62
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

View File

@@ -19,18 +19,11 @@ func (NoopBackend) Shutdown() {}
func (NoopBackend) String() string { return "" }
// NoopPolicyRuleProvider is a no-op implementation of PolicyRuleProvider
type NoopPolicyRuleProvider struct{}
func ProvideNoopPolicyRuleProvider() PolicyRuleProvider { return &NoopPolicyRuleProvider{} }
func (NoopPolicyRuleProvider) PolicyRuleProvider(PolicyRuleEvaluators) audit.PolicyRuleEvaluator {
return NoopPolicyRuleEvaluator{}
}
// NoopPolicyRuleEvaluator is a no-op implementation of audit.PolicyRuleEvaluator
type NoopPolicyRuleEvaluator struct{}
func ProvideNoopPolicyRuleEvaluator() audit.PolicyRuleEvaluator { return &NoopPolicyRuleEvaluator{} }
func (NoopPolicyRuleEvaluator) EvaluatePolicyRule(authorizer.Attributes) audit.RequestAuditConfig {
return audit.RequestAuditConfig{Level: auditinternal.LevelNone}
}

View File

@@ -1,59 +0,0 @@
package auditing
import (
"slices"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"k8s.io/apimachinery/pkg/runtime/schema"
auditinternal "k8s.io/apiserver/pkg/apis/audit"
"k8s.io/apiserver/pkg/audit"
"k8s.io/apiserver/pkg/authentication/user"
"k8s.io/apiserver/pkg/authorization/authorizer"
)
// PolicyRuleEvaluators is a map of API group+version to audit.PolicyRuleEvaluator
type PolicyRuleEvaluators = map[schema.GroupVersion]audit.PolicyRuleEvaluator
type PolicyRuleProvider interface {
PolicyRuleProvider(evaluators PolicyRuleEvaluators) audit.PolicyRuleEvaluator
}
// PolicyRuleEvaluator alias for easier imports.
type PolicyRuleEvaluator = audit.PolicyRuleEvaluator
// DefaultGrafanaPolicyRuleEvaluator provides a sane default configuration for audit logging for API group+versions.
type defaultGrafanaPolicyRuleEvaluator struct{}
var _ PolicyRuleEvaluator = &defaultGrafanaPolicyRuleEvaluator{}
func NewDefaultGrafanaPolicyRuleEvaluator() audit.PolicyRuleEvaluator {
return defaultGrafanaPolicyRuleEvaluator{}
}
func (defaultGrafanaPolicyRuleEvaluator) EvaluatePolicyRule(attrs authorizer.Attributes) audit.RequestAuditConfig {
// Skip non-resource and watch requests otherwise it is too noisy.
if !attrs.IsResourceRequest() || attrs.GetVerb() == utils.VerbWatch {
return audit.RequestAuditConfig{
Level: auditinternal.LevelNone,
}
}
// Skip auditing if the user is part of the privileged group.
// The loopback client uses this group, so requests initiated in `/api/` would be duplicated.
if u := attrs.GetUser(); u != nil && slices.Contains(u.GetGroups(), user.SystemPrivilegedGroup) {
return audit.RequestAuditConfig{
Level: auditinternal.LevelNone,
}
}
return audit.RequestAuditConfig{
Level: auditinternal.LevelMetadata,
OmitStages: []auditinternal.Stage{
// Only log on StageResponseComplete
auditinternal.StageRequestReceived,
auditinternal.StageResponseStarted,
auditinternal.StagePanic,
},
OmitManagedFields: false, // Setting it to true causes extra copying/unmarshalling.
}
}

View File

@@ -1,73 +0,0 @@
package auditing_test
import (
"testing"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/apiserver/auditing"
"github.com/stretchr/testify/require"
auditinternal "k8s.io/apiserver/pkg/apis/audit"
"k8s.io/apiserver/pkg/authentication/user"
"k8s.io/apiserver/pkg/authorization/authorizer"
)
func TestDefaultGrafanaPolicyRuleEvaluator(t *testing.T) {
t.Parallel()
evaluator := auditing.NewDefaultGrafanaPolicyRuleEvaluator()
require.NotNil(t, evaluator)
t.Run("returns audit level none for non-resource requests", func(t *testing.T) {
t.Parallel()
attrs := authorizer.AttributesRecord{
ResourceRequest: false,
}
config := evaluator.EvaluatePolicyRule(attrs)
require.Equal(t, auditinternal.LevelNone, config.Level)
})
t.Run("returns audit level none for watch requests", func(t *testing.T) {
t.Parallel()
attrs := authorizer.AttributesRecord{
ResourceRequest: true,
Verb: utils.VerbWatch,
}
config := evaluator.EvaluatePolicyRule(attrs)
require.Equal(t, auditinternal.LevelNone, config.Level)
})
t.Run("returns audit level none for requests from privileged group", func(t *testing.T) {
t.Parallel()
attrs := authorizer.AttributesRecord{
ResourceRequest: true,
Verb: utils.VerbCreate,
User: &user.DefaultInfo{
Groups: []string{"test-group", user.SystemPrivilegedGroup},
},
}
config := evaluator.EvaluatePolicyRule(attrs)
require.Equal(t, auditinternal.LevelNone, config.Level)
})
t.Run("return audit level metadata for other resource requests", func(t *testing.T) {
t.Parallel()
attrs := authorizer.AttributesRecord{
ResourceRequest: true,
Verb: utils.VerbCreate,
User: &user.DefaultInfo{
Name: "test-user",
Groups: []string{"test-group"},
},
}
config := evaluator.EvaluatePolicyRule(attrs)
require.Equal(t, auditinternal.LevelMetadata, config.Level)
})
}

View File

@@ -73,20 +73,16 @@ func RouteOperationName(req *http.Request) (string, bool) {
return "", false
}
func ShouldTraceWithExceptions(req *http.Request) bool {
// Paths that don't need tracing spans applied to them because of the
// little value they would provide us
if strings.HasPrefix(req.URL.Path, "/public/") ||
// Paths that don't need tracing spans applied to them because of the
// little value they would provide us
func SkipTracingPaths(req *http.Request) bool {
return strings.HasPrefix(req.URL.Path, "/public/") ||
req.URL.Path == "/robots.txt" ||
req.URL.Path == "/favicon.ico" ||
req.URL.Path == "/api/health" {
return false
}
return true
req.URL.Path == "/api/health"
}
func ShouldTraceAllPaths(req *http.Request) bool {
func TraceAllPaths(req *http.Request) bool {
return true
}

View File

@@ -222,7 +222,7 @@ func RegisterAPIService(
return builder
}
func NewAPIService(ac authlib.AccessClient, features featuremgmt.FeatureToggles, folderClientProvider client.K8sHandlerProvider, datasourceProvider schemaversion.DataSourceIndexProvider, libraryElementProvider schemaversion.LibraryElementIndexProvider, resourcePermissionsSvc *dynamic.NamespaceableResourceInterface, search *SearchHandler) *DashboardsAPIBuilder {
func NewAPIService(ac authlib.AccessClient, features featuremgmt.FeatureToggles, folderClientProvider client.K8sHandlerProvider, datasourceProvider schemaversion.DataSourceIndexProvider, libraryElementProvider schemaversion.LibraryElementIndexProvider, resourcePermissionsSvc *dynamic.NamespaceableResourceInterface) *DashboardsAPIBuilder {
migration.Initialize(datasourceProvider, libraryElementProvider, migration.DefaultCacheTTL)
return &DashboardsAPIBuilder{
minRefreshInterval: "10s",
@@ -231,7 +231,6 @@ func NewAPIService(ac authlib.AccessClient, features featuremgmt.FeatureToggles,
dashboardService: &dashsvc.DashboardServiceImpl{}, // for validation helpers only
folderClientProvider: folderClientProvider,
resourcePermissionsSvc: resourcePermissionsSvc,
search: search,
isStandalone: true,
}
}

View File

@@ -6,16 +6,15 @@ import (
"fmt"
"net/http"
"google.golang.org/protobuf/proto"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apiserver/pkg/registry/rest"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana-plugin-sdk-go/backend"
data "github.com/grafana/grafana-plugin-sdk-go/experimental/apis/data/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/errutil"
query "github.com/grafana/grafana/pkg/apis/query/v0alpha1"
"github.com/grafana/grafana/pkg/services/datasources"
"github.com/grafana/grafana/pkg/web"
)
@@ -94,49 +93,26 @@ func (r *subQueryREST) Connect(ctx context.Context, name string, opts runtime.Ob
Headers: map[string]string{},
})
code := query.GetResponseCode(rsp)
// All errors get converted into k8s errors when sent via responder.Error, losing important context such as downstream info
var e errutil.Error
if errors.As(err, &e) && e.Source == errutil.SourceDownstream {
err = nil
rsp = &backend.QueryDataResponse{Responses: map[string]backend.DataResponse{
"A": {
Error: errors.New(e.LogMessage),
ErrorSource: backend.ErrorSourceDownstream,
Status: backend.StatusBadRequest,
},
}}
responder.Object(int(backend.StatusBadRequest),
&query.QueryDataResponse{QueryDataResponse: backend.QueryDataResponse{Responses: map[string]backend.DataResponse{
"A": {
Error: errors.New(e.LogMessage),
ErrorSource: backend.ErrorSourceDownstream,
Status: backend.StatusBadRequest,
},
}}},
)
return
}
if err != nil {
responder.Error(err)
return
}
// Respond with raw protobuf when requested
for _, accept := range req.Header.Values("Accept") {
if accept == query.PROTOBUF_CONTENT_TYPE { // pluginv2.QueryDataResponse
p, err := backend.ToProto().QueryDataResponse(rsp)
if err != nil {
responder.Error(err)
return
}
data, err := proto.Marshal(p)
if err != nil {
responder.Error(err)
return
}
w.Header().Add("Content-Type", query.PROTOBUF_CONTENT_TYPE)
w.WriteHeader(code)
_, err = w.Write(data)
if err != nil {
logging.FromContext(ctx).Warn("unable to write protobuf result", "err", err)
}
return
}
}
responder.Object(code,
responder.Object(query.GetResponseCode(rsp),
&query.QueryDataResponse{QueryDataResponse: *rsp},
)
}), nil

View File

@@ -105,7 +105,8 @@ func (c *filesConnector) Connect(ctx context.Context, name string, opts runtime.
return
}
folders := resources.NewFolderManager(readWriter, folderClient, resources.NewEmptyFolderTree())
dualReadWriter := resources.NewDualReadWriter(readWriter, parser, folders, c.access)
authorizer := resources.NewRepositoryAuthorizer(repo.Config(), c.access)
dualReadWriter := resources.NewDualReadWriter(readWriter, parser, folders, authorizer)
query := r.URL.Query()
opts := resources.DualWriteOptions{
Ref: query.Get("ref"),

View File

@@ -328,124 +328,91 @@ func (b *APIBuilder) GetAuthorizer() authorizer.Authorizer {
return authorizer.DecisionDeny, "failed to find requester", err
}
return b.authorizeResource(ctx, a, id)
// Different routes may need different permissions.
// * Reading and modifying a repository's configuration requires administrator privileges.
// * Reading a repository's limited configuration (/stats & /settings) requires viewer privileges.
// * Reading a repository's files requires viewer privileges.
// * Reading a repository's refs requires viewer privileges.
// * Editing a repository's files requires editor privileges.
// * Syncing a repository requires editor privileges.
// * Exporting a repository requires administrator privileges.
// * Migrating a repository requires administrator privileges.
// * Testing a repository configuration requires administrator privileges.
// * Viewing a repository's history requires editor privileges.
switch a.GetResource() {
case provisioning.RepositoryResourceInfo.GetName():
// TODO: Support more fine-grained permissions than the basic roles. Especially on Enterprise.
switch a.GetSubresource() {
case "", "test", "jobs":
// Doing something with the repository itself.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
case "refs":
// This is strictly a read operation. It is handy on the frontend for viewers.
if id.GetOrgRole().Includes(identity.RoleViewer) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "viewer role is required", nil
case "files":
// Access to files is controlled by the AccessClient
return authorizer.DecisionAllow, "", nil
case "resources", "sync", "history":
// These are mostly read operations.
// Sync can also be somewhat destructive, but it's expected to be fine to import changes.
if id.GetOrgRole().Includes(identity.RoleEditor) {
return authorizer.DecisionAllow, "", nil
} else {
return authorizer.DecisionDeny, "editor role is required", nil
}
case "status":
if id.GetOrgRole().Includes(identity.RoleViewer) && a.GetVerb() == apiutils.VerbGet {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "users cannot update the status of a repository", nil
default:
if id.GetIsGrafanaAdmin() {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "unmapped subresource defaults to no access", nil
}
case "stats":
// This can leak information one shouldn't necessarily have access to.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
case "settings":
// This is strictly a read operation. It is handy on the frontend for viewers.
if id.GetOrgRole().Includes(identity.RoleViewer) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "viewer role is required", nil
case provisioning.JobResourceInfo.GetName(),
provisioning.HistoricJobResourceInfo.GetName():
// Jobs are shown on the configuration page.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
default:
// We haven't bothered with this kind yet.
if id.GetIsGrafanaAdmin() {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "unmapped kind defaults to no access", nil
}
})
}
// authorizeResource handles authorization for different resources.
// Different routes may need different permissions.
// * Reading and modifying a repository's configuration requires administrator privileges.
// * Reading a repository's limited configuration (/stats & /settings) requires viewer privileges.
// * Reading a repository's files requires viewer privileges.
// * Reading a repository's refs requires viewer privileges.
// * Editing a repository's files requires editor privileges.
// * Syncing a repository requires editor privileges.
// * Exporting a repository requires administrator privileges.
// * Migrating a repository requires administrator privileges.
// * Testing a repository configuration requires administrator privileges.
// * Viewing a repository's history requires editor privileges.
func (b *APIBuilder) authorizeResource(ctx context.Context, a authorizer.Attributes, id identity.Requester) (authorizer.Decision, string, error) {
switch a.GetResource() {
case provisioning.RepositoryResourceInfo.GetName():
return b.authorizeRepositorySubresource(a, id)
case "stats":
return b.authorizeStats(id)
case "settings":
return b.authorizeSettings(id)
case provisioning.JobResourceInfo.GetName(), provisioning.HistoricJobResourceInfo.GetName():
return b.authorizeJobs(id)
default:
return b.authorizeDefault(id)
}
}
// authorizeRepositorySubresource handles authorization for repository subresources.
func (b *APIBuilder) authorizeRepositorySubresource(a authorizer.Attributes, id identity.Requester) (authorizer.Decision, string, error) {
// TODO: Support more fine-grained permissions than the basic roles. Especially on Enterprise.
switch a.GetSubresource() {
case "", "test":
// Doing something with the repository itself.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
case "jobs":
// Posting jobs requires editor privileges (for syncing).
if id.GetOrgRole().Includes(identity.RoleEditor) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "editor role is required", nil
case "refs":
// This is strictly a read operation. It is handy on the frontend for viewers.
if id.GetOrgRole().Includes(identity.RoleViewer) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "viewer role is required", nil
case "files":
// Access to files is controlled by the AccessClient
return authorizer.DecisionAllow, "", nil
case "resources", "sync", "history":
// These are mostly read operations.
// Sync can also be somewhat destructive, but it's expected to be fine to import changes.
if id.GetOrgRole().Includes(identity.RoleEditor) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "editor role is required", nil
case "status":
if id.GetOrgRole().Includes(identity.RoleViewer) && a.GetVerb() == apiutils.VerbGet {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "users cannot update the status of a repository", nil
default:
if id.GetIsGrafanaAdmin() {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "unmapped subresource defaults to no access", nil
}
}
// authorizeStats handles authorization for stats resource.
func (b *APIBuilder) authorizeStats(id identity.Requester) (authorizer.Decision, string, error) {
// This can leak information one shouldn't necessarily have access to.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
}
// authorizeSettings handles authorization for settings resource.
func (b *APIBuilder) authorizeSettings(id identity.Requester) (authorizer.Decision, string, error) {
// This is strictly a read operation. It is handy on the frontend for viewers.
if id.GetOrgRole().Includes(identity.RoleViewer) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "viewer role is required", nil
}
// authorizeJobs handles authorization for job resources.
func (b *APIBuilder) authorizeJobs(id identity.Requester) (authorizer.Decision, string, error) {
// Jobs are shown on the configuration page.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
}
// authorizeDefault handles authorization for unmapped resources.
func (b *APIBuilder) authorizeDefault(id identity.Requester) (authorizer.Decision, string, error) {
// We haven't bothered with this kind yet.
if id.GetIsGrafanaAdmin() {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "unmapped kind defaults to no access", nil
}
func (b *APIBuilder) GetGroupVersion() schema.GroupVersion {
return provisioning.SchemeGroupVersion
}
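The refactor splits one large switch into per-resource helpers, but the underlying policy is still a role floor per repository subresource. A compact sketch of that mapping — the Role ordering and Includes semantics are assumptions mirroring identity's basic-role hierarchy; "files" is special-cased because the AccessClient authorizes those requests later, and "status" additionally checks the verb, which is omitted here:

```go
package main

import "fmt"

// Role is a simplified stand-in for identity.RoleType with None < Viewer < Editor < Admin.
type Role int

const (
	RoleNone Role = iota
	RoleViewer
	RoleEditor
	RoleAdmin
)

// Includes assumes the basic-role hierarchy: a higher role implies the lower ones.
func (r Role) Includes(req Role) bool { return r >= req }

// requiredRole returns the minimum org role per repository subresource,
// following authorizeRepositorySubresource (verb checks omitted).
func requiredRole(subresource string) Role {
	switch subresource {
	case "", "test":
		return RoleAdmin
	case "jobs", "resources", "sync", "history":
		return RoleEditor
	case "refs", "status":
		return RoleViewer
	case "files":
		return RoleNone // deferred to the AccessClient
	default:
		return RoleAdmin // sketch only: unmapped subresources actually require a Grafana server admin
	}
}

func main() {
	fmt.Println(RoleEditor.Includes(requiredRole("sync")))
	fmt.Println(RoleViewer.Includes(requiredRole("jobs")))
}
```

One behavioral change hides in this refactor: "jobs" moved out of the admin-only case and now only requires the editor role, so editors can trigger syncs.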

View File

@@ -7,9 +7,9 @@ import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
authlib "github.com/grafana/authlib/types"
"github.com/grafana/grafana-app-sdk/logging"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
@@ -21,18 +21,11 @@ import (
// DualReadWriter is a wrapper around a repository that can read from and write resources
// into both the Git repository and Grafana. It isn't a dual writer in the sense of unistore's dual writing.
// The standard provisioning Authorizer has already run by the time DualReadWriter is called
// for incoming requests from actors, external or internal. However, since it is the files
// connector that redirects here, external resources such as dashboards
// end up requiring additional authorization checks, which the DualReadWriter performs here.
// TODO: it does not support folders yet
type DualReadWriter struct {
repo repository.ReaderWriter
parser Parser
folders *FolderManager
access authlib.AccessChecker
repo repository.ReaderWriter
parser Parser
folders *FolderManager
authorizer Authorizer
}
type DualWriteOptions struct {
@@ -48,8 +41,8 @@ type DualWriteOptions struct {
Branch string // Configured default branch
}
func NewDualReadWriter(repo repository.ReaderWriter, parser Parser, folders *FolderManager, access authlib.AccessChecker) *DualReadWriter {
return &DualReadWriter{repo: repo, parser: parser, folders: folders, access: access}
func NewDualReadWriter(repo repository.ReaderWriter, parser Parser, folders *FolderManager, authorizer Authorizer) *DualReadWriter {
return &DualReadWriter{repo: repo, parser: parser, folders: folders, authorizer: authorizer}
}
func (r *DualReadWriter) Read(ctx context.Context, path string, ref string) (*ParsedResource, error) {
@@ -77,8 +70,7 @@ func (r *DualReadWriter) Read(ctx context.Context, path string, ref string) (*Pa
return nil, fmt.Errorf("error running dryRun: %w", err)
}
// Authorize based on the existing resource
if err = r.authorize(ctx, parsed, utils.VerbGet); err != nil {
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbGet); err != nil {
return nil, err
}
@@ -86,7 +78,7 @@ func (r *DualReadWriter) Read(ctx context.Context, path string, ref string) (*Pa
}
func (r *DualReadWriter) Delete(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
if err := repository.IsWriteAllowed(r.repo.Config(), opts.Ref); err != nil {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
return nil, err
}
@@ -112,7 +104,7 @@ func (r *DualReadWriter) Delete(ctx context.Context, opts DualWriteOptions) (*Pa
return nil, fmt.Errorf("parse file: %w", err)
}
if err = r.authorize(ctx, parsed, utils.VerbDelete); err != nil {
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbDelete); err != nil {
return nil, err
}
@@ -144,7 +136,7 @@ func (r *DualReadWriter) Delete(ctx context.Context, opts DualWriteOptions) (*Pa
// CreateFolder creates a new folder in the repository
// FIXME: fix signature to return ParsedResource
func (r *DualReadWriter) CreateFolder(ctx context.Context, opts DualWriteOptions) (*provisioning.ResourceWrapper, error) {
if err := repository.IsWriteAllowed(r.repo.Config(), opts.Ref); err != nil {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
return nil, err
}
@@ -152,9 +144,12 @@ func (r *DualReadWriter) CreateFolder(ctx context.Context, opts DualWriteOptions
return nil, fmt.Errorf("not a folder path")
}
if err := r.authorizeCreateFolder(ctx, opts.Path); err != nil {
// For create operations, use empty name to check parent folder permissions
folderParsed := folderParsedResource(opts.Path, opts.Ref, r.repo.Config(), "")
if err := r.authorizer.AuthorizeResource(ctx, folderParsed, utils.VerbCreate); err != nil {
return nil, err
}
// TODO: authorized to create folders under first existing ancestor folder
// Now actually create the folder
if err := r.repo.Create(ctx, opts.Path, opts.Ref, nil, opts.Message); err != nil {
@@ -202,17 +197,90 @@ func (r *DualReadWriter) CreateFolder(ctx context.Context, opts DualWriteOptions
// CreateResource creates a new resource in the repository
func (r *DualReadWriter) CreateResource(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
return r.createOrUpdate(ctx, true, opts)
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
return nil, err
}
info := &repository.FileInfo{
Data: opts.Data,
Path: opts.Path,
Ref: opts.Ref,
}
parsed, err := r.parser.Parse(ctx, info)
if err != nil {
return nil, err
}
// TODO: check if the resource does not exist in the database.
// Make sure the value is valid
if !opts.SkipDryRun {
if err := parsed.DryRun(ctx); err != nil {
logger := logging.FromContext(ctx).With("path", opts.Path, "name", parsed.Obj.GetName(), "ref", opts.Ref)
logger.Warn("failed to dry run resource on create", "error", err)
return nil, fmt.Errorf("error running dryRun: %w", err)
}
}
if len(parsed.Errors) > 0 {
// Now returns BadRequest (400) for validation errors
return nil, fmt.Errorf("errors while parsing file [%v]", parsed.Errors)
}
// TODO: is this the right way?
// Check if resource already exists - create should fail if it does
if err = r.ensureExisting(ctx, parsed); err != nil {
return nil, err
}
if parsed.Existing != nil {
return nil, apierrors.NewConflict(parsed.GVR.GroupResource(), parsed.Obj.GetName(),
fmt.Errorf("resource already exists"))
}
// Authorization check: Check if we can create the resource in the folder from the file
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbCreate); err != nil {
return nil, err
}
// TODO: authorized to create folders under first existing ancestor folder
data, err := parsed.ToSaveBytes()
if err != nil {
return nil, err
}
// Always use the provisioning identity when writing
ctx, _, err = identity.WithProvisioningIdentity(ctx, parsed.Obj.GetNamespace())
if err != nil {
return nil, fmt.Errorf("unable to use provisioning identity %w", err)
}
// TODO: handle the error repository.ErrFileAlreadyExists
err = r.repo.Create(ctx, opts.Path, opts.Ref, data, opts.Message)
if err != nil {
return nil, err // raw error is useful
}
// Directly update the grafana database
// Behaves the same as running sync after writing
// FIXME: to make sure it behaves the same way as in sync,
// we should refactor the code to use the same function.
if r.shouldUpdateGrafanaDB(opts, parsed) {
if _, err := r.folders.EnsureFolderPathExist(ctx, opts.Path); err != nil {
return nil, fmt.Errorf("ensure folder path exists: %w", err)
}
err = parsed.Run(ctx)
}
return parsed, err
}
// UpdateResource updates a resource in the repository
func (r *DualReadWriter) UpdateResource(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
return r.createOrUpdate(ctx, false, opts)
}
// Create or updates a resource in the repository
func (r *DualReadWriter) createOrUpdate(ctx context.Context, create bool, opts DualWriteOptions) (*ParsedResource, error) {
if err := repository.IsWriteAllowed(r.repo.Config(), opts.Ref); err != nil {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
return nil, err
}
@@ -231,7 +299,7 @@ func (r *DualReadWriter) createOrUpdate(ctx context.Context, create bool, opts D
if !opts.SkipDryRun {
if err := parsed.DryRun(ctx); err != nil {
logger := logging.FromContext(ctx).With("path", opts.Path, "name", parsed.Obj.GetName(), "ref", opts.Ref)
logger.Warn("failed to dry run resource on create", "error", err)
logger.Warn("failed to dry run resource on update", "error", err)
return nil, fmt.Errorf("error running dryRun: %w", err)
}
@@ -242,12 +310,15 @@ func (r *DualReadWriter) createOrUpdate(ctx context.Context, create bool, opts D
return nil, fmt.Errorf("errors while parsing file [%v]", parsed.Errors)
}
// Verify that we can create (or update) the referenced resource
verb := utils.VerbUpdate
if parsed.Action == provisioning.ResourceActionCreate {
verb = utils.VerbCreate
// Populate existing resource to check permissions in the correct folder
if err = r.ensureExisting(ctx, parsed); err != nil {
return nil, err
}
if err = r.authorize(ctx, parsed, verb); err != nil {
// TODO: what to do with a name or kind change?
// Authorization check: Check if we can update the existing resource in its current folder
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbUpdate); err != nil {
return nil, err
}
@@ -262,12 +333,7 @@ func (r *DualReadWriter) createOrUpdate(ctx context.Context, create bool, opts D
return nil, fmt.Errorf("unable to use provisioning identity %w", err)
}
// Create or update
if create {
err = r.repo.Create(ctx, opts.Path, opts.Ref, data, opts.Message)
} else {
err = r.repo.Update(ctx, opts.Path, opts.Ref, data, opts.Message)
}
err = r.repo.Update(ctx, opts.Path, opts.Ref, data, opts.Message)
if err != nil {
return nil, err // raw error is useful
}
@@ -289,7 +355,7 @@ func (r *DualReadWriter) createOrUpdate(ctx context.Context, create bool, opts D
// MoveResource moves a resource from one path to another in the repository
func (r *DualReadWriter) MoveResource(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
if err := repository.IsWriteAllowed(r.repo.Config(), opts.Ref); err != nil {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
return nil, err
}
@@ -328,6 +394,19 @@ func (r *DualReadWriter) moveDirectory(ctx context.Context, opts DualWriteOption
}
}
// Check permissions to delete the original folder
originalFolderID := ParseFolder(opts.OriginalPath, r.repo.Config().Name).ID
originalFolderParsed := folderParsedResource(opts.OriginalPath, opts.Ref, r.repo.Config(), originalFolderID)
if err := r.authorizer.AuthorizeResource(ctx, originalFolderParsed, utils.VerbDelete); err != nil {
return nil, fmt.Errorf("not authorized to move from original folder: %w", err)
}
// Check permissions to create at the new folder location (empty name for create)
newFolderParsed := folderParsedResource(opts.Path, opts.Ref, r.repo.Config(), "")
if err := r.authorizer.AuthorizeResource(ctx, newFolderParsed, utils.VerbCreate); err != nil {
return nil, fmt.Errorf("not authorized to move to new folder: %w", err)
}
// For branch operations, we just perform the repository move without updating Grafana DB
// Always use the provisioning identity when writing
ctx, _, err := identity.WithProvisioningIdentity(ctx, r.repo.Config().Namespace)
@@ -378,8 +457,13 @@ func (r *DualReadWriter) moveFile(ctx context.Context, opts DualWriteOptions) (*
return nil, fmt.Errorf("parse original file: %w", err)
}
// Authorize delete on the original path
if err = r.authorize(ctx, parsed, utils.VerbDelete); err != nil {
// Populate existing resource to check delete permission in the correct folder
if err = r.ensureExisting(ctx, parsed); err != nil {
return nil, err
}
// Authorize delete on the original path (checks existing resource's folder if it exists)
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbDelete); err != nil {
return nil, fmt.Errorf("not authorized to delete original file: %w", err)
}
@@ -417,13 +501,20 @@ func (r *DualReadWriter) moveFile(ctx context.Context, opts DualWriteOptions) (*
return nil, fmt.Errorf("errors while parsing moved file [%v]", newParsed.Errors)
}
// Authorize create on the new path
verb := utils.VerbCreate
if newParsed.Action == provisioning.ResourceActionUpdate {
verb = utils.VerbUpdate
// Populate existing resource at destination to check if we're overwriting something
if err = r.ensureExisting(ctx, newParsed); err != nil {
return nil, err
}
if err = r.authorize(ctx, newParsed, verb); err != nil {
return nil, fmt.Errorf("not authorized to create new file: %w", err)
// Authorize for the target resource
// - If resource exists at destination: Check if we can update it in its folder
// - If no resource at destination: Check if we can create in the new folder
verb := utils.VerbUpdate
if newParsed.Existing == nil {
verb = utils.VerbCreate
}
if err = r.authorizer.AuthorizeResource(ctx, newParsed, verb); err != nil {
return nil, fmt.Errorf("not authorized for destination: %w", err)
}
data, err := newParsed.ToSaveBytes()
@@ -481,57 +572,25 @@ func (r *DualReadWriter) moveFile(ctx context.Context, opts DualWriteOptions) (*
return newParsed, nil
}
func (r *DualReadWriter) authorize(ctx context.Context, parsed *ParsedResource, verb string) error {
id, err := identity.GetRequester(ctx)
// ensureExisting populates parsed.Existing if a resource with the given name exists in storage.
// Returns nil if no resource exists, if Client is nil, or if Existing is already populated.
// This is used before authorization checks to ensure we validate permissions against the actual
// existing resource's folder, not just the folder specified in the file.
func (r *DualReadWriter) ensureExisting(ctx context.Context, parsed *ParsedResource) error {
if parsed.Client == nil || parsed.Existing != nil {
return nil // Already populated or can't check
}
existing, err := parsed.Client.Get(ctx, parsed.Obj.GetName(), metav1.GetOptions{})
if err != nil {
return apierrors.NewUnauthorized(err.Error())
if apierrors.IsNotFound(err) {
return nil // No existing resource
}
return fmt.Errorf("failed to check for existing resource: %w", err)
}
var name string
if parsed.Existing != nil {
name = parsed.Existing.GetName()
} else {
name = parsed.Obj.GetName()
}
rsp, err := r.access.Check(ctx, id, authlib.CheckRequest{
Group: parsed.GVR.Group,
Resource: parsed.GVR.Resource,
Namespace: id.GetNamespace(),
Name: name,
Verb: verb,
}, parsed.Meta.GetFolder())
if err != nil || !rsp.Allowed {
return apierrors.NewForbidden(parsed.GVR.GroupResource(), parsed.Obj.GetName(),
fmt.Errorf("no access to read the embedded file"))
}
idType, _, err := authlib.ParseTypeID(id.GetID())
if err != nil {
return apierrors.NewForbidden(parsed.GVR.GroupResource(), parsed.Obj.GetName(), fmt.Errorf("could not determine identity type to check access"))
}
// only apply role based access if identity is not of type access policy
if idType == authlib.TypeAccessPolicy || id.GetOrgRole().Includes(identity.RoleEditor) {
return nil
}
return apierrors.NewForbidden(parsed.GVR.GroupResource(), parsed.Obj.GetName(),
fmt.Errorf("must be admin or editor to access files from provisioning"))
}
func (r *DualReadWriter) authorizeCreateFolder(ctx context.Context, _ string) error {
id, err := identity.GetRequester(ctx)
if err != nil {
return apierrors.NewUnauthorized(err.Error())
}
// Simple role based access for now
if id.GetOrgRole().Includes(identity.RoleEditor) {
return nil
}
return apierrors.NewForbidden(FolderResource.GroupResource(), "",
fmt.Errorf("must be admin or editor to access folders with provisioning"))
parsed.Existing = existing
return nil
}
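ensureExisting is what lets the write paths pick the right verb: probe storage once, tolerate NotFound, then authorize VerbUpdate if something is already at the destination and VerbCreate otherwise. The lookup-then-verb pattern in isolation — the map stands in for the typed parsed.Client and errNotFound mirrors what apierrors.IsNotFound detects:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// get stands in for parsed.Client.Get against storage.
func get(store map[string]string, name string) (string, error) {
	if v, ok := store[name]; ok {
		return v, nil
	}
	return "", errNotFound
}

// verbFor probes for an existing resource and chooses the verb to
// authorize: update when it exists, create when it does not.
func verbFor(store map[string]string, name string) (string, error) {
	_, err := get(store, name)
	switch {
	case err == nil:
		return "update", nil
	case errors.Is(err, errNotFound):
		return "create", nil // no existing resource: not an error
	default:
		return "", fmt.Errorf("failed to check for existing resource: %w", err)
	}
}

func main() {
	store := map[string]string{"dash-a": "{}"}
	v1, _ := verbFor(store, "dash-a")
	v2, _ := verbFor(store, "dash-b")
	fmt.Println(v1, v2)
}
```

Probing before authorizing matters because the check must run against the existing resource's actual folder, not the folder claimed by the incoming file.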
func (r *DualReadWriter) deleteFolder(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
@@ -547,6 +606,13 @@ func (r *DualReadWriter) deleteFolder(ctx context.Context, opts DualWriteOptions
}
}
// Check permissions to delete the folder
folderID := ParseFolder(opts.Path, r.repo.Config().Name).ID
folderParsed := folderParsedResource(opts.Path, opts.Ref, r.repo.Config(), folderID)
if err := r.authorizer.AuthorizeResource(ctx, folderParsed, utils.VerbDelete); err != nil {
return nil, err
}
// For branch operations, just delete from the repository without updating Grafana DB
err := r.repo.Delete(ctx, opts.Path, opts.Ref, opts.Message)
if err != nil {
@@ -575,6 +641,54 @@ func getPathType(isDir bool) string {
return "file (no trailing '/')"
}
// folderParsedResource creates a ParsedResource for a folder path.
// This is used for authorization checks on folder operations.
// For create operations, name should be an empty string so the parent folder's permissions are checked.
// For other operations, name should be the folder ID derived from the path.
func folderParsedResource(path, ref string, repo *provisioning.Repository, name string) *ParsedResource {
folderObj := &unstructured.Unstructured{}
folderObj.SetName(name)
folderObj.SetNamespace(repo.Namespace)
// TODO: which parent? top existing ancestor.
meta, _ := utils.MetaAccessor(folderObj)
if meta != nil {
// Set parent folder for folder operations
parentFolder := ""
if path != "" {
parentPath := safepath.Dir(path)
if parentPath != "" {
parentFolder = ParseFolder(parentPath, repo.Name).ID
} else {
parentFolder = RootFolder(repo)
}
}
meta.SetFolder(parentFolder)
}
return &ParsedResource{
Info: &repository.FileInfo{
Path: path,
Ref: ref,
},
Obj: folderObj,
Meta: meta,
GVK: schema.GroupVersionKind{
Group: FolderResource.Group,
Version: FolderResource.Version,
Kind: "Folder",
},
GVR: FolderResource,
Repo: provisioning.ResourceRepositoryInfo{
Type: repo.Spec.Type,
Namespace: repo.Namespace,
Name: repo.Name,
Title: repo.Spec.Title,
},
}
}
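The interesting part of folderParsedResource is deriving the parent folder for the authorization check: the parent of a nested path is the folder ID of its directory, while a top-level path falls back to the repository's root folder. A sketch of that derivation using the standard path package — parseFolderID and rootFolder are hypothetical stand-ins for ParseFolder(...).ID and RootFolder, and the "." check approximates safepath.Dir returning an empty parent:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// parseFolderID is a stand-in for ParseFolder(path, repoName).ID:
// here it just slugifies the directory path.
func parseFolderID(p, repoName string) string {
	return repoName + "-" + strings.ReplaceAll(strings.Trim(p, "/"), "/", "-")
}

// rootFolder is a stand-in for RootFolder(repo).
func rootFolder(repoName string) string { return repoName + "-root" }

// parentFolderFor mirrors the derivation in folderParsedResource.
func parentFolderFor(p, repoName string) string {
	if p == "" {
		return ""
	}
	parent := path.Dir(strings.TrimSuffix(p, "/"))
	if parent == "." || parent == "/" {
		return rootFolder(repoName) // top-level path: parent is the repo root folder
	}
	return parseFolderID(parent, repoName)
}

func main() {
	fmt.Println(parentFolderFor("alerts/prod/", "my-repo"))
	fmt.Println(parentFolderFor("dashboard.json", "my-repo"))
}
```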
func folderDeleteResponse(ctx context.Context, path, ref string, repo repository.Repository) (*ParsedResource, error) {
urls, err := getFolderURLs(ctx, path, ref, repo)
if err != nil {

View File

@@ -37,7 +37,7 @@ var WireSetExts = wire.NewSet(
// Auditing Options
auditing.ProvideNoopBackend,
auditing.ProvideNoopPolicyRuleProvider,
auditing.ProvideNoopPolicyRuleEvaluator,
)
var provisioningExtras = wire.NewSet(


@@ -1,150 +0,0 @@
package advisor
import (
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/org"
)
const (
// Check
ActionAdvisorCheckCreate = "advisor.checks:create" // CREATE.
ActionAdvisorCheckWrite = "advisor.checks:write" // UPDATE.
ActionAdvisorCheckRead = "advisor.checks:read" // GET + LIST.
ActionAdvisorCheckDelete = "advisor.checks:delete" // DELETE.
// CheckTypes
ActionAdvisorCheckTypesCreate = "advisor.checktypes:create" // CREATE.
ActionAdvisorCheckTypesWrite = "advisor.checktypes:write" // UPDATE.
ActionAdvisorCheckTypesRead = "advisor.checktypes:read" // GET + LIST.
ActionAdvisorCheckTypesDelete = "advisor.checktypes:delete" // DELETE.
// Register
ActionAdvisorRegisterCreate = "advisor.register:create" // CREATE (register check types).
)
var (
ScopeProviderAdvisorCheck = accesscontrol.NewScopeProvider("advisor.checks")
ScopeProviderAdvisorCheckTypes = accesscontrol.NewScopeProvider("advisor.checktypes")
ScopeProviderAdvisorRegister = accesscontrol.NewScopeProvider("advisor.register")
ScopeAllAdvisorCheck = ScopeProviderAdvisorCheck.GetResourceAllScope()
ScopeAllAdvisorCheckTypes = ScopeProviderAdvisorCheckTypes.GetResourceAllScope()
ScopeAllAdvisorRegister = ScopeProviderAdvisorRegister.GetResourceAllScope()
)
func registerAccessControlRoles(service accesscontrol.Service) error {
// Check
checkReader := accesscontrol.RoleRegistration{
Role: accesscontrol.RoleDTO{
Name: "fixed:advisor.checks:reader",
DisplayName: "Advisor Check Reader",
Description: "Read and list advisor checks.",
Group: "Advisor",
Permissions: []accesscontrol.Permission{
{
Action: ActionAdvisorCheckRead,
Scope: ScopeAllAdvisorCheck,
},
},
},
Grants: []string{string(org.RoleAdmin)},
}
checkWriter := accesscontrol.RoleRegistration{
Role: accesscontrol.RoleDTO{
Name: "fixed:advisor.checks:writer",
DisplayName: "Advisor Check Writer",
Description: "Create, update and delete advisor checks.",
Group: "Advisor",
Permissions: []accesscontrol.Permission{
{
Action: ActionAdvisorCheckCreate,
Scope: ScopeAllAdvisorCheck,
},
{
Action: ActionAdvisorCheckRead,
Scope: ScopeAllAdvisorCheck,
},
{
Action: ActionAdvisorCheckWrite,
Scope: ScopeAllAdvisorCheck,
},
{
Action: ActionAdvisorCheckDelete,
Scope: ScopeAllAdvisorCheck,
},
},
},
Grants: []string{string(org.RoleAdmin)},
}
// CheckTypes
checkTypesReader := accesscontrol.RoleRegistration{
Role: accesscontrol.RoleDTO{
Name: "fixed:advisor.checktypes:reader",
DisplayName: "Advisor Check Types Reader",
Description: "Read and list advisor check types.",
Group: "Advisor",
Permissions: []accesscontrol.Permission{
{
Action: ActionAdvisorCheckTypesRead,
Scope: ScopeAllAdvisorCheckTypes,
},
},
},
Grants: []string{string(org.RoleAdmin)},
}
checkTypesWriter := accesscontrol.RoleRegistration{
Role: accesscontrol.RoleDTO{
Name: "fixed:advisor.checktypes:writer",
DisplayName: "Advisor Check Types Writer",
Description: "Create, update and delete advisor check types.",
Group: "Advisor",
Permissions: []accesscontrol.Permission{
{
Action: ActionAdvisorCheckTypesCreate,
Scope: ScopeAllAdvisorCheckTypes,
},
{
Action: ActionAdvisorCheckTypesRead,
Scope: ScopeAllAdvisorCheckTypes,
},
{
Action: ActionAdvisorCheckTypesWrite,
Scope: ScopeAllAdvisorCheckTypes,
},
{
Action: ActionAdvisorCheckTypesDelete,
Scope: ScopeAllAdvisorCheckTypes,
},
},
},
Grants: []string{string(org.RoleAdmin)},
}
// Register
registerWriter := accesscontrol.RoleRegistration{
Role: accesscontrol.RoleDTO{
Name: "fixed:advisor.register:writer",
DisplayName: "Advisor Register Writer",
Description: "Register default advisor check types.",
Group: "Advisor",
Permissions: []accesscontrol.Permission{
{
Action: ActionAdvisorRegisterCreate,
Scope: ScopeAllAdvisorRegister,
},
},
},
Grants: []string{string(org.RoleAdmin)},
}
return service.DeclareFixedRoles(
checkReader,
checkWriter,
checkTypesReader,
checkTypesWriter,
registerWriter,
)
}


@@ -1,17 +1,17 @@
package advisor
import (
"fmt"
authlib "github.com/grafana/authlib/types"
"github.com/grafana/grafana-app-sdk/app"
appsdkapiserver "github.com/grafana/grafana-app-sdk/k8s/apiserver"
"github.com/grafana/grafana-app-sdk/simple"
advisorapi "github.com/grafana/grafana/apps/advisor/pkg/apis"
advisorapp "github.com/grafana/grafana/apps/advisor/pkg/app"
"github.com/grafana/grafana/apps/advisor/pkg/app/checkregistry"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/apiserver/appinstaller"
grafanaauthorizer "github.com/grafana/grafana/pkg/services/apiserver/auth/authorizer"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/setting"
"k8s.io/apiserver/pkg/authorization/authorizer"
"k8s.io/client-go/rest"
)
var (
@@ -20,26 +20,37 @@ var (
)
type AdvisorAppInstaller struct {
*advisorapp.AdvisorAppInstaller
appsdkapiserver.AppInstaller
}
// GetAuthorizer returns the authorizer for the plugins app.
func (a *AdvisorAppInstaller) GetAuthorizer() authorizer.Authorizer {
return advisorapp.GetAuthorizer()
}
func ProvideAppInstaller(
accessControlService accesscontrol.Service,
accessClient authlib.AccessClient,
checkRegistry checkregistry.CheckService,
cfg *setting.Cfg,
orgService org.Service,
) (*AdvisorAppInstaller, error) {
if err := registerAccessControlRoles(accessControlService); err != nil {
return nil, fmt.Errorf("registering access control roles: %w", err)
provider := simple.NewAppProvider(advisorapi.LocalManifest(), nil, advisorapp.New)
pluginConfig := cfg.PluginSettings["grafana-advisor-app"]
specificConfig := checkregistry.AdvisorAppConfig{
CheckRegistry: checkRegistry,
PluginConfig: pluginConfig,
StackID: cfg.StackID,
OrgService: orgService,
}
authorizer := grafanaauthorizer.NewResourceAuthorizer(accessClient)
i, err := advisorapp.ProvideAppInstaller(authorizer, checkRegistry, cfg, orgService)
appCfg := app.Config{
KubeConfig: rest.Config{},
ManifestData: *advisorapi.LocalManifest().ManifestData,
SpecificConfig: specificConfig,
}
installer := &AdvisorAppInstaller{}
i, err := appsdkapiserver.NewDefaultAppInstaller(provider, appCfg, advisorapi.NewGoTypeAssociator())
if err != nil {
return nil, err
}
return &AdvisorAppInstaller{
AdvisorAppInstaller: i,
}, nil
installer.AppInstaller = i
return installer, nil
}


@@ -349,7 +349,6 @@ var wireBasicSet = wire.NewSet(
dashboardservice.ProvideDashboardService,
dashboardservice.ProvideDashboardProvisioningService,
dashboardservice.ProvideDashboardPluginService,
dashboardservice.ProvideDashboardAccessService,
dashboardstore.ProvideDashboardStore,
folderimpl.ProvideService,
wire.Bind(new(folder.Service), new(*folderimpl.Service)),

pkg/server/wire_gen.go (generated, 20 lines changed)

Diff suppressed because one or more lines are too long


@@ -86,9 +86,6 @@ func newPermissionRegistry() *permissionRegistry {
"plugins": "plugins:id:",
"plugins.plugins": "plugins.plugins:uid:",
"plugins.metas": "plugins.metas:uid:",
"advisor.checks": "advisor.checks:uid:",
"advisor.checktypes": "advisor.checktypes:uid:",
"advisor.register": "advisor.register:uid:",
"provisioners": "provisioners:",
"reports": "reports:id:",
"permissions": "permissions:type:",


@@ -9,7 +9,6 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apiserver/pkg/admission"
"k8s.io/apiserver/pkg/audit"
"k8s.io/apiserver/pkg/authorization/authorizer"
"k8s.io/apiserver/pkg/registry/generic"
genericapiserver "k8s.io/apiserver/pkg/server"
@@ -60,13 +59,6 @@ type APIGroupAuthorizer interface {
GetAuthorizer() authorizer.Authorizer
}
// APIGroupAuditor allows different API groups to opt-in and provide their own auditing policy evaluator function.
// Auditing is only enabled if this is implemented. If no customization is needed, you can use the default evaluator,
// `pkg/apiserver/auditing.NewDefaultGrafanaPolicyRuleEvaluator()`.
type APIGroupAuditor interface {
GetPolicyRuleEvaluator() audit.PolicyRuleEvaluator
}
type APIGroupMutation interface {
// Mutate allows the builder to make changes to the object before it is persisted.
// Context is used only for timeout/deadline/cancellation and tracing information.


@@ -29,7 +29,6 @@ import (
"k8s.io/klog/v2"
"k8s.io/kube-openapi/pkg/common"
"github.com/grafana/grafana/pkg/apiserver/auditing"
"github.com/grafana/grafana/pkg/apiserver/endpoints/filters"
grafanarest "github.com/grafana/grafana/pkg/apiserver/rest"
"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
@@ -498,32 +497,6 @@ func AddPostStartHooks(
return nil
}
func EvaluatorPolicyRuleFromBuilders(builders []APIGroupBuilder) auditing.PolicyRuleEvaluators {
policyRuleEvaluators := make(auditing.PolicyRuleEvaluators, 0)
for _, b := range builders {
auditor, ok := b.(APIGroupAuditor)
if !ok {
continue
}
policyRuleEvaluator := auditor.GetPolicyRuleEvaluator()
if policyRuleEvaluator == nil {
continue
}
for _, gv := range GetGroupVersions(b) {
if gv.Empty() {
continue
}
policyRuleEvaluators[gv] = policyRuleEvaluator
}
}
return policyRuleEvaluators
}
func allowRegisteringResourceByInfo(allowedResources []string, name string) bool {
// trim any subresources from the name
name = strings.Split(name, "/")[0]


@@ -28,7 +28,6 @@ import (
dataplaneaggregator "github.com/grafana/grafana/pkg/aggregator/apiserver"
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apiserver/auditing"
grafanaresponsewriter "github.com/grafana/grafana/pkg/apiserver/endpoints/responsewriter"
grafanarest "github.com/grafana/grafana/pkg/apiserver/rest"
"github.com/grafana/grafana/pkg/infra/db"
@@ -116,8 +115,8 @@ type service struct {
builderMetrics *builder.BuilderMetrics
dualWriterMetrics *grafanarest.DualWriterMetrics
auditBackend audit.Backend
auditPolicyRuleProvider auditing.PolicyRuleProvider
auditBackend audit.Backend
auditPolicyRuleEvaluator audit.PolicyRuleEvaluator
}
func ProvideService(
@@ -143,7 +142,7 @@ func ProvideService(
appInstallers []appsdkapiserver.AppInstaller,
builderMetrics *builder.BuilderMetrics,
auditBackend audit.Backend,
auditPolicyRuleProvider auditing.PolicyRuleProvider,
auditPolicyRuleEvaluator audit.PolicyRuleEvaluator,
) (*service, error) {
scheme := builder.ProvideScheme()
codecs := builder.ProvideCodecFactory(scheme)
@@ -175,7 +174,7 @@ func ProvideService(
builderMetrics: builderMetrics,
dualWriterMetrics: grafanarest.NewDualWriterMetrics(reg),
auditBackend: auditBackend,
auditPolicyRuleProvider: auditPolicyRuleProvider,
auditPolicyRuleEvaluator: auditPolicyRuleEvaluator,
}
// This will be used when running as a dskit service
s.NamedService = services.NewBasicService(s.start, s.running, nil).WithName(modules.GrafanaAPIServer)
@@ -366,7 +365,7 @@ func (s *service) start(ctx context.Context) error {
// Auditing Options
serverConfig.AuditBackend = s.auditBackend
serverConfig.AuditPolicyRuleEvaluator = s.auditPolicyRuleProvider.PolicyRuleProvider(builder.EvaluatorPolicyRuleFromBuilders(s.builders))
serverConfig.AuditPolicyRuleEvaluator = s.auditPolicyRuleEvaluator
// Add OpenAPI specs for each group+version (existing builders)
err = builder.SetupConfig(


@@ -301,11 +301,6 @@ func NewMapperRegistry() MapperRegistry {
"plugins": newResourceTranslation("plugins.plugins", "uid", false, nil),
"metas": newResourceTranslation("plugins.metas", "uid", false, nil),
},
"advisor.grafana.app": {
"checks": newResourceTranslation("advisor.checks", "uid", false, nil),
"checktypes": newResourceTranslation("advisor.checktypes", "uid", false, nil),
"register": newResourceTranslation("advisor.register", "uid", false, nil),
},
})
return mapper


@@ -58,13 +58,6 @@ const (
RelationGetPermissions string = "get_permissions"
RelationSetPermissions string = "set_permissions"
RelationCanGet string = "can_get"
RelationCanCreate string = "can_create"
RelationCanUpdate string = "can_update"
RelationCanDelete string = "can_delete"
RelationCanGetPermissions string = "can_get_permissions"
RelationCanSetPermissions string = "can_set_permissions"
RelationSubresourceSetView string = "resource_" + RelationSetView
RelationSubresourceSetEdit string = "resource_" + RelationSetEdit
RelationSubresourceSetAdmin string = "resource_" + RelationSetAdmin
@@ -141,26 +134,6 @@ var RelationToVerbMapping = map[string]string{
RelationSetPermissions: utils.VerbSetPermissions,
}
// FolderPermissionRelation returns the optimized folder relation for permission management.
func FolderPermissionRelation(relation string) string {
switch relation {
case RelationGet:
return RelationCanGet
case RelationCreate:
return RelationCanCreate
case RelationUpdate:
return RelationCanUpdate
case RelationDelete:
return RelationCanDelete
case RelationGetPermissions:
return RelationCanGetPermissions
case RelationSetPermissions:
return RelationCanSetPermissions
default:
return relation
}
}
func IsGroupResourceRelation(relation string) bool {
return isValidRelation(relation, RelationsGroupResource)
}


@@ -4,21 +4,15 @@ type folder
relations
define parent: [folder]
# Permission levels
# Action sets
define view: [user, service-account, team#member, role#assignee] or edit or view from parent
define edit: [user, service-account, team#member, role#assignee] or admin or edit from parent
define admin: [user, service-account, team#member, role#assignee] or admin from parent
define edit: [user, service-account, team#member, role#assignee] or edit from parent
define view: [user, service-account, team#member, role#assignee] or view from parent
define get: [user, service-account, team#member, role#assignee] or get from parent
define create: [user, service-account, team#member, role#assignee] or create from parent
define update: [user, service-account, team#member, role#assignee] or update from parent
define delete: [user, service-account, team#member, role#assignee] or delete from parent
define get_permissions: [user, service-account, team#member, role#assignee] or get_permissions from parent
define set_permissions: [user, service-account, team#member, role#assignee] or set_permissions from parent
# Computed actions
define can_get: admin or edit or view or get
define can_create: admin or edit or create
define can_update: admin or edit or update
define can_delete: admin or edit or delete
define can_get_permissions: admin or get_permissions
define can_set_permissions: admin or set_permissions
define get: [user, service-account, team#member, role#assignee] or view or get from parent
define create: [user, service-account, team#member, role#assignee] or edit or create from parent
define update: [user, service-account, team#member, role#assignee] or edit or update from parent
define delete: [user, service-account, team#member, role#assignee] or edit or delete from parent
define get_permissions: [user, service-account, team#member, role#assignee] or admin or get_permissions from parent
define set_permissions: [user, service-account, team#member, role#assignee] or admin or set_permissions from parent


@@ -1,947 +0,0 @@
package server
import (
"context"
"fmt"
"math/rand"
"testing"
"time"
authzv1 "github.com/grafana/authlib/authz/proto/v1"
openfgav1 "github.com/openfga/api/proto/openfga/v1"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/infra/tracing"
authzextv1 "github.com/grafana/grafana/pkg/services/authz/proto/v1"
"github.com/grafana/grafana/pkg/services/authz/zanzana/common"
"github.com/grafana/grafana/pkg/services/authz/zanzana/store"
"github.com/grafana/grafana/pkg/services/sqlstore"
"github.com/grafana/grafana/pkg/setting"
)
const (
benchNamespace = "default"
// Folder tree parameters
foldersPerLevel = 3
folderDepth = 7
// Other data generation parameters
numResources = 50000
numUsers = 1000
numTeams = 100
// Timeout for List operations
listTimeout = 30 * time.Second
// Resource type constants for benchmarks
benchDashboardGroup = "dashboard.grafana.app"
benchDashboardResource = "dashboards"
benchFolderGroup = "folder.grafana.app"
benchFolderResource = "folders"
// batchCheckSize is the number of items per batch used by BenchmarkBatchCheck.
batchCheckSize = 50
)
// benchmarkData holds all the generated test data for benchmarks
type benchmarkData struct {
folders []string // folder UIDs
folderDepths map[string]int // folder UID -> depth level
folderParents map[string]string // folder UID -> parent UID
folderDescendants map[string]int // folder UID -> number of descendants (including self)
foldersByDepth [][]string // folders grouped by depth level
resources []string // resource names
resourceFolders map[string]string // resource name -> folder UID
users []string // user identifiers (e.g., "user:1")
teams []string // team identifiers (e.g., "team:1")
// Pre-computed test scenarios
deepestFolder string // folder at max depth for worst-case tests
midDepthFolder string // folder at depth/2
shallowFolder string // folder at depth 1
rootFolder string // root level folder (depth 0)
largestRootFolder string // root folder with most descendants
largestRootDescCount int // number of descendants in largestRootFolder
maxDepth int // maximum depth in the tree
}
// generateFolderHierarchy creates a balanced tree of folders.
// Each folder has `childrenPerFolder` children, up to `depth` levels deep.
func generateFolderHierarchy(childrenPerFolder, depth int) ([]*openfgav1.TupleKey, *benchmarkData) {
// Calculate total folders: childrenPerFolder + childrenPerFolder^2 + ... + childrenPerFolder^(depth+1)
totalFolders := 0
levelSize := childrenPerFolder
for d := 0; d <= depth; d++ {
totalFolders += levelSize
levelSize *= childrenPerFolder
}
data := &benchmarkData{
folders: make([]string, 0, totalFolders),
folderDepths: make(map[string]int),
folderParents: make(map[string]string),
folderDescendants: make(map[string]int),
}
tuples := make([]*openfgav1.TupleKey, 0, totalFolders)
folderIdx := 0
// Track folders at each level for parent assignment
levelFolders := make([][]string, depth+1)
for i := range levelFolders {
levelFolders[i] = make([]string, 0)
}
// Create root level folders (depth 0)
for i := 0; i < childrenPerFolder; i++ {
folderUID := fmt.Sprintf("folder-%d", folderIdx)
data.folders = append(data.folders, folderUID)
data.folderDepths[folderUID] = 0
levelFolders[0] = append(levelFolders[0], folderUID)
folderIdx++
}
// Create folders at each subsequent depth level
for d := 1; d <= depth; d++ {
parentFolders := levelFolders[d-1]
// Each parent gets exactly childrenPerFolder children
for _, parentUID := range parentFolders {
for j := 0; j < childrenPerFolder; j++ {
folderUID := fmt.Sprintf("folder-%d", folderIdx)
data.folders = append(data.folders, folderUID)
data.folderDepths[folderUID] = d
data.folderParents[folderUID] = parentUID
levelFolders[d] = append(levelFolders[d], folderUID)
// Create parent relationship tuple
tuples = append(tuples, common.NewFolderParentTuple(folderUID, parentUID))
folderIdx++
}
}
}
// Set reference folders for different depth scenarios
data.rootFolder = levelFolders[0][0]
data.shallowFolder = levelFolders[0][0]
if len(levelFolders[1]) > 0 {
data.shallowFolder = levelFolders[1][0]
}
midDepth := depth / 2
if len(levelFolders[midDepth]) > 0 {
data.midDepthFolder = levelFolders[midDepth][0]
}
// Deepest folder
if len(levelFolders[depth]) > 0 {
data.deepestFolder = levelFolders[depth][0]
}
// Calculate descendant counts for each folder (bottom-up)
// Initialize all folders with count of 1 (self)
for _, folder := range data.folders {
data.folderDescendants[folder] = 1
}
// Process folders from deepest to shallowest, accumulating descendant counts
for d := depth; d >= 0; d-- {
for _, folder := range levelFolders[d] {
if parent, hasParent := data.folderParents[folder]; hasParent {
data.folderDescendants[parent] += data.folderDescendants[folder]
}
}
}
// Find root folder with most descendants
for _, rootFolder := range levelFolders[0] {
count := data.folderDescendants[rootFolder]
if count > data.largestRootDescCount {
data.largestRootDescCount = count
data.largestRootFolder = rootFolder
}
}
// Store folders by depth for depth-based testing
data.foldersByDepth = levelFolders
data.maxDepth = depth
return tuples, data
}
// generateResources creates resources distributed across folders
func generateResources(data *benchmarkData, numResources int) []*openfgav1.TupleKey {
data.resources = make([]string, numResources)
data.resourceFolders = make(map[string]string, numResources)
// Distribute resources across folders
for i := 0; i < numResources; i++ {
resourceName := fmt.Sprintf("resource-%d", i)
folderIdx := i % len(data.folders)
folderUID := data.folders[folderIdx]
data.resources[i] = resourceName
data.resourceFolders[resourceName] = folderUID
}
// Note: We don't create tuples for resources themselves,
// permissions are assigned to users/teams on folders or directly on resources
return nil
}
// generateUsers creates user identifiers
func generateUsers(data *benchmarkData, numUsers int) {
data.users = make([]string, numUsers)
for i := 0; i < numUsers; i++ {
data.users[i] = fmt.Sprintf("user:%d", i)
}
}
// generateTeams creates team identifiers
func generateTeams(data *benchmarkData, numTeams int) {
data.teams = make([]string, numTeams)
for i := 0; i < numTeams; i++ {
data.teams[i] = fmt.Sprintf("team:%d", i)
}
}
// generatePermissionTuples creates various permission assignments for benchmarking.
// Users are distributed across 7 patterns: global, root folder, mid-depth folder,
// folder-scoped resource, direct resource, team-based, and no permissions.
const numPermissionPatterns = 7
func generatePermissionTuples(data *benchmarkData) []*openfgav1.TupleKey {
tuples := make([]*openfgav1.TupleKey, 0)
// Distribute users across different permission patterns
usersPerPattern := len(data.users) / numPermissionPatterns
// Pattern 1: Users with GroupResource permission (all access)
// Users 0 to usersPerPattern-1
for i := 0; i < usersPerPattern; i++ {
tuples = append(tuples, common.NewGroupResourceTuple(
data.users[i],
common.RelationGet,
benchDashboardGroup,
benchDashboardResource,
"",
))
}
// Pattern 2: Users with folder-level permission on root folders
// Users usersPerPattern to 2*usersPerPattern-1
for i := usersPerPattern; i < 2*usersPerPattern; i++ {
folderIdx := (i - usersPerPattern) % len(data.folders)
// Only assign to root-level folders for this pattern
for j := folderIdx; j < len(data.folders); j++ {
if data.folderDepths[data.folders[j]] == 0 {
tuples = append(tuples, common.NewFolderTuple(
data.users[i],
common.RelationSetView,
data.folders[j],
))
break
}
}
}
// Pattern 3: Users with folder-level permission on mid-depth folders
// Use relative depth range: 1/3 to 2/3 of max depth
// Use "view" relation which grants get through the optimized schema
minMidDepth := data.maxDepth / 3
maxMidDepth := 2 * data.maxDepth / 3
if maxMidDepth < minMidDepth {
maxMidDepth = minMidDepth
}
// Collect folders in the mid-depth range
var midDepthFolders []string
for d := minMidDepth; d <= maxMidDepth; d++ {
if d < len(data.foldersByDepth) {
midDepthFolders = append(midDepthFolders, data.foldersByDepth[d]...)
}
}
// Fall back to root folders if no mid-depth folders exist
if len(midDepthFolders) == 0 {
midDepthFolders = data.foldersByDepth[0]
}
for i := 2 * usersPerPattern; i < 3*usersPerPattern; i++ {
folderIdx := (i - 2*usersPerPattern) % len(midDepthFolders)
tuples = append(tuples, common.NewFolderTuple(
data.users[i],
common.RelationSetView,
midDepthFolders[folderIdx],
))
}
// Pattern 4: Users with folder-scoped resource permission
for i := 3 * usersPerPattern; i < 4*usersPerPattern; i++ {
folderIdx := (i - 3*usersPerPattern) % len(data.folders)
tuples = append(tuples, common.NewFolderResourceTuple(
data.users[i],
common.RelationGet,
benchDashboardGroup,
benchDashboardResource,
"",
data.folders[folderIdx],
))
}
// Pattern 5: Users with direct resource permission
for i := 4 * usersPerPattern; i < 5*usersPerPattern; i++ {
resourceIdx := (i - 4*usersPerPattern) % len(data.resources)
tuples = append(tuples, common.NewResourceTuple(
data.users[i],
common.RelationGet,
benchDashboardGroup,
benchDashboardResource,
"",
data.resources[resourceIdx],
))
}
// Pattern 6: Team memberships and team permissions
// First, add users to teams
for i := 5 * usersPerPattern; i < 6*usersPerPattern && i < len(data.users); i++ {
teamIdx := (i - 5*usersPerPattern) % len(data.teams)
tuples = append(tuples, common.NewTypedTuple(
common.TypeTeam,
data.users[i],
common.RelationTeamMember,
fmt.Sprintf("%d", teamIdx),
))
}
// Then, give teams folder permissions
// Use "view" relation which grants get through the optimized schema
for i := 0; i < len(data.teams); i++ {
folderIdx := i % len(data.folders)
teamMember := fmt.Sprintf("team:%d#member", i)
tuples = append(tuples, common.NewFolderTuple(
teamMember,
common.RelationSetView,
data.folders[folderIdx],
))
}
// Pattern 7: Users with no permissions (remaining users)
// These users don't get any tuples - they're for testing denial cases
return tuples
}
// setupBenchmarkServer creates a server with the benchmark data loaded
func setupBenchmarkServer(b *testing.B) (*Server, *benchmarkData) {
b.Helper()
if testing.Short() {
b.Skip("skipping benchmark in short mode")
}
cfg := setting.NewCfg()
testStore := sqlstore.NewTestStore(b, sqlstore.WithCfg(cfg))
openFGAStore, err := store.NewEmbeddedStore(cfg, testStore, log.NewNopLogger())
require.NoError(b, err)
openfga, err := NewOpenFGAServer(cfg.ZanzanaServer, openFGAStore)
require.NoError(b, err)
srv, err := NewServer(cfg.ZanzanaServer, openfga, log.NewNopLogger(), tracing.NewNoopTracerService(), prometheus.NewRegistry())
require.NoError(b, err)
// Generate test data
b.Log("Generating folder hierarchy...")
folderTuples, data := generateFolderHierarchy(foldersPerLevel, folderDepth)
b.Log("Generating resources...")
generateResources(data, numResources)
b.Log("Generating users...")
generateUsers(data, numUsers)
b.Log("Generating teams...")
generateTeams(data, numTeams)
b.Log("Generating permission tuples...")
permTuples := generatePermissionTuples(data)
// Add special user with permission on largest root folder (for >1000 folder test)
// Use "view" relation which grants get through the optimized schema
largeRootUserTuple := common.NewFolderTuple(
"user:large-root-access",
common.RelationSetView,
data.largestRootFolder,
)
permTuples = append(permTuples, largeRootUserTuple)
// Add users with permissions at each depth level for depth-based testing
// Use "view" relation which grants get through the optimized schema
for depth := 0; depth <= data.maxDepth; depth++ {
if len(data.foldersByDepth[depth]) == 0 {
continue
}
folder := data.foldersByDepth[depth][0]
user := fmt.Sprintf("user:depth-%d-access", depth)
permTuples = append(permTuples, common.NewFolderTuple(user, common.RelationSetView, folder))
}
// Combine all tuples
allTuples := append(folderTuples, permTuples...)
b.Logf("Total tuples to write: %d", len(allTuples))
// Get store info
ctx := newContextWithNamespace()
storeInf, err := srv.getStoreInfo(ctx, benchNamespace)
require.NoError(b, err)
// Write tuples in batches (OpenFGA limits to 100 per write)
batchSize := 100
for i := 0; i < len(allTuples); i += batchSize {
end := i + batchSize
if end > len(allTuples) {
end = len(allTuples)
}
batch := allTuples[i:end]
_, err = srv.openfga.Write(ctx, &openfgav1.WriteRequest{
StoreId: storeInf.ID,
AuthorizationModelId: storeInf.ModelID,
Writes: &openfgav1.WriteRequestWrites{
TupleKeys: batch,
OnDuplicate: "ignore",
},
})
require.NoError(b, err)
if (i/batchSize)%100 == 0 {
b.Logf("Written %d/%d tuples", end, len(allTuples))
}
}
b.Logf("Benchmark data setup complete: %d folders, %d resources, %d users, %d teams",
len(data.folders), len(data.resources), len(data.users), len(data.teams))
b.Logf("Largest root folder: %s with %d descendants", data.largestRootFolder, data.largestRootDescCount)
return srv, data
}
// BenchmarkCheck measures the performance of Check requests
func BenchmarkCheck(b *testing.B) {
srv, data := setupBenchmarkServer(b)
ctx := newContextWithNamespace()
// Helper to create check requests
newCheckReq := func(subject, verb, group, resource, folder, name string) *authzv1.CheckRequest {
return &authzv1.CheckRequest{
Namespace: benchNamespace,
Subject: subject,
Verb: verb,
Group: group,
Resource: resource,
Folder: folder,
Name: name,
}
}
usersPerPattern := len(data.users) / numPermissionPatterns
b.Run("GroupResourceDirect", func(b *testing.B) {
// User with group_resource permission - should have access to everything
user := data.users[0] // First user has GroupResource permission
resource := data.resources[rand.Intn(len(data.resources))]
folder := data.resourceFolders[resource]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
if !res.GetAllowed() {
b.Fatal("expected access to be allowed")
}
}
})
// Test folder inheritance at each depth level (0 to maxDepth)
// User has permission on ROOT folder (depth 0), we check access at each deeper level
rootUser := "user:depth-0-access" // has view permission on root folder
for depth := 0; depth <= data.maxDepth; depth++ {
depth := depth // capture for closure
if len(data.foldersByDepth[depth]) == 0 {
continue
}
b.Run(fmt.Sprintf("FolderInheritance/Depth%d", depth), func(b *testing.B) {
resource := data.resources[0]
folder := data.foldersByDepth[depth][0]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(rootUser, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
}
b.Run("FolderResourceScoped", func(b *testing.B) {
// User with folder-scoped resource permission
user := data.users[3*usersPerPattern]
folderIdx := 0
folder := data.folders[folderIdx]
resource := data.resources[folderIdx]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
b.Run("DirectResource", func(b *testing.B) {
// User with direct resource permission
user := data.users[4*usersPerPattern]
resourceIdx := 0
resource := data.resources[resourceIdx]
folder := data.resourceFolders[resource]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
b.Run("TeamMembership", func(b *testing.B) {
// User who is a team member, team has folder permission
user := data.users[5*usersPerPattern]
teamIdx := 0
folderIdx := teamIdx % len(data.folders)
folder := data.folders[folderIdx]
resource := data.resources[folderIdx%len(data.resources)]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
b.Run("NoAccess", func(b *testing.B) {
// User with no permissions - tests denial path
user := data.users[len(data.users)-1] // Last user has no permissions
resource := data.resources[0]
folder := data.resourceFolders[resource]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
if res.GetAllowed() {
b.Fatal("expected access to be denied")
}
}
})
b.Run("FolderCheck", func(b *testing.B) {
// Direct folder access check
user := data.users[usersPerPattern]
folder := data.rootFolder
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchFolderGroup, benchFolderResource, "", folder))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
}
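Throughout these sub-benchmarks, a representative user for permission pattern k is selected with `data.users[k*usersPerPattern]`, where the user list is generated as contiguous blocks, one block per pattern. A minimal runnable sketch of that indexing (the helper name is mine, not from this file):

```go
package main

// patternRepresentative returns the first user of permission pattern k,
// mirroring the data.users[k*usersPerPattern] indexing used above. It
// assumes users were generated as numPatterns contiguous blocks.
func patternRepresentative(users []string, numPatterns, k int) string {
	usersPerPattern := len(users) / numPatterns
	return users[k*usersPerPattern]
}
```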
func BenchmarkBatchCheck(b *testing.B) {
srv, data := setupBenchmarkServer(b)
ctx := newContextWithNamespace()
// Helper to create batch check requests
newBatchCheckReq := func(subject string, items []*authzextv1.BatchCheckItem) *authzextv1.BatchCheckRequest {
return &authzextv1.BatchCheckRequest{
Namespace: benchNamespace,
Subject: subject,
Items: items,
}
}
// Helper to create batch items for resources in folders
createBatchItems := func(resources []string, resourceFolders map[string]string) []*authzextv1.BatchCheckItem {
items := make([]*authzextv1.BatchCheckItem, 0, batchCheckSize)
for i := 0; i < batchCheckSize && i < len(resources); i++ {
resource := resources[i]
items = append(items, &authzextv1.BatchCheckItem{
Verb: utils.VerbGet,
Group: benchDashboardGroup,
Resource: benchDashboardResource,
Name: resource,
Folder: resourceFolders[resource],
})
}
return items
}
// Helper to create batch items for folders at a specific depth
createFolderBatchItems := func(folders []string, depth int, folderDepths map[string]int) []*authzextv1.BatchCheckItem {
items := make([]*authzextv1.BatchCheckItem, 0, batchCheckSize)
for _, folder := range folders {
if folderDepths[folder] == depth && len(items) < batchCheckSize {
items = append(items, &authzextv1.BatchCheckItem{
Verb: utils.VerbGet,
Group: benchDashboardGroup,
Resource: benchDashboardResource,
Name: fmt.Sprintf("resource-in-%s", folder),
Folder: folder,
})
}
}
// Fill remaining slots if needed
for len(items) < batchCheckSize && len(folders) > 0 {
folder := folders[len(items)%len(folders)]
items = append(items, &authzextv1.BatchCheckItem{
Verb: utils.VerbGet,
Group: benchDashboardGroup,
Resource: benchDashboardResource,
Name: fmt.Sprintf("resource-%d", len(items)),
Folder: folder,
})
}
return items
}
usersPerPattern := len(data.users) / numPermissionPatterns
b.Run("GroupResourceDirect", func(b *testing.B) {
// User with group_resource permission - should have access to everything
user := data.users[0]
items := createBatchItems(data.resources, data.resourceFolders)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("FolderInheritance/Depth1", func(b *testing.B) {
// User with folder permission on shallow folder
user := data.users[usersPerPattern]
items := createFolderBatchItems(data.folders, 1, data.folderDepths)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("FolderInheritance/Depth4", func(b *testing.B) {
// User with folder permission on mid-depth folder
user := data.users[2*usersPerPattern]
items := createFolderBatchItems(data.folders, 4, data.folderDepths)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run(fmt.Sprintf("FolderInheritance/Depth%d", data.maxDepth), func(b *testing.B) {
// Check access on deepest folders (worst case for inheritance traversal)
user := data.users[usersPerPattern]
items := createFolderBatchItems(data.folders, data.maxDepth, data.folderDepths)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("DirectResource", func(b *testing.B) {
// User with direct resource permission
user := data.users[4*usersPerPattern]
items := createBatchItems(data.resources, data.resourceFolders)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("TeamMembership", func(b *testing.B) {
// User who is a team member, team has folder permission
user := data.users[5*usersPerPattern]
items := createBatchItems(data.resources, data.resourceFolders)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("NoAccess", func(b *testing.B) {
// User with no permissions - tests denial path
user := data.users[len(data.users)-1]
items := createBatchItems(data.resources, data.resourceFolders)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("MixedFolders", func(b *testing.B) {
// Batch of items across different folder depths
user := data.users[usersPerPattern]
items := make([]*authzextv1.BatchCheckItem, 0, batchCheckSize)
for i := 0; i < batchCheckSize; i++ {
folder := data.folders[i%len(data.folders)]
items = append(items, &authzextv1.BatchCheckItem{
Verb: utils.VerbGet,
Group: benchDashboardGroup,
Resource: benchDashboardResource,
Name: fmt.Sprintf("resource-%d", i),
Folder: folder,
})
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
}
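The "fill remaining slots" loop in createFolderBatchItems pads a short batch by cycling through the folder list with `folders[len(items)%len(folders)]`, so every batch reaches batchCheckSize even when few folders match the requested depth. Extracted into a standalone, testable form (the local item struct is a stand-in for authzextv1.BatchCheckItem, reduced to the two fields the loop varies per item):

```go
package main

import "fmt"

// item is a local stand-in for authzextv1.BatchCheckItem, keeping only
// the fields the fill loop sets differently for each entry.
type item struct {
	Name   string
	Folder string
}

// fillBatch pads items up to size by cycling through folders, mirroring
// the "fill remaining slots" loop in createFolderBatchItems above.
func fillBatch(items []item, folders []string, size int) []item {
	for len(items) < size && len(folders) > 0 {
		folder := folders[len(items)%len(folders)]
		items = append(items, item{
			Name:   fmt.Sprintf("resource-%d", len(items)),
			Folder: folder,
		})
	}
	return items
}
```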
// BenchmarkList measures the performance of List requests (Compile equivalent)
func BenchmarkList(b *testing.B) {
srv, data := setupBenchmarkServer(b)
baseCtx := newContextWithNamespace()
// Helper to create list requests
newListReq := func(subject, verb, group, resource string) *authzv1.ListRequest {
return &authzv1.ListRequest{
Namespace: benchNamespace,
Subject: subject,
Verb: verb,
Group: group,
Resource: resource,
}
}
// Helper to create context with timeout
ctxWithTimeout := func() (context.Context, context.CancelFunc) {
return context.WithTimeout(baseCtx, listTimeout)
}
usersPerPattern := len(data.users) / numPermissionPatterns
b.Run("AllAccess", func(b *testing.B) {
// User with group_resource permission - should return All=true quickly
user := data.users[0]
b.Logf("Test: User with group_resource permission (access to ALL dashboards)")
b.Logf("Expected: All=true returned immediately without ListObjects call")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
if !res.GetAll() {
b.Fatal("expected All=true for user with group_resource permission")
}
}
})
b.Run("FolderScoped", func(b *testing.B) {
// User with folder permissions - should return folder list
user := data.users[usersPerPattern]
b.Logf("Test: User with direct folder permission on a single folder")
b.Logf("Expected: Returns list of folders user has access to")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
if i == 0 {
b.Logf("Result: %d folders, %d items, All=%v", len(res.GetFolders()), len(res.GetItems()), res.GetAll())
}
}
})
b.Run("DirectResources", func(b *testing.B) {
// User with direct resource permissions - should return items list
user := data.users[4*usersPerPattern]
b.Logf("Test: User with direct permission on specific resources")
b.Logf("Expected: Returns list of specific resources user has access to")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
if i == 0 {
b.Logf("Result: %d folders, %d items, All=%v", len(res.GetFolders()), len(res.GetItems()), res.GetAll())
}
}
})
b.Run("NoAccess", func(b *testing.B) {
// User with no permissions - should return empty results
user := data.users[len(data.users)-1]
b.Logf("Test: User with NO permissions (denial case)")
b.Logf("Expected: Empty results")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
if i == 0 {
b.Logf("Result: %d folders, %d items, All=%v", len(res.GetFolders()), len(res.GetItems()), res.GetAll())
}
}
})
b.Run("LargeRootFolder", func(b *testing.B) {
// User with access to root folder that has many descendants
user := "user:large-root-access"
b.Logf("Test: User with permission on ROOT folder (folder-0)")
b.Logf("Root folder %s has %d total descendants", data.largestRootFolder, data.largestRootDescCount)
b.Logf("Expected: ListObjects should return folders through inheritance")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
start := time.Now()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchFolderGroup, benchFolderResource))
elapsed := time.Since(start)
cancel()
if err != nil {
b.Fatalf("Error after %v: %v", elapsed, err)
}
if i == 0 {
b.Logf("Result: %d folders returned in %v (descendants: %d)",
len(res.GetItems()), elapsed, data.largestRootDescCount)
}
}
})
// Test List at various folder depths to find breaking point
b.Run("ByDepth", func(b *testing.B) {
b.Logf("Testing List performance at various folder depths (timeout: %v)", listTimeout)
b.Logf("Tree structure: %d folders per level, %d max depth", foldersPerLevel, data.maxDepth)
for depth := 0; depth <= data.maxDepth; depth++ {
if len(data.foldersByDepth[depth]) == 0 {
continue
}
folder := data.foldersByDepth[depth][0]
descendants := data.folderDescendants[folder]
user := fmt.Sprintf("user:depth-%d-access", depth)
b.Run(fmt.Sprintf("Depth%d_%dDescendants", depth, descendants), func(b *testing.B) {
b.Logf("Test: User with permission on folder at depth %d", depth)
b.Logf("Folder: %s, Descendants: %d", folder, descendants)
// First, do a single timed run to report
ctx, cancel := ctxWithTimeout()
start := time.Now()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchFolderGroup, benchFolderResource))
elapsed := time.Since(start)
cancel()
if err != nil {
b.Logf("FAILED after %v: %v", elapsed, err)
if elapsed >= listTimeout {
b.Logf("TIMEOUT: List took longer than %v", listTimeout)
}
b.Skip("Skipping benchmark iterations due to error")
return
}
b.Logf("Result: %d folders in %v", len(res.GetItems()), elapsed)
if elapsed > 5*time.Second {
b.Logf("WARNING: Single List took %v, skipping benchmark iterations", elapsed)
b.Skip("Too slow for benchmark iterations")
return
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
_, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchFolderGroup, benchFolderResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
}
})
}
})
}

View File

@@ -126,14 +126,8 @@ func (s *Server) checkTyped(ctx context.Context, subject, relation string, resou
return &authzv1.CheckResponse{Allowed: false}, nil
}
// Use optimized folder permission relations for permission management
checkRelation := relation
if resource.Type() == common.TypeFolder {
checkRelation = common.FolderPermissionRelation(relation)
}
// Check if subject has direct access to resource
res, err := s.openfgaCheck(ctx, store, subject, checkRelation, resourceIdent, contextuals, nil)
res, err := s.openfgaCheck(ctx, store, subject, relation, resourceIdent, contextuals, nil)
if err != nil {
return nil, err
}
@@ -149,15 +143,14 @@ func (s *Server) checkGeneric(ctx context.Context, subject, relation string, res
defer span.End()
var (
folderIdent = resource.FolderIdent()
resourceCtx = resource.Context()
folderRelation = common.SubresourceRelation(relation)
folderCheckRelation = common.FolderPermissionRelation(relation)
folderIdent = resource.FolderIdent()
resourceCtx = resource.Context()
folderRelation = common.SubresourceRelation(relation)
)
if folderIdent != "" && isFolderPermissionBasedResource(resource.GroupResource()) {
// Check if resource inherits permissions from the folder (like dashboards in a folder)
res, err := s.openfgaCheck(ctx, store, subject, folderCheckRelation, folderIdent, contextuals, resourceCtx)
res, err := s.openfgaCheck(ctx, store, subject, relation, folderIdent, contextuals, resourceCtx)
if err != nil {
return nil, err
}

View File

@@ -85,12 +85,6 @@ func (s *Server) listTyped(ctx context.Context, subject, relation string, resour
resourceCtx = resource.Context()
)
// Use optimized folder permission relations for permission management
listRelation := relation
if resource.Type() == common.TypeFolder {
listRelation = common.FolderPermissionRelation(relation)
}
var items []string
if resource.HasSubresource() && common.IsSubresourceRelation(subresourceRelation) {
// List requested subresources
@@ -116,7 +110,7 @@ func (s *Server) listTyped(ctx context.Context, subject, relation string, resour
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
Type: resource.Type(),
Relation: listRelation,
Relation: relation,
User: subject,
ContextualTuples: contextuals,
})
@@ -135,9 +129,8 @@ func (s *Server) listGeneric(ctx context.Context, subject, relation string, reso
defer span.End()
var (
folderRelation = common.SubresourceRelation(relation)
folderListRelation = common.FolderPermissionRelation(relation) // Optimized for permission management
resourceCtx = resource.Context()
folderRelation = common.SubresourceRelation(relation)
resourceCtx = resource.Context()
)
// 1. List all folders subject has access to resource type in
@@ -166,7 +159,7 @@ func (s *Server) listGeneric(ctx context.Context, subject, relation string, reso
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
Type: common.TypeFolder,
Relation: folderListRelation,
Relation: relation,
User: subject,
Context: resourceCtx,
ContextualTuples: contextuals,

View File

@@ -44,11 +44,6 @@ type DashboardService interface {
GetDashboardsByLibraryPanelUID(ctx context.Context, libraryPanelUID string, orgID int64) ([]*DashboardRef, error)
}
type DashboardAccessService interface {
// The user has access to {VERB} the requested dashboard
HasDashboardAccess(ctx context.Context, user identity.Requester, verb string, namespace string, name string) (bool, error)
}
type PermissionsRegistrationService interface {
RegisterDashboardPermissions(service accesscontrol.DashboardPermissionsService)

View File

@@ -5,9 +5,8 @@ package dashboards
import (
context "context"
mock "github.com/stretchr/testify/mock"
identity "github.com/grafana/grafana/pkg/apimachinery/identity"
mock "github.com/stretchr/testify/mock"
model "github.com/grafana/grafana/pkg/services/search/model"
@@ -530,11 +529,6 @@ func (_m *FakeDashboardService) ValidateDashboardRefreshInterval(minRefreshInter
return r0
}
// CanViewDashboard uses the access control service to check if the requested user can see a dashboard
func (_m *FakeDashboardService) HasDashboardAccess(ctx context.Context, user identity.Requester, verb string, namespace string, name string) (bool, error) {
return true, nil
}
// NewFakeDashboardService creates a new instance of FakeDashboardService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewFakeDashboardService(t interface {

View File

@@ -67,7 +67,6 @@ var (
_ dashboards.DashboardService = (*DashboardServiceImpl)(nil)
_ dashboards.DashboardProvisioningService = (*DashboardServiceImpl)(nil)
_ dashboards.PluginService = (*DashboardServiceImpl)(nil)
_ dashboards.DashboardAccessService = (*DashboardServiceImpl)(nil)
daysInTrash = 24 * 30 * time.Hour
tracer = otel.Tracer("github.com/grafana/grafana/pkg/services/dashboards/service")
@@ -101,38 +100,6 @@ type DashboardServiceImpl struct {
dashboardPermissionsReady chan struct{}
}
// CanViewDashboard uses the access control service to check if the requested user can see a dashboard
func (dr *DashboardServiceImpl) HasDashboardAccess(ctx context.Context, user identity.Requester, verb string, namespace string, name string) (bool, error) {
ns, err := claims.ParseNamespace(namespace)
if err != nil {
return false, err
}
dash, err := dr.GetDashboard(ctx, &dashboards.GetDashboardQuery{
UID: name,
OrgID: ns.OrgID,
})
if err != nil || dash == nil {
return false, nil
}
var action string
switch verb {
case utils.VerbGet:
action = dashboards.ActionDashboardsRead
case utils.VerbUpdate:
action = dashboards.ActionDashboardsWrite
default:
return false, fmt.Errorf("unsupported verb")
}
evaluator := accesscontrol.EvalPermission(action,
dashboards.ScopeDashboardsProvider.GetResourceScopeUID(name))
canView, err := dr.ac.Evaluate(ctx, user, evaluator)
if err != nil || !canView {
return false, nil
}
return true, nil
}
func (dr *DashboardServiceImpl) startK8sDeletedDashboardsCleanupJob(ctx context.Context) chan struct{} {
done := make(chan struct{})
go func() {

View File

@@ -23,9 +23,3 @@ func ProvideDashboardPluginService(
) dashboards.PluginService {
return orig
}
func ProvideDashboardAccessService(
features featuremgmt.FeatureToggles, orig *DashboardServiceImpl,
) dashboards.DashboardAccessService {
return orig
}

View File

@@ -1953,14 +1953,6 @@ var (
Owner: identityAccessTeam,
Expression: "true",
},
{
Name: "pluginInsights",
Description: "Show insights for plugins in the plugin details page",
Stage: FeatureStageExperimental,
FrontendOnly: true,
Owner: grafanaPluginsPlatformSquad,
Expression: "false",
},
{
Name: "panelTimeSettings",
Description: "Enables a new panel time settings drawer",
@@ -1970,13 +1962,6 @@ var (
RequiresRestart: false,
HideFromDocs: false,
},
{
Name: "elasticsearchRawDSLQuery",
Description: "Enables the raw DSL query editor in the Elasticsearch data source",
Stage: FeatureStageExperimental,
Owner: grafanaPartnerPluginsSquad,
Expression: "false",
},
{
Name: "kubernetesAnnotations",
Description: "Enables app platform API for annotations",

View File

@@ -265,9 +265,7 @@ jaegerEnableGrpcEndpoint,experimental,@grafana/oss-big-tent,false,false,false
pluginStoreServiceLoading,experimental,@grafana/plugins-platform-backend,false,false,false
newPanelPadding,preview,@grafana/dashboards-squad,false,false,true
onlyStoreActionSets,GA,@grafana/identity-access-team,false,false,false
pluginInsights,experimental,@grafana/plugins-platform-backend,false,false,true
panelTimeSettings,experimental,@grafana/dashboards-squad,false,false,false
elasticsearchRawDSLQuery,experimental,@grafana/partner-datasources,false,false,false
kubernetesAnnotations,experimental,@grafana/grafana-backend-services-squad,false,false,false
awsDatasourcesHttpProxy,experimental,@grafana/aws-datasources,false,false,false
transformationsEmptyPlaceholder,preview,@grafana/datapro,false,false,true

View File

@@ -758,10 +758,6 @@ const (
// Enables a new panel time settings drawer
FlagPanelTimeSettings = "panelTimeSettings"
// FlagElasticsearchRawDSLQuery
// Enables the raw DSL query editor in the Elasticsearch data source
FlagElasticsearchRawDSLQuery = "elasticsearchRawDSLQuery"
// FlagKubernetesAnnotations
// Enables app platform API for annotations
FlagKubernetesAnnotations = "kubernetesAnnotations"

View File

@@ -1206,19 +1206,6 @@
"codeowner": "@grafana/partner-datasources"
}
},
{
"metadata": {
"name": "elasticsearchRawDSLQuery",
"resourceVersion": "1763508396079",
"creationTimestamp": "2025-11-18T23:26:36Z"
},
"spec": {
"description": "Enables the raw DSL query editor in the Elasticsearch data source",
"stage": "experimental",
"codeowner": "@grafana/partner-datasources",
"expression": "false"
}
},
{
"metadata": {
"name": "enableAppChromeExtensions",
@@ -2667,20 +2654,6 @@
"expression": "false"
}
},
{
"metadata": {
"name": "pluginInsights",
"resourceVersion": "1761300628147",
"creationTimestamp": "2025-10-24T10:10:28Z"
},
"spec": {
"description": "Show insights for plugins in the plugin details page",
"stage": "experimental",
"codeowner": "@grafana/plugins-platform-backend",
"frontend": true,
"expression": "false"
}
},
{
"metadata": {
"name": "pluginInstallAPISync",

View File

@@ -134,7 +134,7 @@ func (s *frontendService) addMiddlewares(m *web.Mux) {
loggermiddleware := loggermw.Provide(s.cfg, s.features)
m.Use(requestmeta.SetupRequestMetadata())
m.Use(middleware.RequestTracing(s.tracer, middleware.ShouldTraceAllPaths))
m.Use(middleware.RequestTracing(s.tracer, middleware.TraceAllPaths))
m.Use(middleware.RequestMetrics(s.features, s.cfg, s.promRegister))
m.UseMiddleware(s.contextMiddleware())

View File

@@ -424,9 +424,6 @@ func (l *LibraryElementService) toLibraryElementError(err error, message string)
if errors.Is(err, model.ErrLibraryElementUIDTooLong) {
return response.Error(http.StatusBadRequest, model.ErrLibraryElementUIDTooLong.Error(), err)
}
if errors.Is(err, model.ErrLibraryElementProvisionedFolder) {
return response.Error(http.StatusConflict, model.ErrLibraryElementProvisionedFolder.Error(), err)
}
if err != nil && strings.Contains(err.Error(), "insufficient permissions") {
return response.Error(http.StatusForbidden, err.Error(), err)
}

View File

@@ -10,7 +10,6 @@ import (
"github.com/grafana/grafana/pkg/api/dtos"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/metrics"
ac "github.com/grafana/grafana/pkg/services/accesscontrol"
@@ -126,20 +125,6 @@ func (l *LibraryElementService) CreateElement(c context.Context, signedInUser id
}
}
if cmd.FolderUID != nil {
f, err := l.folderService.Get(c, &folder.GetFolderQuery{
OrgID: signedInUser.GetOrgID(),
UID: cmd.FolderUID,
SignedInUser: signedInUser,
})
if err != nil {
return model.LibraryElementDTO{}, err
}
if f.ManagedBy == utils.ManagerKindRepo {
return model.LibraryElementDTO{}, model.ErrLibraryElementProvisionedFolder
}
}
updatedModel := cmd.Model
var err error
if cmd.Kind == int64(model.PanelElement) {
@@ -616,21 +601,6 @@ func (l *LibraryElementService) PatchLibraryElement(c context.Context, signedInU
if err := l.requireSupportedElementKind(cmd.Kind); err != nil {
return model.LibraryElementDTO{}, err
}
if cmd.FolderUID != nil {
f, err := l.folderService.Get(c, &folder.GetFolderQuery{
OrgID: signedInUser.GetOrgID(),
UID: cmd.FolderUID,
SignedInUser: signedInUser,
})
if err != nil {
return model.LibraryElementDTO{}, err
}
if f.ManagedBy == utils.ManagerKindRepo {
return model.LibraryElementDTO{}, model.ErrLibraryElementProvisionedFolder
}
}
err := l.SQLStore.WithTransactionalDbSession(c, func(session *db.Session) error {
elementInDB, err := l.GetLibraryElement(c, signedInUser, session, uid)
if err != nil {

View File

@@ -161,8 +161,6 @@ var (
ErrLibraryElementInvalidUID = errors.New("uid contains illegal characters")
// errLibraryElementUIDTooLong is an error for when the uid of a library element is invalid
ErrLibraryElementUIDTooLong = errors.New("uid too long, max 40 characters")
// ErrLibraryElementProvisionedFolder indicates that a library element cannot be created on a provisioned folder.
ErrLibraryElementProvisionedFolder = errors.New("resource type not supported in repository-managed folders")
)
// Commands

View File

@@ -6,11 +6,10 @@ import (
"fmt"
"strings"
"github.com/grafana/authlib/types"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/cmd/grafana-cli/logger"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/live/model"
)
@@ -33,9 +32,10 @@ type dashboardEvent struct {
// DashboardHandler manages all the `grafana/dashboard/*` channels
type DashboardHandler struct {
Publisher model.ChannelPublisher
ClientCount model.ChannelClientCount
AccessControl dashboards.DashboardAccessService
Publisher model.ChannelPublisher
ClientCount model.ChannelClientCount
DashboardService dashboards.DashboardService
AccessControl accesscontrol.AccessControl
}
// GetHandlerForPath called on init
@@ -49,15 +49,23 @@ func (h *DashboardHandler) OnSubscribe(ctx context.Context, user identity.Reques
// make sure can view this dashboard
if len(parts) == 2 && parts[0] == "uid" {
ns := types.OrgNamespaceFormatter(user.GetOrgID())
ok, err := h.AccessControl.HasDashboardAccess(ctx, user, utils.VerbGet, ns, parts[1])
if ok && err == nil {
return model.SubscribeReply{
Presence: true,
JoinLeave: true,
}, backend.SubscribeStreamStatusOK, nil
query := dashboards.GetDashboardQuery{UID: parts[1], OrgID: user.GetOrgID()}
_, err := h.DashboardService.GetDashboard(ctx, &query)
if err != nil {
logger.Error("Error getting dashboard", "query", query, "error", err)
return model.SubscribeReply{}, backend.SubscribeStreamStatusNotFound, nil
}
return model.SubscribeReply{}, backend.SubscribeStreamStatusPermissionDenied, err
evaluator := accesscontrol.EvalPermission(dashboards.ActionDashboardsRead, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(parts[1]))
canView, err := h.AccessControl.Evaluate(ctx, user, evaluator)
if err != nil || !canView {
return model.SubscribeReply{}, backend.SubscribeStreamStatusPermissionDenied, err
}
return model.SubscribeReply{
Presence: true,
JoinLeave: true,
}, backend.SubscribeStreamStatusOK, nil
}
// Unknown path
@@ -80,16 +88,29 @@ func (h *DashboardHandler) OnPublish(ctx context.Context, requester identity.Req
// just ignore the event
return model.PublishReply{}, backend.PublishStreamStatusNotFound, fmt.Errorf("ignore???")
}
ns := types.OrgNamespaceFormatter(requester.GetOrgID())
ok, err := h.AccessControl.HasDashboardAccess(ctx, requester, utils.VerbUpdate, ns, parts[1])
if ok && err == nil {
msg, err := json.Marshal(event)
if err != nil {
return model.PublishReply{}, backend.PublishStreamStatusNotFound, fmt.Errorf("internal error")
}
return model.PublishReply{Data: msg}, backend.PublishStreamStatusOK, nil
query := dashboards.GetDashboardQuery{UID: parts[1], OrgID: requester.GetOrgID()}
_, err = h.DashboardService.GetDashboard(ctx, &query)
if err != nil {
logger.Error("Unknown dashboard", "query", query)
return model.PublishReply{}, backend.PublishStreamStatusNotFound, nil
}
evaluator := accesscontrol.EvalPermission(dashboards.ActionDashboardsWrite, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(parts[1]))
canEdit, err := h.AccessControl.Evaluate(ctx, requester, evaluator)
if err != nil {
return model.PublishReply{}, backend.PublishStreamStatusNotFound, fmt.Errorf("internal error")
}
// Ignore edit events if the user can not edit
if !canEdit {
return model.PublishReply{}, backend.PublishStreamStatusNotFound, nil // NOOP
}
msg, err := json.Marshal(event)
if err != nil {
return model.PublishReply{}, backend.PublishStreamStatusNotFound, fmt.Errorf("internal error")
}
return model.PublishReply{Data: msg}, backend.PublishStreamStatusOK, nil
}
return model.PublishReply{}, backend.PublishStreamStatusNotFound, nil

View File

@@ -27,11 +27,13 @@ import (
"github.com/grafana/grafana/pkg/api/response"
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/localcache"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/infra/usagestats"
"github.com/grafana/grafana/pkg/middleware"
"github.com/grafana/grafana/pkg/middleware/requestmeta"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/apiserver"
contextmodel "github.com/grafana/grafana/pkg/services/contexthandler/model"
"github.com/grafana/grafana/pkg/services/dashboards"
@@ -50,6 +52,7 @@ import (
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/services/pluginsintegration/plugincontext"
"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginstore"
"github.com/grafana/grafana/pkg/services/secrets"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/util"
"github.com/grafana/grafana/pkg/web"
@@ -69,23 +72,28 @@ type CoreGrafanaScope struct {
Dashboards DashboardActivityChannel
}
func ProvideService(cfg *setting.Cfg, routeRegister routing.RouteRegister, plugCtxProvider *plugincontext.Provider,
pluginStore pluginstore.Store, pluginClient plugins.Client, dataSourceCache datasources.CacheService,
func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, routeRegister routing.RouteRegister,
pluginStore pluginstore.Store, pluginClient plugins.Client, cacheService *localcache.CacheService,
dataSourceCache datasources.CacheService, secretsService secrets.Service,
usageStatsService usagestats.Service, toggles featuremgmt.FeatureToggles,
dashboardService dashboards.DashboardAccessService,
configProvider apiserver.RestConfigProvider) (*GrafanaLive, error) {
accessControl accesscontrol.AccessControl, dashboardService dashboards.DashboardService,
orgService org.Service, configProvider apiserver.RestConfigProvider) (*GrafanaLive, error) {
g := &GrafanaLive{
Cfg: cfg,
Features: toggles,
PluginContextProvider: plugCtxProvider,
RouteRegister: routeRegister,
pluginStore: pluginStore,
pluginClient: pluginClient,
CacheService: cacheService,
DataSourceCache: dataSourceCache,
SecretsService: secretsService,
channels: make(map[string]model.ChannelHandler),
GrafanaScope: CoreGrafanaScope{
Features: make(map[string]model.ChannelHandlerFactory),
},
usageStatsService: usageStatsService,
orgService: orgService,
keyPrefix: "gf_live",
}
@@ -168,13 +176,19 @@ func ProvideService(cfg *setting.Cfg, routeRegister routing.RouteRegister, plugC
// Initialize the main features
dash := &features.DashboardHandler{
Publisher: g.Publish,
ClientCount: g.ClientCount,
AccessControl: dashboardService,
Publisher: g.Publish,
ClientCount: g.ClientCount,
DashboardService: dashboardService,
AccessControl: accessControl,
}
g.GrafanaScope.Dashboards = dash
g.GrafanaScope.Features["dashboard"] = dash
g.GrafanaScope.Features["watch"] = features.NewWatchRunner(g.Publish, configProvider)
// Testing watch with just the provisioning support -- this will be removed when it is well validated
//nolint:staticcheck // not yet migrated to OpenFeature
if toggles.IsEnabledGlobally(featuremgmt.FlagProvisioning) {
g.GrafanaScope.Features["watch"] = features.NewWatchRunner(g.Publish, configProvider)
}
g.surveyCaller = survey.NewCaller(managedStreamRunner, node)
err = g.surveyCaller.SetupHandlers()
@@ -384,11 +398,11 @@ func ProvideService(cfg *setting.Cfg, routeRegister routing.RouteRegister, plugC
pushPipelineWSHandler.ServeHTTP(ctx.Resp, r)
}
routeRegister.Group("/api/live", func(group routing.RouteRegister) {
g.RouteRegister.Group("/api/live", func(group routing.RouteRegister) {
group.Get("/ws", g.websocketHandler)
}, middleware.ReqSignedIn, requestmeta.SetSLOGroup(requestmeta.SLOGroupNone))
routeRegister.Group("/api/live", func(group routing.RouteRegister) {
g.RouteRegister.Group("/api/live", func(group routing.RouteRegister) {
group.Get("/push/:streamId", g.pushWebsocketHandler)
group.Get("/pipeline/push/*", g.pushPipelineWebsocketHandler)
}, middleware.ReqOrgAdmin, requestmeta.SetSLOGroup(requestmeta.SLOGroupNone))
@@ -447,9 +461,13 @@ type GrafanaLive struct {
PluginContextProvider *plugincontext.Provider
Cfg *setting.Cfg
Features featuremgmt.FeatureToggles
RouteRegister routing.RouteRegister
CacheService *localcache.CacheService
DataSourceCache datasources.CacheService
SecretsService secrets.Service
pluginStore pluginstore.Store
pluginClient plugins.Client
orgService org.Service
keyPrefix string // HA prefix for grafana cloud (since the org is always 1)
@@ -1338,6 +1356,71 @@ func (g *GrafanaLive) HandleWriteConfigsPostHTTP(c *contextmodel.ReqContext) res
})
}
+// HandleWriteConfigsPutHTTP ...
+func (g *GrafanaLive) HandleWriteConfigsPutHTTP(c *contextmodel.ReqContext) response.Response {
+body, err := io.ReadAll(c.Req.Body)
+if err != nil {
+return response.Error(http.StatusInternalServerError, "Error reading body", err)
+}
+var cmd pipeline.WriteConfigUpdateCmd
+err = json.Unmarshal(body, &cmd)
+if err != nil {
+return response.Error(http.StatusBadRequest, "Error decoding write config update command", err)
+}
+if cmd.UID == "" {
+return response.Error(http.StatusBadRequest, "UID required", nil)
+}
+existingBackend, ok, err := g.pipelineStorage.GetWriteConfig(c.Req.Context(), c.GetOrgID(), pipeline.WriteConfigGetCmd{
+UID: cmd.UID,
+})
+if err != nil {
+return response.Error(http.StatusInternalServerError, "Failed to get write config", err)
+}
+if ok {
+if cmd.SecureSettings == nil {
+cmd.SecureSettings = map[string]string{}
+}
+secureJSONData, err := g.SecretsService.DecryptJsonData(c.Req.Context(), existingBackend.SecureSettings)
+if err != nil {
+logger.Error("Error decrypting secure settings", "error", err)
+return response.Error(http.StatusInternalServerError, "Error decrypting secure settings", err)
+}
+for k, v := range secureJSONData {
+if _, ok := cmd.SecureSettings[k]; !ok {
+cmd.SecureSettings[k] = v
+}
+}
+}
+result, err := g.pipelineStorage.UpdateWriteConfig(c.Req.Context(), c.GetOrgID(), cmd)
+if err != nil {
+return response.Error(http.StatusInternalServerError, "Failed to update write config", err)
+}
+return response.JSON(http.StatusOK, util.DynMap{
+"writeConfig": pipeline.WriteConfigToDto(result),
+})
+}
+// HandleWriteConfigsDeleteHTTP ...
+func (g *GrafanaLive) HandleWriteConfigsDeleteHTTP(c *contextmodel.ReqContext) response.Response {
+body, err := io.ReadAll(c.Req.Body)
+if err != nil {
+return response.Error(http.StatusInternalServerError, "Error reading body", err)
+}
+var cmd pipeline.WriteConfigDeleteCmd
+err = json.Unmarshal(body, &cmd)
+if err != nil {
+return response.Error(http.StatusBadRequest, "Error decoding write config delete command", err)
+}
+if cmd.UID == "" {
+return response.Error(http.StatusBadRequest, "UID required", nil)
+}
+err = g.pipelineStorage.DeleteWriteConfig(c.Req.Context(), c.GetOrgID(), cmd)
+if err != nil {
+return response.Error(http.StatusInternalServerError, "Failed to delete write config", err)
+}
+return response.JSON(http.StatusOK, util.DynMap{})
+}
// Write to the standard log15 logger
func handleLog(msg centrifuge.LogEntry) {
arr := make([]interface{}, 0)


@@ -19,6 +19,7 @@ import (
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/usagestats"
+"github.com/grafana/grafana/pkg/services/accesscontrol/acimpl"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/setting"
@@ -339,14 +340,16 @@ func setupLiveService(cfg *setting.Cfg, t *testing.T) (*GrafanaLive, error) {
cfg = setting.NewCfg()
}
-return ProvideService(cfg,
+return ProvideService(nil,
+cfg,
routing.NewRouteRegister(),
-nil, nil, nil,
+nil, nil, nil, nil,
nil,
&usagestats.UsageStatsMock{T: t},
featuremgmt.WithFeatures(),
+acimpl.ProvideAccessControl(featuremgmt.WithFeatures()),
&dashboards.FakeDashboardService{},
-nil)
+nil, nil)
}
type dummyTransport struct {


@@ -457,7 +457,6 @@ type paginationContext struct {
labelOptions []ngmodels.LabelOption
limitAlertsPerRule int64
limitRulesPerGroup int64
-compact bool
}
// pageResult is the result of fetching and filtering of one page
@@ -493,7 +492,6 @@ func (ctx *paginationContext) fetchAndFilterPage(log log.Logger, store ListAlert
Limit: remainingGroups,
RuleLimit: remainingRules,
ContinueToken: token,
-Compact: ctx.compact,
}
ruleList, newToken, err := store.ListAlertRulesByGroup(ctx.opts.Ctx, &byGroupQuery)
@@ -521,7 +519,7 @@ func (ctx *paginationContext) fetchAndFilterPage(log log.Logger, store ListAlert
log, rg.GroupKey, rg.Folder, rg.Rules,
ctx.provenanceRecords, ctx.limitAlertsPerRule,
ctx.stateFilterSet, ctx.matchers, ctx.labelOptions,
-ctx.ruleStatusMutator, ctx.alertStateMutator, ctx.compact,
+ctx.ruleStatusMutator, ctx.alertStateMutator,
)
ruleGroup.Totals = totals
accumulateTotals(result.totalsDelta, totals)
@@ -787,8 +785,6 @@ func PrepareRuleGroupStatusesV2(log log.Logger, store ListAlertRulesStoreV2, opt
}
span.SetAttributes(attribute.Int("rule_name_count", len(ruleNamesSet)))
-compact := getBoolWithDefault(opts.Query, "compact", false)
-span.SetAttributes(attribute.Bool("compact", compact))
pagCtx := &paginationContext{
opts: opts,
provenanceRecords: provenanceRecords,
@@ -811,7 +807,6 @@ func PrepareRuleGroupStatusesV2(log log.Logger, store ListAlertRulesStoreV2, opt
labelOptions: labelOptions,
limitAlertsPerRule: limitAlertsPerRule,
limitRulesPerGroup: limitRulesPerGroup,
-compact: compact,
}
groups, rulesTotals, continueToken, err := paginateRuleGroups(log, store, pagCtx, span, maxGroups, maxRules, nextToken)
@@ -964,7 +959,7 @@ func PrepareRuleGroupStatuses(log log.Logger, store ListAlertRulesStore, opts Ru
break
}
-ruleGroup, totals := toRuleGroup(log, rg.GroupKey, rg.Folder, rg.Rules, provenanceRecords, limitAlertsPerRule, stateFilterSet, matchers, labelOptions, ruleStatusMutator, alertStateMutator, false)
+ruleGroup, totals := toRuleGroup(log, rg.GroupKey, rg.Folder, rg.Rules, provenanceRecords, limitAlertsPerRule, stateFilterSet, matchers, labelOptions, ruleStatusMutator, alertStateMutator)
ruleGroup.Totals = totals
for k, v := range totals {
rulesTotals[k] += v
@@ -1115,7 +1110,7 @@ func matchersMatch(matchers []*labels.Matcher, labels map[string]string) bool {
return true
}
-func toRuleGroup(log log.Logger, groupKey ngmodels.AlertRuleGroupKey, folderFullPath string, rules []*ngmodels.AlertRule, provenanceRecords map[string]ngmodels.Provenance, limitAlerts int64, stateFilterSet map[eval.State]struct{}, matchers labels.Matchers, labelOptions []ngmodels.LabelOption, ruleStatusMutator RuleStatusMutator, ruleAlertStateMutator RuleAlertStateMutator, compact bool) (*apimodels.RuleGroup, map[string]int64) {
+func toRuleGroup(log log.Logger, groupKey ngmodels.AlertRuleGroupKey, folderFullPath string, rules []*ngmodels.AlertRule, provenanceRecords map[string]ngmodels.Provenance, limitAlerts int64, stateFilterSet map[eval.State]struct{}, matchers labels.Matchers, labelOptions []ngmodels.LabelOption, ruleStatusMutator RuleStatusMutator, ruleAlertStateMutator RuleAlertStateMutator) (*apimodels.RuleGroup, map[string]int64) {
newGroup := &apimodels.RuleGroup{
Name: groupKey.RuleGroup,
// file is what Prometheus uses for provisioning, we replace it with namespace which is the folder in Grafana.
@@ -1131,14 +1126,10 @@ func toRuleGroup(log log.Logger, groupKey ngmodels.AlertRuleGroupKey, folderFull
if prov, exists := provenanceRecords[rule.ResourceID()]; exists {
provenance = prov
}
-var query string
-if !compact {
-query = ruleToQuery(log, rule)
-}
alertingRule := apimodels.AlertingRule{
State: "inactive",
Name: rule.Title,
-Query: query,
+Query: ruleToQuery(log, rule),
QueriedDatasourceUIDs: extractDatasourceUIDs(rule),
Duration: rule.For.Seconds(),
KeepFiringFor: rule.KeepFiringFor.Seconds(),


@@ -110,12 +110,6 @@ func (aq *AlertQuery) String() string {
}
func (aq *AlertQuery) setModelProps() error {
-if aq.Model == nil {
-// No data to extract, use an empty map.
-aq.modelProps = map[string]any{}
-return nil
-}
aq.modelProps = make(map[string]any)
err := json.Unmarshal(aq.Model, &aq.modelProps)
if err != nil {


@@ -1022,7 +1022,6 @@ type ListAlertRulesExtendedQuery struct {
Limit int64
RuleLimit int64
ContinueToken string
-Compact bool
}
// CountAlertRulesQuery is the query for counting alert rules


@@ -12,7 +12,6 @@ import (
"github.com/grafana/alerting/models"
alertingNotify "github.com/grafana/alerting/notify"
"github.com/grafana/alerting/notify/nfstatus"
-alertingTemplates "github.com/grafana/alerting/templates"
"github.com/prometheus/alertmanager/config"
amv2 "github.com/prometheus/alertmanager/api/v2/models"
@@ -59,7 +58,6 @@ type alertmanager struct {
decryptFn alertingNotify.GetDecryptedValueFn
crypto Crypto
features featuremgmt.FeatureToggles
-dynamicLimits alertingNotify.DynamicLimits
}
// maintenanceOptions represent the options for components that need maintenance on a frequency within the Alertmanager.
@@ -150,16 +148,6 @@ func NewAlertmanager(ctx context.Context, orgID int64, cfg *setting.Cfg, store A
return nil, err
}
-limits := alertingNotify.DynamicLimits{
-Dispatcher: nilLimits{},
-Templates: alertingTemplates.Limits{
-MaxTemplateOutputSize: cfg.UnifiedAlerting.AlertmanagerMaxTemplateOutputSize,
-},
-}
-if err := limits.Templates.Validate(); err != nil {
-return nil, fmt.Errorf("invalid template limits: %w", err)
-}
am := &alertmanager{
Base: gam,
ConfigMetrics: m.AlertmanagerConfigMetrics,
@@ -170,7 +158,6 @@ func NewAlertmanager(ctx context.Context, orgID int64, cfg *setting.Cfg, store A
decryptFn: decryptFn,
crypto: crypto,
features: featureToggles,
-dynamicLimits: limits,
}
return am, nil
@@ -395,7 +382,7 @@ func (am *alertmanager) applyConfig(ctx context.Context, cfg *apimodels.Postable
TimeIntervals: amConfig.TimeIntervals,
Templates: templates,
Receivers: receivers,
-Limits: am.dynamicLimits,
+DispatcherLimits: &nilLimits{},
Raw: rawConfig,
Hash: configHash,
})

Some files were not shown because too many files have changed in this diff.