Compare commits


34 Commits

Author SHA1 Message Date
Kevin Minehart
6649ad3795 [v11.2.x] Alerting: Add useReturnTo hook to safely handle returnTo parameter (#96480)
Add useReturnTo hook to safely handle returnTo parameter

Co-authored-by: Konrad Lalik <konrad.lalik@grafana.com>
2024-11-14 17:35:05 +01:00
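The commit title indicates the hook exists to handle the `returnTo` query parameter "safely", which usually means guarding against open redirects. A minimal sketch of such a guard follows; the function name, accepted shapes, and rejection rules are assumptions for illustration, not Grafana's actual `useReturnTo` implementation:

```typescript
// Hypothetical sketch: accept only same-origin, absolute-path return targets.
// Grafana's real hook may apply different validation rules.
export function sanitizeReturnTo(returnTo: string | null, fallback = "/"): string {
  if (!returnTo) return fallback;
  // Reject absolute URLs ("https://evil.com") and protocol-relative URLs ("//evil.com").
  if (!returnTo.startsWith("/") || returnTo.startsWith("//")) return fallback;
  // Reject backslash tricks ("/\evil.com") and embedded schemes.
  if (returnTo.includes("\\") || returnTo.includes(":")) return fallback;
  return returnTo;
}
```

Anything that fails validation falls back to a safe in-app path rather than being followed blindly.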
grafana-delivery-bot[bot]
14043ae8f4 [v11.2.x] Docs: Add canvas custom images and icon guidance (#96469)
Co-authored-by: Drew Slobodnjak <60050885+drew08t@users.noreply.github.com>
Co-authored-by: Isabel Matwawana <76437239+imatwawana@users.noreply.github.com>
2024-11-14 11:02:30 -05:00
grafana-delivery-bot[bot]
5cc4535338 [v11.2.x] Docs: Add auth entries to what's new 11.2 (#96393)
Co-authored-by: Isabel Matwawana <76437239+imatwawana@users.noreply.github.com>
2024-11-14 10:21:23 -05:00
lean.dev
b10a2dc68c [v11.2.x] MigrationAssistant: Restrict dashboards, folders and datasources by the org id of the signed in user (#96344)
apply security patch: v11.2.x/195-202410172117.patch
2024-11-12 16:33:06 -03:00
github-actions[bot]
21679c7c25 Release: 11.2.3+security-01 (#96265)
* Update changelog

* baldm0mma/update changelog with cve

* Update CHANGELOG.md

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: jev forsberg <jev.forsberg@grafana.com>
Co-authored-by: Kevin Minehart <5140827+kminehart@users.noreply.github.com>
2024-11-12 16:00:58 +00:00
grafana-delivery-bot[bot]
40c4f15f2c [v11.2.x] docs: Fixed title wording from bar gauge to canvas (#96318)
Co-authored-by: Señor Performo - Leandro Melendez <54183040+srperf@users.noreply.github.com>
Fixed title wording from bar gauge to canvas (#96312)
2024-11-12 10:06:33 -05:00
grafana-delivery-bot[bot]
d54516f7b1 [v11.2.x] docs: Update CanvasDoc adding video link (#95993)
docs: Update CanvasDoc adding video link (#95953)

Update CanvasDoc adding video link

(cherry picked from commit cd3a71e7cb)

Co-authored-by: Señor Performo - Leandro Melendez <54183040+srperf@users.noreply.github.com>
2024-11-11 15:14:31 +01:00
Fayzal Ghantiwala
7080ba2ae5 [v11.2.x] Alerting: Make context deadline on AlertNG service startup configurable (#96133)
Alerting: Make context deadline on AlertNG service startup configurable (#96053)

* Make alerting context deadline configurable

* Remove debug logs

* Change default timeout

* Update tests

(cherry picked from commit 1fdc48faba)
2024-11-08 16:46:47 +00:00
grafana-delivery-bot[bot]
8e91bdea7a [v11.2.x] Alerting: Force refetch prom rules when refreshing panel (#96124)
Alerting: Force refetch prom rules when refreshing panel (#96120)

Force refetch prom rules when refreshing panel

(cherry picked from commit ea0a6a1f7f)

Co-authored-by: Sonia Aguilar <33540275+soniaAguilarPeiron@users.noreply.github.com>
2024-11-08 16:36:22 +01:00
grafana-delivery-bot[bot]
d4b779e16c [v11.2.x] ServerLock: Fix pg concurrency/locking issue (#95934)
ServerLock: Fix pg concurrency/locking issue (#95916)

Fix pg unique constraint validation in serverlock

(cherry picked from commit ab974ddf14)

Co-authored-by: Misi <mgyongyosi@users.noreply.github.com>
2024-11-06 11:08:45 +02:00
grafana-delivery-bot[bot]
fbf07aee1a [v11.2.x] [DOC] Add Pyroscope to list of products (#95910)
[DOC] Add Pyroscope to list of products (#95884)

* Add Pyroscope to list of products

* Update docs/sources/shared/basics/what-is-grafana.md

* Apply suggestions from code review

Co-authored-by: Christopher Moyer <35463610+chri2547@users.noreply.github.com>
Co-authored-by: Bryan Huhta <32787160+bryanhuhta@users.noreply.github.com>

---------

Co-authored-by: Christopher Moyer <35463610+chri2547@users.noreply.github.com>
Co-authored-by: Bryan Huhta <32787160+bryanhuhta@users.noreply.github.com>
(cherry picked from commit 78c5fe61df)

Co-authored-by: Kim Nylander <104772500+knylander-grafana@users.noreply.github.com>
2024-11-05 14:59:51 -05:00
grafana-delivery-bot[bot]
6b970d811e [v11.2.x] Azure: Handle namespace request rejection (#95908)
Azure: Handle namespace request rejection (#95574)

Handle rejection and add test

(cherry picked from commit da1a5426d0)

Co-authored-by: Andreas Christou <andreas.christou@grafana.com>
2024-11-05 20:05:15 +01:00
grafana-delivery-bot[bot]
270004097c [v11.2.x] Timeseries: Utilize min/max on stacking percentage (#95792)
Timeseries: Utilize min/max on stacking percentage (#95581)

* Bring in defined min/max into stacking range

* simplify logic

* different approach

---------

Co-authored-by: Leon Sorokin <leeoniya@gmail.com>
(cherry picked from commit 68aefc73b6)

Co-authored-by: Kristina <kristina.durivage@grafana.com>
2024-11-04 15:09:07 -06:00
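Percent stacking normalizes each series value to its share of the per-timestamp total, which by default pins the y-axis to [0, 1]; the commit above makes a user-defined min/max participate in that range instead of being ignored. A sketch of both pieces, with the caveat that Grafana's actual implementation operates on uPlot data frames and differs in detail:

```typescript
// Hypothetical sketch of percent stacking: each series value becomes its
// fraction of the per-timestamp total, so stacks sum to 1 (100%).
export function percentStack(series: number[][]): number[][] {
  const len = series[0]?.length ?? 0;
  const totals = Array.from({ length: len }, (_, i) =>
    series.reduce((sum, s) => sum + s[i], 0)
  );
  return series.map((s) => s.map((v, i) => (totals[i] === 0 ? 0 : v / totals[i])));
}

// The fix's shape: a defined axis min/max overrides the default [0, 1]
// stacking range rather than being discarded.
export function stackedRange(userMin?: number, userMax?: number): [number, number] {
  return [userMin ?? 0, userMax ?? 1];
}
```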
grafana-delivery-bot[bot]
2ac79897ee [v11.2.x] Remove second aliases section (#95594)
Co-authored-by: Jack Baldry <jack.baldry@grafana.com>
2024-10-30 10:00:14 +00:00
grafana-delivery-bot[bot]
2ee784d59a [v11.2.x] Replace myself with Irene who oversees Grafana documentation (#95495)
Co-authored-by: Irene Rodríguez <irene.rodriguez@grafana.com>
Co-authored-by: Jack Baldry <jack.baldry@grafana.com>
2024-10-28 15:33:58 +02:00
grafana-delivery-bot[bot]
a971ad3a22 [v11.2.x] User: Check SignedInUser OrgID in RevokeInvite (#95489)
User: Check SignedInUser OrgID in RevokeInvite (#95476)

Check SignedInUser OrgID in RevokeInvite

(cherry picked from commit fedcf47702)

Co-authored-by: Misi <mgyongyosi@users.noreply.github.com>
2024-10-28 14:41:42 +02:00
grafana-delivery-bot[bot]
5cc7981d06 [v11.2.x] Update _index.md (#95474)
Co-authored-by: Irene Rodríguez <irene.rodriguez@grafana.com>
Co-authored-by: Jay <92761481+JayEkin@users.noreply.github.com>
2024-10-28 12:01:30 +02:00
grafana-delivery-bot[bot]
5e9024a42e [v11.2.x] Update _index.md (#95469)
Co-authored-by: Irene Rodríguez <irene.rodriguez@grafana.com>
Co-authored-by: Jay <92761481+JayEkin@users.noreply.github.com>
2024-10-28 11:28:33 +02:00
Eric Leijonmarck
b58db36814 [v11.2.x] Folders: Add admin permissions upon creation of a folder w. SA (#95416)
Folders: Add admin permissions upon creation of a folder w. SA (#95072)

* add admin permissions upon creation of a folder w. SA

* Update pkg/services/folder/folderimpl/folder.go

Co-authored-by: Karl Persson <kalle.persson@grafana.com>

* Grant service account permissions for creation of dashboards

* Grant service account admin permissions upon creating a datasource

* fetch user using the userservice with the userid

* Revert "fetch user using the userservice with the userid"

This reverts commit 23cba78752.

* revert back to original datasource creation

---------

Co-authored-by: Karl Persson <kalle.persson@grafana.com>
(cherry picked from commit 9ab064bfc5)
2024-10-28 09:14:19 +00:00
Kevin Minehart
b91bd951a6 [v11.2.x] CI: Remove drone steps for building windows because its done in grafana-… (#95412)
CI: Remove drone steps for building windows because it's done in grafana-… (#95373)

Remove drone steps for building windows because it's done in grafana-build now

(cherry picked from commit 67b3848fd9)
2024-10-25 07:53:32 -06:00
grafana-delivery-bot[bot]
6a66c96e8a [v11.2.x] Remove doc-validator requirement to run on all pull requests (#95318)
Co-authored-by: Jack Baldry <jack.baldry@grafana.com>
2024-10-24 11:38:07 +03:00
Kevin Minehart
9fe9778d53 [v11.2.x] CI: use linux to build msi installers (#95298)
CI: use linux to build msi installers (#95215)

* Build the MSI installers using Linux and wine

(cherry picked from commit 66c728d26b)
2024-10-23 14:12:24 -06:00
grafana-delivery-bot[bot]
5ffd30075d [v11.2.x] Docs: Table visualization update (#95285)
Co-authored-by: Adela Almasan <88068998+adela-almasan@users.noreply.github.com>
Co-authored-by: Isabel Matwawana <76437239+imatwawana@users.noreply.github.com>
2024-10-23 15:01:31 -04:00
grafana-delivery-bot[bot]
bfbf8d6b9c [v11.2.x] Prometheus: Fix passing query timeout to upstream queries (#95263)
Prometheus: Fix passing query timeout to upstream queries (#95104)

* remove queryTimeout from constructor

* use queryTimeout for range and instant queries

* remove comment

* remove default query timeout

* fix linting

(cherry picked from commit 78a00d09cd)

Co-authored-by: ismail simsek <ismailsimsek09@gmail.com>
2024-10-23 16:49:16 +02:00
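The bullet points above describe moving `queryTimeout` out of the constructor and applying it per range/instant query. A sketch of the resulting request shape — the `timeout` parameter name comes from Prometheus' HTTP API, but the function here is illustrative, not Grafana's actual plumbing:

```typescript
// Hypothetical sketch: the timeout travels with each instant/range request
// instead of being fixed once in a datasource constructor.
export function instantQueryParams(expr: string, queryTimeout?: string): URLSearchParams {
  const params = new URLSearchParams({ query: expr });
  if (queryTimeout) {
    // Prometheus caps query evaluation at this duration, e.g. "30s".
    params.set("timeout", queryTimeout);
  }
  return params;
}
```

Omitting the argument leaves the timeout to the server's own default, matching the "remove default query timeout" step in the commit.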
grafana-delivery-bot[bot]
ea458d3a15 [v11.2.x] Fix: Deduplicate OrgID in SA logins (#94393)
* Fix: Deduplicate OrgID in SA logins (#94378)

* Fix: Deduplicate OrgID in SA logins

(cherry picked from commit b90e09e966)

* Fix: Actually call the DedupOrgInLogin migration (#94520)

* Fix: Account for conflicting logins in dedupOrgInlogin migration (#94669)

---------

Co-authored-by: Gabriel MABILLE <gamab@users.noreply.github.com>
2024-10-23 15:34:56 +02:00
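The fix series above deduplicates an org-ID prefix in service account logins. Assuming a login format along the lines of `sa-<orgId>-<name>` (an assumption — the exact format and migration logic live in the linked PRs), the core dedup can be sketched as:

```typescript
// Hypothetical sketch: collapse an accidentally repeated org prefix,
// e.g. "sa-1-sa-1-reporter" -> "sa-1-reporter". The real migration also
// has to handle conflicting logins (see #94669).
export function dedupOrgInLogin(login: string): string {
  const m = login.match(/^(sa-\d+-)(\1)+/);
  if (!m) return login;
  // Keep one copy of the prefix, drop the repeats.
  return m[1] + login.slice(m[0].length);
}
```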
grafana-delivery-bot[bot]
3576d41ef9 [v11.2.x] Azure: Fix duplicated traces in multi-resource trace query (#95246)
Azure: Fix duplicated traces in multi-resource trace query (#95156)

Use first resource as base resource for query

(cherry picked from commit 8bb7475e4f)

Co-authored-by: Andreas Christou <andreas.christou@grafana.com>
2024-10-23 15:24:26 +03:00
grafana-delivery-bot[bot]
6926deae8d [v11.2.x] Migration: Remove table aliasing in delete statement to make it work for mariadb (#95231)
Migration: Remove table aliasing in delete statement to make it work for mariadb (#95226)

Migration: remove table aliasing in delete statement to make it work in mariadb
(cherry picked from commit 6f7528f896)

Co-authored-by: Karl Persson <kalle.persson@grafana.com>
2024-10-23 11:21:55 +02:00
grafana-delivery-bot[bot]
96948d560e [v11.2.x] format datasources list with columns (#95225)
Co-authored-by: Jack Baldry <jack.baldry@grafana.com>
Co-authored-by: Robby Milo <robbymilo@fastmail.com>
2024-10-23 09:24:14 +01:00
grafana-delivery-bot[bot]
d76e4c51d6 [v11.2.x] Anonymous User: Adds validator service for anonymous users (#94993)
Anonymous User: Adds validator service for anonymous users (#94700)

(cherry picked from commit 3438196010)

Co-authored-by: lean.dev <34773040+leandro-deveikis@users.noreply.github.com>
2024-10-22 13:41:22 -03:00
github-actions[bot]
70c7e8f82c Release: 11.2.3 (#95177)
* Update changelog

* Update version to 11.2.3

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-10-22 09:27:38 -07:00
grafana-delivery-bot[bot]
2b425f803c [v11.2.x] Azure Monitor: Support metric namespaces fallback (#95154)
Azure Monitor: Support metric namespaces fallback (#94722)

* Update display names

* Update multi-resource types

* Update default metric namespace list

* Initialise namespace list with fallback namespaces

* Add test

* Update test

(cherry picked from commit 986bd2f9f8)

Co-authored-by: Andreas Christou <andreas.christou@grafana.com>
2024-10-22 15:44:13 +03:00
grafana-delivery-bot[bot]
1cd87ca64f [v11.2.x] Docs note on Cross-account observability permissions for CW datasource (#95124)
Co-authored-by: Jara Suárez de Puga García <jara.suarezdepuga@grafana.com>
2024-10-22 10:06:19 +01:00
grafana-delivery-bot[bot]
9957e99294 [v11.2.x] [docs] fix provisioning folder name (#95100)
Co-authored-by: Scott Lepper <scott.lepper@gmail.com>
fix provisioning folder name (#95099)
2024-10-21 23:44:04 +03:00
Adela Almasan
3a68ba5699 [v11.2.x] Transformations: Add 'transpose' transform (#95076)
Transformations: Add 'transpose' transform (#88963)

Co-authored-by: Leon Sorokin <leeoniya@gmail.com>
(cherry picked from commit 8bb548e17b)

Co-authored-by: Jmdane <70574656+jmdane@users.noreply.github.com>
2024-10-21 13:09:22 -05:00
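The 'transpose' transform turns rows into columns. Grafana's actual transformation operates on data frames with field names and types; the core reshaping it performs can be sketched on plain 2D arrays:

```typescript
// Minimal sketch of a transpose transform: element [r][c] moves to [c][r].
// Assumes rectangular input (every row the same length).
export function transpose<T>(rows: T[][]): T[][] {
  if (rows.length === 0) return [];
  return rows[0].map((_, col) => rows.map((row) => row[col]));
}
```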
141 changed files with 2024 additions and 1197 deletions


@@ -18,18 +18,10 @@ load(
"publish_packages_pipeline",
)
load("scripts/drone/events/rrc-patch.star", "rrc_patch_pipelines")
load(
"scripts/drone/pipelines/ci_images.star",
"publish_ci_windows_test_image_pipeline",
)
load(
"scripts/drone/pipelines/publish_images.star",
"publish_image_pipelines_public",
)
load(
"scripts/drone/pipelines/windows.star",
"windows_test_backend",
)
load(
"scripts/drone/rgm.star",
"rgm",
@@ -46,12 +38,7 @@ def main(_ctx):
publish_npm_pipelines() +
publish_packages_pipeline() +
rgm() +
[windows_test_backend({
"event": ["promote"],
"target": ["test-windows"],
}, "oss", "testing")] +
integration_test_pipelines() +
publish_ci_windows_test_image_pipeline() +
cronjobs() +
secrets()
)


@@ -539,7 +539,7 @@ steps:
name: identify-runner
- commands:
- mkdir -p bin
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/grabpl
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.1.1/grabpl
- chmod +x bin/grabpl
image: byrnedo/alpine-curl:0.1.8
name: grabpl
@@ -978,7 +978,7 @@ steps:
name: clone-enterprise
- commands:
- mkdir -p bin
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/grabpl
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.1.1/grabpl
- chmod +x bin/grabpl
image: byrnedo/alpine-curl:0.1.8
name: grabpl
@@ -1940,7 +1940,7 @@ steps:
name: identify-runner
- commands:
- mkdir -p bin
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/grabpl
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.1.1/grabpl
- chmod +x bin/grabpl
image: byrnedo/alpine-curl:0.1.8
name: grabpl
@@ -2476,7 +2476,7 @@ services:
steps:
- commands:
- mkdir -p bin
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/grabpl
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.1.1/grabpl
- chmod +x bin/grabpl
image: byrnedo/alpine-curl:0.1.8
name: grabpl
@@ -2658,53 +2658,6 @@ volumes:
clone:
retries: 3
depends_on:
- main-test-frontend
- main-test-backend
- main-build-e2e-publish
- main-integration-tests
environment:
EDITION: oss
image_pull_secrets:
- gcr
- gar
kind: pipeline
name: main-windows
platform:
arch: amd64
os: windows
version: "1809"
services: []
steps:
- commands:
- echo $env:DRONE_RUNNER_NAME
image: mcr.microsoft.com/windows:1809
name: identify-runner
- commands:
- $$ProgressPreference = "SilentlyContinue"
- Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/windows/grabpl.exe
-OutFile grabpl.exe
image: grafana/ci-wix:0.1.1
name: windows-init
trigger:
branch: main
event:
- push
paths:
exclude:
- '*.md'
- docs/**
- latest.json
repo:
- grafana/grafana
type: docker
volumes:
- host:
path: //./pipe/docker_engine/
name: docker
---
clone:
retries: 3
depends_on:
- main-build-e2e-publish
- main-integration-tests
environment:
@@ -2756,7 +2709,6 @@ depends_on:
- main-test-backend
- main-build-e2e-publish
- main-integration-tests
- main-windows
kind: pipeline
name: main-notify
platform:
@@ -3108,7 +3060,7 @@ services:
steps:
- commands:
- mkdir -p bin
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/grabpl
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.1.1/grabpl
- chmod +x bin/grabpl
image: byrnedo/alpine-curl:0.1.8
name: grabpl
@@ -3353,7 +3305,7 @@ steps:
name: identify-runner
- commands:
- mkdir -p bin
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/grabpl
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.1.1/grabpl
- chmod +x bin/grabpl
image: byrnedo/alpine-curl:0.1.8
name: grabpl
@@ -3485,7 +3437,7 @@ steps:
name: identify-runner
- commands:
- mkdir -p bin
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/grabpl
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.1.1/grabpl
- chmod +x bin/grabpl
image: byrnedo/alpine-curl:0.1.8
name: grabpl
@@ -4188,51 +4140,6 @@ volumes:
clone:
retries: 3
depends_on: []
environment:
EDITION: oss
image_pull_secrets:
- gcr
- gar
kind: pipeline
name: release-whatsnew-checker
node:
type: no-parallel
platform:
arch: amd64
os: linux
services: []
steps:
- commands:
- go build -o ./bin/build -ldflags '-extldflags -static' ./pkg/build/cmd
depends_on: []
environment:
CGO_ENABLED: 0
image: golang:1.22.7-alpine
name: compile-build-cmd
- commands:
- ./bin/build whatsnew-checker
depends_on:
- compile-build-cmd
image: golang:1.22.7-alpine
name: whats-new-checker
trigger:
event:
exclude:
- promote
ref:
exclude:
- refs/tags/*-cloud*
include:
- refs/tags/v*
type: docker
volumes:
- host:
path: /var/run/docker.sock
name: docker
---
clone:
retries: 3
depends_on: []
image_pull_secrets:
- gcr
- gar
@@ -4303,53 +4210,34 @@ volumes:
---
clone:
retries: 3
depends_on:
- rgm-tag-prerelease
depends_on: []
environment:
EDITION: oss
image_pull_secrets:
- gcr
- gar
kind: pipeline
name: rgm-tag-prerelease-windows
name: release-whatsnew-checker
node:
type: no-parallel
platform:
arch: amd64
os: windows
version: "1809"
os: linux
services: []
steps:
- commands:
- echo $env:DRONE_RUNNER_NAME
image: mcr.microsoft.com/windows:1809
name: identify-runner
- commands:
- $$ProgressPreference = "SilentlyContinue"
- Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/windows/grabpl.exe
-OutFile grabpl.exe
image: grafana/ci-wix:0.1.1
name: windows-init
- commands:
- $$gcpKey = $$env:GCP_KEY
- '[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($$gcpKey))
> gcpkey.json'
- dos2unix gcpkey.json
- gcloud auth activate-service-account --key-file=gcpkey.json
- rm gcpkey.json
- cp C:\App\nssm-2.24.zip .
- .\grabpl.exe windows-installer --target gs://grafana-prerelease/artifacts/downloads/${DRONE_TAG}/oss/release/grafana-${DRONE_TAG:1}.windows-amd64.zip
--edition oss ${DRONE_TAG}
- $$fname = ((Get-Childitem grafana*.msi -name) -split "`n")[0]
- gsutil cp $$fname gs://grafana-prerelease/artifacts/downloads/${DRONE_TAG}/oss/release/
- gsutil cp "$$fname.sha256" gs://grafana-prerelease/artifacts/downloads/${DRONE_TAG}/oss/release/
depends_on:
- windows-init
- go build -o ./bin/build -ldflags '-extldflags -static' ./pkg/build/cmd
depends_on: []
environment:
GCP_KEY:
from_secret: gcp_grafanauploads_base64
GITHUB_TOKEN:
from_secret: github_token
PRERELEASE_BUCKET:
from_secret: prerelease_bucket
image: grafana/ci-wix:0.1.1
name: build-windows-installer
CGO_ENABLED: 0
image: golang:1.22.7-alpine
name: compile-build-cmd
- commands:
- ./bin/build whatsnew-checker
depends_on:
- compile-build-cmd
image: golang:1.22.7-alpine
name: whats-new-checker
trigger:
event:
exclude:
@@ -4362,14 +4250,13 @@ trigger:
type: docker
volumes:
- host:
path: //./pipe/docker_engine/
path: /var/run/docker.sock
name: docker
---
clone:
retries: 3
depends_on:
- rgm-tag-prerelease
- rgm-tag-prerelease-windows
image_pull_secrets:
- gcr
- gar
@@ -5005,59 +4892,6 @@ volumes:
path: /var/run/docker.sock
name: docker
---
clone:
disable: true
depends_on: []
environment:
EDITION: oss
image_pull_secrets:
- gcr
- gar
kind: pipeline
name: testing-test-backend-windows
platform:
arch: amd64
os: windows
version: "1809"
services: []
steps:
- commands:
- git clone "https://$$env:GITHUB_TOKEN@github.com/$$env:DRONE_REPO.git" .
- git checkout -f $$env:DRONE_COMMIT
environment:
GITHUB_TOKEN:
from_secret: github_token
image: grafana/ci-wix:0.1.1
name: clone
- commands: []
depends_on:
- clone
image: golang:1.22.7-windowsservercore-1809
name: windows-init
- commands:
- go install github.com/google/wire/cmd/wire@v0.5.0
- wire gen -tags oss ./pkg/server
depends_on:
- windows-init
image: golang:1.22.7-windowsservercore-1809
name: wire-install
- commands:
- go test -short -covermode=atomic -timeout=5m ./pkg/...
depends_on:
- wire-install
image: golang:1.22.7-windowsservercore-1809
name: test-backend
trigger:
event:
- promote
target:
- test-windows
type: docker
volumes:
- host:
path: //./pipe/docker_engine/
name: docker
---
clone:
retries: 3
depends_on: []
@@ -5122,7 +4956,7 @@ services:
steps:
- commands:
- mkdir -p bin
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.0.56/grabpl
- curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v3.1.1/grabpl
- chmod +x bin/grabpl
image: byrnedo/alpine-curl:0.1.8
name: grabpl
@@ -5287,55 +5121,6 @@ volumes:
temp:
medium: memory
---
clone:
disable: true
depends_on: []
image_pull_secrets:
- gcr
- gar
kind: pipeline
name: publish-ci-windows-test-image
platform:
arch: amd64
os: windows
version: "1809"
services: []
steps:
- commands:
- git clone "https://$$env:GITHUB_TOKEN@github.com/grafana/grafana-ci-sandbox.git"
.
- git checkout -f $$env:DRONE_COMMIT
environment:
GITHUB_TOKEN:
from_secret: github_token
image: grafana/ci-wix:0.1.1
name: clone
- commands:
- cd scripts\build\ci-windows-test
- docker login -u $$env:DOCKER_USERNAME -p $$env:DOCKER_PASSWORD
- docker build -t grafana/grafana-ci-windows-test:$$env:TAG .
- docker push grafana/grafana-ci-windows-test:$$env:TAG
environment:
DOCKER_PASSWORD:
from_secret: docker_password
DOCKER_USERNAME:
from_secret: docker_username
image: docker:windowsservercore-1809
name: build-and-publish
volumes:
- name: docker
path: //./pipe/docker_engine/
trigger:
event:
- promote
target:
- ci-windows-test-image
type: docker
volumes:
- host:
path: //./pipe/docker_engine/
name: docker
---
clone:
retries: 3
kind: pipeline
@@ -5646,6 +5431,7 @@ steps:
- trivy --exit-code 0 --severity UNKNOWN,LOW,MEDIUM jwilder/dockerize:0.6.1
- trivy --exit-code 0 --severity UNKNOWN,LOW,MEDIUM koalaman/shellcheck:stable
- trivy --exit-code 0 --severity UNKNOWN,LOW,MEDIUM rockylinux:9
- trivy --exit-code 0 --severity UNKNOWN,LOW,MEDIUM scottyhardy/docker-wine:stable-9.0
depends_on:
- authenticate-gcr
image: aquasec/trivy:0.21.0
@@ -5683,6 +5469,7 @@ steps:
- trivy --exit-code 1 --severity HIGH,CRITICAL jwilder/dockerize:0.6.1
- trivy --exit-code 1 --severity HIGH,CRITICAL koalaman/shellcheck:stable
- trivy --exit-code 1 --severity HIGH,CRITICAL rockylinux:9
- trivy --exit-code 1 --severity HIGH,CRITICAL scottyhardy/docker-wine:stable-9.0
depends_on:
- authenticate-gcr
environment:
@@ -5914,6 +5701,6 @@ kind: secret
name: gcr_credentials
---
kind: signature
hmac: 58b776458f032819ea9981e96a9cbfe6bcc66c74b407f7c54b020c7433462816
hmac: e46c5ccc2787bfd913ff6283e604060453ac66c0b34c4b70b8bf9ec412dad546
...

.github/CODEOWNERS vendored

@@ -38,17 +38,11 @@
/docs/.codespellignore @grafana/docs-tooling
/docs/sources/ @Eve832
/docs/sources/administration/ @jdbaldry
/docs/sources/alerting/ @brendamuir
/docs/sources/dashboards/ @imatwawana
/docs/sources/datasources/ @jdbaldry
/docs/sources/explore/ @grafana/explore-squad @lwandz13
/docs/sources/fundamentals @irenerl24
/docs/sources/getting-started/ @irenerl24
/docs/sources/introduction/ @irenerl24
/docs/sources/panels-visualizations/ @imatwawana
/docs/sources/release-notes/ @Eve832 @GrafanaWriter
/docs/sources/setup-grafana/ @irenerl24
/docs/sources/release-notes/ @irenerl24 @GrafanaWriter
/docs/sources/upgrade-guide/ @imatwawana
/docs/sources/whatsnew/ @imatwawana


@@ -1,13 +1,18 @@
name: "doc-validator"
on:
pull_request:
paths: ["docs/sources/**"]
workflow_dispatch:
inputs:
include:
description: |
Regular expression that matches paths to include in linting.
For example: docs/sources/(?:alerting|fundamentals)/.+\.md
required: true
jobs:
doc-validator:
runs-on: "ubuntu-latest"
container:
image: "grafana/doc-validator:v5.0.0"
image: "grafana/doc-validator:v5.2.0"
steps:
- name: "Checkout code"
uses: "actions/checkout@v4"
@@ -15,15 +20,7 @@ jobs:
# Only run doc-validator on specific directories.
run: >
doc-validator
'--include=^docs/sources/(?:alerting|fundamentals|getting-started|introduction|setup-grafana|upgrade-guide|whatsnew/whats-new-in-v(?:9|10))/.+\.md$'
'--include=${{ inputs.include }}'
'--skip-checks=^(?:image.+|canonical-does-not-match-pretty-URL)$'
./docs/sources
/docs/grafana/latest
| reviewdog
-f=rdjsonl
--fail-on-error
--filter-mode=nofilter
--name=doc-validator
--reporter=github-pr-review
env:
REVIEWDOG_GITHUB_API_TOKEN: "${{ secrets.GITHUB_TOKEN }}"


@@ -1,3 +1,27 @@
<!-- 11.2.3+security-01 START -->
# 11.2.3+security-01 (2024-11-12)
### Bug fixes
- **MigrationAssistant:** Fix Migration Assistant issue [CVE-2024-9476]
<!-- 11.2.3+security-01 END -->
<!-- 11.2.3 START -->
# 11.2.3 (2024-10-22)
### Bug fixes
- **Alerting:** Fix incorrect permission on POST external rule groups endpoint [CVE-2024-8118] [#93947](https://github.com/grafana/grafana/pull/93947), [@alexweav](https://github.com/alexweav)
- **AzureMonitor:** Fix App Insights portal URL for multi-resource trace queries [#94475](https://github.com/grafana/grafana/pull/94475), [@aangelisc](https://github.com/aangelisc)
- **Canvas:** Allow API calls to grafana origin [#94129](https://github.com/grafana/grafana/pull/94129), [@adela-almasan](https://github.com/adela-almasan)
- **Folders:** Correctly show new folder button under root folder [#94712](https://github.com/grafana/grafana/pull/94712), [@IevaVasiljeva](https://github.com/IevaVasiljeva)
- **OrgSync:** Do not set default Organization for a user to a non-existent Organization [#94549](https://github.com/grafana/grafana/pull/94549), [@mgyongyosi](https://github.com/mgyongyosi)
- **Plugins:** Skip install errors if dependency plugin already exists [#94717](https://github.com/grafana/grafana/pull/94717), [@wbrowne](https://github.com/wbrowne)
- **ServerSideExpressions:** Disable SQL Expressions to prevent RCE and LFI vulnerability [#94959](https://github.com/grafana/grafana/pull/94959), [@samjewell](https://github.com/samjewell)
<!-- 11.2.3 END -->
<!-- 11.2.2+security-01 START -->
# 11.2.2+security-01 (2024-10-17)


@@ -1195,6 +1195,9 @@ enabled =
# Comma-separated list of organization IDs for which to disable unified alerting. Only supported if unified alerting is enabled.
disabled_orgs =
# Specify how long to wait for the alerting service to initialize
initialization_timeout = 30s
# Specify the frequency of polling for admin config changes.
# The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
admin_config_poll_interval = 60s


@@ -1183,6 +1183,9 @@
# Comma-separated list of organization IDs for which to disable unified alerting. Only supported if unified alerting is enabled.
;disabled_orgs =
# Specify how long to wait for the alerting service to initialize
;initialization_timeout = 30s
# Specify the frequency of polling for admin config changes.
# The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), e.g. 30s or 1m.
;admin_config_poll_interval = 60s


@@ -73,7 +73,7 @@ Therefore, we heavily rely on the expertise of the community.
## Data sources
You can manage data sources in Grafana by adding YAML configuration files in the [`provisioning/data sources`]({{< relref "../../setup-grafana/configure-grafana#provisioning" >}}) directory.
You can manage data sources in Grafana by adding YAML configuration files in the [`provisioning/datasources`]({{< relref "../../setup-grafana/configure-grafana#provisioning" >}}) directory.
Each configuration file can contain a list of `datasources` to add or update during startup.
If the data source already exists, Grafana reconfigures it to match the provisioned configuration file.


@@ -2,6 +2,7 @@
aliases:
- ../../../alerting-rules/manage-contact-points/configure-oncall/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/configure-oncall/
- ../../../alerting-rules/manage-contact-points/integrations/configure-oncall/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/integrations/configure-oncall/
- ../configure-oncall/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/configure-oncall/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/integrations/configure-oncall/
description: Configure the Alerting - Grafana OnCall integration to connect alerts generated by Grafana Alerting with Grafana OnCall
keywords:
@@ -9,8 +10,6 @@ keywords:
- alerting
- oncall
- integration
aliases:
- ../configure-oncall/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/configure-oncall/
labels:
products:
- cloud


@@ -138,6 +138,8 @@ A data source that uses the result set from another panel in the same dashboard.
These built-in core data sources are also included in the Grafana documentation:
{{< column-list >}}
- [Alertmanager]({{< relref "./alertmanager" >}})
- [AWS CloudWatch]({{< relref "./aws-cloudwatch" >}})
- [Azure Monitor]({{< relref "./azure-monitor" >}})
@@ -157,6 +159,8 @@ These built-in core data sources are also included in the Grafana documentation:
- [Testdata]({{< relref "./testdata" >}})
- [Zipkin]({{< relref "./zipkin" >}})
{{< /column-list >}}
## Add additional data source plugins
You can add additional data sources as plugins (that are not available in core Grafana), which you can install or create yourself.


@@ -238,6 +238,10 @@ You can attach these permissions to the IAM role or IAM user you configured in [
}
```
{{< admonition type="note" >}}
Cross-account observability lets you retrieve metrics and logs across different accounts in a single region, but you can't query EC2 Instance Attributes across accounts because those come from the EC2 API and not the CloudWatch API.
{{< /admonition >}}
### Configure CloudWatch settings
#### Namespaces of Custom Metrics


@@ -43,6 +43,12 @@ With all of these dynamic elements, there's almost no limit to what a canvas can
We'd love your feedback on the canvas visualization. Please check out the [open Github issues](https://github.com/grafana/grafana/issues?page=1&q=is%3Aopen+is%3Aissue+label%3Aarea%2Fpanel%2Fcanvas) and [submit a new feature request](https://github.com/grafana/grafana/issues/new?assignees=&labels=type%2Ffeature-request,area%2Fpanel%2Fcanvas&title=Canvas:&projects=grafana-dataviz&template=1-feature_requests.md) as needed.
{{< /admonition >}}
## Configure a canvas visualization
The following video shows you how to create and configure a canvas visualization:
{{< youtube id="b7AYKoFcPpY" >}}
## Supported data formats
The canvas visualization is unique in that it doesn't have any specific data requirements. You can even start adding and configuring visual elements without providing any data. However, any data you plan to consume should be accessible through supported Grafana data sources and structured in a way that ensures smooth integration with your custom elements.
@@ -85,7 +91,23 @@ The text element lets you easily add text to the canvas. The element also suppor
### Icon
The icon element lets you add a supported icon to the canvas. Icons can have their color set based on thresholds / value mappings.
The icon element lets you add a supported icon to the canvas. Icons can have their color set based on thresholds or value mappings.
#### Add a custom icon
You can add a custom icon by referencing an SVG file. To add a custom icon, follow these steps:
1. Under **Icon > SVG Path**, if it's not already selected, select **Fixed** as your file source.
1. Click **Select a value** in the field below.
1. In the dialog box that opens, click the **URL** tab.
1. Enter the URL in the field below the **URL** tab.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-canvas-custom-image-v11.3.png" max-width="300px" alt="Add a custom image URL" >}}
1. Click **Select**.
1. (Optional) Add a background image to your icon with the **Background (icon)** option by following the steps to [add a custom image](#add-custom-images-to-elements).
If you don't have an SVG file, you can use a rectangle element instead of an icon and set its background image to an image file type. To add a custom image for another element type, follow the steps to [add a custom image](#add-custom-images-to-elements).
### Server
{{< docs/play title="Canvas Visualization: Buttons" url="https://play.grafana.org/d/c9ea65f5-ed5a-45cf-8fb7-f82af7c3afdf/" >}}
## Add custom images to elements
You can add custom background images to all elements except **Button** by referencing an image URL.
The image must be hosted at a URL that allows requests from your Grafana instance.
To upload a custom image, follow these steps:
1. Under **Background (\<ELEMENT TYPE\>)**, if it's not already selected, select **Fixed** as your image source.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-canvas-custom-image-src-v11.3.png" max-width="300px" alt="Custom image source selection" >}}
1. Click **Select a value** in the field below.
1. In the dialog box that opens, click the **URL** tab.
1. Enter the URL in the field below the **URL** tab.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-canvas-custom-image-v11.3.png" max-width="300px" alt="Add a custom image URL" >}}
1. Click **Select**.
## Connections
When building a canvas, you can connect elements together to create more complex visualizations. Connections are created by dragging from the connection anchor of one element to the connection anchor of another element. You can also create connections to the background of the canvas. Connection anchors are displayed when you hover over an element and inline editing is turned on.


# Table
Tables are a highly flexible visualization designed to display data in columns and rows.
The table visualization can take multiple datasets and provide the option to switch between them.
With this versatility, it's the preferred visualization for viewing multiple data types, aiding in your data analysis needs.
![Basic table visualization](/media/docs/grafana/panels-visualizations/screenshot-basic-table-v11.3.png)
You can use a table visualization to show datasets such as:
Any information you might want to put in a spreadsheet can often be best visualized in a table.
Tables also provide different styles to visualize data inside the table cells, such as colored text and cell backgrounds, gauges, sparklines, data links, JSON code, and images.
{{< admonition type="note" >}}
Annotations and alerts are not currently supported for tables.
{{< /admonition >}}
## Configure a table visualization
The following video provides a visual walkthrough of the options you can set in a table visualization.
If you want to see a configuration in action, check out the video:
{{< youtube id="PCY7O8EJeJY" >}}
{{< docs/play title="Table Visualizations in Grafana" url="https://play.grafana.org/d/OhR1ID6Mk/" >}}
## Supported data formats
The table visualization supports any data that has a column-row structure.
{{< admonition type="note" >}}
If you're using a cell type such as sparkline or JSON, the data requirements may differ in a way that's specific to that type. For more information, refer to [Cell type](#cell-type).
{{< /admonition >}}
### Example
This example shows a basic dataset in which there's data for every table cell:
```csv
Column1, Column2, Column3
value1 , value2 , value3
value4 , value5 , value6
value7 , value8 , value9
```
If a cell is missing or the table column-row structure is not complete, as in the following example, the table visualization won't display any of the data:
```csv
Column1, Column2, Column3
value1 , value2 , value3
gap1 , gap2
value4 , value5 , value6
```
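A quick way to check a dataset for this kind of gap before handing it to a table is to compare each row's length to the header row (a Python sketch; the `data` string mirrors the example above):

```python
import csv
import io

# Sample data mirroring the example above; row 3 is missing its third value.
data = """Column1, Column2, Column3
value1 , value2 , value3
gap1 , gap2
value4 , value5 , value6
"""

rows = list(csv.reader(io.StringIO(data), skipinitialspace=True))
header = rows[0]
# Collect 1-based line numbers of rows that don't match the header width.
incomplete = [i for i, row in enumerate(rows[1:], start=2) if len(row) != len(header)]
print(incomplete)  # [3]
```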
If you need to hide columns, you can do so using [data transformations](ref:data-transformation), [field overrides](#field-overrides), or by [building a query](ref:build-query) that returns only the needed columns.
## Debugging in tables

The table visualization helps with debugging when you need to know exactly what results your query is returning and why other visualizations might not be working. This functionality is also accessible in most visualizations by toggling on the **Table view** switch at the top of the panel:

![The Table view switch](/media/docs/grafana/panels-visualizations/screenshot-table-view-on-11.2.png)

## Column filtering

You can temporarily change how column data is displayed using column filtering.
For example, you can show or hide specific values.

### Turn on column filtering

To turn on column filtering, follow these steps:
1. In Grafana, navigate to the dashboard with the table with the columns that you want to filter.
1. Hover over any part of the panel to display the actions menu in the top right corner.
1. Click the menu and select **Edit**.
1. In the panel editor pane, expand the **Table** options section.
1. Toggle on the [**Column filter** switch](#table-options).
A filter icon (funnel) appears next to each column title.
{{< figure src="/static/img/docs/tables/column-filter-with-icon.png" max-width="350px" alt="Column filtering turned on" class="docs-image--no-shadow" >}}
### Filter column values
To filter column values, follow these steps:
1. Click the filter icon (funnel) next to a column title.
Grafana displays the filter options for that column.
{{< figure src="/static/img/docs/tables/filter-column-values.png" max-width="300px" alt="Filter column values" class="docs-image--no-shadow" >}}
1. Click the checkbox next to the values that you want to display or click **Select all**.
1. Enter text in the search field at the top to show those values in the display so that you can select them rather than scroll to find them.
1. Choose from several operators to display column values:
- **Contains** - Matches a regex pattern (operator by default).
- **Expression** - Evaluates a boolean expression. The character `$` represents the column value in the expression (for example, "$ >= 10 && $ <= 12").
- The typical comparison operators: `=`, `!=`, `<`, `<=`, `>`, `>=`.
1. Click the checkbox above the **Ok** and **Cancel** buttons to add or remove all displayed values to and from the filter.
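The **Expression** operator's semantics can be illustrated with a short sketch (hypothetical Python, not Grafana code; `$` stands for the cell value, and `&&`/`||` are the boolean operators):

```python
def matches(expr: str, value: float) -> bool:
    """Illustrative only: substitute the cell value for `$` and evaluate."""
    py = expr.replace("$", repr(value)).replace("&&", " and ").replace("||", " or ")
    return bool(eval(py))  # Grafana evaluates the expression in the browser, not like this

values = [8, 10, 11, 12, 15]
kept = [v for v in values if matches("$ >= 10 && $ <= 12", v)]
print(kept)  # [10, 11, 12]
```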
### Clear column filters
Columns with filters applied have a blue filter icon displayed next to the title.
{{< figure src="/static/img/docs/tables/filtered-column.png" max-width="100px" alt="Filtered column" class="docs-image--no-shadow" >}}
To remove the filter, click the blue filter icon and then click **Clear filter**.
## Sort columns
Click a column title to change the sort order from default to descending to ascending.
Each time you click, the sort order changes to the next option in the cycle.
You can sort multiple columns by holding the `Shift` key and clicking the column name.
{{< figure src="/static/img/docs/tables/sort-descending.png" max-width="350px" alt="Sort descending" class="docs-image--no-shadow" >}}
## Dataset selector
If the data queried contains multiple datasets, a table displays a drop-down list at the bottom, so you can select the dataset you want to visualize.
This option is only available when you're editing the panel.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-table-multi-dataset-v11.3.png" max-width="650px" alt="Table visualization with multiple datasets" >}}
## Configuration options
### Table options
{{% admonition type="note" %}}
If you are using a table created before Grafana 7.0, then you need to migrate to the new table version in order to see these options. To migrate, on the Panel tab, click **Table** visualization. Grafana updates the table version and you can then access all table options.
{{% /admonition %}}
| Option | Description |
| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Show table header | Show or hide column names imported from your data source. |
| Cell height | Set the height of the cell. Choose from **Small**, **Medium**, or **Large**. |
| Enable pagination | Toggle the switch to control how many table rows are visible at once. When switched on, the page size automatically adjusts to the height of the table. This option doesn't affect queries. |
| Minimum column width | Define the lower limit of the column width, in pixels. By default, the minimum width of the table column is 150 pixels. For small-screen devices, such as mobile phones or tablets, reduce the value to `50` to allow table-based panels to render correctly in dashboards. |
| Column width | Define a column width, in pixels, rather than allowing the width to be set automatically. By default, Grafana calculates the column width based on the table size and the minimum column width. |
| Column alignment | Set how Grafana should align cell contents. Choose from: **Auto** (default), **Left**, **Center**, or **Right**. |
| Column filter | Temporarily change how column data is displayed. For example, show or hide specific values. For more information, refer to [Column filtering](#column-filtering). |
### Table footer options
Toggle the **Show table footer** switch on and off to control the display of the footer.
When the toggle is switched on, you can use the table footer to show [calculations](ref:calculations) on fields.
After you activate the table footer, make selections for the following options:
- **Calculation** - The calculation that you want to apply.
- **Fields** - The fields to which you want to apply the calculation. Grafana applies the calculation to all numeric fields if you don't select a field.
- **Count rows** - This option is displayed if you select the **Count** calculation. If you want to show the number of rows in the dataset instead of the number of values in the selected fields, toggle on the **Count rows** switch.
### Cell options
Cell options allow you to control how data is displayed in a table.
The options are:
- [Cell type](#cell-type) - Control the default cell display settings.
- [Wrap text](#wrap-text) - Wrap text in the cell that contains the longest content in your table.
- [Cell value inspect](#cell-value-inspect) - Enables value inspection from table cells.
#### Cell type
By default, Grafana automatically chooses display settings.
You can override these settings by choosing one of the following cell types to control the default display for all fields.
Additional configuration is available for some cell types.
If you want to apply a cell type to only some fields instead of all fields, you can do so using the **Cell options > Cell type** field override.
| Cell type | Description |
| ----------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Auto | Automatically displays values with sensible defaults applied. |
| [Sparkline](#sparkline) | Shows values rendered as a sparkline. |
| [Colored text](#colored-text) | If thresholds are set, then the field text is displayed in the appropriate threshold color. |
| [Colored background](#colored-background) | If thresholds are set, then the field background is displayed in the appropriate threshold color. |
| [Gauge](#gauge) | Cells can be displayed as a graphical gauge, with several different presentation types. You can set the [Gauge display mode](#gauge-display-mode) and the [Value display](#value-display) options. |
| Data links | If you've configured data links, when the cell type is **Auto**, the cell text becomes clickable. If you change the cell type to **Data links**, the cell text reflects the titles of the configured data links. To control the application of data link text more granularly, use a **Cell option > Cell type > Data links** field override. |
| [JSON View](#json-view) | Shows values formatted as code. |
| [Image](#image) | If the field value is an image URL or a base64 encoded image, the table displays the image. |
##### Sparkline
This cell type shows values rendered as a sparkline.
To show sparklines on data with multiple time series, use the [Time series to table transformation](ref:time-series-to-table-transformation) to process it into a format the table can show.
![Table using sparkline cell type](/media/docs/grafana/panels-visualizations/screenshot-table-as-sparkline-v11.3.png)
You can customize sparklines with many of the same options as the [time series visualization](ref:time-series-panel) including line style and width, fill opacity, gradient mode, and more.
You can also change the color of the sparkline by updating the [color scheme](ref:color-scheme) in the **Standard options** section of the panel configuration.
##### Colored text
If thresholds are set, with this cell type, the field text is displayed in the appropriate threshold color.
![Table with colored text cell type](/media/docs/grafana/panels-visualizations/screenshot-table-colored-text-v11.3-2.png)
{{< admonition type="note" >}}
This is an experimental feature.
{{< /admonition >}}
##### Colored background
If thresholds are set, with this cell type, the field background is displayed in the appropriate threshold color.
![Table with colored background cell type](/media/docs/grafana/panels-visualizations/screenshot-table-colored-bkgrnd-v11.3-2.png)
- **Background display mode** - Choose between **Basic** and **Gradient**.
- **Apply to entire row** - Toggle the switch on to apply the background color that's configured for the cell to the whole row.
![Table with background cell color applied to row](/media/docs/grafana/panels-visualizations/screenshot-table-colored-row-v11.3.png)
##### Gauge
With this cell type, cells can be displayed as a graphical gauge, with several different presentation types controlled by the [gauge display mode](#gauge-display-mode) and the [value display](#value-display).
{{< admonition type="note" >}}
The maximum and minimum values of the gauges are configured automatically from the smallest and largest values in your whole dataset.
If you don't want the max/min values to be pulled from the whole dataset, you can configure them for each column using [field overrides](#field-overrides).
{{< /admonition >}}
###### Gauge display mode
You can set three gauge display modes.
<!-- prettier-ignore-start -->
| Option | Description |
| ------ | ----------- |
| Basic | Shows a simple gauge with the threshold levels defining the color of gauge. {{< figure src="/media/docs/grafana/panels-visualizations/screenshot-gauge-mode-basic-v11.3.png" alt="Table cell with basic gauge mode" >}} |
| Gradient | The threshold levels define a gradient. {{< figure src="/media/docs/grafana/panels-visualizations/screenshot-gauge-mode-gradient-v11.3.png" alt="Table cell with gradient gauge mode" >}} |
| Retro LCD | The gauge is split up in small cells that are lit or unlit. {{< figure src="/media/docs/grafana/panels-visualizations/screenshot-gauge-mode-retro-v11.3.png" alt="Table cell with retro LCD gauge mode" >}} |
<!-- prettier-ignore-end -->
###### Value display
Labels displayed alongside the gauges can be set to be colored by value, match the theme text color, or be hidden.
<!-- prettier-ignore-start -->
| Option | Description |
| ------ | ----------- |
| Value color | Labels are colored by value. {{< figure src="/media/docs/grafana/panels-visualizations/screenshot-labels-value-color-v11.3.png" alt="Table with labels in value color" >}} |
| Text color | Labels match the theme text color. {{< figure src="/media/docs/grafana/panels-visualizations/screenshot-labels-text-color-v11.3.png" alt="Table with labels in theme color" >}} |
| Hidden | Labels are hidden. {{< figure src="/media/docs/grafana/panels-visualizations/screenshot-labels-hidden-v11.3.png" alt="Table with labels hidden" >}} |
<!-- prettier-ignore-end -->
##### JSON View
This cell type shows values formatted as code.
If a value is an object, a JSON view that lets you browse the object appears on hover.
{{< figure src="/static/img/docs/tables/json-view.png" max-width="350px" alt="JSON view" class="docs-image--no-shadow" >}}
##### Image
If you have a field value that is an image URL or a base64 encoded image, this cell type displays it as an image.
![Table with image cell type](/media/docs/grafana/panels-visualizations/screenshot-table-cell-image-v11.3.png)
Set the following options:
- **Alt text** - Set the alternative text of an image. The text will be available for screen readers and in cases when images can't be loaded.
- **Title text** - Set the text that's displayed when the image is hovered over with a cursor.
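As an illustration of the base64 case, one way to produce such a cell value is to embed the image as a `data:` URI (a hedged Python sketch; the byte string is stand-in sample data beginning with the PNG signature, not a complete image):

```python
import base64

# Stand-in bytes beginning with the PNG file signature (sample data, not a real image).
png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8

# A table cell can carry an image URL, or the image inline as a base64 data URI:
cell_value = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")
```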
#### Wrap text
{{< admonition type="note" >}}
Text wrapping is in [public preview](https://grafana.com/docs/release-life-cycle/#public-preview); however, it's available to use by default.
We'd love to hear from you about how this new feature is working. To provide feedback, you can open an issue in the [Grafana GitHub repository](https://github.com/grafana/grafana).
{{< /admonition >}}
Toggle the **Wrap text** switch to wrap text in the cell that contains the longest content in your table.
To wrap the text in a specific column only, use the Wrap Text option in a [field override](ref:field-override).
This option is available for the following cell types: **Auto**, **Colored text**, and **Colored background**.
#### Cell value inspect
Enables value inspection from table cells. When the **Cell value inspect** switch is toggled on, clicking the inspect icon in a cell opens the **Inspect value** drawer.
The **Inspect value** drawer has two tabs, **Plain text** and **Code editor**.
Grafana attempts to automatically detect the type of data in the cell and opens the drawer with the associated tab showing.
However, you can switch back and forth between tabs.
This option is available for the following cell types: **Auto**, **Colored text**, **Colored background**, and **JSON View**.
If you want to apply this setting to only some fields instead of all fields, you can do so using the **Cell options > Cell value inspect** field override.
### Standard options


### auto_assign_org_role
The `auto_assign_org_role` setting determines the default role assigned to new users in the main organization if the `auto_assign_org` setting is set to `true`.
You can set this to one of the following roles: `Viewer` (default), `Admin`, `Editor`, or `None`. For example:
`auto_assign_org_role = Viewer`
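In context, these settings live in the `[users]` section of the Grafana configuration file (a sketch; the values shown are the defaults):

```ini
[users]
# Automatically add new users to the main organization
auto_assign_org = true
# Role given to those users
auto_assign_org_role = Viewer
```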
### api_key
If `custom_endpoint` requires authentication, you can set the API key here. This is only relevant for the Grafana Javascript Agent provider.
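For context, both settings belong to the `[log.frontend]` section of the configuration file (a sketch; the endpoint and key values are placeholders):

```ini
[log.frontend]
# Custom HTTP endpoint that events captured by the Grafana Javascript Agent are sent to
custom_endpoint = /log-grafana-javascript-agent
# API key sent when the custom endpoint requires authentication
api_key = <your-api-key>
```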
<hr>


{{< admonition type="note" >}}
Enabling anonymous access is a disallowed configuration setting on Hosted Grafana and not recommended due to [security implications](https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/#implications-of-enabling-anonymous-access-to-dashboards).
For sharing dashboards with a wider audience, consider using the [public dashboard feature](https://grafana.com/docs/grafana/latest/dashboards/dashboard-public/) instead.
{{< /admonition >}}

Example:

[auth.anonymous]
enabled = true

# Organization name that should be used for unauthenticated users
org_name = Main Org.

View File

@@ -12,16 +12,20 @@ Grafana open source is open source visualization and analytics software. It allo
### Grafana Loki
Grafana Loki is an open source, set of components that can be composed into a fully featured logging stack. For more information, refer to [Loki documentation](/docs/loki/latest/).
Grafana Loki is an open-source set of components that can be composed into a fully featured logging stack. For more information, refer to [Loki documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/).
### Grafana Tempo
Grafana Tempo is an open source, easy-to-use and high-volume distributed tracing backend. For more information, refer to [Tempo documentation](/docs/tempo/latest/?pg=oss-tempo&plcmt=hero-txt/).
Grafana Tempo is an open-source, easy-to-use, high-volume distributed tracing backend. For more information, refer to [Tempo documentation](https://grafana.com/docs/tempo/<TEMPO_VERSION>/).
### Grafana Mimir
Grafana Mimir is an open source software project that provides a scalable long-term storage for Prometheus. For more information about Grafana Mimir, refer to [Grafana Mimir documentation](/docs/mimir/latest/).
Grafana Mimir is an open source software project that provides scalable long-term storage for Prometheus. For more information about Grafana Mimir, refer to [Grafana Mimir documentation](https://grafana.com/docs/mimir/<MIMIR_VERSION>/).
### Grafana Pyroscope
Grafana Pyroscope is an open source software project for aggregating continuous profiling data. Continuous profiling is an observability signal that helps you understand your workloads' resource usage. For more information, refer to [Grafana Pyroscope documentation](https://grafana.com/docs/pyroscope/<PYROSCOPE_VERSION>/).
### Grafana OnCall
Grafana OnCall is an open source incident response management tool built to help teams improve their collaboration and resolve incidents faster. For more information about Grafana OnCall, refer to [Grafana OnCall documentation](/docs/oncall/latest/).
Grafana OnCall is an open source incident response management tool built to help teams improve their collaboration and resolve incidents faster. For more information about Grafana OnCall, refer to [Grafana OnCall documentation](https://grafana.com/docs/oncall/<ONCALL_VERSION>/).

View File

@@ -277,3 +277,33 @@ _Available in public preview in all editions of Grafana_
[The SSO settings API](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/developers/http_api/sso-settings/) has been updated to include support for LDAP settings. This feature is experimental and sits behind the `ssoSettingsLDAP` feature flag.
You will soon be able to configure LDAP from the UI and Terraform.
### Reduce number of required fields from the SAML form
<!-- #proj-grafana-sso-config -->
_Generally available in Grafana Enterprise and Grafana Cloud Pro and Advanced_
The private key and certificate fields are no longer mandatory in the SAML form. To configure SAML without providing a private key and certificate, opt out of using signed requests.
{{< figure src="/media/docs/grafana/screenshot-grafana-11-2-saml-sign-requests.png" alt="Sign requests in SAML config form" >}}
### Generate SAML certificate and private key
<!-- #proj-grafana-sso-config -->
_Generally available in Grafana Enterprise and Grafana Cloud Pro_
You can generate a new certificate and private key for SAML directly from the UI. Click the **Generate key and certificate** button on the **Sign requests** tab of the SAML form, then fill in the information you want embedded in the generated certificate.
{{< video-embed src="/media/docs/grafana/screen-recording-11-2-generate-saml-certificate.mp4" >}}
### OpenID Connect Discovery URL for Generic OAuth
<!-- #proj-grafana-sso-config -->
_Generally available in all editions of Grafana_
The OpenID Connect Discovery URL is available in the Generic OAuth form. The information extracted from this URL is used to populate the Auth URL, Token URL, and API URL fields.
{{< video-embed src="/media/docs/grafana/screen-recording-11-2-openid-discovery-url.mp4" >}}

View File

@@ -1,5 +1,5 @@
{
"$schema": "node_modules/lerna/schemas/lerna-schema.json",
"npmClient": "yarn",
"version": "11.2.3"
"version": "11.2.4"
}

View File

@@ -3,7 +3,7 @@
"license": "AGPL-3.0-only",
"private": true,
"name": "grafana",
"version": "11.2.3",
"version": "11.2.4",
"repository": "github:grafana/grafana",
"scripts": {
"build": "NODE_ENV=production nx exec --verbose -- webpack --config scripts/webpack/webpack.prod.js",

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/data",
"version": "11.2.3",
"version": "11.2.4",
"description": "Grafana Data Library",
"keywords": [
"typescript"
@@ -36,7 +36,7 @@
},
"dependencies": {
"@braintree/sanitize-url": "7.0.1",
"@grafana/schema": "11.2.3",
"@grafana/schema": "11.2.4",
"@types/d3-interpolate": "^3.0.0",
"@types/string-hash": "1.1.3",
"d3-interpolate": "3.0.1",

View File

@@ -24,6 +24,7 @@ import { renameFieldsTransformer } from './transformers/rename';
import { renameByRegexTransformer } from './transformers/renameByRegex';
import { seriesToRowsTransformer } from './transformers/seriesToRows';
import { sortByTransformer } from './transformers/sortBy';
import { transposeTransformer } from './transformers/transpose';
export const standardTransformers = {
noopTransformer,
@@ -55,4 +56,5 @@ export const standardTransformers = {
groupingToMatrixTransformer,
limitTransformer,
groupToNestedTable,
transposeTransformer,
};

View File

@@ -37,6 +37,7 @@ export enum DataTransformerID {
limit = 'limit',
partitionByValues = 'partitionByValues',
timeSeriesTable = 'timeSeriesTable',
transpose = 'transpose',
formatTime = 'formatTime',
formatString = 'formatString',
regression = 'regression',

View File

@@ -0,0 +1,241 @@
import { DataTransformerConfig } from '@grafana/schema';
import { toDataFrame } from '../../dataframe/processDataFrame';
import { FieldType } from '../../types/dataFrame';
import { mockTransformationsRegistry } from '../../utils/tests/mockTransformationsRegistry';
import { transformDataFrame } from '../transformDataFrame';
import { DataTransformerID } from './ids';
import { transposeTransformer, TransposeTransformerOptions } from './transpose';
describe('Transpose transformer', () => {
beforeAll(() => {
mockTransformationsRegistry([transposeTransformer]);
});
it('should transpose full numeric values and keep numeric type', async () => {
const cfgA: DataTransformerConfig<TransposeTransformerOptions> = {
id: DataTransformerID.transpose,
options: {},
};
const seriesA = toDataFrame({
name: 'A',
fields: [
{ name: 'env', type: FieldType.string, values: ['dev', 'prod', 'staging', 'release', 'beta'] },
{ name: 'january', type: FieldType.number, values: [11, 12, 13, 14, 15] },
{ name: 'february', type: FieldType.number, values: [6, 7, 8, 9, 10] },
{ name: 'march', type: FieldType.number, values: [1, 2, 3, 4, 5] },
],
});
await expect(transformDataFrame([cfgA], [seriesA])).toEmitValuesWith((received) => {
const result = received[0];
expect(result[0].fields).toEqual([
{
name: 'Field',
type: FieldType.string,
values: ['january', 'february', 'march'],
config: {},
},
{
name: 'Value',
labels: { env: 'dev' },
type: FieldType.number,
values: [11, 6, 1],
config: {},
},
{
name: 'Value',
labels: { env: 'prod' },
type: FieldType.number,
values: [12, 7, 2],
config: {},
},
{
name: 'Value',
labels: { env: 'staging' },
type: FieldType.number,
values: [13, 8, 3],
config: {},
},
{
name: 'Value',
labels: { env: 'release' },
type: FieldType.number,
values: [14, 9, 4],
config: {},
},
{
name: 'Value',
labels: { env: 'beta' },
type: FieldType.number,
values: [15, 10, 5],
config: {},
},
]);
});
});
it('should transpose and use string field type', async () => {
const cfgB: DataTransformerConfig<TransposeTransformerOptions> = {
id: DataTransformerID.transpose,
options: {},
};
const seriesB = toDataFrame({
name: 'B',
fields: [
{ name: 'env', type: FieldType.string, values: ['dev', 'prod', 'staging', 'release', 'beta'] },
{ name: 'january', type: FieldType.number, values: [11, 12, 13, 14, 15] },
{ name: 'february', type: FieldType.number, values: [6, 7, 8, 9, 10] },
{ name: 'type', type: FieldType.string, values: ['metricA', 'metricB', 'metricC', 'metricD', 'metricE'] },
],
});
await expect(transformDataFrame([cfgB], [seriesB])).toEmitValuesWith((received) => {
const result = received[0];
expect(result[0].fields).toEqual([
{
name: 'Field',
type: FieldType.string,
values: ['january', 'february', 'type'],
config: {},
},
{
name: 'Value',
labels: { env: 'dev' },
type: FieldType.string,
values: ['11', '6', 'metricA'],
config: {},
},
{
name: 'Value',
labels: { env: 'prod' },
type: FieldType.string,
values: ['12', '7', 'metricB'],
config: {},
},
{
name: 'Value',
labels: { env: 'staging' },
type: FieldType.string,
values: ['13', '8', 'metricC'],
config: {},
},
{
name: 'Value',
labels: { env: 'release' },
type: FieldType.string,
values: ['14', '9', 'metricD'],
config: {},
},
{
name: 'Value',
labels: { env: 'beta' },
type: FieldType.string,
values: ['15', '10', 'metricE'],
config: {},
},
]);
});
});
it('should transpose and keep number types and add new headers', async () => {
const cfgC: DataTransformerConfig<TransposeTransformerOptions> = {
id: DataTransformerID.transpose,
options: {
firstFieldName: 'NewField',
},
};
const seriesC = toDataFrame({
name: 'C',
fields: [
{ name: 'A', type: FieldType.number, values: [1, 5] },
{ name: 'B', type: FieldType.number, values: [2, 6] },
{ name: 'C', type: FieldType.number, values: [3, 7] },
{ name: 'D', type: FieldType.number, values: [4, 8] },
],
});
await expect(transformDataFrame([cfgC], [seriesC])).toEmitValuesWith((received) => {
const result = received[0];
expect(result[0].fields).toEqual([
{
name: 'NewField',
type: FieldType.string,
values: ['A', 'B', 'C', 'D'],
config: {},
},
{
name: 'Value',
labels: { row: 1 },
type: FieldType.number,
values: [1, 2, 3, 4],
config: {},
},
{
name: 'Value',
labels: { row: 2 },
type: FieldType.number,
values: [5, 6, 7, 8],
config: {},
},
]);
});
});
it('should transpose and handle different types and rename first element', async () => {
const cfgD: DataTransformerConfig<TransposeTransformerOptions> = {
id: DataTransformerID.transpose,
options: {
firstFieldName: 'Field1',
},
};
const seriesD = toDataFrame({
name: 'D',
fields: [
{
name: 'time',
type: FieldType.time,
values: ['2024-06-10 08:30:00', '2024-06-10 08:31:00', '2024-06-10 08:32:00', '2024-06-10 08:33:00'],
},
{ name: 'value', type: FieldType.number, values: [1, 2, 3, 4] },
],
});
await expect(transformDataFrame([cfgD], [seriesD])).toEmitValuesWith((received) => {
const result = received[0];
expect(result[0].fields).toEqual([
{
name: 'Field1',
type: FieldType.string,
values: ['value'],
config: {},
},
{
name: 'Value',
labels: { time: '2024-06-10 08:30:00' },
type: FieldType.number,
values: [1],
config: {},
},
{
name: 'Value',
labels: { time: '2024-06-10 08:31:00' },
type: FieldType.number,
values: [2],
config: {},
},
{
name: 'Value',
labels: { time: '2024-06-10 08:32:00' },
type: FieldType.number,
values: [3],
config: {},
},
{
name: 'Value',
labels: { time: '2024-06-10 08:33:00' },
type: FieldType.number,
values: [4],
config: {},
},
]);
});
});
});

View File

@@ -0,0 +1,105 @@
import { map } from 'rxjs/operators';
import { DataFrame, Field, FieldType } from '../../types/dataFrame';
import { DataTransformerInfo } from '../../types/transformations';
import { DataTransformerID } from './ids';
export interface TransposeTransformerOptions {
firstFieldName?: string;
restFieldsName?: string;
}
export const transposeTransformer: DataTransformerInfo<TransposeTransformerOptions> = {
id: DataTransformerID.transpose,
name: 'Transpose',
description: 'Transpose the data frame',
defaultOptions: {},
operator: (options) => (source) =>
source.pipe(
map((data) => {
if (data.length === 0) {
return data;
}
return transposeDataFrame(options, data);
})
),
};
function transposeDataFrame(options: TransposeTransformerOptions, data: DataFrame[]): DataFrame[] {
return data.map((frame) => {
const firstField = frame.fields[0];
const firstName = !options.firstFieldName ? 'Field' : options.firstFieldName;
const restName = !options.restFieldsName ? 'Value' : options.restFieldsName;
const useFirstFieldAsHeaders =
firstField.type === FieldType.string || firstField.type === FieldType.time || firstField.type === FieldType.enum;
const headers = useFirstFieldAsHeaders
? [firstName, ...fieldValuesAsStrings(firstField, firstField.values)]
: [firstName, ...firstField.values.map((_, i) => restName)];
const rows = useFirstFieldAsHeaders
? frame.fields.map((field) => field.name).slice(1)
: frame.fields.map((field) => field.name);
const fieldType = determineFieldType(
useFirstFieldAsHeaders
? frame.fields.map((field) => field.type).slice(1)
: frame.fields.map((field) => field.type)
);
const newFields = headers.map((fieldName, index) => {
if (index === 0) {
return {
name: firstName,
type: FieldType.string,
config: {},
values: rows,
};
}
const values = frame.fields.map((field) => {
if (fieldType === FieldType.string) {
return fieldValuesAsStrings(field, [field.values[index - 1]])[0];
}
return field.values[index - 1];
});
const labelName = useFirstFieldAsHeaders ? firstField.name : 'row';
const labelValue = useFirstFieldAsHeaders ? fieldName : index;
return {
name: useFirstFieldAsHeaders ? restName : fieldName,
labels: {
[labelName]: labelValue,
},
type: fieldType,
config: {},
values: useFirstFieldAsHeaders ? values.slice(1) : values,
};
});
return {
...frame,
fields: newFields,
length: Math.max(...newFields.map((field) => field.values.length)),
};
});
}
function determineFieldType(fieldTypes: FieldType[]): FieldType {
const uniqueFieldTypes = new Set(fieldTypes);
return uniqueFieldTypes.size === 1 ? [...uniqueFieldTypes][0] : FieldType.string;
}
function fieldValuesAsStrings(field: Field, values: unknown[]) {
switch (field.type) {
case FieldType.time:
case FieldType.number:
case FieldType.boolean:
case FieldType.string:
return values.map((v) => `${v}`);
case FieldType.enum:
// @ts-ignore
return values.map((v) => field.config.type!.enum!.text![v]);
default:
return values.map((v) => JSON.stringify(v));
}
}
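The reshaping performed by the transformer above (values of the first string/time field become per-column labels, and the names of the remaining fields become rows) can be illustrated with a self-contained sketch. `SimpleField` and `transpose` here are hypothetical simplifications for illustration, not the Grafana API:

```typescript
// Minimal stand-in for a data frame field (not Grafana's Field type).
interface SimpleField {
  name: string;
  values: Array<string | number>;
}

// Transpose: values of the first field become per-column labels, and the
// names of the remaining fields become the rows of the first output field.
function transpose(fields: SimpleField[], firstName = 'Field', restName = 'Value'): SimpleField[] {
  const [head, ...rest] = fields;
  const out: SimpleField[] = [{ name: firstName, values: rest.map((f) => f.name) }];
  head.values.forEach((label, i) => {
    // One output column per row of the input, labeled by the first field's value.
    out.push({ name: `${restName} (${label})`, values: rest.map((f) => f.values[i]) });
  });
  return out;
}
```

For example, transposing `env: [dev, prod]` and `january: [11, 12]` yields a `Field` column holding `[january]` plus one `Value` column per environment, mirroring the shape asserted in the first test case above.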

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/e2e-selectors",
"version": "11.2.3",
"version": "11.2.4",
"description": "Grafana End-to-End Test Selectors Library",
"keywords": [
"cli",

View File

@@ -1,7 +1,7 @@
{
"name": "@grafana/eslint-plugin",
"description": "ESLint rules for use within the Grafana repo. Not suitable (or supported) for external use.",
"version": "11.2.3",
"version": "11.2.4",
"main": "./index.cjs",
"author": "Grafana Labs",
"license": "Apache-2.0",

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/flamegraph",
"version": "11.2.3",
"version": "11.2.4",
"description": "Grafana flamegraph visualization component",
"keywords": [
"grafana",
@@ -44,8 +44,8 @@
],
"dependencies": {
"@emotion/css": "11.11.2",
"@grafana/data": "11.2.3",
"@grafana/ui": "11.2.3",
"@grafana/data": "11.2.4",
"@grafana/ui": "11.2.4",
"@leeoniya/ufuzzy": "1.0.14",
"d3": "^7.8.5",
"lodash": "4.17.21",

View File

@@ -1,6 +1,6 @@
{
"name": "@grafana/saga-icons",
"version": "11.2.3",
"version": "11.2.4",
"private": true,
"description": "Icons for Grafana",
"author": "Grafana Labs",

View File

@@ -3,7 +3,7 @@
"license": "AGPL-3.0-only",
"name": "@grafana/o11y-ds-frontend",
"private": true,
"version": "11.2.3",
"version": "11.2.4",
"description": "Library to manage traces in Grafana.",
"sideEffects": false,
"repository": {
@@ -18,12 +18,12 @@
},
"dependencies": {
"@emotion/css": "11.11.2",
"@grafana/data": "11.2.3",
"@grafana/e2e-selectors": "11.2.3",
"@grafana/data": "11.2.4",
"@grafana/e2e-selectors": "11.2.4",
"@grafana/experimental": "1.7.13",
"@grafana/runtime": "11.2.3",
"@grafana/schema": "11.2.3",
"@grafana/ui": "11.2.3",
"@grafana/runtime": "11.2.4",
"@grafana/schema": "11.2.4",
"@grafana/ui": "11.2.4",
"react-select": "5.8.0",
"react-use": "17.5.1",
"rxjs": "7.8.1",

View File

@@ -2,7 +2,7 @@
"name": "@grafana/plugin-configs",
"description": "Shared dependencies and files for core plugins",
"private": true,
"version": "11.2.3",
"version": "11.2.4",
"dependencies": {
"tslib": "2.6.3"
},

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "AGPL-3.0-only",
"name": "@grafana/prometheus",
"version": "11.2.3",
"version": "11.2.4",
"description": "Grafana Prometheus Library",
"keywords": [
"typescript"
@@ -38,12 +38,12 @@
"dependencies": {
"@emotion/css": "11.11.2",
"@floating-ui/react": "0.26.22",
"@grafana/data": "11.2.3",
"@grafana/data": "11.2.4",
"@grafana/experimental": "1.7.13",
"@grafana/faro-web-sdk": "1.9.0",
"@grafana/runtime": "11.2.3",
"@grafana/schema": "11.2.3",
"@grafana/ui": "11.2.3",
"@grafana/runtime": "11.2.4",
"@grafana/schema": "11.2.4",
"@grafana/ui": "11.2.4",
"@hello-pangea/dnd": "16.6.0",
"@leeoniya/ufuzzy": "1.0.14",
"@lezer/common": "1.2.1",
@@ -76,7 +76,7 @@
},
"devDependencies": {
"@emotion/eslint-plugin": "11.11.0",
"@grafana/e2e-selectors": "11.2.3",
"@grafana/e2e-selectors": "11.2.4",
"@grafana/tsconfig": "^1.3.0-rc1",
"@rollup/plugin-image": "3.0.3",
"@rollup/plugin-node-resolve": "15.2.3",

View File

@@ -91,7 +91,6 @@ export class PrometheusDatasource
basicAuth: any;
withCredentials: boolean;
interval: string;
queryTimeout: string | undefined;
httpMethod: string;
languageProvider: PrometheusLanguageProvider;
exemplarTraceIdDestinations: ExemplarTraceIdDestination[] | undefined;
@@ -120,7 +119,6 @@ export class PrometheusDatasource
this.basicAuth = instanceSettings.basicAuth;
this.withCredentials = Boolean(instanceSettings.withCredentials);
this.interval = instanceSettings.jsonData.timeInterval || '15s';
this.queryTimeout = instanceSettings.jsonData.queryTimeout;
this.httpMethod = instanceSettings.jsonData.httpMethod || 'GET';
this.exemplarTraceIdDestinations = instanceSettings.jsonData.exemplarTraceIdDestinations;
this.hasIncrementalQuery = instanceSettings.jsonData.incrementalQuerying ?? false;

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/runtime",
"version": "11.2.3",
"version": "11.2.4",
"description": "Grafana Runtime Library",
"keywords": [
"grafana",
@@ -37,11 +37,11 @@
"postpack": "mv package.json.bak package.json"
},
"dependencies": {
"@grafana/data": "11.2.3",
"@grafana/e2e-selectors": "11.2.3",
"@grafana/data": "11.2.4",
"@grafana/e2e-selectors": "11.2.4",
"@grafana/faro-web-sdk": "^1.3.6",
"@grafana/schema": "11.2.3",
"@grafana/ui": "11.2.3",
"@grafana/schema": "11.2.4",
"@grafana/ui": "11.2.4",
"history": "4.10.1",
"lodash": "4.17.21",
"rxjs": "7.8.1",

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/schema",
"version": "11.2.3",
"version": "11.2.4",
"description": "Grafana Schema Library",
"keywords": [
"typescript"

View File

@@ -8,7 +8,7 @@
//
// Run 'make gen-cue' from repository root to regenerate.
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options {
limit: number;

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options extends common.OptionsWithLegend, common.OptionsWithTooltip, common.OptionsWithTextFormatting {
/**

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options extends common.SingleStatBaseOptions {
displayMode: common.BarGaugeDisplayMode;

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export enum VizDisplayMode {
Candles = 'candles',

View File

@@ -10,7 +10,7 @@
import * as ui from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export enum HorizontalConstraint {
Center = 'center',

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface MetricStat {
/**

View File

@@ -8,7 +8,7 @@
//
// Run 'make gen-cue' from repository root to regenerate.
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options {
/**

View File

@@ -8,7 +8,7 @@
//
// Run 'make gen-cue' from repository root to regenerate.
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options {
selectedSeries: number;

View File

@@ -8,7 +8,7 @@
//
// Run 'make gen-cue' from repository root to regenerate.
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export type UpdateConfig = {
render: boolean,

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export type BucketAggregation = (DateHistogram | Histogram | Terms | Filters | GeoHashGrid | Nested);

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options extends common.SingleStatBaseOptions {
minVizHeight: number;

View File

@@ -10,7 +10,7 @@
import * as ui from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options {
basemap: ui.MapLayerOptions;

View File

@@ -10,7 +10,7 @@
import * as ui from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
/**
* Controls the color mode of the heatmap

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options extends common.OptionsWithLegend, common.OptionsWithTooltip {
/**

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options {
dedupStrategy: common.LogsDedupStrategy;

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export enum QueryEditorMode {
Builder = 'builder',

View File

@@ -8,7 +8,7 @@
//
// Run 'make gen-cue' from repository root to regenerate.
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options {
/**

View File

@@ -8,7 +8,7 @@
//
// Run 'make gen-cue' from repository root to regenerate.
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface ArcOption {
/**

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
/**
* Select the pie chart display style.

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options extends common.SingleStatBaseOptions {
colorMode: common.BigValueColorMode;

View File

@@ -10,7 +10,7 @@
import * as ui from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options extends ui.OptionsWithLegend, ui.OptionsWithTooltip, ui.OptionsWithTimezones {
/**

View File

@@ -10,7 +10,7 @@
import * as ui from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options extends ui.OptionsWithLegend, ui.OptionsWithTooltip, ui.OptionsWithTimezones {
/**

View File

@@ -10,7 +10,7 @@
import * as ui from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options {
/**

View File

@@ -8,7 +8,7 @@
//
// Run 'make gen-cue' from repository root to regenerate.
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export enum TextMode {
Code = 'code',

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
export interface Options extends common.OptionsWithTimezones {
legend: common.VizLegendOptions;

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
/**
* Identical to timeseries... except it does not have timezone settings

View File

@@ -10,7 +10,7 @@
import * as common from '@grafana/schema';
export const pluginVersion = "11.2.3";
export const pluginVersion = "11.2.4";
/**
* Auto is "table" in the UI

View File

@@ -3,7 +3,7 @@
"license": "AGPL-3.0-only",
"private": true,
"name": "@grafana/sql",
"version": "11.2.3",
"version": "11.2.4",
"repository": {
"type": "git",
"url": "http://github.com/grafana/grafana.git",
@@ -15,11 +15,11 @@
},
"dependencies": {
"@emotion/css": "11.11.2",
"@grafana/data": "11.2.3",
"@grafana/e2e-selectors": "11.2.3",
"@grafana/data": "11.2.4",
"@grafana/e2e-selectors": "11.2.4",
"@grafana/experimental": "1.7.13",
"@grafana/runtime": "11.2.3",
"@grafana/ui": "11.2.3",
"@grafana/runtime": "11.2.4",
"@grafana/ui": "11.2.4",
"@react-awesome-query-builder/ui": "6.6.2",
"immutable": "4.3.7",
"lodash": "4.17.21",

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/ui",
"version": "11.2.3",
"version": "11.2.4",
"description": "Grafana Components Library",
"keywords": [
"grafana",
@@ -50,10 +50,10 @@
"@emotion/css": "11.11.2",
"@emotion/react": "11.11.4",
"@floating-ui/react": "0.26.22",
"@grafana/data": "11.2.3",
"@grafana/e2e-selectors": "11.2.3",
"@grafana/data": "11.2.4",
"@grafana/e2e-selectors": "11.2.4",
"@grafana/faro-web-sdk": "^1.3.6",
"@grafana/schema": "11.2.3",
"@grafana/schema": "11.2.4",
"@hello-pangea/dnd": "16.6.0",
"@leeoniya/ufuzzy": "1.0.14",
"@monaco-editor/react": "4.6.0",

View File

@@ -1,7 +1,7 @@
import uPlot, { Scale, Range } from 'uplot';
import { DecimalCount, incrRoundDn, incrRoundUp, isBooleanUnit } from '@grafana/data';
import { ScaleOrientation, ScaleDirection, ScaleDistribution } from '@grafana/schema';
import { ScaleOrientation, ScaleDirection, ScaleDistribution, StackingMode } from '@grafana/schema';
import { PlotConfigBuilder } from '../types';
@@ -20,6 +20,7 @@ export interface ScaleProps {
linearThreshold?: number;
centeredZero?: boolean;
decimals?: DecimalCount;
stackingMode?: StackingMode;
}
export class UPlotScaleBuilder extends PlotConfigBuilder<ScaleProps, Scale> {
@@ -41,8 +42,19 @@ export class UPlotScaleBuilder extends PlotConfigBuilder<ScaleProps, Scale> {
orientation,
centeredZero,
decimals,
stackingMode,
} = this.props;
if (stackingMode === StackingMode.Percent) {
if (hardMin == null && softMin == null) {
softMin = 0;
}
if (hardMax == null && softMax == null) {
softMax = 1;
}
}
const distr = this.props.distribution;
const distribution = !isTime

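The soft-limit defaulting added to `UPlotScaleBuilder` above can be shown in isolation. `Limits` and `applyPercentDefaults` are hypothetical names for this sketch, which assumes a percent-stacked scale should span the full 0..1 range unless the user has pinned a limit:

```typescript
// User-configurable axis limits; null/undefined means "not set".
type Limits = {
  hardMin?: number | null;
  softMin?: number | null;
  hardMax?: number | null;
  softMax?: number | null;
};

// For percent stacking, default the soft limits to the full 0..1 range,
// but only when neither a hard nor a soft limit was set by the user.
function applyPercentDefaults(limits: Limits): Limits {
  const out = { ...limits };
  if (out.hardMin == null && out.softMin == null) {
    out.softMin = 0;
  }
  if (out.hardMax == null && out.softMax == null) {
    out.softMax = 1;
  }
  return out;
}
```

An explicitly configured limit on either end suppresses the default for that end only, so a user-set maximum still leaves the minimum defaulting to 0.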
View File

@@ -194,7 +194,7 @@ func (hs *HTTPServer) setDefaultFolderPermissions(ctx context.Context, orgID int
var permissions []accesscontrol.SetResourcePermissionCommand
if identity.IsIdentityType(user.GetID(), identity.TypeUser) {
if identity.IsIdentityType(user.GetID(), identity.TypeUser, identity.TypeServiceAccount) {
userID, err := identity.UserIdentifier(user.GetID())
if err != nil {
return err

View File

@@ -195,6 +195,20 @@ func (hs *HTTPServer) inviteExistingUserToOrg(c *contextmodel.ReqContext, user *
// 404: notFoundError
// 500: internalServerError
func (hs *HTTPServer) RevokeInvite(c *contextmodel.ReqContext) response.Response {
query := tempuser.GetTempUserByCodeQuery{Code: web.Params(c.Req)[":code"]}
queryResult, err := hs.tempUserService.GetTempUserByCode(c.Req.Context(), &query)
if err != nil {
if errors.Is(err, tempuser.ErrTempUserNotFound) {
return response.Error(http.StatusNotFound, "Invite not found", nil)
}
return response.Error(http.StatusInternalServerError, "Failed to get invite", err)
}
canRevoke := c.SignedInUser.GetOrgID() == queryResult.OrgID || c.SignedInUser.GetIsGrafanaAdmin()
if !canRevoke {
return response.Error(http.StatusForbidden, "Permission denied: not permitted to revoke invite", nil)
}
if ok, rsp := hs.updateTempUserStatus(c.Req.Context(), web.Params(c.Req)[":code"], tempuser.TmpUserRevoked); !ok {
return rsp
}

View File

@@ -350,16 +350,26 @@ func (sl *ServerLockService) executeFunc(ctx context.Context, actionName string,
}
func (sl *ServerLockService) createLock(ctx context.Context,
lockRow *serverLock, dbSession *sqlstore.DBSession) (*serverLock, error) {
lockRow *serverLock, dbSession *sqlstore.DBSession,
) (*serverLock, error) {
affected := int64(1)
rawSQL := `INSERT INTO server_lock (operation_uid, last_execution, version) VALUES (?, ?, ?)`
if sl.SQLStore.GetDBType() == migrator.Postgres {
rawSQL += ` RETURNING id`
rawSQL += ` ON CONFLICT DO NOTHING RETURNING id`
var id int64
_, err := dbSession.SQL(rawSQL, lockRow.OperationUID, lockRow.LastExecution, 0).Get(&id)
if err != nil {
return nil, err
}
if id == 0 {
// Considering the default isolation level (READ COMMITTED), an entry could be added to the table
// between the SELECT and the INSERT. And inserting a row with the same operation_uid would violate the unique
// constraint. In this case, the ON CONFLICT DO NOTHING clause will prevent generating an error.
// And the returning id will be 0 which means that there wasn't any row inserted (another server has the lock),
// therefore we return the ServerLockExistsError.
// https://www.postgresql.org/docs/current/transaction-iso.html#XACT-READ-COMMITTED
return nil, &ServerLockExistsError{actionName: lockRow.OperationUID}
}
lockRow.Id = id
} else {
res, err := dbSession.Exec(

View File

@@ -24,13 +24,14 @@ type doer interface {
// objects, we have to go through them and then serialize again into DataFrame which isn't very efficient. Using custom
// client we can parse response directly into DataFrame.
type Client struct {
doer doer
method string
baseUrl string
doer doer
method string
baseUrl string
queryTimeout string
}
func NewClient(d doer, method, baseUrl string) *Client {
return &Client{doer: d, method: method, baseUrl: baseUrl}
func NewClient(d doer, method, baseUrl, queryTimeout string) *Client {
return &Client{doer: d, method: method, baseUrl: baseUrl, queryTimeout: queryTimeout}
}
func (c *Client) QueryRange(ctx context.Context, q *models.Query) (*http.Response, error) {
@@ -41,6 +42,9 @@ func (c *Client) QueryRange(ctx context.Context, q *models.Query) (*http.Respons
"end": formatTime(tr.End),
"step": strconv.FormatFloat(tr.Step.Seconds(), 'f', -1, 64),
}
if c.queryTimeout != "" {
qv["timeout"] = c.queryTimeout
}
req, err := c.createQueryRequest(ctx, "api/v1/query_range", qv)
if err != nil {
@@ -58,6 +62,9 @@ func (c *Client) QueryInstant(ctx context.Context, q *models.Query) (*http.Respo
// Instead of aligning we use time point directly.
// https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries
qv := map[string]string{"query": q.Expr, "time": formatTime(q.End)}
if c.queryTimeout != "" {
qv["timeout"] = c.queryTimeout
}
req, err := c.createQueryRequest(ctx, "api/v1/query", qv)
if err != nil {
return nil, err

View File

@@ -30,7 +30,7 @@ func TestClient(t *testing.T) {
t.Run("QueryResource", func(t *testing.T) {
doer := &MockDoer{}
// The method here does not really matter for resource calls
client := NewClient(doer, http.MethodGet, "http://localhost:9090")
client := NewClient(doer, http.MethodGet, "http://localhost:9090", "60s")
t.Run("sends correct POST request", func(t *testing.T) {
req := &backend.CallResourceRequest{
@@ -86,7 +86,7 @@ func TestClient(t *testing.T) {
doer := &MockDoer{}
t.Run("sends correct POST query", func(t *testing.T) {
client := NewClient(doer, http.MethodPost, "http://localhost:9090")
client := NewClient(doer, http.MethodPost, "http://localhost:9090", "60s")
req := &models.Query{
Expr: "rate(ALERTS{job=\"test\" [$__rate_interval]})",
Start: time.Unix(0, 0),
@@ -108,12 +108,12 @@ func TestClient(t *testing.T) {
require.Equal(t, "application/x-www-form-urlencoded", doer.Req.Header.Get("Content-Type"))
body, err := io.ReadAll(doer.Req.Body)
require.NoError(t, err)
require.Equal(t, []byte("end=1234&query=rate%28ALERTS%7Bjob%3D%22test%22+%5B%24__rate_interval%5D%7D%29&start=0&step=1"), body)
require.Equal(t, []byte("end=1234&query=rate%28ALERTS%7Bjob%3D%22test%22+%5B%24__rate_interval%5D%7D%29&start=0&step=1&timeout=60s"), body)
require.Equal(t, "http://localhost:9090/api/v1/query_range", doer.Req.URL.String())
})
t.Run("sends correct GET query", func(t *testing.T) {
client := NewClient(doer, http.MethodGet, "http://localhost:9090")
client := NewClient(doer, http.MethodGet, "http://localhost:9090", "60s")
req := &models.Query{
Expr: "rate(ALERTS{job=\"test\" [$__rate_interval]})",
Start: time.Unix(0, 0),
@@ -135,7 +135,7 @@ func TestClient(t *testing.T) {
body, err := io.ReadAll(doer.Req.Body)
require.NoError(t, err)
require.Equal(t, []byte{}, body)
require.Equal(t, "http://localhost:9090/api/v1/query_range?end=1234&query=rate%28ALERTS%7Bjob%3D%22test%22+%5B%24__rate_interval%5D%7D%29&start=0&step=1", doer.Req.URL.String())
require.Equal(t, "http://localhost:9090/api/v1/query_range?end=1234&query=rate%28ALERTS%7Bjob%3D%22test%22+%5B%24__rate_interval%5D%7D%29&start=0&step=1&timeout=60s", doer.Req.URL.String())
})
})
}
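The expected request bodies in these tests place `timeout=60s` last because `url.Values.Encode` emits parameters in sorted key order, not insertion order; a minimal illustration:

```go
package main

import (
	"fmt"
	"net/url"
)

// encodeQuery sets keys in a deliberately scrambled order to show that
// Encode sorts them alphabetically: end, query, start, step, timeout.
func encodeQuery() string {
	qv := url.Values{}
	qv.Set("timeout", "60s")
	qv.Set("query", "up")
	qv.Set("start", "0")
	qv.Set("end", "1234")
	qv.Set("step", "1")
	return qv.Encode()
}

func main() {
	fmt.Println(encodeQuery())
	// → end=1234&query=up&start=0&step=1&timeout=60s
}
```

That deterministic ordering is what lets the tests assert on an exact body string rather than parsing the form data back out.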

View File

@@ -55,17 +55,21 @@ func New(
return nil, err
}
httpMethod, _ := maputil.GetStringOptional(jsonData, "httpMethod")
if httpMethod == "" {
httpMethod = http.MethodPost
}
timeInterval, err := maputil.GetStringOptional(jsonData, "timeInterval")
if err != nil {
return nil, err
}
if httpMethod == "" {
httpMethod = http.MethodPost
queryTimeout, err := maputil.GetStringOptional(jsonData, "queryTimeout")
if err != nil {
return nil, err
}
promClient := client.NewClient(httpClient, httpMethod, settings.URL)
promClient := client.NewClient(httpClient, httpMethod, settings.URL, queryTimeout)
// standard deviation sampler is the default for backwards compatibility
exemplarSampler := exemplar.NewStandardDeviationSampler
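`maputil.GetStringOptional` in the `New` hunk above treats a missing key as an empty string and only errors on a type mismatch. A self-contained sketch of that contract (the helper below is a stand-in, not the Grafana implementation):

```go
package main

import "fmt"

// getStringOptional sketches the GetStringOptional contract: a missing key
// is fine (empty string, nil error), a present-but-non-string value is an
// error. The empty-string default is what makes the timeout opt-in.
func getStringOptional(m map[string]any, key string) (string, error) {
	v, ok := m[key]
	if !ok {
		return "", nil
	}
	s, ok := v.(string)
	if !ok {
		return "", fmt.Errorf("%s must be a string, got %T", key, v)
	}
	return s, nil
}

func main() {
	jsonData := map[string]any{"queryTimeout": "60s"}
	timeout, err := getStringOptional(jsonData, "queryTimeout")
	fmt.Println(timeout, err)
}
```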
@@ -122,7 +126,7 @@ func (s *QueryData) handleQuery(ctx context.Context, bq backend.DataQuery, fromA
func (s *QueryData) fetch(traceCtx context.Context, client *client.Client, q *models.Query, enablePrometheusDataplane bool) *backend.DataResponse {
logger := s.log.FromContext(traceCtx)
logger.Debug("Sending query", "start", q.Start, "end", q.End, "step", q.Step, "query", q.Expr)
logger.Debug("Sending query", "start", q.Start, "end", q.End, "step", q.Step, "query", q.Expr /*, "queryTimeout", s.QueryTimeout*/)
dr := &backend.DataResponse{
Frames: data.Frames{},

View File

@@ -35,8 +35,9 @@ func New(
}
return &Resource{
log: plog,
promClient: client.NewClient(httpClient, httpMethod, settings.URL),
log: plog,
// we don't use queryTimeout for resource calls
promClient: client.NewClient(httpClient, httpMethod, settings.URL, ""),
}, nil
}

View File

@@ -18,6 +18,7 @@ import (
"github.com/grafana/grafana/pkg/services/accesscontrol/ossaccesscontrol"
"github.com/grafana/grafana/pkg/services/anonymous"
"github.com/grafana/grafana/pkg/services/anonymous/anonimpl"
"github.com/grafana/grafana/pkg/services/anonymous/validator"
"github.com/grafana/grafana/pkg/services/apiserver/standalone"
"github.com/grafana/grafana/pkg/services/auth"
"github.com/grafana/grafana/pkg/services/auth/authimpl"
@@ -54,6 +55,8 @@ var wireExtsBasicSet = wire.NewSet(
authimpl.ProvideUserAuthTokenService,
wire.Bind(new(auth.UserTokenService), new(*authimpl.UserAuthTokenService)),
wire.Bind(new(auth.UserTokenBackgroundService), new(*authimpl.UserAuthTokenService)),
validator.ProvideAnonUserLimitValidator,
wire.Bind(new(validator.AnonUserLimitValidator), new(*validator.AnonUserLimitValidatorImpl)),
anonimpl.ProvideAnonymousDeviceService,
wire.Bind(new(anonymous.Service), new(*anonimpl.AnonDeviceService)),
licensing.ProvideService,

View File

@@ -17,6 +17,7 @@ import (
"github.com/grafana/grafana/pkg/services/anonymous"
"github.com/grafana/grafana/pkg/services/anonymous/anonimpl/anonstore"
"github.com/grafana/grafana/pkg/services/anonymous/anonimpl/api"
"github.com/grafana/grafana/pkg/services/anonymous/validator"
"github.com/grafana/grafana/pkg/services/authn"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/setting"
@@ -28,23 +29,26 @@ const deviceIDHeader = "X-Grafana-Device-Id"
const keepFor = time.Hour * 24 * 61
type AnonDeviceService struct {
log log.Logger
localCache *localcache.CacheService
anonStore anonstore.AnonStore
serverLock *serverlock.ServerLockService
cfg *setting.Cfg
log log.Logger
localCache *localcache.CacheService
anonStore anonstore.AnonStore
serverLock *serverlock.ServerLockService
cfg *setting.Cfg
limitValidator validator.AnonUserLimitValidator
}
func ProvideAnonymousDeviceService(usageStats usagestats.Service, authBroker authn.Service,
sqlStore db.DB, cfg *setting.Cfg, orgService org.Service,
serverLockService *serverlock.ServerLockService, accesscontrol accesscontrol.AccessControl, routeRegister routing.RouteRegister,
validator validator.AnonUserLimitValidator,
) *AnonDeviceService {
a := &AnonDeviceService{
log: log.New("anonymous-session-service"),
localCache: localcache.New(29*time.Minute, 15*time.Minute),
anonStore: anonstore.ProvideAnonDBStore(sqlStore, cfg.AnonymousDeviceLimit),
serverLock: serverLockService,
cfg: cfg,
log: log.New("anonymous-session-service"),
localCache: localcache.New(29*time.Minute, 15*time.Minute),
anonStore: anonstore.ProvideAnonDBStore(sqlStore, cfg.AnonymousDeviceLimit),
serverLock: serverLockService,
cfg: cfg,
limitValidator: validator,
}
usageStats.RegisterMetricsFunc(a.usageStatFn)
@@ -81,6 +85,11 @@ func (a *AnonDeviceService) usageStatFn(ctx context.Context) (map[string]any, er
}
func (a *AnonDeviceService) tagDeviceUI(ctx context.Context, device *anonstore.Device) error {
err := a.limitValidator.Validate(ctx)
if err != nil {
return err
}
key := device.CacheKey()
if val, ok := a.localCache.Get(key); ok {
@@ -109,8 +118,7 @@ func (a *AnonDeviceService) tagDeviceUI(ctx context.Context, device *anonstore.D
return nil
}
func (a *AnonDeviceService) untagDevice(ctx context.Context,
identity *authn.Identity, r *authn.Request, err error) {
func (a *AnonDeviceService) untagDevice(ctx context.Context, _ *authn.Identity, r *authn.Request, err error) {
if err != nil {
return
}

View File

@@ -15,6 +15,7 @@ import (
"github.com/grafana/grafana/pkg/services/accesscontrol/actest"
"github.com/grafana/grafana/pkg/services/anonymous"
"github.com/grafana/grafana/pkg/services/anonymous/anonimpl/anonstore"
"github.com/grafana/grafana/pkg/services/anonymous/validator"
"github.com/grafana/grafana/pkg/services/authn/authntest"
"github.com/grafana/grafana/pkg/services/org/orgtest"
"github.com/grafana/grafana/pkg/setting"
@@ -123,7 +124,7 @@ func TestIntegrationDeviceService_tag(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
store := db.InitTestDB(t)
anonService := ProvideAnonymousDeviceService(&usagestats.UsageStatsMock{},
&authntest.FakeService{}, store, setting.NewCfg(), orgtest.NewOrgServiceFake(), nil, actest.FakeAccessControl{}, &routing.RouteRegisterImpl{})
&authntest.FakeService{}, store, setting.NewCfg(), orgtest.NewOrgServiceFake(), nil, actest.FakeAccessControl{}, &routing.RouteRegisterImpl{}, validator.FakeAnonUserLimitValidator{})
for _, req := range tc.req {
err := anonService.TagDevice(context.Background(), req.httpReq, req.kind)
@@ -161,7 +162,7 @@ func TestIntegrationAnonDeviceService_localCacheSafety(t *testing.T) {
}
store := db.InitTestDB(t)
anonService := ProvideAnonymousDeviceService(&usagestats.UsageStatsMock{},
&authntest.FakeService{}, store, setting.NewCfg(), orgtest.NewOrgServiceFake(), nil, actest.FakeAccessControl{}, &routing.RouteRegisterImpl{})
&authntest.FakeService{}, store, setting.NewCfg(), orgtest.NewOrgServiceFake(), nil, actest.FakeAccessControl{}, &routing.RouteRegisterImpl{}, validator.FakeAnonUserLimitValidator{})
req := &http.Request{
Header: http.Header{
@@ -259,7 +260,7 @@ func TestIntegrationDeviceService_SearchDevice(t *testing.T) {
store := db.InitTestDB(t)
cfg := setting.NewCfg()
cfg.AnonymousEnabled = true
anonService := ProvideAnonymousDeviceService(&usagestats.UsageStatsMock{}, &authntest.FakeService{}, store, cfg, orgtest.NewOrgServiceFake(), nil, actest.FakeAccessControl{}, &routing.RouteRegisterImpl{})
anonService := ProvideAnonymousDeviceService(&usagestats.UsageStatsMock{}, &authntest.FakeService{}, store, cfg, orgtest.NewOrgServiceFake(), nil, actest.FakeAccessControl{}, &routing.RouteRegisterImpl{}, validator.FakeAnonUserLimitValidator{})
for _, tc := range testCases {
err := store.Reset()
@@ -300,6 +301,7 @@ func TestIntegrationAnonDeviceService_DeviceLimitWithCache(t *testing.T) {
nil,
actest.FakeAccessControl{},
&routing.RouteRegisterImpl{},
validator.FakeAnonUserLimitValidator{},
)
// Define test cases

View File

@@ -0,0 +1,12 @@
package validator
import "context"
type FakeAnonUserLimitValidator struct {
}
var _ AnonUserLimitValidator = (*FakeAnonUserLimitValidator)(nil)
func (f FakeAnonUserLimitValidator) Validate(_ context.Context) error {
return nil
}

View File

@@ -0,0 +1,23 @@
package validator
import (
"context"
)
type AnonUserLimitValidator interface {
Validate(ctx context.Context) error
}
// AnonUserLimitValidatorImpl is used to validate the limit of Anonymous user
type AnonUserLimitValidatorImpl struct {
}
var _ AnonUserLimitValidator = (*AnonUserLimitValidatorImpl)(nil)
func ProvideAnonUserLimitValidator() *AnonUserLimitValidatorImpl {
return &AnonUserLimitValidatorImpl{}
}
func (a AnonUserLimitValidatorImpl) Validate(_ context.Context) error {
return nil
}

View File

@@ -166,7 +166,7 @@ func (cma *CloudMigrationAPI) GetSessionList(c *contextmodel.ReqContext) respons
ctx, span := cma.tracer.Start(c.Req.Context(), "MigrationAPI.GetSessionList")
defer span.End()
sl, err := cma.cloudMigrationService.GetSessionList(ctx)
sl, err := cma.cloudMigrationService.GetSessionList(ctx, c.OrgID)
if err != nil {
return response.ErrOrFallback(http.StatusInternalServerError, "session list error", err)
}
@@ -193,7 +193,7 @@ func (cma *CloudMigrationAPI) GetSession(c *contextmodel.ReqContext) response.Re
return response.Error(http.StatusBadRequest, "invalid session uid", err)
}
s, err := cma.cloudMigrationService.GetSession(ctx, uid)
s, err := cma.cloudMigrationService.GetSession(ctx, c.OrgID, uid)
if err != nil {
return response.ErrOrFallback(http.StatusNotFound, "session not found", err)
}
@@ -226,6 +226,7 @@ func (cma *CloudMigrationAPI) CreateSession(c *contextmodel.ReqContext) response
}
s, err := cma.cloudMigrationService.CreateSession(ctx, cloudmigration.CloudMigrationSessionRequest{
AuthToken: cmd.AuthToken,
OrgID: c.SignedInUser.OrgID,
})
if err != nil {
return response.ErrOrFallback(http.StatusInternalServerError, "session creation error", err)
@@ -260,7 +261,7 @@ func (cma *CloudMigrationAPI) RunMigration(c *contextmodel.ReqContext) response.
return response.ErrOrFallback(http.StatusBadRequest, "invalid migration uid", err)
}
result, err := cma.cloudMigrationService.RunMigration(ctx, uid)
result, err := cma.cloudMigrationService.RunMigration(ctx, c.OrgID, uid)
if err != nil {
return response.ErrOrFallback(http.StatusInternalServerError, "migration run error", err)
}
@@ -353,7 +354,7 @@ func (cma *CloudMigrationAPI) DeleteSession(c *contextmodel.ReqContext) response
return response.ErrOrFallback(http.StatusBadRequest, "invalid session uid", err)
}
_, err := cma.cloudMigrationService.DeleteSession(ctx, uid)
_, err := cma.cloudMigrationService.DeleteSession(ctx, c.OrgID, uid)
if err != nil {
return response.ErrOrFallback(http.StatusInternalServerError, "session delete error", err)
}
@@ -418,6 +419,7 @@ func (cma *CloudMigrationAPI) GetSnapshot(c *contextmodel.ReqContext) response.R
SessionUID: sessUid,
ResultPage: c.QueryInt("resultPage"),
ResultLimit: c.QueryInt("resultLimit"),
OrgID: c.SignedInUser.OrgID,
}
if q.ResultLimit == 0 {
q.ResultLimit = 100
@@ -491,6 +493,7 @@ func (cma *CloudMigrationAPI) GetSnapshotList(c *contextmodel.ReqContext) respon
SessionUID: uid,
Limit: c.QueryInt("limit"),
Page: c.QueryInt("page"),
OrgID: c.SignedInUser.OrgID,
}
if q.Limit == 0 {
q.Limit = 100
@@ -542,7 +545,7 @@ func (cma *CloudMigrationAPI) UploadSnapshot(c *contextmodel.ReqContext) respons
return response.ErrOrFallback(http.StatusBadRequest, "invalid snapshot uid", err)
}
if err := cma.cloudMigrationService.UploadSnapshot(ctx, sessUid, snapshotUid); err != nil {
if err := cma.cloudMigrationService.UploadSnapshot(ctx, c.OrgID, sessUid, snapshotUid); err != nil {
return response.ErrOrFallback(http.StatusInternalServerError, "error uploading snapshot", err)
}

View File

@@ -17,17 +17,17 @@ type Service interface {
DeleteToken(ctx context.Context, uid string) error
CreateSession(ctx context.Context, req CloudMigrationSessionRequest) (*CloudMigrationSessionResponse, error)
GetSession(ctx context.Context, migUID string) (*CloudMigrationSession, error)
DeleteSession(ctx context.Context, migUID string) (*CloudMigrationSession, error)
GetSessionList(context.Context) (*CloudMigrationSessionListResponse, error)
GetSession(ctx context.Context, orgID int64, migUID string) (*CloudMigrationSession, error)
DeleteSession(ctx context.Context, orgID int64, migUID string) (*CloudMigrationSession, error)
GetSessionList(ctx context.Context, orgID int64) (*CloudMigrationSessionListResponse, error)
RunMigration(ctx context.Context, migUID string) (*MigrateDataResponse, error)
RunMigration(ctx context.Context, orgID int64, migUID string) (*MigrateDataResponse, error)
GetMigrationStatus(ctx context.Context, runUID string) (*CloudMigrationSnapshot, error)
GetMigrationRunList(ctx context.Context, migUID string) (*CloudMigrationRunList, error)
CreateSnapshot(ctx context.Context, signedInUser *user.SignedInUser, sessionUid string) (*CloudMigrationSnapshot, error)
GetSnapshot(ctx context.Context, query GetSnapshotsQuery) (*CloudMigrationSnapshot, error)
GetSnapshotList(ctx context.Context, query ListSnapshotsQuery) ([]CloudMigrationSnapshot, error)
UploadSnapshot(ctx context.Context, sessionUid string, snapshotUid string) error
UploadSnapshot(ctx context.Context, orgID int64, sessionUid string, snapshotUid string) error
CancelSnapshot(ctx context.Context, sessionUid string, snapshotUid string) error
}
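The security fix threads `orgID` through every session operation so the store can filter by the caller's organization. Its effect on a lookup can be sketched as follows (the in-memory store and field names are illustrative, not the Grafana schema):

```go
package main

import (
	"errors"
	"fmt"
)

type Session struct {
	UID   string
	OrgID int64
}

var ErrMigrationNotFound = errors.New("migration not found")

// getSessionByUID matches on both UID and OrgID, so a user in org 2 can no
// longer fetch a session belonging to org 1 by guessing its UID — the
// cross-org lookup is indistinguishable from a missing session.
func getSessionByUID(store []Session, orgID int64, uid string) (*Session, error) {
	for i := range store {
		if store[i].UID == uid && store[i].OrgID == orgID {
			return &store[i], nil
		}
	}
	return nil, ErrMigrationNotFound
}

func main() {
	store := []Session{{UID: "abc", OrgID: 1}}
	_, err := getSessionByUID(store, 2, "abc") // wrong org: not found
	fmt.Println(err)
	s, _ := getSessionByUID(store, 1, "abc") // same org: found
	fmt.Println(s.UID)
}
```

Returning the same not-found error for both cases avoids leaking whether a UID exists in another organization.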

View File

@@ -337,10 +337,10 @@ func (s *Service) DeleteToken(ctx context.Context, tokenID string) error {
return nil
}
func (s *Service) GetSession(ctx context.Context, uid string) (*cloudmigration.CloudMigrationSession, error) {
ctx, span := s.tracer.Start(ctx, "CloudMigrationService.GetMigration")
func (s *Service) GetSession(ctx context.Context, orgID int64, uid string) (*cloudmigration.CloudMigrationSession, error) {
ctx, span := s.tracer.Start(ctx, "CloudMigrationService.GetSession")
defer span.End()
migration, err := s.store.GetMigrationSessionByUID(ctx, uid)
migration, err := s.store.GetMigrationSessionByUID(ctx, orgID, uid)
if err != nil {
return nil, err
}
@@ -348,8 +348,11 @@ func (s *Service) GetSession(ctx context.Context, uid string) (*cloudmigration.C
return migration, nil
}
func (s *Service) GetSessionList(ctx context.Context) (*cloudmigration.CloudMigrationSessionListResponse, error) {
values, err := s.store.GetCloudMigrationSessionList(ctx)
func (s *Service) GetSessionList(ctx context.Context, orgID int64) (*cloudmigration.CloudMigrationSessionListResponse, error) {
ctx, span := s.tracer.Start(ctx, "CloudMigrationService.GetSessionList")
defer span.End()
values, err := s.store.GetCloudMigrationSessionList(ctx, orgID)
if err != nil {
return nil, fmt.Errorf("retrieving session list from store: %w", err)
}
@@ -380,7 +383,7 @@ func (s *Service) CreateSession(ctx context.Context, cmd cloudmigration.CloudMig
return nil, fmt.Errorf("invalid token") // don't want to leak info here
}
migration := token.ToMigration()
migration := token.ToMigration(cmd.OrgID)
// validate token against GMS before saving
if err := s.ValidateToken(ctx, migration); err != nil {
return nil, fmt.Errorf("token validation: %w", err)
@@ -401,15 +404,15 @@ func (s *Service) CreateSession(ctx context.Context, cmd cloudmigration.CloudMig
}, nil
}
func (s *Service) RunMigration(ctx context.Context, uid string) (*cloudmigration.MigrateDataResponse, error) {
func (s *Service) RunMigration(ctx context.Context, orgID int64, uid string) (*cloudmigration.MigrateDataResponse, error) {
// Get migration to read the auth token
migration, err := s.GetSession(ctx, uid)
migration, err := s.GetSession(ctx, orgID, uid)
if err != nil {
return nil, fmt.Errorf("migration get error: %w", err)
}
// Get migration data JSON
request, err := s.getMigrationDataJSON(ctx, &user.SignedInUser{})
request, err := s.getMigrationDataJSON(ctx, &user.SignedInUser{OrgID: orgID})
if err != nil {
s.log.Error("error getting the json request body for migration run", "err", err.Error())
return nil, fmt.Errorf("migration data get error: %w", err)
@@ -469,8 +472,11 @@ func (s *Service) GetMigrationRunList(ctx context.Context, migUID string) (*clou
return runList, nil
}
func (s *Service) DeleteSession(ctx context.Context, sessionUID string) (*cloudmigration.CloudMigrationSession, error) {
session, snapshots, err := s.store.DeleteMigrationSessionByUID(ctx, sessionUID)
func (s *Service) DeleteSession(ctx context.Context, orgID int64, sessionUID string) (*cloudmigration.CloudMigrationSession, error) {
ctx, span := s.tracer.Start(ctx, "CloudMigrationService.DeleteSession")
defer span.End()
session, snapshots, err := s.store.DeleteMigrationSessionByUID(ctx, orgID, sessionUID)
if err != nil {
s.report(ctx, session, gmsclient.EventDisconnect, 0, err)
return nil, fmt.Errorf("deleting migration from db for session %v: %w", sessionUID, err)
@@ -488,7 +494,7 @@ func (s *Service) CreateSnapshot(ctx context.Context, signedInUser *user.SignedI
defer span.End()
// fetch session for the gms auth token
session, err := s.store.GetMigrationSessionByUID(ctx, sessionUid)
session, err := s.store.GetMigrationSessionByUID(ctx, signedInUser.GetOrgID(), sessionUid)
if err != nil {
return nil, fmt.Errorf("fetching migration session for uid %s: %w", sessionUid, err)
}
@@ -565,13 +571,13 @@ func (s *Service) GetSnapshot(ctx context.Context, query cloudmigration.GetSnaps
ctx, span := s.tracer.Start(ctx, "CloudMigrationService.GetSnapshot")
defer span.End()
sessionUid, snapshotUid := query.SessionUID, query.SnapshotUID
snapshot, err := s.store.GetSnapshotByUID(ctx, sessionUid, snapshotUid, query.ResultPage, query.ResultLimit)
orgID, sessionUid, snapshotUid := query.OrgID, query.SessionUID, query.SnapshotUID
snapshot, err := s.store.GetSnapshotByUID(ctx, orgID, sessionUid, snapshotUid, query.ResultPage, query.ResultLimit)
if err != nil {
return nil, fmt.Errorf("fetching snapshot for uid %s: %w", snapshotUid, err)
}
session, err := s.store.GetMigrationSessionByUID(ctx, sessionUid)
session, err := s.store.GetMigrationSessionByUID(ctx, orgID, sessionUid)
if err != nil {
return nil, fmt.Errorf("fetching session for uid %s: %w", sessionUid, err)
}
@@ -614,7 +620,7 @@ func (s *Service) GetSnapshot(ctx context.Context, query cloudmigration.GetSnaps
}
// Refresh the snapshot after the update
snapshot, err = s.store.GetSnapshotByUID(ctx, sessionUid, snapshotUid, query.ResultPage, query.ResultLimit)
snapshot, err = s.store.GetSnapshotByUID(ctx, orgID, sessionUid, snapshotUid, query.ResultPage, query.ResultLimit)
if err != nil {
return nil, fmt.Errorf("fetching snapshot for uid %s: %w", snapshotUid, err)
}
@@ -642,7 +648,7 @@ func (s *Service) GetSnapshotList(ctx context.Context, query cloudmigration.List
return snapshotList, nil
}
func (s *Service) UploadSnapshot(ctx context.Context, sessionUid string, snapshotUid string) error {
func (s *Service) UploadSnapshot(ctx context.Context, orgID int64, sessionUid string, snapshotUid string) error {
ctx, span := s.tracer.Start(ctx, "CloudMigrationService.UploadSnapshot",
trace.WithAttributes(
attribute.String("sessionUid", sessionUid),
@@ -652,7 +658,7 @@ func (s *Service) UploadSnapshot(ctx context.Context, sessionUid string, snapsho
defer span.End()
// fetch session for the gms auth token
session, err := s.store.GetMigrationSessionByUID(ctx, sessionUid)
session, err := s.store.GetMigrationSessionByUID(ctx, orgID, sessionUid)
if err != nil {
return fmt.Errorf("fetching migration session for uid %s: %w", sessionUid, err)
}
@@ -660,6 +666,7 @@ func (s *Service) UploadSnapshot(ctx context.Context, sessionUid string, snapsho
snapshot, err := s.GetSnapshot(ctx, cloudmigration.GetSnapshotsQuery{
SnapshotUID: snapshotUid,
SessionUID: sessionUid,
OrgID: orgID,
})
if err != nil {
return fmt.Errorf("fetching snapshot with uid %s: %w", snapshotUid, err)

View File

@@ -29,11 +29,11 @@ func (s *NoopServiceImpl) ValidateToken(ctx context.Context, cm cloudmigration.C
return cloudmigration.ErrFeatureDisabledError
}
func (s *NoopServiceImpl) GetSession(ctx context.Context, uid string) (*cloudmigration.CloudMigrationSession, error) {
func (s *NoopServiceImpl) GetSession(ctx context.Context, orgID int64, uid string) (*cloudmigration.CloudMigrationSession, error) {
return nil, cloudmigration.ErrFeatureDisabledError
}
func (s *NoopServiceImpl) GetSessionList(ctx context.Context) (*cloudmigration.CloudMigrationSessionListResponse, error) {
func (s *NoopServiceImpl) GetSessionList(ctx context.Context, orgID int64) (*cloudmigration.CloudMigrationSessionListResponse, error) {
return nil, cloudmigration.ErrFeatureDisabledError
}
@@ -49,7 +49,7 @@ func (s *NoopServiceImpl) GetMigrationRunList(ctx context.Context, uid string) (
return nil, cloudmigration.ErrFeatureDisabledError
}
func (s *NoopServiceImpl) DeleteSession(ctx context.Context, uid string) (*cloudmigration.CloudMigrationSession, error) {
func (s *NoopServiceImpl) DeleteSession(ctx context.Context, orgID int64, uid string) (*cloudmigration.CloudMigrationSession, error) {
return nil, cloudmigration.ErrFeatureDisabledError
}
@@ -57,7 +57,7 @@ func (s *NoopServiceImpl) CreateMigrationRun(context.Context, cloudmigration.Clo
return "", cloudmigration.ErrInternalNotImplementedError
}
func (s *NoopServiceImpl) RunMigration(context.Context, string) (*cloudmigration.MigrateDataResponse, error) {
func (s *NoopServiceImpl) RunMigration(context.Context, int64, string) (*cloudmigration.MigrateDataResponse, error) {
return nil, cloudmigration.ErrFeatureDisabledError
}
@@ -73,7 +73,7 @@ func (s *NoopServiceImpl) GetSnapshotList(ctx context.Context, query cloudmigrat
return nil, cloudmigration.ErrFeatureDisabledError
}
func (s *NoopServiceImpl) UploadSnapshot(ctx context.Context, sessionUid string, snapshotUid string) error {
func (s *NoopServiceImpl) UploadSnapshot(ctx context.Context, orgID int64, sessionUid string, snapshotUid string) error {
return cloudmigration.ErrFeatureDisabledError
}

View File

@@ -9,7 +9,6 @@ import (
"github.com/google/uuid"
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/components/simplejson"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/kvstore"
"github.com/grafana/grafana/pkg/infra/tracing"
@@ -31,7 +30,6 @@ import (
"github.com/grafana/grafana/pkg/setting"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/otel/sdk/trace/tracetest"
)
@@ -74,6 +72,7 @@ func Test_CreateGetRunMigrationsAndRuns(t *testing.T) {
cmd := cloudmigration.CloudMigrationSessionRequest{
AuthToken: createTokenResp.Token,
OrgID: 1,
}
createResp, err := s.CreateSession(context.Background(), cmd)
@@ -81,20 +80,20 @@ func Test_CreateGetRunMigrationsAndRuns(t *testing.T) {
require.NotEmpty(t, createResp.UID)
require.NotEmpty(t, createResp.Slug)
getMigResp, err := s.GetSession(context.Background(), createResp.UID)
getMigResp, err := s.GetSession(context.Background(), 1, createResp.UID)
require.NoError(t, err)
require.NotNil(t, getMigResp)
require.Equal(t, createResp.UID, getMigResp.UID)
require.Equal(t, createResp.Slug, getMigResp.Slug)
listResp, err := s.GetSessionList(context.Background())
listResp, err := s.GetSessionList(context.Background(), 1)
require.NoError(t, err)
require.NotNil(t, listResp)
require.Equal(t, 1, len(listResp.Sessions))
require.Equal(t, createResp.UID, listResp.Sessions[0].UID)
require.Equal(t, createResp.Slug, listResp.Sessions[0].Slug)
runResp, err := s.RunMigration(ctxWithSignedInUser(), createResp.UID)
runResp, err := s.RunMigration(ctxWithSignedInUser(), 1, createResp.UID)
require.NoError(t, err)
require.NotNil(t, runResp)
resultItemsByType := make(map[string]int)
@@ -375,22 +374,19 @@ func Test_OnlyQueriesStatusFromGMSWhenRequired(t *testing.T) {
func Test_DeletedDashboardsNotMigrated(t *testing.T) {
s := setUpServiceTest(t, false).(*Service)
/** NOTE: this is not used at the moment since we changed the service
// modify what the mock returns for just this test case
dashMock := s.dashboardService.(*dashboards.FakeDashboardService)
dashMock.On("GetAllDashboards", mock.Anything).Return(
[]*dashboards.Dashboard{
{
UID: "1",
Data: simplejson.New(),
},
{
UID: "2",
Data: simplejson.New(),
Deleted: time.Now(),
},
{UID: "1", OrgID: 1, Data: simplejson.New()},
{UID: "2", OrgID: 1, Data: simplejson.New(), Deleted: time.Now()},
},
nil,
)
*/
data, err := s.getMigrationDataJSON(context.TODO(), &user.SignedInUser{OrgID: 1})
assert.NoError(t, err)
@@ -555,7 +551,7 @@ func TestDeleteSession(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
t.Cleanup(cancel)
session, err := s.DeleteSession(ctx, "invalid-session-uid")
session, err := s.DeleteSession(ctx, 2, "invalid-session-uid")
require.Nil(t, session)
require.Error(t, err)
})
@@ -570,6 +566,7 @@ func TestDeleteSession(t *testing.T) {
cmd := cloudmigration.CloudMigrationSessionRequest{
AuthToken: createTokenResp.Token,
OrgID: 3,
}
createResp, err := s.CreateSession(ctx, cmd)
@@ -577,12 +574,12 @@ func TestDeleteSession(t *testing.T) {
require.NotEmpty(t, createResp.UID)
require.NotEmpty(t, createResp.Slug)
deletedSession, err := s.DeleteSession(ctx, createResp.UID)
deletedSession, err := s.DeleteSession(ctx, cmd.OrgID, createResp.UID)
require.NoError(t, err)
require.NotNil(t, deletedSession)
require.Equal(t, deletedSession.UID, createResp.UID)
notFoundSession, err := s.GetSession(ctx, deletedSession.UID)
notFoundSession, err := s.GetSession(ctx, cmd.OrgID, deletedSession.UID)
require.ErrorIs(t, err, cloudmigration.ErrMigrationNotFound)
require.Nil(t, notFoundSession)
})
@@ -638,7 +635,7 @@ func setUpServiceTest(t *testing.T, withDashboardMock bool) cloudmigration.Servi
spanRecorder := tracetest.NewSpanRecorder()
tracer := tracing.InitializeTracerForTest(tracing.WithSpanProcessor(spanRecorder))
mockFolder := &foldertest.FakeService{
ExpectedFolder: &folder.Folder{UID: "folderUID", Title: "Folder"},
ExpectedFolder: &folder.Folder{UID: "folderUID", OrgID: 1, Title: "Folder"},
}
cfg := setting.NewCfg()
@@ -651,6 +648,7 @@ func setUpServiceTest(t *testing.T, withDashboardMock bool) cloudmigration.Servi
cfg.CloudMigration.SnapshotFolder = filepath.Join(os.TempDir(), uuid.NewString())
dashboardService := dashboards.NewFakeDashboardService(t)
/**
if withDashboardMock {
dashboardService.On("GetAllDashboards", mock.Anything).Return(
[]*dashboards.Dashboard{
@@ -662,14 +660,28 @@ func setUpServiceTest(t *testing.T, withDashboardMock bool) cloudmigration.Servi
nil,
)
}
*/
dsService := &datafakes.FakeDataSourceService{
DataSources: []*datasources.DataSource{
{Name: "mmm", Type: "mysql"},
{Name: "ZZZ", Type: "infinity"},
{Name: "mmm", OrgID: 1, Type: "mysql"},
{Name: "ZZZ", OrgID: 1, Type: "infinity"},
},
}
// Insert test data for dashboard test, should be removed later
_, err = sqlStore.GetSqlxSession().Exec(context.Background(), `
INSERT INTO
dashboard (id, org_id, data, deleted, slug, title, created, version, updated )
VALUES
(1, 1, '{}', null, 'asdf', 'ghjk', '2024-03-27 15:30:43.000' , '1','2024-03-27 15:30:43.000' ),
(2, 1, '{}', '2024-03-27 15:30:43.000','qwert', 'yuio', '2024-03-27 15:30:43.000' , '2','2024-03-27 15:30:43.000'),
(3, 2, '{}', null, 'asdf', 'ghjk', '2024-03-27 15:30:43.000' , '1','2024-03-27 15:30:43.000' ),
(4, 2, '{}', '2024-03-27 15:30:43.000','qwert', 'yuio', '2024-03-27 15:30:43.000' , '2','2024-03-27 15:30:43.000');
`,
)
require.NoError(t, err)
s, err := ProvideService(
cfg,
featuremgmt.WithFeatures(

View File

@@ -56,21 +56,21 @@ func (m FakeServiceImpl) CreateSession(_ context.Context, _ cloudmigration.Cloud
}, nil
}
func (m FakeServiceImpl) GetSession(_ context.Context, _ string) (*cloudmigration.CloudMigrationSession, error) {
func (m FakeServiceImpl) GetSession(_ context.Context, _ int64, _ string) (*cloudmigration.CloudMigrationSession, error) {
if m.ReturnError {
return nil, fmt.Errorf("mock error")
}
return &cloudmigration.CloudMigrationSession{UID: "fake"}, nil
}
func (m FakeServiceImpl) DeleteSession(_ context.Context, _ string) (*cloudmigration.CloudMigrationSession, error) {
func (m FakeServiceImpl) DeleteSession(_ context.Context, _ int64, _ string) (*cloudmigration.CloudMigrationSession, error) {
if m.ReturnError {
return nil, fmt.Errorf("mock error")
}
return &cloudmigration.CloudMigrationSession{UID: "fake"}, nil
}
func (m FakeServiceImpl) GetSessionList(_ context.Context) (*cloudmigration.CloudMigrationSessionListResponse, error) {
func (m FakeServiceImpl) GetSessionList(_ context.Context, _ int64) (*cloudmigration.CloudMigrationSessionListResponse, error) {
if m.ReturnError {
return nil, fmt.Errorf("mock error")
}
@@ -82,7 +82,7 @@ func (m FakeServiceImpl) GetSessionList(_ context.Context) (*cloudmigration.Clou
}, nil
}
func (m FakeServiceImpl) RunMigration(_ context.Context, _ string) (*cloudmigration.MigrateDataResponse, error) {
func (m FakeServiceImpl) RunMigration(_ context.Context, _ int64, _ string) (*cloudmigration.MigrateDataResponse, error) {
if m.ReturnError {
return nil, fmt.Errorf("mock error")
}
@@ -170,7 +170,7 @@ func (m FakeServiceImpl) GetSnapshotList(ctx context.Context, query cloudmigrati
}, nil
}
func (m FakeServiceImpl) UploadSnapshot(ctx context.Context, sessionUid string, snapshotUid string) error {
func (m FakeServiceImpl) UploadSnapshot(ctx context.Context, _ int64, sessionUid string, snapshotUid string) error {
if m.ReturnError {
return fmt.Errorf("mock error")
}

View File

@@ -25,7 +25,7 @@ import (
func (s *Service) getMigrationDataJSON(ctx context.Context, signedInUser *user.SignedInUser) (*cloudmigration.MigrateDataRequest, error) {
// Data sources
dataSources, err := s.getDataSourceCommands(ctx)
dataSources, err := s.getDataSourceCommands(ctx, signedInUser)
if err != nil {
s.log.Error("Failed to get datasources", "err", err)
return nil, err
@@ -85,14 +85,17 @@ func (s *Service) getMigrationDataJSON(ctx context.Context, signedInUser *user.S
return migrationData, nil
}
func (s *Service) getDataSourceCommands(ctx context.Context) ([]datasources.AddDataSourceCommand, error) {
dataSources, err := s.dsService.GetAllDataSources(ctx, &datasources.GetAllDataSourcesQuery{})
func (s *Service) getDataSourceCommands(ctx context.Context, signedInUser *user.SignedInUser) ([]datasources.AddDataSourceCommand, error) {
ctx, span := s.tracer.Start(ctx, "CloudMigrationService.getDataSourceCommands")
defer span.End()
dataSources, err := s.dsService.GetDataSources(ctx, &datasources.GetDataSourcesQuery{OrgID: signedInUser.GetOrgID()})
if err != nil {
s.log.Error("Failed to get all datasources", "err", err)
return nil, err
}
result := []datasources.AddDataSourceCommand{}
result := make([]datasources.AddDataSourceCommand, 0, len(dataSources))
for _, dataSource := range dataSources {
// Decrypt secure json to send raw credentials
decryptedData, err := s.secretsService.DecryptJsonData(ctx, dataSource.SecureJsonData)
@@ -124,7 +127,10 @@ func (s *Service) getDataSourceCommands(ctx context.Context) ([]datasources.AddD
// getDashboardAndFolderCommands returns the json payloads required by the dashboard and folder creation APIs
func (s *Service) getDashboardAndFolderCommands(ctx context.Context, signedInUser *user.SignedInUser) ([]dashboards.Dashboard, []folder.CreateFolderCommand, error) {
dashs, err := s.dashboardService.GetAllDashboards(ctx)
ctx, span := s.tracer.Start(ctx, "CloudMigrationService.getDashboardAndFolderCommands")
defer span.End()
dashs, err := s.store.GetAllDashboardsByOrgId(ctx, signedInUser.GetOrgID())
if err != nil {
return nil, nil, err
}
@@ -150,20 +156,21 @@ func (s *Service) getDashboardAndFolderCommands(ctx context.Context, signedInUse
folders, err := s.folderService.GetFolders(ctx, folder.GetFoldersQuery{
UIDs: folderUids,
SignedInUser: signedInUser,
OrgID: signedInUser.GetOrgID(),
WithFullpathUIDs: true,
})
if err != nil {
return nil, nil, err
}
folderCmds := make([]folder.CreateFolderCommand, len(folders))
for i, f := range folders {
folderCmds[i] = folder.CreateFolderCommand{
folderCmds := make([]folder.CreateFolderCommand, 0, len(folders))
for _, f := range folders {
folderCmds = append(folderCmds, folder.CreateFolderCommand{
UID: f.UID,
Title: f.Title,
Description: f.Description,
ParentUID: f.ParentUID,
}
})
}
return dashboardCmds, folderCmds, nil


@@ -4,15 +4,16 @@ import (
"context"
"github.com/grafana/grafana/pkg/services/cloudmigration"
"github.com/grafana/grafana/pkg/services/dashboards"
)
type store interface {
CreateMigrationSession(ctx context.Context, session cloudmigration.CloudMigrationSession) (*cloudmigration.CloudMigrationSession, error)
GetMigrationSessionByUID(ctx context.Context, uid string) (*cloudmigration.CloudMigrationSession, error)
GetCloudMigrationSessionList(ctx context.Context) ([]*cloudmigration.CloudMigrationSession, error)
GetMigrationSessionByUID(ctx context.Context, orgID int64, uid string) (*cloudmigration.CloudMigrationSession, error)
GetCloudMigrationSessionList(ctx context.Context, orgID int64) ([]*cloudmigration.CloudMigrationSession, error)
// DeleteMigrationSessionByUID deletes the migration session and all related snapshots and resources.
// The work is done in a transaction.
DeleteMigrationSessionByUID(ctx context.Context, uid string) (*cloudmigration.CloudMigrationSession, []cloudmigration.CloudMigrationSnapshot, error)
DeleteMigrationSessionByUID(ctx context.Context, orgID int64, uid string) (*cloudmigration.CloudMigrationSession, []cloudmigration.CloudMigrationSnapshot, error)
CreateMigrationRun(ctx context.Context, cmr cloudmigration.CloudMigrationSnapshot) (string, error)
GetMigrationStatus(ctx context.Context, cmrUID string) (*cloudmigration.CloudMigrationSnapshot, error)
@@ -21,12 +22,16 @@ type store interface {
CreateSnapshot(ctx context.Context, snapshot cloudmigration.CloudMigrationSnapshot) (string, error)
UpdateSnapshot(ctx context.Context, snapshot cloudmigration.UpdateSnapshotCmd) error
GetSnapshotByUID(ctx context.Context, sessUid, id string, resultPage int, resultLimit int) (*cloudmigration.CloudMigrationSnapshot, error)
GetSnapshotByUID(ctx context.Context, orgID int64, sessUid, id string, resultPage int, resultLimit int) (*cloudmigration.CloudMigrationSnapshot, error)
GetSnapshotList(ctx context.Context, query cloudmigration.ListSnapshotsQuery) ([]cloudmigration.CloudMigrationSnapshot, error)
DeleteSnapshot(ctx context.Context, snapshotUid string) error
CreateUpdateSnapshotResources(ctx context.Context, snapshotUid string, resources []cloudmigration.CloudMigrationResource) error
GetSnapshotResources(ctx context.Context, snapshotUid string, page int, limit int) ([]cloudmigration.CloudMigrationResource, error)
GetSnapshotResourceStats(ctx context.Context, snapshotUid string) (*cloudmigration.SnapshotResourceStats, error)
DeleteSnapshotResources(ctx context.Context, snapshotUid string) error
// Deleted because they were not used externally:
// - DeleteSnapshot(ctx context.Context, snapshotUid string) error
// - CreateUpdateSnapshotResources(ctx context.Context, snapshotUid string, resources []cloudmigration.CloudMigrationResource) error
// - GetSnapshotResources(ctx context.Context, snapshotUid string, page int, limit int) ([]cloudmigration.CloudMigrationResource, error)
// - GetSnapshotResourceStats(ctx context.Context, snapshotUid string) (*cloudmigration.SnapshotResourceStats, error)
// - DeleteSnapshotResources(ctx context.Context, snapshotUid string) error
// TODO: move this function to dashboards/databases/databases.go
GetAllDashboardsByOrgId(ctx context.Context, orgID int64) ([]*dashboards.Dashboard, error)
}


@@ -8,6 +8,7 @@ import (
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/services/cloudmigration"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/secrets"
secretskv "github.com/grafana/grafana/pkg/services/secrets/kvstore"
"github.com/grafana/grafana/pkg/services/sqlstore"
@@ -28,10 +29,10 @@ const (
GetAllSnapshots = -1
)
func (ss *sqlStore) GetMigrationSessionByUID(ctx context.Context, uid string) (*cloudmigration.CloudMigrationSession, error) {
func (ss *sqlStore) GetMigrationSessionByUID(ctx context.Context, orgID int64, uid string) (*cloudmigration.CloudMigrationSession, error) {
var cm cloudmigration.CloudMigrationSession
err := ss.db.WithDbSession(ctx, func(sess *db.Session) error {
exist, err := sess.Where("uid=?", uid).Get(&cm)
exist, err := sess.Where("org_id=? AND uid=?", orgID, uid).Get(&cm)
if err != nil {
return err
}
@@ -89,11 +90,10 @@ func (ss *sqlStore) CreateMigrationSession(ctx context.Context, migration cloudm
return &migration, nil
}
func (ss *sqlStore) GetCloudMigrationSessionList(ctx context.Context) ([]*cloudmigration.CloudMigrationSession, error) {
func (ss *sqlStore) GetCloudMigrationSessionList(ctx context.Context, orgID int64) ([]*cloudmigration.CloudMigrationSession, error) {
var migrations = make([]*cloudmigration.CloudMigrationSession, 0)
err := ss.db.WithDbSession(ctx, func(sess *db.Session) error {
sess.OrderBy("created DESC")
return sess.Find(&migrations)
return sess.Where("org_id=?", orgID).OrderBy("created DESC").Find(&migrations)
})
if err != nil {
return nil, err
@@ -110,10 +110,10 @@ func (ss *sqlStore) GetCloudMigrationSessionList(ctx context.Context) ([]*cloudm
return migrations, nil
}
func (ss *sqlStore) DeleteMigrationSessionByUID(ctx context.Context, uid string) (*cloudmigration.CloudMigrationSession, []cloudmigration.CloudMigrationSnapshot, error) {
func (ss *sqlStore) DeleteMigrationSessionByUID(ctx context.Context, orgID int64, uid string) (*cloudmigration.CloudMigrationSession, []cloudmigration.CloudMigrationSnapshot, error) {
var c cloudmigration.CloudMigrationSession
err := ss.db.WithDbSession(ctx, func(sess *db.Session) error {
exist, err := sess.Where("uid=?", uid).Get(&c)
exist, err := sess.Where("org_id=? AND uid=?", orgID, uid).Get(&c)
if err != nil {
return err
}
@@ -139,11 +139,11 @@ func (ss *sqlStore) DeleteMigrationSessionByUID(ctx context.Context, uid string)
err = ss.db.InTransaction(ctx, func(ctx context.Context) error {
for _, snapshot := range snapshots {
err := ss.DeleteSnapshotResources(ctx, snapshot.UID)
err := ss.deleteSnapshotResources(ctx, snapshot.UID)
if err != nil {
return fmt.Errorf("deleting snapshot resource from db: %w", err)
}
err = ss.DeleteSnapshot(ctx, snapshot.UID)
err = ss.deleteSnapshot(ctx, orgID, snapshot.UID)
if err != nil {
return fmt.Errorf("deleting snapshot from db: %w", err)
}
@@ -257,7 +257,7 @@ func (ss *sqlStore) UpdateSnapshot(ctx context.Context, update cloudmigration.Up
// Update resources if set
if len(update.Resources) > 0 {
if err := ss.CreateUpdateSnapshotResources(ctx, update.UID, update.Resources); err != nil {
if err := ss.createUpdateSnapshotResources(ctx, update.UID, update.Resources); err != nil {
return err
}
}
@@ -267,7 +267,7 @@ func (ss *sqlStore) UpdateSnapshot(ctx context.Context, update cloudmigration.Up
return err
}
func (ss *sqlStore) DeleteSnapshot(ctx context.Context, snapshotUid string) error {
func (ss *sqlStore) deleteSnapshot(ctx context.Context, orgID int64, snapshotUid string) error {
return ss.db.WithDbSession(ctx, func(sess *sqlstore.DBSession) error {
_, err := sess.Delete(cloudmigration.CloudMigrationSnapshot{
UID: snapshotUid,
@@ -276,9 +276,16 @@ func (ss *sqlStore) DeleteSnapshot(ctx context.Context, snapshotUid string) erro
})
}
func (ss *sqlStore) GetSnapshotByUID(ctx context.Context, sessionUid, uid string, resultPage int, resultLimit int) (*cloudmigration.CloudMigrationSnapshot, error) {
func (ss *sqlStore) GetSnapshotByUID(ctx context.Context, orgID int64, sessionUid, uid string, resultPage int, resultLimit int) (*cloudmigration.CloudMigrationSnapshot, error) {
// first we check if the session exists, using orgId and sessionUid
session, err := ss.GetMigrationSessionByUID(ctx, orgID, sessionUid)
if err != nil || session == nil {
return nil, err
}
// now we get the snapshot
var snapshot cloudmigration.CloudMigrationSnapshot
err := ss.db.WithDbSession(ctx, func(sess *db.Session) error {
err = ss.db.WithDbSession(ctx, func(sess *db.Session) error {
exist, err := sess.Where("session_uid=? AND uid=?", sessionUid, uid).Get(&snapshot)
if err != nil {
return err
@@ -300,11 +307,11 @@ func (ss *sqlStore) GetSnapshotByUID(ctx context.Context, sessionUid, uid string
snapshot.EncryptionKey = []byte(secret)
}
resources, err := ss.GetSnapshotResources(ctx, uid, resultPage, resultLimit)
resources, err := ss.getSnapshotResources(ctx, uid, resultPage, resultLimit)
if err == nil {
snapshot.Resources = resources
}
stats, err := ss.GetSnapshotResourceStats(ctx, uid)
stats, err := ss.getSnapshotResourceStats(ctx, uid)
if err == nil {
snapshot.StatsRollup = *stats
}
@@ -317,7 +324,9 @@ func (ss *sqlStore) GetSnapshotByUID(ctx context.Context, sessionUid, uid string
func (ss *sqlStore) GetSnapshotList(ctx context.Context, query cloudmigration.ListSnapshotsQuery) ([]cloudmigration.CloudMigrationSnapshot, error) {
var snapshots = make([]cloudmigration.CloudMigrationSnapshot, 0)
err := ss.db.WithDbSession(ctx, func(sess *db.Session) error {
sess.Join("INNER", "cloud_migration_session", "cloud_migration_session.uid = cloud_migration_snapshot.session_uid")
sess.Join("INNER", "cloud_migration_session",
"cloud_migration_session.uid = cloud_migration_snapshot.session_uid AND cloud_migration_session.org_id = ?", query.OrgID,
)
if query.Limit != GetAllSnapshots {
offset := (query.Page - 1) * query.Limit
sess.Limit(query.Limit, offset)
@@ -339,7 +348,7 @@ func (ss *sqlStore) GetSnapshotList(ctx context.Context, query cloudmigration.Li
snapshot.EncryptionKey = []byte(secret)
}
if stats, err := ss.GetSnapshotResourceStats(ctx, snapshot.UID); err != nil {
if stats, err := ss.getSnapshotResourceStats(ctx, snapshot.UID); err != nil {
return nil, err
} else {
snapshot.StatsRollup = *stats
@@ -351,7 +360,7 @@ func (ss *sqlStore) GetSnapshotList(ctx context.Context, query cloudmigration.Li
// createUpdateSnapshotResources either updates a migration resource for a snapshot, or creates it if it does not exist.
// If the uid is not known, it uses snapshot_uid + resource_uid as a lookup.
func (ss *sqlStore) CreateUpdateSnapshotResources(ctx context.Context, snapshotUid string, resources []cloudmigration.CloudMigrationResource) error {
func (ss *sqlStore) createUpdateSnapshotResources(ctx context.Context, snapshotUid string, resources []cloudmigration.CloudMigrationResource) error {
return ss.db.InTransaction(ctx, func(ctx context.Context) error {
sql := "UPDATE cloud_migration_resource SET status=?, error_string=? WHERE uid=? OR (snapshot_uid=? AND resource_uid=?)"
err := ss.db.WithDbSession(ctx, func(sess *sqlstore.DBSession) error {
@@ -385,7 +394,7 @@ func (ss *sqlStore) CreateUpdateSnapshotResources(ctx context.Context, snapshotU
})
}
func (ss *sqlStore) GetSnapshotResources(ctx context.Context, snapshotUid string, page int, limit int) ([]cloudmigration.CloudMigrationResource, error) {
func (ss *sqlStore) getSnapshotResources(ctx context.Context, snapshotUid string, page int, limit int) ([]cloudmigration.CloudMigrationResource, error) {
if page < 1 {
page = 1
}
@@ -407,7 +416,7 @@ func (ss *sqlStore) GetSnapshotResources(ctx context.Context, snapshotUid string
return resources, nil
}
func (ss *sqlStore) GetSnapshotResourceStats(ctx context.Context, snapshotUid string) (*cloudmigration.SnapshotResourceStats, error) {
func (ss *sqlStore) getSnapshotResourceStats(ctx context.Context, snapshotUid string) (*cloudmigration.SnapshotResourceStats, error) {
typeCounts := make([]struct {
Count int `json:"count"`
Type string `json:"type"`
@@ -454,7 +463,7 @@ func (ss *sqlStore) GetSnapshotResourceStats(ctx context.Context, snapshotUid st
return stats, nil
}
func (ss *sqlStore) DeleteSnapshotResources(ctx context.Context, snapshotUid string) error {
func (ss *sqlStore) deleteSnapshotResources(ctx context.Context, snapshotUid string) error {
return ss.db.WithDbSession(ctx, func(sess *sqlstore.DBSession) error {
_, err := sess.Delete(cloudmigration.CloudMigrationResource{
SnapshotUID: snapshotUid,
@@ -497,3 +506,19 @@ func (ss *sqlStore) decryptToken(ctx context.Context, cm *cloudmigration.CloudMi
return nil
}
// TODO: move this function to dashboards/databases/databases.go
func (ss *sqlStore) GetAllDashboardsByOrgId(ctx context.Context, orgID int64) ([]*dashboards.Dashboard, error) {
//ctx, span := tracer.Start(ctx, "dashboards.database.GetAllDashboardsByOrgId")
//defer span.End()
var dashs = make([]*dashboards.Dashboard, 0)
err := ss.db.WithDbSession(ctx, func(session *db.Session) error {
// "deleted IS NULL" filters out soft-deleted dashboards
return session.Where("org_id = ? AND deleted IS NULL", orgID).Find(&dashs)
})
if err != nil {
return nil, err
}
return dashs, nil
}


@@ -26,7 +26,7 @@ func Test_GetAllCloudMigrationSessions(t *testing.T) {
ctx := context.Background()
t.Run("get all cloud_migration_session entries", func(t *testing.T) {
value, err := s.GetCloudMigrationSessionList(ctx)
value, err := s.GetCloudMigrationSessionList(ctx, 1)
require.NoError(t, err)
require.Equal(t, 3, len(value))
for _, m := range value {
@@ -55,6 +55,7 @@ func Test_CreateMigrationSession(t *testing.T) {
cm := cloudmigration.CloudMigrationSession{
AuthToken: encodeToken("token"),
Slug: "fake_stack",
OrgID: 3,
StackID: 1234,
RegionSlug: "fake_slug",
ClusterSlug: "fake_cluster_slug",
@@ -64,7 +65,7 @@ func Test_CreateMigrationSession(t *testing.T) {
require.NotEmpty(t, sess.ID)
require.NotEmpty(t, sess.UID)
getRes, err := s.GetMigrationSessionByUID(ctx, sess.UID)
getRes, err := s.GetMigrationSessionByUID(ctx, 3, sess.UID)
require.NoError(t, err)
require.Equal(t, sess.ID, getRes.ID)
require.Equal(t, sess.UID, getRes.UID)
@@ -81,13 +82,15 @@ func Test_GetMigrationSessionByUID(t *testing.T) {
ctx := context.Background()
t.Run("find session by uid", func(t *testing.T) {
uid := "qwerty"
mig, err := s.GetMigrationSessionByUID(ctx, uid)
orgId := int64(1)
mig, err := s.GetMigrationSessionByUID(ctx, orgId, uid)
require.NoError(t, err)
require.Equal(t, uid, mig.UID)
require.Equal(t, orgId, mig.OrgID)
})
t.Run("returns error if session is not found by uid", func(t *testing.T) {
_, err := s.GetMigrationSessionByUID(ctx, "fake_uid_1234")
_, err := s.GetMigrationSessionByUID(ctx, 1, "fake_uid_1234")
require.ErrorIs(t, cloudmigration.ErrMigrationNotFound, err)
})
}
@@ -171,7 +174,10 @@ func Test_SnapshotManagement(t *testing.T) {
ctx := context.Background()
t.Run("tests the snapshot lifecycle", func(t *testing.T) {
session, err := s.CreateMigrationSession(ctx, cloudmigration.CloudMigrationSession{})
session, err := s.CreateMigrationSession(ctx, cloudmigration.CloudMigrationSession{
OrgID: 1,
AuthToken: encodeToken("token"),
})
require.NoError(t, err)
// create a snapshot
@@ -185,7 +191,7 @@ func Test_SnapshotManagement(t *testing.T) {
require.NotEmpty(t, snapshotUid)
// retrieve it from the db
snapshot, err := s.GetSnapshotByUID(ctx, session.UID, snapshotUid, 0, 0)
snapshot, err := s.GetSnapshotByUID(ctx, 1, session.UID, snapshotUid, 0, 0)
require.NoError(t, err)
require.Equal(t, cloudmigration.SnapshotStatusCreating, snapshot.Status)
@@ -194,22 +200,22 @@ func Test_SnapshotManagement(t *testing.T) {
require.NoError(t, err)
// retrieve it again
snapshot, err = s.GetSnapshotByUID(ctx, session.UID, snapshotUid, 0, 0)
snapshot, err = s.GetSnapshotByUID(ctx, 1, session.UID, snapshotUid, 0, 0)
require.NoError(t, err)
require.Equal(t, cloudmigration.SnapshotStatusCreating, snapshot.Status)
// lists snapshots and ensures it's in there
snapshots, err := s.GetSnapshotList(ctx, cloudmigration.ListSnapshotsQuery{SessionUID: session.UID, Page: 1, Limit: 100})
snapshots, err := s.GetSnapshotList(ctx, cloudmigration.ListSnapshotsQuery{SessionUID: session.UID, OrgID: 1, Page: 1, Limit: 100})
require.NoError(t, err)
require.Len(t, snapshots, 1)
require.Equal(t, *snapshot, snapshots[0])
// delete snapshot
err = s.DeleteSnapshot(ctx, snapshotUid)
err = s.deleteSnapshot(ctx, 1, snapshotUid)
require.NoError(t, err)
// now we expect not to find the snapshot
snapshot, err = s.GetSnapshotByUID(ctx, session.UID, snapshotUid, 0, 0)
snapshot, err = s.GetSnapshotByUID(ctx, 1, session.UID, snapshotUid, 0, 0)
require.ErrorIs(t, err, cloudmigration.ErrSnapshotNotFound)
require.Nil(t, snapshot)
})
@@ -221,12 +227,12 @@ func Test_SnapshotResources(t *testing.T) {
t.Run("tests CRUD of snapshot resources", func(t *testing.T) {
// Get the default rows from the test
resources, err := s.GetSnapshotResources(ctx, "poiuy", 0, 100)
resources, err := s.getSnapshotResources(ctx, "poiuy", 0, 100)
assert.NoError(t, err)
assert.Len(t, resources, 3)
// create a new resource and update an existing resource
err = s.CreateUpdateSnapshotResources(ctx, "poiuy", []cloudmigration.CloudMigrationResource{
err = s.createUpdateSnapshotResources(ctx, "poiuy", []cloudmigration.CloudMigrationResource{
{
Type: cloudmigration.DatasourceDataType,
RefID: "mi39fj",
@@ -240,7 +246,7 @@ func Test_SnapshotResources(t *testing.T) {
assert.NoError(t, err)
// Get resources again
resources, err = s.GetSnapshotResources(ctx, "poiuy", 0, 100)
resources, err = s.getSnapshotResources(ctx, "poiuy", 0, 100)
assert.NoError(t, err)
assert.Len(t, resources, 4)
// ensure existing resource was updated
@@ -259,7 +265,7 @@ func Test_SnapshotResources(t *testing.T) {
}
// check stats
stats, err := s.GetSnapshotResourceStats(ctx, "poiuy")
stats, err := s.getSnapshotResourceStats(ctx, "poiuy")
assert.NoError(t, err)
assert.Equal(t, map[cloudmigration.MigrateDataType]int{
cloudmigration.DatasourceDataType: 2,
@@ -273,10 +279,10 @@ func Test_SnapshotResources(t *testing.T) {
assert.Equal(t, 4, stats.Total)
// delete snapshot resources
err = s.DeleteSnapshotResources(ctx, "poiuy")
err = s.deleteSnapshotResources(ctx, "poiuy")
assert.NoError(t, err)
// make sure they're gone
resources, err = s.GetSnapshotResources(ctx, "poiuy", 0, 100)
resources, err = s.getSnapshotResources(ctx, "poiuy", 0, 100)
assert.NoError(t, err)
assert.Len(t, resources, 0)
})
@@ -289,7 +295,7 @@ func TestGetSnapshotList(t *testing.T) {
ctx := context.Background()
t.Run("returns list of snapshots that belong to a session", func(t *testing.T) {
snapshots, err := s.GetSnapshotList(ctx, cloudmigration.ListSnapshotsQuery{SessionUID: sessionUID, Page: 1, Limit: 100})
snapshots, err := s.GetSnapshotList(ctx, cloudmigration.ListSnapshotsQuery{SessionUID: sessionUID, OrgID: 1, Page: 1, Limit: 100})
require.NoError(t, err)
ids := make([]string, 0)
@@ -310,7 +316,7 @@ func TestGetSnapshotList(t *testing.T) {
t.Run("if the session is deleted, snapshots can't be retrieved anymore", func(t *testing.T) {
// Delete the session.
_, _, err := s.DeleteMigrationSessionByUID(ctx, sessionUID)
_, _, err := s.DeleteMigrationSessionByUID(ctx, 1, sessionUID)
require.NoError(t, err)
// Fetch the snapshots that belong to the deleted session.
@@ -382,15 +388,17 @@ func setUpTest(t *testing.T) (*sqlstore.SQLStore, *sqlStore) {
// insert cloud migration test data
_, err := testDB.GetSqlxSession().Exec(ctx, `
INSERT INTO
cloud_migration_session (id, uid, auth_token, slug, stack_id, region_slug, cluster_slug, created, updated)
cloud_migration_session (id, uid, org_id, auth_token, slug, stack_id, region_slug, cluster_slug, created, updated)
VALUES
(1,'qwerty', ?, '11111', 11111, 'test', 'test', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000'),
(2,'asdfgh', ?, '22222', 22222, 'test', 'test', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000'),
(3,'zxcvbn', ?, '33333', 33333, 'test', 'test', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000');
(1,'qwerty', 1, ?, '11111', 11111, 'test', 'test', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000'),
(2,'asdfgh', 1, ?, '22222', 22222, 'test', 'test', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000'),
(3,'zxcvbn', 1, ?, '33333', 33333, 'test', 'test', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000'),
(4,'zxcvbn_org2', 2, ?, '33333', 33333, 'test', 'test', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000');
`,
encodeToken("12345"),
encodeToken("6789"),
encodeToken("777"),
encodeToken("0987"),
)
require.NoError(t, err)
@@ -399,9 +407,10 @@ func setUpTest(t *testing.T) (*sqlstore.SQLStore, *sqlStore) {
INSERT INTO
cloud_migration_snapshot (session_uid, uid, created, updated, finished, status)
VALUES
('qwerty', 'poiuy', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000', '2024-03-27 15:30:43.000', "finished"),
('qwerty', 'lkjhg', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000', '2024-03-27 15:30:43.000', "finished"),
('zxcvbn', 'mnbvvc', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000', '2024-03-27 15:30:43.000', "finished");
('qwerty', 'poiuy', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000', '2024-03-27 15:30:43.000', "finished"),
('qwerty', 'lkjhg', '2024-03-26 15:30:36.000', '2024-03-27 15:30:43.000', '2024-03-27 15:30:43.000', "finished"),
('zxcvbn', 'mnbvvc', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000', '2024-03-27 15:30:43.000', "finished"),
('zxcvbn_org2', 'mnbvvc_org2', '2024-03-25 15:30:36.000', '2024-03-27 15:30:43.000', '2024-03-27 15:30:43.000', "finished");
`,
)
require.NoError(t, err)
@@ -419,7 +428,8 @@ func setUpTest(t *testing.T) (*sqlstore.SQLStore, *sqlStore) {
('mnbvde', 'poiuy', 'DATASOURCE', 'jf38gh', 'OK', ''),
('qwerty', 'poiuy', 'DASHBOARD', 'ejcx4d', 'ERROR', 'fake error'),
('zxcvbn', 'poiuy', 'FOLDER', 'fi39fj', 'PENDING', ''),
('4fi9sd', '39fi39', 'FOLDER', 'fi39fj', 'OK', '');
('4fi9sd', '39fi39', 'FOLDER', 'fi39fj', 'OK', ''),
('4fi9ee', 'mnbvvc_org2', 'DATASOURCE', 'fi39asd', 'OK', '');
`,
)
require.NoError(t, err)


@@ -21,6 +21,7 @@ var (
// CloudMigrationSession represents a configured migration token
type CloudMigrationSession struct {
ID int64 `xorm:"pk autoincr 'id'"`
OrgID int64 `xorm:"org_id"`
UID string `xorm:"uid"`
AuthToken string
Slug string
@@ -118,6 +119,8 @@ type CloudMigrationRunList struct {
type CloudMigrationSessionRequest struct {
AuthToken string
// OrgID in the on-prem instance
OrgID int64
}
type CloudMigrationSessionResponse struct {
@@ -133,6 +136,7 @@ type CloudMigrationSessionListResponse struct {
type GetSnapshotsQuery struct {
SnapshotUID string
OrgID int64
SessionUID string
ResultPage int
ResultLimit int
@@ -140,6 +144,7 @@ type GetSnapshotsQuery struct {
type ListSnapshotsQuery struct {
SessionUID string
OrgID int64
Page int
Limit int
}
@@ -162,13 +167,14 @@ type Base64EncodedTokenPayload struct {
Instance Base64HGInstance
}
func (p Base64EncodedTokenPayload) ToMigration() CloudMigrationSession {
func (p Base64EncodedTokenPayload) ToMigration(orgID int64) CloudMigrationSession {
return CloudMigrationSession{
AuthToken: p.Token,
Slug: p.Instance.Slug,
StackID: p.Instance.StackID,
RegionSlug: p.Instance.RegionSlug,
ClusterSlug: p.Instance.ClusterSlug,
OrgID: orgID,
}
}


@@ -492,7 +492,7 @@ func (dr *DashboardServiceImpl) setDefaultPermissions(ctx context.Context, dto *
userID, err := identity.IntIdentifier(dto.User.GetID())
if err != nil {
dr.log.Error("Could not make user admin", "dashboard", dash.Title, "id", dto.User.GetID(), "error", err)
} else if identity.IsIdentityType(dto.User.GetID(), identity.TypeUser) {
} else if identity.IsIdentityType(dto.User.GetID(), identity.TypeUser, identity.TypeServiceAccount) {
permissions = append(permissions, accesscontrol.SetResourcePermissionCommand{
UserID: userID, Permission: dashboardaccess.PERMISSION_ADMIN.String(),
})


@@ -158,7 +158,7 @@ type AlertNG struct {
func (ng *AlertNG) init() error {
// AlertNG should be initialized before the cancellation deadline of initCtx
initCtx, cancelFunc := context.WithTimeout(context.Background(), 30*time.Second)
initCtx, cancelFunc := context.WithTimeout(context.Background(), ng.Cfg.UnifiedAlerting.InitializationTimeout)
defer cancelFunc()
ng.store.Logger = ng.Log


@@ -46,7 +46,8 @@ func SetupTestEnv(tb testing.TB, baseInterval time.Duration) (*ngalert.AlertNG,
cfg := setting.NewCfg()
cfg.UnifiedAlerting = setting.UnifiedAlertingSettings{
BaseInterval: setting.SchedulerBaseInterval,
BaseInterval: setting.SchedulerBaseInterval,
InitializationTimeout: 30 * time.Second,
}
// AlertNG database migrations run and the relative database tables are created only when it's enabled
cfg.UnifiedAlerting.Enabled = new(bool)


@@ -499,6 +499,7 @@ func setupEnv(t *testing.T, replStore db.ReplDB, cfg *setting.Cfg, b bus.Bus, qu
ac := acimpl.ProvideAccessControl(featuremgmt.WithFeatures(), zanzana.NewNoopClient())
ruleStore, err := ngstore.ProvideDBStore(cfg, featuremgmt.WithFeatures(), sqlStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac)
require.NoError(t, err)
cfg.UnifiedAlerting.InitializationTimeout = 30 * time.Second
_, err = ngalert.ProvideService(
cfg, featuremgmt.WithFeatures(), nil, nil, routing.NewRouteRegister(), sqlStore, ngalertfakes.NewFakeKVStore(t), nil, nil, quotaService,
secretsService, nil, m, &foldertest.FakeService{}, &acmock.Mock{}, &dashboards.FakeDashboardService{}, nil, b, &acmock.Mock{},


@@ -86,7 +86,7 @@ func (m *orphanedServiceAccountPermissions) exec(sess *xorm.Session, mg *migrato
}
// delete all orphaned permissions
rawDelete := "DELETE FROM permission AS p WHERE p.kind = 'serviceaccounts' AND p.identifier IN(?" + strings.Repeat(",?", len(orphaned)-1) + ")"
rawDelete := "DELETE FROM permission WHERE kind = 'serviceaccounts' AND identifier IN(?" + strings.Repeat(",?", len(orphaned)-1) + ")"
deleteArgs := make([]any, 0, len(orphaned)+1)
deleteArgs = append(deleteArgs, rawDelete)
for _, id := range orphaned {
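The rewritten DELETE drops the table alias (single-table `DELETE FROM t AS p ... WHERE p.col` is rejected by some databases, so the unaliased form is more portable) while keeping the `strings.Repeat(",?", len(orphaned)-1)` idiom for building one placeholder per id. A sketch of that placeholder-building idiom (the `buildDelete` helper is illustrative; like the migration, it assumes the caller has already checked the slice is non-empty):

```go
package main

import (
	"fmt"
	"strings"
)

// buildDelete builds a parameterized IN(...) clause with one "?" per id.
// Caller must guarantee len(ids) > 0, as the migration does.
func buildDelete(ids []string) (string, []any) {
	sql := "DELETE FROM permission WHERE kind = 'serviceaccounts' AND identifier IN(?" +
		strings.Repeat(",?", len(ids)-1) + ")"
	args := make([]any, 0, len(ids))
	for _, id := range ids {
		args = append(args, id)
	}
	return sql, args
}

func main() {
	sql, args := buildDelete([]string{"a", "b", "c"})
	fmt.Println(sql)
	fmt.Println(len(args))
}
```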


@@ -66,7 +66,7 @@ func addCloudMigrationsMigrations(mg *Migrator) {
}))
// --- v2 - asynchronous workflow refactor
sessionTable := Table{
migrationSessionTable := Table{
Name: "cloud_migration_session",
Columns: []*Column{
{Name: "id", Type: DB_BigInt, IsPrimaryKey: true, IsAutoIncrement: true},
@@ -99,7 +99,7 @@ func addCloudMigrationsMigrations(mg *Migrator) {
},
}
addTableReplaceMigrations(mg, migrationTable, sessionTable, 2, map[string]string{
addTableReplaceMigrations(mg, migrationTable, migrationSessionTable, 2, map[string]string{
"id": "id",
"uid": "uid",
"auth_token": "auth_token",
@@ -158,4 +158,9 @@ func addCloudMigrationsMigrations(mg *Migrator) {
// -- delete the snapshot result column while still in the experimental phase
mg.AddMigration("delete cloud_migration_snapshot.result column", NewRawSQLMigration("ALTER TABLE cloud_migration_snapshot DROP COLUMN result"))
// -- Adds org_id column for all elements - defaults to 1 (default org)
mg.AddMigration("add cloud_migration_session.org_id column", NewAddColumnMigration(migrationSessionTable, &Column{
Name: "org_id", Type: DB_BigInt, Nullable: false, Default: "1",
}))
}


@@ -158,6 +158,9 @@ func addUserMigrations(mg *Migrator) {
// Service account logins were not unique per org. This migration is part of making them unique per org,
// so that service accounts with the same name can be created across orgs
mg.AddMigration(usermig.AllowSameLoginCrossOrgs, &usermig.ServiceAccountsSameLoginCrossOrgs{})
// Before it was fixed, the previous migration introduced the org_id again in logins that already had it.
// This migration removes the duplicate org_id from the login.
mg.AddMigration(usermig.DedupOrgInLogin, &usermig.ServiceAccountsDeduplicateOrgInLogin{})
// Users login and email should be in lower case
mg.AddMigration(usermig.LowerCaseUserLoginAndEmail, &usermig.UsersLowerCaseLoginAndEmail{})


@@ -9,6 +9,7 @@ import (
const (
AllowSameLoginCrossOrgs = "update login field with orgid to allow for multiple service accounts with same name across orgs"
DedupOrgInLogin = "update service accounts login field orgid to appear only once"
)
// Service account logins were not unique per org. This migration is part of making them unique per org
@@ -76,3 +77,60 @@ func (p *ServiceAccountsSameLoginCrossOrgs) Exec(sess *xorm.Session, mg *migrato
}
return err
}
type ServiceAccountsDeduplicateOrgInLogin struct {
migrator.MigrationBase
}
func (p *ServiceAccountsDeduplicateOrgInLogin) SQL(dialect migrator.Dialect) string {
return "code migration"
}
func (p *ServiceAccountsDeduplicateOrgInLogin) Exec(sess *xorm.Session, mg *migrator.Migrator) error {
dialect := mg.Dialect
var err error
// var logins []Login
switch dialect.DriverName() {
case migrator.Postgres:
_, err = sess.Exec(`
UPDATE "user" AS u
SET login = 'sa-' || org_id::text || SUBSTRING(login FROM LENGTH('sa-' || org_id::text || '-' || org_id::text)+1)
WHERE login IS NOT NULL
AND is_service_account = true
AND login LIKE 'sa-' || org_id::text || '-' || org_id::text || '-%'
AND NOT EXISTS (
SELECT 1
FROM "user" AS u2
WHERE u2.login = 'sa-' || u.org_id::text || SUBSTRING(u.login FROM LENGTH('sa-' || u.org_id::text || '-' || u.org_id::text)+1)
);
`)
case migrator.MySQL:
_, err = sess.Exec(`
UPDATE user AS u
LEFT JOIN user AS u2 ON u2.login = CONCAT('sa-', u.org_id, SUBSTRING(u.login, LENGTH(CONCAT('sa-', u.org_id, '-', u.org_id))+1))
SET u.login = CONCAT('sa-', u.org_id, SUBSTRING(u.login, LENGTH(CONCAT('sa-', u.org_id, '-', u.org_id))+1))
WHERE u.login IS NOT NULL
AND u.is_service_account = 1
AND u.login LIKE CONCAT('sa-', u.org_id, '-', u.org_id, '-%')
AND u2.login IS NULL;
`)
case migrator.SQLite:
_, err = sess.Exec(`
UPDATE ` + dialect.Quote("user") + ` AS u
SET login = 'sa-' || CAST(u.org_id AS TEXT) || SUBSTRING(u.login, LENGTH('sa-'||CAST(u.org_id AS TEXT)||'-'||CAST(u.org_id AS TEXT))+1)
WHERE u.login IS NOT NULL
AND u.is_service_account = 1
AND u.login LIKE 'sa-'||CAST(u.org_id AS TEXT)||'-'||CAST(u.org_id AS TEXT)||'-%'
AND NOT EXISTS (
SELECT 1
FROM ` + dialect.Quote("user") + ` AS u2
WHERE u2.login = 'sa-' || CAST(u.org_id AS TEXT) || SUBSTRING(u.login, LENGTH('sa-'||CAST(u.org_id AS TEXT)||'-'||CAST(u.org_id AS TEXT))+1)
);
`)
default:
return fmt.Errorf("dialect not supported: %s", dialect)
}
return err
}
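All three dialect-specific UPDATEs implement the same string rewrite: a login of the form `sa-{org}-{org}-rest` becomes `sa-{org}-rest`, and the `NOT EXISTS` / anti-join guard skips rows whose deduplicated login is already taken (the "handle conflicts" case in the test below). A hedged Go sketch of just the string transformation (the `dedupLogin` helper is illustrative, not part of the patch):

```go
package main

import (
	"fmt"
	"strings"
)

// dedupLogin removes a duplicated org id prefix: "sa-1-1-foo" -> "sa-1-foo".
// Logins without the duplicated prefix are returned unchanged.
func dedupLogin(login string, orgID int64) string {
	dup := fmt.Sprintf("sa-%d-%d-", orgID, orgID)
	if strings.HasPrefix(login, dup) {
		return fmt.Sprintf("sa-%d-", orgID) + strings.TrimPrefix(login, dup)
	}
	return login
}

func main() {
	fmt.Println(dedupLogin("sa-6480-6480-dedup", 6480)) // duplicate stripped
	fmt.Println(dedupLogin("sa-1-nochange", 1))         // left as-is
}
```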


@@ -285,3 +285,162 @@ func TestIntegrationServiceAccountMigration(t *testing.T) {
})
}
}
func TestIntegrationServiceAccountDedupOrgMigration(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test in short mode")
}
// Run initial migration to have a working DB
x := setupTestDB(t)
type migrationTestCase struct {
desc string
serviceAccounts []*user.User
wantServiceAccounts []*user.User
}
testCases := []migrationTestCase{
{
desc: "no change",
serviceAccounts: []*user.User{
{
ID: 1,
UID: "u1",
Name: "sa-1-nochange",
Login: "sa-1-nochange",
Email: "sa-1-nochange@example.org",
OrgID: 1,
Created: now,
Updated: now,
IsServiceAccount: true,
},
{
ID: 2,
UID: "u2",
Name: "sa-2-nochange",
Login: "sa-2-nochange",
Email: "sa-2-nochange@example.org",
OrgID: 2,
Created: now,
Updated: now,
IsServiceAccount: true,
},
},
wantServiceAccounts: []*user.User{
{
ID: 1,
Login: "sa-1-nochange",
},
{
ID: 2,
Login: "sa-2-nochange",
},
},
},
{
desc: "dedup org in login",
serviceAccounts: []*user.User{
{
ID: 3,
UID: "u3",
Name: "sa-1-dedup",
Login: "sa-1-1-dedup",
Email: "sa-1-dedup@example.org",
OrgID: 1,
Created: now,
Updated: now,
IsServiceAccount: true,
},
{
ID: 4,
UID: "u4",
Name: "sa-6480-dedup",
Login: "sa-6480-6480-dedup",
Email: "sa-6480-dedup@example.org",
OrgID: 6480,
Created: now,
Updated: now,
IsServiceAccount: true,
},
},
wantServiceAccounts: []*user.User{
{
ID: 3,
Login: "sa-1-dedup",
},
{
ID: 4,
Login: "sa-6480-dedup",
},
},
},
{
desc: "handle conflicts",
serviceAccounts: []*user.User{
{
ID: 5,
UID: "u5",
Name: "sa-2-conflict",
Login: "sa-2-conflict",
Email: "sa-2-conflict@example.org",
OrgID: 2,
Created: now,
Updated: now,
IsServiceAccount: true,
},
{
ID: 6,
UID: "u6",
Name: "sa-2b-conflict",
Login: "sa-2-2-conflict",
Email: "sa-2b-conflict@example.org",
OrgID: 2,
Created: now,
Updated: now,
IsServiceAccount: true,
},
},
wantServiceAccounts: []*user.User{
{
ID: 5,
Login: "sa-2-conflict",
},
{
ID: 6,
Login: "sa-2-2-conflict",
},
},
},
}
for _, tc := range testCases {
t.Run(tc.desc, func(t *testing.T) {
// Remove migration and permissions
_, errDeleteMig := x.Exec(`DELETE FROM migration_log WHERE migration_id = ?`, usermig.DedupOrgInLogin)
require.NoError(t, errDeleteMig)
// insert service accounts
serviceAccountsCount, err := x.Insert(tc.serviceAccounts)
require.NoError(t, err)
require.Equal(t, int64(len(tc.serviceAccounts)), serviceAccountsCount)
// run the migration
usermigrator := migrator.NewMigrator(x, &setting.Cfg{Logger: log.New("usermigration.test")})
usermigrator.AddMigration(usermig.DedupOrgInLogin, &usermig.ServiceAccountsDeduplicateOrgInLogin{})
errRunningMig := usermigrator.Start(false, 0)
require.NoError(t, errRunningMig)
// Check service accounts
resultingServiceAccounts := []user.User{}
err = x.Table("user").Find(&resultingServiceAccounts)
require.NoError(t, err)
for i := range tc.wantServiceAccounts {
for _, sa := range resultingServiceAccounts {
if sa.ID == tc.wantServiceAccounts[i].ID {
assert.Equal(t, tc.wantServiceAccounts[i].Login, sa.Login)
}
}
}
})
}
}


@@ -45,6 +45,7 @@ const (
}
}
`
alertingDefaultInitializationTimeout = 30 * time.Second
evaluatorDefaultEvaluationTimeout = 30 * time.Second
schedulerDefaultAdminConfigPollInterval = time.Minute
schedulerDefaultExecuteAlerts = true
@@ -90,6 +91,7 @@ type UnifiedAlertingSettings struct {
HARedisMaxConns int
HARedisTLSEnabled bool
HARedisTLSConfig dstls.ClientConfig
InitializationTimeout time.Duration
MaxAttempts int64
MinInterval time.Duration
EvaluationTimeout time.Duration
@@ -223,6 +225,11 @@ func (cfg *Cfg) ReadUnifiedAlertingSettings(iniFile *ini.File) error {
uaCfg.DisabledOrgs[orgID] = struct{}{}
}
uaCfg.InitializationTimeout, err = gtime.ParseDuration(valueAsString(ua, "initialization_timeout", (alertingDefaultInitializationTimeout).String()))
if err != nil {
return err
}
uaCfg.AdminConfigPollInterval, err = gtime.ParseDuration(valueAsString(ua, "admin_config_poll_interval", (schedulerDefaultAdminConfigPollInterval).String()))
if err != nil {
return err

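The new `initialization_timeout` setting above is read from the `[unified_alerting]` section and parsed with `gtime.ParseDuration`. A minimal configuration sketch (the `90s` value is an arbitrary example; the default is 30s when the key is unset):

```ini
[unified_alerting]
# Maximum time to wait for the alerting engine to initialize.
# Accepts Go-style durations such as 30s, 2m, 1h.
initialization_timeout = 90s
```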

@@ -26,6 +26,7 @@ func TestCfg_ReadUnifiedAlertingSettings(t *testing.T) {
require.Equal(t, 200*time.Millisecond, cfg.UnifiedAlerting.HAGossipInterval)
require.Equal(t, time.Minute, cfg.UnifiedAlerting.HAPushPullInterval)
require.Equal(t, 6*time.Hour, cfg.UnifiedAlerting.HAReconnectTimeout)
require.Equal(t, alertingDefaultInitializationTimeout, cfg.UnifiedAlerting.InitializationTimeout)
}
// With peers set, it correctly parses them.
@@ -35,10 +36,13 @@ func TestCfg_ReadUnifiedAlertingSettings(t *testing.T) {
require.NoError(t, err)
_, err = s.NewKey("ha_peers", "hostname1:9090,hostname2:9090,hostname3:9090")
require.NoError(t, err)
_, err = s.NewKey("initialization_timeout", "123s")
require.NoError(t, err)
require.NoError(t, cfg.ReadUnifiedAlertingSettings(cfg.Raw))
require.Len(t, cfg.UnifiedAlerting.HAPeers, 3)
require.ElementsMatch(t, []string{"hostname1:9090", "hostname2:9090", "hostname3:9090"}, cfg.UnifiedAlerting.HAPeers)
require.Equal(t, 123*time.Second, cfg.UnifiedAlerting.InitializationTimeout)
}
t.Run("should read 'scheduler_tick_interval'", func(t *testing.T) {


@@ -495,7 +495,7 @@ func (e *AzureLogAnalyticsDatasource) createRequest(ctx context.Context, queryUR
}
if query.AppInsightsQuery {
body["applications"] = query.Resources
body["applications"] = []string{query.Resources[0]}
}
jsonValue, err := json.Marshal(body)


@@ -649,7 +649,7 @@ func TestLogAnalyticsCreateRequest(t *testing.T) {
TimeColumn: "timestamp",
})
require.NoError(t, err)
expectedBody := fmt.Sprintf(`{"applications":["/subscriptions/test-sub/resourceGroups/test-rg/providers/Microsoft.Insights/components/r1","/subscriptions/test-sub/resourceGroups/test-rg/providers/Microsoft.Insights/components/r2"],"query":"","query_datetimescope_column":"timestamp","query_datetimescope_from":"%s","query_datetimescope_to":"%s","timespan":"%s/%s"}`, from.Format(time.RFC3339), to.Format(time.RFC3339), from.Format(time.RFC3339), to.Format(time.RFC3339))
expectedBody := fmt.Sprintf(`{"applications":["/subscriptions/test-sub/resourceGroups/test-rg/providers/Microsoft.Insights/components/r1"],"query":"","query_datetimescope_column":"timestamp","query_datetimescope_from":"%s","query_datetimescope_to":"%s","timespan":"%s/%s"}`, from.Format(time.RFC3339), to.Format(time.RFC3339), from.Format(time.RFC3339), to.Format(time.RFC3339))
body, err := io.ReadAll(req.Body)
require.NoError(t, err)
if !cmp.Equal(string(body), expectedBody) {

Some files were not shown because too many files have changed in this diff Show More