Compare commits

...

66 Commits

Author SHA1 Message Date
Sofia Papagiannaki
822ff7595e Improve comments and error message. 2021-12-06 09:27:52 +02:00
Kyle Brandt
515ebf4b4c security: fix dir traversal issue 2021-12-03 13:21:53 -05:00
Dimitris Sotirakis
68fe9e3431 [v8.0.x] Backport 36759 to v8.0.x (#36774)
* Delete verify-drone from windows

* Sync drone yaml

* Downgrade grabpl version

* Move publish-frontend-metrics step
2021-07-15 09:49:19 +02:00
Grot (@grafanabot)
beeef8b96b Update queries.md (#31941) (#36765)
* Update queries.md

Completed some examples to help newcomers understand relative time and time shift

* Update docs/sources/panels/queries.md

Co-authored-by: Geshi <ohayo@geshii.moe>

Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>
Co-authored-by: Geshi <ohayo@geshii.moe>
(cherry picked from commit e8d5b2431e)

Co-authored-by: castillo92 <37965565+castillo92@users.noreply.github.com>
2021-07-14 18:47:21 +02:00
Grot (@grafanabot)
634112e1d4 "Release: Updated versions in package to 8.0.6" (#36750) 2021-07-14 13:15:33 +02:00
Marcus Andersson
1116ef1983 [v8.0.x] Transformations: add 'prepare time series' transformer (#36748)
Co-authored-by: Ryan McKinley <ryantxu@gmail.com>
2021-07-14 12:23:02 +02:00
Grot (@grafanabot)
dfaae953f8 Postgres/MySQL/MSSQL: Fix name of time field should be named Time for time series queries (#36720) (#36746)
In v8, the name of the time field for time series queries changed from Time to the name of the
selected time column, e.g. time or time_sec. These changes make sure that the time field
is always returned with the name Time for time series queries.

Fixes #36059

Co-authored-by: Ryan McKinley <ryantxu@gmail.com>
(cherry picked from commit 10c892fa5b)

Co-authored-by: Marcus Efraimsson <marcus.efraimsson@gmail.com>
2021-07-14 11:52:27 +02:00
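The fix described above amounts to renaming the returned time field back to `Time`, whatever column alias the query selected. A rough sketch of that normalization (illustrative Python; the actual fix lives in Grafana's Go SQL data source code, and the field/dict shapes here are assumptions):

```python
def normalize_time_field(fields: list[dict]) -> list[dict]:
    # For time series queries, every time-typed field should be named
    # "Time", even if the query selected e.g. `time` or `time_sec`.
    out = []
    for field in fields:
        if field.get("type") == "time":
            field = {**field, "name": "Time"}
        out.append(field)
    return out
```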
Grot (@grafanabot)
d6135f54a9 change template expansion missing value handling (#36679) (#36715)
(cherry picked from commit 310d3ebe3d)

Co-authored-by: David Parrott <stomp.box.yo@gmail.com>
2021-07-14 10:30:40 +01:00
Grot (@grafanabot)
1d5ca3ae77 fix gzipped plugin asset response (#36721) (#36741)
(cherry picked from commit 7dbe388d4e)

Co-authored-by: Will Browne <wbrowne@users.noreply.github.com>
2021-07-14 10:11:12 +02:00
Alexander Emelin
9fec4a7f80 Live: handle influx input with incomplete/asymmetrical field set (#36664) (#36726)
(cherry picked from commit 607c5d2555)
2021-07-13 22:17:46 +03:00
Alexander Emelin
ad3d82abee Live: avoid panic when type changes (#35394) (#36723)
(cherry picked from commit 4b8d796c54)

Co-authored-by: Ryan McKinley <ryantxu@gmail.com>
2021-07-13 20:55:18 +03:00
Grot (@grafanabot)
2575e64e8a Add ValueString to the documentation for alerts (#36654) (#36676)
(cherry picked from commit 36cb396568)

Co-authored-by: George Robinson <85952834+gerobinson@users.noreply.github.com>
2021-07-13 14:20:58 +01:00
Grot (@grafanabot)
278c85ca07 Fix Postgres query handling null values for smallint (#36648) (#36688)
* Fix Postgres query handling null values for smallint

* Fix converting to int16

(cherry picked from commit 5d01add7da)

Co-authored-by: idafurjes <36131195+idafurjes@users.noreply.github.com>
2021-07-13 13:49:31 +02:00
Grot (@grafanabot)
d11e65d3ac live: better error logging in push API (#36601) (#36623)
(cherry picked from commit e1358eeb76)

Co-authored-by: Alexander Emelin <frvzmb@gmail.com>
2021-07-13 14:43:15 +03:00
Grot (@grafanabot)
028e41b152 Plugins: Improve grafana-cli UX + API response messaging for plugin install incompatibility scenario (#36556) (#36692)
* improve UX for plugin install incompatibility

* refactor test

(cherry picked from commit e06335ffe9)

Co-authored-by: Will Browne <wbrowne@users.noreply.github.com>
2021-07-13 10:21:41 +02:00
Grot (@grafanabot)
68374a988a Avoid breaking on fieldConfig without defaults field (#36666) (#36690)
This would result in a `Dashboard init failed` error when migrating
dashboards with a folded panel that has a `fieldConfig` but has not
defined `fieldConfig.defaults`.

(cherry picked from commit 81511e34d9)

Co-authored-by: Gustaf Lindstedt <gustaflindstedt@protonmail.com>
2021-07-13 10:14:05 +02:00
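The guard described above boils down to tolerating a missing `defaults` key during dashboard migration. A minimal sketch (illustrative Python mirroring the idea; the real fix is in Grafana's TypeScript migration code):

```python
def migrate_field_config(panel: dict) -> dict:
    # A folded panel may carry a fieldConfig without a `defaults` key;
    # treat that the same as empty defaults instead of failing with
    # a "Dashboard init failed" error.
    field_config = panel.get("fieldConfig") or {}
    defaults = field_config.get("defaults") or {}
    overrides = field_config.get("overrides") or []
    return {"defaults": defaults, "overrides": overrides}
```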
Grot (@grafanabot)
fb60ab66f1 TimeSeries: Improve tooltip positioning when tooltip overflows (#36440) (#36672)
* TimeSeries: Improve tooltip positioning when tooltip overflows

* VizTooltip: Use react-popper, extract positioning calculation into util function + add unit tests

* VizTooltip: Keep ref as tooltipRef

* Use popper only for VizTooltip positioning

* VizTooltip: Set altAxis to true to prevent overflow on y axis

Co-authored-by: Dominik Prokop <dominik.prokop@grafana.com>
(cherry picked from commit b1d576c5da)

Co-authored-by: Ashley Harrison <ashley.harrison@grafana.com>
2021-07-12 18:15:05 +02:00
Grot (@grafanabot)
3d786313c2 Alerting: A better and cleaner way to know if Alertmanager is initialised (#36659) (#36665)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
(cherry picked from commit 8efe1856e2)

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
2021-07-12 19:22:35 +05:30
Grot (@grafanabot)
a65c0a491e Links: Fixes issue with some links causing full page reload (#36631) (#36649)
(cherry picked from commit 6c5d0db255)

Co-authored-by: Torkel Ödegaard <torkel@grafana.org>
2021-07-12 15:40:35 +02:00
Grot (@grafanabot)
fea410ebb1 Add StreamName dimension for AWS/KinesisVideo namespace (#36655) (#36663)
(cherry picked from commit 37c3e6f9b9)

Co-authored-by: Brent Cetinich <73208365+brentcetinich@users.noreply.github.com>
2021-07-12 14:46:08 +02:00
Grot (@grafanabot)
fb9e6d3286 Tempo: show hex strings instead of uints for ids (#36471) (#36662)
(cherry picked from commit ee89e51c48)

Co-authored-by: Zoltán Bedi <zoltan.bedi@gmail.com>
2021-07-12 14:33:51 +02:00
Grot (@grafanabot)
e439828db9 Alerting: Fix potential panic in Alertmanager when starting up (#36562) (#36638)
* Alerting: Fix potential panic in Alertmanager when starting up

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>

* Fix reviews

Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
(cherry picked from commit e19c690426)

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
2021-07-12 13:52:27 +02:00
Grot (@grafanabot)
1e4cd19824 A11y: ARIA hide image/link duplicate in news panel (#36642) (#36651)
* fix(a11y): Hide image/link duplicate

* fix: proper heading and time markup

(cherry picked from commit b8fbe70c14)

Co-authored-by: Tobias Skarhed <1438972+tskarhed@users.noreply.github.com>
2021-07-12 11:12:27 +02:00
Grot (@grafanabot)
b4e40c3280 influxdb: influxql: better tag-value filtering (#36570) (#36639)
(cherry picked from commit cc460110b1)

Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
2021-07-12 10:15:32 +02:00
Grot (@grafanabot)
249ad5c256 InfluxDB: Flux: fix backward compatibility for some queries (#36603) (#36615)
* influxdb: flux: better backward-compatibility

* added comment-explanation

(cherry picked from commit e4ece0530a)

Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
2021-07-12 09:16:49 +02:00
Grot (@grafanabot)
ae7e4811b1 DashboardList: Fix issue not re-fetching dashboard list after variable change (#36591) (#36625)
(cherry picked from commit c7d2d70799)

Co-authored-by: Torkel Ödegaard <torkel@grafana.org>
2021-07-11 10:59:25 +02:00
Grot (@grafanabot)
bb3432361c Database: Fix incorrect format of isolation level configuration parameter for MySQL (#36565) (#36618)
(cherry picked from commit ca2223f705)

Co-authored-by: Marcus Efraimsson <marcus.efraimsson@gmail.com>
2021-07-09 20:06:08 +02:00
Grot (@grafanabot)
2dd848c344 Stat: use shared data min/max for y auto-ranging (#36497) (#36617)
(cherry picked from commit bb1dac3c72)

Co-authored-by: Leon Sorokin <leeoniya@gmail.com>
2021-07-09 18:54:51 +02:00
Grot (@grafanabot)
cb71fddd24 Add AWS/AmazonMQ dimensions (#36573) (#36613)
(cherry picked from commit 1dc5d037e4)

Co-authored-by: Andres Martinez Gotor <andres.martinez@grafana.com>
2021-07-09 16:41:04 +02:00
Grot (@grafanabot)
1623150bb5 CloudWatch/Logs: Fix log alerts in new unified alerting (#36558) (#36605)
* Pass FromAlert header from new alerting

* Add better error messages

(cherry picked from commit ea2ba06b93)

Co-authored-by: Andrej Ocenas <mr.ocenas@gmail.com>
2021-07-09 14:58:35 +02:00
Grot (@grafanabot)
1734acbc9f live: better overview in docs (#36506) (#36577)
Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>
(cherry picked from commit 10a942aad0)

Co-authored-by: Alexander Emelin <frvzmb@gmail.com>
2021-07-08 20:28:08 +02:00
Grot (@grafanabot)
2d406f13de ReleaseNotes: Updated changelog and release notes for 8.0.5 (#36554) (#36566)
* ReleaseNotes: Updated changelog and release notes for 8.0.5

* Update _index.md

* Update CHANGELOG.md

Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>

Co-authored-by: Tobias Skarhed <1438972+tskarhed@users.noreply.github.com>
Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>
(cherry picked from commit 6266a9e77a)
2021-07-08 17:38:14 +02:00
Grot (@grafanabot)
6bc37296c7 Alerting: Allow space in label and annotation names (#36549) (#36557)
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
(cherry picked from commit 94d2520a84)

Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
2021-07-08 09:54:21 -04:00
Grot (@grafanabot)
cbb2aa5001 Release: Updated versions in package to 8.0.5 (#36553) 2021-07-08 12:30:44 +02:00
Grot (@grafanabot)
4c16d55e11 allow for appropriate content-type to be set (#36545) (#36550)
(cherry picked from commit 2616580bae)

Co-authored-by: Will Browne <wbrowne@users.noreply.github.com>
2021-07-08 11:35:42 +02:00
Grot (@grafanabot)
d9858e0af9 Timeseries Panel: Retain alerts when migrating from old graph (#36514) (#36546)
Closes #36106

(cherry picked from commit 8d66db09bf)

Co-authored-by: kay delaney <45561153+kaydelaney@users.noreply.github.com>
2021-07-08 11:07:16 +02:00
Grot (@grafanabot)
18c547d695 Live: document allowed_origins (#36433) (#36495)
(cherry picked from commit e60950a8c7)

Co-authored-by: Alexander Emelin <frvzmb@gmail.com>
2021-07-07 10:56:43 -07:00
Grot (@grafanabot)
2671c7d6cd Alerting API: Restrict access to Alertmanager configuration (#36507) (#36516)
* Alerting API: Restrict access to Alertmanager configuration to viewers

(cherry picked from commit fc90d47863)

Co-authored-by: Sofia Papagiannaki <papagian@users.noreply.github.com>
2021-07-07 16:56:00 +03:00
Grot (@grafanabot)
fffbb74aaf Links: Fix links to other apps outside Grafana when under sub path (#36498) (#36515)
(cherry picked from commit eed1f36613)

Co-authored-by: Torkel Ödegaard <torkel@grafana.org>
2021-07-07 15:49:31 +02:00
Grot (@grafanabot)
074ef08db3 Table: Fixes color for data links (#36446) (#36496)
* Table: add styling for anchor tag

* inherit color from parent to anchor tag

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>
(cherry picked from commit 97dca963a9)

Co-authored-by: Tharun Rajendran <rajendrantharun@live.com>
2021-07-07 15:26:46 +02:00
Grot (@grafanabot)
5e83e3862a CloudWatch Logs: If Grafana Live isn't enabled, don't use the Live Channel (#36358) (#36510)
* If Live isn't enabled, don't use the Live Channel

* ..Import Config if you want to use it!

(cherry picked from commit 3e95c3826a)

Co-authored-by: Thomas Cave <github@thomas.cave.dev>
2021-07-07 14:08:35 +02:00
Grot (@grafanabot)
350bb10cdd Plugins: Improve API response for plugin assets (#36352) (#36505)
* improve API response for plugin assets 403

* remove unnecessary newline

(cherry picked from commit 333d520528)

Co-authored-by: Will Browne <wbrowne@users.noreply.github.com>
2021-07-07 12:40:37 +02:00
achatterjee-grafana
5cdd8dfed1 Docs: Removed folder and 3 files within. (#36487) 2021-07-06 21:13:25 +02:00
Will Browne
79e2da441a resolve conflicts (#36431) 2021-07-06 17:44:30 +02:00
Grot (@grafanabot)
0131a8ef05 Docs: Fix Azure Monitor refs (#36458) (#36478)
* Docs: Fix Azure Monitor refs

* more fixes

Co-authored-by: Robby Milo <robbymilo@gmail.com>
(cherry picked from commit eabf3fb674)

Co-authored-by: Josh Hunt <joshhunt@users.noreply.github.com>
2021-07-06 16:47:56 +02:00
Grot (@grafanabot)
cc843e67ae TooltipPlugin: Prevent Tooltip render if field is undefined (#36260) (#36464)
* Tooltip Plugin: Prevent tooltip render if focusedSeriesIdx is out of range

* TooltipPlugin: Also prevent render in multi case

* TooltipPlugin: Return null if field is undefined

(cherry picked from commit 96a3cc3cd8)

Co-authored-by: Ashley Harrison <ashley.harrison@grafana.com>
2021-07-06 12:19:40 +02:00
Grot (@grafanabot)
efbfb42f15 AzureMonitor: Fix issue where resource group name is missing on the resource picker button (#36400) (#36462)
* AzureMonitor: Fix issue where resource group name is missing in the UI

* fix

(cherry picked from commit 2a4191a2ee)

Co-authored-by: Josh Hunt <joshhunt@users.noreply.github.com>
2021-07-06 11:46:08 +02:00
Grot (@grafanabot)
fd515d1318 Folders: Return 409 Conflict status when folder already exists (#36429) (#36461)
* Return 409 Conflict when trying to post folder that already exists

* Fix tests

* Update documentation for new error message in folders api

(cherry picked from commit a18d3007a7)

Co-authored-by: Dimitris Sotirakis <dimitrios.sotirakis@grafana.com>
2021-07-06 11:32:58 +02:00
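The behavior above maps a "folder already exists" condition to an HTTP 409 Conflict instead of a generic error. A minimal sketch of that mapping (illustrative Python, not Grafana's actual Go handler; the message text is an assumption):

```python
from http import HTTPStatus


def create_folder(store: dict, uid: str, title: str) -> tuple[int, dict]:
    # Return 409 Conflict when a folder with the same uid already exists,
    # so clients can distinguish the duplicate case from other failures.
    if uid in store:
        return HTTPStatus.CONFLICT, {"message": "a folder with that uid already exists"}
    store[uid] = {"uid": uid, "title": title}
    return HTTPStatus.OK, store[uid]
```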
Grot (@grafanabot)
4efbf432a4 CloudwatchLogs: send error down to client (#36277) (#36438)
* CloudwatchLogs: send error down to client

* Move error handling down to startLiveQuery

(cherry picked from commit 0ae8a85828)

Co-authored-by: Zoltán Bedi <zoltan.bedi@gmail.com>
2021-07-05 16:34:29 +02:00
Grot (@grafanabot)
61ea316f10 DateFormats: Fix reading correct setting key for use_browser_locale (#36428) (#36434)
(cherry picked from commit 4932b9dfa4)

Co-authored-by: Torkel Ödegaard <torkel@grafana.org>
2021-07-05 15:27:35 +02:00
Leonard Gram
ab26f9c820 Azure OAuth: debug logs for user information (#36389) (#36394)
(cherry picked from commit 09a96ad2ad)
2021-07-05 11:05:49 +02:00
Grot (@grafanabot)
92ab8a4189 TimeSeries: Do not show series in tooltip if it's hidden in the viz (#36353) (#36423)
(cherry picked from commit 7ae656ff16)

Co-authored-by: Dominik Prokop <dominik.prokop@grafana.com>
2021-07-05 10:44:33 +02:00
Grot (@grafanabot)
190d7d12e3 Alerting: Fix prometheus API to check folder permissions (#36301) (#36421)
(cherry picked from commit 8a3edf280e)

Co-authored-by: Sofia Papagiannaki <papagian@users.noreply.github.com>
2021-07-05 11:31:58 +03:00
Grot (@grafanabot)
6c79c846ba Snapshots: Fixes snapshots absolute time range issue (#36350) (#36382)
(cherry picked from commit 09fed51be5)

Co-authored-by: Torkel Ödegaard <torkel@grafana.org>
2021-07-05 10:14:38 +02:00
Grot (@grafanabot)
862c9a2c73 fix: #36322 HistoryWrapper constructor history param not work (#36367) (#36399)
(cherry picked from commit 63715dcdef)

Co-authored-by: MeetzhDing <meet.zhding@foxmail.com>
2021-07-05 09:02:35 +02:00
Grot (@grafanabot)
fc45cf178e Allow white labeling loading logo (#36174) (#36386)
* Allow white-labeling the loading logo

* Add loading_logo to documentation

* Change loading_logo to loading_logo_url

(cherry picked from commit ef05596e07)

Co-authored-by: Selene <selenepinillos@gmail.com>
2021-07-05 09:50:57 +03:00
Grot (@grafanabot)
d03007de01 Dashboards: Add IsFolder field into models.GetDashboardQuery (#36214) (#36388)
* Add IsFolder field into models.GetDashboardQuery

* Reverted folderId - return dummy success when calling get folder with id 0

* Moved condition to upper level - add test

(cherry picked from commit 084c9c8746)

Co-authored-by: Dimitris Sotirakis <dimitrios.sotirakis@grafana.com>
2021-07-02 19:19:08 +03:00
Grot (@grafanabot)
1b23e38a86 AzureMonitor: Refresh documentation (#35371) (#36397)
* AzureMonitor: Refresh documentation

* logs/kusto

* finish up logs

* variables for log analytics

* Apply suggestions from code review

Co-authored-by: achatterjee-grafana <70489351+achatterjee-grafana@users.noreply.github.com>

* finish up main topics

* finish docs?

* typos and other review comments

* add link to sample arg queries

* split up azure docs

* workaround weird code duplication issue

* Update docs/sources/datasources/azuremonitor/template-variables.md

Co-authored-by: Sarah Zinger <sarahzinger@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: achatterjee-grafana <70489351+achatterjee-grafana@users.noreply.github.com>

* feedback

Co-authored-by: achatterjee-grafana <70489351+achatterjee-grafana@users.noreply.github.com>
Co-authored-by: Sarah Zinger <sarahzinger@users.noreply.github.com>
(cherry picked from commit ca5379d64d)

Co-authored-by: Josh Hunt <joshhunt@users.noreply.github.com>
2021-07-02 17:17:24 +02:00
Grot (@grafanabot)
7eb704f49d Docs: Update bar-gauge-panel.md (#36250) (#36395)
may be a mistake here?

(cherry picked from commit 379ed0a6f7)

Co-authored-by: dreamChenp12 <86648316+dreamChenp12@users.noreply.github.com>
2021-07-02 17:13:35 +02:00
Grot (@grafanabot)
df2500a928 Docs: Add $__rate_interval variable to global variables (#36378) (#36391)
* Add $__rate_interval variable documentation to global variables

* Update docs/sources/variables/variable-types/global-variables.md

Co-authored-by: achatterjee-grafana <70489351+achatterjee-grafana@users.noreply.github.com>

Co-authored-by: achatterjee-grafana <70489351+achatterjee-grafana@users.noreply.github.com>
(cherry picked from commit 0ca6fdd310)

Co-authored-by: Ivana Huckova <30407135+ivanahuckova@users.noreply.github.com>
2021-07-02 16:38:24 +02:00
Grot (@grafanabot)
1a598bf75e Docs: Improve title and documentation for share shortened link in Explore (#36380) (#36392)
* Improve title and documentation for share shortened link in Explore

* Update docs/sources/explore/_index.md

Co-authored-by: achatterjee-grafana <70489351+achatterjee-grafana@users.noreply.github.com>

Co-authored-by: achatterjee-grafana <70489351+achatterjee-grafana@users.noreply.github.com>
(cherry picked from commit 09e49f6118)

Co-authored-by: Ivana Huckova <30407135+ivanahuckova@users.noreply.github.com>
2021-07-02 16:34:22 +02:00
Grot (@grafanabot)
bc0c5f118b DashboardQueryRunner: Fixes unrestrained subscriptions being created (#36371) (#36375)
(cherry picked from commit b741245960)

Co-authored-by: Hugo Häggmark <hugo.haggmark@grafana.com>
2021-07-02 11:13:58 +02:00
Grot (@grafanabot)
ef45e1016a Remove AWS CW client cache (#36311) (#36372)
(cherry picked from commit 30dc4025c2)

Co-authored-by: Andres Martinez Gotor <andres.martinez@grafana.com>
2021-07-02 10:35:28 +02:00
Grot (@grafanabot)
468523a1ce Docs: Add security warning about using Grafana 8 alerts with multiple organisations (#36308) (#36354)
* Docs: Add security warning about using Grafana 8 alerts with multiple orgs

(cherry picked from commit d525a5a469)

Co-authored-by: Sofia Papagiannaki <papagian@users.noreply.github.com>
2021-07-01 18:53:20 +03:00
Grot (@grafanabot)
740aedbcbb ReleaseNotes: Updated changelog and release notes for 8.0.4 (#36347) (#36356)
* ReleaseNotes: Updated changelog and release notes for 8.0.4

* Update _index.md

Co-authored-by: Jack Westbrook <jack.westbrook@gmail.com>
(cherry picked from commit 3cce67c044)
2021-07-01 17:49:17 +02:00
Grot (@grafanabot)
dc4007e7bd TimeSeries: Fixes x-axis time format when tick increment is larger than a year (#36335) (#36348)
* TimeSeries: Fixes x-axis time format when tick increment is larger than a year

* removed modal change

* removed modal change

(cherry picked from commit f152180dc3)

Co-authored-by: Torkel Ödegaard <torkel@grafana.org>
2021-07-01 14:00:45 +02:00
150 changed files with 5201 additions and 1206 deletions

View File

@@ -17,7 +17,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - ./bin/grabpl verify-drone
   - curl -fLO https://github.com/jwilder/dockerize/releases/download/v$${DOCKERIZE_VERSION}/dockerize-linux-amd64-v$${DOCKERIZE_VERSION}.tar.gz
@@ -258,7 +258,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - ./bin/grabpl verify-drone
   - curl -fLO https://github.com/jwilder/dockerize/releases/download/v$${DOCKERIZE_VERSION}/dockerize-linux-amd64-v$${DOCKERIZE_VERSION}.tar.gz
@@ -323,17 +323,6 @@ steps:
   depends_on:
   - initialize
-- name: publish-frontend-metrics
-  image: grafana/build-container:1.4.1
-  commands:
-  - ./scripts/ci-frontend-metrics.sh | ./bin/grabpl publish-metrics $${GRAFANA_MISC_STATS_API_KEY}
-  environment:
-    GRAFANA_MISC_STATS_API_KEY:
-      from_secret: grafana_misc_stats_api_key
-  failure: ignore
-  depends_on:
-  - initialize
 - name: build-backend
   image: grafana/build-container:1.4.1
   commands:
@@ -351,6 +340,17 @@ steps:
   - initialize
   - test-frontend
+- name: publish-frontend-metrics
+  image: grafana/build-container:1.4.1
+  commands:
+  - ./scripts/ci-frontend-metrics.sh | ./bin/grabpl publish-metrics $${GRAFANA_MISC_STATS_API_KEY}
+  environment:
+    GRAFANA_MISC_STATS_API_KEY:
+      from_secret: grafana_misc_stats_api_key
+  failure: ignore
+  depends_on:
+  - build-frontend
 - name: build-plugins
   image: grafana/build-container:1.4.1
   commands:
@@ -589,8 +589,7 @@ steps:
   image: grafana/ci-wix:0.1.1
   commands:
   - $$ProgressPreference = "SilentlyContinue"
-  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/windows/grabpl.exe -OutFile grabpl.exe
-  - .\grabpl.exe verify-drone
+  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/windows/grabpl.exe -OutFile grabpl.exe
 - name: build-windows-installer
   image: grafana/ci-wix:0.1.1
@@ -639,7 +638,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - ./bin/grabpl verify-drone
   environment:
@@ -724,7 +723,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - ./bin/grabpl verify-drone
   - ./bin/grabpl verify-version ${DRONE_TAG}
@@ -1030,8 +1029,7 @@ steps:
   image: grafana/ci-wix:0.1.1
   commands:
   - $$ProgressPreference = "SilentlyContinue"
-  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/windows/grabpl.exe -OutFile grabpl.exe
-  - .\grabpl.exe verify-drone
+  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/windows/grabpl.exe -OutFile grabpl.exe
 - name: build-windows-installer
   image: grafana/ci-wix:0.1.1
@@ -1081,7 +1079,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - git clone "https://$${GITHUB_TOKEN}@github.com/grafana/grafana-enterprise.git"
   - cd grafana-enterprise
@@ -1506,7 +1504,7 @@ steps:
   image: grafana/ci-wix:0.1.1
   commands:
   - $$ProgressPreference = "SilentlyContinue"
-  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/windows/grabpl.exe -OutFile grabpl.exe
+  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/windows/grabpl.exe -OutFile grabpl.exe
   - git clone "https://$$env:GITHUB_TOKEN@github.com/grafana/grafana-enterprise.git"
   - cd grafana-enterprise
   - git checkout ${DRONE_TAG}
@@ -1523,7 +1521,6 @@ steps:
   - rm -force grabpl.exe
   - C:\App\grabpl.exe init-enterprise C:\App\grafana-enterprise
   - cp C:\App\grabpl.exe grabpl.exe
-  - .\grabpl.exe verify-drone
   depends_on:
   - clone
@@ -1575,7 +1572,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - ./bin/grabpl verify-drone
   - ./bin/grabpl verify-version ${DRONE_TAG}
@@ -1680,7 +1677,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - ./bin/grabpl verify-drone
   - ./bin/grabpl verify-version v7.3.0-test
@@ -1975,8 +1972,7 @@ steps:
   image: grafana/ci-wix:0.1.1
   commands:
   - $$ProgressPreference = "SilentlyContinue"
-  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/windows/grabpl.exe -OutFile grabpl.exe
-  - .\grabpl.exe verify-drone
+  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/windows/grabpl.exe -OutFile grabpl.exe
 - name: build-windows-installer
   image: grafana/ci-wix:0.1.1
@@ -2026,7 +2022,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - git clone "https://$${GITHUB_TOKEN}@github.com/grafana/grafana-enterprise.git"
   - cd grafana-enterprise
@@ -2445,7 +2441,7 @@ steps:
   image: grafana/ci-wix:0.1.1
   commands:
   - $$ProgressPreference = "SilentlyContinue"
-  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/windows/grabpl.exe -OutFile grabpl.exe
+  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/windows/grabpl.exe -OutFile grabpl.exe
   - git clone "https://$$env:GITHUB_TOKEN@github.com/grafana/grafana-enterprise.git"
   - cd grafana-enterprise
   - git checkout main
@@ -2462,7 +2458,6 @@ steps:
   - rm -force grabpl.exe
   - C:\App\grabpl.exe init-enterprise C:\App\grafana-enterprise
   - cp C:\App\grabpl.exe grabpl.exe
-  - .\grabpl.exe verify-drone
   depends_on:
   - clone
@@ -2514,7 +2509,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - ./bin/grabpl verify-drone
   - ./bin/grabpl verify-version v7.3.0-test
@@ -2619,7 +2614,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - ./bin/grabpl verify-drone
   - curl -fLO https://github.com/jwilder/dockerize/releases/download/v$${DOCKERIZE_VERSION}/dockerize-linux-amd64-v$${DOCKERIZE_VERSION}.tar.gz
@@ -2889,8 +2884,7 @@ steps:
   image: grafana/ci-wix:0.1.1
   commands:
   - $$ProgressPreference = "SilentlyContinue"
-  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/windows/grabpl.exe -OutFile grabpl.exe
-  - .\grabpl.exe verify-drone
+  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/windows/grabpl.exe -OutFile grabpl.exe
 - name: build-windows-installer
   image: grafana/ci-wix:0.1.1
@@ -2936,7 +2930,7 @@ steps:
   image: grafana/build-container:1.4.1
   commands:
   - mkdir -p bin
-  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/grabpl
+  - curl -fL -o bin/grabpl https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/grabpl
   - chmod +x bin/grabpl
   - git clone "https://$${GITHUB_TOKEN}@github.com/grafana/grafana-enterprise.git"
   - cd grafana-enterprise
@@ -3358,7 +3352,7 @@ steps:
   image: grafana/ci-wix:0.1.1
   commands:
   - $$ProgressPreference = "SilentlyContinue"
-  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.0.0/windows/grabpl.exe -OutFile grabpl.exe
+  - Invoke-WebRequest https://grafana-downloads.storage.googleapis.com/grafana-build-pipeline/v2.2.8/windows/grabpl.exe -OutFile grabpl.exe
   - git clone "https://$$env:GITHUB_TOKEN@github.com/grafana/grafana-enterprise.git"
   - cd grafana-enterprise
   - git checkout $$env:DRONE_BRANCH
@@ -3375,7 +3369,6 @@ steps:
   - rm -force grabpl.exe
   - C:\App\grabpl.exe init-enterprise C:\App\grafana-enterprise
   - cp C:\App\grabpl.exe grabpl.exe
-  - .\grabpl.exe verify-drone
   depends_on:
   - clone

View File

@@ -1,4 +1,49 @@
+<!-- 8.0.5 START -->
+# 8.0.5 (2021-07-08)
+### Features and enhancements
+* **Cloudwatch Logs:** Send error down to client. [#36277](https://github.com/grafana/grafana/pull/36277), [@zoltanbedi](https://github.com/zoltanbedi)
+* **Folders:** Return 409 Conflict status when folder already exists. [#36429](https://github.com/grafana/grafana/pull/36429), [@dsotirakis](https://github.com/dsotirakis)
+* **TimeSeries:** Do not show series in tooltip if it's hidden in the viz. [#36353](https://github.com/grafana/grafana/pull/36353), [@dprokop](https://github.com/dprokop)
+### Bug fixes
+* **AzureMonitor:** Fix issue where resource group name is missing on the resource picker button. [#36400](https://github.com/grafana/grafana/pull/36400), [@joshhunt](https://github.com/joshhunt)
+* **Chore:** Fix AWS auth assuming role with workspace IAM. [#36430](https://github.com/grafana/grafana/pull/36430), [@wbrowne](https://github.com/wbrowne)
+* **DashboardQueryRunner:** Fixes unrestrained subscriptions being created. [#36371](https://github.com/grafana/grafana/pull/36371), [@hugohaggmark](https://github.com/hugohaggmark)
+* **DateFormats:** Fix reading correct setting key for use_browser_locale. [#36428](https://github.com/grafana/grafana/pull/36428), [@torkelo](https://github.com/torkelo)
+* **Links:** Fix links to other apps outside Grafana when under sub path. [#36498](https://github.com/grafana/grafana/pull/36498), [@torkelo](https://github.com/torkelo)
+* **Snapshots:** Fix snapshot absolute time range issue. [#36350](https://github.com/grafana/grafana/pull/36350), [@torkelo](https://github.com/torkelo)
+* **Table:** Fix data link color. [#36446](https://github.com/grafana/grafana/pull/36446), [@tharun208](https://github.com/tharun208)
+* **Time Series:** Fix X-axis time format when tick increment is larger than a year. [#36335](https://github.com/grafana/grafana/pull/36335), [@torkelo](https://github.com/torkelo)
+* **Tooltip Plugin:** Prevent tooltip render if field is undefined. [#36260](https://github.com/grafana/grafana/pull/36260), [@ashharrison90](https://github.com/ashharrison90)
+<!-- 8.0.5 END -->
+<!-- 8.0.4 START -->
+# 8.0.4 (2021-07-01)
+### Features and enhancements
+* **Live:** Rely on app url for origin check. [#35983](https://github.com/grafana/grafana/pull/35983), [@FZambia](https://github.com/FZambia)
+* **PieChart:** Sort legend descending, update placeholder to show default …. [#36062](https://github.com/grafana/grafana/pull/36062), [@ashharrison90](https://github.com/ashharrison90)
+* **TimeSeries panel:** Do not reinitialize plot when thresholds mode change. [#35952](https://github.com/grafana/grafana/pull/35952), [@dprokop](https://github.com/dprokop)
+### Bug fixes
+* **Elasticsearch:** Allow case sensitive custom options in date_histogram interval. [#36168](https://github.com/grafana/grafana/pull/36168), [@Elfo404](https://github.com/Elfo404)
+* **Elasticsearch:** Restore previous field naming strategy when using variables. [#35624](https://github.com/grafana/grafana/pull/35624), [@Elfo404](https://github.com/Elfo404)
+* **Explore:** Fix import of queries between SQL data sources. [#36210](https://github.com/grafana/grafana/pull/36210), [@ivanahuckova](https://github.com/ivanahuckova)
+* **InfluxDB:** InfluxQL query editor: fix retention policy handling. [#36022](https://github.com/grafana/grafana/pull/36022), [@gabor](https://github.com/gabor)
+* **Loki:** Send correct time range in template variable queries. [#36268](https://github.com/grafana/grafana/pull/36268), [@ivanahuckova](https://github.com/ivanahuckova)
+* **TimeSeries:** Preserve RegExp series overrides when migrating from old graph panel. [#36134](https://github.com/grafana/grafana/pull/36134), [@ashharrison90](https://github.com/ashharrison90)
+<!-- 8.0.4 END -->
 <!-- 8.0.3 START -->
 # 8.0.3 (2021-06-18)


@@ -98,7 +98,7 @@ aliases = ["/docs/grafana/latest/guides/reference/admin"]
<img src="/static/img/docs/logos/icon_cloudwatch.svg">
<h5>AWS CloudWatch</h5>
</a>
<a href="{{< relref "datasources/azuremonitor.md" >}}" class="nav-cards__item nav-cards__item--ds">
<a href="{{< relref "datasources/azuremonitor/_index.md" >}}" class="nav-cards__item nav-cards__item--ds">
<img src="/static/img/docs/logos/icon_azure_monitor.jpg">
<h5>Azure Monitor</h5>
</a>


@@ -1508,6 +1508,23 @@ Refer to [Grafana Live configuration documentation]({{< relref "../live/configur
0 disables Grafana Live, -1 means unlimited connections.
### allowed_origins
> **Note**: Available in Grafana v8.0.4 and later versions.
The `allowed_origins` option is a comma-separated list of additional origins (`Origin` header of HTTP Upgrade request during WebSocket connection establishment) that will be accepted by Grafana Live.
If not set (default), then the origin is matched over [root_url]({{< relref "#root_url" >}}) which should be sufficient for most scenarios.
Origin patterns support the wildcard symbol "*".
For example:
```ini
[live]
allowed_origins = "https://*.example.com"
```
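Because the option is a comma-separated list, multiple patterns can be combined. A sketch with hypothetical hostnames:

```ini
[live]
# Accept WebSocket upgrades from any example.com subdomain plus one fixed internal host
allowed_origins = "https://*.example.com,https://dashboards.internal.example"
```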
<hr>
## [plugin.grafana-image-renderer]


@@ -19,7 +19,11 @@ Alerts have four main components:
You can create and edit alerting rules for Grafana managed alerts, Cortex alerts, and Loki alerts, as well as see alerting information from Prometheus-compatible data sources in a single, searchable view. For more information on how to create and edit alerts and notifications, refer to [Overview of Grafana 8.0 alerts]({{< relref "../alerting/unified-alerting/_index.md" >}}).
As part of the new alert changes, we have introduced a new data source, Alertmanager, which includes built-in support for Prometheus Alertmanager. It is presently in alpha and is not accessible unless alpha plugins are enabled in Grafana settings. For more information, refer to [Alertmanager data source]({{< relref "../datasources/alertmanager.md" >}}).
For handling notifications for Grafana managed alerts, we use an embedded Alertmanager. You can configure its contact points, notification policies, silences, and templates from the new Grafana alerting UI by selecting `Grafana` from the Alertmanager dropdown at the top of the respective tab.
> **Note:** Currently the configuration of this embedded Alertmanager is shared across organizations. Therefore, users are advised to use the new Grafana 8 alerts only if they have a single organization; otherwise, all contact points, notification policies, silences, and templates for Grafana managed alerts will be visible to all organizations.
As part of the new alert changes, we have introduced a new data source, Alertmanager, which includes built-in support for Prometheus Alertmanager. It is presently in alpha and is not accessible unless alpha plugins are enabled in Grafana settings. For more information, refer to [Alertmanager data source]({{< relref "../datasources/alertmanager.md" >}}). If such a data source is present, you can view and modify its silences, contact points, and notification policies from the Grafana alerting UI by selecting it from the Alertmanager dropdown at the top of the respective tab.
> **Note:** Out of the box, Grafana still supports old Grafana alerts. They are legacy alerts at this time, and will be deprecated in a future release. For more information, refer to [Legacy Grafana alerts]({{< relref "./old-alerting/_index.md" >}}).


@@ -9,6 +9,10 @@ weight = 400
Contact points define where to send notifications about alerts that match a particular [notification policy]({{< relref "./notification-policies.md" >}}). A contact point can contain one or more contact point types, for example email, Slack, webhook, and so on. A notification will be dispatched to all contact point types defined on a contact point. [Templating]({{< relref "./message-templating/_index.md" >}}) can be used to customize contact point type messages with alert data. The Grafana alerting UI can be used to configure both Grafana managed contact points and contact points for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}).
The Grafana alerting UI allows you to configure contact points for the Grafana managed alerts (handled by the embedded Alertmanager) as well as contact points for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}), using the Alertmanager dropdown.
> **Note:** Currently the configuration of the embedded Alertmanager is shared across organizations. Therefore, users are advised to use the new Grafana 8 alerts only if they have a single organization; otherwise, contact points for the Grafana managed alerts will be visible to all organizations.
## Add a contact point
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.


@@ -10,7 +10,6 @@ weight = 400
Notifications sent via [contact points]({{< relref "../contact-points.md" >}}) are built using templates. Grafana comes with default templates which you can customize. Grafana's notification templates are based on the [Go templating system](https://golang.org/pkg/text/template), where some fields are evaluated as text, while others are evaluated as HTML, which can affect escaping. Since most of the contact point fields can be templated, you can create reusable templates and use them in multiple contact points. See the [template data reference]({{< relref "./template-data.md" >}}) to check what variables are available in the templates.
## Using templating in contact point fields
This section shows an example of using templating to render the number of firing or resolved alerts in a Slack message title, and to list alerts with status and name in the message body:
@@ -21,6 +20,10 @@ This section shows an example of using templating to render a number of firing o
You can create named templates and then reuse them in contact point fields or other templates.
The Grafana alerting UI allows you to configure templates for the Grafana managed alerts (handled by the embedded Alertmanager) as well as templates for an [external Alertmanager if one is configured]({{< relref "../../../datasources/alertmanager.md" >}}), using the Alertmanager dropdown.
> **Note:** Currently the configuration of the embedded Alertmanager is shared across organizations. Therefore, users are advised to use the new Grafana 8 alerts only if they have a single organization; otherwise, templates for the Grafana managed alerts will be visible to all organizations.
### Create a template
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
1. Click **Add template**.


@@ -37,6 +37,7 @@ SilenceURL | string | Link to grafana silence for with labels for this aler
DashboardURL | string | Link to grafana dashboard, if alert rule belongs to one. Only for Grafana managed alerts.
PanelURL | string | Link to grafana dashboard panel, if alert rule belongs to one. Only for Grafana managed alerts.
Fingerprint | string | Fingerprint that can be used to identify the alert.
ValueString | string | A string that contains the labels and value of each reduced expression in the alert.
## KeyValue


@@ -9,8 +9,9 @@ weight = 400
Notification policies determine how alerts are routed to contact points. Policies have a tree structure, where each policy can have one or more child policies. Each policy, except for the root policy, can also match specific alert labels. Each alert enters the policy tree at the root and then traverses each child policy. If `Continue matching subsequent sibling nodes` is not checked, it stops at the first matching node; otherwise, it continues matching its siblings as well. If an alert does not match any children of a policy, the alert is handled based on the configuration settings of this policy and notified to the contact point configured on this policy. An alert that does not match any specific policy is handled by the root policy.
The Grafana alerting UI allows you to configure Grafana notification policies as well as notification policies (routes) for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}).
The Grafana alerting UI allows you to configure notification policies for the Grafana managed alerts (handled by the embedded Alertmanager) as well as notification policies for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}), using the Alertmanager dropdown.
> **Note:** Currently the configuration of the embedded Alertmanager is shared across organizations. Therefore, users are advised to use the new Grafana 8 alerts only if they have a single organization; otherwise, notification policies for the Grafana managed alerts will be visible to all organizations.
## Edit notification policies


@@ -11,6 +11,10 @@ Grafana allows you to prevent notifications from one or more alert rules by c
Silences do not prevent alert rules from being evaluated. They also do not stop alert instances from being shown in the user interface. Silences only prevent notifications from being created.
The Grafana alerting UI allows you to configure silences for the Grafana managed alerts (handled by the embedded Alertmanager) as well as silences for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}), using the Alertmanager dropdown.
> **Note:** Currently the configuration of the embedded Alertmanager is shared across organizations. Therefore, users are advised to use the new Grafana 8 alerts only if they have a single organization; otherwise, silences for the Grafana managed alerts will be visible to all organizations.
## Add a silence
To add a silence:


@@ -18,7 +18,7 @@ The following data sources are officially supported:
- [Alertmanager]({{< relref "alertmanager.md" >}})
- [AWS CloudWatch]({{< relref "cloudwatch.md" >}})
- [Azure Monitor]({{< relref "azuremonitor.md" >}})
- [Azure Monitor]({{< relref "azuremonitor/_index.md" >}})
- [Elasticsearch]({{< relref "elasticsearch.md" >}})
- [Google Cloud Monitoring]({{< relref "google-cloud-monitoring/_index.md" >}})
- [Graphite]({{< relref "graphite.md" >}})
@@ -46,4 +46,3 @@ In addition to the data sources that you have configured in your Grafana, there
## Data source plugins
Since Grafana 3.0 you can install data sources as plugins. Check out [Grafana.com/plugins](https://grafana.com/plugins) for more data sources.


@@ -1,434 +0,0 @@
+++
title = "Azure Monitor"
description = "Guide for using Azure Monitor in Grafana"
keywords = ["grafana", "microsoft", "azure", "monitor", "application", "insights", "log", "analytics", "guide"]
aliases = ["/docs/grafana/latest/features/datasources/azuremonitor"]
weight = 300
+++
# Azure Monitor data source
The Azure Monitor data source supports multiple services in the Azure cloud:
- **[Azure Monitor Metrics]({{< relref "#query-the-metrics-service" >}})** (or Metrics) is the platform service that provides a single source for monitoring Azure resources.
- **[Azure Monitor Logs]({{< relref "#query-the-logs-service" >}})** (or Logs) gives you access to log data collected by Azure Monitor.
- **[Azure Resource Graph]({{< relref "#query-the-azure-resource-graph-service" >}})** allows you to query the resources on your Azure subscription.
## Add the data source
The Azure Monitor data source can access metrics from three different services. Configure access to the services that you plan to use. To use different credentials for different Azure services, configure multiple Azure Monitor data sources.
- [Guide to setting up an Azure Active Directory Application for Azure Monitor.](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal)
- [Guide to setting up an Azure Active Directory Application for Azure Monitor Logs.](https://dev.loganalytics.io/documentation/Authorization/AAD-Setup)
1. From the Grafana main menu, open the Data Sources section; newly installed data sources can be added there immediately. Next, click the "Add data source" button in the upper right. The Azure Monitor data source is available for selection in the Cloud section of the data source list.
1. In the name field, Grafana will automatically fill in a name for the data source - `Azure Monitor` or something like `Azure Monitor - 3`. If you are going to configure multiple data sources, then change the name to something more informative.
1. Fill in the Azure AD App Registration details:
- **Tenant Id** (Azure Active Directory -> Properties -> Directory ID)
- **Client Id** (Azure Active Directory -> App Registrations -> Choose your app -> Application ID)
- **Client Secret** (Azure Active Directory -> App Registrations -> Choose your app -> Keys)
- **Default Subscription Id** (Subscriptions -> Choose subscription -> Overview -> Subscription ID)
1. Paste these four items into the fields in the Azure Monitor API Details section:
{{< figure src="/static/img/docs/v62/config_1_azure_monitor_details.png" class="docs-image--no-shadow" caption="Azure Monitor Configuration Details" >}}
- The Subscription Id can be changed per query. Save the data source and refresh the page to see the list of subscriptions available for the specified Client Id.
1. Test that the configuration details are correct by clicking on the "Save & Test" button:
{{< figure src="/static/img/docs/v62/config_3_save_and_test.png" class="docs-image--no-shadow" caption="Save and Test" >}}
Alternatively, at step 4, if creating a new Azure Active Directory app, use the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest):
```bash
az ad sp create-for-rbac -n "http://localhost:3000"
```
## Choose a Service
In the query editor for a panel, after choosing your Azure Monitor data source, the first option is to choose a service. There are three options here:
- Metrics
- Logs
- Azure Resource Graph
The query editor changes depending on which one you pick. Metrics is the default.
In Grafana 7.4, the Azure Monitor query type was renamed to Metrics, and Azure Log Analytics was renamed to Logs. In Grafana 8.0, Application Insights and Insights Analytics are unavailable for new panels, in favor of querying through Metrics and Logs.
## Query the Metrics service
The Metrics service provides metrics for all the Azure services that you have running. It helps you understand how your applications on Azure are performing and to proactively find issues affecting your applications.
If your Azure Monitor credentials give you access to multiple subscriptions, then choose the appropriate subscription first.
Examples of metrics that you can get from the service are:
- `Microsoft.Compute/virtualMachines - Percentage CPU`
- `Microsoft.Network/networkInterfaces - Bytes sent`
- `Microsoft.Storage/storageAccounts - Used Capacity`
{{< figure src="/static/img/docs/v60/azuremonitor-service-query-editor.png" class="docs-image--no-shadow" caption="Metrics Query Editor" >}}
As of Grafana 7.1, the query editor allows you to query multiple dimensions for metrics that support them. Metrics that support multiple dimensions are those listed in the [Azure Monitor supported Metrics List](https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-supported) that have one or more values listed in the "Dimension" column for the metric.
### Format legend keys with aliases for Metrics
The default legend formatting for the Metrics API is:
`metricName{dimensionName=dimensionValue,dimensionTwoName=DimensionTwoValue}`
> **Note:** Before Grafana 7.1, the formatting included the resource name in the default: `resourceName{dimensionName=dimensionValue}.metricName`. As of Grafana 7.1, the resource name has been removed from the default legend.
These can be quite long, but this formatting can be changed by using aliases. In the **Legend Format** field, you can combine the aliases defined below any way you want.
Metrics examples:
- `Blob Type: {{ blobtype }}`
- `{{ resourcegroup }} - {{ resourcename }}`
### Alias patterns for Metrics
- `{{ resourcegroup }}` = replaced with the value of the Resource Group
- `{{ namespace }}` = replaced with the value of the Namespace (e.g. Microsoft.Compute/virtualMachines)
- `{{ resourcename }}` = replaced with the value of the Resource Name
- `{{ metric }}` = replaced with metric name (e.g. Percentage CPU)
- `{{ dimensionname }}` = _Legacy as of 7.1+ (for backwards compatibility)_ replaced with the first dimension's key/label (as sorted by the key/label) (e.g. blobtype)
- `{{ dimensionvalue }}` = _Legacy as of 7.1+ (for backwards compatibility)_ replaced with first dimension's value (as sorted by the key/label) (e.g. BlockBlob)
- `{{ arbitraryDim }}` = _Available in 7.1+_ replaced with the value of the corresponding dimension. (e.g. `{{ blobtype }}` becomes BlockBlob)
### Create template variables for Metrics
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.
Note that the Metrics service does not support multiple values yet. If you want to visualize multiple time series (for example, metrics for server1 and server2), then you have to add multiple queries to be able to view them on the same graph or in the same table.
The Metrics data source plugin provides the following queries you can specify in the `Query` field in the Variable edit view. They allow you to fill a variable's options list.
| Name | Description |
| -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
| `Subscriptions()` | Returns a list of subscriptions. |
| `ResourceGroups()` | Returns a list of resource groups. |
| `ResourceGroups(12345678-aaaa-bbbb-cccc-123456789aaa)` | Returns a list of resource groups for a specified subscription. |
| `Namespaces(aResourceGroup)` | Returns a list of namespaces for the specified resource group. |
| `Namespaces(12345678-aaaa-bbbb-cccc-123456789aaa, aResourceGroup)` | Returns a list of namespaces for the specified resource group and subscription. |
| `ResourceNames(aResourceGroup, aNamespace)` | Returns a list of resource names. |
| `ResourceNames(12345678-aaaa-bbbb-cccc-123456789aaa, aResourceGroup, aNamespace)` | Returns a list of resource names for a specified subscription. |
| `MetricNamespace(aResourceGroup, aNamespace, aResourceName)` | Returns a list of metric namespaces. |
| `MetricNamespace(12345678-aaaa-bbbb-cccc-123456789aaa, aResourceGroup, aNamespace, aResourceName)` | Returns a list of metric namespaces for a specified subscription. |
| `MetricNames(aResourceGroup, aNamespace, aResourceName)` | Returns a list of metric names. |
| `MetricNames(12345678-aaaa-bbbb-cccc-123456789aaa, aResourceGroup, aNamespace, aResourceName)` | Returns a list of metric names for a specified subscription. |
Examples:
- Resource Groups query: `ResourceGroups()`
- Passing in metric name variable: `Namespaces(cosmo)`
- Chaining template variables: `ResourceNames($rg, $ns)`
- Do not quote parameters: `MetricNames(hg, Microsoft.Network/publicIPAddresses, grafanaIP)`
{{< figure src="/static/img/docs/v60/azuremonitor-service-variables.png" class="docs-image--no-shadow" caption="Nested Azure Monitor Template Variables" >}}
Check out the [Templating]({{< relref "../variables/_index.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### List of supported Azure Monitor metrics
Not all metrics returned by the Azure Monitor Metrics API have values. To make it easier for you when building a query, the Grafana data source has a list of supported metrics and ignores metrics which will never have values. This list is updated regularly as new services and metrics are added to the Azure cloud. For more information about the list of metrics, refer to [current supported namespaces](https://github.com/grafana/grafana/blob/main/public/app/plugins/datasource/grafana-azure-monitor-datasource/azure_monitor/supported_namespaces.ts).
### Alerting
Grafana alerting is supported for the Azure Monitor service. This is not Azure Alerts support. For more information about Grafana alerting, refer to [how alerting in Grafana works]({{< relref "../alerting/_index.md" >}}).
{{< figure src="/static/img/docs/v60/azuremonitor-alerting.png" class="docs-image--no-shadow" caption="Azure Monitor Alerting" >}}
## Query the Logs service
Queries are written in the [Kusto Query Language](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/). A Logs query can be formatted as time series data or as table data.
If your credentials give you access to multiple subscriptions, then choose the appropriate subscription before entering queries.
### Time series queries
Time series queries are for the Graph panel and other panels like the SingleStat panel. Each query must contain at least a datetime column and a numeric value column. The result must also be sorted in ascending order by the datetime column.
Here is an example query that returns the aggregated count grouped by hour:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize count() by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
```
A query can also have one or more non-numeric/non-datetime columns, and those columns are considered dimensions and become labels in the response. For example, a query that returns the aggregated count grouped by hour, Computer, and the CounterName:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize count() by bin(TimeGenerated, 1h), Computer, CounterName
| order by TimeGenerated asc
```
You can also select additional number value columns (with, or without multiple dimensions). For example, getting a count and average value by hour, Computer, CounterName, and InstanceName:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize Samples=count(), ["Avg Value"]=avg(CounterValue)
by bin(TimeGenerated, $__interval), Computer, CounterName, InstanceName
| order by TimeGenerated asc
```
> **Tip**: In the above query, the Kusto syntax `Samples=count()` and `["Avg Value"]=...` is used to rename those columns — the second syntax allowing for the space. This changes the name of the metric that Grafana uses, and as a result, things like series legends and table columns will match what you specify. Here `Samples` is displayed instead of `_count`.
{{< figure src="/static/img/docs/azure-monitor/logs_multi-value_multi-dim.png" class="docs-image--no-shadow" caption="Azure Logs query with multiple values and multiple dimensions" >}}
### Table queries
Table queries are mainly used in the Table panel and show a list of columns and rows. This example query returns rows with the six specified columns:
```kusto
AzureActivity
| where $__timeFilter()
| project TimeGenerated, ResourceGroup, Category, OperationName, ActivityStatus, Caller
| order by TimeGenerated desc
```
### Format the display name for Log Analytics
The default display name format is:
`metricName{dimensionName=dimensionValue,dimensionTwoName=DimensionTwoValue}`
This can be customized by using the [display name field option]({{< relref "../panels/standard-options.md#display-name" >}}).
### Logs macros
To make writing queries easier, there are several Grafana macros that can be used in the where clause of a query:
- `$__timeFilter()` - Expands to
`TimeGenerated ≥ datetime(2018-06-05T18:09:58.907Z) and`
`TimeGenerated ≤ datetime(2018-06-05T20:09:58.907Z)` where the from and to datetimes are from the Grafana time picker.
- `$__timeFilter(datetimeColumn)` - Expands to
`datetimeColumn ≥ datetime(2018-06-05T18:09:58.907Z) and`
`datetimeColumn ≤ datetime(2018-06-05T20:09:58.907Z)` where the from and to datetimes are from the Grafana time picker.
- `$__timeFrom()` - Returns the From datetime from the Grafana picker. Example: `datetime(2018-06-05T18:09:58.907Z)`.
- `$__timeTo()` - Returns the To datetime from the Grafana picker. Example: `datetime(2018-06-05T20:09:58.907Z)`.
- `$__escapeMulti($myVar)` - is to be used with multi-value template variables that contain illegal characters. If `$myVar` has the following two values as a string `'\\grafana-vm\Network(eth0)\Total','\\hello!'`, then it expands to: `@'\\grafana-vm\Network(eth0)\Total', @'\\hello!'`. If using single value variables there is no need for this macro, simply escape the variable inline instead - `@'\$myVar'`.
- `$__contains(colName, $myVar)` - is to be used with multi-value template variables. If `$myVar` has the value `'value1','value2'`, it expands to: `colName in ('value1','value2')`.
If using the `All` option, then check the `Include All Option` checkbox and in the `Custom all value` field type in the following value: `all`. If `$myVar` has value `all` then the macro will instead expand to `1 == 1`. For template variables with a lot of options, this will increase the query performance by not building a large "where..in" clause.
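As a sketch of how `$__contains` behaves, assume a multi-value template variable `$computer` (a hypothetical variable name) whose current selection is `'web01','web02'`:

```kusto
Perf
| where $__timeFilter(TimeGenerated)
| where $__contains(Computer, $computer)
// the macro above expands to: Computer in ('web01','web02')
// with the "all" custom value selected it would expand to: 1 == 1
| summarize count() by bin(TimeGenerated, $__interval)
| order by TimeGenerated asc
```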
### Logs builtin variables
There are also some Grafana variables that can be used in Logs queries:
- `$__interval` - Grafana calculates the minimum time grain that can be used to group by time in queries. For more information about `$__interval`, refer to [interval variables]({{< relref "../variables/variable-types/_index.md#interval-variables" >}}). It returns a time grain like `5m` or `1h` that can be used in the bin function. E.g. `summarize count() by bin(TimeGenerated, $__interval)`
### Templating with variables for Logs
Any Log Analytics query that returns a list of values can be used in the `Query` field in the Variable edit view. There is also one Grafana function for Log Analytics that returns a list of workspaces.
Refer to the [Variables]({{< relref "../variables/_index.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
| Name | Description |
| -------------------------------------------------- | ------------------------------------------------------------------------------------------------------ |
| `workspaces()` | Returns a list of workspaces for the default subscription. |
| `workspaces(12345678-aaaa-bbbb-cccc-123456789aaa)` | Returns a list of workspaces for the specified subscription (the parameter can be quoted or unquoted). |
Example variable queries:
<!-- prettier-ignore-start -->
| Query | Description |
| --------------------------------------------------------------------------------------- | --------------------------------------------------------- |
| `subscriptions()` | Returns a list of Azure subscriptions |
| `workspaces()` | Returns a list of workspaces for default subscription |
| `workspaces("12345678-aaaa-bbbb-cccc-123456789aaa")` | Returns a list of workspaces for a specified subscription |
| `workspaces("$subscription")` | With template variable for the subscription parameter |
| `workspace("myWorkspace").Heartbeat \| distinct Computer` | Returns a list of Virtual Machines |
| `workspace("$workspace").Heartbeat \| distinct Computer` | Returns a list of Virtual Machines with template variable |
| `workspace("$workspace").Perf \| distinct ObjectName` | Returns a list of objects from the Perf table |
| `workspace("$workspace").Perf \| where ObjectName == "$object" \| distinct CounterName` | Returns a list of metric names from the Perf table |
<!-- prettier-ignore-end -->
Example of a time series query using variables:
```kusto
Perf
| where ObjectName == "$object" and CounterName == "$metric"
| where TimeGenerated >= $__timeFrom() and TimeGenerated <= $__timeTo()
| where $__contains(Computer, $computer)
| summarize avg(CounterValue) by bin(TimeGenerated, $__interval), Computer
| order by TimeGenerated asc
```
### Deep linking from Grafana panels to the Azure Metric Logs query editor in Azure Portal
> Only available in Grafana v7.0+.
{{< figure src="/static/img/docs/v70/azure-log-analytics-deep-linking.png" max-width="500px" class="docs-image--right" caption="Logs deep linking" >}}
Click on a time series in the panel to see a context menu with a link to `View in Azure Portal`. Clicking that link opens the Azure Metric Logs query editor in the Azure Portal and runs the query from the Grafana panel there.
If you're not currently logged in to the Azure Portal, then the link opens the login page. The provided link is valid for any account, but it only displays the query if your account has access to the Azure Metric Logs workspace specified in the query.
<div class="clearfix"></div>
## Query the Azure Resource Graph service
Azure Resource Graph (ARG) is a service in Azure that is designed to extend Azure Resource Management by providing efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment. By querying ARG, you can query resources with complex filtering, iteratively explore resources based on governance requirements, and assess the impact of applying policies in a vast cloud environment.
{{< figure src="/static/img/docs/azure-monitor/azure-resource-graph.png" class="docs-image--no-shadow" caption="Azure Resource Graph editor" max-width="650px" >}}
### Table queries
Queries are written in the [Kusto Query Language](https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language). Not all Kusto language features are available in ARG. An Azure Resource Graph query is formatted as table data.
If your credentials give you access to multiple subscriptions, then you can choose multiple subscriptions before entering queries.
### Sort results by resource properties
Here is an example query that returns any type of resource, but only the name, type, and location properties:
```kusto
Resources
| project name, type, location
| order by name asc
```
The query uses `order by` to sort the properties by the `name` property in ascending (asc) order. You can change what property to sort by and the order (`asc` or `desc`). The query uses `project` to show the listed properties in the results. You can add or remove properties.
### Query resources with complex filtering
This example filters for Azure resources with a tag name of `Environment` that have a value of `Internal`. You can change these to any desired tag key and value. The `=~` in the tag-value match tells Resource Graph to be case insensitive. You can project other properties or add/remove more.
The tag key is case sensitive. `Environment` and `environment` will give different results. For example, a query that returns a list of resources with a specified tag value:
```kusto
Resources
| where tags.environment=~'internal'
| project name
```
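The `=~` operator also works on other string properties such as `type`. As a minimal sketch (using Azure's standard `Microsoft.Compute/virtualMachines` resource type), a query that lists virtual machines with a case-insensitive type match:

```kusto
Resources
// =~ makes the string comparison case insensitive
| where type =~ 'microsoft.compute/virtualmachines'
| project name, location
| order by name asc
```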
### Group and aggregate the values by property
You can also use `summarize` and `count` to define how to group and aggregate the values by property. For example, to return the count of healthy, unhealthy, and not applicable resources per recommendation:
```kusto
securityresources
| where type == 'microsoft.security/assessments'
| extend resourceId=id,
    recommendationId=name,
    resourceType=type,
    recommendationName=properties.displayName,
    source=properties.resourceDetails.Source,
    recommendationState=properties.status.code,
    description=properties.metadata.description,
    assessmentType=properties.metadata.assessmentType,
    remediationDescription=properties.metadata.remediationDescription,
    policyDefinitionId=properties.metadata.policyDefinitionId,
    implementationEffort=properties.metadata.implementationEffort,
    recommendationSeverity=properties.metadata.severity,
    category=properties.metadata.categories,
    userImpact=properties.metadata.userImpact,
    threats=properties.metadata.threats,
    portalLink=properties.links.azurePortal
| summarize numberOfResources=count(resourceId) by tostring(recommendationName), tostring(recommendationState)
```
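As a simpler sketch of the same `summarize`/`count` pattern, a query that counts resources per location and lists the largest groups first:

```kusto
Resources
// count() aggregates the rows in each location group
| summarize resourceCount=count() by location
| order by resourceCount desc
```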
## Configure the data source with provisioning
You can configure data sources using configuration files with Grafana's provisioning system. For more information on how it works and all the settings you can set for data sources, refer to the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}}).
Here are some provisioning examples for this data source.
### Azure AD App Registration (client secret)
```yaml
# config file version
apiVersion: 1

datasources:
  - name: Azure Monitor
    type: grafana-azure-monitor-datasource
    access: proxy
    jsonData:
      azureAuthType: clientsecret
      cloudName: azuremonitor # See table below
      tenantId: <tenant-id>
      clientId: <client-id>
      subscriptionId: <subscription-id> # Optional, default subscription
    secureJsonData:
      clientSecret: <client-secret>
    version: 1
```
### Managed Identity
```yaml
# config file version
apiVersion: 1

datasources:
  - name: Azure Monitor
    type: grafana-azure-monitor-datasource
    access: proxy
    jsonData:
      azureAuthType: msi
      subscriptionId: <subscription-id> # Optional, default subscription
    version: 1
```
### Supported cloud names
| Azure Cloud | Value |
| ------------------------------------------------ | -------------------------- |
| Microsoft Azure public cloud | `azuremonitor` (_default_) |
| Microsoft Chinese national cloud | `chinaazuremonitor` |
| US Government cloud | `govazuremonitor` |
| Microsoft German national cloud ("Black Forest") | `germanyazuremonitor` |
## Deprecated Application Insights and Insights Analytics
Application Insights and Insights Analytics are two ways to query the same Azure Application Insights data, which can also be queried from Metrics and Logs. In Grafana 8.0, Application Insights and Insights Analytics are deprecated and made read-only in favor of querying this data through Metrics and Logs. Existing queries will continue to work, but you cannot edit them. New panels are not able to use Application Insights or Insights Analytics.
For Application Insights, new queries can be made with the Metrics query type by selecting the "Application Insights" resource type.
{{< figure src="/static/img/docs/azure-monitor/app-insights-metrics.png" max-width="650px" class="docs-image--no-shadow" caption="Azure Monitor Application Insights example" >}}
For Insights Analytics, new queries can be written with Kusto in the Logs query type by selecting your Application Insights resource.
{{< figure src="/static/img/docs/azure-monitor/app-insights-logs.png" max-width="650px" class="docs-image--no-shadow" caption="Azure Logs Application Insights example" >}}
The new resource picker for Logs shows all resources on your Azure subscription compatible with Logs.
{{< figure src="/static/img/docs/azure-monitor/app-insights-resource-picker.png" max-width="650px" class="docs-image--no-shadow" caption="Azure Logs Application Insights resource picker" >}}
Azure Monitor Metrics and Azure Monitor Logs do not use Application Insights API keys, so make sure the data source is configured with an Azure AD app registration that has access to Application Insights.


@@ -0,0 +1,255 @@
+++
title = "Azure Monitor"
description = "Guide for using Azure Monitor in Grafana"
keywords = ["grafana", "microsoft", "azure", "monitor", "application", "insights", "log", "analytics", "guide"]
aliases = ["/docs/grafana/latest/features/datasources/azuremonitor"]
weight = 300
+++
# Azure Monitor data source
Grafana includes built-in support for Azure Monitor, the Azure service to maximize the availability and performance of your applications and services in the Azure Cloud. The Azure Monitor data source supports visualizing data from three Azure services:
- **Azure Monitor Metrics** to collect numeric data from resources in your Azure account.
- **Azure Monitor Logs** to collect log and performance data from your Azure account, and query using the powerful Kusto Language.
- **Azure Resource Graph** to quickly query your Azure resources across subscriptions.
This topic explains configuring, querying, and other options specific to the Azure Monitor data source. Refer to [Add a data source]({{< relref "../add-a-data-source.md" >}}) for instructions on how to add a data source to Grafana.
## Azure Monitor configuration
To access Azure Monitor configuration, hover your mouse over the **Configuration** (gear) icon, click **Data Sources**, and then select the Azure Monitor data source. If you haven't already, you'll need to [add the Azure Monitor data source]({{< relref "../add-a-data-source.md" >}}).
You must create an app registration and service principal in Azure AD to authenticate the data source. See the [Azure documentation](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in) for configuration details. Alternatively, if you are hosting Grafana in Azure (e.g. App Service, or Azure Virtual Machines) you can configure the Azure Monitor data source to use Managed Identity to securely authenticate without entering credentials into Grafana. Refer to [Configuring using Managed Identity](#configuring-using-managed-identity) for more details.
| Name | Description |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Authentication | Enables Managed Identity. Selecting Managed Identity will hide many of the fields below. See [Configuring using Managed Identity](#configuring-using-managed-identity) for more details. |
| Azure Cloud | The national cloud for your Azure account. For most users, this is the default "Azure". For more information, see [the Azure documentation.](https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-national-cloud) |
| Directory (tenant) ID | The directory/tenant ID for the Azure AD app registration to use for authentication. See [Get tenant and app ID values for signing in](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in) from the Azure documentation. |
| Application (client) ID | The application/client ID for the Azure AD app registration to use for authentication. |
| Client secret | The application client secret for the Azure AD app registration to use for authentication. See [Create a new application secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret) from the Azure documentation. |
| Default subscription | _(optional)_ Sets a default subscription for template variables to use |
| Default workspace | _(optional)_ Sets a default workspace for Log Analytics-based template variable queries to use |
## Azure Monitor query editor
The Azure Monitor data source has three different modes depending on which Azure service you wish to query:
- **Metrics** for [Azure Monitor Metrics](#querying-azure-monitor-metrics)
- **Logs** for [Azure Monitor Logs](#querying-azure-monitor-logs)
- [**Azure Resource Graph**](#querying-azure-resource-graph)
### Querying Azure Monitor Metrics
Azure Monitor Metrics collects numeric data from [supported resources](https://docs.microsoft.com/en-us/azure/azure-monitor/monitor-reference) and allows you to query them to investigate the health and utilization of your resources to maximise availability and performance.
Metrics are a lightweight format that stores only simple numeric data in a particular structure. Metrics are capable of supporting near real-time scenarios, making them useful for fast detection of issues. Azure Monitor Logs, in contrast, can store a variety of different data types, each with their own structure.
{{< figure src="/static/img/docs/azure-monitor/query-editor-metrics.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Logs Metrics sample query visualizing CPU percentage over time" >}}
#### Your first Azure Monitor Metrics query
1. Select the Metrics service
1. Select a resource to pull metrics from using the subscription, resource group, resource type, and resource fields.
1. Some resources, such as storage accounts, organise metrics under multiple metric namespaces. Grafana will pick a default namespace, but change this to see which other metrics are available.
1. Select a metric from the Metric field.
Optionally, you can apply further aggregations or filter by dimensions for further analysis.
1. Change the aggregation from the default average to show minimum, maximum or total values.
1. Set a specific custom time grain. By default Grafana will automatically select a time grain interval based on your selected time range.
1. For metrics that have multiple dimensions, you can split and filter further the returned metrics. For example, the Application Insights dependency calls metric supports returning multiple time series for successful vs unsuccessful calls.
{{< figure src="/static/img/docs/azure-monitor/query-editor-metrics-dimensions.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Metrics screenshot showing Dimensions" >}}
The options available will change depending on what is most relevant to the selected metric.
#### Legend alias formatting
The legend label for Metrics can be changed using aliases. In the Legend Format field, you can combine the aliases defined below any way you want, for example:
- `Blob Type: {{ blobtype }}` becomes `Blob Type: PageBlob`, `Blob Type: BlockBlob`
- `{{ resourcegroup }} - {{ resourcename }}` becomes `production - web_server`
| Alias pattern | Description |
| ----------------------------- | ------------------------------------------------------------------------------------------- |
| `{{ resourcegroup }}`         | Replaced with the resource group                                                             |
| `{{ namespace }}` | Replaced with the resource type / namespace (e.g. Microsoft.Compute/virtualMachines) |
| `{{ resourcename }}` | Replaced with the resource name |
| `{{ metric }}` | Replaced with the metric name (e.g. Percentage CPU) |
| _`{{ arbitraryDimensionID }}`_ | Replaced with the value of the specified dimension (e.g. `{{ blobtype }}` becomes `BlockBlob`) |
| `{{ dimensionname }}` | _(Legacy for backwards compatibility)_ Replaced with the name of the first dimension |
| `{{ dimensionvalue }}` | _(Legacy for backwards compatibility)_ Replaced with the value of the first dimension |
#### Supported Azure Monitor metrics
Not all metrics returned by the Azure Monitor Metrics API have values. To make it easier for you when building a query, the Grafana data source has a list of supported metrics and ignores metrics which will never have values. This list is updated regularly as new services and metrics are added to the Azure cloud. For more information about the list of metrics, refer to [current supported namespaces](https://github.com/grafana/grafana/blob/main/public/app/plugins/datasource/grafana-azure-monitor-datasource/azure_monitor/supported_namespaces.ts).
### Querying Azure Monitor Logs
Azure Monitor Logs collects and organises log and performance data from [supported resources](https://docs.microsoft.com/en-us/azure/azure-monitor/monitor-reference) and makes many sources of data available to query together with the sophisticated [Kusto Query Language (KQL)](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/).
While Azure Monitor Metrics only stores simplified numerical data, Logs can store different data types, each with their own structure, and can perform complex analysis of data using KQL.
{{< figure src="/static/img/docs/azure-monitor/query-editor-logs.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Logs sample query comparing successful requests to failed requests" >}}
#### Your first Azure Monitor Logs query
1. Select the Logs service
2. Select a resource to query. Alternatively, you can dynamically query all resources under a single resource group or subscription.
3. Enter in your KQL query. See below for examples.
##### Kusto Query Language
Azure Monitor Logs queries are written using the Kusto Query Language (KQL), a rich language designed to be easy to read and write, which should be familiar to those who know SQL. The Azure documentation has plenty of resources to help with learning KQL:
- [Log queries in Azure Monitor](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-query-overview)
- [Getting started with Kusto](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/concepts/)
- [Tutorial: Use Kusto queries in Azure Monitor](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor)
- [SQL to Kusto cheat sheet](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/sqlcheatsheet)
Here is an example query that returns a virtual machine's CPU performance, averaged over 5m time grains:
```kusto
Perf
// $__timeFilter is a special Grafana macro that filters the results to the time span of the dashboard
| where $__timeFilter(TimeGenerated)
| where CounterName == "% Processor Time"
| summarize avg(CounterValue) by bin(TimeGenerated, 5m), Computer
| order by TimeGenerated asc
```
Time series queries are for values that change over time, usually for graph visualisations such as the Time series panel. Each query should return at least a datetime column and a numeric value column. The result must also be sorted in ascending order by the datetime column.
A query can also have one or more non-numeric/non-datetime columns, and those columns are considered dimensions and become labels in the response. For example, a query that returns the aggregated count grouped by hour, Computer, and the CounterName:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize count() by bin(TimeGenerated, 1h), Computer, CounterName
| order by TimeGenerated asc
```
You can also select additional numeric value columns (with or without multiple dimensions). For example, getting a count and average value by hour, Computer, CounterName, and InstanceName:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize Samples=count(), ["Avg Value"]=avg(CounterValue)
by bin(TimeGenerated, $__interval), Computer, CounterName, InstanceName
| order by TimeGenerated asc
```
Table queries are mainly used in the Table panel and show a list of columns and rows. This example query returns rows with the six specified columns:
```kusto
AzureActivity
| where $__timeFilter()
| project TimeGenerated, ResourceGroup, Category, OperationName, ActivityStatus, Caller
| order by TimeGenerated desc
```
##### Logs macros
To make writing queries easier, there are several Grafana macros that can be used in the where clause of a query:
| Macro | Description |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `$__timeFilter()` | Used to filter the results to the time range of the dashboard.<br/>Example: `TimeGenerated >= datetime(2018-06-05T18:09:58.907Z) and TimeGenerated <= datetime(2018-06-05T20:09:58.907Z)`. |
| `$__timeFilter(datetimeColumn)` | Like `$__timeFilter()`, but specifies a custom field to filter on. |
| `$__timeFrom()` | Expands to the start of the dashboard time range.<br/>Example: `datetime(2018-06-05T18:09:58.907Z)`. |
| `$__timeTo()` | Expands to the end of the dashboard time range.<br/>Example: `datetime(2018-06-05T20:09:58.907Z)`. |
| `$__escapeMulti($myVar)` | Used with multi-value template variables that contain illegal characters.<br/>If `$myVar` has the following two values as a string `'\\grafana-vm\Network(eth0)\Total','\\hello!'`, then it expands to `@'\\grafana-vm\Network(eth0)\Total', @'\\hello!'`.<br/><br/>If using single value variables there is no need for this macro, simply escape the variable inline instead - `@'\$myVar'`. |
| `$__contains(colName, $myVar)` | Used with multi-value template variables.<br/>If `$myVar` has the value `'value1','value2'`, it expands to: `colName in ('value1','value2')`.<br/><br/>If using the `All` option, then check the `Include All Option` checkbox and in the `Custom all value` field type in the value `all`. If `$myVar` has value `all` then the macro will instead expand to `1 == 1`. For template variables with a lot of options, this will increase the query performance by not building a large "where..in" clause. |
Additionally, Grafana has the built-in `$__interval` macro, which expands to a time grain interval calculated from the dashboard's time range.
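For example, a sketch of the earlier Perf query using `$__interval` as the bin size, so the time grain adapts as you zoom the dashboard:

```kusto
Perf
| where $__timeFilter(TimeGenerated)
// $__interval expands to an interval (such as 5m) derived from the dashboard time range
| summarize avg(CounterValue) by bin(TimeGenerated, $__interval), Computer
| order by TimeGenerated asc
```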
### Querying Azure Resource Graph
Azure Resource Graph (ARG) is a service in Azure that is designed to extend Azure Resource Management by providing efficient and performant resource exploration, with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment. By querying ARG, you can query resources with complex filtering, iteratively explore resources based on governance requirements, and assess the impact of applying policies in a vast cloud environment.
{{< figure src="/static/img/docs/azure-monitor/query-editor-arg.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Resource Graph sample query listing virtual machines on an account" >}}
### Your first Azure Resource Graph query
ARG queries are written in a variant of the [Kusto Query Language](https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language), but not all Kusto language features are available in ARG. An Azure Resource Graph query is formatted as table data.
If your credentials give you access to multiple subscriptions, then you can choose multiple subscriptions before entering queries.
#### Sort results by resource properties
Here is an example query that returns all resources in the selected subscriptions, but only the name, type, and location properties:
```kusto
Resources
| project name, type, location
| order by name asc
```
The query uses `order by` to sort the properties by the `name` property in ascending (`asc`) order. You can change what property to sort by and the order (`asc` or `desc`). The query uses `project` to show only the listed properties in the results. You can add or remove properties.
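For instance, a minimal variation of the query above that sorts the same properties by `location` in descending order:

```kusto
Resources
| project name, type, location
| order by location desc
```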
#### Query resources with complex filtering
This example filters for Azure resources with a tag named `environment` that has a value of `Internal`. You can change these to any desired tag key and value. The `=~` operator makes the tag value match case insensitive. You can project other properties or add and remove more.
For example, a query that returns a list of resources with an `environment` tag value of `Internal`:
```kusto
Resources
| where tags.environment=~'internal'
| project name
```
#### Group and aggregate the values by property
You can also use `summarize` and `count` to define how to group and aggregate the values by property. For example, to return the count of healthy, unhealthy, and not applicable resources per recommendation:
```kusto
securityresources
| where type == 'microsoft.security/assessments'
| extend resourceId=id,
    recommendationId=name,
    resourceType=type,
    recommendationName=properties.displayName,
    source=properties.resourceDetails.Source,
    recommendationState=properties.status.code,
    description=properties.metadata.description,
    assessmentType=properties.metadata.assessmentType,
    remediationDescription=properties.metadata.remediationDescription,
    policyDefinitionId=properties.metadata.policyDefinitionId,
    implementationEffort=properties.metadata.implementationEffort,
    recommendationSeverity=properties.metadata.severity,
    category=properties.metadata.categories,
    userImpact=properties.metadata.userImpact,
    threats=properties.metadata.threats,
    portalLink=properties.links.azurePortal
| summarize numberOfResources=count(resourceId) by tostring(recommendationName), tostring(recommendationState)
```
In Azure Resource Graph many nested properties (`properties.displayName`) are of a `dynamic` type, and should be cast to a string with `tostring()` to operate on them.
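A minimal sketch of this cast, grouping resources by the dynamic `environment` tag:

```kusto
Resources
// tags.environment is dynamic; cast it to a string to group by it
| summarize count() by environment=tostring(tags.environment)
```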
The Azure documentation also hosts [many sample queries](https://docs.microsoft.com/en-gb/azure/governance/resource-graph/samples/starter) to help you get started.
## Going further with Azure Monitor
See the following topics to learn more about the Azure Monitor data source:
- [Azure Monitor template variables]({{< relref "./template-variables.md" >}}) for more interactive, dynamic, and reusable dashboards.
- [Provisioning Azure Monitor]({{< relref "./provisioning.md" >}}) for configuring the Azure Monitor data source using YAML files.
- [Deprecating Application Insights]({{< relref "./provisioning.md" >}}) and migrating to Metrics and Logs queries.
### Configuring using Managed Identity
Customers who host Grafana in Azure (e.g. App Service or Azure Virtual Machines) and have managed identity enabled can use it to configure Azure Monitor in Grafana. This simplifies the data source configuration, letting the data source authenticate securely without manually configuring credentials via Azure AD app registrations for each data source. For more details on Azure managed identities, refer to the [Azure documentation](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview).
To enable managed identity for Grafana, set the `managed_identity_enabled` flag in the `[azure]` section of the [Grafana server config](https://grafana.com/docs/grafana/latest/administration/configuration/#azure).
```ini
[azure]
managed_identity_enabled = true
```
Then, in the Azure Monitor data source configuration, set Authentication to Managed Identity. The directory ID, application ID, and client secret fields will be hidden, and the data source will use managed identity to authenticate to Azure Monitor Metrics, Logs, and Azure Resource Graph.
{{< figure src="/static/img/docs/azure-monitor/managed-identity.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor data source configuration using Managed Identity" >}}


@@ -0,0 +1,28 @@
+++
title = "Application Insights deprecation"
description = "Template to provision the Azure Monitor data source"
keywords = ["grafana", "microsoft", "azure", "monitor", "application", "insights", "log", "analytics", "guide"]
weight = 999
+++
# Deprecated Application Insights and Insights Analytics
Application Insights and Insights Analytics are two ways to query the same Azure Application Insights data, which can also be queried from Metrics and Logs. In Grafana 8.0, Application Insights and Insights Analytics are deprecated and made read-only in favor of querying this data through Metrics and Logs. Existing queries will continue to work, but you cannot edit them. New panels are not able to use Application Insights or Insights Analytics.
Azure Monitor Metrics and Azure Monitor Logs do not use Application Insights API keys, so make sure the data source is configured with an Azure AD app registration that has access to Application Insights.
## Application Insights
New Application Insights queries can be made with the Metrics service by selecting the "Application Insights" resource type. Application Insights has metrics available between two different metric namespaces.
{{< figure src="/static/img/docs/azure-monitor/app-insights-metrics.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Application Insights example" >}}
## Insights Analytics
New Insights Analytics queries can be written with Kusto in the Logs query type by selecting your Application Insights resource.
{{< figure src="/static/img/docs/azure-monitor/app-insights-logs.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Logs Application Insights example" >}}
The new resource picker for Logs shows all resources on your Azure subscription compatible with Logs.
{{< figure src="/static/img/docs/azure-monitor/app-insights-resource-picker.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Logs Application Insights resource picker" >}}


@@ -0,0 +1,56 @@
+++
title = "Provisioning Azure Monitor"
description = "Template to provision the Azure Monitor data source"
keywords = ["grafana", "microsoft", "azure", "monitor", "application", "insights", "log", "analytics", "guide"]
weight = 2
+++
# Configure the data source with provisioning
You can configure data sources using configuration files with Grafana's provisioning system. For more information on how it works and all the settings you can set for data sources, refer to the [Provisioning documentation page]({{< relref "../../administration/provisioning/#datasources" >}}).
Here are some provisioning examples for this data source.
## Azure AD App Registration (client secret)
```yaml
apiVersion: 1 # config file version

datasources:
  - name: Azure Monitor
    type: grafana-azure-monitor-datasource
    access: proxy
    jsonData:
      azureAuthType: clientsecret
      cloudName: azuremonitor # See table below
      tenantId: <tenant-id>
      clientId: <client-id>
      subscriptionId: <subscription-id> # Optional, default subscription
    secureJsonData:
      clientSecret: <client-secret>
    version: 1
```
## Managed Identity
```yaml
apiVersion: 1 # config file version

datasources:
  - name: Azure Monitor
    type: grafana-azure-monitor-datasource
    access: proxy
    jsonData:
      azureAuthType: msi
      subscriptionId: <subscription-id> # Optional, default subscription
    version: 1
```
## Supported cloud names
| Azure Cloud | Value |
| ------------------------------------------------ | -------------------------- |
| Microsoft Azure public cloud | `azuremonitor` (_default_) |
| Microsoft Chinese national cloud | `chinaazuremonitor` |
| US Government cloud | `govazuremonitor` |
| Microsoft German national cloud ("Black Forest") | `germanyazuremonitor` |
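For example, a provisioning sketch that targets the Azure China cloud; only the `cloudName` value differs from the client secret example above (the data source name here is illustrative):

```yaml
apiVersion: 1 # config file version

datasources:
  - name: Azure Monitor China # illustrative name
    type: grafana-azure-monitor-datasource
    access: proxy
    jsonData:
      azureAuthType: clientsecret
      cloudName: chinaazuremonitor
      tenantId: <tenant-id>
      clientId: <client-id>
    secureJsonData:
      clientSecret: <client-secret>
    version: 1
```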


@@ -0,0 +1,53 @@
+++
title = "Azure Monitor template variables"
description = "Using template variables with Azure Monitor in Grafana"
keywords = ["grafana", "microsoft", "azure", "monitor", "application", "insights", "log", "analytics", "guide"]
weight = 2
+++
# Template variables
Instead of hard-coding values for fields like resource group or resource name in your queries, you can use variables in their place to create more interactive, dynamic, and reusable dashboards.
Check out the [Templating]({{< relref "../../variables/_index.md" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
The Azure Monitor data source provides the following queries that you can specify in the Query field in the Variable edit view.
| Name | Description |
| ---------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------ |
| `Subscriptions()` | Returns subscriptions. |
| `ResourceGroups()` | Returns resource groups. |
| `ResourceGroups(subscriptionID)` | Returns resource groups for a specified subscription. |
| `Namespaces(aResourceGroup)` | Returns namespaces for the default subscription and specified resource group. |
| `Namespaces(subscriptionID, aResourceGroup)` | Returns namespaces for the specified subscription and resource group. |
| `ResourceNames(aResourceGroup, aNamespace)` | Returns a list of resource names. |
| `ResourceNames(subscriptionID, aResourceGroup, aNamespace)` | Returns a list of resource names for a specified subscription. |
| `MetricNamespace(aResourceGroup, aNamespace, aResourceName)` | Returns a list of metric namespaces. |
| `MetricNamespace(subscriptionID, aResourceGroup, aNamespace, aResourceName)` | Returns a list of metric namespaces for a specified subscription. |
| `MetricNames(aResourceGroup, aMetricDefinition, aResourceName, aMetricNamespace)` | Returns a list of metric names. |
| `MetricNames(aSubscriptionID, aMetricDefinition, aResourceName, aMetricNamespace)` | Returns a list of metric names for a specified subscription. |
| `workspaces()` | Returns a list of workspaces for the default subscription. |
| `workspaces(subscriptionID)` | Returns a list of workspaces for the specified subscription (the parameter can be quoted or unquoted). |
Where a subscription ID is not specified, the default subscription configured in the data source settings is used, so one must be set there.
Any Log Analytics KQL query that returns a single list of values can also be used in the Query field. For example:
| Query | Description |
| ----------------------------------------------------------------------------------------- | --------------------------------------------------------- |
| `workspace("myWorkspace").Heartbeat \| distinct Computer` | Returns a list of Virtual Machines |
| `workspace("$workspace").Heartbeat \| distinct Computer` | Returns a list of Virtual Machines with template variable |
| `workspace("$workspace").Perf \| distinct ObjectName` | Returns a list of objects from the Perf table |
| `workspace("$workspace").Perf \| where ObjectName == "$object"` `\| distinct CounterName` | Returns a list of metric names from the Perf table |
Example of a time series query using variables:
```kusto
Perf
| where ObjectName == "$object" and CounterName == "$metric"
| where TimeGenerated >= $__timeFrom() and TimeGenerated <= $__timeTo()
| where $__contains(Computer, $computer)
| summarize avg(CounterValue) by bin(TimeGenerated, $__interval), Computer
| order by TimeGenerated asc
```


@@ -25,6 +25,7 @@ You can change the following elements:
- Login title (will not appear if a login logo is set, Grafana v7.0+)
- Login subtitle (will not appear if a login logo is set, Grafana v7.0+)
- Login box background (Grafana v7.0+)
- Loading logo
> You will have to host your logo and other images used by the white labeling feature separately. Make sure Grafana can access the URL where the assets are stored.
@@ -62,6 +63,9 @@ The configuration file in Grafana Enterprise contains the following options. Eac
# Set to complete URL to override apple/ios icon
;apple_touch_icon =
# Set to complete URL to override loading logo
;loading_logo_url =
```
You can replace the default footer links (Documentation, Support, Community) and even add your own custom links.
An example follows for replacing the default footer and help links with new custom links.

View File

@@ -63,4 +63,4 @@ After you've navigated to Explore, you should notice a "Back" button in the Expl
> **Note:** Available in Grafana 7.3 and later versions.
The Share shortened link capability allows you to create smaller and simpler URLs of the format /goto/:uid instead of using longer URLs with query parameters. To create a shortened link, click the **Share** option in Explore toolbar. Any shortened links that are never used will be automatically deleted after 7 days.
The Share shortened link capability allows you to create smaller and simpler URLs of the format /goto/:uid instead of longer URLs with query parameters. To create a shortened link to the executed query, click the **Share** option in the Explore toolbar. A shortened link that is never used is automatically deleted after seven days.

View File

@@ -154,6 +154,7 @@ Status Codes:
There can be different reasons for this:
- The folder has been changed by someone else, `status=version-mismatch`
The response body will have the following properties:
```http

View File

@@ -1,6 +1,6 @@
+++
title = "Grafana Live"
aliases = []
aliases = ["/docs/grafana/latest/live/live-feature-overview/"]
weight = 115
+++
@@ -13,3 +13,35 @@ With Grafana Live, you can push event data to a frontend as soon as an event occ
This could be notifications about dashboard changes, new frames for rendered data, and so on. Live features can help eliminate page reloads or polling in many places; they can stream Internet of Things (IoT) sensor data or any other real-time data to panels.
> **Note:** By `real-time`, we mean soft real-time. Due to network latency, garbage collection cycles, and so on, the delay of a delivered message can be several hundred milliseconds or higher.
## Concepts
Grafana Live sends data to clients over a persistent WebSocket connection. The Grafana frontend subscribes to channels to receive data published into them; in other words, PUB/SUB mechanics are used. All subscriptions on a page are multiplexed inside a single WebSocket connection. There are some rules regarding Live channel names; see [Live channel]({{< relref "./live-channel.md" >}}).
Handling persistent connections like WebSocket connections at scale may require operating system and infrastructure tuning. That's why, by default, Grafana Live supports a maximum of 100 simultaneous connections. For details on how to tune this limit, refer to the [Live configuration section]({{< relref "configure-grafana-live.md" >}}).
## Features
Having a way to send data to clients in real time opens the road to new ways of data interaction and visualization. The sections below describe the Grafana Live features currently supported.
### Dashboard change notifications
As soon as there is a change to the dashboard layout, it is automatically reflected on other devices connected to Grafana Live.
### Data streaming from plugins
With Grafana Live, backend data source plugins can stream updates to frontend panels.
For data source plugin channels, Grafana uses the `ds` scope. The namespace for data source channels is the data source's unique ID (UID), which Grafana issues when the data source is created. The path is a custom string that plugin authors are free to choose themselves (just make sure it consists of allowed symbols).
For example, a data source channel looks like this: `ds/<DATASOURCE_UID>/<CUSTOM_PATH>`.
Refer to the tutorial about [building a streaming data source backend plugin](https://grafana.com/tutorials/build-a-streaming-data-source-plugin/) for more details.
The basic streaming example included in Grafana core streams frames with generated data to a panel. To try it, create a new panel and point it to the `-- Grafana --` data source. Next, choose `Live Measurements` and select the `plugin/testdata/random-20Hz-stream` channel.
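The `scope/namespace/path` shape of channel addresses described above can be sketched as a small parser. This is a hypothetical helper for illustration (the `parseLiveChannel` name, the `LiveChannel` shape, and the exact allowed-symbol rule are assumptions, not Grafana's implementation):

```typescript
// Sketch of parsing a Grafana Live channel address of the form
// "<scope>/<namespace>/<path>", e.g. "ds/<DATASOURCE_UID>/<CUSTOM_PATH>".
// Hypothetical helper; the real channel rules live in Grafana's source.
interface LiveChannel {
  scope: string;     // e.g. "ds" for data source channels
  namespace: string; // e.g. the data source UID
  path: string;      // custom string chosen by the plugin author
}

function parseLiveChannel(address: string): LiveChannel | null {
  // Split on slashes; the path may itself contain further slashes.
  const parts = address.split("/");
  if (parts.length < 3) {
    return null; // a channel needs scope, namespace, and path
  }
  const [scope, namespace, ...rest] = parts;
  const path = rest.join("/");
  // Assumed "allowed symbols" check: alphanumerics plus a few separators.
  const ok = /^[A-Za-z0-9_\-.\/=]+$/;
  if (!scope || !namespace || !path || !ok.test(path)) {
    return null;
  }
  return { scope, namespace, path };
}
```

For example, `parseLiveChannel("ds/abc123/my-stream")` would yield scope `ds` with `abc123` as the namespace.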
### Data streaming from Telegraf
A new API endpoint, `/api/live/push/:streamId`, accepts metrics data in Influx format from Telegraf. These metrics are transformed into Grafana data frames and published to channels.
Refer to the tutorial about [streaming metrics from Telegraf to Grafana](https://grafana.com/tutorials/stream-metrics-from-telegraf-to-grafana/) for more information.
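For a rough idea of the input format, a single Influx line-protocol point looks like `cpu,host=web01 usage_idle=98.1 1625000000000000000`. A deliberately simplified parsing sketch (the real protocol also supports escaping, quoting, and typed field values, which this illustration ignores):

```typescript
// Minimal sketch of parsing one Influx line-protocol point, the input
// format Telegraf sends to /api/live/push/:streamId. Simplified: it
// ignores the escaping and quoting that the real protocol allows.
interface InfluxPoint {
  measurement: string;
  tags: Record<string, string>;
  fields: Record<string, number>;
  timestamp?: number; // nanoseconds since epoch, if present
}

function parseInfluxLine(line: string): InfluxPoint {
  // A point has three space-separated sections:
  // measurement+tags, fields, optional timestamp.
  const [head, fieldPart, tsPart] = line.trim().split(" ");
  const [measurement, ...tagPairs] = head.split(",");
  const tags: Record<string, string> = {};
  for (const pair of tagPairs) {
    const [k, v] = pair.split("=");
    tags[k] = v;
  }
  const fields: Record<string, number> = {};
  for (const pair of fieldPart.split(",")) {
    const [k, v] = pair.split("=");
    fields[k] = parseFloat(v);
  }
  return {
    measurement,
    tags,
    fields,
    timestamp: tsPart ? Number(tsPart) : undefined,
  };
}
```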

View File

@@ -19,6 +19,14 @@ The number of maximum WebSocket connections users can establish with Grafana is
If you want to increase this limit, make sure that your server and infrastructure can handle more connections. The following sections discuss several common problems that can occur when managing persistent connections, in particular WebSocket connections.
## Request origin check
To prevent hijacking of WebSocket connections, Grafana Live checks the Origin request header sent by the client in the HTTP Upgrade request. Requests without an Origin header pass through without any origin check.
By default, Live accepts connections whose Origin header matches the configured [root_url]({{< relref "../administration/configuration.md#root_url" >}}) (the public Grafana URL).
It is possible to provide a list of additional origin patterns to allow WebSocket connections from. This can be achieved using the [allowed_origins]({{< relref "../administration/configuration.md#allowed_origins" >}}) option of Grafana Live configuration.
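The origin check described above boils down to a simple comparison, sketched here as a hypothetical `originAllowed` helper (illustrative only; Grafana's actual check also supports pattern matching in `allowed_origins`):

```typescript
// Sketch of the Origin check described above. A request passes if it has
// no Origin header, or if the Origin matches the configured root URL's
// origin or one of the extra allowed_origins entries.
function originAllowed(
  origin: string | undefined,
  rootUrl: string,
  allowedOrigins: string[] = []
): boolean {
  if (origin === undefined) {
    return true; // requests without an Origin header pass through
  }
  // Compare only the origin part of root_url (scheme + host + port).
  const rootOrigin = new URL(rootUrl).origin;
  if (origin === rootOrigin) {
    return true;
  }
  return allowedOrigins.includes(origin);
}
```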
### Resource usage
Each persistent connection costs some memory on the server: currently, about 50 KB per connection. Thus a server with 1 GB of RAM can be expected to handle at most about 20,000 connections. Each active connection also consumes additional CPU resources, since the client and server exchange PING/PONG frames to maintain the connection.
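The capacity estimate above is plain arithmetic, assuming the rough 50 KB-per-connection figure:

```typescript
// Back-of-the-envelope estimate from the figures above: at roughly 50 KB
// of memory per persistent connection, how many connections fit in RAM?
function maxConnections(ramBytes: number, perConnBytes = 50 * 1024): number {
  return Math.floor(ramBytes / perConnBytes);
}

// 1 GB of RAM at ~50 KB per connection is on the order of 20k connections.
const estimate = maxConnections(1024 * 1024 * 1024);
```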

View File

@@ -1,32 +0,0 @@
+++
title = "Live feature overview"
description = "Grafana Live feature overview"
keywords = ["Grafana", "live", "guide"]
weight = 100
+++
# Grafana Live feature overview
This topic explains the current Grafana Live capabilities.
## Dashboard change notifications
As soon as there is a change to the dashboard layout, it is automatically reflected on other devices connected to Grafana Live.
## Data streaming from plugins
With Grafana Live, backend data source plugins can stream updates to frontend panels.
For data source plugin channels, Grafana uses the `ds` scope. The namespace for data source channels is the data source's unique ID (UID), which Grafana issues when the data source is created. The path is a custom string that plugin authors are free to choose themselves (just make sure it consists of allowed symbols).
For example, a data source channel looks like this: `ds/<DATASOURCE_UID>/<CUSTOM_PATH>`.
Refer to the tutorial about [building a streaming data source backend plugin](https://grafana.com/tutorials/build-a-streaming-data-source-plugin/) for more details.
The basic streaming example included in Grafana core streams frames with generated data to a panel. To try it, create a new panel and point it to the `-- Grafana --` data source. Next, choose `Live Measurements` and select the `plugin/testdata/random-20Hz-stream` channel.
## Data streaming from Telegraf
A new API endpoint, `/api/live/push/:streamId`, accepts metrics data in Influx format from Telegraf. These metrics are transformed into Grafana data frames and published to channels.
Refer to the tutorial about [streaming metrics from Telegraf to Grafana](https://grafana.com/tutorials/stream-metrics-from-telegraf-to-grafana/) for more information.

View File

@@ -1,36 +0,0 @@
+++
title = "Field options and overrides"
keywords = ["grafana", "field options", "documentation", "format fields"]
aliases = ["/docs/grafana/latest/panels/field-configuration-options/", "/docs/grafana/latest/panels/field-options/"]
weight = 500
+++
# Field options and overrides
This section explains what field options and field overrides in Grafana are and how to use them. It also includes [examples](#examples) if you need an idea of how this feature might be useful in the real world.
The data model used in Grafana, the [data frame]({{< relref "../../developers/plugins/data-frames.md" >}}), is a columnar-oriented table structure that unifies both time series and table query results. Each column within this structure is called a _field_. A field can represent a single time series or table column.
Field options allow you to change how the data is displayed in your visualizations. Options and overrides that you apply do not change the data, they change how Grafana displays the data.
## Field options
_Field options_, both standard and custom, can be found in the Field tab in the panel editor. Changes on this tab apply to all fields (i.e. series/columns). For example, if you change the unit to percentage, then all fields with numeric values are displayed in percentages. Learn how to apply a field option in [Configure all fields]({{< relref "configure-all-fields.md" >}}).
## Field overrides
_Field overrides_ can be added in the Overrides tab in the panel editor. There you can add the same options as you find in the Field tab, but they are only applied to specific fields. Learn how to apply an override in [Configure specific fields]({{< relref "configure-specific-fields.md" >}}).
## Available field options and overrides
Field option types are common to both field options and field overrides. The only difference is whether the change will apply to all fields (apply in the Field tab) or to a subset of fields (apply in the Overrides tab).
- [Standard options]({{< relref "../standard-options.md" >}}) apply to all panel visualizations that allow transformations.
- [Table field options]({{< relref "../visualizations/table/table-field-options.md" >}}), which only apply to table panel visualizations.
## Examples
Here are some examples of how you might use this feature:
- [Field option example]({{< relref "configure-all-fields.md#field-option-example" >}})
- [Field override example]({{< relref "configure-specific-fields.md#field-override-example" >}})

View File

@@ -1,51 +0,0 @@
+++
title = "Configure all fields"
keywords = ["grafana", "field options", "documentation", "format fields", "change all fields"]
weight = 200
+++
# Configure all fields
To change how all fields display data, you can change an option in the Field tab. In the Overrides tab, you can then override the field options for [specific fields]({{< relref "configure-specific-fields.md" >}}).
For example, you could change the number of decimal places shown in all fields by changing the **Decimals** option. For more information about options, refer to:
- [Standard options]({{< relref "../standard-options.md" >}}), which apply to all visualizations that allow transformations.
- [Table field options]({{< relref "../visualizations/table/table-field-options.md" >}}), which only apply to table panel visualizations.
## Change a field option
You can change as many options as you want to.
1. Navigate to the panel you want to edit, click the panel title, and then click **Edit**.
1. Click the **Field** tab.
1. Find the option you want to change. You can define:
- [Standard options]({{< relref "../standard-options.md" >}}), which apply to all panel visualizations that allow transformations.
- [Table field options]({{< relref "../visualizations/table/table-field-options.md" >}}), which only apply to table panel visualizations.
1. Add options by adding values in the fields. To return options to default values, delete the white text in the fields.
1. When finished, click **Save** to save all panel edits to the dashboard.
## Field option example
Let's assume that our result set is a data frame that consists of two fields: time and temperature.
| time | temperature |
| :-----------------: | :---------: |
| 2020-01-02 03:04:00 | 45.0 |
| 2020-01-02 03:05:00 | 47.0 |
| 2020-01-02 03:06:00 | 48.0 |
Each field (column) of this structure can have field options applied that alter the way its values are displayed. This means that you can, for example, set the Unit to Temperature > Celsius, resulting in the following table:
| time | temperature |
| :-----------------: | :---------: |
| 2020-01-02 03:04:00 | 45.0 °C |
| 2020-01-02 03:05:00 | 47.0 °C |
| 2020-01-02 03:06:00 | 48.0 °C |
While we're at it, the decimal place doesn't add anything to this display. You can change the Decimals from `auto` to zero (`0`), resulting in the following table:
| time | temperature |
| :-----------------: | :---------: |
| 2020-01-02 03:04:00 | 45 °C |
| 2020-01-02 03:05:00 | 47 °C |
| 2020-01-02 03:06:00 | 48 °C |

View File

@@ -1,64 +0,0 @@
+++
title = "Configure specific fields"
keywords = ["grafana", "field options", "documentation", "format fields", "overrides", "override fields"]
weight = 300
+++
# Configure specific fields
Overrides allow you to change the settings for one or more fields. Field options for overrides are exactly the same as the field options available in a particular visualization. The only difference is that you choose which fields to apply them to.
For example, you could change the number of decimal places shown in all numeric fields or columns by changing the **Decimals** option for **Fields with type** that matches **Numeric**. For more information about options, refer to:
- [Standard options]({{< relref "../standard-options.md" >}}), which apply to all panel visualizations that allow transformations.
- [Table field options]({{< relref "../visualizations/table/table-field-options.md" >}}), which only apply to table panel visualizations.
## Add a field override
You can override as many field options as you want to.
1. Navigate to the panel you want to edit, click the panel title, and then click **Edit**.
1. Click the **Overrides** tab.
1. Click **Add an override for**.
1. Select which fields an override rule will be applied to:
- **Fields with name -** Select a field from the list of all available fields. Properties you add to a rule with this selector are only applied to this single field.
- **Fields with name matching regex -** Specify fields to override with a regular expression. Properties you add to a rule with this selector are applied to all fields where the field name matches the regex.
- **Fields with type -** Select fields by type, such as string, numeric, and so on. Properties you add to a rule with this selector are applied to all fields that match the selected type.
- **Fields returned by query -** Select all fields returned by a specific query, such as A, B, or C. Properties you add to a rule with this selector are applied to all fields returned by the selected query.
1. Click **Add override property**.
1. Select the field option that you want to apply.
- [Standard options]({{< relref "../standard-options.md" >}}), which apply to all panel visualizations that allow transformations.
- [Table field options]({{< relref "../visualizations/table/table-field-options.md" >}}), which only apply to table panel visualizations.
1. Enter options by adding values in the fields. To return options to default values, delete the white text in the fields.
1. Continue to add overrides to this field by clicking **Add override property**, or you can click **Add override** and select a different field to add overrides to.
1. When finished, click **Save** to save all panel edits to the dashboard.
## Delete a field override
1. Navigate to the Overrides tab that contains the override that you want to delete.
1. Click the trash can icon next to the override.
## Field override example
Let's assume that our result set is a data frame that consists of four fields: time, high temp, low temp, and humidity.
| time | high temp | low temp | humidity |
| ------------------- | --------- | -------- | -------- |
| 2020-01-02 03:04:00 | 45.0 | 30.0 | 67 |
| 2020-01-02 03:05:00 | 47.0 | 34.0 | 68 |
| 2020-01-02 03:06:00 | 48.0 | 31.0 | 68 |
Let's apply the field options from the [field option example]({{< relref "configure-all-fields.md#field-option-example" >}}) to apply the Celsius unit and get rid of the decimal place. This results in the following table:
| time | high temp | low temp | humidity |
| ------------------- | --------- | -------- | -------- |
| 2020-01-02 03:04:00 | 45 °C | 30 °C | 67 °C |
| 2020-01-02 03:05:00 | 47 °C | 34 °C | 68 °C |
| 2020-01-02 03:06:00 | 48 °C | 31 °C | 68 °C |
The temperature fields look good, but the humidity is nonsensical. We can fix this by applying a field option override to the humidity field and change the unit to Misc > percent (0-100). This results in a table that makes a lot more sense:
| time | high temp | low temp | humidity |
| ------------------- | --------- | -------- | -------- |
| 2020-01-02 03:04:00 | 45 °C | 30 °C | 67% |
| 2020-01-02 03:05:00 | 47 °C | 34 °C | 68% |
| 2020-01-02 03:06:00 | 48 °C | 31 °C | 68% |

View File

@@ -101,6 +101,32 @@ Panel data source query options:
- **Cache timeout -** (This field is only visible if available in your data source.) If your time series store has a query cache, then this option can override the default cache timeout. Specified as a numeric value in seconds.
### Examples:
- **Relative time:**
| Example | Relative time field |
| ------------------- | --------------------|
| Last 5 minutes | `now-5m` |
| The day so far | `now/d` |
| Last 5 days | `now-5d/d` |
| This week so far | `now/w` |
| Last 2 years | `now-2y/y` |
- **Time shift:**
| Example | Time shift field |
| ------------------- | --------------------|
| Last entire week | `1w/w` |
| Two entire weeks ago | `2w/w` |
| Last entire month | `1M/M` |
| This entire year | `1d/y` |
| Last entire year | `1y/y` |
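The `now-5d/d`-style expressions in these tables combine an optional `now` anchor, an optional signed offset, and an optional snap-to unit after `/`. A small decomposition sketch (the `parseRelativeTime` helper is hypothetical; Grafana's real time-range parser handles more cases):

```typescript
// Decompose a relative-time expression like "now-5d/d" into its parts:
// an optional "now" anchor, an optional signed offset, and an optional
// snap-to unit. Illustrative sketch only.
interface RelativeTime {
  anchor: boolean;                           // expression starts with "now"
  offset?: { amount: number; unit: string }; // e.g. { amount: -5, unit: "d" }
  snapTo?: string;                           // e.g. "d" for "start of day"
}

function parseRelativeTime(expr: string): RelativeTime | null {
  const m = /^(now)?(?:(-?\d+)([smhdwMy]))?(?:\/([smhdwMy]))?$/.exec(expr);
  if (!m || (m[1] === undefined && m[2] === undefined && m[4] === undefined)) {
    return null; // nothing recognizable in the expression
  }
  return {
    anchor: m[1] === "now",
    offset: m[2] !== undefined ? { amount: Number(m[2]), unit: m[3] } : undefined,
    snapTo: m[4],
  };
}
```

For example, `1w/w` from the time shift table decomposes into a `1w` offset snapped to the start of the week, with no `now` anchor.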
### Query inspector button
You can click **Query inspector** to open the Query tab of the panel inspector where you can see the query request sent by the panel and the response.
@@ -129,4 +155,4 @@ You can:
If your data source supports them, then Grafana displays the **Expression** button and shows any existing expressions in the query editor list.
For more information about expressions, refer to [Expressions]({{< relref "expressions.md" >}}).
For more information about expressions, refer to [Expressions]({{< relref "expressions.md" >}}).

View File

@@ -46,7 +46,7 @@ Choose a stacking direction.
- **Auto -** Grafana selects what it thinks is the best orientation.
- **Horizontal -** Bars stretch horizontally, left to right.
- **Vertical -** Bars stretch vertically, top to bottom.
- **Vertical -** Bars stretch vertically, bottom to top.
### Display mode

View File

@@ -8,6 +8,8 @@ weight = 10000
Here you can find detailed release notes that list everything included in every release, as well as notices about deprecations, breaking changes, and changes that relate to plugin development.
- [Release notes for 8.0.5]({{< relref "release-notes-8-0-5" >}})
- [Release notes for 8.0.4]({{< relref "release-notes-8-0-4" >}})
- [Release notes for 8.0.3]({{< relref "release-notes-8-0-3" >}})
- [Release notes for 8.0.2]({{< relref "release-notes-8-0-2" >}})
- [Release notes for 8.0.1]({{< relref "release-notes-8-0-1" >}})

View File

@@ -0,0 +1,25 @@
+++
title = "Release notes for Grafana 8.0.4"
[_build]
list = false
+++
<!-- Auto generated by update changelog github action -->
# Release notes for Grafana 8.0.4
### Features and enhancements
* **Live:** Rely on app url for origin check. [#35983](https://github.com/grafana/grafana/pull/35983), [@FZambia](https://github.com/FZambia)
* **PieChart:** Sort legend descending, update placeholder to show default …. [#36062](https://github.com/grafana/grafana/pull/36062), [@ashharrison90](https://github.com/ashharrison90)
* **TimeSeries panel:** Do not reinitialize plot when thresholds mode change. [#35952](https://github.com/grafana/grafana/pull/35952), [@dprokop](https://github.com/dprokop)
### Bug fixes
* **Elasticsearch:** Allow case sensitive custom options in date_histogram interval. [#36168](https://github.com/grafana/grafana/pull/36168), [@Elfo404](https://github.com/Elfo404)
* **Elasticsearch:** Restore previous field naming strategy when using variables. [#35624](https://github.com/grafana/grafana/pull/35624), [@Elfo404](https://github.com/Elfo404)
* **Explore:** Fix import of queries between SQL data sources. [#36210](https://github.com/grafana/grafana/pull/36210), [@ivanahuckova](https://github.com/ivanahuckova)
* **InfluxDB:** InfluxQL query editor: fix retention policy handling. [#36022](https://github.com/grafana/grafana/pull/36022), [@gabor](https://github.com/gabor)
* **Loki:** Send correct time range in template variable queries. [#36268](https://github.com/grafana/grafana/pull/36268), [@ivanahuckova](https://github.com/ivanahuckova)
* **TimeSeries:** Preserve RegExp series overrides when migrating from old graph panel. [#36134](https://github.com/grafana/grafana/pull/36134), [@ashharrison90](https://github.com/ashharrison90)

View File

@@ -0,0 +1,28 @@
+++
title = "Release notes for Grafana 8.0.5"
[_build]
list = false
+++
<!-- Auto generated by update changelog github action -->
# Release notes for Grafana 8.0.5
### Features and enhancements
* **Cloudwatch Logs:** Send error down to client. [#36277](https://github.com/grafana/grafana/pull/36277), [@zoltanbedi](https://github.com/zoltanbedi)
* **Folders:** Return 409 Conflict status when folder already exists. [#36429](https://github.com/grafana/grafana/pull/36429), [@dsotirakis](https://github.com/dsotirakis)
* **TimeSeries:** Do not show series in tooltip if it's hidden in the viz. [#36353](https://github.com/grafana/grafana/pull/36353), [@dprokop](https://github.com/dprokop)
### Bug fixes
* **AzureMonitor:** Fix issue where resource group name is missing on the resource picker button. [#36400](https://github.com/grafana/grafana/pull/36400), [@joshhunt](https://github.com/joshhunt)
* **Chore:** Fix AWS auth assuming role with workspace IAM. [#36430](https://github.com/grafana/grafana/pull/36430), [@wbrowne](https://github.com/wbrowne)
* **DashboardQueryRunner:** Fixes unrestrained subscriptions being created. [#36371](https://github.com/grafana/grafana/pull/36371), [@hugohaggmark](https://github.com/hugohaggmark)
* **DateFormats:** Fix reading correct setting key for use_browser_locale. [#36428](https://github.com/grafana/grafana/pull/36428), [@torkelo](https://github.com/torkelo)
* **Links:** Fix links to other apps outside Grafana when under sub path. [#36498](https://github.com/grafana/grafana/pull/36498), [@torkelo](https://github.com/torkelo)
* **Snapshots:** Fix snapshot absolute time range issue. [#36350](https://github.com/grafana/grafana/pull/36350), [@torkelo](https://github.com/torkelo)
* **Table:** Fix data link color. [#36446](https://github.com/grafana/grafana/pull/36446), [@tharun208](https://github.com/tharun208)
* **Time Series:** Fix X-axis time format when tick increment is larger than a year. [#36335](https://github.com/grafana/grafana/pull/36335), [@torkelo](https://github.com/torkelo)
* **Tooltip Plugin:** Prevent Tooltip render if field is undefined. [#36260](https://github.com/grafana/grafana/pull/36260), [@ashharrison90](https://github.com/ashharrison90)

View File

@@ -72,6 +72,10 @@ This variable is the ID of the current organization.
Currently only supported for Prometheus data sources. This variable represents the range of the current dashboard, calculated as `to - from`. It has millisecond and second representations, called `$__range_ms` and `$__range_s`.
## $__rate_interval
Currently only supported for Prometheus data sources. The `$__rate_interval` variable is meant to be used in the rate function. Refer to [Prometheus query variables]({{< relref "../../datasources/prometheus.md">}}) for details.
## $timeFilter or $__timeFilter
The `$timeFilter` variable returns the currently selected time range as an expression. For example, for the time range `Last 7 days`, the expression is `time > now() - 7d`.

View File

@@ -142,7 +142,7 @@ For more information, refer to the [Elasticsearch docs]({{<relref "../datasource
The Azure Monitor query type was renamed to Metrics and Azure Logs Analytics was renamed to Logs to match the service names in Azure and align the concepts with the rest of Grafana.
[Azure Monitor]({{< relref "../datasources/azuremonitor.md" >}}) was updated to reflect this change.
[Azure Monitor]({{< relref "../datasources/azuremonitor/_index.md" >}}) was updated to reflect this change.
### MQL support added for Google Cloud Monitoring

View File

@@ -74,7 +74,7 @@ In the upcoming Grafana 8.0 release, Application Insights and Insights Analytics
Grafana 7.5 includes a deprecation notice for these queries, and some documentation to help users prepare for the upcoming changes.
For more information, refer to [Deprecating Application Insights and Insights Analytics]({{< relref "../datasources/azuremonitor.md#deprecating-application-insights-and-insights-analytics" >}}).
For more information, refer to [Deprecating Application Insights and Insights Analytics]({{< relref "../datasources/azuremonitor/_index.md#deprecating-application-insights-and-insights-analytics" >}}).
### Cloudwatch data source enhancements
@@ -98,7 +98,7 @@ server:
http_listen_port: 3101
```
[Azure Monitor data source]({{< relref "../datasources/azuremonitor.md" >}}) was updated as a result of this change.
[Azure Monitor data source]({{< relref "../datasources/azuremonitor/_index.md" >}}) was updated as a result of this change.
## Enterprise features
@@ -109,6 +109,7 @@ These features are included in the Grafana Enterprise edition.
When caching is enabled, Grafana temporarily stores the results of data source queries. When you or another user submit the same query again, the results return from the cache instead of from the data source (such as Splunk or ServiceNow).
Query caching advantages:
- Faster dashboard load times, especially for popular dashboards.
- Reduced API costs.
- Reduced likelihood that APIs will rate-limit or throttle requests.

View File

@@ -22,7 +22,7 @@ The new alerts in Grafana 8.0 are an opt-in feature that centralizes alerting in
As part of the new alert changes, we have introduced a new data source, Alertmanager, which includes built-in support for Prometheus Alertmanager. It is presently in alpha and is not accessible unless alpha plugins are enabled in Grafana settings. For more information, refer to [Alertmanager data source]({{< relref "../datasources/alertmanager.md" >}}).
> **Note:** Out of the box, Grafana still supports old Grafana alerts. They are legacy alerts at this time, and will be deprecated in a future release.
> **Note:** Out of the box, Grafana still supports old Grafana alerts. They are legacy alerts at this time, and will be deprecated in a future release.
To learn more about the differences between new alerts and the legacy alerts, refer to [What's New with Grafana 8 Alerts]({{< relref "../alerting/difference-old-new.md" >}}).
@@ -180,9 +180,9 @@ The Azure Monitor data source now supports Managed Identity for users hosting Gr
Also, in addition to querying Log Analytics Workspaces, you can now query the logs for any individual [supported resource](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported), or for all resources in a subscription or resource group.
> **Note:** In Grafana 7.5 we started the deprecation for separate Application Insights queries, in favor of querying Application Insights resources through Metrics and Logs. In Grafana 8.0 new Application Insights and Insights Analytics queries cannot be made, and existing queries have been made read-only. For more details, refer to the [Deprecating Application Insights]({{< relref "../datasources/azuremonitor.md#deprecating-application-insights" >}}.
> **Note:** In Grafana 7.5 we started the deprecation for separate Application Insights queries, in favor of querying Application Insights resources through Metrics and Logs. In Grafana 8.0 new Application Insights and Insights Analytics queries cannot be made, and existing queries have been made read-only. For more details, refer to [Deprecating Application Insights]({{< relref "../datasources/azuremonitor/_index.md#deprecating-application-insights" >}}).
[Azure Monitor data source]({{< relref "../datasources/azuremonitor.md" >}}) was updated as a result of these changes.
[Azure Monitor data source]({{< relref "../datasources/azuremonitor/_index.md" >}}) was updated as a result of these changes.
#### Elasticsearch data source

go.mod
View File

@@ -50,8 +50,7 @@ require (
github.com/google/uuid v1.2.0
github.com/gorilla/websocket v1.4.2
github.com/gosimple/slug v1.9.0
github.com/grafana/grafana-aws-sdk v0.4.0
github.com/grafana/grafana-live-sdk v0.0.6
github.com/grafana/grafana-aws-sdk v0.7.0
github.com/grafana/grafana-plugin-sdk-go v0.105.0
github.com/grafana/loki v1.6.2-0.20210520072447-15d417efe103
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0
@@ -60,6 +59,7 @@ require (
github.com/hashicorp/go-version v1.3.0
github.com/inconshreveable/log15 v0.0.0-20180818164646-67afb5ed74ec
github.com/influxdata/influxdb-client-go/v2 v2.2.3
github.com/influxdata/line-protocol v0.0.0-20210311194329-9aa0e372d097
github.com/jmespath/go-jmespath v0.4.0
github.com/json-iterator/go v1.1.11
github.com/jung-kurt/gofpdf v1.16.2

go.sum
View File

@@ -294,8 +294,6 @@ github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/centrifugal/centrifuge v0.17.0 h1:ANZMhcR8pFbRUPdv45nrIhhZcsSOdtshT3YM4v1/NHY=
github.com/centrifugal/centrifuge v0.17.0/go.mod h1:AEFs3KPGRpvX1jCe24NDlGWQu7DPa7vdzeY/aUluOm0=
github.com/centrifugal/centrifuge-go v0.7.1/go.mod h1:G8cXpoTVd8l6CMHh9LWyUJOEfu6cjrm4SGdT36E15Hc=
github.com/centrifugal/protocol v0.3.5/go.mod h1:2YbBCaDwQHl37ErRdMrKSj18X2yVvpkQYtSX6aVbe5A=
github.com/centrifugal/protocol v0.5.0 h1:h71u2Q53yhplftmUk1tjc+Mu6TKJ/eO3YRD3h7Qjvj4=
github.com/centrifugal/protocol v0.5.0/go.mod h1:ru2N4pwiND/jE+XLtiLYbUo3YmgqgniGNW9f9aRgoVI=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
@@ -914,12 +912,9 @@ github.com/gosimple/slug v1.9.0 h1:r5vDcYrFz9BmfIAMC829un9hq7hKM4cHUrsv36LbEqs=
github.com/gosimple/slug v1.9.0/go.mod h1:AMZ+sOVe65uByN3kgEyf9WEBKBCSS+dJjMX9x4vDJbg=
github.com/grafana/go-mssqldb v0.0.0-20210326084033-d0ce3c521036 h1:GplhUk6Xes5JIhUUrggPcPBhOn+eT8+WsHiebvq7GgA=
github.com/grafana/go-mssqldb v0.0.0-20210326084033-d0ce3c521036/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
github.com/grafana/grafana-aws-sdk v0.4.0 h1:JmTaXfOJ/ydHSWH9kEt8Yhfb9kAhIW4LUOO3SWCviYg=
github.com/grafana/grafana-aws-sdk v0.4.0/go.mod h1:+pPo5U+pX0zWimR7YBc7ASeSQfbRkcTyQYqMiAj7G5U=
github.com/grafana/grafana-live-sdk v0.0.6 h1:P1QFn0ZradOJp3zVpfG0STZMP+pgZrW0e0zvpqOrYVI=
github.com/grafana/grafana-live-sdk v0.0.6/go.mod h1:f15hHmWyLdFjmuWLsjeKeZnq/HnNQ3QkoPcaEww45AY=
github.com/grafana/grafana-aws-sdk v0.7.0 h1:D+Lhxi3P/7vpyDHUK/fdX9bL2mRz8hLG04ucNf1E02o=
github.com/grafana/grafana-aws-sdk v0.7.0/go.mod h1:+pPo5U+pX0zWimR7YBc7ASeSQfbRkcTyQYqMiAj7G5U=
github.com/grafana/grafana-plugin-sdk-go v0.79.0/go.mod h1:NvxLzGkVhnoBKwzkst6CFfpMFKwAdIUZ1q8ssuLeF60=
github.com/grafana/grafana-plugin-sdk-go v0.91.0/go.mod h1:Ot3k7nY7P6DXmUsDgKvNB7oG1v7PRyTdmnYVoS554bU=
github.com/grafana/grafana-plugin-sdk-go v0.105.0 h1:I0r88FtnXkWw4F0t36cmRCupizY4cPkK+6PKKqbyx9Q=
github.com/grafana/grafana-plugin-sdk-go v0.105.0/go.mod h1:D7x3ah+1d4phNXpbnOaxa/osSaZlwh9/ZUnGGzegRbk=
github.com/grafana/loki v1.6.2-0.20210520072447-15d417efe103 h1:qCmofFVwQR9QnsinstVqI1NPLMVl33jNCnOCXEAVn6E=

View File

@@ -4,5 +4,5 @@
"packages": [
"packages/*"
],
"version": "8.0.4"
"version": "8.0.6"
}

View File

@@ -3,7 +3,7 @@
"license": "AGPL-3.0-only",
"private": true,
"name": "grafana",
"version": "8.0.4",
"version": "8.0.6",
"repository": "github:grafana/grafana",
"scripts": {
"api-tests": "jest --notify --watch --config=devenv/e2e-api-tests/jest.js",

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/data",
"version": "8.0.4",
"version": "8.0.6",
"description": "Grafana Data Library",
"keywords": [
"typescript"

View File

@@ -23,4 +23,5 @@ export enum DataTransformerID {
groupBy = 'groupBy',
sortBy = 'sortBy',
histogram = 'histogram',
prepareTimeSeries = 'prepareTimeSeries',
}

View File

@@ -124,6 +124,7 @@ export interface PanelEditorProps<T = any> {
export interface PanelModel<TOptions = any> {
/** ID of the panel within the current dashboard */
id: number;
alert?: any;
/** Panel options */
options: TOptions;
/** Field options configuration */

View File

@@ -57,7 +57,7 @@ describe('locationUtil', () => {
});
test('absolute url with subdirectory subUrl', () => {
const urlWithoutMaster = locationUtil.stripBaseFromUrl('http://www.domain.com:9877/thisShouldRemain/subUrl/');
expect(urlWithoutMaster).toBe('/thisShouldRemain/subUrl/');
expect(urlWithoutMaster).toBe('http://www.domain.com:9877/thisShouldRemain/subUrl/');
});
});

View File

@@ -17,16 +17,10 @@ const stripBaseFromUrl = (url: string): string => {
const isAbsoluteUrl = url.startsWith('http');
let segmentToStrip = appSubUrl;
if (!url.startsWith('/')) {
if (!url.startsWith('/') || isAbsoluteUrl) {
segmentToStrip = `${window.location.origin}${appSubUrl}`;
}
if (isAbsoluteUrl) {
segmentToStrip = url.startsWith(`${window.location.origin}${appSubUrl}`)
? `${window.location.origin}${appSubUrl}`
: `${window.location.origin}`;
}
return url.length > 0 && url.indexOf(segmentToStrip) === 0 ? url.slice(segmentToStrip.length - stripExtraChars) : url;
};
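The simplified `stripBaseFromUrl` logic above can be sketched in Go for illustration (the real implementation is TypeScript and also trims `stripExtraChars`; `stripBase`, `origin`, and `appSubUrl` are hypothetical names here): absolute URLs are stripped only when they begin with origin plus sub-URL, and anything that does not match the prefix is returned unchanged, which is what the updated test expects.

```go
package main

import (
	"fmt"
	"strings"
)

// stripBase sketches the fixed behavior: for URLs not starting with "/"
// (i.e. absolute URLs), strip origin+appSubUrl; otherwise strip only
// appSubUrl. Non-matching URLs are returned unchanged.
func stripBase(url, origin, appSubUrl string) string {
	segmentToStrip := appSubUrl
	if !strings.HasPrefix(url, "/") {
		segmentToStrip = origin + appSubUrl
	}
	if segmentToStrip != "" && strings.HasPrefix(url, segmentToStrip) {
		return url[len(segmentToStrip):]
	}
	return url
}

func main() {
	// Absolute URL on a different origin stays intact.
	fmt.Println(stripBase("http://www.domain.com:9877/thisShouldRemain/subUrl/", "http://localhost:3000", "/grafana"))
	// Relative URL under the sub-path gets the prefix stripped.
	fmt.Println(stripBase("/grafana/d/abc", "http://localhost:3000", "/grafana"))
}
```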

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/e2e-selectors",
"version": "8.0.4",
"version": "8.0.6",
"description": "Grafana End-to-End Test Selectors Library",
"keywords": [
"cli",

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/e2e",
"version": "8.0.4",
"version": "8.0.6",
"description": "Grafana End-to-End Test Library",
"keywords": [
"cli",
@@ -44,7 +44,7 @@
"types": "src/index.ts",
"dependencies": {
"@cypress/webpack-preprocessor": "4.1.3",
"@grafana/e2e-selectors": "8.0.4",
"@grafana/e2e-selectors": "8.0.6",
"@grafana/tsconfig": "^1.0.0-rc1",
"@mochajs/json-file-reporter": "^1.2.0",
"blink-diff": "1.0.13",

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/runtime",
"version": "8.0.4",
"version": "8.0.6",
"description": "Grafana Runtime Library",
"keywords": [
"grafana",
@@ -22,9 +22,9 @@
"typecheck": "tsc --noEmit"
},
"dependencies": {
"@grafana/data": "8.0.4",
"@grafana/e2e-selectors": "8.0.4",
"@grafana/ui": "8.0.4",
"@grafana/data": "8.0.6",
"@grafana/e2e-selectors": "8.0.6",
"@grafana/ui": "8.0.6",
"history": "4.10.1",
"systemjs": "0.20.19",
"systemjs-plugin-css": "0.1.37"

View File

@@ -31,9 +31,10 @@ export class HistoryWrapper implements LocationService {
constructor(history?: H.History) {
// If no history is passed, create an in-memory one (used when called from tests)
this.history =
history || process.env.NODE_ENV === 'test'
history ||
(process.env.NODE_ENV === 'test'
? H.createMemoryHistory({ initialEntries: ['/'] })
: H.createBrowserHistory({ basename: config.appSubUrl ?? '/' });
: H.createBrowserHistory({ basename: config.appSubUrl ?? '/' }));
this.partial = this.partial.bind(this);
this.push = this.push.bind(this);

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/toolkit",
"version": "8.0.4",
"version": "8.0.6",
"description": "Grafana Toolkit",
"keywords": [
"grafana",
@@ -28,10 +28,10 @@
"dependencies": {
"@babel/core": "7.13.14",
"@babel/preset-env": "7.13.12",
"@grafana/data": "8.0.4",
"@grafana/data": "8.0.6",
"@grafana/eslint-config": "2.4.0",
"@grafana/tsconfig": "^1.0.0-rc1",
"@grafana/ui": "8.0.4",
"@grafana/ui": "8.0.6",
"@types/command-exists": "^1.2.0",
"@types/expect-puppeteer": "3.3.1",
"@types/fs-extra": "^8.1.0",

View File

@@ -2,7 +2,7 @@
"author": "Grafana Labs",
"license": "Apache-2.0",
"name": "@grafana/ui",
"version": "8.0.4",
"version": "8.0.6",
"description": "Grafana Components Library",
"keywords": [
"grafana",
@@ -29,8 +29,8 @@
"@emotion/css": "11.1.3",
"@emotion/react": "11.1.5",
"@grafana/aws-sdk": "0.0.3",
"@grafana/data": "8.0.4",
"@grafana/e2e-selectors": "8.0.4",
"@grafana/data": "8.0.6",
"@grafana/e2e-selectors": "8.0.6",
"@grafana/slate-react": "0.22.10-grafana",
"@grafana/tsconfig": "^1.0.0-rc1",
"@monaco-editor/react": "4.1.1",

View File

@@ -143,6 +143,7 @@ export class Sparkline extends PureComponent<SparklineProps, State> {
direction: ScaleDirection.Up,
min: field.config.min,
max: field.config.max,
getDataMinMax: () => field.state?.range,
});
builder.addAxis({

View File

@@ -43,6 +43,9 @@ export const getTableStyles = (theme: GrafanaTheme2) => {
display: inline-flex;
}
}
a {
color: inherit;
}
`;
};

View File

@@ -1,10 +1,7 @@
import React from 'react';
import { css } from '@emotion/css';
import { Portal } from '../Portal/Portal';
import { Dimensions, TimeZone } from '@grafana/data';
import { FlotPosition } from '../Graph/types';
import { VizTooltipContainer } from './VizTooltipContainer';
import { useStyles } from '../../themes';
import { TooltipDisplayMode } from './models.gen';
// Describes active dimensions user interacts with
@@ -49,30 +46,14 @@ export interface VizTooltipProps {
* @public
*/
export const VizTooltip: React.FC<VizTooltipProps> = ({ content, position, offset }) => {
const styles = useStyles(getStyles);
if (position) {
return (
<Portal className={styles.portal}>
<VizTooltipContainer position={position} offset={offset || { x: 0, y: 0 }}>
{content}
</VizTooltipContainer>
</Portal>
<VizTooltipContainer position={position} offset={offset || { x: 0, y: 0 }}>
{content}
</VizTooltipContainer>
);
}
return null;
};
VizTooltip.displayName = 'VizTooltip';
const getStyles = () => {
return {
portal: css`
position: absolute;
top: 0;
left: 0;
pointer-events: none;
width: 100%;
height: 100%;
`,
};
};

View File

@@ -1,9 +1,9 @@
import React, { useState, useLayoutEffect, useRef, HTMLAttributes, useMemo } from 'react';
import React, { useState, HTMLAttributes, useMemo } from 'react';
import { css, cx } from '@emotion/css';
import { useStyles2 } from '../../themes';
import { getTooltipContainerStyles } from '../../themes/mixins';
import useWindowSize from 'react-use/lib/useWindowSize';
import { Dimensions2D, GrafanaTheme2 } from '@grafana/data';
import { GrafanaTheme2 } from '@grafana/data';
import { usePopper } from 'react-popper';
/**
* @public
@@ -24,78 +24,47 @@ export const VizTooltipContainer: React.FC<VizTooltipContainerProps> = ({
className,
...otherProps
}) => {
const tooltipRef = useRef<HTMLDivElement>(null);
const [tooltipMeasurement, setTooltipMeasurement] = useState<Dimensions2D>({ width: 0, height: 0 });
const { width, height } = useWindowSize();
const [placement, setPlacement] = useState({
x: positionX + offsetX,
y: positionY + offsetY,
});
const resizeObserver = useMemo(
() =>
// TS has hard time playing games with @types/resize-observer-browser, hence the ignore
// @ts-ignore
new ResizeObserver((entries) => {
for (let entry of entries) {
const tW = Math.floor(entry.contentRect.width + 2 * 8); // adding padding until Safari supports borderBoxSize
const tH = Math.floor(entry.contentRect.height + 2 * 8);
if (tooltipMeasurement.width !== tW || tooltipMeasurement.height !== tH) {
setTooltipMeasurement({
width: tW,
height: tH,
});
}
}
}),
[tooltipMeasurement.height, tooltipMeasurement.width]
const [tooltipRef, setTooltipRef] = useState<HTMLDivElement | null>(null);
const virtualElement = useMemo(
() => ({
getBoundingClientRect() {
return { top: positionY, left: positionX, bottom: positionY, right: positionX, width: 0, height: 0 };
},
}),
[positionY, positionX]
);
useLayoutEffect(() => {
if (tooltipRef.current) {
resizeObserver.observe(tooltipRef.current);
}
return () => {
resizeObserver.disconnect();
};
}, [resizeObserver]);
// Make sure tooltip does not overflow window
useLayoutEffect(() => {
let xO = 0,
yO = 0;
if (tooltipRef && tooltipRef.current) {
const xOverflow = width - (positionX + tooltipMeasurement.width);
const yOverflow = height - (positionY + tooltipMeasurement.height);
if (xOverflow < 0) {
xO = tooltipMeasurement.width;
}
if (yOverflow < 0) {
yO = tooltipMeasurement.height;
}
}
setPlacement({
x: positionX + offsetX - xO,
y: positionY + offsetY - yO,
});
}, [width, height, positionX, offsetX, positionY, offsetY, tooltipMeasurement.width, tooltipMeasurement.height]);
const { styles: popperStyles, attributes } = usePopper(virtualElement, tooltipRef, {
placement: 'bottom-start',
modifiers: [
{ name: 'arrow', enabled: false },
{
name: 'preventOverflow',
enabled: true,
options: {
altAxis: true,
rootBoundary: 'viewport',
},
},
{
name: 'offset',
options: {
offset: [offsetX, offsetY],
},
},
],
});
const styles = useStyles2(getStyles);
return (
<div
ref={tooltipRef}
ref={setTooltipRef}
style={{
position: 'fixed',
left: 0,
top: 0,
transform: `translate3d(${placement.x}px, ${placement.y}px, 0)`,
transition: 'all ease-out 0.1s',
...popperStyles.popper,
display: popperStyles.popper?.transform ? 'block' : 'none',
transition: 'all ease-out 0.2s',
}}
{...attributes.popper}
{...otherProps}
className={cx(styles.wrapper, className)}
>

View File

@@ -170,7 +170,7 @@ function formatTime(self: uPlot, splits: number[], axisIdx: number, foundSpace:
const yearRoundedToDay = Math.round(timeUnitSize.year / timeUnitSize.day) * timeUnitSize.day;
const incrementRoundedToDay = Math.round(foundIncr / timeUnitSize.day) * timeUnitSize.day;
let format = systemDateFormats.interval.minute;
let format = systemDateFormats.interval.year;
if (foundIncr < timeUnitSize.second) {
format = systemDateFormats.interval.second.replace('ss', 'ss.SS');

View File

@@ -2,7 +2,7 @@ import uPlot, { Scale, Range } from 'uplot';
import { PlotConfigBuilder } from '../types';
import { ScaleOrientation, ScaleDirection } from '../config';
import { ScaleDistribution } from '../models.gen';
import { isBooleanUnit } from '@grafana/data';
import { isBooleanUnit, NumericRange } from '@grafana/data';
export interface ScaleProps {
scaleKey: string;
@@ -16,6 +16,7 @@ export interface ScaleProps {
orientation: ScaleOrientation;
direction: ScaleDirection;
log?: number;
getDataMinMax?: () => NumericRange | undefined;
}
export class UPlotScaleBuilder extends PlotConfigBuilder<ScaleProps, Scale> {
@@ -62,6 +63,15 @@ export class UPlotScaleBuilder extends PlotConfigBuilder<ScaleProps, Scale> {
// uPlot range function
const rangeFn = (u: uPlot, dataMin: number, dataMax: number, scaleKey: string) => {
let { getDataMinMax } = this.props;
// cumulative data min/max across multiple charts, usually via VizRepeater
if (getDataMinMax) {
let dataRange = getDataMinMax()!;
dataMin = dataRange.min!;
dataMax = dataRange.max!;
}
const scale = u.scales[scaleKey];
let minMax: uPlot.Range.MinMax = [dataMin, dataMax];

View File

@@ -135,6 +135,10 @@ export const TooltipPlugin: React.FC<TooltipPluginProps> = ({
if (mode === TooltipDisplayMode.Single && focusedSeriesIdx !== null) {
const field = otherProps.data.fields[focusedSeriesIdx];
if (!field) {
return null;
}
const fieldFmt = field.display || getDisplayProcessor({ field, timeZone, theme });
const display = fieldFmt(field.values.get(focusedPointIdx));
@@ -160,10 +164,12 @@ export const TooltipPlugin: React.FC<TooltipPluginProps> = ({
const frame = otherProps.data;
const field = frame.fields[i];
if (
!field ||
field === xField ||
field.type === FieldType.time ||
field.type !== FieldType.number ||
field.config.custom?.hideFrom?.tooltip
field.config.custom?.hideFrom?.tooltip ||
field.config.custom?.hideFrom?.viz
) {
continue;
}

View File

@@ -1,6 +1,6 @@
{
"name": "@jaegertracing/jaeger-ui-components",
"version": "8.0.4",
"version": "8.0.6",
"main": "src/index.ts",
"types": "src/index.ts",
"license": "Apache-2.0",
@@ -16,8 +16,8 @@
"dependencies": {
"@emotion/css": "11.1.3",
"@emotion/react": "11.1.5",
"@grafana/data": "8.0.4",
"@grafana/ui": "8.0.4",
"@grafana/data": "8.0.6",
"@grafana/ui": "8.0.6",
"@types/classnames": "^2.2.7",
"@types/deep-freeze": "^0.1.1",
"@types/hoist-non-react-statics": "^3.3.1",

View File

@@ -26,6 +26,7 @@ type IndexViewData struct {
AppTitle string
Sentry *setting.Sentry
ContentDeliveryURL string
LoadingLogo template.URL
// Nonce is a cryptographic identifier for use with Content Security Policy.
Nonce string
}

View File

@@ -146,8 +146,6 @@ func ToFolderErrorResponse(err error) response.Response {
}
if errors.Is(err, models.ErrFolderTitleEmpty) ||
errors.Is(err, models.ErrFolderSameNameExists) ||
errors.Is(err, models.ErrFolderWithSameUIDExists) ||
errors.Is(err, models.ErrDashboardTypeMismatch) ||
errors.Is(err, models.ErrDashboardInvalidUid) ||
errors.Is(err, models.ErrDashboardUidTooLong) {
@@ -162,6 +160,11 @@ func ToFolderErrorResponse(err error) response.Response {
return response.JSON(404, util.DynMap{"status": "not-found", "message": models.ErrFolderNotFound.Error()})
}
if errors.Is(err, models.ErrFolderSameNameExists) ||
errors.Is(err, models.ErrFolderWithSameUIDExists) {
return response.Error(409, err.Error(), nil)
}
if errors.Is(err, models.ErrFolderVersionMismatch) {
return response.JSON(412, util.DynMap{"status": "version-mismatch", "message": models.ErrFolderVersionMismatch.Error()})
}

View File

@@ -46,9 +46,9 @@ func TestFoldersAPIEndpoint(t *testing.T) {
Error error
ExpectedStatusCode int
}{
{Error: models.ErrFolderWithSameUIDExists, ExpectedStatusCode: 400},
{Error: models.ErrFolderWithSameUIDExists, ExpectedStatusCode: 409},
{Error: models.ErrFolderTitleEmpty, ExpectedStatusCode: 400},
{Error: models.ErrFolderSameNameExists, ExpectedStatusCode: 400},
{Error: models.ErrFolderSameNameExists, ExpectedStatusCode: 409},
{Error: models.ErrDashboardInvalidUid, ExpectedStatusCode: 400},
{Error: models.ErrDashboardUidTooLong, ExpectedStatusCode: 400},
{Error: models.ErrFolderAccessDenied, ExpectedStatusCode: 403},
@@ -102,9 +102,9 @@ func TestFoldersAPIEndpoint(t *testing.T) {
Error error
ExpectedStatusCode int
}{
{Error: models.ErrFolderWithSameUIDExists, ExpectedStatusCode: 400},
{Error: models.ErrFolderWithSameUIDExists, ExpectedStatusCode: 409},
{Error: models.ErrFolderTitleEmpty, ExpectedStatusCode: 400},
{Error: models.ErrFolderSameNameExists, ExpectedStatusCode: 400},
{Error: models.ErrFolderSameNameExists, ExpectedStatusCode: 409},
{Error: models.ErrDashboardInvalidUid, ExpectedStatusCode: 400},
{Error: models.ErrDashboardUidTooLong, ExpectedStatusCode: 400},
{Error: models.ErrFolderAccessDenied, ExpectedStatusCode: 403},

View File

@@ -461,6 +461,7 @@ func (hs *HTTPServer) setIndexViewData(c *models.ReqContext) (*dtos.IndexViewDat
Sentry: &hs.Cfg.Sentry,
Nonce: c.RequestNonce,
ContentDeliveryURL: hs.Cfg.GetContentDeliveryURL(hs.License.ContentDeliveryPrefix()),
LoadingLogo: "public/img/grafana_icon.svg",
}
if hs.Cfg.FeatureToggles["accesscontrol"] {

View File

@@ -3,14 +3,13 @@ package api
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"os"
"path/filepath"
"sort"
"strings"
"gopkg.in/macaron.v1"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana/pkg/api/dtos"
"github.com/grafana/grafana/pkg/api/response"
@@ -266,16 +265,29 @@ func (hs *HTTPServer) GetPluginAssets(c *models.ReqContext) {
return
}
requestedFile := filepath.Clean(c.Params("*"))
pluginFilePath := filepath.Join(plugin.PluginDir, requestedFile)
// prepend slash for cleaning relative paths
requestedFile := filepath.Clean(filepath.Join("/", c.Params("*")))
rel, err := filepath.Rel("/", requestedFile)
if err != nil {
// a slash is prepended above, therefore this is not expected to fail
c.Handle(hs.Cfg, 500, "Failed to get the relative path", err)
return
}
absPluginDir, err := filepath.Abs(plugin.PluginDir)
if err != nil {
c.Handle(hs.Cfg, 500, "Failed to get plugin absolute path", nil)
return
}
pluginFilePath := filepath.Join(absPluginDir, rel)
// It's safe to ignore gosec warning G304 since we already clean the requested file path and subsequently
// use this with a prefix of the plugin's directory, which is set during plugin loading
// nolint:gosec
f, err := os.Open(pluginFilePath)
if err != nil {
if os.IsNotExist(err) {
c.Handle(hs.Cfg, 404, "Could not find plugin file", err)
c.Handle(hs.Cfg, 404, "Plugin file not found", err)
return
}
c.Handle(hs.Cfg, 500, "Could not open plugin file", err)
@@ -294,22 +306,17 @@ func (hs *HTTPServer) GetPluginAssets(c *models.ReqContext) {
}
if shouldExclude(fi) {
c.Handle(hs.Cfg, 404, "Plugin file not found", nil)
c.Handle(hs.Cfg, 403, "Plugin file access forbidden",
fmt.Errorf("access is forbidden to executable plugin file %s", pluginFilePath))
return
}
headers := func(c *macaron.Context) {
if hs.Cfg.Env == setting.Dev {
c.Resp.Header().Set("Cache-Control", "max-age=0, must-revalidate, no-cache")
} else {
c.Resp.Header().Set("Cache-Control", "public, max-age=3600")
}
if hs.Cfg.Env == setting.Dev {
headers = func(c *macaron.Context) {
c.Resp.Header().Set("Cache-Control", "max-age=0, must-revalidate, no-cache")
}
}
headers(c.Context)
http.ServeContent(c.Resp, c.Req.Request, pluginFilePath, fi.ModTime(), f)
}
@@ -393,8 +400,9 @@ func (hs *HTTPServer) InstallPlugin(c *models.ReqContext, dto dtos.InstallPlugin
if errors.As(err, &versionNotFoundErr) {
return response.Error(http.StatusNotFound, "Plugin version not found", err)
}
if errors.Is(err, installer.ErrPluginNotFound) {
return response.Error(http.StatusNotFound, "Plugin not found", err)
var clientError installer.Response4xxError
if errors.As(err, &clientError) {
return response.Error(clientError.StatusCode, clientError.Message, err)
}
if errors.Is(err, plugins.ErrInstallCorePlugin) {
return response.Error(http.StatusForbidden, "Cannot install or change a Core plugin", err)
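The directory-traversal fix above neutralizes `..` segments by joining the requested path onto `/` before cleaning, then re-rooting the resulting relative path under the plugin directory. A minimal sketch of that technique (`safeJoin` is a hypothetical helper name, not Grafana API):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// safeJoin resolves a user-supplied relative path against a base directory
// so that ".." segments cannot escape the base. Prepending "/" before Clean
// collapses any leading ".." segments against the root, and filepath.Rel
// then yields a relative path that is safe to join onto the base.
func safeJoin(baseDir, requested string) (string, error) {
	cleaned := filepath.Clean(filepath.Join("/", requested))
	rel, err := filepath.Rel("/", cleaned)
	if err != nil {
		return "", err
	}
	abs, err := filepath.Abs(baseDir)
	if err != nil {
		return "", err
	}
	return filepath.Join(abs, rel), nil
}

func main() {
	// The traversal attempt is confined inside the plugin directory.
	p, err := safeJoin("/var/lib/grafana/plugins/foo", "../../../../etc/passwd")
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}
```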

View File

@@ -157,7 +157,7 @@ func (s *Service) buildGraph(req *Request) (*simple.DirectedGraph, error) {
case dsName == DatasourceName || dsUID == DatasourceUID:
node, err = buildCMDNode(dp, rn)
default: // If it's not an expression query, it's a data source query.
node, err = s.buildDSNode(dp, rn, req.OrgId)
node, err = s.buildDSNode(dp, rn, req)
}
if err != nil {
return nil, err

View File

@@ -142,6 +142,7 @@ type DSNode struct {
timeRange TimeRange
intervalMS int64
maxDP int64
request Request
}
// NodeType returns the data pipeline node type.
@@ -149,7 +150,7 @@ func (dn *DSNode) NodeType() NodeType {
return TypeDatasourceNode
}
func (s *Service) buildDSNode(dp *simple.DirectedGraph, rn *rawNode, orgID int64) (*DSNode, error) {
func (s *Service) buildDSNode(dp *simple.DirectedGraph, rn *rawNode, req *Request) (*DSNode, error) {
encodedQuery, err := json.Marshal(rn.Query)
if err != nil {
return nil, err
@@ -160,12 +161,13 @@ func (s *Service) buildDSNode(dp *simple.DirectedGraph, rn *rawNode, orgID int64
id: dp.NewNode().ID(),
refID: rn.RefID,
},
orgID: orgID,
orgID: req.OrgId,
query: json.RawMessage(encodedQuery),
queryType: rn.QueryType,
intervalMS: defaultIntervalMS,
maxDP: defaultMaxDP,
timeRange: rn.TimeRange,
request: *req,
}
rawDsID, ok := rn.Query["datasourceId"]
@@ -231,6 +233,7 @@ func (dn *DSNode) Execute(ctx context.Context, vars mathexp.Vars, s *Service) (m
resp, err := s.queryData(ctx, &backend.QueryDataRequest{
PluginContext: pc,
Queries: q,
Headers: dn.request.Headers,
})
if err != nil {

View File

@@ -206,6 +206,7 @@ func (s *Service) queryData(ctx context.Context, req *backend.QueryDataRequest)
tQ := plugins.DataQuery{
TimeRange: &timeRange,
Queries: queries,
Headers: req.Headers,
}
// Execute the converted queries

View File

@@ -54,8 +54,10 @@ func (s *SocialAzureAD) UserInfo(_ *http.Client, token *oauth2.Token) (*BasicUse
}
role := extractRole(claims)
logger.Debug("AzureAD OAuth: extracted role", "email", email, "role", role)
groups := extractGroups(claims)
logger.Debug("AzureAD OAuth: extracted groups", "email", email, "groups", groups)
if !s.IsGroupMember(groups) {
return nil, errMissingGroupMembership
}

View File

@@ -41,20 +41,23 @@ const (
)
var (
ErrPluginNotFound = errors.New("plugin not found")
reGitBuild = regexp.MustCompile("^[a-zA-Z0-9_.-]*/")
reGitBuild = regexp.MustCompile("^[a-zA-Z0-9_.-]*/")
)
type BadRequestError struct {
Message string
Status string
type Response4xxError struct {
Message string
StatusCode int
SystemInfo string
}
func (e *BadRequestError) Error() string {
func (e Response4xxError) Error() string {
if len(e.Message) > 0 {
return fmt.Sprintf("%s: %s", e.Status, e.Message)
if len(e.SystemInfo) > 0 {
return fmt.Sprintf("%s (%s)", e.Message, e.SystemInfo)
}
return fmt.Sprintf("%d: %s", e.StatusCode, e.Message)
}
return e.Status
return fmt.Sprintf("%d", e.StatusCode)
}
type ErrVersionUnsupported struct {
@@ -248,7 +251,7 @@ func (i *Installer) DownloadFile(pluginID string, tmpFile *os.File, url string,
// slow network. As this is a CLI operation, hanging is not a big issue since the user can just abort.
bodyReader, err := i.sendRequestWithoutTimeout(url)
if err != nil {
return errutil.Wrap("Failed to send request", err)
return err
}
defer func() {
if err := bodyReader.Close(); err != nil {
@@ -274,11 +277,7 @@ func (i *Installer) getPluginMetadataFromPluginRepo(pluginID, pluginRepoURL stri
i.log.Debugf("Fetching metadata for plugin \"%s\" from repo %s", pluginID, pluginRepoURL)
body, err := i.sendRequestGetBytes(pluginRepoURL, "repo", pluginID)
if err != nil {
if errors.Is(err, ErrPluginNotFound) {
i.log.Errorf("failed to find plugin '%s' in plugin repository. Please check if plugin ID is correct", pluginID)
return Plugin{}, err
}
return Plugin{}, errutil.Wrap("Failed to send request", err)
return Plugin{}, err
}
var data Plugin
@@ -354,14 +353,6 @@ func (i *Installer) createRequest(URL string, subPaths ...string) (*http.Request
}
func (i *Installer) handleResponse(res *http.Response) (io.ReadCloser, error) {
if res.StatusCode == 404 {
return nil, ErrPluginNotFound
}
if res.StatusCode/100 != 2 && res.StatusCode/100 != 4 {
return nil, fmt.Errorf("API returned invalid status: %s", res.Status)
}
if res.StatusCode/100 == 4 {
body, err := ioutil.ReadAll(res.Body)
defer func() {
@@ -370,7 +361,7 @@ func (i *Installer) handleResponse(res *http.Response) (io.ReadCloser, error) {
}
}()
if err != nil || len(body) == 0 {
return nil, &BadRequestError{Status: res.Status}
return nil, Response4xxError{StatusCode: res.StatusCode}
}
var message string
var jsonBody map[string]string
@@ -380,7 +371,11 @@ func (i *Installer) handleResponse(res *http.Response) (io.ReadCloser, error) {
} else {
message = jsonBody["message"]
}
return nil, &BadRequestError{Status: res.Status, Message: message}
return nil, Response4xxError{StatusCode: res.StatusCode, Message: message, SystemInfo: i.fullSystemInfoString()}
}
if res.StatusCode/100 != 2 {
return nil, fmt.Errorf("API returned invalid status: %s", res.Status)
}
return res.Body, nil

View File

@@ -61,6 +61,9 @@ func (dr *dashboardServiceImpl) GetFolders(limit int64) ([]*models.Folder, error
}
func (dr *dashboardServiceImpl) GetFolderByID(id int64) (*models.Folder, error) {
if id == 0 {
return &models.Folder{Id: id, Title: "General"}, nil
}
query := models.GetDashboardQuery{OrgId: dr.orgId, Id: id}
dashFolder, err := getFolder(query)
if err != nil {

View File

@@ -7,21 +7,20 @@ import (
"github.com/grafana/grafana/pkg/dashboards"
"github.com/grafana/grafana/pkg/models"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/services/guardian"
. "github.com/smartystreets/goconvey/convey"
)
func TestFolderService(t *testing.T) {
Convey("Folder service tests", t, func() {
t.Run("Folder service tests", func(t *testing.T) {
service := dashboardServiceImpl{
orgId: 1,
user: &models.SignedInUser{UserId: 1},
dashboardStore: &fakeDashboardStore{},
}
Convey("Given user has no permissions", func() {
t.Run("Given user has no permissions", func(t *testing.T) {
origNewGuardian := guardian.New
guardian.MockDashboardGuardian(&guardian.FakeDashboardGuardian{})
@@ -38,41 +37,47 @@ func TestFolderService(t *testing.T) {
validationError: models.ErrDashboardUpdateAccessDenied,
}
Convey("When get folder by id should return access denied error", func() {
t.Run("When get folder by id should return access denied error", func(t *testing.T) {
_, err := service.GetFolderByID(1)
So(err, ShouldEqual, models.ErrFolderAccessDenied)
require.Equal(t, err, models.ErrFolderAccessDenied)
})
Convey("When get folder by uid should return access denied error", func() {
t.Run("When get folder by id, with id = 0 should return default folder", func(t *testing.T) {
folder, err := service.GetFolderByID(0)
require.NoError(t, err)
require.Equal(t, folder, &models.Folder{Id: 0, Title: "General"})
})
t.Run("When get folder by uid should return access denied error", func(t *testing.T) {
_, err := service.GetFolderByUID("uid")
So(err, ShouldEqual, models.ErrFolderAccessDenied)
require.Equal(t, err, models.ErrFolderAccessDenied)
})
Convey("When creating folder should return access denied error", func() {
t.Run("When creating folder should return access denied error", func(t *testing.T) {
_, err := service.CreateFolder("Folder", "")
So(err, ShouldEqual, models.ErrFolderAccessDenied)
require.Equal(t, err, models.ErrFolderAccessDenied)
})
Convey("When updating folder should return access denied error", func() {
t.Run("When updating folder should return access denied error", func(t *testing.T) {
err := service.UpdateFolder("uid", &models.UpdateFolderCommand{
Uid: "uid",
Title: "Folder",
})
So(err, ShouldEqual, models.ErrFolderAccessDenied)
require.Equal(t, err, models.ErrFolderAccessDenied)
})
Convey("When deleting folder by uid should return access denied error", func() {
t.Run("When deleting folder by uid should return access denied error", func(t *testing.T) {
_, err := service.DeleteFolder("uid")
So(err, ShouldNotBeNil)
So(err, ShouldEqual, models.ErrFolderAccessDenied)
require.Error(t, err)
require.Equal(t, err, models.ErrFolderAccessDenied)
})
Reset(func() {
t.Cleanup(func() {
guardian.New = origNewGuardian
})
})
Convey("Given user has permission to save", func() {
t.Run("Given user has permission to save", func(t *testing.T) {
origNewGuardian := guardian.New
guardian.MockDashboardGuardian(&guardian.FakeDashboardGuardian{CanSaveValue: true})
@@ -102,30 +107,30 @@ func TestFolderService(t *testing.T) {
return nil
})
Convey("When creating folder should not return access denied error", func() {
t.Run("When creating folder should not return access denied error", func(t *testing.T) {
_, err := service.CreateFolder("Folder", "")
So(err, ShouldBeNil)
require.NoError(t, err)
})
Convey("When updating folder should not return access denied error", func() {
t.Run("When updating folder should not return access denied error", func(t *testing.T) {
err := service.UpdateFolder("uid", &models.UpdateFolderCommand{
Uid: "uid",
Title: "Folder",
})
So(err, ShouldBeNil)
require.NoError(t, err)
})
Convey("When deleting folder by uid should not return access denied error", func() {
t.Run("When deleting folder by uid should not return access denied error", func(t *testing.T) {
_, err := service.DeleteFolder("uid")
So(err, ShouldBeNil)
require.NoError(t, err)
})
Reset(func() {
t.Cleanup(func() {
guardian.New = origNewGuardian
})
})
Convey("Given user has permission to view", func() {
t.Run("Given user has permission to view", func(t *testing.T) {
origNewGuardian := guardian.New
guardian.MockDashboardGuardian(&guardian.FakeDashboardGuardian{CanViewValue: true})
@@ -138,26 +143,26 @@ func TestFolderService(t *testing.T) {
return nil
})
Convey("When get folder by id should return folder", func() {
t.Run("When get folder by id should return folder", func(t *testing.T) {
f, _ := service.GetFolderByID(1)
So(f.Id, ShouldEqual, dashFolder.Id)
So(f.Uid, ShouldEqual, dashFolder.Uid)
So(f.Title, ShouldEqual, dashFolder.Title)
require.Equal(t, f.Id, dashFolder.Id)
require.Equal(t, f.Uid, dashFolder.Uid)
require.Equal(t, f.Title, dashFolder.Title)
})
Convey("When get folder by uid should return folder", func() {
t.Run("When get folder by uid should return folder", func(t *testing.T) {
f, _ := service.GetFolderByUID("uid")
So(f.Id, ShouldEqual, dashFolder.Id)
So(f.Uid, ShouldEqual, dashFolder.Uid)
So(f.Title, ShouldEqual, dashFolder.Title)
require.Equal(t, f.Id, dashFolder.Id)
require.Equal(t, f.Uid, dashFolder.Uid)
require.Equal(t, f.Title, dashFolder.Title)
})
Reset(func() {
t.Cleanup(func() {
guardian.New = origNewGuardian
})
})
Convey("Should map errors correctly", func() {
t.Run("Should map errors correctly", func(t *testing.T) {
testCases := []struct {
ActualError error
ExpectedError error

View File

@@ -4,8 +4,8 @@ import (
"errors"
"fmt"
"github.com/grafana/grafana-live-sdk/telemetry"
"github.com/grafana/grafana-live-sdk/telemetry/telegraf"
"github.com/grafana/grafana/pkg/services/live/telemetry"
"github.com/grafana/grafana/pkg/services/live/telemetry/telegraf"
)
type Converter struct {

View File

@@ -88,6 +88,7 @@ func (g *Gateway) Handle(ctx *models.ReqContext) {
for _, mf := range metricFrames {
err := stream.Push(ctx.SignedInUser.OrgId, mf.Key(), mf.Frame())
if err != nil {
logger.Error("Error pushing frame", "error", err, "data", string(body))
ctx.Resp.WriteHeader(http.StatusInternalServerError)
return
}

View File

@@ -191,6 +191,7 @@ func (s *Handler) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
for _, mf := range metricFrames {
err := stream.Push(user.OrgId, mf.Key(), mf.Frame())
if err != nil {
logger.Error("Error pushing frame", "error", err, "data", string(body))
return
}
}

View File

@@ -0,0 +1,16 @@
package telemetry
import "github.com/grafana/grafana-plugin-sdk-go/data"
// Converter can convert input to Grafana Data Frames.
type Converter interface {
Convert(data []byte) ([]FrameWrapper, error)
}
// FrameWrapper is a wrapper over data.Frame.
type FrameWrapper interface {
// Key returns a key which describes Frame metrics.
Key() string
// Frame allows getting data.Frame.
Frame() *data.Frame
}

View File

@@ -0,0 +1,352 @@
package telegraf
import (
"fmt"
"sort"
"time"
"github.com/grafana/grafana-plugin-sdk-go/data"
"github.com/grafana/grafana-plugin-sdk-go/data/converters"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/services/live/telemetry"
influx "github.com/influxdata/line-protocol"
)
var (
logger = log.New("live.telemetry.telegraf")
)
var _ telemetry.Converter = (*Converter)(nil)
// Converter converts Telegraf metrics to Grafana frames.
type Converter struct {
parser *influx.Parser
useLabelsColumn bool
useFloat64Numbers bool
}
// ConverterOption ...
type ConverterOption func(*Converter)
// WithUseLabelsColumn ...
func WithUseLabelsColumn(enabled bool) ConverterOption {
return func(h *Converter) {
h.useLabelsColumn = enabled
}
}
// WithFloat64Numbers will convert all numbers met to float64 type.
func WithFloat64Numbers(enabled bool) ConverterOption {
return func(h *Converter) {
h.useFloat64Numbers = enabled
}
}
// NewConverter creates new Converter from Influx/Telegraf format to Grafana Data Frames.
// This converter generates one frame for each input metric name and time combination.
func NewConverter(opts ...ConverterOption) *Converter {
c := &Converter{
parser: influx.NewParser(influx.NewMetricHandler()),
}
for _, opt := range opts {
opt(c)
}
return c
}
// Each unique metric frame is identified by its name and time.
func getFrameKey(m influx.Metric) string {
return m.Name() + "_" + m.Time().String()
}
// Convert metrics.
func (c *Converter) Convert(body []byte) ([]telemetry.FrameWrapper, error) {
metrics, err := c.parser.Parse(body)
if err != nil {
return nil, fmt.Errorf("error parsing metrics: %w", err)
}
if !c.useLabelsColumn {
return c.convertWideFields(metrics)
}
return c.convertWithLabelsColumn(metrics)
}
func (c *Converter) convertWideFields(metrics []influx.Metric) ([]telemetry.FrameWrapper, error) {
// maintain the order of frames as they appear in input.
var frameKeyOrder []string
metricFrames := make(map[string]*metricFrame)
for _, m := range metrics {
frameKey := getFrameKey(m)
frame, ok := metricFrames[frameKey]
if ok {
// Existing frame.
err := frame.extend(m)
if err != nil {
return nil, err
}
} else {
frameKeyOrder = append(frameKeyOrder, frameKey)
frame = newMetricFrame(m, c.useFloat64Numbers)
err := frame.extend(m)
if err != nil {
return nil, err
}
metricFrames[frameKey] = frame
}
}
frameWrappers := make([]telemetry.FrameWrapper, 0, len(metricFrames))
for _, key := range frameKeyOrder {
frameWrappers = append(frameWrappers, metricFrames[key])
}
return frameWrappers, nil
}
func (c *Converter) convertWithLabelsColumn(metrics []influx.Metric) ([]telemetry.FrameWrapper, error) {
// maintain the order of frames as they appear in input.
var frameKeyOrder []string
metricFrames := make(map[string]*metricFrame)
for _, m := range metrics {
frameKey := m.Name()
frame, ok := metricFrames[frameKey]
if ok {
// Existing frame.
err := frame.append(m)
if err != nil {
return nil, err
}
} else {
frameKeyOrder = append(frameKeyOrder, frameKey)
frame = newMetricFrameLabelsColumn(m, c.useFloat64Numbers)
err := frame.append(m)
if err != nil {
return nil, err
}
metricFrames[frameKey] = frame
}
}
frameWrappers := make([]telemetry.FrameWrapper, 0, len(metricFrames))
for _, key := range frameKeyOrder {
frame := metricFrames[key]
// For all fields except labels and time fill columns with nulls in
// case of unequal length.
for i := 2; i < len(frame.fields); i++ {
if frame.fields[i].Len() < frame.fields[0].Len() {
numNulls := frame.fields[0].Len() - frame.fields[i].Len()
for j := 0; j < numNulls; j++ {
frame.fields[i].Append(nil)
}
}
}
frameWrappers = append(frameWrappers, frame)
}
return frameWrappers, nil
}
type metricFrame struct {
useFloatNumbers bool
key string
fields []*data.Field
fieldCache map[string]int
}
// newMetricFrame will return a new frame with length 1.
func newMetricFrame(m influx.Metric, useFloatNumbers bool) *metricFrame {
s := &metricFrame{
useFloatNumbers: useFloatNumbers,
key: m.Name(),
fields: make([]*data.Field, 1),
}
s.fields[0] = data.NewField("time", nil, []time.Time{m.Time()})
return s
}
// newMetricFrameLabelsColumn will return a new frame with empty labels and time fields.
func newMetricFrameLabelsColumn(m influx.Metric, useFloatNumbers bool) *metricFrame {
s := &metricFrame{
useFloatNumbers: useFloatNumbers,
key: m.Name(),
fields: make([]*data.Field, 2),
fieldCache: map[string]int{},
}
s.fields[0] = data.NewField("labels", nil, []string{})
s.fields[1] = data.NewField("time", nil, []time.Time{})
return s
}
// Key returns a key which describes Frame metrics.
func (s *metricFrame) Key() string {
return s.key
}
// Frame transforms metricFrame to Grafana data.Frame.
func (s *metricFrame) Frame() *data.Frame {
return data.NewFrame(s.key, s.fields...)
}
// extend existing metricFrame fields.
func (s *metricFrame) extend(m influx.Metric) error {
fields := m.FieldList()
sort.Slice(fields, func(i, j int) bool {
return fields[i].Key < fields[j].Key
})
labels := tagsToLabels(m.TagList())
for _, f := range fields {
ft, v, err := s.getFieldTypeAndValue(f)
if err != nil {
return err
}
field := data.NewFieldFromFieldType(ft, 1)
field.Name = f.Key
field.Labels = labels
field.Set(0, v)
s.fields = append(s.fields, field)
}
return nil
}
func tagsToLabels(tags []*influx.Tag) data.Labels {
labels := data.Labels{}
for _, tag := range tags {
labels[tag.Key] = tag.Value
}
return labels
}
// append to existing metricFrame fields.
func (s *metricFrame) append(m influx.Metric) error {
s.fields[0].Append(tagsToLabels(m.TagList()).String()) // TODO, use labels.String()
s.fields[1].Append(m.Time())
fields := m.FieldList()
sort.Slice(fields, func(i, j int) bool {
return fields[i].Key < fields[j].Key
})
for _, f := range fields {
ft, v, err := s.getFieldTypeAndValue(f)
if err != nil {
return err
}
if index, ok := s.fieldCache[f.Key]; ok {
field := s.fields[index]
if ft != field.Type() {
logger.Warn("error appending values", "type", field.Type(), "expect", ft, "value", v, "key", f.Key, "line", m)
if field.Type() == data.FieldTypeNullableString && v != nil {
str := fmt.Sprintf("%v", f.Value)
v = &str
} else {
v = nil
}
}
// If the field is shorter than expected at this point,
// pad it with nulls up to the currently processed row.
if field.Len() < s.fields[0].Len()-1 {
numNulls := s.fields[0].Len() - 1 - field.Len()
for i := 0; i < numNulls; i++ {
field.Append(nil)
}
}
field.Append(v)
} else {
field := data.NewFieldFromFieldType(ft, 0)
field.Name = f.Key
// If the field first appeared after some rows were already filled,
// pad it with nulls up to the currently processed row.
if field.Len() < s.fields[0].Len()-1 {
numNulls := s.fields[0].Len() - 1 - field.Len()
for i := 0; i < numNulls; i++ {
field.Append(nil)
}
}
field.Append(v)
s.fields = append(s.fields, field)
s.fieldCache[f.Key] = len(s.fields) - 1
}
}
return nil
}
// float64FieldTypeFor converts all numbers to float64.
// Precision can be lost when converting big int64 or uint64 values to float64.
func float64FieldTypeFor(t interface{}) data.FieldType {
switch t.(type) {
case int8, int16, int32, int64, uint8, uint16, uint32, uint64, float32, float64:
return data.FieldTypeFloat64
case bool:
return data.FieldTypeBool
case string:
return data.FieldTypeString
case time.Time:
return data.FieldTypeTime
}
return data.FieldTypeUnknown
}
func (s *metricFrame) getFieldTypeAndValue(f *influx.Field) (data.FieldType, interface{}, error) {
var ft data.FieldType
if s.useFloatNumbers {
ft = float64FieldTypeFor(f.Value)
} else {
ft = data.FieldTypeFor(f.Value)
}
if ft == data.FieldTypeUnknown {
return ft, nil, fmt.Errorf("unknown type: %T", f.Value)
}
// Make all fields nullable.
ft = ft.NullableType()
convert, ok := getConvertFunc(ft)
if !ok {
return ft, nil, fmt.Errorf("no converter %s=%v (%T) %s", f.Key, f.Value, f.Value, ft.ItemTypeString())
}
v, err := convert(f.Value)
if err != nil {
return ft, nil, fmt.Errorf("value convert error: %v", err)
}
return ft, v, nil
}
func getConvertFunc(ft data.FieldType) (func(v interface{}) (interface{}, error), bool) {
var convert func(v interface{}) (interface{}, error)
switch ft {
case data.FieldTypeNullableString:
convert = converters.AnyToNullableString.Converter
case data.FieldTypeNullableFloat64:
convert = converters.JSONValueToNullableFloat64.Converter
case data.FieldTypeNullableBool:
convert = converters.BoolToNullableBool.Converter
case data.FieldTypeNullableInt64:
convert = converters.JSONValueToNullableInt64.Converter
default:
return nil, false
}
return convert, true
}
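The null-padding performed in convertWithLabelsColumn and append — keeping every value column as long as the time column by appending nils for rows where a field was absent — can be illustrated with a dependency-free sketch (plain slices stand in for `data.Field`):

```go
package main

import "fmt"

func main() {
	// times is the reference column; "sensor" appeared only for the first row.
	times := []string{"t0", "t1", "t2"}
	v := 1.0
	sensor := []*float64{&v}

	// Pad the shorter column with nils so every field has len(times) rows,
	// mirroring the padding loops in convertWithLabelsColumn and append.
	for len(sensor) < len(times) {
		sensor = append(sensor, nil)
	}
	fmt.Println(len(sensor)) // prints 3; rows 1 and 2 are null
}
```

This is why frames produced in labels-column mode always have rectangular data even when input metrics carry different field sets.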


@@ -0,0 +1,270 @@
package telegraf
import (
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"path/filepath"
"testing"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana-plugin-sdk-go/data"
"github.com/grafana/grafana-plugin-sdk-go/experimental"
"github.com/stretchr/testify/require"
)
func loadTestData(tb testing.TB, file string) []byte {
tb.Helper()
// Safe to disable, this is a test.
// nolint:gosec
content, err := ioutil.ReadFile(filepath.Join("testdata", file+".txt"))
require.NoError(tb, err, "expected to be able to read file")
require.True(tb, len(content) > 0)
return content
}
func checkTestData(tb testing.TB, file string) *backend.DataResponse {
tb.Helper()
// Safe to disable, this is a test.
// nolint:gosec
content, err := ioutil.ReadFile(filepath.Join("testdata", file+".txt"))
require.NoError(tb, err, "expected to be able to read file")
require.True(tb, len(content) > 0)
converter := NewConverter(WithUseLabelsColumn(true))
frameWrappers, err := converter.Convert(content)
require.NoError(tb, err)
dr := &backend.DataResponse{}
for _, w := range frameWrappers {
dr.Frames = append(dr.Frames, w.Frame())
}
err = experimental.CheckGoldenDataResponse(filepath.Join("testdata", file+".golden.txt"), dr, *update)
require.NoError(tb, err)
return dr
}
func TestNewConverter(t *testing.T) {
c := NewConverter(WithUseLabelsColumn(true))
require.True(t, c.useLabelsColumn)
}
func TestConverter_Convert(t *testing.T) {
testCases := []struct {
Name string
NumFields int
FieldLength int
NumFrames int
}{
{Name: "single_metric", NumFields: 6, FieldLength: 1, NumFrames: 1},
{Name: "same_metrics_same_labels_different_time", NumFields: 6, FieldLength: 1, NumFrames: 3},
{Name: "same_metrics_different_labels_different_time", NumFields: 6, FieldLength: 1, NumFrames: 2},
{Name: "same_metrics_different_labels_same_time", NumFields: 131, FieldLength: 1, NumFrames: 1},
}
for _, tt := range testCases {
t.Run(tt.Name, func(t *testing.T) {
testData := loadTestData(t, tt.Name)
converter := NewConverter()
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, tt.NumFrames)
for _, fw := range frameWrappers {
frame := fw.Frame()
require.Len(t, frame.Fields, tt.NumFields)
require.Equal(t, tt.FieldLength, frame.Fields[0].Len())
_, err := data.FrameToJSON(frame, data.IncludeAll)
require.NoError(t, err)
}
})
}
}
func TestConverter_Convert_LabelsColumn(t *testing.T) {
testCases := []struct {
Name string
NumFields int
FieldLength int
NumFrames int
}{
{Name: "single_metric", NumFields: 7, FieldLength: 1, NumFrames: 1},
{Name: "same_metrics_same_labels_different_time", NumFields: 7, FieldLength: 3, NumFrames: 1},
{Name: "same_metrics_different_labels_different_time", NumFields: 7, FieldLength: 2, NumFrames: 1},
{Name: "same_metrics_different_labels_same_time", NumFields: 12, FieldLength: 13, NumFrames: 1},
{Name: "incomplete_fields", NumFields: 4, FieldLength: 4, NumFrames: 1},
{Name: "incomplete_fields_2", NumFields: 4, FieldLength: 5, NumFrames: 1},
{Name: "incomplete_fields_full", NumFrames: 5},
}
for _, tt := range testCases {
t.Run(tt.Name, func(t *testing.T) {
testData := loadTestData(t, tt.Name)
if *pprint {
fmt.Println(string(testData))
}
converter := NewConverter(WithUseLabelsColumn(true))
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, tt.NumFrames)
for _, fw := range frameWrappers {
frame := fw.Frame()
if tt.NumFrames == 1 {
require.Len(t, frame.Fields, tt.NumFields)
require.Equal(t, tt.FieldLength, frame.Fields[0].Len())
}
_, err := data.FrameToJSON(frame, data.IncludeAll)
require.NoError(t, err)
if *pprint {
s, err := frame.StringTable(100, 100)
require.NoError(t, err)
fmt.Println(s)
}
}
})
}
}
var update = flag.Bool("update", false, "update golden files")
var pprint = flag.Bool("pprint", false, "pretty print test case")
func TestConverter_Convert_NumFrameFields(t *testing.T) {
testData := loadTestData(t, "same_metrics_different_labels_same_time")
converter := NewConverter()
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, 1)
frameWrapper := frameWrappers[0]
goldenFile := filepath.Join("testdata", "golden_wide.json")
frame := frameWrapper.Frame()
require.Len(t, frame.Fields, 131) // 13 metric series with 10 fields each + 1 time field.
frameJSON, err := json.MarshalIndent(frame, "", " ")
require.NoError(t, err)
if *update {
if err := ioutil.WriteFile(goldenFile, frameJSON, 0600); err != nil {
t.Fatal(err)
}
}
// Safe to disable, this is a test.
// nolint:gosec
want, err := ioutil.ReadFile(goldenFile)
if err != nil {
t.Fatal(err)
}
require.JSONEqf(t, string(frameJSON), string(want), "not matched with golden file")
}
func TestConverter_Convert_ChangingTypes(t *testing.T) {
dr := checkTestData(t, "changing_types_NaN")
require.NotNil(t, dr)
}
func TestConverter_Convert_FieldOrder(t *testing.T) {
converter := NewConverter()
testData := loadTestData(t, "single_metric")
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, 1)
frameJSON1, err := data.FrameToJSON(frameWrappers[0].Frame(), data.IncludeAll)
require.NoError(t, err)
testDataDifferentOrder := loadTestData(t, "single_metric_different_field_order")
frameWrappers, err = converter.Convert(testDataDifferentOrder)
require.NoError(t, err)
require.Len(t, frameWrappers, 1)
frameJSON2, err := data.FrameToJSON(frameWrappers[0].Frame(), data.IncludeAll)
require.NoError(t, err)
require.JSONEqf(t, string(frameJSON1), string(frameJSON2), "frames must match")
}
func BenchmarkConverter_Convert_Wide(b *testing.B) {
testData := loadTestData(b, "same_metrics_different_labels_same_time")
converter := NewConverter()
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := converter.Convert(testData)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkConverter_Convert_LabelsColumn(b *testing.B) {
testData := loadTestData(b, "same_metrics_different_labels_same_time")
converter := NewConverter(WithUseLabelsColumn(true))
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := converter.Convert(testData)
if err != nil {
b.Fatal(err)
}
}
}
func TestConverter_Convert_NumFrameFields_LabelsColumn(t *testing.T) {
testData := loadTestData(t, "same_metrics_different_labels_same_time")
converter := NewConverter(WithUseLabelsColumn(true))
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, 1)
frameWrapper := frameWrappers[0]
goldenFile := filepath.Join("testdata", "golden_labels_column.json")
frame := frameWrapper.Frame()
require.Len(t, frame.Fields, 12)
frameJSON, err := json.MarshalIndent(frame, "", " ")
require.NoError(t, err)
if *update {
if err := ioutil.WriteFile(goldenFile, frameJSON, 0600); err != nil {
t.Fatal(err)
}
}
// Safe to disable, this is a test.
// nolint:gosec
want, err := ioutil.ReadFile(goldenFile)
if err != nil {
t.Fatal(err)
}
require.JSONEqf(t, string(frameJSON), string(want), "not matched with golden file")
}
func TestConverter_Convert_MixedNumberTypes_OK(t *testing.T) {
testData := loadTestData(t, "mixed_number_types")
converter := NewConverter(WithFloat64Numbers(true))
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, 2)
}
func TestConverter_Convert_MixedNumberTypes_OK_LabelsColumn(t *testing.T) {
testData := loadTestData(t, "mixed_number_types")
converter := NewConverter(WithUseLabelsColumn(true), WithFloat64Numbers(true))
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, 1)
}
func TestConverter_Convert_PartInput(t *testing.T) {
testData := loadTestData(t, "part_metrics_different_labels_different_time")
converter := NewConverter()
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, 2)
}
func TestConverter_Convert_PartInput_LabelsColumn(t *testing.T) {
testData := loadTestData(t, "part_metrics_different_labels_different_time")
converter := NewConverter(WithUseLabelsColumn(true))
frameWrappers, err := converter.Convert(testData)
require.NoError(t, err)
require.Len(t, frameWrappers, 1)
}


@@ -0,0 +1,19 @@
🌟 This was machine generated. Do not edit. 🌟
Frame[0]
Name: system
Dimensions: 5 Fields by 4 Rows
+----------------+-------------------------------+------------------+-----------------+-----------------+
| Name: labels | Name: time | Name: sensor | Name: sensor2 | Name: state |
| Labels: | Labels: | Labels: | Labels: | Labels: |
| Type: []string | Type: []time.Time | Type: []*float64 | Type: []*string | Type: []*string |
+----------------+-------------------------------+------------------+-----------------+-----------------+
| host=A | 2021-03-22 01:51:30 -0700 PDT | 0 | NaN | aaa |
| host=B | 2021-03-22 01:51:30 -0700 PDT | null | 0 | bbb |
| host=A | 2021-03-22 01:51:31 -0700 PDT | null | 0 | ccc |
| host=B | 2021-03-22 01:51:31 -0700 PDT | 0 | NaN | 1 |
+----------------+-------------------------------+------------------+-----------------+-----------------+
====== TEST DATA RESPONSE (arrow base64) ======
FRAME=QVJST1cxAAD/////mAIAABAAAAAAAAoADgAMAAsABAAKAAAAFAAAAAAAAAEDAAoADAAAAAgABAAKAAAACAAAAFQAAAACAAAAKAAAAAQAAADw/f//CAAAAAwAAAAAAAAAAAAAAAUAAAByZWZJZAAAABD+//8IAAAAEAAAAAYAAABzeXN0ZW0AAAQAAABuYW1lAAAAAAUAAACoAQAAMAEAANAAAABgAAAABAAAAE7///8UAAAAPAAAADwAAAAAAAUBOAAAAAEAAAAEAAAAbP7//wgAAAAQAAAABQAAAHN0YXRlAAAABAAAAG5hbWUAAAAAAAAAAGT+//8FAAAAc3RhdGUAAACm////FAAAADwAAAA8AAAAAAAFATgAAAABAAAABAAAAMT+//8IAAAAEAAAAAcAAABzZW5zb3IyAAQAAABuYW1lAAAAAAAAAAC8/v//BwAAAHNlbnNvcjIAAAASABgAFAATABIADAAAAAgABAASAAAAFAAAADwAAAA8AAAAAAADATwAAAABAAAABAAAADD///8IAAAAEAAAAAYAAABzZW5zb3IAAAQAAABuYW1lAAAAAAAAAACi////AAACAAYAAABzZW5zb3IAAJ7///8UAAAAPAAAAEQAAAAAAAAKRAAAAAEAAAAEAAAAjP///wgAAAAQAAAABAAAAHRpbWUAAAAABAAAAG5hbWUAAAAAAAAAAAAABgAIAAYABgAAAAAAAwAEAAAAdGltZQAAEgAYABQAAAATAAwAAAAIAAQAEgAAABQAAABEAAAASAAAAAAAAAVEAAAAAQAAAAwAAAAIAAwACAAEAAgAAAAIAAAAEAAAAAYAAABsYWJlbHMAAAQAAABuYW1lAAAAAAAAAAAEAAQABAAAAAYAAABsYWJlbHMAAAAAAAD/////eAEAABQAAAAAAAAADAAWABQAEwAMAAQADAAAAMAAAAAAAAAAFAAAAAAAAAMDAAoAGAAMAAgABAAKAAAAFAAAAOgAAAAEAAAAAAAAAAAAAAANAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGAAAAAAAAAAYAAAAAAAAABgAAAAAAAAAMAAAAAAAAAAAAAAAAAAAADAAAAAAAAAAIAAAAAAAAABQAAAAAAAAAAgAAAAAAAAAWAAAAAAAAAAgAAAAAAAAAHgAAAAAAAAAAAAAAAAAAAB4AAAAAAAAABgAAAAAAAAAkAAAAAAAAAAIAAAAAAAAAJgAAAAAAAAAAAAAAAAAAACYAAAAAAAAABgAAAAAAAAAsAAAAAAAAAAQAAAAAAAAAAAAAAAFAAAABAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAEAAAAAAAAAAIAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAABgAAAAwAAAASAAAAGAAAAAAAAABob3N0PUFob3N0PUJob3N0PUFob3N0PUIANEvZC55uFgA0S9kLnm4WAP7lFAyebhYA/uUUDJ5uFgkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAwAAAAQAAAAFAAAACAAAAAAAAABOYU4wME5hTgAAAAADAAAABgAAAAkAAAAKAAAAAAAAAGFhYWJiYmNjYzEAAAAAAAAQAAAADAAUABIADAAIAAQADAAAABAAAAAsAAAAOAAAAAAAAwABAAAAqAIAAAAAAACAAQAAAAAAAMAAAAAAAAAAAAAAAAAAAAAAAAoADAAAAAgABAAKAAAACAAAAFQAAAACAAAAKAAAAAQAAADw/f//CAAAAAwAAAAAAAAAAAAAAAUAAAByZWZJZAAAABD+//8IAAAAEAAAAAYAAABzeXN0ZW0AAAQAAABuYW1lAAAAAAUAAACoAQAAMAEAANAAAABgAAAABAAAAE7///8UAAAAPAAAADwAAAAAAAUBOAAAAAEAAAAEAAAAbP7//wgAAA
AQAAAABQAAAHN0YXRlAAAABAAAAG5hbWUAAAAAAAAAAGT+//8FAAAAc3RhdGUAAACm////FAAAADwAAAA8AAAAAAAFATgAAAABAAAABAAAAMT+//8IAAAAEAAAAAcAAABzZW5zb3IyAAQAAABuYW1lAAAAAAAAAAC8/v//BwAAAHNlbnNvcjIAAAASABgAFAATABIADAAAAAgABAASAAAAFAAAADwAAAA8AAAAAAADATwAAAABAAAABAAAADD///8IAAAAEAAAAAYAAABzZW5zb3IAAAQAAABuYW1lAAAAAAAAAACi////AAACAAYAAABzZW5zb3IAAJ7///8UAAAAPAAAAEQAAAAAAAAKRAAAAAEAAAAEAAAAjP///wgAAAAQAAAABAAAAHRpbWUAAAAABAAAAG5hbWUAAAAAAAAAAAAABgAIAAYABgAAAAAAAwAEAAAAdGltZQAAEgAYABQAAAATAAwAAAAIAAQAEgAAABQAAABEAAAASAAAAAAAAAVEAAAAAQAAAAwAAAAIAAwACAAEAAgAAAAIAAAAEAAAAAYAAABsYWJlbHMAAAQAAABuYW1lAAAAAAAAAAAEAAQABAAAAAYAAABsYWJlbHMAAMACAABBUlJPVzE=


@@ -0,0 +1,4 @@
system,host=A sensor=0,sensor2="NaN",state="aaa" 1616403090000000000
system,host=B sensor="NaN",sensor2=0,state="bbb" 1616403090000000000
system,host=A sensor="NaN",sensor2=0,state="ccc" 1616403091000000000
system,host=B sensor=0,sensor2="NaN",state=1 1616403091000000000
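Each record above is Influx line protocol: a measurement plus comma-separated tags, a space, comma-separated field key/values, a space, and a nanosecond timestamp. A simplified split of one such record (assuming well-formed input with no escaped spaces, unlike the real `influx.Parser`):

```go
package main

import (
	"fmt"
	"strings"
)

// splitLine breaks a well-formed Influx line-protocol record into its three
// space-separated sections: measurement+tags, fields, timestamp.
// Simplification: no handling of escaped spaces or quoted-string edge cases.
func splitLine(line string) (series, fields, ts string) {
	parts := strings.SplitN(line, " ", 3)
	return parts[0], parts[1], parts[2]
}

func main() {
	s, f, ts := splitLine(`system,host=A sensor=0,sensor2="NaN",state="aaa" 1616403090000000000`)
	fmt.Println(s)  // prints system,host=A
	fmt.Println(f)  // prints sensor=0,sensor2="NaN",state="aaa"
	fmt.Println(ts) // prints 1616403090000000000
}
```

Note how `sensor` flips between a number and the string `"NaN"` across lines — this is exactly the changing-type case the converter's `getFieldTypeAndValue` warning path handles.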


@@ -0,0 +1,285 @@
{
"schema": {
"name": "cpu",
"fields": [
{
"name": "labels",
"type": "string",
"typeInfo": {
"frame": "string"
}
},
{
"name": "time",
"type": "time",
"typeInfo": {
"frame": "time.Time"
}
},
{
"name": "usage_guest",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_guest_nice",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_idle",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_iowait",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_irq",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_nice",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_softirq",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_steal",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_system",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
},
{
"name": "usage_user",
"type": "number",
"typeInfo": {
"frame": "float64",
"nullable": true
}
}
]
},
"data": {
"values": [
[
"cpu=cpu0, host=MacBook-Pro-Alexander.local",
"cpu=cpu1, host=MacBook-Pro-Alexander.local",
"cpu=cpu2, host=MacBook-Pro-Alexander.local",
"cpu=cpu3, host=MacBook-Pro-Alexander.local",
"cpu=cpu4, host=MacBook-Pro-Alexander.local",
"cpu=cpu5, host=MacBook-Pro-Alexander.local",
"cpu=cpu6, host=MacBook-Pro-Alexander.local",
"cpu=cpu7, host=MacBook-Pro-Alexander.local",
"cpu=cpu8, host=MacBook-Pro-Alexander.local",
"cpu=cpu9, host=MacBook-Pro-Alexander.local",
"cpu=cpu10, host=MacBook-Pro-Alexander.local",
"cpu=cpu11, host=MacBook-Pro-Alexander.local",
"cpu=cpu-total, host=MacBook-Pro-Alexander.local"
],
[
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000,
1616403090000
],
[
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
[
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
[
82.00000000012005,
100,
88.23529411773097,
100,
91.91919191902859,
100,
93.0000000000291,
100,
95.04950495055924,
100,
100,
100,
95.8368026645606
],
[
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
[
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
[
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
[
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
[
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
],
[
6.000000000005457,
0,
4.901960784315911,
0,
3.0303030303011163,
0,
2.000000000001023,
0,
1.9801980198033176,
0,
0,
0,
1.4987510408004405
],
[
12.000000000033651,
0,
6.862745098042275,
0,
5.0505050504922915,
0,
5.000000000006821,
0,
2.970297029704976,
0,
0,
0,
2.6644462947563388
]
]
}
}

File diff suppressed because it is too large


@@ -0,0 +1,4 @@
node_cpu,cpu=7,mode=user seconds_total=6410.4799999999996 1625783151607273176
node_cpu,cpu=0,mode=user guest_seconds_total=0 1625783151607273176
node_cpu,cpu=0,mode=nice guest_seconds_total=0 1625783151607273176
node_cpu,cpu=1,mode=user guest_seconds_total=0 1625783151607273176


@@ -0,0 +1,5 @@
node_cpu,cpu=7,mode=user seconds_total=6410.4799999999996 1625783151607273170
node_cpu,cpu=0,mode=user guest_seconds_total=0 1625783151607273175
node_cpu,cpu=0,mode=nice guest_seconds_total=0 1625783151607273175
node_cpu,cpu=1,mode=user guest_seconds_total=0 1625783151607273175
node_cpu,cpu=7,mode=user seconds_total=2410.4799999999996 1625783151607273178


@@ -0,0 +1,210 @@
node_cpu,cpu=0,mode=idle seconds_total=99081.979999999996 1625842606118404128
node_cpu,cpu=0,mode=iowait seconds_total=53.490000000000002 1625842606118404128
node_cpu,cpu=0,mode=irq seconds_total=0 1625842606118404128
node_cpu,cpu=0,mode=nice seconds_total=7.2599999999999998 1625842606118404128
node_cpu,cpu=0,mode=softirq seconds_total=147.97 1625842606118404128
node_cpu,cpu=0,mode=steal seconds_total=0 1625842606118404128
node_cpu,cpu=0,mode=system seconds_total=1851.9000000000001 1625842606118404128
node_cpu,cpu=0,mode=user seconds_total=7192.9799999999996 1625842606118404128
node_cpu,cpu=1,mode=idle seconds_total=2119.0999999999999 1625842606118404128
node_cpu,cpu=1,mode=iowait seconds_total=1.3899999999999999 1625842606118404128
node_cpu,cpu=1,mode=irq seconds_total=0 1625842606118404128
node_cpu,cpu=1,mode=nice seconds_total=7.2400000000000002 1625842606118404128
node_cpu,cpu=1,mode=softirq seconds_total=13.390000000000001 1625842606118404128
node_cpu,cpu=1,mode=steal seconds_total=0 1625842606118404128
node_cpu,cpu=1,mode=system seconds_total=1717.53 1625842606118404128
node_cpu,cpu=1,mode=user seconds_total=7760.1400000000003 1625842606118404128
node_cpu,cpu=2,mode=idle seconds_total=2115.0100000000002 1625842606118404128
node_cpu,cpu=2,mode=iowait seconds_total=1.1499999999999999 1625842606118404128
node_cpu,cpu=2,mode=irq seconds_total=0 1625842606118404128
node_cpu,cpu=2,mode=nice seconds_total=7.3799999999999999 1625842606118404128
node_cpu,cpu=2,mode=softirq seconds_total=909.46000000000004 1625842606118404128
node_cpu,cpu=2,mode=steal seconds_total=0 1625842606118404128
node_cpu,cpu=2,mode=system seconds_total=1772.5 1625842606118404128
node_cpu,cpu=2,mode=user seconds_total=7481.4899999999998 1625842606118404128
node_cpu,cpu=3,mode=idle seconds_total=2136.0300000000002 1625842606118404128
node_cpu,cpu=3,mode=iowait seconds_total=1.47 1625842606118404128
node_cpu,cpu=3,mode=irq seconds_total=0 1625842606118404128
node_cpu,cpu=3,mode=nice seconds_total=6.5300000000000002 1625842606118404128
node_cpu,cpu=3,mode=softirq seconds_total=9.75 1625842606118404128
node_cpu,cpu=3,mode=steal seconds_total=0 1625842606118404128
node_cpu,cpu=3,mode=system seconds_total=1721.2 1625842606118404128
node_cpu,cpu=3,mode=user seconds_total=7675.8699999999999 1625842606118404128
node_cpu,cpu=4,mode=idle seconds_total=2135.77 1625842606118404128
node_cpu,cpu=4,mode=iowait seconds_total=1.1799999999999999 1625842606118404128
node_cpu,cpu=4,mode=irq seconds_total=0 1625842606118404128
node_cpu,cpu=4,mode=nice seconds_total=9.0600000000000005 1625842606118404128
node_cpu,cpu=4,mode=softirq seconds_total=8.4199999999999999 1625842606118404128
node_cpu,cpu=4,mode=steal seconds_total=0 1625842606118404128
node_cpu,cpu=4,mode=system seconds_total=1687.9400000000001 1625842606118404128
node_cpu,cpu=4,mode=user seconds_total=8106.6899999999996 1625842606118404128
node_cpu,cpu=5,mode=idle seconds_total=2135.7800000000002 1625842606118404128
node_cpu,cpu=5,mode=iowait seconds_total=1.3600000000000001 1625842606118404128
node_cpu,cpu=5,mode=irq seconds_total=0 1625842606118404128
node_cpu,cpu=5,mode=nice seconds_total=8.3000000000000007 1625842606118404128
node_cpu,cpu=5,mode=softirq seconds_total=7.9800000000000004 1625842606118404128
node_cpu,cpu=5,mode=steal seconds_total=0 1625842606118404128
node_cpu,cpu=5,mode=system seconds_total=1656.8199999999999 1625842606118404128
node_cpu,cpu=5,mode=user seconds_total=7809.5100000000002 1625842606118404128
node_cpu,cpu=6,mode=idle seconds_total=2142.21 1625842606118404128
node_cpu,cpu=6,mode=iowait seconds_total=1.5600000000000001 1625842606118404128
node_cpu,cpu=6,mode=irq seconds_total=0 1625842606118404128
node_cpu,cpu=6,mode=nice seconds_total=11.5 1625842606118404128
node_cpu,cpu=6,mode=softirq seconds_total=7.6100000000000003 1625842606118404128
node_cpu,cpu=6,mode=steal seconds_total=0 1625842606118404128
node_cpu,cpu=6,mode=system seconds_total=1655.5599999999999 1625842606118404128
node_cpu,cpu=6,mode=user seconds_total=7865.5200000000004 1625842606118404128
node_cpu,cpu=7,mode=idle seconds_total=2136.9899999999998 1625842606118404128
node_cpu,cpu=7,mode=iowait seconds_total=1.45 1625842606118404128
node_cpu,cpu=7,mode=irq seconds_total=0 1625842606118404128
node_cpu,cpu=7,mode=nice seconds_total=7.3200000000000003 1625842606118404128
node_cpu,cpu=7,mode=softirq seconds_total=6.9900000000000002 1625842606118404128
node_cpu,cpu=7,mode=steal seconds_total=0 1625842606118404128
node_cpu,cpu=7,mode=system seconds_total=1826.8299999999999 1625842606118404128
node_cpu,cpu=7,mode=user seconds_total=7717.8699999999999 1625842606118404128
node_cpu,cpu=0,mode=user guest_seconds_total=0 1625842606118404128
node_cpu,cpu=0,mode=nice guest_seconds_total=0 1625842606118404128
node_cpu,cpu=1,mode=user guest_seconds_total=0 1625842606118404128
node_cpu,cpu=1,mode=nice guest_seconds_total=0 1625842606118404128
node_cpu,cpu=2,mode=user guest_seconds_total=0 1625842606118404128
node_cpu,cpu=2,mode=nice guest_seconds_total=0 1625842606118404128
node_cpu,cpu=3,mode=user guest_seconds_total=0 1625842606118404128
node_cpu,cpu=3,mode=nice guest_seconds_total=0 1625842606118404128
node_cpu,cpu=4,mode=user guest_seconds_total=0 1625842606118404128
node_cpu,cpu=4,mode=nice guest_seconds_total=0 1625842606118404128
node_cpu,cpu=5,mode=user guest_seconds_total=0 1625842606118404128
node_cpu,cpu=5,mode=nice guest_seconds_total=0 1625842606118404128
node_cpu,cpu=6,mode=user guest_seconds_total=0 1625842606118404128
node_cpu,cpu=6,mode=nice guest_seconds_total=0 1625842606118404128
node_cpu,cpu=7,mode=user guest_seconds_total=0 1625842606118404128
node_cpu,cpu=7,mode=nice guest_seconds_total=0 1625842606118404128
node_disk,device=nvme0n1 reads_completed_total=1.3411377179449652e-304 1625842606119336434
node_disk,device=dm-0 reads_completed_total=2.1994805592372739e-304 1625842606119375842
node_disk,device=dm-1 reads_completed_total=2.1922059384353575e-304 1625842606119395462
node_disk,device=dm-2 reads_completed_total=6.9694889474988109e-307 1625842606119415905
node_disk,device=nvme0n1 reads_merged_total=8.6959111101821589e-305 1625842606119338423
node_disk,device=dm-0 reads_merged_total=0 1625842606119376783
node_disk,device=dm-1 reads_merged_total=0 1625842606119396342
node_disk,device=dm-2 reads_merged_total=0 1625842606119416865
node_disk,device=nvme0n1 read_bytes_total=7.9506563754880824e-303 1625842606119339523
node_disk,device=dm-0 read_bytes_total=7.9279261327783147e-303 1625842606119377699
node_disk,device=dm-1 read_bytes_total=7.9161757824572278e-303 1625842606119397292
node_disk,device=dm-2 read_bytes_total=1.0474344384425674e-305 1625842606119417800
node_disk,device=nvme0n1 read_time_seconds_total=3.1966722639409534e-305 1625842606119340423
node_disk,device=dm-0 read_time_seconds_total=8.756452044066651e-305 1625842606119378591
node_disk,device=dm-1 read_time_seconds_total=8.747852873623908e-305 1625842606119398381
node_disk,device=dm-2 read_time_seconds_total=5.1317630061530409e-307 1625842606119418668
node_disk,device=nvme0n1 writes_completed_total=1.1674108966428034e-303 1625842606119341584
node_disk,device=dm-0 writes_completed_total=1.8509173462488364e-303 1625842606119379606
node_disk,device=dm-1 writes_completed_total=1.8359312113078591e-303 1625842606119399822
node_disk,device=dm-2 writes_completed_total=5.1303760431906859e-306 1625842606119419727
node_disk,device=nvme0n1 writes_merged_total=6.9575402615493329e-304 1625842606119342542
node_disk,device=dm-0 writes_merged_total=0 1625842606119380441
node_disk,device=dm-1 writes_merged_total=0 1625842606119400695
node_disk,device=dm-2 writes_merged_total=0 1625842606119420658
node_disk,device=nvme0n1 written_bytes_total=6.4553668769213912e-302 1625842606119343475
node_disk,device=dm-0 written_bytes_total=6.4198260893924029e-302 1625842606119381302
node_disk,device=dm-1 written_bytes_total=6.4584224950508319e-302 1625842606119401865
node_disk,device=dm-2 written_bytes_total=4.1043008345703274e-305 1625842606119421570
node_disk,device=nvme0n1 write_time_seconds_total=1.3076709948695808e-303 1625842606119344345
node_disk,device=dm-0 write_time_seconds_total=5.2982706852329569e-303 1625842606119382389
node_disk,device=dm-1 write_time_seconds_total=5.1035632749998652e-303 1625842606119402703
node_disk,device=dm-2 write_time_seconds_total=1.4557563381913418e-305 1625842606119422431
node_disk,device=nvme0n1 io_time_seconds_total=7.7729288595982396e-304 1625842606119346457
node_disk,device=dm-0 io_time_seconds_total=7.9020828517980929e-304 1625842606119383847
node_disk,device=dm-1 io_time_seconds_total=7.8996972754816854e-304 1625842606119404189
node_disk,device=dm-2 io_time_seconds_total=6.9348148732738551e-307 1625842606119424083
node_disk,device=nvme0n1 io_time_weighted_seconds_total=1.4177826789187767e-303 1625842606119347403
node_disk,device=dm-0 io_time_weighted_seconds_total=5.3858352057137237e-303 1625842606119384645
node_disk,device=dm-1 io_time_weighted_seconds_total=5.1910418037747827e-303 1625842606119405059
node_disk,device=dm-2 io_time_weighted_seconds_total=1.5070739682643407e-305 1625842606119425000
node_disk,device=nvme0n1 discards_completed_total=0 1625842606119348254
node_disk,device=dm-0 discards_completed_total=0 1625842606119385405
node_disk,device=dm-1 discards_completed_total=0 1625842606119405820
node_disk,device=dm-2 discards_completed_total=0 1625842606119425800
node_disk,device=nvme0n1 discards_merged_total=0 1625842606119348995
node_disk,device=dm-0 discards_merged_total=0 1625842606119386162
node_disk,device=dm-1 discards_merged_total=0 1625842606119406524
node_disk,device=dm-2 discards_merged_total=0 1625842606119426506
node_disk,device=nvme0n1 discarded_sectors_total=0 1625842606119349745
node_disk,device=dm-0 discarded_sectors_total=0 1625842606119386827
node_disk,device=dm-1 discarded_sectors_total=0 1625842606119407221
node_disk,device=dm-2 discarded_sectors_total=0 1625842606119427251
node_disk,device=nvme0n1 discard_time_seconds_total=0 1625842606119350390
node_disk,device=dm-0 discard_time_seconds_total=0 1625842606119387469
node_disk,device=dm-1 discard_time_seconds_total=0 1625842606119408059
node_disk,device=dm-2 discard_time_seconds_total=0 1625842606119427965
node_disk,device=nvme0n1 flush_requests_total=8.2776030773264636e-305 1625842606119351194
node_disk,device=dm-0 flush_requests_total=0 1625842606119388383
node_disk,device=dm-1 flush_requests_total=0 1625842606119408727
node_disk,device=dm-2 flush_requests_total=0 1625842606119428731
node_disk,device=nvme0n1 flush_requests_time_seconds_total=112.684 1625842606119352392
node_disk,device=dm-0 flush_requests_time_seconds_total=0 1625842606119389100
node_disk,device=dm-1 flush_requests_time_seconds_total=0 1625842606119409394
node_disk,device=dm-2 flush_requests_time_seconds_total=0 1625842606119429446
node intr_total=384262117 1625842606119685058
node context_switches_total=605111048 1625842606119685058
node forks_total=515402 1625842606119685058
node_memory MemTotal_bytes=16445845504 1625842606118926829
node_memory MemFree_bytes=3914833920 1625842606118926829
node_memory MemAvailable_bytes=10026749952 1625842606118926829
node_memory Buffers_bytes=1061900288 1625842606118926829
node_memory Cached_bytes=6257840128 1625842606118926829
node_memory SwapCached_bytes=1974272 1625842606118926829
node_memory Active_bytes=3568754688 1625842606118926829
node_memory Inactive_bytes=7085912064 1625842606118926829
node_memory Active_anon_bytes=58318848 1625842606118926829
node_memory Inactive_anon_bytes=4578213888 1625842606118926829
node_memory Active_file_bytes=3510435840 1625842606118926829
node_memory Inactive_file_bytes=2507698176 1625842606118926829
node_memory Unevictable_bytes=999981056 1625842606118926829
node_memory Mlocked_bytes=1232896 1625842606118926829
node_memory SwapTotal_bytes=1023406080 1625842606118926829
node_memory SwapFree_bytes=1000431616 1625842606118926829
node_memory Dirty_bytes=667648 1625842606118926829
node_memory Writeback_bytes=0 1625842606118926829
node_memory AnonPages_bytes=4332937216 1625842606118926829
node_memory Mapped_bytes=1069518848 1625842606118926829
node_memory Shmem_bytes=1332330496 1625842606118926829
node_memory KReclaimable_bytes=444194816 1625842606118926829
node_memory Slab_bytes=672362496 1625842606118926829
node_memory SReclaimable_bytes=444194816 1625842606118926829
node_memory SUnreclaim_bytes=228167680 1625842606118926829
node_memory KernelStack_bytes=26329088 1625842606118926829
node_memory PageTables_bytes=60489728 1625842606118926829
node_memory NFS_Unstable_bytes=0 1625842606118926829
node_memory Bounce_bytes=0 1625842606118926829
node_memory WritebackTmp_bytes=0 1625842606118926829
node_memory CommitLimit_bytes=9246326784 1625842606118926829
node_memory Committed_AS_bytes=20046229504 1625842606118926829
node_memory VmallocTotal_bytes=35184372087808 1625842606118926829
node_memory VmallocUsed_bytes=60338176 1625842606118926829
node_memory VmallocChunk_bytes=0 1625842606118926829
node_memory Percpu_bytes=13631488 1625842606118926829
node_memory HardwareCorrupted_bytes=0 1625842606118926829
node_memory AnonHugePages_bytes=0 1625842606118926829
node_memory ShmemHugePages_bytes=0 1625842606118926829
node_memory ShmemPmdMapped_bytes=0 1625842606118926829
node_memory FileHugePages_bytes=0 1625842606118926829
node_memory FilePmdMapped_bytes=0 1625842606118926829
node_memory HugePages_Total=0 1625842606118926829
node_memory HugePages_Free=0 1625842606118926829
node_memory HugePages_Rsvd=0 1625842606118926829
node_memory HugePages_Surp=0 1625842606118926829
node_memory Hugepagesize_bytes=2097152 1625842606118926829
node_memory Hugetlb_bytes=0 1625842606118926829
node_memory DirectMap4k_bytes=970108928 1625842606118926829
node_memory DirectMap2M_bytes=14799601664 1625842606118926829
node_memory DirectMap1G_bytes=2147483648 1625842606118926829
node_disk,device=nvme0n1 io_now=0 1625842606119345621
node_disk,device=dm-0 io_now=0 1625842606119383176
node_disk,device=dm-1 io_now=0 1625842606119403515
node_disk,device=dm-2 io_now=0 1625842606119423375
node_uname,sysname=Linux,release=5.11.0-22-generic,version=#23-Ubuntu\ SMP\ Thu\ Jun\ 17\ 00:34:23\ UTC\ 2021,machine=x86_64,nodename=monox,domainname=(none) info=1 1625842606119640661
node boot_time_seconds=1625634352 1625842606119685058
node procs_running=1 1625842606119685058
node procs_blocked=0 1625842606119685058
node time_seconds=1625842606.1201811 1625842606120180970
node load1=1.8 1625842606120195353
node load5=1.2 1625842606120195353
node load15=1.05 1625842606120195353
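The fixture above is InfluxDB line protocol: a measurement name with optional comma-joined tags, a space, comma-joined field key/value pairs, a space, and a nanosecond timestamp. A minimal sketch of splitting one record into those parts — assuming no escaped commas or spaces, which holds for these node_disk/node_memory fixtures but not for line protocol in general:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLine splits one line-protocol record into its three space-separated
// sections (measurement+tags, fields, timestamp) and then splits the first
// two on commas and '='. Simplified: no escape handling.
func parseLine(line string) (measurement string, tags, fields map[string]string, ts string) {
	parts := strings.SplitN(line, " ", 3)
	head := strings.Split(parts[0], ",")
	measurement = head[0]
	tags = map[string]string{}
	for _, kv := range head[1:] {
		p := strings.SplitN(kv, "=", 2)
		tags[p[0]] = p[1]
	}
	fields = map[string]string{}
	for _, kv := range strings.Split(parts[1], ",") {
		p := strings.SplitN(kv, "=", 2)
		fields[p[0]] = p[1]
	}
	ts = parts[2]
	return
}

func main() {
	m, tags, fields, ts := parseLine("node_disk,device=dm-0 io_now=0 1625842606119383176")
	fmt.Println(m, tags["device"], fields["io_now"], ts)
}
```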


@@ -0,0 +1,2 @@
avionics_actuator_ActuatorCommands,host=MacBook-Pro-Alexander.local tilt_deg_2=-17.7650375,surface_angle_deg_3=11.6852818,tilt_deg_0=-1.15918803,surface_angle_deg_8=-9.71168709,pitch_deg_4=-15.3900461,motor_current_A_6=12.0532084,pitch_brake_3=true,surface_angle_deg_1=-12.3403225,surface_angle_deg_7=29.6087742,tilt_brake_1=true,motor_current_A_11=15.4619112,motor_current_A_5=-5.99128914,pitch_brake_1=true,motor_current_A_2=9.32099056,tilt_brake_0=true,tilt_brake_4=false,tailno="GHIL",motor_current_A_1=2.68936205,pitch_brake_5=false,tilt_deg_3=47.3740387,surface_angle_deg_2=15.5803757,motor_current_A_4=24.7537708,tilt_deg_4=-32.8259926,pitch_brake_2=false,surface_angle_deg_0=-13.7655039,pitch_brake_0=true,tilt_deg_1=-20.9695129,motor_current_A_3=9.7648468,pitch_deg_5=9.11340141,motor_current_A_10=43.7951317,pitch_deg_1=8.13307095,surface_angle_deg_4=16.7721748,surface_angle_deg_6=-0.672622204,tilt_brake_2=true,motor_current_A_7=-15.5444078,surface_angle_deg_9=-27.5968456,tilt_brake_5=true,pitch_deg_2=-3.20253587,pitch_brake_4=true,tilt_deg_5=7.48156977,motor_current_A_8=37.8594284,pitch_deg_0=-29.9564457,tilt_brake_3=true,motor_current_A_0=1.99507976,surface_angle_deg_5=-16.7661037,pitch_deg_3=-20.904705,recorder="fcc1",motor_current_A_9=-19.5889759 1618400059121931000
avionics_actuator_ActuatorCommands,host=MacBook-Pro-Alexander.local surface_angle_deg_0=-16.4649525,pitch_brake_2=false,motor_current_A_10=42.0176544,pitch_brake_3=false,tailno="GHIL",pitch_deg_1=9.89322376,surface_angle_deg_2=11.1123266,pitch_deg_4=-16.7317562,tilt_brake_2=true,motor_current_A_8=36.8988419,tilt_deg_5=5.36965704,surface_angle_deg_4=17.8226891,surface_angle_deg_6=-0.707197368,tilt_deg_2=-20.144413,pitch_brake_5=true,pitch_deg_2=-2.82733965,tilt_brake_5=false,motor_current_A_1=5.20447683,pitch_brake_1=false,motor_current_A_9=-19.5628815,surface_angle_deg_1=-12.6342392,pitch_deg_3=-21i,tilt_brake_4=true,motor_current_A_6=8.51478672,motor_current_A_0=2.84927869,tilt_deg_0=2.93773961,tilt_brake_3=true,tilt_deg_1=-20.8846588,surface_angle_deg_3=8.33424473,tilt_deg_4=-28.9802303,pitch_deg_5=6.43800926,surface_angle_deg_7=28.5278912,motor_current_A_7=-17.6263618,recorder="fcc1",surface_angle_deg_5=-23.5173836,tilt_brake_0=true,motor_current_A_5=-5.96704578,tilt_deg_3=45.6446266,motor_current_A_11=11.097291,pitch_brake_0=true,surface_angle_deg_9=-27.9620895,motor_current_A_3=6.87531996,motor_current_A_4=23.7491093,pitch_deg_0=-29.835228,motor_current_A_2=6.56280565,pitch_brake_4=false,surface_angle_deg_8=-9.53332138,tilt_brake_1=true 1618400059173608000


@@ -0,0 +1,17 @@
cpu,cpu=cpu9,host=MacBook-Pro-Alexander.local usage_guest=0,usage_user=0,usage_system=0,usage_idle=100,usage_nice=0,usage_iowait=0,usage_softirq=0,usage_irq=0,usage_steal=0,usage_guest_nice=0 1616403089000000000
cpu,cpu=cpu10,host=MacBook-Pro-Alexander.local usage_system=0,usage_idle=100,usage_nice=0,usage_guest=0,usage_guest_nice=0,usage_user=0,usage_iowait=0,usage_irq=0,usage_softirq=0,usage_steal=0 1616403089000000000
cpu,cpu=cpu11,host=MacBook-Pro-Alexander.local usage_guest_nice=0,usage_user=0,usage_idle=100,usage_nice=0,usage_softirq=0,usage_guest=0,usage_system=0,usage_iowait=0,usage_irq=0,usage_steal=0 1616403089000000000
cpu,cpu=cpu-total,host=MacBook-Pro-Alexander.local usage_nice=0,usage_iowait=0,usage_irq=0,usage_idle=95.8368026645606,usage_system=1.4987510408004405,usage_softirq=0,usage_steal=0,usage_guest=0,usage_guest_nice=0,usage_user=2.6644462947563388 1616403089000000000
cpu,cpu=cpu0,host=MacBook-Pro-Alexander.local usage_system=6.000000000005457,usage_idle=82.00000000012005,usage_nice=0,usage_irq=0,usage_steal=0,usage_guest=0,usage_guest_nice=0,usage_user=12.000000000033651,usage_iowait=0,usage_softirq=0 1616403090000000000
cpu,cpu=cpu1,host=MacBook-Pro-Alexander.local usage_user=0,usage_irq=0,usage_softirq=0,usage_steal=0,usage_guest_nice=0,usage_system=0,usage_idle=100,usage_nice=0,usage_iowait=0,usage_guest=0 1616403090000000000
cpu,cpu=cpu2,host=MacBook-Pro-Alexander.local usage_system=4.901960784315911,usage_idle=88.23529411773097,usage_iowait=0,usage_guest=0,usage_user=6.862745098042275,usage_nice=0,usage_irq=0,usage_softirq=0,usage_steal=0,usage_guest_nice=0 1616403090000000000
cpu,cpu=cpu3,host=MacBook-Pro-Alexander.local usage_user=0,usage_iowait=0,usage_steal=0,usage_guest_nice=0,usage_softirq=0,usage_guest=0,usage_system=0,usage_idle=100,usage_nice=0,usage_irq=0 1616403090000000000
cpu,cpu=cpu4,host=MacBook-Pro-Alexander.local usage_idle=91.91919191902859,usage_nice=0,usage_iowait=0,usage_steal=0,usage_guest=0,usage_guest_nice=0,usage_system=3.0303030303011163,usage_irq=0,usage_softirq=0,usage_user=5.0505050504922915 1616403090000000000
cpu,cpu=cpu5,host=MacBook-Pro-Alexander.local usage_softirq=0,usage_guest_nice=0,usage_idle=100,usage_nice=0,usage_iowait=0,usage_steal=0,usage_guest=0,usage_user=0,usage_system=0,usage_irq=0 1616403090000000000
cpu,cpu=cpu6,host=MacBook-Pro-Alexander.local usage_idle=93.0000000000291,usage_irq=0,usage_softirq=0,usage_steal=0,usage_guest_nice=0,usage_user=5.000000000006821,usage_system=2.000000000001023,usage_guest=0,usage_nice=0,usage_iowait=0 1616403090000000000
cpu,cpu=cpu7,host=MacBook-Pro-Alexander.local usage_guest_nice=0,usage_user=0,usage_system=0,usage_idle=100,usage_iowait=0,usage_guest=0,usage_nice=0,usage_irq=0,usage_softirq=0,usage_steal=0 1616403090000000000
cpu,cpu=cpu8,host=MacBook-Pro-Alexander.local usage_system=1.9801980198033176,usage_idle=95.04950495055924,usage_softirq=0,usage_steal=0,usage_guest_nice=0,usage_user=2.970297029704976,usage_nice=0,usage_iowait=0,usage_irq=0,usage_guest=0 1616403090000000000
cpu,cpu=cpu9,host=MacBook-Pro-Alexander.local usage_guest=0,usage_user=0,usage_system=0,usage_idle=100,usage_nice=0,usage_iowait=0,usage_softirq=0,usage_irq=0,usage_steal=0,usage_guest_nice=0 1616403090000000000
cpu,cpu=cpu10,host=MacBook-Pro-Alexander.local usage_system=0,usage_idle=100,usage_nice=0,usage_guest=0,usage_guest_nice=0,usage_user=0,usage_iowait=0,usage_irq=0,usage_softirq=0,usage_steal=0 1616403090000000000
cpu,cpu=cpu11,host=MacBook-Pro-Alexander.local usage_guest_nice=0,usage_user=0,usage_idle=100,usage_nice=0,usage_softirq=0,usage_guest=0,usage_system=0,usage_iowait=0,usage_irq=0,usage_steal=0 1616403090000000000
cpu,cpu=cpu-total,host=MacBook-Pro-Alexander.local usage_nice=0,usage_iowait=0,usage_irq=0,usage_idle=95.8368026645606,usage_system=1.4987510408004405,usage_softirq=0,usage_steal=0,usage_guest=0,usage_guest_nice=0,usage_user=2.6644462947563388 1616403090000000000


@@ -0,0 +1,2 @@
system,host=MacBook-Pro-Alexander.local,mylabel=boom1 load15=2.00341796875,n_cpus=12i,n_users=6i,load1=3.15966796875,load5=2.3837890625 1616403089000000000
system,host=MacBook-Pro-Alexander.local,mylabel=boom2 load15=2.00341796875,n_cpus=11i,n_users=6i,load1=3.15966796875,load5=2.3837890625 1616403090000000000


@@ -0,0 +1,13 @@
cpu,cpu=cpu0,host=MacBook-Pro-Alexander.local usage_system=6.000000000005457,usage_idle=82.00000000012005,usage_nice=0,usage_irq=0,usage_steal=0,usage_guest=0,usage_guest_nice=0,usage_user=12.000000000033651,usage_iowait=0,usage_softirq=0 1616403090000000000
cpu,cpu=cpu1,host=MacBook-Pro-Alexander.local usage_user=0,usage_irq=0,usage_softirq=0,usage_steal=0,usage_guest_nice=0,usage_system=0,usage_idle=100,usage_nice=0,usage_iowait=0,usage_guest=0 1616403090000000000
cpu,cpu=cpu2,host=MacBook-Pro-Alexander.local usage_system=4.901960784315911,usage_idle=88.23529411773097,usage_iowait=0,usage_guest=0,usage_user=6.862745098042275,usage_nice=0,usage_irq=0,usage_softirq=0,usage_steal=0,usage_guest_nice=0 1616403090000000000
cpu,cpu=cpu3,host=MacBook-Pro-Alexander.local usage_user=0,usage_iowait=0,usage_steal=0,usage_guest_nice=0,usage_softirq=0,usage_guest=0,usage_system=0,usage_idle=100,usage_nice=0,usage_irq=0 1616403090000000000
cpu,cpu=cpu4,host=MacBook-Pro-Alexander.local usage_idle=91.91919191902859,usage_nice=0,usage_iowait=0,usage_steal=0,usage_guest=0,usage_guest_nice=0,usage_system=3.0303030303011163,usage_irq=0,usage_softirq=0,usage_user=5.0505050504922915 1616403090000000000
cpu,cpu=cpu5,host=MacBook-Pro-Alexander.local usage_softirq=0,usage_guest_nice=0,usage_idle=100,usage_nice=0,usage_iowait=0,usage_steal=0,usage_guest=0,usage_user=0,usage_system=0,usage_irq=0 1616403090000000000
cpu,cpu=cpu6,host=MacBook-Pro-Alexander.local usage_idle=93.0000000000291,usage_irq=0,usage_softirq=0,usage_steal=0,usage_guest_nice=0,usage_user=5.000000000006821,usage_system=2.000000000001023,usage_guest=0,usage_nice=0,usage_iowait=0 1616403090000000000
cpu,cpu=cpu7,host=MacBook-Pro-Alexander.local usage_guest_nice=0,usage_user=0,usage_system=0,usage_idle=100,usage_iowait=0,usage_guest=0,usage_nice=0,usage_irq=0,usage_softirq=0,usage_steal=0 1616403090000000000
cpu,cpu=cpu8,host=MacBook-Pro-Alexander.local usage_system=1.9801980198033176,usage_idle=95.04950495055924,usage_softirq=0,usage_steal=0,usage_guest_nice=0,usage_user=2.970297029704976,usage_nice=0,usage_iowait=0,usage_irq=0,usage_guest=0 1616403090000000000
cpu,cpu=cpu9,host=MacBook-Pro-Alexander.local usage_guest=0,usage_user=0,usage_system=0,usage_idle=100,usage_nice=0,usage_iowait=0,usage_softirq=0,usage_irq=0,usage_steal=0,usage_guest_nice=0 1616403090000000000
cpu,cpu=cpu10,host=MacBook-Pro-Alexander.local usage_system=0,usage_idle=100,usage_nice=0,usage_guest=0,usage_guest_nice=0,usage_user=0,usage_iowait=0,usage_irq=0,usage_softirq=0,usage_steal=0 1616403090000000000
cpu,cpu=cpu11,host=MacBook-Pro-Alexander.local usage_guest_nice=0,usage_user=0,usage_idle=100,usage_nice=0,usage_softirq=0,usage_guest=0,usage_system=0,usage_iowait=0,usage_irq=0,usage_steal=0 1616403090000000000
cpu,cpu=cpu-total,host=MacBook-Pro-Alexander.local usage_nice=0,usage_iowait=0,usage_irq=0,usage_idle=95.8368026645606,usage_system=1.4987510408004405,usage_softirq=0,usage_steal=0,usage_guest=0,usage_guest_nice=0,usage_user=2.6644462947563388 1616403090000000000


@@ -0,0 +1,3 @@
system,host=MacBook-Pro-Alexander.local,mylabel=boom load15=2.00341796875,n_cpus=12i,n_users=6i,load1=3.15966796875,load5=2.3837890625 1616403089000000000
system,host=MacBook-Pro-Alexander.local,mylabel=boom load15=2.00341796876,n_cpus=13i,n_users=7i,load1=3.15966796876,load5=2.3837890626 1616403090000000000
system,host=MacBook-Pro-Alexander.local,mylabel=boom load15=2.00341796877,n_cpus=14i,n_users=8i,load1=3.15966796877,load5=2.3837890627 1616403091000000000


@@ -0,0 +1 @@
system,host=MacBook-Pro-Alexander.local,mylabel=boom load15=2.00341796875,n_cpus=12i,n_users=6i,load1=3.15966796875,load5=2.3837890625 1616403089000000000


@@ -0,0 +1 @@
system,host=MacBook-Pro-Alexander.local,mylabel=boom load15=2.00341796875,n_users=6i,load1=3.15966796875,n_cpus=12i,load5=2.3837890625 1616403089000000000


@@ -60,6 +60,9 @@ func (srv AlertmanagerSrv) RouteDeleteSilence(c *models.ReqContext) response.Res
}
func (srv AlertmanagerSrv) RouteGetAlertingConfig(c *models.ReqContext) response.Response {
if !c.HasUserRole(models.ROLE_EDITOR) {
return ErrResp(http.StatusForbidden, errors.New("permission denied"), "")
}
query := ngmodels.GetLatestAlertmanagerConfigurationQuery{}
if err := srv.store.GetLatestAlertmanagerConfiguration(&query); err != nil {
if errors.Is(err, store.ErrNoAlertmanagerConfiguration) {
@@ -146,6 +149,9 @@ func (srv AlertmanagerSrv) RouteGetAMAlerts(c *models.ReqContext) response.Respo
if errors.Is(err, notifier.ErrGetAlertsBadPayload) {
return ErrResp(http.StatusBadRequest, err, "")
}
if errors.Is(err, notifier.ErrGetAlertsUnavailable) {
return ErrResp(http.StatusServiceUnavailable, err, "")
}
// any other error here should be an unexpected failure and thus an internal error
return ErrResp(http.StatusInternalServerError, err, "")
}


@@ -2,6 +2,7 @@ package api
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"time"
@@ -76,6 +77,13 @@ func (srv PrometheusSrv) RouteGetRuleStatuses(c *models.ReqContext) response.Res
continue
}
groupId, namespaceUID, namespace := r[0], r[1], r[2]
if _, err := srv.store.GetNamespaceByUID(namespaceUID, c.SignedInUser.OrgId, c.SignedInUser); err != nil {
if errors.Is(err, models.ErrFolderAccessDenied) {
// do not include it in the response
continue
}
return toNamespaceErrorResponse(err)
}
alertRuleQuery := ngmodels.ListRuleGroupAlertRulesQuery{OrgID: c.SignedInUser.OrgId, NamespaceUID: namespaceUID, RuleGroup: groupId}
if err := srv.store.GetRuleGroupAlertRules(&alertRuleQuery); err != nil {
ruleResponse.DiscoveryBase.Status = "error"


@@ -117,6 +117,10 @@ type AlertExecCtx struct {
func GetExprRequest(ctx AlertExecCtx, data []models.AlertQuery, now time.Time) (*expr.Request, error) {
req := &expr.Request{
OrgId: ctx.OrgID,
Headers: map[string]string{
// Some data sources check this in query method as sometimes alerting needs special considerations.
"FromAlert": "true",
},
}
for i := range data {


@@ -156,6 +156,16 @@ func New(cfg *setting.Cfg, store store.AlertingStore, m *metrics.Metrics) (*Aler
return am, nil
}
func (am *Alertmanager) Ready() bool {
// We consider AM as ready only when the config has been
// applied at least once successfully. Until then, some objects
// can still be nil.
am.reloadConfigMtx.RLock()
defer am.reloadConfigMtx.RUnlock()
return len(am.config) > 0
}
func (am *Alertmanager) Run(ctx context.Context) error {
// Make sure dispatcher starts. We can tolerate future reload failures.
if err := am.SyncAndApplyConfigFromDatabase(); err != nil {
@@ -269,7 +279,7 @@ func (am *Alertmanager) SyncAndApplyConfigFromDatabase() error {
// applyConfig applies a new configuration by re-initializing all components using the configuration provided.
// It is not safe to call concurrently.
func (am *Alertmanager) applyConfig(cfg *apimodels.PostableUserConfig, rawConfig []byte) error {
func (am *Alertmanager) applyConfig(cfg *apimodels.PostableUserConfig, rawConfig []byte) (err error) {
// First, let's make sure this config is not already loaded
var configChanged bool
if rawConfig == nil {
@@ -504,7 +514,7 @@ func (am *Alertmanager) PutAlerts(postableAlerts apimodels.PostableAlerts) error
am.Metrics.Resolved().Inc()
}
if err := alert.Validate(); err != nil {
if err := validateAlert(alert); err != nil {
if validationErr == nil {
validationErr = &AlertValidationError{}
}
@@ -528,6 +538,59 @@ func (am *Alertmanager) PutAlerts(postableAlerts apimodels.PostableAlerts) error
return nil
}
// validateAlert is a.Validate() while additionally allowing
// space for label and annotation names.
func validateAlert(a *types.Alert) error {
if a.StartsAt.IsZero() {
return fmt.Errorf("start time missing")
}
if !a.EndsAt.IsZero() && a.EndsAt.Before(a.StartsAt) {
return fmt.Errorf("start time must be before end time")
}
if err := validateLabelSet(a.Labels); err != nil {
return fmt.Errorf("invalid label set: %s", err)
}
if len(a.Labels) == 0 {
return fmt.Errorf("at least one label pair required")
}
if err := validateLabelSet(a.Annotations); err != nil {
return fmt.Errorf("invalid annotations: %s", err)
}
return nil
}
// validateLabelSet is ls.Validate() while additionally allowing
// space for label names.
func validateLabelSet(ls model.LabelSet) error {
for ln, lv := range ls {
if !isValidLabelName(ln) {
return fmt.Errorf("invalid name %q", ln)
}
if !lv.IsValid() {
return fmt.Errorf("invalid value %q", lv)
}
}
return nil
}
// isValidLabelName is ln.IsValid() while additionally allowing spaces.
// The regex for Prometheus data model is ^[a-zA-Z_][a-zA-Z0-9_]*$
// while we will follow ^[a-zA-Z_][a-zA-Z0-9_ ]*$
func isValidLabelName(ln model.LabelName) bool {
if len(ln) == 0 {
return false
}
for i, b := range ln {
if !((b >= 'a' && b <= 'z') ||
(b >= 'A' && b <= 'Z') ||
b == '_' ||
(i > 0 && (b == ' ' || (b >= '0' && b <= '9')))) {
return false
}
}
return true
}
// AlertValidationError is the error capturing the validation errors
// faced on the alerts.
type AlertValidationError struct {
@@ -538,7 +601,7 @@ type AlertValidationError struct {
func (e AlertValidationError) Error() string {
errMsg := ""
if len(e.Errors) != 0 {
errMsg := e.Errors[0].Error()
errMsg = e.Errors[0].Error()
for _, e := range e.Errors[1:] {
errMsg += ";" + e.Error()
}
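The isValidLabelName helper added above relaxes the Prometheus data-model rule ^[a-zA-Z_][a-zA-Z0-9_]*$ to ^[a-zA-Z_][a-zA-Z0-9_ ]*$: spaces and digits become legal after the first character. A self-contained copy of that character loop — taking a plain string rather than model.LabelName to avoid the Prometheus dependency — behaves like this:

```go
package main

import "fmt"

// isValidLabelName mirrors the rule in the diff above: the first character
// must be a letter or underscore; subsequent characters may additionally be
// digits or spaces (the Grafana-specific relaxation).
func isValidLabelName(ln string) bool {
	if len(ln) == 0 {
		return false
	}
	for i, b := range ln {
		if !((b >= 'a' && b <= 'z') ||
			(b >= 'A' && b <= 'Z') ||
			b == '_' ||
			(i > 0 && (b == ' ' || (b >= '0' && b <= '9')))) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isValidLabelName("Dashboard URL")) // space after first char: allowed
	fmt.Println(isValidLabelName(" leading"))      // leading space: rejected
	fmt.Println(isValidLabelName("9lives"))        // leading digit: rejected
}
```

This is why the "Allow spaces in label and annotation name" test case below can use names like "Dashboard URL" and "Spaced Label".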


@@ -182,6 +182,36 @@ func TestPutAlert(t *testing.T) {
},
}
},
}, {
title: "Allow spaces in label and annotation name",
postableAlerts: apimodels.PostableAlerts{
PostableAlerts: []models.PostableAlert{
{
Annotations: models.LabelSet{"Dashboard URL": "http://localhost:3000"},
Alert: models.Alert{
Labels: models.LabelSet{"alertname": "Alert4", "Spaced Label": "works"},
GeneratorURL: "http://localhost/url1",
},
StartsAt: strfmt.DateTime{},
EndsAt: strfmt.DateTime{},
},
},
},
expAlerts: func(now time.Time) []*types.Alert {
return []*types.Alert{
{
Alert: model.Alert{
Annotations: model.LabelSet{"Dashboard URL": "http://localhost:3000"},
Labels: model.LabelSet{"alertname": "Alert4", "Spaced Label": "works"},
StartsAt: now,
EndsAt: now.Add(defaultResolveTimeout),
GeneratorURL: "http://localhost/url1",
},
UpdatedAt: now,
Timeout: true,
},
}
},
}, {
title: "Invalid labels",
postableAlerts: apimodels.PostableAlerts{


@@ -16,6 +16,7 @@ import (
var (
ErrGetAlertsInternal = fmt.Errorf("unable to retrieve alerts(s) due to an internal error")
ErrGetAlertsUnavailable = fmt.Errorf("unable to retrieve alerts(s) as alertmanager is not initialised yet")
ErrGetAlertsBadPayload = fmt.Errorf("unable to retrieve alerts")
ErrGetAlertGroupsBadPayload = fmt.Errorf("unable to retrieve alerts groups")
)
@@ -27,6 +28,10 @@ func (am *Alertmanager) GetAlerts(active, silenced, inhibited bool, filter []str
res = apimodels.GettableAlerts{}
)
if !am.Ready() {
return res, ErrGetAlertsUnavailable
}
matchers, err := parseFilter(filter)
if err != nil {
am.logger.Error("failed to parse matchers", "err", err)


@@ -123,7 +123,7 @@ func expandTemplate(name, text string, data map[string]string) (result string, r
}
}()
tmpl, err := text_template.New(name).Option("missingkey=zero").Parse(text)
tmpl, err := text_template.New(name).Option("missingkey=error").Parse(text)
if err != nil {
return "", fmt.Errorf("error parsing template %v: %s", name, err.Error())
}
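The one-line change above switches text/template's missing-key handling from "zero" to "error". With "zero", a key absent from the data map silently renders as the element type's zero value (an empty string for map[string]string); with "error", Execute fails so the caller can surface the problem. A standalone sketch of the difference:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// expand renders text against data with the given missingkey option,
// modeled on the expandTemplate function in the hunk above.
func expand(option, text string, data map[string]string) (string, error) {
	tmpl, err := template.New("demo").Option("missingkey=" + option).Parse(text)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	data := map[string]string{"instance": "node-1"}
	out, _ := expand("zero", "host={{.instance}} job={{.job}}", data)
	fmt.Println(out) // missing .job rendered as the zero value ""
	_, err := expand("error", "host={{.instance}} job={{.job}}", data)
	fmt.Println(err != nil) // true: missing key is now an execution error
}
```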


@@ -542,7 +542,7 @@ func (st DBstore) UpdateRuleGroup(cmd UpdateRuleGroupCmd) error {
func (st DBstore) GetOrgRuleGroups(query *ngmodels.ListOrgRuleGroupsQuery) error {
return st.SQLStore.WithDbSession(context.Background(), func(sess *sqlstore.DBSession) error {
var ruleGroups [][]string
q := "SELECT DISTINCT rule_group, namespace_uid, (select title from dashboard where org_id = alert_rule.org_id and uid = alert_rule.namespace_uid) FROM alert_rule WHERE org_id = ?"
q := "SELECT DISTINCT rule_group, namespace_uid, (select title from dashboard where org_id = alert_rule.org_id and uid = alert_rule.namespace_uid) AS namespace_title FROM alert_rule WHERE org_id = ? ORDER BY namespace_title"
if err := sess.SQL(q, query.OrgID).Find(&ruleGroups); err != nil {
return err
}


@@ -236,7 +236,8 @@ func (ss *SQLStore) buildConnectionString() (string, error) {
}
if isolation := ss.dbCfg.IsolationLevel; isolation != "" {
cnnstr += "&tx_isolation=" + isolation
val := url.QueryEscape(fmt.Sprintf("'%s'", isolation))
cnnstr += fmt.Sprintf("&tx_isolation=%s", val)
}
cnnstr += ss.buildExtraConnectionString('&')
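The fix above stops concatenating the isolation level into the MySQL connection string raw; instead it wraps the value in single quotes and percent-escapes it with url.QueryEscape, so the value cannot break out of (or inject into) the DSN. A minimal reproduction of just that step:

```go
package main

import (
	"fmt"
	"net/url"
)

// isolationParam builds the tx_isolation DSN fragment the same way the
// corrected buildConnectionString does: quote the value, then escape it.
func isolationParam(isolation string) string {
	val := url.QueryEscape(fmt.Sprintf("'%s'", isolation))
	return fmt.Sprintf("&tx_isolation=%s", val)
}

func main() {
	// The single quotes become %27, keeping the DSN well-formed.
	fmt.Println(isolationParam("READ-COMMITTED"))
}
```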


@@ -47,7 +47,7 @@ func (cfg *Cfg) readDateFormats() {
cfg.DateFormats.Interval.Day = valueAsString(dateFormats, "interval_day", "YYYY-MM-DD")
cfg.DateFormats.Interval.Month = valueAsString(dateFormats, "interval_month", "YYYY-MM")
cfg.DateFormats.Interval.Year = "YYYY"
cfg.DateFormats.UseBrowserLocale = dateFormats.Key("date_format_use_browser_locale").MustBool(false)
cfg.DateFormats.UseBrowserLocale = dateFormats.Key("use_browser_locale").MustBool(false)
timezone, err := valueAsTimezone(dateFormats, "default_timezone")
if err != nil {


@@ -118,6 +118,77 @@ func TestAMConfigAccess(t *testing.T) {
}
})
t.Run("when retrieve alertmanager configuration", func(t *testing.T) {
cfgBody := `
{
"template_files": null,
"alertmanager_config": {
"route": {
"receiver": "grafana-default-email"
},
"templates": null,
"receivers": [{
"name": "grafana-default-email",
"grafana_managed_receiver_configs": [{
"disableResolveMessage": false,
"uid": "",
"name": "email receiver",
"type": "email",
"secureFields": {},
"settings": {
"addresses": "<example@email.com>"
}
}]
}]
}
}
`
testCases := []testCase{
{
desc: "un-authenticated request should fail",
url: "http://%s/api/alertmanager/grafana/config/api/v1/alerts",
expStatus: http.StatusUnauthorized,
expBody: `{"message": "Unauthorized"}`,
},
{
desc: "viewer request should fail",
url: "http://viewer:viewer@%s/api/alertmanager/grafana/config/api/v1/alerts",
expStatus: http.StatusForbidden,
expBody: `{"message": "permission denied"}`,
},
{
desc: "editor request should succeed",
url: "http://editor:editor@%s/api/alertmanager/grafana/config/api/v1/alerts",
expStatus: http.StatusOK,
expBody: cfgBody,
},
{
desc: "admin request should succeed",
url: "http://admin:admin@%s/api/alertmanager/grafana/config/api/v1/alerts",
expStatus: http.StatusOK,
expBody: cfgBody,
},
}
for _, tc := range testCases {
t.Run(tc.desc, func(t *testing.T) {
resp, err := http.Get(fmt.Sprintf(tc.url, grafanaListedAddr))
t.Cleanup(func() {
require.NoError(t, resp.Body.Close())
})
require.NoError(t, err)
require.Equal(t, tc.expStatus, resp.StatusCode)
b, err := ioutil.ReadAll(resp.Body)
if tc.expStatus == http.StatusOK {
re := regexp.MustCompile(`"uid":"([\w|-]+)"`)
b = re.ReplaceAll(b, []byte(`"uid":""`))
}
require.NoError(t, err)
require.JSONEq(t, tc.expBody, string(b))
})
}
})
t.Run("when creating silence", func(t *testing.T) {
body := `
{

Some files were not shown because too many files have changed in this diff.