Compare commits

..

9 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Michael Mandrus | 74c21ce75c | update workspace | 2026-01-14 14:56:17 -05:00 |
| Michael Mandrus | ad7a6e9a7a | Merge branch 'main' into mmandrus/secrets/dek-cache | 2026-01-14 14:38:57 -05:00 |
| Michael Mandrus | b73869ea9c | use noop cache | 2025-11-18 12:10:56 -05:00 |
| Michael Mandrus | 3c2f629bb9 | Merge remote-tracking branch 'origin/main' into mmandrus/secrets/dek-cache | 2025-11-18 11:59:33 -05:00 |
| Michael Mandrus | 075761ec66 | Merge remote-tracking branch 'origin/main' into mmandrus/secrets/dek-cache | 2025-11-14 00:13:08 -05:00 |
| Michael Mandrus | 3974e88cbe | flush the encryption cache during consolidation | 2025-11-14 00:03:48 -05:00 |
| Michael Mandrus | 1da89b70a0 | use the cache in most places | 2025-11-13 23:55:31 -05:00 |
| Michael Mandrus | 197019f554 | add namespace, plus unit tests for cache | 2025-11-13 22:34:25 -05:00 |
| Michael Mandrus | 773baf47e1 | pass dek cache into encryption manager | 2025-11-13 15:33:25 -05:00 |
85 changed files with 2181 additions and 1906 deletions
@@ -4,8 +4,7 @@ comments: |
This file is used in the following visualizations: candlestick, heatmap, state timeline, status history, time series.
---
-You can pan the panel time range left and right, and zoom it and in and out.
-This, in turn, changes the dashboard time range.
+You can zoom the panel time range in and out, which in turn, changes the dashboard time range.
**Zoom in** - Click and drag on the panel to zoom in on a particular time range.
@@ -17,9 +16,4 @@ For example, if the original time range is from 9:00 to 9:59, the time range cha
- Next range: 8:30 - 10:29
- Next range: 7:30 - 11:29
-**Pan** - Click and drag the x-axis area of the panel to pan the time range.
-The time range shifts by the distance you drag.
-For example, if the original time range is from 9:00 to 9:59 and you drag 30 minutes to the right, the time range changes to 9:30 to 10:29.
-For screen recordings showing these interactions, refer to the [Panel overview documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/panel-overview/#pan-and-zoom-panel-time-range).
+For screen recordings showing these interactions, refer to the [Panel overview documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/panel-overview/#zoom-panel-time-range).
@@ -304,8 +304,7 @@ When things go bad, it often helps if you understand the context in which the fa
In the next part of the tutorial, we simulate some common use cases that someone would add annotations for.
-1. To manually add an annotation, click anywhere on a graph line to open the data tooltip, then click **Add annotation**.
-You can also press `Ctrl` or `Command` and click anywhere in the graph to open the **Add annotation** dialog box.
+1. To manually add an annotation, click anywhere in your graph, then click **Add annotation**.
Note: you might need to save the dashboard first.
1. In **Description**, enter **Migrated user database**.
1. Click **Save**.
@@ -317,16 +317,13 @@ Click the **Copy time range to clipboard** icon to copy the current time range t
You can also copy and paste a time range using the keyboard shortcuts `t+c` and `t+v` respectively.
-#### Zoom out
+#### Zoom out (Cmd+Z or Ctrl+Z)
-- Click the **Zoom out** icon to view a larger time range in the dashboard or panel visualizations
-- Double click on the panel graph area (time series family visualizations only)
-- Type the `t-` keyboard shortcut
+Click the **Zoom out** icon to view a larger time range in the dashboard or panel visualization.
-#### Zoom in
+#### Zoom in (only applicable to graph visualizations)
-- Click and drag horizontally in the panel graph area to select a time range (time series family visualizations only)
-- Type the `t+` keyboard shortcut
+Click and drag to select the time range in the visualization that you want to view.
#### Refresh dashboard
@@ -146,7 +146,7 @@ To create a variable, follow these steps:
- Variable drop-down lists are displayed in the order in which they're listed in the **Variables** in dashboard settings, so put the variables that you will change often at the top, so they will be shown first (far left on the dashboard).
- By default, variables don't have a default value. This means that the topmost value in the drop-down list is always preselected. If you want to pre-populate a variable with an empty value, you can use the following workaround in the variable settings:
1. Select the **Include All Option** checkbox.
-2. In the **Custom all value** field, enter a value like `.+`.
+2. In the **Custom all value** field, enter a value like `+`.
## Add a query variable
@@ -175,10 +175,9 @@ By hovering over a panel with the mouse you can use some shortcuts that will tar
- `pl`: Hide or show legend
- `pr`: Remove Panel
-## Pan and zoom panel time range
+## Zoom panel time range
-You can pan the panel time range left and right, and zoom it and in and out.
-This, in turn, changes the dashboard time range.
+You can zoom the panel time range in and out, which in turn, changes the dashboard time range.
This feature is supported for the following visualizations:
@@ -192,7 +191,7 @@ This feature is supported for the following visualizations:
Click and drag on the panel to zoom in on a particular time range.
-The following screen recordings show this interaction in the time series and candlestick visualizations:
+The following screen recordings show this interaction in the time series and x visualizations:
Time series
@@ -212,7 +211,7 @@ For example, if the original time range is from 9:00 to 9:59, the time range cha
- Next range: 8:30 - 10:29
- Next range: 7:30 - 11:29
-The following screen recordings demonstrate the preceding example in the time series and heatmap visualizations:
+The following screen recordings demonstrate the preceding example in the time series and x visualizations:
Time series
@@ -222,19 +221,6 @@ Heatmap
{{< video-embed src="/media/docs/grafana/panels-visualizations/recording-heatmap-panel-time-zoom-out-mouse.mp4" >}}
-### Pan
-Click and drag the x-axis area of the panel to pan the time range.
-The time range shifts by the distance you drag.
-For example, if the original time range is from 9:00 to 9:59 and you drag 30 minutes to the right, the time range changes to 9:30 to 10:29.
-The following screen recordings show this interaction in the time series visualization:
-Time series
-{{< video-embed src="/media/docs/grafana/panels-visualizations/recording-ts-time-pan-mouse.mp4" >}}
## Add a panel
To add a panel in a new dashboard click **+ Add visualization** in the middle of the dashboard:
@@ -92,9 +92,9 @@ The data is converted as follows:
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-candles-volume-v11.6.png" max-width="750px" alt="A candlestick visualization showing the price movements of specific asset." >}}
-## Pan and zoom panel time range
+## Zoom panel time range
-{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
+{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options
@@ -79,9 +79,9 @@ The data is converted as follows:
{{< figure src="/static/img/docs/heatmap-panel/heatmap.png" max-width="1025px" alt="A heatmap visualization showing the random walk distribution over time" >}}
-## Pan and zoom panel time range
+## Zoom panel time range
-{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
+{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options
@@ -93,9 +93,9 @@ You can also create a state timeline visualization using time series data. To do
![State timeline with time series](/media/docs/grafana/panels-visualizations/screenshot-state-timeline-time-series-v11.4.png)
-## Pan and zoom panel time range
+## Zoom panel time range
-{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
+{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options
@@ -85,9 +85,9 @@ The data is converted as follows:
{{< figure src="/static/img/docs/status-history-panel/status_history.png" max-width="1025px" alt="A status history panel with two time columns showing the status of two servers" >}}
-## Pan and zoom panel time range
+## Zoom panel time range
-{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
+{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options
@@ -167,9 +167,9 @@ The following example shows three series: Min, Max, and Value. The Min and Max s
{{< docs/shared lookup="visualizations/multiple-y-axes.md" source="grafana" version="<GRAFANA_VERSION>" leveloffset="+2" >}}
-## Pan and zoom panel time range
+## Zoom panel time range
-{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
+{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options
+2 -2
@@ -32,14 +32,14 @@ require (
github.com/armon/go-radix v1.0.0 // @grafana/grafana-app-platform-squad
github.com/aws/aws-sdk-go v1.55.7 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2 v1.40.0 // @grafana/aws-datasources
-github.com/aws/aws-sdk-go-v2/credentials v1.18.21 // indirect; @grafana/grafana-operator-experience-squad
+github.com/aws/aws-sdk-go-v2/credentials v1.18.21 // @grafana/grafana-operator-experience-squad
github.com/aws/aws-sdk-go-v2/service/cloudwatch v1.45.3 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs v1.51.0 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/ec2 v1.225.2 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/oam v1.18.3 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi v1.26.6 // @grafana/aws-datasources
github.com/aws/aws-sdk-go-v2/service/secretsmanager v1.40.1 // @grafana/grafana-operator-experience-squad
-github.com/aws/aws-sdk-go-v2/service/sts v1.39.1 // indirect; @grafana/grafana-operator-experience-squad
+github.com/aws/aws-sdk-go-v2/service/sts v1.39.1 // @grafana/grafana-operator-experience-squad
github.com/aws/smithy-go v1.23.2 // @grafana/aws-datasources
github.com/beevik/etree v1.4.1 // @grafana/grafana-backend-group
github.com/benbjohnson/clock v1.3.5 // @grafana/alerting-backend
+1 -12
@@ -32,8 +32,6 @@ import (
var (
logger = glog.New("data-proxy-log")
client = newHTTPClient()
-errPluginProxyRouteAccessDenied = errors.New("plugin proxy route access denied")
)
type DataSourceProxy struct {
@@ -310,21 +308,12 @@ func (proxy *DataSourceProxy) validateRequest() error {
if err != nil {
return err
}
-// issues/116273: When we have an empty input route (or input that becomes relative to "."), we do not want it
-// to be ".". This is because the `CleanRelativePath` function will never return "./" prefixes, and as such,
-// the common prefix we need is an empty string.
-if r1 == "." && proxy.proxyPath != "." {
-r1 = ""
-}
-if r2 == "." && route.Path != "." {
-r2 = ""
-}
if !strings.HasPrefix(r1, r2) {
continue
}
if !proxy.hasAccessToRoute(route) {
-return errPluginProxyRouteAccessDenied
+return errors.New("plugin proxy route access denied")
}
proxy.matchedRoute = route
-88
@@ -673,94 +673,6 @@ func TestIntegrationDataSourceProxy_routeRule(t *testing.T) {
runDatasourceAuthTest(t, secretsService, secretsStore, cfg, test)
}
})
t.Run("Regression of 116273: Fallback routes should apply fallback route roles", func(t *testing.T) {
for _, tc := range []struct {
InputPath string
ConfigurationPath string
ExpectError bool
}{
{
InputPath: "api/v2/leak-ur-secrets",
ConfigurationPath: "",
ExpectError: true,
},
{
InputPath: "",
ConfigurationPath: "",
ExpectError: true,
},
{
InputPath: ".",
ConfigurationPath: ".",
ExpectError: true,
},
{
InputPath: "",
ConfigurationPath: ".",
ExpectError: false,
},
{
InputPath: "api",
ConfigurationPath: ".",
ExpectError: false,
},
} {
orEmptyStr := func(s string) string {
if s == "" {
return "<empty>"
}
return s
}
t.Run(
fmt.Sprintf("with inputPath=%s, configurationPath=%s, expectError=%v",
orEmptyStr(tc.InputPath), orEmptyStr(tc.ConfigurationPath), tc.ExpectError),
func(t *testing.T) {
ds := &datasources.DataSource{
UID: "dsUID",
JsonData: simplejson.New(),
}
routes := []*plugins.Route{
{
Path: tc.ConfigurationPath,
ReqRole: org.RoleAdmin,
Method: "GET",
},
{
Path: tc.ConfigurationPath,
ReqRole: org.RoleAdmin,
Method: "POST",
},
{
Path: tc.ConfigurationPath,
ReqRole: org.RoleAdmin,
Method: "PUT",
},
{
Path: tc.ConfigurationPath,
ReqRole: org.RoleAdmin,
Method: "DELETE",
},
}
req, err := http.NewRequestWithContext(t.Context(), "GET", "http://localhost/"+tc.InputPath, nil)
require.NoError(t, err, "failed to create HTTP request")
ctx := &contextmodel.ReqContext{
Context: &web.Context{Req: req},
SignedInUser: &user.SignedInUser{OrgRole: org.RoleViewer},
}
proxy, err := setupDSProxyTest(t, ctx, ds, routes, tc.InputPath)
require.NoError(t, err, "failed to setup proxy test")
err = proxy.validateRequest()
if tc.ExpectError {
require.ErrorIs(t, err, errPluginProxyRouteAccessDenied, "request was not denied due to access denied?")
} else {
require.NoError(t, err, "request was unexpectedly denied access")
}
},
)
}
})
}
// test DataSourceProxy request handling.
+2 -3
@@ -16,7 +16,6 @@ import (
_ "github.com/blugelabs/bluge"
_ "github.com/blugelabs/bluge_segment_api"
_ "github.com/crewjam/saml"
-_ "github.com/docker/go-connections/nat"
_ "github.com/go-jose/go-jose/v4"
_ "github.com/gobwas/glob"
_ "github.com/googleapis/gax-go/v2"
@@ -32,7 +31,6 @@ import (
_ "github.com/spf13/cobra" // used by the standalone apiserver cli
_ "github.com/spyzhov/ajson"
_ "github.com/stretchr/testify/require"
-_ "github.com/testcontainers/testcontainers-go"
_ "gocloud.dev/secrets/awskms"
_ "gocloud.dev/secrets/azurekeyvault"
_ "gocloud.dev/secrets/gcpkms"
@@ -57,7 +55,8 @@ import (
_ "github.com/grafana/e2e"
_ "github.com/grafana/gofpdf"
_ "github.com/grafana/gomemcache/memcache"
-_ "github.com/grafana/tempo/pkg/traceql"
+_ "github.com/grafana/grafana/apps/alerting/alertenrichment/pkg/apis/alertenrichment/v1beta1"
+_ "github.com/grafana/grafana/apps/scope/pkg/apis/scope/v0alpha1"
+_ "github.com/grafana/tempo/pkg/traceql"
)
-602
@@ -1,602 +0,0 @@
package models
import (
"context"
"testing"
"time"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/otel"
"github.com/grafana/grafana/pkg/promlib/intervalv2"
)
var (
testNow = time.Now()
testIntervalCalculator = intervalv2.NewCalculator()
testTracer = otel.Tracer("test/interval")
)
func TestCalculatePrometheusInterval(t *testing.T) {
_, span := testTracer.Start(context.Background(), "test")
defer span.End()
tests := []struct {
name string
queryInterval string
dsScrapeInterval string
intervalMs int64
intervalFactor int64
query backend.DataQuery
want time.Duration
wantErr bool
}{
{
name: "min step 2m with 300000 intervalMs",
queryInterval: "2m",
dsScrapeInterval: "",
intervalMs: 300000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 5 * time.Minute,
MaxDataPoints: 761,
},
want: 2 * time.Minute,
wantErr: false,
},
{
name: "min step 2m with 900000 intervalMs",
queryInterval: "2m",
dsScrapeInterval: "",
intervalMs: 900000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 15 * time.Minute,
MaxDataPoints: 175,
},
want: 2 * time.Minute,
wantErr: false,
},
{
name: "with step parameter",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(12 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 30 * time.Second,
wantErr: false,
},
{
name: "without step parameter",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with high intervalFactor",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 10,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 20 * time.Minute,
wantErr: false,
},
{
name: "with low intervalFactor",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 2 * time.Minute,
wantErr: false,
},
{
name: "with specified scrape-interval in data source",
queryInterval: "",
dsScrapeInterval: "240s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 4 * time.Minute,
wantErr: false,
},
{
name: "with zero intervalFactor defaults to 1",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 0,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with $__interval variable",
queryInterval: "$__interval",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with ${__interval} variable",
queryInterval: "${__interval}",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with ${__interval} variable and explicit interval",
queryInterval: "1m",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 1 * time.Minute,
wantErr: false,
},
{
name: "with $__rate_interval variable",
queryInterval: "$__rate_interval",
dsScrapeInterval: "30s",
intervalMs: 100000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 100 * time.Second,
MaxDataPoints: 12384,
},
want: 130 * time.Second,
wantErr: false,
},
{
name: "with ${__rate_interval} variable",
queryInterval: "${__rate_interval}",
dsScrapeInterval: "30s",
intervalMs: 100000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 100 * time.Second,
MaxDataPoints: 12384,
},
want: 130 * time.Second,
wantErr: false,
},
{
name: "intervalMs 100s, minStep override 150s and scrape interval 30s",
queryInterval: "150s",
dsScrapeInterval: "30s",
intervalMs: 100000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 100 * time.Second,
MaxDataPoints: 12384,
},
want: 150 * time.Second,
wantErr: false,
},
{
name: "intervalMs 120s, minStep override 150s and ds scrape interval 30s",
queryInterval: "150s",
dsScrapeInterval: "30s",
intervalMs: 120000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 120 * time.Second,
MaxDataPoints: 12384,
},
want: 150 * time.Second,
wantErr: false,
},
{
name: "intervalMs 120s, minStep auto (interval not overridden) and ds scrape interval 30s",
queryInterval: "120s",
dsScrapeInterval: "30s",
intervalMs: 120000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 120 * time.Second,
MaxDataPoints: 12384,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "interval and minStep are automatically calculated and ds scrape interval 30s and time range 1 hour",
queryInterval: "30s",
dsScrapeInterval: "30s",
intervalMs: 30000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 30 * time.Second,
MaxDataPoints: 12384,
},
want: 30 * time.Second,
wantErr: false,
},
{
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 1 hour",
queryInterval: "$__rate_interval",
dsScrapeInterval: "30s",
intervalMs: 30000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 30 * time.Second,
MaxDataPoints: 12384,
},
want: 2 * time.Minute,
wantErr: false,
},
{
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 2 days",
queryInterval: "$__rate_interval",
dsScrapeInterval: "30s",
intervalMs: 120000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 120 * time.Second,
MaxDataPoints: 12384,
},
want: 150 * time.Second,
wantErr: false,
},
{
name: "minStep is $__interval and ds scrape interval 15s and time range 2 days",
queryInterval: "$__interval",
dsScrapeInterval: "15s",
intervalMs: 120000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 120 * time.Second,
MaxDataPoints: 12384,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with empty dsScrapeInterval defaults to 15s",
queryInterval: "",
dsScrapeInterval: "",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with very short time range",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Minute),
},
Interval: 1 * time.Minute,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with very long time range",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(30 * 24 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 30 * time.Minute,
wantErr: false,
},
{
name: "with manual interval override",
queryInterval: "5m",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 5 * time.Minute,
wantErr: false,
},
{
name: "minStep is auto and ds scrape interval 30s and time range 1 hour",
queryInterval: "",
dsScrapeInterval: "30s",
intervalMs: 30000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 30 * time.Second,
MaxDataPoints: 1613,
},
want: 30 * time.Second,
wantErr: false,
},
{
name: "minStep is auto and ds scrape interval 15s and time range 5 minutes",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 15000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(5 * time.Minute),
},
Interval: 15 * time.Second,
MaxDataPoints: 1055,
},
want: 15 * time.Second,
wantErr: false,
},
// Additional test cases for better coverage
{
name: "with $__interval_ms variable",
queryInterval: "$__interval_ms",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with ${__interval_ms} variable",
queryInterval: "${__interval_ms}",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with MaxDataPoints zero",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
MaxDataPoints: 0,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with negative intervalFactor",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: -5,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: -10 * time.Minute,
wantErr: false,
},
{
name: "with invalid interval string that fails parsing",
queryInterval: "invalid-interval",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: time.Duration(0),
wantErr: true,
},
{
name: "with very small MaxDataPoints",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
MaxDataPoints: 10,
},
want: 5 * time.Minute,
wantErr: false,
},
{
name: "when safeInterval is larger than calculatedInterval",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
MaxDataPoints: 10000,
},
want: 15 * time.Second,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := calculatePrometheusInterval(
tt.queryInterval,
tt.dsScrapeInterval,
tt.intervalMs,
tt.intervalFactor,
tt.query,
testIntervalCalculator,
)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tt.want, got)
})
}
}
+41 -125
@@ -92,6 +92,7 @@ const (
)
// Internal interval and range variables with {} syntax
+// Repetitive code, we should have functionality to unify these
const (
varIntervalAlt = "${__interval}"
varIntervalMsAlt = "${__interval_ms}"
@@ -111,16 +112,8 @@ const (
UnknownQueryType TimeSeriesQueryType = "unknown"
)
-// safeResolution is the maximum number of data points to prevent excessive resolution.
-// This ensures queries don't exceed reasonable data point limits, improving performance
-// and preventing potential memory issues. The value of 11000 provides a good balance
-// between resolution and performance for most use cases.
var safeResolution = 11000
-// rateIntervalMultiplier is the minimum multiplier for rate interval calculation.
-// Rate intervals should be at least 4x the scrape interval to ensure accurate rate calculations.
const rateIntervalMultiplier = 4
// QueryModel includes both the common and specific values
// NOTE: this struct may have issues when decoding JSON that requires the special handling
// registered in https://github.com/grafana/grafana-plugin-sdk-go/blob/v0.228.0/experimental/apis/data/v0alpha1/query.go#L298
@@ -161,7 +154,7 @@ type Query struct {
// may be either a string or DataSourceRef
type internalQueryModel struct {
PrometheusQueryProperties `json:",inline"`
-// sdkapi.CommonQueryProperties `json:",inline"`
+//sdkapi.CommonQueryProperties `json:",inline"`
IntervalMS float64 `json:"intervalMs,omitempty"`
// The following properties may be part of the request payload, however they are not saved in panel JSON
@@ -279,121 +272,44 @@ func (query *Query) TimeRange() TimeRange {
}
}
-// isRateIntervalVariable checks if the interval string is a rate interval variable
-// ($__rate_interval, ${__rate_interval}, $__rate_interval_ms, or ${__rate_interval_ms})
-func isRateIntervalVariable(interval string) bool {
-return interval == varRateInterval ||
-interval == varRateIntervalAlt ||
-interval == varRateIntervalMs ||
-interval == varRateIntervalMsAlt
-}
-// replaceVariable replaces both $__variable and ${__variable} formats in the expression
-func replaceVariable(expr, dollarFormat, altFormat, replacement string) string {
-expr = strings.ReplaceAll(expr, dollarFormat, replacement)
-expr = strings.ReplaceAll(expr, altFormat, replacement)
-return expr
-}
-// isManualIntervalOverride checks if the interval is a manually specified non-variable value
-// that should override the calculated interval
-func isManualIntervalOverride(interval string) bool {
-return interval != "" &&
-interval != varInterval &&
-interval != varIntervalAlt &&
-interval != varIntervalMs &&
-interval != varIntervalMsAlt
-}
-// maxDuration returns the maximum of two durations
-func maxDuration(a, b time.Duration) time.Duration {
-if a > b {
-return a
-}
-return b
-}
-// normalizeIntervalFactor ensures intervalFactor is at least 1
-func normalizeIntervalFactor(factor int64) int64 {
-if factor == 0 {
-return 1
-}
-return factor
-}
-// calculatePrometheusInterval calculates the optimal step interval for a Prometheus query.
-//
-// The function determines the query step interval by considering multiple factors:
-// - The minimum step specified in the query (queryInterval)
-// - The data source scrape interval (dsScrapeInterval)
-// - The requested interval in milliseconds (intervalMs)
-// - The time range and maximum data points from the query
-// - The interval factor multiplier
-//
-// Special handling:
-// - Variable intervals ($__interval, $__rate_interval, etc.) are replaced with calculated values
-// - Rate interval variables ($__rate_interval, ${__rate_interval}) use calculateRateInterval for proper rate() function support
-// - Manual interval overrides (non-variable strings) take precedence over calculated values
-// - The final interval ensures safe resolution limits are not exceeded
-//
-// Parameters:
-// - queryInterval: The minimum step interval string (may contain variables like $__interval or $__rate_interval)
-// - dsScrapeInterval: The data source scrape interval (e.g., "15s", "30s")
-// - intervalMs: The requested interval in milliseconds
-// - intervalFactor: Multiplier for the calculated interval (defaults to 1 if 0)
-// - query: The backend data query containing time range and max data points
-// - intervalCalculator: Calculator for determining optimal intervals
-//
-// Returns:
-// - The calculated step interval as a time.Duration
-// - An error if the interval cannot be calculated (e.g., invalid interval string)
func calculatePrometheusInterval(
queryInterval, dsScrapeInterval string,
intervalMs, intervalFactor int64,
query backend.DataQuery,
intervalCalculator intervalv2.Calculator,
) (time.Duration, error) {
-// Preserve the original interval for later comparison, as it may be modified below
+// we need to compare the original query model after it is overwritten below to variables so that we can
+// calculate the rateInterval if it is equal to $__rate_interval or ${__rate_interval}
originalQueryInterval := queryInterval
-// If we are using a variable for minStep, replace it with empty string
-// so that the interval calculation proceeds with the default logic
+// If we are using variable for interval/step, we will replace it with calculated interval
if isVariableInterval(queryInterval) {
queryInterval = ""
}
// Get the minimum interval from various sources (dsScrapeInterval, queryInterval, intervalMs)
minInterval, err := gtime.GetIntervalFrom(dsScrapeInterval, queryInterval, intervalMs, 15*time.Second)
if err != nil {
return time.Duration(0), err
}
// Calculate the optimal interval based on time range and max data points
calculatedInterval := intervalCalculator.Calculate(query.TimeRange, minInterval, query.MaxDataPoints)
// Calculate the safe interval to prevent too many data points
safeInterval := intervalCalculator.CalculateSafeInterval(query.TimeRange, int64(safeResolution))
// Use the larger of calculated or safe interval to ensure we don't exceed resolution limits
	adjustedInterval := maxDuration(calculatedInterval.Value, safeInterval.Value)
	// Handle rate interval variables: these require special calculation
	if isRateIntervalVariable(originalQueryInterval) {
		// Rate interval is final and is not affected by resolution
		return calculateRateInterval(adjustedInterval, dsScrapeInterval), nil
	}
	// Handle manual interval override: if the user specified a non-variable interval,
	// it takes precedence over calculated values
	if isManualIntervalOverride(originalQueryInterval) {
		if parsedInterval, err := gtime.ParseIntervalStringToTimeDuration(originalQueryInterval); err == nil {
			return parsedInterval, nil
		}
		// If parsing fails, fall through to the calculated interval with the factor applied
	}
	// Apply the interval factor to the adjusted interval
	normalizedFactor := normalizeIntervalFactor(intervalFactor)
	return time.Duration(int64(adjustedInterval) * normalizedFactor), nil
}
// calculateRateInterval calculates the $__rate_interval value
@@ -415,8 +331,7 @@ func calculateRateInterval(
return time.Duration(0)
}
minRateInterval := rateIntervalMultiplier * scrapeIntervalDuration
	rateInterval := maxDuration(queryInterval+scrapeIntervalDuration, minRateInterval)
return rateInterval
}
@@ -451,33 +366,34 @@ func InterpolateVariables(
rateInterval = calculateRateInterval(queryInterval, requestedMinStep)
}
	// Replace interval variables (both $__var and ${__var} formats)
	expr = replaceVariable(expr, varIntervalMs, varIntervalMsAlt, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
	expr = replaceVariable(expr, varInterval, varIntervalAlt, gtime.FormatInterval(calculatedStep))
	// Replace range variables (both $__var and ${__var} formats)
	expr = replaceVariable(expr, varRangeMs, varRangeMsAlt, strconv.FormatInt(rangeMs, 10))
	expr = replaceVariable(expr, varRangeS, varRangeSAlt, strconv.FormatInt(rangeSRounded, 10))
	expr = replaceVariable(expr, varRange, varRangeAlt, strconv.FormatInt(rangeSRounded, 10)+"s")
	// Replace rate interval variables (both $__var and ${__var} formats)
	expr = replaceVariable(expr, varRateIntervalMs, varRateIntervalMsAlt, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
	expr = replaceVariable(expr, varRateInterval, varRateIntervalAlt, rateInterval.String())
return expr
}
// isVariableInterval checks if the interval string is a variable interval
// (any of $__interval, ${__interval}, $__interval_ms, ${__interval_ms}, $__rate_interval, ${__rate_interval}, etc.)
func isVariableInterval(interval string) bool {
	return interval == varInterval ||
		interval == varIntervalAlt ||
		interval == varIntervalMs ||
		interval == varIntervalMsAlt ||
		interval == varRateInterval ||
		interval == varRateIntervalAlt ||
		interval == varRateIntervalMs ||
		interval == varRateIntervalMsAlt
}
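The chain of equality checks above could also be collapsed into a set lookup. A sketch, with the variable literals spelled out as assumptions based on the constant names in this diff:

```go
package main

import "fmt"

// intervalVars collects the eight interval-variable forms into a set,
// replacing the repeated equality comparisons with a single map lookup.
// The literal strings are assumptions inferred from the constant names.
var intervalVars = map[string]struct{}{
	"$__interval":           {},
	"${__interval}":         {},
	"$__interval_ms":        {},
	"${__interval_ms}":      {},
	"$__rate_interval":      {},
	"${__rate_interval}":    {},
	"$__rate_interval_ms":   {},
	"${__rate_interval_ms}": {},
}

func isVariableInterval(interval string) bool {
	_, ok := intervalVars[interval]
	return ok
}

func main() {
	fmt.Println(isVariableInterval("$__rate_interval")) // true
	fmt.Println(isVariableInterval("150s"))             // false
}
```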
// AlignTimeRange aligns query range to step and handles the time offset.
@@ -494,7 +410,7 @@ func AlignTimeRange(t time.Time, step time.Duration, offset int64) time.Time {
//go:embed query.types.json
var f embed.FS
// QueryTypeDefinitionListJSON returns the query type definitions
func QueryTypeDefinitionListJSON() (json.RawMessage, error) {
return f.ReadFile("query.types.json")
}
+322 -2
@@ -2,6 +2,7 @@ package models_test
import (
"context"
"fmt"
"reflect"
"testing"
"time"
@@ -13,7 +14,6 @@ import (
"go.opentelemetry.io/otel"
"github.com/grafana/grafana-plugin-sdk-go/backend/log"
"github.com/grafana/grafana/pkg/promlib/intervalv2"
"github.com/grafana/grafana/pkg/promlib/models"
)
@@ -50,6 +50,95 @@ func TestParse(t *testing.T) {
require.Equal(t, false, res.ExemplarQuery)
})
t.Run("parsing query model with step", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(12 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Second*30, res.Step)
})
t.Run("parsing query model without step parameter", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Second*15, res.Step)
})
t.Run("parsing query model with high intervalFactor", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"intervalFactor": 10,
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Minute*20, res.Step)
})
t.Run("parsing query model with low intervalFactor", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Minute*2, res.Step)
})
t.Run("parsing query model specified scrape-interval in the data source", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "240s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Minute*4, res.Step)
})
t.Run("parsing query model with $__interval variable", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
@@ -87,7 +176,7 @@ func TestParse(t *testing.T) {
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
		require.Equal(t, "rate(ALERTS{job=\"test\" [1m]})", res.Expr)
})
t.Run("parsing query model with $__interval_ms variable", func(t *testing.T) {
@@ -444,6 +533,232 @@ func TestParse(t *testing.T) {
})
}
func TestRateInterval(t *testing.T) {
_, span := tracer.Start(context.Background(), "operation")
defer span.End()
type args struct {
expr string
interval string
intervalMs int64
dsScrapeInterval string
timeRange *backend.TimeRange
}
tests := []struct {
name string
args args
want *models.Query
}{
{
name: "intervalMs 100s, minStep override 150s and scrape interval 30s",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "150s",
intervalMs: 100000,
dsScrapeInterval: "30s",
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[10m0s])",
Step: time.Second * 150,
},
},
{
name: "intervalMs 120s, minStep override 150s and ds scrape interval 30s",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "150s",
intervalMs: 120000,
dsScrapeInterval: "30s",
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[10m0s])",
Step: time.Second * 150,
},
},
{
name: "intervalMs 120s, minStep auto (interval not overridden) and ds scrape interval 30s",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "120s",
intervalMs: 120000,
dsScrapeInterval: "30s",
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[8m0s])",
Step: time.Second * 120,
},
},
{
name: "interval and minStep are automatically calculated and ds scrape interval 30s and time range 1 hour",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "30s",
intervalMs: 30000,
dsScrapeInterval: "30s",
timeRange: &backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
},
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[2m0s])",
Step: time.Second * 30,
},
},
{
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 1 hour",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "$__rate_interval",
intervalMs: 30000,
dsScrapeInterval: "30s",
timeRange: &backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
},
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[2m0s])",
Step: time.Minute * 2,
},
},
{
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 2 days",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "$__rate_interval",
intervalMs: 120000,
dsScrapeInterval: "30s",
timeRange: &backend.TimeRange{
From: now,
To: now.Add(2 * 24 * time.Hour),
},
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[2m30s])",
Step: time.Second * 150,
},
},
{
name: "minStep is $__rate_interval and ds scrape interval 15s and time range 2 days",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "$__interval",
intervalMs: 120000,
dsScrapeInterval: "15s",
timeRange: &backend.TimeRange{
From: now,
To: now.Add(2 * 24 * time.Hour),
},
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[8m0s])",
Step: time.Second * 120,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
q := mockQuery(tt.args.expr, tt.args.interval, tt.args.intervalMs, tt.args.timeRange)
q.MaxDataPoints = 12384
res, err := models.Parse(context.Background(), log.New(), span, q, tt.args.dsScrapeInterval, intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, tt.want.Expr, res.Expr)
require.Equal(t, tt.want.Step, res.Step)
})
}
t.Run("minStep is auto and ds scrape interval 30s and time range 1 hour", func(t *testing.T) {
query := backend.DataQuery{
RefID: "G",
QueryType: "",
MaxDataPoints: 1613,
Interval: 30 * time.Second,
TimeRange: backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
},
JSON: []byte(`{
"datasource":{"type":"prometheus","uid":"zxS5e5W4k"},
"datasourceId":38,
"editorMode":"code",
"exemplar":false,
"expr":"sum(rate(process_cpu_seconds_total[$__rate_interval]))",
"instant":false,
"interval":"",
"intervalMs":30000,
"key":"Q-f96b6729-c47a-4ea8-8f71-a79774cf9bd5-0",
"legendFormat":"__auto",
"maxDataPoints":1613,
"range":true,
"refId":"G",
"requestId":"1G",
"utcOffsetSec":3600
}`),
}
res, err := models.Parse(context.Background(), log.New(), span, query, "30s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, "sum(rate(process_cpu_seconds_total[2m0s]))", res.Expr)
require.Equal(t, 30*time.Second, res.Step)
})
t.Run("minStep is auto and ds scrape interval 15s and time range 5 minutes", func(t *testing.T) {
query := backend.DataQuery{
RefID: "A",
QueryType: "",
MaxDataPoints: 1055,
Interval: 15 * time.Second,
TimeRange: backend.TimeRange{
From: now,
To: now.Add(5 * time.Minute),
},
JSON: []byte(`{
"datasource": {
"type": "prometheus",
"uid": "2z9d6ElGk"
},
"editorMode": "code",
"expr": "sum(rate(cache_requests_total[$__rate_interval]))",
"legendFormat": "__auto",
"range": true,
"refId": "A",
"exemplar": false,
"requestId": "1A",
"utcOffsetSec": 0,
"interval": "",
"datasourceId": 508,
"intervalMs": 15000,
"maxDataPoints": 1055
}`),
}
res, err := models.Parse(context.Background(), log.New(), span, query, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, "sum(rate(cache_requests_total[1m0s]))", res.Expr)
require.Equal(t, 15*time.Second, res.Step)
})
}
func mockQuery(expr string, interval string, intervalMs int64, timeRange *backend.TimeRange) backend.DataQuery {
if timeRange == nil {
timeRange = &backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
}
}
return backend.DataQuery{
Interval: time.Duration(intervalMs) * time.Millisecond,
JSON: []byte(fmt.Sprintf(`{
"expr": "%s",
"format": "time_series",
"interval": "%s",
"intervalMs": %v,
"intervalFactor": 1,
"refId": "A"
}`, expr, interval, intervalMs)),
TimeRange: *timeRange,
RefID: "A",
}
}
func queryContext(json string, timeRange backend.TimeRange, queryInterval time.Duration) backend.DataQuery {
return backend.DataQuery{
Interval: queryInterval,
@@ -453,6 +768,11 @@ func queryContext(json string, timeRange backend.TimeRange, queryInterval time.D
}
}
// AlignTimeRange aligns query range to step and handles the time offset.
// It rounds start and end down to a multiple of step.
// Prometheus caching is dependent on the range being aligned with the step.
// Rounding to the step can significantly change the start and end of the range for larger steps, e.g. a week.
// When rounding the range to a 1w step, the range will always start on a Thursday (the Unix epoch began on a Thursday).
func TestAlignTimeRange(t *testing.T) {
type args struct {
t time.Time
-96
@@ -381,102 +381,6 @@ func TestPrometheus_parseTimeSeriesResponse(t *testing.T) {
})
}
func TestPrometheus_executedQueryString(t *testing.T) {
t.Run("executedQueryString should match expected format with intervalMs 300_000", func(t *testing.T) {
values := []p.SamplePair{
{Value: 1, Timestamp: 1000},
{Value: 2, Timestamp: 2000},
}
result := queryResult{
Type: p.ValMatrix,
Result: p.Matrix{
&p.SampleStream{
Metric: p.Metric{"app": "Application"},
Values: values,
},
},
}
queryJSON := `{
"expr": "test_metric",
"format": "time_series",
"intervalFactor": 1,
"interval": "2m",
"intervalMs": 300000,
"maxDataPoints": 761,
"refId": "A",
"range": true
}`
now := time.Now()
query := backend.DataQuery{
RefID: "A",
MaxDataPoints: 761,
Interval: 300000 * time.Millisecond,
TimeRange: backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
},
JSON: []byte(queryJSON),
}
tctx, err := setup()
require.NoError(t, err)
res, err := execute(tctx, query, result, nil)
require.NoError(t, err)
require.Len(t, res, 1)
require.NotNil(t, res[0].Meta)
require.Equal(t, "Expr: test_metric\nStep: 2m0s", res[0].Meta.ExecutedQueryString)
})
t.Run("executedQueryString should match expected format with intervalMs 900_000", func(t *testing.T) {
values := []p.SamplePair{
{Value: 1, Timestamp: 1000},
{Value: 2, Timestamp: 2000},
}
result := queryResult{
Type: p.ValMatrix,
Result: p.Matrix{
&p.SampleStream{
Metric: p.Metric{"app": "Application"},
Values: values,
},
},
}
queryJSON := `{
"expr": "test_metric",
"format": "time_series",
"intervalFactor": 1,
"interval": "2m",
"intervalMs": 900000,
"maxDataPoints": 175,
"refId": "A",
"range": true
}`
now := time.Now()
query := backend.DataQuery{
RefID: "A",
MaxDataPoints: 175,
Interval: 900000 * time.Millisecond,
TimeRange: backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
},
JSON: []byte(queryJSON),
}
tctx, err := setup()
require.NoError(t, err)
res, err := execute(tctx, query, result, nil)
require.NoError(t, err)
require.Len(t, res, 1)
require.NotNil(t, res[0].Meta)
require.Equal(t, "Expr: test_metric\nStep: 2m0s", res[0].Meta.ExecutedQueryString)
})
}
type queryResult struct {
Type p.ValueType `json:"resultType"`
Result any `json:"result"`
@@ -14,6 +14,9 @@ type EncryptionManager interface {
// implementation present at manager.EncryptionService.
Encrypt(ctx context.Context, namespace xkube.Namespace, payload []byte) (EncryptedPayload, error)
Decrypt(ctx context.Context, namespace xkube.Namespace, payload EncryptedPayload) ([]byte, error)
// Since consolidation occurs at a level above the EncryptionManager, we need to allow that process to manually flush the cache
FlushCache(namespace xkube.Namespace)
}
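A minimal sketch of how a consolidation process might use `FlushCache`: after re-encrypting a namespace's secrets under a new key, any cached DEKs for that namespace are stale and must be dropped. Every name here other than `FlushCache(namespace)` is hypothetical:

```go
package main

import "fmt"

// encryptionManager is a narrowed, string-keyed stand-in for the
// interface above; only FlushCache comes from the source.
type encryptionManager interface {
	FlushCache(namespace string)
}

type fakeManager struct{ flushed []string }

func (m *fakeManager) FlushCache(ns string) { m.flushed = append(m.flushed, ns) }

// consolidate sketches the flow: re-encrypt each namespace, then flush
// its DEK cache so stale keys are never served afterwards.
func consolidate(m encryptionManager, namespaces []string) {
	for _, ns := range namespaces {
		// ... re-encrypt all secrets in ns under the current data key ...
		m.FlushCache(ns) // drop cached DEKs that may now be stale
	}
}

func main() {
	m := &fakeManager{}
	consolidate(m, []string{"ns-a", "ns-b"})
	fmt.Println(m.flushed)
}
```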
type EncryptedPayload struct {
@@ -7,11 +7,13 @@ import (
"fmt"
"strconv"
"sync"
"time"
"github.com/prometheus/client_golang/prometheus"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/trace"
"golang.org/x/sync/errgroup"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/infra/usagestats"
@@ -19,6 +21,7 @@ import (
"github.com/grafana/grafana/pkg/registry/apis/secret/encryption"
"github.com/grafana/grafana/pkg/registry/apis/secret/encryption/cipher"
"github.com/grafana/grafana/pkg/registry/apis/secret/xkube"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/util"
)
@@ -26,6 +29,9 @@ type EncryptionManager struct {
tracer trace.Tracer
store contracts.DataKeyStorage
usageStats usagestats.Service
cfg *setting.Cfg
dataKeyCache encryption.DataKeyCache
mtx sync.Mutex
@@ -44,6 +50,8 @@ func ProvideEncryptionManager(
usageStats usagestats.Service,
enc cipher.Cipher,
providerConfig encryption.ProviderConfig,
dataKeyCache encryption.DataKeyCache,
cfg *setting.Cfg,
) (contracts.EncryptionManager, error) {
currentProviderID := providerConfig.CurrentProvider
if _, ok := providerConfig.AvailableProviders[currentProviderID]; !ok {
@@ -57,6 +65,8 @@ func ProvideEncryptionManager(
cipher: enc,
log: log.New("encryption"),
providerConfig: providerConfig,
dataKeyCache: dataKeyCache,
cfg: cfg,
}
s.registerUsageMetrics()
@@ -173,6 +183,11 @@ func (s *EncryptionManager) currentDataKey(ctx context.Context, namespace xkube.
// dataKeyByLabel looks up for data key in cache by label.
// Otherwise, it fetches it from database, decrypts it and caches it decrypted.
func (s *EncryptionManager) dataKeyByLabel(ctx context.Context, namespace, label string) (string, []byte, error) {
// 0. Get data key from in-memory cache.
if entry, exists := s.dataKeyCache.GetByLabel(namespace, label); exists && entry.Active {
return entry.Id, entry.DataKey, nil
}
// 1. Get data key from database.
dataKey, err := s.store.GetCurrentDataKey(ctx, namespace, label)
if err != nil {
@@ -194,6 +209,9 @@ func (s *EncryptionManager) dataKeyByLabel(ctx context.Context, namespace, label
return "", nil, err
}
// 3. Store the decrypted data key into the in-memory cache.
s.cacheDataKey(namespace, dataKey, decrypted)
return dataKey.UID, decrypted, nil
}
@@ -240,6 +258,9 @@ func (s *EncryptionManager) newDataKey(ctx context.Context, namespace string, la
return "", nil, err
}
// 4. Store the decrypted data key into the in-memory cache.
s.cacheDataKey(namespace, &dbDataKey, dataKey)
return id, dataKey, nil
}
@@ -303,6 +324,11 @@ func (s *EncryptionManager) dataKeyById(ctx context.Context, namespace, id strin
))
defer span.End()
// 0. Get data key from in-memory cache.
if entry, exists := s.dataKeyCache.GetById(namespace, id); exists && entry.Active {
return entry.DataKey, nil
}
// 1. Get encrypted data key from database.
dataKey, err := s.store.GetDataKey(ctx, namespace, id)
if err != nil {
@@ -321,9 +347,82 @@ func (s *EncryptionManager) dataKeyById(ctx context.Context, namespace, id strin
return nil, err
}
// 3. Store the decrypted data key into the in-memory cache.
s.cacheDataKey(namespace, dataKey, decrypted)
return decrypted, nil
}
func (s *EncryptionManager) GetProviders() encryption.ProviderConfig {
return s.providerConfig
}
func (s *EncryptionManager) FlushCache(namespace xkube.Namespace) {
s.dataKeyCache.Flush(namespace.String())
}
func (s *EncryptionManager) Run(ctx context.Context) error {
gc := time.NewTicker(s.cfg.SecretsManagement.DataKeysCacheCleanupInterval)
grp, gCtx := errgroup.WithContext(ctx)
for {
select {
case <-gc.C:
s.log.Debug("Removing expired data keys from cache...")
s.dataKeyCache.RemoveExpired()
s.log.Debug("Removing expired data keys from cache finished successfully")
case <-gCtx.Done():
s.log.Debug("Grafana is shutting down; stopping...")
gc.Stop()
if err := grp.Wait(); err != nil && !errors.Is(err, context.Canceled) {
return err
}
return nil
}
}
}
// NB: Much of this was copied or derived from the original implementation in the legacy SecretsService.
//
// Caching a data key is tricky: at the SecretsService level we cannot guarantee
// that a newly created data key has actually been persisted. Depending on the
// use case that relies on SecretsService encryption and on the database engine
// in use, the data key creation may have happened within a DB TX that fails
// afterwards.
//
// Therefore, if we cache a data key that hasn't been (and won't be) successfully
// persisted, and that key is later used for an encryption operation outside the
// DB TX that created it, we may end up with data encrypted by a non-persisted
// data key, which could result in (unrecoverable) data corruption.
//
// So, we cache the data key by id and/or by label, depending on the data key's lifetime,
// assuming that a data key older than a "caution period" should have been persisted.
//
// Look at the comments inline for further details.
// You can also take a look at the issue below for more context:
// https://github.com/grafana/grafana-enterprise/issues/4252
func (s *EncryptionManager) cacheDataKey(namespace string, dataKey *contracts.SecretDataKey, decrypted []byte) {
// First, we cache the data key by id, because cache "by id" is
// only used by decrypt operations, so no risk of corrupting data.
entry := &encryption.DataKeyCacheEntry{
Namespace: namespace,
Id: dataKey.UID,
Label: dataKey.Label,
DataKey: decrypted,
Active: dataKey.Active,
}
s.dataKeyCache.AddById(namespace, entry)
// Then, we cache the data key by label, ONLY if data key's lifetime
// is longer than a certain "caution period", because cache "by label"
// is used (only) by encrypt operations, and we want to ensure that
// no data key is cached for encryption ops before being persisted.
nowMinusCautionPeriod := time.Now().Add(-s.cfg.SecretsManagement.DataKeysCacheCautionPeriod)
if dataKey.Created.Before(nowMinusCautionPeriod) {
s.dataKeyCache.AddByLabel(namespace, entry)
}
}
@@ -4,6 +4,7 @@ import (
"context"
"errors"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
@@ -201,6 +202,8 @@ func TestEncryptionService_UseCurrentProvider(t *testing.T) {
usageStats,
enc,
ossProviders,
&NoopDataKeyCache{},
cfg,
)
require.NoError(t, err)
@@ -226,6 +229,8 @@ func TestEncryptionService_UseCurrentProvider(t *testing.T) {
usageStats,
enc,
ossProviders,
&NoopDataKeyCache{},
cfg,
)
require.NoError(t, err)
@@ -275,6 +280,8 @@ func TestEncryptionService_SecretKeyVersionUpgrade(t *testing.T) {
usageStats,
enc,
ossProviders,
&NoopDataKeyCache{},
cfgV1,
)
require.NoError(t, err)
@@ -313,6 +320,8 @@ func TestEncryptionService_SecretKeyVersionUpgrade(t *testing.T) {
usageStats,
enc,
ossProvidersV2,
&NoopDataKeyCache{},
cfgV2,
)
require.NoError(t, err)
@@ -368,6 +377,8 @@ func TestEncryptionService_SecretKeyVersionUpgrade(t *testing.T) {
usageStats,
enc,
ossProviders,
&NoopDataKeyCache{},
cfgV1,
)
require.NoError(t, err)
@@ -392,6 +403,8 @@ func TestEncryptionService_SecretKeyVersionUpgrade(t *testing.T) {
usageStats,
enc,
ossProvidersV2,
&NoopDataKeyCache{},
cfgV2,
)
require.NoError(t, err)
@@ -573,6 +586,8 @@ func TestIntegration_SecretsService(t *testing.T) {
usageStats,
enc,
ossProviders,
&NoopDataKeyCache{},
cfg,
)
require.NoError(t, err)
@@ -610,6 +625,8 @@ func TestEncryptionService_ThirdPartyProviders(t *testing.T) {
enc, err := service.ProvideAESGCMCipherService(tracer, usageStats)
require.NoError(t, err)
cfg := &setting.Cfg{}
svc, err := ProvideEncryptionManager(
tracer,
nil,
@@ -621,6 +638,8 @@ func TestEncryptionService_ThirdPartyProviders(t *testing.T) {
encryption.ProviderID("fakeProvider.v1"): &fakeProvider{},
},
},
&NoopDataKeyCache{},
cfg,
)
require.NoError(t, err)
@@ -628,3 +647,88 @@ func TestEncryptionService_ThirdPartyProviders(t *testing.T) {
require.Len(t, encMgr.providerConfig.AvailableProviders, 1)
require.Contains(t, encMgr.providerConfig.AvailableProviders, encryption.ProviderID("fakeProvider.v1"))
}
func TestEncryptionService_FlushCache(t *testing.T) {
ctx := context.Background()
namespace := xkube.Namespace("test-namespace")
plaintext := []byte("secret data to encrypt")
// Set up the encryption manager with a real OSS DEK cache
testDB := sqlstore.NewTestStore(t, sqlstore.WithMigrator(migrator.New()))
tracer := noop.NewTracerProvider().Tracer("test")
database := database.ProvideDatabase(testDB, tracer)
cfg := &setting.Cfg{
SecretsManagement: setting.SecretsManagerSettings{
CurrentEncryptionProvider: "secret_key.v1",
ConfiguredKMSProviders: map[string]map[string]string{"secret_key.v1": {"secret_key": "SW2YcwTIb9zpOOhoPsMm"}},
DataKeysCacheTTL: time.Hour, // Long TTL to ensure keys don't expire during test
DataKeysCacheCautionPeriod: 0 * time.Second, // Override the caution period for testing
},
}
store, err := encryptionstorage.ProvideDataKeyStorage(database, tracer, nil)
require.NoError(t, err)
usageStats := &usagestats.UsageStatsMock{T: t}
enc, err := service.ProvideAESGCMCipherService(tracer, usageStats)
require.NoError(t, err)
ossProviders, err := osskmsproviders.ProvideOSSKMSProviders(cfg, enc)
require.NoError(t, err)
// Create a real OSS DEK cache
dekCache := ProvideOSSDataKeyCache(cfg)
encMgr, err := ProvideEncryptionManager(
tracer,
store,
usageStats,
enc,
ossProviders,
dekCache,
cfg,
)
require.NoError(t, err)
svc := encMgr.(*EncryptionManager)
// Encrypt some data - this will create a DEK and cache it
encrypted, err := svc.Encrypt(ctx, namespace, plaintext)
require.NoError(t, err)
// Verify we can decrypt - this should use the cached key
decrypted, err := svc.Decrypt(ctx, namespace, encrypted)
require.NoError(t, err)
assert.Equal(t, plaintext, decrypted)
// Get the data key ID from the encrypted payload
dataKeyID := encrypted.DataKeyID
// Verify the key is in the cache by checking both by ID and by label
label := encryption.KeyLabel(svc.providerConfig.CurrentProvider)
_, existsById := dekCache.GetById(namespace.String(), dataKeyID)
assert.True(t, existsById, "DEK should be cached by ID before flush")
_, existsByLabel := dekCache.GetByLabel(namespace.String(), label)
assert.True(t, existsByLabel, "DEK should be cached by label before flush")
// Flush the cache for this namespace
svc.FlushCache(namespace)
// Verify the cache is empty for this namespace
_, existsById = dekCache.GetById(namespace.String(), dataKeyID)
assert.False(t, existsById, "DEK should not be in cache by ID after flush")
_, existsByLabel = dekCache.GetByLabel(namespace.String(), label)
assert.False(t, existsByLabel, "DEK should not be in cache by label after flush")
// Verify we can still decrypt - this should fetch from DB and re-cache
decrypted, err = svc.Decrypt(ctx, namespace, encrypted)
require.NoError(t, err)
assert.Equal(t, plaintext, decrypted)
// Verify the key is back in the cache after the decrypt operation
_, existsById = dekCache.GetById(namespace.String(), dataKeyID)
assert.True(t, existsById, "DEK should be re-cached by ID after decrypt")
}
@@ -0,0 +1,130 @@
package manager
import (
"strconv"
"sync"
"time"
"github.com/grafana/grafana/pkg/registry/apis/secret/encryption"
"github.com/grafana/grafana/pkg/setting"
"github.com/prometheus/client_golang/prometheus"
)
type ossDataKeyCache struct {
mtx sync.RWMutex
byId map[string]map[string]*encryption.DataKeyCacheEntry
byLabel map[string]map[string]*encryption.DataKeyCacheEntry
cacheTTL time.Duration
}
func ProvideOSSDataKeyCache(cfg *setting.Cfg) encryption.DataKeyCache {
return &ossDataKeyCache{
byId: make(map[string]map[string]*encryption.DataKeyCacheEntry),
byLabel: make(map[string]map[string]*encryption.DataKeyCacheEntry),
cacheTTL: cfg.SecretsManagement.DataKeysCacheTTL,
}
}
func (c *ossDataKeyCache) GetById(namespace, id string) (_ *encryption.DataKeyCacheEntry, exists bool) {
defer func() {
cacheReadsCounter.With(prometheus.Labels{
"hit": strconv.FormatBool(exists),
"method": "byId",
}).Inc()
}()
c.mtx.RLock()
defer c.mtx.RUnlock()
entries, exists := c.byId[namespace]
if !exists {
return nil, false
}
entry, exists := entries[id]
if !exists || entry.IsExpired() || entry.Namespace != namespace {
return nil, false
}
return entry, true
}
func (c *ossDataKeyCache) GetByLabel(namespace, label string) (_ *encryption.DataKeyCacheEntry, exists bool) {
defer func() {
cacheReadsCounter.With(prometheus.Labels{
"hit": strconv.FormatBool(exists),
"method": "byLabel",
}).Inc()
}()
c.mtx.RLock()
defer c.mtx.RUnlock()
entries, exists := c.byLabel[namespace]
if !exists {
return nil, false
}
entry, exists := entries[label]
if !exists || entry.IsExpired() || entry.Namespace != namespace {
return nil, false
}
return entry, true
}
func (c *ossDataKeyCache) AddById(namespace string, entry *encryption.DataKeyCacheEntry) {
c.mtx.Lock()
defer c.mtx.Unlock()
entry.Expiration = time.Now().Add(c.cacheTTL)
entry.Namespace = namespace
entries, exists := c.byId[namespace]
if !exists {
entries = make(map[string]*encryption.DataKeyCacheEntry)
c.byId[namespace] = entries
}
entries[entry.Id] = entry
}
func (c *ossDataKeyCache) AddByLabel(namespace string, entry *encryption.DataKeyCacheEntry) {
c.mtx.Lock()
defer c.mtx.Unlock()
entry.Expiration = time.Now().Add(c.cacheTTL)
entry.Namespace = namespace
entries, exists := c.byLabel[namespace]
if !exists {
entries = make(map[string]*encryption.DataKeyCacheEntry)
c.byLabel[namespace] = entries
}
entries[entry.Label] = entry
}
func (c *ossDataKeyCache) RemoveExpired() {
c.mtx.Lock()
defer c.mtx.Unlock()
for _, entries := range c.byId {
for id, entry := range entries {
if entry.IsExpired() {
delete(entries, id)
}
}
}
for _, entries := range c.byLabel {
for label, entry := range entries {
if entry.IsExpired() {
delete(entries, label)
}
}
}
}
func (c *ossDataKeyCache) Flush(namespace string) {
c.mtx.Lock()
c.byId[namespace] = make(map[string]*encryption.DataKeyCacheEntry)
c.byLabel[namespace] = make(map[string]*encryption.DataKeyCacheEntry)
c.mtx.Unlock()
}
@@ -0,0 +1,570 @@
package manager
import (
"testing"
"time"
"github.com/grafana/grafana/pkg/registry/apis/secret/encryption"
"github.com/grafana/grafana/pkg/setting"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestOSSDataKeyCache(t *testing.T) {
t.Parallel()
settings := setting.NewCfg()
settings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 999 * time.Hour, // avoid expiration for testing
}
cache := ProvideOSSDataKeyCache(settings)
namespace := "test-namespace"
entry := &encryption.DataKeyCacheEntry{
Id: "key-123",
Label: "2024-01-01@provider.key1",
DataKey: []byte("test-data-key"),
Active: true,
}
t.Run("AddById and GetById", func(t *testing.T) {
cache.AddById(namespace, entry)
retrieved, exists := cache.GetById(namespace, entry.Id)
require.True(t, exists, "entry should exist after adding")
assert.Equal(t, entry.Id, retrieved.Id)
assert.Equal(t, entry.Label, retrieved.Label)
assert.Equal(t, entry.DataKey, retrieved.DataKey)
assert.Equal(t, entry.Active, retrieved.Active)
assert.Equal(t, namespace, retrieved.Namespace)
assert.True(t, retrieved.Expiration.After(time.Now()), "expiration should be in the future")
})
t.Run("AddByLabel and GetByLabel", func(t *testing.T) {
cache.AddByLabel(namespace, entry)
retrieved, exists := cache.GetByLabel(namespace, entry.Label)
require.True(t, exists, "entry should exist after adding")
assert.Equal(t, entry.Id, retrieved.Id)
assert.Equal(t, entry.Label, retrieved.Label)
assert.Equal(t, entry.DataKey, retrieved.DataKey)
assert.Equal(t, entry.Active, retrieved.Active)
assert.Equal(t, namespace, retrieved.Namespace)
assert.True(t, retrieved.Expiration.After(time.Now()), "expiration should be in the future")
})
t.Run("GetById and GetByLabel are independent", func(t *testing.T) {
cache2 := ProvideOSSDataKeyCache(settings)
ns := "independent-test"
entryById := &encryption.DataKeyCacheEntry{
Id: "id-only-key",
Label: "label1",
DataKey: []byte("data1"),
}
entryByLabel := &encryption.DataKeyCacheEntry{
Id: "id2",
Label: "label-only-key",
DataKey: []byte("data2"),
}
cache2.AddById(ns, entryById)
cache2.AddByLabel(ns, entryByLabel)
// Should find by ID
retrieved, exists := cache2.GetById(ns, entryById.Id)
require.True(t, exists)
assert.Equal(t, entryById.Id, retrieved.Id)
// Should not find by label that wasn't added via AddByLabel
_, exists = cache2.GetByLabel(ns, entryById.Label)
assert.False(t, exists)
// Should find by label
retrieved, exists = cache2.GetByLabel(ns, entryByLabel.Label)
require.True(t, exists)
assert.Equal(t, entryByLabel.Label, retrieved.Label)
// Should not find by ID that wasn't added via AddById
_, exists = cache2.GetById(ns, entryByLabel.Id)
assert.False(t, exists)
})
}
func TestOSSDataKeyCache_FalseConditions(t *testing.T) {
t.Parallel()
settings := setting.NewCfg()
settings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 999 * time.Hour,
}
cache := ProvideOSSDataKeyCache(settings)
namespace := "test-namespace"
entry := &encryption.DataKeyCacheEntry{
Id: "key-123",
Label: "2024-01-01@provider.key1",
DataKey: []byte("test-data-key"),
Active: true,
}
t.Run("GetById returns false for non-existent namespace", func(t *testing.T) {
_, exists := cache.GetById("non-existent-namespace", "any-id")
assert.False(t, exists)
})
t.Run("GetById returns false for non-existent id", func(t *testing.T) {
cache.AddById(namespace, entry)
_, exists := cache.GetById(namespace, "non-existent-id")
assert.False(t, exists)
})
t.Run("GetByLabel returns false for non-existent namespace", func(t *testing.T) {
_, exists := cache.GetByLabel("non-existent-namespace", "any-label")
assert.False(t, exists)
})
t.Run("GetByLabel returns false for non-existent label", func(t *testing.T) {
cache.AddByLabel(namespace, entry)
_, exists := cache.GetByLabel(namespace, "non-existent-label")
assert.False(t, exists)
})
t.Run("GetById returns false for expired entry", func(t *testing.T) {
shortTTLSettings := setting.NewCfg()
shortTTLSettings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 1 * time.Millisecond,
}
shortCache := ProvideOSSDataKeyCache(shortTTLSettings)
namespace := "test-ns"
expiredEntry := &encryption.DataKeyCacheEntry{
Id: "expired-key",
Label: "expired-label",
DataKey: []byte("expired-data"),
}
shortCache.AddById(namespace, expiredEntry)
time.Sleep(10 * time.Millisecond)
_, exists := shortCache.GetById(namespace, expiredEntry.Id)
assert.False(t, exists, "should return false for expired entry")
})
t.Run("GetByLabel returns false for expired entry", func(t *testing.T) {
shortTTLSettings := setting.NewCfg()
shortTTLSettings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 1 * time.Millisecond,
}
shortCache := ProvideOSSDataKeyCache(shortTTLSettings)
namespace := "test-ns"
expiredEntry := &encryption.DataKeyCacheEntry{
Id: "expired-key",
Label: "expired-label",
DataKey: []byte("expired-data"),
}
shortCache.AddByLabel(namespace, expiredEntry)
time.Sleep(10 * time.Millisecond)
_, exists := shortCache.GetByLabel(namespace, expiredEntry.Label)
assert.False(t, exists, "should return false for expired entry")
})
t.Run("GetById returns false when entry namespace doesn't match", func(t *testing.T) {
// This tests the entry.Namespace != namespace check in GetById
// This is a defensive check that shouldn't normally happen if AddById works correctly
testCache := ProvideOSSDataKeyCache(settings).(*ossDataKeyCache)
// Manually insert an entry with mismatched namespace to test the defensive check
mismatchedEntry := &encryption.DataKeyCacheEntry{
Id: "test-id",
Label: "test-label",
DataKey: []byte("test-data"),
Namespace: "wrong-namespace",
Expiration: time.Now().Add(999 * time.Hour),
}
testCache.mtx.Lock()
testCache.byId["correct-namespace"] = map[string]*encryption.DataKeyCacheEntry{
mismatchedEntry.Id: mismatchedEntry,
}
testCache.mtx.Unlock()
_, exists := testCache.GetById("correct-namespace", mismatchedEntry.Id)
assert.False(t, exists, "should return false when entry namespace doesn't match lookup namespace")
})
t.Run("GetByLabel returns false when entry namespace doesn't match", func(t *testing.T) {
// This tests the entry.Namespace != namespace check in GetByLabel
testCache := ProvideOSSDataKeyCache(settings).(*ossDataKeyCache)
// Manually insert an entry with mismatched namespace to test the defensive check
mismatchedEntry := &encryption.DataKeyCacheEntry{
Id: "test-id",
Label: "test-label",
DataKey: []byte("test-data"),
Namespace: "wrong-namespace",
Expiration: time.Now().Add(999 * time.Hour),
}
testCache.mtx.Lock()
testCache.byLabel["correct-namespace"] = map[string]*encryption.DataKeyCacheEntry{
"test-label": mismatchedEntry,
}
testCache.mtx.Unlock()
_, exists := testCache.GetByLabel("correct-namespace", mismatchedEntry.Label)
assert.False(t, exists, "should return false when entry namespace doesn't match lookup namespace")
})
}
// Test namespace isolation
func TestOSSDataKeyCache_NamespaceIsolation(t *testing.T) {
t.Parallel()
settings := setting.NewCfg()
settings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 999 * time.Hour,
}
cache := ProvideOSSDataKeyCache(settings)
namespace1 := "namespace-1"
namespace2 := "namespace-2"
entry1 := &encryption.DataKeyCacheEntry{
Id: "shared-id",
Label: "shared-label",
DataKey: []byte("data-from-ns1"),
Active: true,
}
entry2 := &encryption.DataKeyCacheEntry{
Id: "shared-id",
Label: "shared-label",
DataKey: []byte("data-from-ns2"),
Active: false,
}
t.Run("entries with same ID in different namespaces are isolated", func(t *testing.T) {
cache.AddById(namespace1, entry1)
cache.AddById(namespace2, entry2)
retrieved1, exists := cache.GetById(namespace1, entry1.Id)
require.True(t, exists)
assert.Equal(t, entry1.DataKey, retrieved1.DataKey)
assert.Equal(t, namespace1, retrieved1.Namespace)
assert.True(t, retrieved1.Active)
retrieved2, exists := cache.GetById(namespace2, entry2.Id)
require.True(t, exists)
assert.Equal(t, entry2.DataKey, retrieved2.DataKey)
assert.Equal(t, namespace2, retrieved2.Namespace)
assert.False(t, retrieved2.Active)
})
t.Run("entries with same label in different namespaces are isolated", func(t *testing.T) {
cache.AddByLabel(namespace1, entry1)
cache.AddByLabel(namespace2, entry2)
retrieved1, exists := cache.GetByLabel(namespace1, entry1.Label)
require.True(t, exists)
assert.Equal(t, entry1.DataKey, retrieved1.DataKey)
assert.Equal(t, namespace1, retrieved1.Namespace)
assert.True(t, retrieved1.Active)
retrieved2, exists := cache.GetByLabel(namespace2, entry2.Label)
require.True(t, exists)
assert.Equal(t, entry2.DataKey, retrieved2.DataKey)
assert.Equal(t, namespace2, retrieved2.Namespace)
assert.False(t, retrieved2.Active)
})
t.Run("cannot retrieve entry from wrong namespace", func(t *testing.T) {
// flush both namespaces, since earlier subtests populated the cache
cache.Flush(namespace1)
cache.Flush(namespace2)
cache.AddById(namespace1, entry1)
_, exists := cache.GetById(namespace2, entry1.Id)
assert.False(t, exists, "should not find entry from different namespace")
cache.AddByLabel(namespace1, entry1)
_, exists = cache.GetByLabel(namespace2, entry1.Label)
assert.False(t, exists, "should not find entry from different namespace")
})
}
func TestOSSDataKeyCache_Expiration(t *testing.T) {
t.Parallel()
t.Run("entries expire after TTL", func(t *testing.T) {
settings := setting.NewCfg()
settings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 50 * time.Millisecond,
}
cache := ProvideOSSDataKeyCache(settings)
namespace := "test-ns"
entry := &encryption.DataKeyCacheEntry{
Id: "expiring-key",
Label: "expiring-label",
DataKey: []byte("expiring-data"),
}
cache.AddById(namespace, entry)
cache.AddByLabel(namespace, entry)
// Should exist immediately
_, exists := cache.GetById(namespace, entry.Id)
assert.True(t, exists, "entry should exist immediately after adding")
_, exists = cache.GetByLabel(namespace, entry.Label)
assert.True(t, exists, "entry should exist immediately after adding")
// Wait for expiration
time.Sleep(100 * time.Millisecond)
// Should not exist after expiration
_, exists = cache.GetById(namespace, entry.Id)
assert.False(t, exists, "entry should not exist after TTL expires")
_, exists = cache.GetByLabel(namespace, entry.Label)
assert.False(t, exists, "entry should not exist after TTL expires")
})
t.Run("RemoveExpired removes only expired entries", func(t *testing.T) {
settings := setting.NewCfg()
settings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 50 * time.Millisecond,
}
cache := ProvideOSSDataKeyCache(settings)
namespace := "test-ns"
// Add entries that will expire
expiredEntry1 := &encryption.DataKeyCacheEntry{
Id: "expired-1",
Label: "expired-label-1",
DataKey: []byte("expired-data-1"),
}
expiredEntry2 := &encryption.DataKeyCacheEntry{
Id: "expired-2",
Label: "expired-label-2",
DataKey: []byte("expired-data-2"),
}
cache.AddById(namespace, expiredEntry1)
cache.AddByLabel(namespace, expiredEntry2)
// Wait for expiration
time.Sleep(100 * time.Millisecond)
// Add fresh entries
freshEntry1 := &encryption.DataKeyCacheEntry{
Id: "fresh-1",
Label: "fresh-label-1",
DataKey: []byte("fresh-data-1"),
}
freshEntry2 := &encryption.DataKeyCacheEntry{
Id: "fresh-2",
Label: "fresh-label-2",
DataKey: []byte("fresh-data-2"),
}
cache.AddById(namespace, freshEntry1)
cache.AddByLabel(namespace, freshEntry2)
// Before RemoveExpired, expired entries still exist in the map
// but GetById/GetByLabel return false due to IsExpired() check
// Call RemoveExpired
cache.RemoveExpired()
// Fresh entries should still exist
_, exists := cache.GetById(namespace, freshEntry1.Id)
assert.True(t, exists, "fresh entry should still exist after RemoveExpired")
_, exists = cache.GetByLabel(namespace, freshEntry2.Label)
assert.True(t, exists, "fresh entry should still exist after RemoveExpired")
// Expired entries should not exist
ossCache := cache.(*ossDataKeyCache)
_, exists = ossCache.byId[namespace][expiredEntry1.Id]
assert.False(t, exists, "expired entry should not exist after RemoveExpired")
_, exists = ossCache.byLabel[namespace][expiredEntry2.Label]
assert.False(t, exists, "expired entry should not exist after RemoveExpired")
})
t.Run("RemoveExpired handles multiple namespaces", func(t *testing.T) {
settings := setting.NewCfg()
settings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 50 * time.Millisecond,
}
cache := ProvideOSSDataKeyCache(settings)
ns1 := "namespace-1"
ns2 := "namespace-2"
ns1ExpiredEntry := &encryption.DataKeyCacheEntry{
Id: "expired-key-ns1",
Label: "expired-label-ns1",
DataKey: []byte("expired-data"),
}
ns2ExpiredEntry := &encryption.DataKeyCacheEntry{
Id: "expired-key-ns2",
Label: "expired-label-ns2",
DataKey: []byte("expired-data"),
}
cache.AddById(ns1, ns1ExpiredEntry)
cache.AddByLabel(ns1, ns1ExpiredEntry)
cache.AddById(ns2, ns2ExpiredEntry)
cache.AddByLabel(ns2, ns2ExpiredEntry)
time.Sleep(100 * time.Millisecond)
ns1FreshEntry := &encryption.DataKeyCacheEntry{
Id: "fresh-key-ns1",
Label: "fresh-label-ns1",
DataKey: []byte("fresh-data-ns1"),
}
ns2FreshEntry := &encryption.DataKeyCacheEntry{
Id: "fresh-key-ns2",
Label: "fresh-label-ns2",
DataKey: []byte("fresh-data-ns2"),
}
cache.AddById(ns1, ns1FreshEntry)
cache.AddByLabel(ns1, ns1FreshEntry)
cache.AddById(ns2, ns2FreshEntry)
cache.AddByLabel(ns2, ns2FreshEntry)
cache.RemoveExpired()
// Fresh entries in both namespaces should exist
_, exists := cache.GetById(ns1, ns1FreshEntry.Id)
assert.True(t, exists)
_, exists = cache.GetByLabel(ns1, ns1FreshEntry.Label)
assert.True(t, exists)
_, exists = cache.GetById(ns2, ns2FreshEntry.Id)
assert.True(t, exists)
_, exists = cache.GetByLabel(ns2, ns2FreshEntry.Label)
assert.True(t, exists)
// Expired entries in both namespaces should not exist
ossCache := cache.(*ossDataKeyCache)
_, exists = ossCache.byId[ns1][ns1ExpiredEntry.Id]
assert.False(t, exists)
_, exists = ossCache.byId[ns2][ns2ExpiredEntry.Id]
assert.False(t, exists)
_, exists = ossCache.byLabel[ns1][ns1ExpiredEntry.Label]
assert.False(t, exists)
_, exists = ossCache.byLabel[ns2][ns2ExpiredEntry.Label]
assert.False(t, exists)
})
}
// Test Flush()
func TestOSSDataKeyCache_Flush(t *testing.T) {
t.Parallel()
settings := setting.NewCfg()
settings.SecretsManagement = setting.SecretsManagerSettings{
DataKeysCacheTTL: 999 * time.Hour,
}
cache := ProvideOSSDataKeyCache(settings)
namespace1 := "namespace-1"
namespace2 := "namespace-2"
entry1 := &encryption.DataKeyCacheEntry{
Id: "key-1",
Label: "label-1",
DataKey: []byte("data-1"),
}
entry2 := &encryption.DataKeyCacheEntry{
Id: "key-2",
Label: "label-2",
DataKey: []byte("data-2"),
}
t.Run("Flush removes all entries from specified namespace", func(t *testing.T) {
cache.AddById(namespace1, entry1)
cache.AddByLabel(namespace1, entry1)
// Verify entries exist
_, exists := cache.GetById(namespace1, entry1.Id)
require.True(t, exists)
_, exists = cache.GetByLabel(namespace1, entry1.Label)
require.True(t, exists)
// Flush namespace1
cache.Flush(namespace1)
// Entries should no longer exist
_, exists = cache.GetById(namespace1, entry1.Id)
assert.False(t, exists, "entry should not exist after flush")
_, exists = cache.GetByLabel(namespace1, entry1.Label)
assert.False(t, exists, "entry should not exist after flush")
})
t.Run("Flush only affects specified namespace", func(t *testing.T) {
cache.AddById(namespace1, entry1)
cache.AddByLabel(namespace1, entry1)
cache.AddById(namespace2, entry2)
cache.AddByLabel(namespace2, entry2)
// Flush only namespace1
cache.Flush(namespace1)
// namespace1 entries should not exist
_, exists := cache.GetById(namespace1, entry1.Id)
assert.False(t, exists)
_, exists = cache.GetByLabel(namespace1, entry1.Label)
assert.False(t, exists)
// namespace2 entries should still exist
_, exists = cache.GetById(namespace2, entry2.Id)
assert.True(t, exists, "entries in other namespace should not be affected")
_, exists = cache.GetByLabel(namespace2, entry2.Label)
assert.True(t, exists, "entries in other namespace should not be affected")
})
t.Run("Flush on non-existent namespace does not panic", func(t *testing.T) {
assert.NotPanics(t, func() {
cache.Flush("non-existent-namespace")
})
})
t.Run("can add entries after flush", func(t *testing.T) {
cache.AddById(namespace1, entry1)
cache.Flush(namespace1)
// Add new entry after flush
newEntry := &encryption.DataKeyCacheEntry{
Id: "new-key",
Label: "new-label",
DataKey: []byte("new-data"),
}
cache.AddById(namespace1, newEntry)
// New entry should exist
_, exists := cache.GetById(namespace1, "new-key")
assert.True(t, exists, "should be able to add entries after flush")
})
}
@@ -0,0 +1,27 @@
package manager
import "github.com/grafana/grafana/pkg/registry/apis/secret/encryption"
// This is being used as the data key cache in both OSS and Enterprise while we discuss security requirements for DEK caching
type noopDataKeyCache struct{}
func ProvideNoopDataKeyCache() encryption.DataKeyCache {
return &noopDataKeyCache{}
}
func (c *noopDataKeyCache) GetById(_ string, _ string) (*encryption.DataKeyCacheEntry, bool) {
return nil, false
}
func (c *noopDataKeyCache) GetByLabel(_ string, _ string) (*encryption.DataKeyCacheEntry, bool) {
return nil, false
}
func (c *noopDataKeyCache) AddById(_ string, _ *encryption.DataKeyCacheEntry) {}
func (c *noopDataKeyCache) AddByLabel(_ string, _ *encryption.DataKeyCacheEntry) {}
func (c *noopDataKeyCache) RemoveExpired() {}
func (c *noopDataKeyCache) Flush(_ string) {}
@@ -7,6 +7,7 @@ import (
"go.opentelemetry.io/otel/trace/noop"
"github.com/grafana/grafana/pkg/infra/usagestats"
"github.com/grafana/grafana/pkg/registry/apis/secret/encryption"
"github.com/grafana/grafana/pkg/registry/apis/secret/encryption/cipher/service"
osskmsproviders "github.com/grafana/grafana/pkg/registry/apis/secret/encryption/kmsproviders"
"github.com/grafana/grafana/pkg/services/sqlstore"
@@ -47,8 +48,32 @@ func setupTestService(tb testing.TB) *EncryptionManager {
usageStats,
enc,
ossProviders,
&NoopDataKeyCache{},
cfg,
)
require.NoError(tb, err)
return encMgr.(*EncryptionManager)
}
type NoopDataKeyCache struct{}
func (c *NoopDataKeyCache) GetById(namespace, id string) (*encryption.DataKeyCacheEntry, bool) {
return nil, false
}
func (c *NoopDataKeyCache) GetByLabel(namespace, label string) (*encryption.DataKeyCacheEntry, bool) {
return nil, false
}
func (c *NoopDataKeyCache) AddById(namespace string, entry *encryption.DataKeyCacheEntry) {
}
func (c *NoopDataKeyCache) AddByLabel(namespace string, entry *encryption.DataKeyCacheEntry) {
}
func (c *NoopDataKeyCache) RemoveExpired() {
}
func (c *NoopDataKeyCache) Flush(namespace string) {}
@@ -40,3 +40,25 @@ func (id ProviderID) Kind() (string, error) {
func KeyLabel(providerID ProviderID) string {
return fmt.Sprintf("%s@%s", time.Now().Format("2006-01-02"), providerID)
}
type DataKeyCache interface {
GetById(namespace, id string) (*DataKeyCacheEntry, bool)
GetByLabel(namespace, label string) (*DataKeyCacheEntry, bool)
AddById(namespace string, entry *DataKeyCacheEntry)
AddByLabel(namespace string, entry *DataKeyCacheEntry)
RemoveExpired()
Flush(namespace string)
}
type DataKeyCacheEntry struct {
Namespace string
Id string
Label string
DataKey []byte
Active bool
Expiration time.Time
}
func (e DataKeyCacheEntry) IsExpired() bool {
return e.Expiration.Before(time.Now())
}
@@ -62,7 +62,7 @@ func setupTestService(t *testing.T, cfg *setting.Cfg) (*OSSKeeperService, error)
ossProviders, err := osskmsproviders.ProvideOSSKMSProviders(cfg, enc)
require.NoError(t, err)
encryptionManager, err := manager.ProvideEncryptionManager(tracer, dataKeyStore, usageStats, enc, ossProviders)
encryptionManager, err := manager.ProvideEncryptionManager(tracer, dataKeyStore, usageStats, enc, ossProviders, &manager.NoopDataKeyCache{}, cfg)
require.NoError(t, err)
// Initialize the keeper service
@@ -53,6 +53,9 @@ func (s *ConsolidationService) Consolidate(ctx context.Context) (err error) {
return fmt.Errorf("disabling all data keys: %w", err)
}
// Keep track of which namespaces we have already flushed, so that newly created data keys can still be cached afterwards
flushedNamespaces := make(map[string]bool)
// List all encrypted values.
encryptedValues, err := s.globalEncryptedValueStore.ListAll(ctx, contracts.ListOpts{}, nil)
if err != nil {
@@ -60,6 +63,12 @@ func (s *ConsolidationService) Consolidate(ctx context.Context) (err error) {
}
for _, ev := range encryptedValues {
// Flush the cache for this namespace if we haven't already
if !flushedNamespaces[ev.Namespace] {
s.encryptionManager.FlushCache(xkube.Namespace(ev.Namespace))
flushedNamespaces[ev.Namespace] = true
}
// Decrypt the value using its old data key.
decryptedValue, err := s.encryptionManager.Decrypt(ctx, xkube.Namespace(ev.Namespace), ev.EncryptedPayload)
if err != nil {
@@ -121,6 +121,8 @@ func Setup(t *testing.T, opts ...func(*SetupConfig)) Sut {
usageStats,
enc,
ossProviders,
&manager.NoopDataKeyCache{},
cfg,
)
require.NoError(t, err)
@@ -488,7 +488,8 @@ func Initialize(ctx context.Context, cfg *setting.Cfg, opts Options, apiOpts api
if err != nil {
return nil, err
}
encryptionManager, err := manager2.ProvideEncryptionManager(tracer, dataKeyStorage, usageStats, cipher, providerConfig)
dataKeyCache := manager2.ProvideNoopDataKeyCache()
encryptionManager, err := manager2.ProvideEncryptionManager(tracer, dataKeyStorage, usageStats, cipher, providerConfig, dataKeyCache, cfg)
if err != nil {
return nil, err
}
@@ -1154,7 +1155,8 @@ func InitializeForTest(ctx context.Context, t sqlutil.ITestDB, testingT interfac
if err != nil {
return nil, err
}
encryptionManager, err := manager2.ProvideEncryptionManager(tracer, dataKeyStorage, usageStats, cipher, providerConfig)
dataKeyCache := manager2.ProvideNoopDataKeyCache()
encryptionManager, err := manager2.ProvideEncryptionManager(tracer, dataKeyStorage, usageStats, cipher, providerConfig, dataKeyCache, cfg)
if err != nil {
return nil, err
}
@@ -1716,7 +1718,8 @@ func InitializeForCLI(ctx context.Context, cfg *setting.Cfg) (Runner, error) {
if err != nil {
return Runner{}, err
}
encryptionManager, err := manager2.ProvideEncryptionManager(tracer, dataKeyStorage, usageStats, cipher, providerConfig)
dataKeyCache := manager2.ProvideNoopDataKeyCache()
encryptionManager, err := manager2.ProvideEncryptionManager(tracer, dataKeyStorage, usageStats, cipher, providerConfig, dataKeyCache, cfg)
if err != nil {
return Runner{}, err
}
@@ -18,6 +18,7 @@ import (
"github.com/grafana/grafana/pkg/registry/apis/secret"
"github.com/grafana/grafana/pkg/registry/apis/secret/contracts"
gsmKMSProviders "github.com/grafana/grafana/pkg/registry/apis/secret/encryption/kmsproviders"
gsmEncryptionManager "github.com/grafana/grafana/pkg/registry/apis/secret/encryption/manager"
"github.com/grafana/grafana/pkg/registry/apis/secret/secretkeeper"
secretService "github.com/grafana/grafana/pkg/registry/apis/secret/service"
"github.com/grafana/grafana/pkg/registry/apps/advisor"
@@ -152,6 +153,8 @@ var wireExtsBasicSet = wire.NewSet(
aggregatorrunner.ProvideNoopAggregatorConfigurator,
apisregistry.WireSetExts,
gsmKMSProviders.ProvideOSSKMSProviders,
//gsmEncryptionManager.ProvideOSSDataKeyCache, // Temporarily use noop cache
gsmEncryptionManager.ProvideNoopDataKeyCache,
secret.ProvideSecureValueClient,
provisioningExtras,
configProviderExtras,
@@ -11,8 +11,18 @@ const (
)
type SecretsManagerSettings struct {
// Which encryption provider to use to encrypt any new secrets
CurrentEncryptionProvider string
// The time to live for decrypted data keys in memory
DataKeysCacheTTL time.Duration
// The interval to remove expired data keys from the cache
DataKeysCacheCleanupInterval time.Duration
// The caution period is the time after which a data key is assumed to have been persisted, even in the worst case
DataKeysCacheCautionPeriod time.Duration
// Whether to use a Redis cache for data keys instead of the in-memory cache
DataKeysCacheUseRedis bool
// ConfiguredKMSProviders is a map of KMS providers found in the config file. The keys are in the format of <provider>.<keyName>, and the values are a map of the properties in that section
// In OSS, the provider type can only be "secret_key". In Enterprise, it can additionally be one of: "aws_kms", "azure_keyvault", "google_kms", "hashicorp_vault"
ConfiguredKMSProviders map[string]map[string]string
@@ -73,6 +83,12 @@ func (cfg *Cfg) readSecretsManagerSettings() {
cfg.SecretsManagement.AWSKeeperAccessKeyID = secretsMgmt.Key("aws_access_key_id").MustString("")
cfg.SecretsManagement.AWSKeeperSecretAccessKey = secretsMgmt.Key("aws_secret_access_key").MustString("")
cfg.SecretsManagement.DataKeysCacheUseRedis = secretsMgmt.Key("data_keys_cache_use_redis").MustBool(false)
cfg.SecretsManagement.DataKeysCacheTTL = secretsMgmt.Key("data_keys_cache_ttl").MustDuration(15 * time.Minute)
cfg.SecretsManagement.DataKeysCacheCleanupInterval = secretsMgmt.Key("data_keys_cache_cleanup_interval").MustDuration(1 * time.Minute)
// We consider a "caution period" of 10m to be long enough for any database transaction that created a data key to have finished successfully.
cfg.SecretsManagement.DataKeysCacheCautionPeriod = secretsMgmt.Key("data_keys_cache_caution_period").MustDuration(10 * time.Minute)
// Extract available KMS providers from configuration sections
providers := make(map[string]map[string]string)
for _, section := range cfg.Raw.Sections() {
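The defaults read above correspond to a config fragment like the following; the `[secrets_manager]` section name is an assumption inferred from the settings code, so verify it against your grafana.ini before relying on it:

```ini
; Section name assumed; key names and defaults taken from the code above.
[secrets_manager]
data_keys_cache_use_redis = false
data_keys_cache_ttl = 15m
data_keys_cache_cleanup_interval = 1m
data_keys_cache_caution_period = 10m
```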
@@ -14,7 +14,6 @@ import (
"github.com/grafana/grafana/pkg/apimachinery/validation"
"github.com/grafana/grafana/pkg/storage/unified/sql/db"
"github.com/grafana/grafana/pkg/storage/unified/sql/dbutil"
"github.com/grafana/grafana/pkg/storage/unified/sql/rvmanager"
"github.com/grafana/grafana/pkg/storage/unified/sql/sqltemplate"
gocache "github.com/patrickmn/go-cache"
)
@@ -869,18 +868,10 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
if key.Action == DataActionDeleted {
generation = 0
}
// In compatibility mode, the previous RV, when available, is saved as a microsecond
// timestamp, as is done in the SQL backend.
previousRV := event.PreviousRV
if event.PreviousRV > 0 && isSnowflake(event.PreviousRV) {
previousRV = rvmanager.RVFromSnowflake(event.PreviousRV)
}
_, err := dbutil.Exec(ctx, tx, sqlKVUpdateLegacyResourceHistory, sqlKVLegacyUpdateHistoryRequest{
SQLTemplate: sqltemplate.New(kv.dialect),
GUID: key.GUID,
PreviousRV: previousRV,
PreviousRV: event.PreviousRV,
Generation: generation,
})
@@ -909,7 +900,7 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
Name: key.Name,
Action: action,
Folder: key.Folder,
PreviousRV: previousRV,
PreviousRV: event.PreviousRV,
})
if err != nil {
@@ -925,7 +916,7 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
Name: key.Name,
Action: action,
Folder: key.Folder,
PreviousRV: previousRV,
PreviousRV: event.PreviousRV,
})
if err != nil {
@@ -947,15 +938,3 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
return nil
}
// isSnowflake returns whether the argument passed is a snowflake ID (new) or a microsecond timestamp (old).
// We try to interpret the number as a microsecond timestamp first. If it represents a time in the past,
// it is considered a microsecond timestamp. Snowflake IDs are much larger integers and would lead
// to dates in the future if interpreted as a microsecond timestamp.
func isSnowflake(rv int64) bool {
ts := time.UnixMicro(rv)
oneHourFromNow := time.Now().Add(time.Hour)
isMicroSecRV := ts.Before(oneHourFromNow)
return !isMicroSecRV
}
@@ -456,27 +456,33 @@ func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier
},
}
errCh := make(chan error)
go func() {
for _, event := range testEvents {
errCh <- eventStore.Save(ctx, event)
err := eventStore.Save(ctx, event)
require.NoError(t, err)
}
}()
// Receive events
receivedEvents := make([]string, 0, len(testEvents))
for len(receivedEvents) != len(testEvents) {
receivedEvents := make([]Event, 0, len(testEvents))
for i := 0; i < len(testEvents); i++ {
select {
case event := <-events:
receivedEvents = append(receivedEvents, event.Name)
case err := <-errCh:
require.NoError(t, err)
receivedEvents = append(receivedEvents, event)
case <-time.After(1 * time.Second):
t.Fatalf("Timed out waiting for event %d", len(receivedEvents)+1)
t.Fatalf("Timed out waiting for event %d", i+1)
}
}
// Verify all events were received
assert.Len(t, receivedEvents, len(testEvents))
// Verify the received events match the expected names (ElementsMatch is order-insensitive)
receivedNames := make([]string, len(receivedEvents))
for i, event := range receivedEvents {
receivedNames[i] = event.Name
}
expectedNames := []string{"test-resource-1", "test-resource-2", "test-resource-3"}
assert.ElementsMatch(t, expectedNames, receivedEvents)
assert.ElementsMatch(t, expectedNames, receivedNames)
}
@@ -473,6 +473,8 @@ func (k *sqlKV) Delete(ctx context.Context, section string, key string) error {
return ErrNotFound
}
// TODO reflect change to resource table
return nil
}
@@ -347,7 +347,7 @@ func (k *kvStorageBackend) WriteEvent(ctx context.Context, event WriteEvent) (in
return 0, fmt.Errorf("failed to write data: %w", err)
}
rv = rvmanager.SnowflakeFromRV(rv)
rv = rvmanager.SnowflakeFromRv(rv)
dataKey.ResourceVersion = rv
} else {
err := k.dataStore.Save(ctx, dataKey, bytes.NewReader(event.Value))
@@ -689,6 +689,9 @@ func validateListHistoryRequest(req *resourcepb.ListRequest) error {
if key.Namespace == "" {
return fmt.Errorf("namespace is required")
}
if key.Name == "" {
return fmt.Errorf("name is required")
}
return nil
}
@@ -307,7 +307,7 @@ func (m *ResourceVersionManager) execBatch(ctx context.Context, group, resource
// Allocate the RVs
for i, guid := range guids {
guidToRV[guid] = rv
guidToSnowflakeRV[guid] = SnowflakeFromRV(rv)
guidToSnowflakeRV[guid] = SnowflakeFromRv(rv)
rvs[i] = rv
rv++
}
@@ -364,20 +364,12 @@ func (m *ResourceVersionManager) execBatch(ctx context.Context, group, resource
}
}
// takes a unix microsecond RV and transforms into a snowflake format. The timestamp is converted from microsecond to
// takes a unix microsecond rv and transforms into a snowflake format. The timestamp is converted from microsecond to
// millisecond (the integer division) and the remainder is saved in the stepbits section. machine id is always 0
func SnowflakeFromRV(rv int64) int64 {
func SnowflakeFromRv(rv int64) int64 {
return (((rv / 1000) - snowflake.Epoch) << (snowflake.NodeBits + snowflake.StepBits)) + (rv % 1000)
}
// It is generally not possible to convert from a snowflakeID to a microsecond RV due to the loss in precision
// (snowflake ID stores timestamp in milliseconds). However, this implementation stores the microsecond fraction
// in the step bits (see SnowflakeFromRV), allowing us to compute the microsecond timestamp.
func RVFromSnowflake(snowflakeID int64) int64 {
microSecFraction := snowflakeID & ((1 << snowflake.StepBits) - 1)
return ((snowflakeID>>(snowflake.NodeBits+snowflake.StepBits))+snowflake.Epoch)*1000 + microSecFraction
}
// helper utility to compare two RVs. The first RV must be in snowflake format. Will convert rv2 to snowflake and retry
// if comparison fails
func IsRvEqual(rv1, rv2 int64) bool {
@@ -385,7 +377,7 @@ func IsRvEqual(rv1, rv2 int64) bool {
return true
}
return rv1 == SnowflakeFromRV(rv2)
return rv1 == SnowflakeFromRv(rv2)
}
// Lock locks the resource version for the given key
@@ -63,13 +63,3 @@ func TestResourceVersionManager(t *testing.T) {
require.Equal(t, rv, int64(200))
})
}
func TestSnowflakeFromRVRoundtrips(t *testing.T) {
// 2026-01-12 19:33:58.806211 +0000 UTC
offset := int64(1768246438806211) // in microseconds
for n := range int64(100) {
ts := offset + n
require.Equal(t, ts, RVFromSnowflake(SnowflakeFromRV(ts)))
}
}
@@ -23,7 +23,6 @@ import (
"github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
sqldb "github.com/grafana/grafana/pkg/storage/unified/sql/db"
@@ -100,10 +99,6 @@ func RunStorageBackendTest(t *testing.T, newBackend NewBackendFunc, opts *TestOp
}
t.Run(tc.name, func(t *testing.T) {
if db.IsTestDbSQLite() {
t.Skip("Skipping tests on sqlite until channel notifier is implemented")
}
tc.fn(t, newBackend(context.Background()), opts.NSPrefix)
})
}
@@ -1171,7 +1166,7 @@ func runTestIntegrationBackendCreateNewResource(t *testing.T, backend resource.S
}))
server := newServer(t, backend)
ns := nsPrefix + "-create-rsrce" // create-resource
ns := nsPrefix + "-create-resource"
ctx = request.WithNamespace(ctx, ns)
request := &resourcepb.CreateRequest{
@@ -1612,7 +1607,7 @@ func (s *sliceBulkRequestIterator) RollbackRequested() bool {
func runTestIntegrationBackendOptimisticLocking(t *testing.T, backend resource.StorageBackend, nsPrefix string) {
ctx := testutil.NewTestContext(t, time.Now().Add(30*time.Second))
ns := nsPrefix + "-optimis-lock" // optimistic-locking. need to cut down on characters to not exceed namespace character limit (40)
ns := nsPrefix + "-optimistic-locking"
t.Run("concurrent updates with same RV - only one succeeds", func(t *testing.T) {
// Create initial resource with rv0 (no previous RV)
@@ -36,10 +36,6 @@ func NewTestSqlKvBackend(t *testing.T, ctx context.Context, withRvManager bool)
KvStore: kv,
}
if db.DriverName() == "sqlite3" {
kvOpts.UseChannelNotifier = true
}
if withRvManager {
dialect := sqltemplate.DialectForDriver(db.DriverName())
rvManager, err := rvmanager.NewResourceVersionManager(rvmanager.ResourceManagerOptions{
@@ -204,7 +200,7 @@ func verifyKeyPath(t *testing.T, db sqldb.DB, ctx context.Context, key *resource
var keyPathRV int64
if isSqlBackend {
// Convert microsecond RV to snowflake for key_path construction
keyPathRV = rvmanager.SnowflakeFromRV(resourceVersion)
keyPathRV = rvmanager.SnowflakeFromRv(resourceVersion)
} else {
// KV backend already provides snowflake RV
keyPathRV = resourceVersion
@@ -438,6 +434,9 @@ func verifyResourceHistoryTable(t *testing.T, db sqldb.DB, namespace string, res
rows, err := db.QueryContext(ctx, query, namespace)
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()
var records []ResourceHistoryRecord
for rows.Next() {
@@ -461,34 +460,33 @@ func verifyResourceHistoryTable(t *testing.T, db sqldb.DB, namespace string, res
for resourceIdx, res := range resources {
// Check create record (action=1, generation=1)
createRecord := records[recordIndex]
verifyResourceHistoryRecord(t, createRecord, namespace, res, resourceIdx, 1, 0, 1, resourceVersions[resourceIdx][0])
verifyResourceHistoryRecord(t, createRecord, res, resourceIdx, 1, 0, 1, resourceVersions[resourceIdx][0])
recordIndex++
}
for resourceIdx, res := range resources {
// Check update record (action=2, generation=2)
updateRecord := records[recordIndex]
verifyResourceHistoryRecord(t, updateRecord, namespace, res, resourceIdx, 2, resourceVersions[resourceIdx][0], 2, resourceVersions[resourceIdx][1])
verifyResourceHistoryRecord(t, updateRecord, res, resourceIdx, 2, resourceVersions[resourceIdx][0], 2, resourceVersions[resourceIdx][1])
recordIndex++
}
for resourceIdx, res := range resources[:2] {
// Check delete record (action=3, generation=0) - only first 2 resources were deleted
deleteRecord := records[recordIndex]
verifyResourceHistoryRecord(t, deleteRecord, namespace, res, resourceIdx, 3, resourceVersions[resourceIdx][1], 0, resourceVersions[resourceIdx][2])
verifyResourceHistoryRecord(t, deleteRecord, res, resourceIdx, 3, resourceVersions[resourceIdx][1], 0, resourceVersions[resourceIdx][2])
recordIndex++
}
}
// verifyResourceHistoryRecord validates a single resource_history record
func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, namespace string, expectedRes struct{ name, folder string }, resourceIdx, expectedAction int, expectedPrevRV int64, expectedGeneration int, expectedRV int64) {
func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, expectedRes struct{ name, folder string }, resourceIdx, expectedAction int, expectedPrevRV int64, expectedGeneration int, expectedRV int64) {
// Validate GUID (should be non-empty)
require.NotEmpty(t, record.GUID, "GUID should not be empty")
// Validate group/resource/namespace/name
require.Equal(t, "playlist.grafana.app", record.Group)
require.Equal(t, "playlists", record.Resource)
require.Equal(t, namespace, record.Namespace)
require.Equal(t, expectedRes.name, record.Name)
// Validate value contains expected JSON - server modifies/formats the JSON differently for different operations
@@ -515,12 +513,8 @@ func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, nam
// For KV backend operations, expectedPrevRV is now in snowflake format (returned by KV backend)
// but resource_history table stores microsecond RV, so we need to use IsRvEqual for comparison
if strings.Contains(record.Namespace, "-kv") {
if expectedPrevRV == 0 {
require.Zero(t, record.PreviousResourceVersion)
} else {
require.Equal(t, expectedPrevRV, rvmanager.SnowflakeFromRV(record.PreviousResourceVersion),
"Previous resource version should match (KV backend snowflake format)")
}
require.True(t, rvmanager.IsRvEqual(expectedPrevRV, record.PreviousResourceVersion),
"Previous resource version should match (KV backend snowflake format)")
} else {
require.Equal(t, expectedPrevRV, record.PreviousResourceVersion)
}
@@ -552,6 +546,9 @@ func verifyResourceTable(t *testing.T, db sqldb.DB, namespace string, resources
rows, err := db.QueryContext(ctx, query, namespace)
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()
var records []ResourceRecord
for rows.Next() {
@@ -615,6 +612,9 @@ func verifyResourceVersionTable(t *testing.T, db sqldb.DB, namespace string, res
// Check that we have exactly one entry for playlist.grafana.app/playlists
rows, err := db.QueryContext(ctx, query, "playlist.grafana.app", "playlists")
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()
var records []ResourceVersionRecord
for rows.Next() {
@@ -649,7 +649,7 @@ func verifyResourceVersionTable(t *testing.T, db sqldb.DB, namespace string, res
isKvBackend := strings.Contains(namespace, "-kv")
recordResourceVersion := record.ResourceVersion
if isKvBackend {
recordResourceVersion = rvmanager.SnowflakeFromRV(record.ResourceVersion)
recordResourceVersion = rvmanager.SnowflakeFromRv(record.ResourceVersion)
}
require.Less(t, recordResourceVersion, int64(9223372036854775807), "resource_version should be reasonable")
@@ -841,20 +841,24 @@ func runMixedConcurrentOperations(t *testing.T, sqlServer, kvServer resource.Res
}
// SQL backend operations
wg.Go(func() {
wg.Add(1)
go func() {
defer wg.Done()
<-startBarrier // Wait for signal to start
if err := runBackendOperationsWithCounts(ctx, sqlServer, namespace+"-sql", "sql", opCounts); err != nil {
errors <- fmt.Errorf("SQL backend operations failed: %w", err)
}
})
}()
// KV backend operations
wg.Go(func() {
wg.Add(1)
go func() {
defer wg.Done()
<-startBarrier // Wait for signal to start
if err := runBackendOperationsWithCounts(ctx, kvServer, namespace+"-kv", "kv", opCounts); err != nil {
errors <- fmt.Errorf("KV backend operations failed: %w", err)
}
})
}()
// Start both goroutines simultaneously
close(startBarrier)
@@ -41,9 +41,17 @@ func TestIntegrationSQLKVStorageBackend(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)
skipTests := map[string]bool{
TestWatchWriteEvents: true,
TestList: true,
TestBlobSupport: true,
TestGetResourceStats: true,
TestListHistory: true,
TestListHistoryErrorReporting: true,
TestListModifiedSince: true,
TestListTrash: true,
TestCreateNewResource: true,
TestGetResourceLastImportTime: true,
TestOptimisticLocking: true,
}
t.Run("Without RvManager", func(t *testing.T) {
@@ -51,7 +59,7 @@ func TestIntegrationSQLKVStorageBackend(t *testing.T) {
backend, _ := NewTestSqlKvBackend(t, ctx, false)
return backend
}, &TestOptions{
NSPrefix: "sqlkvstoragetest",
NSPrefix: "sqlkvstorage-test",
SkipTests: skipTests,
})
})
@@ -61,7 +69,7 @@ func TestIntegrationSQLKVStorageBackend(t *testing.T) {
backend, _ := NewTestSqlKvBackend(t, ctx, true)
return backend
}, &TestOptions{
NSPrefix: "sqlkvstoragetest-rvmanager",
NSPrefix: "sqlkvstorage-withrvmanager-test",
SkipTests: skipTests,
})
})
@@ -10,10 +10,10 @@ import (
"github.com/grafana/alerting/notify"
"github.com/grafana/alerting/receivers/schema"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/grafana/grafana/apps/alerting/notifications/pkg/apis/alertingnotifications/v0alpha1"
"github.com/grafana/grafana/pkg/services/featuremgmt"
@@ -21,6 +21,7 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
"github.com/grafana/grafana/pkg/tests/api/alerting"
"github.com/grafana/grafana/pkg/tests/apis"
test_common "github.com/grafana/grafana/pkg/tests/apis/alerting/notifications/common"
"github.com/grafana/grafana/pkg/tests/testinfra"
)
@@ -33,8 +34,7 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
},
})
receiverClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
receiverClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
cliCfg := helper.Org1.Admin.NewRestConfig()
alertingApi := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
@@ -58,9 +58,9 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
response := alertingApi.ConvertPrometheusPostAlertmanagerConfig(t, amConfig, headers)
require.Equal(t, "success", response.Status)
receiversRaw, err := receiverClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
receiversRaw, err := receiverClient.Client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
raw, err := json.Marshal(receiversRaw)
raw, err := receiversRaw.MarshalJSON()
require.NoError(t, err)
expectedBytes, err := os.ReadFile(path.Join("test-data", "imported-expected-snapshot.json"))
@@ -74,7 +74,7 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
require.NoError(t, err)
}
receivers, err := receiverClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
receivers, err := receiverClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
t.Run("secure fields should be properly masked", func(t *testing.T) {
for _, receiver := range receivers.Items {
@@ -114,14 +114,14 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
toUpdate := receivers.Items[1]
toUpdate.Spec.Title = "another title"
_, err = receiverClient.Update(ctx, &toUpdate, resource.UpdateOptions{})
_, err = receiverClient.Update(ctx, &toUpdate, v1.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
t.Run("should not be able to delete", func(t *testing.T) {
toDelete := receivers.Items[1]
err = receiverClient.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: toDelete.Name}, resource.DeleteOptions{})
err = receiverClient.Delete(ctx, toDelete.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
}
@@ -15,12 +15,12 @@ import (
"github.com/grafana/alerting/notify/notifytest"
"github.com/grafana/alerting/receivers/line"
"github.com/grafana/alerting/receivers/schema"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"github.com/grafana/alerting/notify"
@@ -65,8 +65,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := test_common.NewReceiverClient(t, helper.Org1.Admin)
newResource := &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -78,42 +77,42 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}
t.Run("create should fail if object name is specified", func(t *testing.T) {
receiver := newResource.Copy().(*v0alpha1.Receiver)
receiver.Name = "new-receiver"
_, err := client.Create(ctx, receiver, resource.CreateOptions{})
resource := newResource.Copy().(*v0alpha1.Receiver)
resource.Name = "new-receiver"
_, err := client.Create(ctx, resource, v1.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
var resourceID resource.Identifier
var resourceID string
t.Run("create should succeed and provide resource name", func(t *testing.T) {
actual, err := client.Create(ctx, newResource, resource.CreateOptions{})
actual, err := client.Create(ctx, newResource, v1.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
resourceID = actual.GetStaticMetadata().Identifier()
resourceID = actual.Name
})
t.Run("resource should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resourceID)
actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newResource.Spec, actual.Spec)
})
t.Run("update should rename receiver if name in the specification changes", func(t *testing.T) {
existing, err := client.Get(ctx, resourceID)
existing, err := client.Get(ctx, resourceID, v1.GetOptions{})
require.NoError(t, err)
updated := existing.Copy().(*v0alpha1.Receiver)
updated.Spec.Title = "another-newReceiver"
actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
require.NotEqualf(t, updated.ResourceVersion, actual.ResourceVersion, "Update should change the resource version but it didn't")
resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, actual.Spec, resource.Spec)
require.Equal(t, actual.Name, resource.Name)
@@ -141,8 +140,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {
admin := org1.Admin
viewer := org1.Viewer
editor := org1.Editor
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, admin)
writeACMetadata := []string{"canWrite", "canDelete"}
allACMetadata := []string{"canWrite", "canDelete", "canReadSecrets", "canAdmin", "canModifyProtected"}
@@ -294,10 +292,8 @@ func TestIntegrationResourcePermissions(t *testing.T) {
},
} {
t.Run(tc.name, func(t *testing.T) {
createClient, err := v0alpha1.NewReceiverClientFromGenerator(tc.creatingUser.GetClientRegistry())
require.NoError(t, err)
client, err := v0alpha1.NewReceiverClientFromGenerator(tc.testUser.GetClientRegistry())
require.NoError(t, err)
createClient := test_common.NewReceiverClient(t, tc.creatingUser)
client := test_common.NewReceiverClient(t, tc.testUser)
var created = &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
@@ -312,12 +308,12 @@ func TestIntegrationResourcePermissions(t *testing.T) {
require.NoError(t, err)
// Create receiver with creatingUser
created, err = createClient.Create(ctx, created, resource.CreateOptions{})
created, err = createClient.Create(ctx, created, v1.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, created)
defer func() {
_ = adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
_ = adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
}()
// Assign resource permissions
@@ -342,7 +338,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {
// Obtain expected responses using admin client as source of truth.
expectedGetWithMetadata, expectedListWithMetadata := func() (*v0alpha1.Receiver, *v0alpha1.Receiver) {
expectedGet, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
expectedGet, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
require.NotNil(t, expectedGet)
@@ -356,7 +352,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {
expectedGetWithMetadata.SetAccessControl(ac)
}
expectedList, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
expectedList, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
expectedListWithMetadata := extractReceiverFromList(expectedList, created.Name)
require.NotNil(t, expectedListWithMetadata)
@@ -372,26 +368,26 @@ func TestIntegrationResourcePermissions(t *testing.T) {
}()
t.Run("should be able to list receivers", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
listedReceiver := extractReceiverFromList(list, created.Name)
assert.Equalf(t, expectedListWithMetadata, listedReceiver, "Expected %v but got %v", expectedListWithMetadata, listedReceiver)
})
t.Run("should be able to read receiver by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expectedGetWithMetadata.GetStaticMetadata().Identifier())
got, err := client.Get(ctx, expectedGetWithMetadata.Name, v1.GetOptions{})
require.NoError(t, err)
assert.Equalf(t, expectedGetWithMetadata, got, "Expected %v but got %v", expectedGetWithMetadata, got)
})
} else {
t.Run("list receivers should be empty", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Emptyf(t, list.Items, "Expected no receivers but got %v", list.Items)
})
t.Run("should be forbidden to read receiver by name", func(t *testing.T) {
_, err := client.Get(ctx, created.GetStaticMetadata().Identifier())
_, err := client.Get(ctx, created.Name, v1.GetOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -563,12 +559,10 @@ func TestIntegrationAccessControl(t *testing.T) {
},
}
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client, err := v0alpha1.NewReceiverClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
client := test_common.NewReceiverClient(t, tc.user)
var expected = &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
@@ -586,29 +580,29 @@ func TestIntegrationAccessControl(t *testing.T) {
newReceiver.Spec.Title = fmt.Sprintf("receiver-2-%s", tc.user.Identity.GetLogin())
if tc.canCreate {
t.Run("should be able to create receiver", func(t *testing.T) {
actual, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
actual, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.Equal(t, newReceiver.Spec, actual.Spec)
t.Run("should fail if already exists", func(t *testing.T) {
_, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
_, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
require.Truef(t, errors.IsConflict(err), "expected Conflict but got %s", err)
})
// Cleanup.
require.NoError(t, adminClient.Delete(ctx, actual.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{}))
})
} else {
t.Run("should be forbidden to create", func(t *testing.T) {
_, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
_, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
})
}
// create resource to proceed with other tests. We don't use the one created above because the user will always
// have admin permissions on it.
expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)
@@ -633,34 +627,34 @@ func TestIntegrationAccessControl(t *testing.T) {
expectedWithMetadata.SetAccessControl("canAdmin")
}
t.Run("should be able to list receivers", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 2) // default + created
})
t.Run("should be able to read receiver by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, expectedWithMetadata, got)
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("list receivers should be empty", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Emptyf(t, list.Items, "Expected no receivers but got %v", list.Items)
})
t.Run("should be forbidden to read receiver by name", func(t *testing.T) {
_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
@@ -674,7 +668,7 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to update receiver", func(t *testing.T) {
updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
expected = updated
@@ -682,7 +676,7 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.Receiver)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
@@ -692,7 +686,7 @@ func TestIntegrationAccessControl(t *testing.T) {
createIntegration(t, "webhook"),
}
expected, err = adminClient.Update(ctx, updatedExpected, resource.UpdateOptions{})
expected, err = adminClient.Update(ctx, updatedExpected, v1.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)
@@ -701,62 +695,60 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdateProtected {
t.Run("should be able to update protected fields of the receiver", func(t *testing.T) {
updated, err := client.Update(ctx, updatedProtected, resource.UpdateOptions{})
updated, err := client.Update(ctx, updatedProtected, v1.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, updated)
expected = updated
})
} else {
t.Run("should be forbidden to edit protected fields of the receiver", func(t *testing.T) {
_, err := client.Update(ctx, updatedProtected, resource.UpdateOptions{})
_, err := client.Update(ctx, updatedProtected, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
} else {
t.Run("should be forbidden to update receiver", func(t *testing.T) {
_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.Receiver)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.Falsef(t, tc.canUpdateProtected, "Invalid combination of assertions. CanUpdateProtected should be false")
}
deleteOptions := resource.DeleteOptions{Preconditions: resource.DeleteOptionsPreconditions{ResourceVersion: expected.ResourceVersion}}
deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
if tc.canDelete {
t.Run("should be able to delete receiver", func(t *testing.T) {
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), deleteOptions)
err := client.Delete(ctx, expected.Name, deleteOptions)
require.NoError(t, err)
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to delete receiver", func(t *testing.T) {
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), deleteOptions)
err := client.Delete(ctx, expected.Name, deleteOptions)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
}
if tc.canRead {
t.Run("should get empty list if no receivers", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
})
@@ -774,8 +766,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
// Prepare environment and create notification policy and rule that use receiver
alertmanagerRaw, err := testData.ReadFile(path.Join("test-data", "notification-settings.json"))
require.NoError(t, err)
@@ -822,7 +813,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
requestReceivers := func(t *testing.T, title string) (v0alpha1.Receiver, v0alpha1.Receiver) {
t.Helper()
receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
receivers, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 2)
idx := slices.IndexFunc(receivers.Items, func(interval v0alpha1.Receiver) bool {
@@ -830,7 +821,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
})
receiverListed := receivers.Items[idx]
receiverGet, err := adminClient.Get(ctx, receiverListed.GetStaticMetadata().Identifier())
receiverGet, err := adminClient.Get(ctx, receiverListed.Name, v1.GetOptions{})
require.NoError(t, err)
return receiverListed, *receiverGet
@@ -855,9 +846,8 @@ func TestIntegrationInUseMetadata(t *testing.T) {
amConfig.AlertmanagerConfig.Route.Routes = amConfig.AlertmanagerConfig.Route.Routes[:1]
v1Route, err := routingtree.ConvertToK8sResource(helper.Org1.AdminServiceAccount.OrgId, *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
routeAdminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
_, err = routeAdminClient.Update(ctx, v1Route, resource.UpdateOptions{})
routeAdminClient := test_common.NewRoutingTreeClient(t, helper.Org1.Admin)
_, err = routeAdminClient.Update(ctx, v1Route, v1.UpdateOptions{})
require.NoError(t, err)
receiverListed, receiverGet = requestReceivers(t, "user-defined")
@@ -878,7 +868,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
amConfig.AlertmanagerConfig.Route.Routes = nil
v1route, err := routingtree.ConvertToK8sResource(1, *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
_, err = routeAdminClient.Update(ctx, v1route, resource.UpdateOptions{})
_, err = routeAdminClient.Update(ctx, v1route, v1.UpdateOptions{})
require.NoError(t, err)
// Remove the remaining rules.
@@ -902,8 +892,7 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1
admin := org.Admin
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
db, err := store.ProvideDBStore(env.Cfg, env.FeatureToggles, env.SQLStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac, bus.ProvideBus(tracing.InitializeTracerForTest()))
@@ -919,7 +908,7 @@ func TestIntegrationProvisioning(t *testing.T) {
createIntegration(t, "email"),
},
},
}, resource.CreateOptions{})
}, v1.CreateOptions{})
require.NoError(t, err)
require.Equal(t, "none", created.GetProvenanceStatus())
@@ -928,23 +917,23 @@ func TestIntegrationProvisioning(t *testing.T) {
UID: *created.Spec.Integrations[0].Uid,
}, admin.Identity.GetOrgID(), "API"))
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
t.Run("should not let update if provisioned", func(t *testing.T) {
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
updated := got.Copy().(*v0alpha1.Receiver)
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "email"))
_, err = adminClient.Update(ctx, updated, resource.UpdateOptions{})
_, err = adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -955,10 +944,7 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
oldClient := test_common.NewReceiverClient(t, helper.Org1.Admin) // TODO replace with regular client once Delete works
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
receiver := v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -969,22 +955,21 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
},
}
created, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
created, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, created)
require.NotEmpty(t, created.ResourceVersion)
t.Run("should conflict if version does not match", func(t *testing.T) {
t.Run("should return conflict if version does not match", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "email"))
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
for i, integration := range actualUpdated.Spec.Integrations {
updated.Spec.Integrations[i].Uid = integration.Uid
@@ -996,25 +981,25 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.ResourceVersion = ""
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "webhook"))
_, err := oldClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err) // TODO Change that? K8s returns 400 instead.
})
t.Run("should fail to delete if version does not match", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer("something"),
},
})
require.Truef(t, errors.IsConflict(err), "should get conflict error but got %s", err)
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should succeed if version matches", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -1022,10 +1007,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
require.NoError(t, err)
})
t.Run("should succeed if version is empty", func(t *testing.T) {
actual, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
actual, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -1040,8 +1025,7 @@ func TestIntegrationPatch(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
receiver := v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -1056,40 +1040,40 @@ func TestIntegrationPatch(t *testing.T) {
},
}
current, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
current, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, current)
t.Run("should patch with json patch", func(t *testing.T) {
current, err := adminClient.Get(ctx, current.GetStaticMetadata().Identifier())
current, err := adminClient.Get(ctx, current.Name, v1.GetOptions{})
require.NoError(t, err)
index := slices.IndexFunc(current.Spec.Integrations, func(t v0alpha1.ReceiverIntegration) bool {
return t.Type == "webhook"
})
patch := []resource.PatchOperation{
patch := []map[string]any{
{
Operation: "remove",
Path: fmt.Sprintf("/spec/integrations/%d/settings/username", index),
"op": "remove",
"path": fmt.Sprintf("/spec/integrations/%d/settings/username", index),
},
{
Operation: "remove",
Path: fmt.Sprintf("/spec/integrations/%d/secureFields/password", index),
"op": "remove",
"path": fmt.Sprintf("/spec/integrations/%d/secureFields/password", index),
},
{
Operation: "replace",
Path: fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
Value: "bearer",
"op": "replace",
"path": fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
"value": "bearer",
},
{
Operation: "add",
Path: fmt.Sprintf("/spec/integrations/%d/settings/authorization_credentials", index),
Value: "authz-token",
"op": "add",
"path": fmt.Sprintf("/spec/integrations/%d/settings/authorization_credentials", index),
"value": "authz-token",
},
{
Operation: "remove",
Path: fmt.Sprintf("/spec/integrations/%d/secureFields/authorization_credentials", index),
"op": "remove",
"path": fmt.Sprintf("/spec/integrations/%d/secureFields/authorization_credentials", index),
},
}
@@ -1100,7 +1084,10 @@ func TestIntegrationPatch(t *testing.T) {
delete(expected.SecureFields, "password")
expected.SecureFields["authorization_credentials"] = true
result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
patchData, err := json.Marshal(patch)
require.NoError(t, err)
result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
require.NoError(t, err)
require.EqualValues(t, expected, result.Spec.Integrations[index])
@@ -1140,8 +1127,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
// Prepare environment and create notification policy and rule that use time receiver
alertmanagerRaw, err := testData.ReadFile(path.Join("test-data", "notification-settings.json"))
require.NoError(t, err)
@@ -1160,7 +1146,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
_, status, data := legacyCli.PostRulesGroupWithStatus(t, folderUID, &ruleGroup, false)
require.Equalf(t, http.StatusAccepted, status, "Failed to post Rule: %s", data)
receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
receivers, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 2)
idx := slices.IndexFunc(receivers.Items, func(interval v0alpha1.Receiver) bool {
@@ -1178,7 +1164,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
expectedTitle := renamed.Spec.Title + "-new"
renamed.Spec.Title = expectedTitle
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.NoError(t, err)
updatedRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
@@ -1192,7 +1178,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
assert.Equalf(t, expectedTitle, route.Receiver, "time receiver in routes should have been renamed but it was not")
}
actual, err = adminClient.Get(ctx, actual.GetStaticMetadata().Identifier())
actual, err = adminClient.Get(ctx, actual.Name, v1.GetOptions{})
require.NoError(t, err)
receiver = *actual
@@ -1208,20 +1194,20 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, &currentRoute, orgID))
})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
t.Run("provisioned rules", func(t *testing.T) {
ruleUid := currentRuleGroup.Rules[0].GrafanaManagedAlert.UID
rule := &ngmodels.AlertRule{UID: ruleUid}
require.NoError(t, db.SetProvenance(ctx, rule, orgID, "API"))
resource := &ngmodels.AlertRule{UID: ruleUid}
require.NoError(t, db.SetProvenance(ctx, resource, orgID, "API"))
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, rule, orgID))
require.NoError(t, db.DeleteProvenance(ctx, resource, orgID))
})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -1230,7 +1216,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
t.Run("Delete", func(t *testing.T) {
t.Run("should fail to delete if receiver is used in rule and routes", func(t *testing.T) {
err := adminClient.Delete(ctx, receiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := adminClient.Delete(ctx, receiver.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -1239,7 +1225,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
route.Routes[0].Receiver = ""
legacyCli.UpdateRoute(t, route, true)
err = adminClient.Delete(ctx, receiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err = adminClient.Delete(ctx, receiver.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
})
@@ -1251,11 +1237,10 @@ func TestIntegrationCRUD(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
var defaultReceiver *v0alpha1.Receiver
t.Run("should list the default receiver", func(t *testing.T) {
items, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
items, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
assert.Len(t, items.Items, 1)
defaultReceiver = &items.Items[0]
@@ -1264,7 +1249,7 @@ func TestIntegrationCRUD(t *testing.T) {
assert.NotEmpty(t, defaultReceiver.Name)
assert.NotEmpty(t, defaultReceiver.ResourceVersion)
defaultReceiver, err = adminClient.Get(ctx, defaultReceiver.GetStaticMetadata().Identifier())
defaultReceiver, err = adminClient.Get(ctx, defaultReceiver.Name, v1.GetOptions{})
require.NoError(t, err)
assert.NotEmpty(t, defaultReceiver.UID)
assert.NotEmpty(t, defaultReceiver.Name)
@@ -1277,7 +1262,7 @@ func TestIntegrationCRUD(t *testing.T) {
newDefault := defaultReceiver.Copy().(*v0alpha1.Receiver)
newDefault.Spec.Integrations = append(newDefault.Spec.Integrations, createIntegration(t, line.Type))
updatedReceiver, err := adminClient.Update(ctx, newDefault, resource.UpdateOptions{})
updatedReceiver, err := adminClient.Update(ctx, newDefault, v1.UpdateOptions{})
require.NoError(t, err)
expected := newDefault.Copy().(*v0alpha1.Receiver)
@@ -1305,12 +1290,12 @@ func TestIntegrationCRUD(t *testing.T) {
Integrations: []v0alpha1.ReceiverIntegration{},
},
}
_, err := adminClient.Create(ctx, newReceiver, resource.CreateOptions{})
_, err := adminClient.Create(ctx, newReceiver, v1.CreateOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
t.Run("should not allow deleting default receiver", func(t *testing.T) {
err := adminClient.Delete(ctx, defaultReceiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := adminClient.Delete(ctx, defaultReceiver.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -1332,7 +1317,7 @@ func TestIntegrationCRUD(t *testing.T) {
Title: "all-receivers",
Integrations: integrations,
},
}, resource.CreateOptions{})
}, v1.CreateOptions{})
require.NoError(t, err)
require.Len(t, receiver.Spec.Integrations, len(integrations))
@@ -1357,7 +1342,7 @@ func TestIntegrationCRUD(t *testing.T) {
})
t.Run("should be able to read what it created", func(t *testing.T) {
get, err := adminClient.Get(ctx, receiver.GetStaticMetadata().Identifier())
get, err := adminClient.Get(ctx, receiver.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, receiver, get)
t.Run("should return secrets in secureFields but not settings", func(t *testing.T) {
@@ -1409,7 +1394,7 @@ func TestIntegrationCRUD(t *testing.T) {
Title: fmt.Sprintf("invalid-%s", key),
Integrations: []v0alpha1.ReceiverIntegration{integration},
},
}, resource.CreateOptions{})
}, v1.CreateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", receiver)
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest, got: %s", err)
})
@@ -1423,8 +1408,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
recv1 := &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -1436,7 +1420,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
},
},
}
recv1, err = adminClient.Create(ctx, recv1, resource.CreateOptions{})
recv1, err := adminClient.Create(ctx, recv1, v1.CreateOptions{})
require.NoError(t, err)
recv2 := &v0alpha1.Receiver{
@@ -1450,7 +1434,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
},
},
}
recv2, err = adminClient.Create(ctx, recv2, resource.CreateOptions{})
recv2, err = adminClient.Create(ctx, recv2, v1.CreateOptions{})
require.NoError(t, err)
env := helper.GetEnv()
@@ -1460,20 +1444,18 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.EmbeddedContactPoint{
UID: *recv2.Spec.Integrations[0].Uid,
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
recv2, err = adminClient.Get(ctx, recv2.GetStaticMetadata().Identifier())
recv2, err = adminClient.Get(ctx, recv2.Name, v1.GetOptions{})
require.NoError(t, err)
receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
receivers, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 3) // Includes default.
t.Run("should filter by receiver name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{
"spec.title=" + recv1.Spec.Title,
},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.title=" + recv1.Spec.Title,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1481,10 +1463,8 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
})
t.Run("should filter by metadata name", func(t *testing.T) {
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{
"metadata.name=" + recv2.Name,
},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "metadata.name=" + recv2.Name,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1493,10 +1473,8 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
t.Run("should filter by multiple filters", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{
fmt.Sprintf("metadata.name=%s,spec.title=%s", recv2.Name, recv2.Spec.Title),
},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s,spec.title=%s", recv2.Name, recv2.Spec.Title),
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1504,10 +1482,8 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
})
t.Run("should be empty when filter does not match", func(t *testing.T) {
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{
fmt.Sprintf("metadata.name=%s", "unknown"),
},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
})
require.NoError(t, err)
require.Empty(t, list.Items)
@@ -1521,8 +1497,7 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
helper := getTestHelper(t)
receiverClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
receiverClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
for _, receiver := range amConfig.AlertmanagerConfig.Receivers {
if receiver.Name == "grafana-default-email" {
continue
@@ -1548,7 +1523,7 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
})
}
created, err := receiverClient.Create(ctx, &toCreate, resource.CreateOptions{})
created, err := receiverClient.Create(ctx, &toCreate, v1.CreateOptions{})
require.NoError(t, err)
for i, integration := range created.Spec.Integrations {
@@ -1558,11 +1533,10 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
nsMapper := func(_ int64) string { return "default" }
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
routeClient := test_common.NewRoutingTreeClient(t, helper.Org1.Admin)
v1route, err := routingtree.ConvertToK8sResource(helper.Org1.AdminServiceAccount.OrgId, *amConfig.AlertmanagerConfig.Route, "", nsMapper)
require.NoError(t, err)
_, err = routeClient.Update(ctx, v1route, resource.UpdateOptions{})
_, err = routeClient.Update(ctx, v1route, v1.UpdateOptions{})
require.NoError(t, err)
}
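The patch migration above drops the custom `resource.PatchRequest` type and instead marshals plain `[]map[string]any` operations into an RFC 6902 JSON Patch body for `adminClient.Patch(ctx, name, types.JSONPatchType, patchData, v1.PatchOptions{})`. A minimal stdlib-only sketch of that construction (the `buildPatch` helper name is illustrative, not from the diff):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildPatch assembles JSON Patch operations as plain maps, the same shape
// the migrated tests marshal before calling the typed client's Patch method.
// The /spec/integrations/... paths mirror the Receiver spec used in the tests.
func buildPatch(index int) ([]byte, error) {
	patch := []map[string]any{
		{
			"op":   "remove",
			"path": fmt.Sprintf("/spec/integrations/%d/settings/username", index),
		},
		{
			"op":    "replace",
			"path":  fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
			"value": "bearer",
		},
	}
	// json.Marshal produces the request body expected for types.JSONPatchType.
	return json.Marshal(patch)
}

func main() {
	data, err := buildPatch(0)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
```

Because `encoding/json` sorts map keys, the body is deterministic, e.g. `[{"op":"remove","path":"/spec/integrations/0/settings/username"},{"op":"replace","path":"/spec/integrations/0/settings/authorization_scheme","value":"bearer"}]`.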
@@ -1,14 +1,10 @@
{
"kind": "ReceiverList",
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"metadata": {},
"items": [
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWls",
"namespace": "default",
"uid": "zyXFk301pvwNz4HRPrTMKPMFO2934cPB7H1ZXmyM1TUX",
"resourceVersion": "a82b34036bdabbc4",
"annotations": {
"grafana.com/access/canAdmin": "true",
"grafana.com/access/canDelete": "true",
@@ -19,49 +15,53 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "none"
}
},
"name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWls",
"namespace": "default",
"resourceVersion": "a82b34036bdabbc4",
"uid": "zyXFk301pvwNz4HRPrTMKPMFO2934cPB7H1ZXmyM1TUX"
},
"spec": {
"title": "grafana-default-email",
"integrations": [
{
"uid": "",
"type": "email",
"version": "v1",
"disableResolveMessage": false,
"settings": {
"addresses": "\u003cexample@email.com\u003e"
}
},
"type": "email",
"uid": "",
"version": "v1"
}
]
],
"title": "grafana-default-email"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
"grafana.com/canUse": "false",
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWlsdGVzdC1jcmVhdGUtZ2V0LWNvbmZpZw",
"namespace": "default",
"uid": "JzW6DIlcxj4sRN8A2ULcwTXAmm0Vs0Z68aEBqXSvxK0X",
"resourceVersion": "b2823b50ffa1eff6",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
"grafana.com/canUse": "false",
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
"uid": "JzW6DIlcxj4sRN8A2ULcwTXAmm0Vs0Z68aEBqXSvxK0X"
},
"spec": {
"title": "grafana-default-emailtest-create-get-config",
"integrations": []
"integrations": [],
"title": "grafana-default-emailtest-create-get-config"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "ZGlzY29yZA",
"namespace": "default",
"uid": "8cH8Ql2S6VhPEVUhwlQEKYWyPbRJS7YKj2lEXdrehH8X",
"resourceVersion": "06e437697f62ac59",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -69,16 +69,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "ZGlzY29yZA",
"namespace": "default",
"resourceVersion": "06e437697f62ac59",
"uid": "8cH8Ql2S6VhPEVUhwlQEKYWyPbRJS7YKj2lEXdrehH8X"
},
"spec": {
"title": "discord",
"integrations": [
{
"uid": "",
"type": "discord",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"webhook_url": true
},
"settings": {
"http_config": {
"enable_http2": true,
@@ -92,19 +95,18 @@
"send_resolved": true,
"title": "{{ template \"discord.default.title\" . }}"
},
"secureFields": {
"webhook_url": true
}
"type": "discord",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "discord"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "ZW1haWw",
"namespace": "default",
"uid": "bhlvlN758xmnwVrHVPX0c5XvFHepenUbOXP0fuE6eUMX",
"resourceVersion": "9b3ffed277cee189",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -112,16 +114,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "ZW1haWw",
"namespace": "default",
"resourceVersion": "9b3ffed277cee189",
"uid": "bhlvlN758xmnwVrHVPX0c5XvFHepenUbOXP0fuE6eUMX"
},
"spec": {
"title": "email",
"integrations": [
{
"uid": "",
"type": "email",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"auth_password": true
},
"settings": {
"auth_username": "alertmanager",
"from": "alertmanager@example.com",
@@ -139,19 +144,18 @@
},
"to": "team@example.com"
},
"secureFields": {
"auth_password": true
}
"type": "email",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "email"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "amlyYQ",
"namespace": "default",
"uid": "7Pu4xcRXbvw4XEX279SoqyO8Ibo8cMl0vAJyYTsJ0NEX",
"resourceVersion": "deae9d34f8554205",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -159,16 +163,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "amlyYQ",
"namespace": "default",
"resourceVersion": "deae9d34f8554205",
"uid": "7Pu4xcRXbvw4XEX279SoqyO8Ibo8cMl0vAJyYTsJ0NEX"
},
"spec": {
"title": "jira",
"integrations": [
{
"uid": "",
"type": "jira",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.basic_auth.password": true
},
"settings": {
"api_url": "http://localhost/jira",
"custom_fields": {
@@ -196,19 +203,18 @@
"send_resolved": true,
"summary": "{{ template \"jira.default.summary\" . }}"
},
"secureFields": {
"http_config.basic_auth.password": true
}
"type": "jira",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "jira"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "bXN0ZWFtcw",
"namespace": "default",
"uid": "z7xTMDjrk1HAHXPEx78tQb63LXYA6ivXLOtz2Z09ucIX",
"resourceVersion": "95c8d082d65466a3",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -216,16 +222,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "bXN0ZWFtcw",
"namespace": "default",
"resourceVersion": "95c8d082d65466a3",
"uid": "z7xTMDjrk1HAHXPEx78tQb63LXYA6ivXLOtz2Z09ucIX"
},
"spec": {
"title": "msteams",
"integrations": [
{
"uid": "",
"type": "teams",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"webhook_url": true
},
"settings": {
"http_config": {
"enable_http2": true,
@@ -240,19 +249,18 @@
"text": "{{ template \"msteams.default.text\" . }}",
"title": "{{ template \"msteams.default.title\" . }}"
},
"secureFields": {
"webhook_url": true
}
"type": "teams",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "msteams"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "b3BzZ2VuaWU",
"namespace": "default",
"uid": "XmkZ214Dj030hvynYiwNLq8i6uRCjUYXMXjE5m19OKAX",
"resourceVersion": "8ee2957ba150ba16",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -260,16 +268,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "b3BzZ2VuaWU",
"namespace": "default",
"resourceVersion": "8ee2957ba150ba16",
"uid": "XmkZ214Dj030hvynYiwNLq8i6uRCjUYXMXjE5m19OKAX"
},
"spec": {
"title": "opsgenie",
"integrations": [
{
"uid": "",
"type": "opsgenie",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_key": true
},
"settings": {
"actions": "test actions",
"api_url": "http://localhost/opsgenie/",
@@ -300,19 +311,18 @@
"tags": "test-tags",
"update_alerts": true
},
"secureFields": {
"api_key": true
}
"type": "opsgenie",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "opsgenie"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "cGFnZXJkdXR5",
"namespace": "default",
"uid": "QNitkUCkwzrIc7WVCCJGGDyvXLyo9csSUVqfyStyctQX",
"resourceVersion": "fe673d5dcd67ccf0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -320,16 +330,20 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "cGFnZXJkdXR5",
"namespace": "default",
"resourceVersion": "fe673d5dcd67ccf0",
"uid": "QNitkUCkwzrIc7WVCCJGGDyvXLyo9csSUVqfyStyctQX"
},
"spec": {
"title": "pagerduty",
"integrations": [
{
"uid": "",
"type": "pagerduty",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"routing_key": true,
"service_key": true
},
"settings": {
"class": "test class",
"client": "Alertmanager",
@@ -369,20 +383,18 @@
"source": "test source",
"url": "http://localhost/pagerduty"
},
"secureFields": {
"routing_key": true,
"service_key": true
}
"type": "pagerduty",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "pagerduty"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "cHVzaG92ZXI",
"namespace": "default",
"uid": "t2TJSktI6vyGfdbLOKmxH4eBqgcIGsAuW8Qm9m0HRycX",
"resourceVersion": "6ae076725ab463e0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -390,16 +402,21 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "cHVzaG92ZXI",
"namespace": "default",
"resourceVersion": "6ae076725ab463e0",
"uid": "t2TJSktI6vyGfdbLOKmxH4eBqgcIGsAuW8Qm9m0HRycX"
},
"spec": {
"title": "pushover",
"integrations": [
{
"uid": "",
"type": "pushover",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.authorization.credentials": true,
"token": true,
"user_key": true
},
"settings": {
"expire": "1h0m0s",
"http_config": {
@@ -420,21 +437,18 @@
"title": "{{ template \"pushover.default.title\" . }}",
"url": "http://localhost/pushover"
},
"secureFields": {
"http_config.authorization.credentials": true,
"token": true,
"user_key": true
}
"type": "pushover",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "pushover"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "c2xhY2s",
"namespace": "default",
"uid": "xSB0hnoc9j1CnLCHR3VgeVGXdVXILM0p2dM64bbHN9oX",
"resourceVersion": "ec0e343029ff5d8b",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -442,16 +456,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "c2xhY2s",
"namespace": "default",
"resourceVersion": "ec0e343029ff5d8b",
"uid": "xSB0hnoc9j1CnLCHR3VgeVGXdVXILM0p2dM64bbHN9oX"
},
"spec": {
"title": "slack",
"integrations": [
{
"uid": "",
"type": "slack",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_url": true
},
"settings": {
"actions": [
{
@@ -505,19 +522,18 @@
"title_link": "http://localhost",
"username": "Alerting Team"
},
"secureFields": {
"api_url": true
}
"type": "slack",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "slack"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "c25z",
"namespace": "default",
"uid": "vSP8NtFr23hnqZqLxRgzUKfr1wOemOvZm1S6MYkfRI4X",
"resourceVersion": "77d734ad4c196d36",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -525,16 +541,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "c25z",
"namespace": "default",
"resourceVersion": "77d734ad4c196d36",
"uid": "vSP8NtFr23hnqZqLxRgzUKfr1wOemOvZm1S6MYkfRI4X"
},
"spec": {
"title": "sns",
"integrations": [
{
"uid": "",
"type": "sns",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"sigv4.SecretKey": true
},
"settings": {
"attributes": {
"key1": "value1"
@@ -558,19 +577,18 @@
"subject": "{{ template \"sns.default.subject\" . }}",
"topic_arn": "arn:aws:sns:us-east-1:123456789012:alerts"
},
"secureFields": {
"sigv4.SecretKey": true
}
"type": "sns",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "sns"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "dGVsZWdyYW0",
"namespace": "default",
"uid": "XLWjtmYcjP5PiqBCwZXX3YKHV1G8niRtpCakIpcHqoYX",
"resourceVersion": "d9850878a33e302e",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -578,16 +596,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "dGVsZWdyYW0",
"namespace": "default",
"resourceVersion": "d9850878a33e302e",
"uid": "XLWjtmYcjP5PiqBCwZXX3YKHV1G8niRtpCakIpcHqoYX"
},
"spec": {
"title": "telegram",
"integrations": [
{
"uid": "",
"type": "telegram",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"token": true
},
"settings": {
"api_url": "http://localhost/telegram-default",
"chat": -1001234567890,
@@ -603,19 +624,18 @@
"parse_mode": "MarkdownV2",
"send_resolved": true
},
"secureFields": {
"token": true
}
"type": "telegram",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "telegram"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "dmljdG9yb3Bz",
"namespace": "default",
"uid": "EWiwQ6TIW0GpEo46WusW7Nvg0HuD4QAbHf0JZ2OSOhEX",
"resourceVersion": "1e6886531440afc2",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -623,16 +643,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "dmljdG9yb3Bz",
"namespace": "default",
"resourceVersion": "1e6886531440afc2",
"uid": "EWiwQ6TIW0GpEo46WusW7Nvg0HuD4QAbHf0JZ2OSOhEX"
},
"spec": {
"title": "victorops",
"integrations": [
{
"uid": "",
"type": "victorops",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_key": true
},
"settings": {
"api_url": "http://localhost/victorops-default/",
"entity_display_name": "{{ template \"victorops.default.entity_display_name\" . }}",
@@ -651,19 +674,18 @@
"send_resolved": true,
"state_message": "{{ template \"victorops.default.state_message\" . }}"
},
"secureFields": {
"api_key": true
}
"type": "victorops",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "victorops"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2ViZXg",
"namespace": "default",
"uid": "wDNufI44UXHWq4ERRYenZ7XgXVV3Tjxaokz9IjMRZ54X",
"resourceVersion": "08fc955a08dfe9c0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -671,16 +693,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "d2ViZXg",
"namespace": "default",
"resourceVersion": "08fc955a08dfe9c0",
"uid": "wDNufI44UXHWq4ERRYenZ7XgXVV3Tjxaokz9IjMRZ54X"
},
"spec": {
"title": "webex",
"integrations": [
{
"uid": "",
"type": "webex",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.authorization.credentials": true
},
"settings": {
"api_url": "http://localhost/webes-default",
"http_config": {
@@ -698,19 +723,18 @@
"room_id": "Y2lzY29zcGFyazovL3VzL1JPT00v12345678",
"send_resolved": true
},
"secureFields": {
"http_config.authorization.credentials": true
}
"type": "webex",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "webex"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2ViaG9vaw",
"namespace": "default",
"uid": "aKzigXATPp6HOh20yTrlTcuF2Y9IrPHridGIcWrJygsX",
"resourceVersion": "494392f899a7b410",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -718,16 +742,19 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "d2ViaG9vaw",
"namespace": "default",
"resourceVersion": "494392f899a7b410",
"uid": "aKzigXATPp6HOh20yTrlTcuF2Y9IrPHridGIcWrJygsX"
},
"spec": {
"title": "webhook",
"integrations": [
{
"uid": "",
"type": "webhook",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"url": true
},
"settings": {
"http_config": {
"enable_http2": true,
@@ -742,19 +769,18 @@
"timeout": "0s",
"url_file": ""
},
"secureFields": {
"url": true
}
"type": "webhook",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "webhook"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2VjaGF0",
"namespace": "default",
"uid": "jkXCvNrNVw7XX5nmYFyrGiA4ckAvJ282u2scW8KZq7IX",
"resourceVersion": "135913515cbc156b",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -762,16 +788,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "d2VjaGF0",
"namespace": "default",
"resourceVersion": "135913515cbc156b",
"uid": "jkXCvNrNVw7XX5nmYFyrGiA4ckAvJ282u2scW8KZq7IX"
},
"spec": {
"title": "wechat",
"integrations": [
{
"uid": "",
"type": "wechat",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_secret": true
},
"settings": {
"agent_id": "1000002",
"api_url": "http://localhost/wechat/",
@@ -791,12 +820,15 @@
"to_tag": "tag1",
"to_user": "user1"
},
"secureFields": {
"api_secret": true
}
"type": "wechat",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "wechat"
}
}
]
}
],
"kind": "ReceiverList",
"metadata": {}
}
@@ -8,7 +8,6 @@ import (
"testing"
"time"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/prometheus/alertmanager/config"
"github.com/prometheus/alertmanager/pkg/labels"
"github.com/prometheus/common/model"
@@ -40,11 +39,6 @@ import (
"github.com/grafana/grafana/pkg/util/testutil"
)
var defaultTreeIdentifier = resource.Identifier{
Namespace: apis.DefaultNamespace,
Name: v0alpha1.UserDefinedRoutingTreeName,
}
func TestMain(m *testing.M) {
testsuite.Run(m)
}
@@ -58,8 +52,7 @@ func TestIntegrationNotAllowedMethods(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
route := &v0alpha1.RoutingTree{
ObjectMeta: v1.ObjectMeta{
@@ -67,7 +60,11 @@ func TestIntegrationNotAllowedMethods(t *testing.T) {
},
Spec: v0alpha1.RoutingTreeSpec{},
}
_, err = client.Create(ctx, route, resource.CreateOptions{})
_, err := client.Create(ctx, route, v1.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsMethodNotSupported(err), "Expected MethodNotSupported but got %s", err)
err = client.Client.DeleteCollection(ctx, v1.DeleteOptions{}, v1.ListOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsMethodNotSupported(err), "Expected MethodNotSupported but got %s", err)
}
@@ -157,52 +154,50 @@ func TestIntegrationAccessControl(t *testing.T) {
}
admin := org1.Admin
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewRoutingTreeClient(t, admin)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
client := common.NewRoutingTreeClient(t, tc.user)
if tc.canRead {
t.Run("should be able to list routing trees", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
require.Equal(t, v0alpha1.UserDefinedRoutingTreeName, list.Items[0].Name)
})
t.Run("should be able to read routing trees by resource identifier", func(t *testing.T) {
_, err := client.Get(ctx, defaultTreeIdentifier)
_, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to list routing trees", func(t *testing.T) {
_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
_, err := client.List(ctx, v1.ListOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should be forbidden to read routing tree by name", func(t *testing.T) {
_, err := client.Get(ctx, defaultTreeIdentifier)
_, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
}
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
expected := current.Copy().(*v0alpha1.RoutingTree)
expected.Spec.Routes = []v0alpha1.RoutingTreeRoute{
@@ -222,7 +217,7 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to update routing tree", func(t *testing.T) {
updated, err := client.Update(ctx, expected, resource.UpdateOptions{})
updated, err := client.Update(ctx, expected, v1.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
expected = updated
@@ -230,23 +225,21 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := expected.Copy().(*v0alpha1.RoutingTree)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to update routing tree", func(t *testing.T) {
_, err := client.Update(ctx, expected, resource.UpdateOptions{})
_, err := client.Update(ctx, expected, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := expected.Copy().(*v0alpha1.RoutingTree)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
@@ -255,32 +248,32 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to reset routing tree", func(t *testing.T) {
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := client.Delete(ctx, expected.Name, v1.DeleteOptions{})
require.NoError(t, err)
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to reset routing tree", func(t *testing.T) {
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := client.Delete(ctx, expected.Name, v1.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
}
})
err := adminClient.Delete(ctx, defaultTreeIdentifier, resource.DeleteOptions{})
err := adminClient.Delete(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.DeleteOptions{})
require.NoError(t, err)
}
}
@@ -294,22 +287,21 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1
admin := org.Admin
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewRoutingTreeClient(t, admin)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
db, err := store.ProvideDBStore(env.Cfg, env.FeatureToggles, env.SQLStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac, bus.ProvideBus(tracing.InitializeTracerForTest()))
require.NoError(t, err)
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, "none", current.GetProvenanceStatus())
t.Run("should provide provenance status", func(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.Route{}, admin.Identity.GetOrgID(), "API"))
got, err := adminClient.Get(ctx, current.GetStaticMetadata().Identifier())
got, err := adminClient.Get(ctx, current.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
@@ -327,13 +319,13 @@ func TestIntegrationProvisioning(t *testing.T) {
},
}
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, current.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := adminClient.Delete(ctx, current.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -344,37 +336,35 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
require.NotEmpty(t, current.ResourceVersion)
t.Run("should forbid if version does not match", func(t *testing.T) {
updated := current.Copy().(*v0alpha1.RoutingTree)
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.Spec.Defaults.GroupBy = append(updated.Spec.Defaults.GroupBy, "data")
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
})
t.Run("should update if version is empty", func(t *testing.T) {
current, err = adminClient.Get(ctx, defaultTreeIdentifier)
current, err = adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.ResourceVersion = ""
updated.Spec.Routes = append(updated.Spec.Routes, v0alpha1.RoutingTreeRoute{Continue: true})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, current.ResourceVersion, actualUpdated.ResourceVersion)
@@ -390,22 +380,20 @@ func TestIntegrationDataConsistency(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
receiver := "grafana-default-email"
timeInterval := "test-time-interval"
createRoute := func(t *testing.T, route definitions.Route) {
t.Helper()
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
routeClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
v1Route, err := routingtree.ConvertToK8sResource(helper.Org1.Admin.Identity.GetOrgID(), route, "", func(int64) string { return "default" })
require.NoError(t, err)
_, err = routeClient.Update(ctx, v1Route, resource.UpdateOptions{})
_, err = routeClient.Update(ctx, v1Route, v1.UpdateOptions{})
require.NoError(t, err)
}
_, err = common.NewTimeIntervalClient(t, helper.Org1.Admin).Create(ctx, &v0alpha1.TimeInterval{
_, err := common.NewTimeIntervalClient(t, helper.Org1.Admin).Create(ctx, &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
},
@@ -447,7 +435,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
},
}
createRoute(t, route)
tree, err := client.Get(ctx, defaultTreeIdentifier)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
expected := []v0alpha1.RoutingTreeMatcher{
{
@@ -515,9 +503,9 @@ func TestIntegrationDataConsistency(t *testing.T) {
ensureMatcher(t, labels.MatchNotEqual, "matchers", "v"),
}
tree, err := client.Get(ctx, defaultTreeIdentifier)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
_, err = client.Update(ctx, tree, resource.UpdateOptions{})
_, err = client.Update(ctx, tree, v1.UpdateOptions{})
require.NoError(t, err)
cfg, _, _ = legacyCli.GetAlertmanagerConfigWithStatus(t)
@@ -554,7 +542,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
createRoute(t, route)
t.Run("correctly reads all fields", func(t *testing.T) {
tree, err := client.Get(ctx, defaultTreeIdentifier)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
assert.Equal(t, v0alpha1.RoutingTreeRouteDefaults{
Receiver: receiver,
@@ -601,10 +589,10 @@ func TestIntegrationDataConsistency(t *testing.T) {
t.Run("correctly save all fields", func(t *testing.T) {
before, status, body := legacyCli.GetAlertmanagerConfigWithStatus(t)
require.Equalf(t, http.StatusOK, status, body)
tree, err := client.Get(ctx, defaultTreeIdentifier)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
tree.Spec.Defaults.GroupBy = []string{"test-123", "test-456", "test-789"}
_, err = client.Update(ctx, tree, resource.UpdateOptions{})
_, err = client.Update(ctx, tree, v1.UpdateOptions{})
require.NoError(t, err)
before.AlertmanagerConfig.Route.GroupByStr = []string{"test-123", "test-456", "test-789"}
@@ -652,7 +640,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
}
createRoute(t, route)
tree, err := client.Get(ctx, defaultTreeIdentifier)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
assert.Equal(t, "foo🙂", tree.Spec.Routes[0].GroupBy[0])
expected := []v0alpha1.RoutingTreeMatcher{
@@ -678,8 +666,7 @@ func TestIntegrationExtraConfigsConflicts(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
// Now upload a new extra config
testAlertmanagerConfigYAML := `
@@ -704,7 +691,7 @@ receivers:
}, headers)
require.Equal(t, "success", response.Status)
current, err := client.Get(ctx, defaultTreeIdentifier)
current, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.Spec.Routes = append(updated.Spec.Routes, v0alpha1.RoutingTreeRoute{
@@ -717,7 +704,7 @@ receivers:
},
})
_, err = client.Update(ctx, updated, resource.UpdateOptions{})
_, err = client.Update(ctx, updated, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsBadRequest(err), "Should get BadRequest error but got: %s", err)
@@ -725,6 +712,6 @@ receivers:
legacyCli.ConvertPrometheusDeleteAlertmanagerConfig(t, headers)
// and try again
_, err = client.Update(ctx, updated, resource.UpdateOptions{})
_, err = client.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
}
@@ -6,7 +6,6 @@ import (
"path"
"testing"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.yaml.in/yaml/v3"
@@ -19,6 +18,7 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
"github.com/grafana/grafana/pkg/tests/api/alerting"
"github.com/grafana/grafana/pkg/tests/apis"
"github.com/grafana/grafana/pkg/tests/apis/alerting/notifications/common"
"github.com/grafana/grafana/pkg/tests/testinfra"
"github.com/grafana/grafana/pkg/util/testutil"
)
@@ -35,8 +35,7 @@ func TestIntegrationImportedTemplates(t *testing.T) {
},
})
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
cliCfg := helper.Org1.Admin.NewRestConfig()
alertingApi := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
@@ -58,7 +57,7 @@ func TestIntegrationImportedTemplates(t *testing.T) {
response := alertingApi.ConvertPrometheusPostAlertmanagerConfig(t, amConfig, headers)
require.Equal(t, "success", response.Status)
templates, err := client.List(context.Background(), apis.DefaultNamespace, resource.ListOptions{})
templates, err := client.List(context.Background(), metav1.ListOptions{})
require.NoError(t, err)
require.Len(t, templates.Items, 3)
@@ -91,12 +90,12 @@ func TestIntegrationImportedTemplates(t *testing.T) {
t.Run("should not be able to update", func(t *testing.T) {
tpl := templates.Items[1]
tpl.Spec.Content = "new content"
_, err := client.Update(context.Background(), &tpl, resource.UpdateOptions{})
_, err := client.Update(context.Background(), &tpl, metav1.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
t.Run("should not be able to delete", func(t *testing.T) {
err := client.Delete(context.Background(), templates.Items[1].GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := client.Delete(context.Background(), templates.Items[1].Name, metav1.DeleteOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
@@ -109,14 +108,14 @@ func TestIntegrationImportedTemplates(t *testing.T) {
}
tpl.Spec.Kind = v0alpha1.TemplateGroupTemplateKindGrafana
created, err := client.Create(context.Background(), &tpl, resource.CreateOptions{})
created, err := client.Create(context.Background(), &tpl, metav1.CreateOptions{})
require.NoError(t, err)
assert.NotEqual(t, templates.Items[1].Name, created.Name)
})
t.Run("sort by kind and then name", func(t *testing.T) {
templates, err := client.List(context.Background(), apis.DefaultNamespace, resource.ListOptions{})
templates, err := client.List(context.Background(), metav1.ListOptions{})
require.NoError(t, err)
require.Len(t, templates.Items, 4)
@@ -7,7 +7,6 @@ import (
"testing"
"github.com/grafana/alerting/templates"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
@@ -46,8 +45,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
newTemplate := &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -63,23 +61,23 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
t.Run("create should fail if object name is specified", func(t *testing.T) {
template := newTemplate.Copy().(*v0alpha1.TemplateGroup)
template.Name = "new-templateGroup"
_, err := client.Create(ctx, template, resource.CreateOptions{})
_, err := client.Create(ctx, template, v1.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
var resourceID resource.Identifier
var resourceID string
t.Run("create should succeed and provide resource name", func(t *testing.T) {
actual, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
actual, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
resourceID = actual.GetStaticMetadata().Identifier()
resourceID = actual.Name
})
var existingTemplateGroup *v0alpha1.TemplateGroup
t.Run("resource should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resourceID)
actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newTemplate.Spec, actual.Spec)
@@ -92,12 +90,12 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}
updated := existingTemplateGroup.Copy().(*v0alpha1.TemplateGroup)
updated.Spec.Title = "another-templateGroup"
actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, actual, resource)
@@ -106,7 +104,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
var defaultTemplateGroup *v0alpha1.TemplateGroup
t.Run("default template should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: templates.DefaultTemplateName})
actual, err := client.Get(ctx, templates.DefaultTemplateName, v1.GetOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
@@ -124,7 +122,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
t.Run("create with reserved default title should work", func(t *testing.T) {
template := newTemplate.Copy().(*v0alpha1.TemplateGroup)
template.Spec.Title = defaultTemplateGroup.Spec.Title
actual, err := client.Create(ctx, template, resource.CreateOptions{})
actual, err := client.Create(ctx, template, v1.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
@@ -132,7 +130,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
})
t.Run("default template should not be available by calculated UID", func(t *testing.T) {
actual, err := client.Get(ctx, newTemplateWithOverlappingName.GetStaticMetadata().Identifier())
actual, err := client.Get(ctx, newTemplateWithOverlappingName.Name, v1.GetOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
@@ -217,13 +215,11 @@ func TestIntegrationAccessControl(t *testing.T) {
},
}
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTemplateGroupClient(t, org1.Admin)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
client := common.NewTemplateGroupClient(t, tc.user)
var expected = &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -241,12 +237,12 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canCreate {
t.Run("should be able to create template group", func(t *testing.T) {
actual, err := client.Create(ctx, expected, resource.CreateOptions{})
actual, err := client.Create(ctx, expected, v1.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.Equal(t, expected.Spec, actual.Spec)
t.Run("should fail if already exists", func(t *testing.T) {
_, err := client.Create(ctx, actual, resource.CreateOptions{})
_, err := client.Create(ctx, actual, v1.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
@@ -254,45 +250,45 @@ func TestIntegrationAccessControl(t *testing.T) {
})
} else {
t.Run("should be forbidden to create", func(t *testing.T) {
_, err := client.Create(ctx, expected, resource.CreateOptions{})
_, err := client.Create(ctx, expected, v1.CreateOptions{})
require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
})
// create resource to proceed with other tests
expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)
}
if tc.canRead {
t.Run("should be able to list template groups", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 2) // Includes default template.
})
t.Run("should be able to read template group by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, expected.Spec, got.Spec)
require.Equal(t, expected, got)
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to list template groups", func(t *testing.T) {
_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
_, err := client.List(ctx, v1.ListOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should be forbidden to read template group by name", func(t *testing.T) {
_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
@@ -306,7 +302,7 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to update template group", func(t *testing.T) {
updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
expected = updated
@@ -314,54 +310,52 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.TemplateGroup)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to update template group", func(t *testing.T) {
_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.TemplateGroup)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
}
deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
oldClient := common.NewTemplateGroupClient(t, tc.user) // TODO replace with normal client once delete is fixed
if tc.canDelete {
t.Run("should be able to delete template group", func(t *testing.T) {
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
err := client.Delete(ctx, expected.Name, deleteOptions)
require.NoError(t, err)
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to delete template group", func(t *testing.T) {
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
err := client.Delete(ctx, expected.Name, deleteOptions)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
}
if tc.canRead {
t.Run("should get list with just default template if no template groups", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
require.Equal(t, templates.DefaultTemplateName, list.Items[0].Name)
@@ -380,8 +374,7 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1
admin := org.Admin
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTemplateGroupClient(t, admin)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -397,7 +390,7 @@ func TestIntegrationProvisioning(t *testing.T) {
Content: `{{ define "test" }} test {{ end }}`,
Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
},
}, resource.CreateOptions{})
}, v1.CreateOptions{})
require.NoError(t, err)
require.Equal(t, "none", created.GetProvenanceStatus())
@@ -406,7 +399,7 @@ func TestIntegrationProvisioning(t *testing.T) {
Name: created.Spec.Title,
}, admin.Identity.GetOrgID(), "API"))
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
@@ -414,12 +407,12 @@ func TestIntegrationProvisioning(t *testing.T) {
updated := created.Copy().(*v0alpha1.TemplateGroup)
updated.Spec.Content = `{{ define "another-test" }} test {{ end }}`
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -430,9 +423,8 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
oldClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
template := v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -444,22 +436,21 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
},
}
created, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
created, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, created)
require.NotEmpty(t, created.ResourceVersion)
t.Run("should forbid if version does not match", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.TemplateGroup)
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.TemplateGroup)
updated.Spec.Content = `{{ define "test-another" }} test {{ end }}`
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
@@ -469,16 +460,16 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
updated.ResourceVersion = ""
updated.Spec.Content = `{{ define "test-another-2" }} test {{ end }}`
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, created.ResourceVersion, actualUpdated.ResourceVersion)
})
t.Run("should fail to delete if version does not match", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer("something"),
},
@@ -486,10 +477,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should succeed if version matches", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -497,10 +488,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
require.NoError(t, err)
})
t.Run("should succeed if version is empty", func(t *testing.T) {
actual, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
actual, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -515,8 +506,7 @@ func TestIntegrationPatch(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
template := v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -529,10 +519,8 @@ func TestIntegrationPatch(t *testing.T) {
},
}
current, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
current, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
require.NoError(t, err)
oldClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
require.NotNil(t, current)
require.NotEmpty(t, current.ResourceVersion)
@@ -543,7 +531,7 @@ func TestIntegrationPatch(t *testing.T) {
}
}`
result, err := oldClient.Patch(ctx, current.GetStaticMetadata().Identifier().Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
result, err := adminClient.Patch(ctx, current.Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
require.NoError(t, err)
require.Equal(t, `{{ define "test-another" }} test {{ end }}`, result.Spec.Content)
current = result
@@ -552,15 +540,18 @@ func TestIntegrationPatch(t *testing.T) {
t.Run("should patch with json patch", func(t *testing.T) {
expected := `{{ define "test-json-patch" }} test {{ end }}`
patch := []resource.PatchOperation{
patch := []map[string]interface{}{
{
Operation: "replace",
Path: "/spec/content",
Value: expected,
"op": "replace",
"path": "/spec/content",
"value": expected,
},
}
result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
patchData, err := json.Marshal(patch)
require.NoError(t, err)
result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
require.NoError(t, err)
expectedSpec := current.Spec
expectedSpec.Content = expected
@@ -574,8 +565,7 @@ func TestIntegrationListSelector(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
template1 := &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -587,7 +577,7 @@ func TestIntegrationListSelector(t *testing.T) {
Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
},
}
template1, err = adminClient.Create(ctx, template1, resource.CreateOptions{})
template1, err := adminClient.Create(ctx, template1, v1.CreateOptions{})
require.NoError(t, err)
template2 := &v0alpha1.TemplateGroup{
@@ -600,7 +590,7 @@ func TestIntegrationListSelector(t *testing.T) {
Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
},
}
template2, err = adminClient.Create(ctx, template2, resource.CreateOptions{})
template2, err = adminClient.Create(ctx, template2, v1.CreateOptions{})
require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -609,18 +599,18 @@ func TestIntegrationListSelector(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.NotificationTemplate{
Name: template2.Spec.Title,
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
template2, err = adminClient.Get(ctx, template2.GetStaticMetadata().Identifier())
template2, err = adminClient.Get(ctx, template2.Name, v1.GetOptions{})
require.NoError(t, err)
tmpls, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
tmpls, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, tmpls.Items, 3) // Includes default template.
t.Run("should filter by template name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"spec.title=" + template1.Spec.Title},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.title=" + template1.Spec.Title,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -628,8 +618,8 @@ func TestIntegrationListSelector(t *testing.T) {
})
t.Run("should filter by template metadata name", func(t *testing.T) {
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"metadata.name=" + template2.Name},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "metadata.name=" + template2.Name,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -638,8 +628,8 @@ func TestIntegrationListSelector(t *testing.T) {
t.Run("should filter by multiple filters", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s,spec.title=%s", template2.Name, template2.Spec.Title)},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s,spec.title=%s", template2.Name, template2.Spec.Title),
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -647,8 +637,8 @@ func TestIntegrationListSelector(t *testing.T) {
})
t.Run("should be empty when filter does not match", func(t *testing.T) {
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", "unknown")},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
})
require.NoError(t, err)
require.Empty(t, list.Items)
@@ -656,17 +646,17 @@ func TestIntegrationListSelector(t *testing.T) {
t.Run("should filter by default template name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"spec.title=" + v0alpha1.DefaultTemplateTitle},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.title=" + v0alpha1.DefaultTemplateTitle,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
require.Equal(t, templates.DefaultTemplateName, list.Items[0].Name)
// Now just non-default templates
list, err = adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"spec.title!=" + v0alpha1.DefaultTemplateTitle}},
)
list, err = adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.title!=" + v0alpha1.DefaultTemplateTitle,
})
require.NoError(t, err)
require.Len(t, list.Items, 2)
require.NotEqualf(t, templates.DefaultTemplateName, list.Items[0].Name, "Expected non-default template but got %s", list.Items[0].Name)
@@ -679,8 +669,7 @@ func TestIntegrationKinds(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
newTemplate := &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -694,17 +683,17 @@ func TestIntegrationKinds(t *testing.T) {
}
t.Run("should not let create Mimir template", func(t *testing.T) {
_, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
_, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
t.Run("should not let change kind", func(t *testing.T) {
newTemplate.Spec.Kind = v0alpha1.TemplateGroupTemplateKindGrafana
created, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
created, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
require.NoError(t, err)
created.Spec.Kind = v0alpha1.TemplateGroupTemplateKindMimir
_, err = client.Update(ctx, created, resource.UpdateOptions{})
_, err = client.Update(ctx, created, v1.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
}
@@ -10,7 +10,6 @@ import (
"slices"
"testing"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/prometheus/alertmanager/config"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -58,8 +57,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewTimeIntervalClient(t, helper.Org1.Admin)
newInterval := &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
@@ -74,22 +72,22 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
t.Run("create should fail if object name is specified", func(t *testing.T) {
interval := newInterval.Copy().(*v0alpha1.TimeInterval)
interval.Name = "time-newInterval"
_, err := client.Create(ctx, interval, resource.CreateOptions{})
_, err := client.Create(ctx, interval, v1.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
var resourceID resource.Identifier
var resourceID string
t.Run("create should succeed and provide resource name", func(t *testing.T) {
actual, err := client.Create(ctx, newInterval, resource.CreateOptions{})
actual, err := client.Create(ctx, newInterval, v1.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
resourceID = actual.GetStaticMetadata().Identifier()
resourceID = actual.Name
})
var existingInterval *v0alpha1.TimeInterval
t.Run("resource should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resourceID)
actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newInterval.Spec, actual.Spec)
@@ -102,13 +100,13 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}
updated := existingInterval.Copy().(*v0alpha1.TimeInterval)
updated.Spec.Name = "another-newInterval"
actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
require.NotEqualf(t, updated.ResourceVersion, actual.ResourceVersion, "Update should change the resource version but it didn't")
resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, actual, resource)
})
@@ -191,13 +189,11 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
},
}
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client, err := v0alpha1.NewTimeIntervalClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
client := common.NewTimeIntervalClient(t, tc.user)
var expected = &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -213,12 +209,12 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
if tc.canCreate {
t.Run("should be able to create time interval", func(t *testing.T) {
actual, err := client.Create(ctx, expected, resource.CreateOptions{})
actual, err := client.Create(ctx, expected, v1.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.Equal(t, expected.Spec, actual.Spec)
t.Run("should fail if already exists", func(t *testing.T) {
_, err := client.Create(ctx, actual, resource.CreateOptions{})
_, err := client.Create(ctx, actual, v1.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
@@ -226,45 +222,45 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
})
} else {
t.Run("should be forbidden to create", func(t *testing.T) {
_, err := client.Create(ctx, expected, resource.CreateOptions{})
_, err := client.Create(ctx, expected, v1.CreateOptions{})
require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
})
// create resource to proceed with other tests
expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)
}
if tc.canRead {
t.Run("should be able to list time intervals", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
})
t.Run("should be able to read time interval by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, expected.Spec, got.Spec)
require.Equal(t, expected, got)
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to list time intervals", func(t *testing.T) {
_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
_, err := client.List(ctx, v1.ListOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should be forbidden to read time interval by name", func(t *testing.T) {
_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
@@ -278,7 +274,7 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to update time interval", func(t *testing.T) {
updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
expected = updated
@@ -286,54 +282,52 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.TimeInterval)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to update time interval", func(t *testing.T) {
_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.TimeInterval)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
}
deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
oldClient := common.NewTimeIntervalClient(t, tc.user)
if tc.canDelete {
t.Run("should be able to delete time interval", func(t *testing.T) {
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
err := client.Delete(ctx, expected.Name, deleteOptions)
require.NoError(t, err)
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to delete time interval", func(t *testing.T) {
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
err := client.Delete(ctx, expected.Name, deleteOptions)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
}
if tc.canRead {
t.Run("should get empty list if no mute timings", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 0)
})
@@ -351,8 +345,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
org := helper.Org1
admin := org.Admin
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -367,7 +360,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
Name: "time-interval-1",
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
},
}, resource.CreateOptions{})
}, v1.CreateOptions{})
require.NoError(t, err)
require.Equal(t, "none", created.GetProvenanceStatus())
@@ -378,7 +371,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
},
}, admin.Identity.GetOrgID(), "API"))
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
@@ -386,12 +379,12 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
updated := created.Copy().(*v0alpha1.TimeInterval)
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -402,9 +395,7 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
oldClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
interval := v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
@@ -416,22 +407,21 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
},
}
created, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
created, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, created)
require.NotEmpty(t, created.ResourceVersion)
t.Run("should forbid if version does not match", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.TimeInterval)
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.TimeInterval)
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
@@ -441,16 +431,16 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
updated.ResourceVersion = ""
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, created.ResourceVersion, actualUpdated.ResourceVersion)
})
t.Run("should fail to delete if version does not match", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer("something"),
},
@@ -458,10 +448,10 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should succeed if version matches", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -469,10 +459,10 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
require.NoError(t, err)
})
t.Run("should succeed if version is empty", func(t *testing.T) {
actual, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
actual, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
require.NoError(t, err)
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -487,9 +477,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
oldClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
interval := v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
@@ -501,7 +489,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
},
}
current, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
current, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, current)
require.NotEmpty(t, current.ResourceVersion)
@@ -513,7 +501,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
}
}`
result, err := oldClient.Patch(ctx, current.GetStaticMetadata().Identifier().Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
result, err := adminClient.Patch(ctx, current.Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
require.NoError(t, err)
require.Empty(t, result.Spec.TimeIntervals)
current = result
@@ -522,15 +510,18 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
t.Run("should patch with json patch", func(t *testing.T) {
expected := fakes.IntervalGenerator{}.Generate()
patch := []resource.PatchOperation{
patch := []map[string]interface{}{
{
Operation: "add",
Path: "/spec/time_intervals/-",
Value: expected,
"op": "add",
"path": "/spec/time_intervals/-",
"value": expected,
},
}
result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
patchData, err := json.Marshal(patch)
require.NoError(t, err)
result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
require.NoError(t, err)
expectedSpec := v0alpha1.TimeIntervalSpec{
Name: current.Spec.Name,
@@ -549,8 +540,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
interval1 := &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
@@ -561,7 +551,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
},
}
interval1, err = adminClient.Create(ctx, interval1, resource.CreateOptions{})
interval1, err := adminClient.Create(ctx, interval1, v1.CreateOptions{})
require.NoError(t, err)
interval2 := &v0alpha1.TimeInterval{
@@ -573,7 +563,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
},
}
interval2, err = adminClient.Create(ctx, interval2, resource.CreateOptions{})
interval2, err = adminClient.Create(ctx, interval2, v1.CreateOptions{})
require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -584,18 +574,18 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
Name: interval2.Spec.Name,
},
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
interval2, err = adminClient.Get(ctx, interval2.GetStaticMetadata().Identifier())
interval2, err = adminClient.Get(ctx, interval2.Name, v1.GetOptions{})
require.NoError(t, err)
intervals, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
intervals, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, intervals.Items, 2)
t.Run("should filter by interval name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"spec.name=" + interval1.Spec.Name},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.name=" + interval1.Spec.Name,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -603,8 +593,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
})
t.Run("should filter by interval metadata name", func(t *testing.T) {
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"metadata.name=" + interval2.Name},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "metadata.name=" + interval2.Name,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -613,8 +603,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
t.Run("should filter by multiple filters", func(t *testing.T) {
t.Skip("disabled until app installer supports it")
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", interval2.Name), fmt.Sprintf("spec.name=%s", interval2.Spec.Name)},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s,spec.name=%s", interval2.Name, interval2.Spec.Name),
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -622,8 +612,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
})
t.Run("should be empty when filter does not match", func(t *testing.T) {
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", "unknown")},
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
})
require.NoError(t, err)
require.Empty(t, list.Items)
@@ -657,20 +647,18 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
})
}
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
v1intervals, err := timeinterval.ConvertToK8sResources(orgID, mtis, func(int64) string { return "default" }, nil)
require.NoError(t, err)
for _, interval := range v1intervals.Items {
_, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
_, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
require.NoError(t, err)
}
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
routeClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
v1route, err := routingtree.ConvertToK8sResource(helper.Org1.Admin.Identity.GetOrgID(), *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
_, err = routeClient.Update(ctx, v1route, resource.UpdateOptions{})
_, err = routeClient.Update(ctx, v1route, v1.UpdateOptions{})
require.NoError(t, err)
postGroupRaw, err := testData.ReadFile(path.Join("test-data", "rulegroup-1.json"))
@@ -687,7 +675,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
currentRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
require.Equal(t, http.StatusAccepted, status)
intervals, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
intervals, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, intervals.Items, 3)
intervalIdx := slices.IndexFunc(intervals.Items, func(interval v0alpha1.TimeInterval) bool {
@@ -712,7 +700,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
renamed := interval.Copy().(*v0alpha1.TimeInterval)
renamed.Spec.Name += "-new"
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.NoError(t, err)
updatedRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
@@ -744,20 +732,20 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, &currentRoute, orgID))
})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
t.Run("provisioned rules", func(t *testing.T) {
ruleUid := currentRuleGroup.Rules[0].GrafanaManagedAlert.UID
rule := &ngmodels.AlertRule{UID: ruleUid}
require.NoError(t, db.SetProvenance(ctx, rule, orgID, "API"))
resource := &ngmodels.AlertRule{UID: ruleUid}
require.NoError(t, db.SetProvenance(ctx, resource, orgID, "API"))
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, rule, orgID))
require.NoError(t, db.DeleteProvenance(ctx, resource, orgID))
})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -766,7 +754,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
t.Run("Delete", func(t *testing.T) {
t.Run("should fail to delete if time interval is used in rule and routes", func(t *testing.T) {
err := adminClient.Delete(ctx, interval.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := adminClient.Delete(ctx, interval.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -775,7 +763,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
route.Routes[0].MuteTimeIntervals = nil
legacyCli.UpdateRoute(t, route, true)
err = adminClient.Delete(ctx, interval.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err = adminClient.Delete(ctx, interval.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -785,7 +773,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
})
intervalToDelete := intervals.Items[idx]
err = adminClient.Delete(ctx, intervalToDelete.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err = adminClient.Delete(ctx, intervalToDelete.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
})
@@ -797,8 +785,7 @@ func TestIntegrationTimeIntervalValidation(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
testCases := []struct {
name string
@@ -832,7 +819,7 @@ func TestIntegrationTimeIntervalValidation(t *testing.T) {
},
Spec: tc.interval,
}
_, err := adminClient.Create(ctx, i, resource.CreateOptions{})
_, err := adminClient.Create(ctx, i, v1.CreateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest, got: %s", err)
})
+1 -10
@@ -14,7 +14,7 @@ import (
"testing"
"time"
appsdk_k8s "github.com/grafana/grafana-app-sdk/k8s"
githubConnection "github.com/grafana/grafana/apps/provisioning/pkg/connection/github"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
@@ -28,8 +28,6 @@ import (
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
githubConnection "github.com/grafana/grafana/apps/provisioning/pkg/connection/github"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/configprovider"
@@ -59,8 +57,6 @@ import (
const (
Org1 = "Org1"
Org2 = "OrgB"
DefaultNamespace = "default"
)
var (
@@ -449,11 +445,6 @@ func (c *User) RESTClient(t *testing.T, gv *schema.GroupVersion) *rest.RESTClien
return client
}
func (c *User) GetClientRegistry() *appsdk_k8s.ClientRegistry {
restConfig := c.NewRestConfig()
return appsdk_k8s.NewClientRegistry(*restConfig, appsdk_k8s.DefaultClientConfig())
}
type RequestParams struct {
User User
Method string // GET, POST, PATCH, etc
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Tlačítko Spustit dotaz protokolů Azure",
"body-switching-to-builder": "Přepnutím na nástroj pro tvorbu zahodíte aktuální dotaz KQL a vymažete editor KQL. Opravdu chcete pokračovat?",
"body-switching-to-kql": "Přepnutím na KQL zahodíte aktuální nastavení nástroje pro tvorbu. Opravdu chcete pokračovat?",
"button-kick-start-your-query": "Spustit dotaz",
"button-run-query": "Spustit dotaz",
"confirmText-switch-to": "Přepnout na {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Azure-Logs starten Ihre Abfrage-Schaltfläche",
"body-switching-to-builder": "Wenn Sie zum Builder wechseln, wird Ihre aktuelle KQL-Abfrage verworfen und der KQL-Editor gelöscht. Sind Sie sicher?",
"body-switching-to-kql": "Wenn Sie zu KQL wechseln, werden Ihre aktuellen Builder-Einstellungen verworfen. Sind Sie sicher?",
"button-kick-start-your-query": "Starten Sie Ihre Abfrage",
"button-run-query": "Abfrage ausführen",
"confirmText-switch-to": "Wechseln zu {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Los logs de Azure inician su botón de consulta",
"body-switching-to-builder": "Cambiar al modo Constructor descartará tu consulta KQL actual y se borrará el editor KQL. ¿Quieres continuar?",
"body-switching-to-kql": "Cambiar a KQL descartará la configuración actual del constructor. ¿Quieres continuar?",
"button-kick-start-your-query": "Inicie su consulta",
"button-run-query": "Ejecutar consulta",
"confirmText-switch-to": "Cambiar a {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Bouton Lancer votre requête des journaux Azure",
"body-switching-to-builder": "Passer en mode Builder supprimera votre requête KQL actuelle et réinitialisera l’éditeur KQL. Voulez-vous continuer ?",
"body-switching-to-kql": "Passer en mode KQL supprimera vos paramètres Builder actuels. Voulez-vous continuer ?",
"button-kick-start-your-query": "Lancer votre requête",
"button-run-query": "Exécuter la requête",
"confirmText-switch-to": "Passer en mode {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Azure-naplók előbeállításos lekérdezése gomb",
"body-switching-to-builder": "A szerkesztő módra váltás elveti az aktuális KQL-lekérdezést és törli a KQL-szerkesztőben lévő tartalmat. Biztosan folytatja?",
"body-switching-to-kql": "A KQL-re váltás elveti az aktuális szerkesztő beállításait. Biztosan folytatja?",
"button-kick-start-your-query": "Előbeállításos lekérdezés",
"button-run-query": "Lekérdezés futtatása",
"confirmText-switch-to": "Váltás erre: {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Tombol kueri untuk memulai menggunakan log Azure",
"body-switching-to-builder": "Beralih ke Builder akan membuang kueri KQL Anda saat ini dan menghapus editor KQL. Anda yakin?",
"body-switching-to-kql": "Beralih ke KQL akan membuang pengaturan pembangun Anda saat ini. Anda yakin?",
"button-kick-start-your-query": "Mulai kueri Anda",
"button-run-query": "Jalankan kueri",
"confirmText-switch-to": "Beralih ke {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "I registri di Azure avviano il pulsante della query",
"body-switching-to-builder": "Il passaggio a Builder eliminerà la query KQL corrente e cancellerà l'editor KQL. Vuoi davvero continuare?",
"body-switching-to-kql": "Il passaggio a KQL annullerà le impostazioni correnti del generatore. Vuoi davvero continuare?",
"button-kick-start-your-query": "Avvia la query",
"button-run-query": "Esegui query",
"confirmText-switch-to": "Passa a {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Azureログのクエリ開始ボタン",
"body-switching-to-builder": "ビルダーに切り替えると、現在のKQLクエリが破棄され、KQLエディターがクリアされます。よろしいですか?",
"body-switching-to-kql": "KQLに切り替えると、現在のビルダー設定が破棄されます。よろしいですか?",
"button-kick-start-your-query": "クエリを開始",
"button-run-query": "クエリの実行",
"confirmText-switch-to": "{{newMode}}に切り替える",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Azure 로그 쿼리 시작 버튼",
"body-switching-to-builder": "빌더로 전환하면 현재 KQL 쿼리가 취소되고 KQL 편집기가 지워집니다. 정말 진행하시겠어요?",
"body-switching-to-kql": "KQL로 전환하면 현재 빌더 설정이 취소됩니다. 정말 진행하시겠어요?",
"button-kick-start-your-query": "쿼리 시작하기",
"button-run-query": "쿼리 실행",
"confirmText-switch-to": "{{newMode}} 모드로 전환",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Knop Start query met Azure-logboeken",
"body-switching-to-builder": "Als je overschakelt naar Bouwer, wordt je huidige KQL-query verwijderd en wordt de KQL-editor gewist. Weet je het zeker?",
"body-switching-to-kql": "Als je overschakelt naar KQL, worden je huidige bouwerinstellingen verwijderd. Weet je het zeker?",
"button-kick-start-your-query": "Start je query",
"button-run-query": "Query uitvoeren",
"confirmText-switch-to": "Overschakelen naar {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Przycisk Uruchom zapytanie dotyczący dzienników Azure",
"body-switching-to-builder": "Przełączenie na tryb konstruktora spowoduje odrzucenie bieżącego zapytania KQL i usunięcie danych z edytora KQL. Czy na pewno?",
"body-switching-to-kql": "Przełączenie na KQL spowoduje odrzucenie bieżących ustawień konstruktora. Czy na pewno?",
"button-kick-start-your-query": "Uruchom zapytanie",
"button-run-query": "Uruchom zapytanie",
"confirmText-switch-to": "Przełącz na {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Botão de iniciar consulta de logs do Azure",
"body-switching-to-builder": "Mudar para o construtor descartará sua consulta KQL atual e limpará o editor KQL. Tem certeza de que deseja continuar?",
"body-switching-to-kql": "Mudar para o KQL descartará suas configurações atuais do construtor. Tem certeza de que deseja continuar?",
"button-kick-start-your-query": "Iniciar sua consulta",
"button-run-query": "Executar consulta",
"confirmText-switch-to": "Alternar para {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Botão de início de consulta de registos do Azure",
"body-switching-to-builder": "Mudar para o Construtor descartará a sua consulta KQL atual e limpará o editor KQL. Tem a certeza?",
"body-switching-to-kql": "Mudar para KQL descartará as suas definições atuais do construtor. Tem a certeza?",
"button-kick-start-your-query": "Dê início à sua consulta",
"button-run-query": "Executar consulta",
"confirmText-switch-to": "Mudar para {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Кнопка запуска запроса журналов Azure",
"body-switching-to-builder": "Переключение на конструктор приведет к отмене текущего KQL-запроса и очистке редактора KQL. Вы уверены?",
"body-switching-to-kql": "Переключение на KQL приведет к отмене текущих настроек конструктора. Вы уверены?",
"button-kick-start-your-query": "Запустить запрос",
"button-run-query": "Выполнить запрос",
"confirmText-switch-to": "Переключиться на {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Knapp för att kickstarta Azure-loggfrågor",
"body-switching-to-builder": "Om du byter till Builder kommer din nuvarande KQL-förfrågan att kasseras och KQL-redigeraren att rensas. Är du säker på att du vill göra det?",
"body-switching-to-kql": "Om du byter till KQL kommer dina nuvarande byggarinställningar att kasseras. Är du säker?",
"button-kick-start-your-query": "Kickstarta din fråga",
"button-run-query": "Kör fråga",
"confirmText-switch-to": "Byt till {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Azure günlükleri sorgu hızlı başlatma düğmesi",
"body-switching-to-builder": "Oluşturucuya geçmek, mevcut KQL sorgunuzu siler ve KQL düzenleyicisini temizler. Emin misiniz?",
"body-switching-to-kql": "KQL'ye geçmek, mevcut oluşturucu ayarlarınızı siler. Emin misiniz?",
"button-kick-start-your-query": "Sorgunuzu hızlı başlatın",
"button-run-query": "Sorgu çalıştır",
"confirmText-switch-to": "Şuna geç: {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Azure 日志启动查询按钮",
"body-switching-to-builder": "切换到构建器将丢弃当前的 KQL 查询并清除 KQL 编辑器。您确定吗?",
"body-switching-to-kql": "切换到 KQL 将丢弃当前的构建器设置。您确定吗?",
"button-kick-start-your-query": "启动您的查询",
"button-run-query": "运行查询",
"confirmText-switch-to": "切换到 {{newMode}}",
@@ -204,6 +204,7 @@
"query-header": {
"aria-label-kick-start": "Azure 紀錄會啟動您的查詢按鈕",
"body-switching-to-builder": "切換到建立器將捨棄您目前的 KQL 查詢並清除 KQL 編輯器。您確定嗎?",
"body-switching-to-kql": "切換到 KQL 將捨棄您目前的建立器設定。您確定嗎?",
"button-kick-start-your-query": "啟動您的查詢",
"button-run-query": "執行查詢",
"confirmText-switch-to": "切換到 {{newMode}}",
+4 -10
@@ -813,8 +813,7 @@
"label-integration": "Integrace",
"label-notification-settings": "Nastavení oznámení",
"label-section": "Nepovinné nastavení: {{name}}",
"test": "Test",
"tooltip-legacy-version": ""
"test": "Test"
},
"classic-condition-viewer": {
"of": "Z",
@@ -2193,14 +2192,11 @@
"provisioning": {
"badge-tooltip-provenance": "Tento zdroj byl zajištěn prostřednictvím {{provenance}} a nelze ho upravovat přes uživatelské rozhraní",
"badge-tooltip-standard": "Tento zdroj byl zajištěn a nelze ho upravovat přes uživatelské rozhraní",
"body-imported": "",
"body-provisioned": "Tento {{resource}} byl zajištěn. To znamená, že byl vytvořen konfigurací. Pokud chcete {{resource}} aktualizovat, obraťte se na správce serveru.",
"title-imported": "",
"title-provisioned": "{{resource}} nelze upravit přes uživatelské rozhraní"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Zajištěno"
}
},
@@ -2703,6 +2699,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6427,15 +6424,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Klasický",
"json": "JSON",
"v1-resource": "Zdroj V1",
"v2-resource": "Zdroj V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Opravdu chcete obnovit nástěnku na verzi {{version}}? Veškeré neuložené změny budou ztraceny.",
@@ -7034,7 +7028,6 @@
"drone-datasource": "Zdroj dat Drone",
"git-lab-integration-and-datasource": "Integrace a zdroj dat GitLab",
"honeycomb-integration-and-datasource": "Integrace a zdroj dat Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Integrace a zdroj dat Jira",
"logic-monitor-devices-datasource": "Zdroj dat zařízení LogicMonitor",
"mongo-db-integration-and-data-source": "Integrace a zdroj dat MongoDB",
@@ -7897,6 +7890,7 @@
"export-externally-label": "Exportovat nástěnku pro použití v jiné instanci",
"export-format": "Formát",
"export-mode": "Model",
"export-remove-ds-refs": "Odebrat podrobnosti o nasazení",
"info-text": "Zkopírujte nebo stáhněte soubor obsahující definici vaší nástěnky",
"title": "Exportovat nástěnku"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Integration",
"label-notification-settings": "Benachrichtigungseinstellungen",
"label-section": "Optionale {{name}}-Einstellungen",
"test": "Test",
"tooltip-legacy-version": ""
"test": "Test"
},
"classic-condition-viewer": {
"of": "VON",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Diese Ressource wurde über {{provenance}} bereitgestellt und kann nicht über die Benutzeroberfläche bearbeitet werden",
"badge-tooltip-standard": "Diese Ressource wurde bereitgestellt und kann nicht über die Benutzeroberfläche bearbeitet werden",
"body-imported": "",
"body-provisioned": "Die Ressource {{resource}} wurde bereitgestellt, d. h. es wurde von config erstellt. Bitte kontaktieren Sie Ihren Serveradministrator für die Aktualisierung von {{resource}}.",
"title-imported": "",
"title-provisioned": "Die Ressource {{resource}} kann nicht über die Benutzeroberfläche bearbeitet werden"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Bereitgestellt"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Klassisch",
"json": "JSON",
"v1-resource": "V1-Ressource",
"v2-resource": "V2-Ressource",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Sind Sie sicher, dass Sie das Dashboard auf die Version {{version}} zurücksetzen möchten? Alle nicht gespeicherten Änderungen gehen verloren.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Drone-Datenquelle",
"git-lab-integration-and-datasource": "GitLab-Integration und -Datenquelle",
"honeycomb-integration-and-datasource": "Honeycomb-Integration und -Datenquelle",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira-Integration und -Datenquelle",
"logic-monitor-devices-datasource": "LogicMonitor Devices-Datenquelle",
"mongo-db-integration-and-data-source": "MongoDB-Integration und -Datenquelle",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Exportieren Sie das Dashboard, um es in einer anderen Instanz zu verwenden",
"export-format": "Format",
"export-mode": "Modell",
"export-remove-ds-refs": "Bereitstellungsdetails entfernen",
"info-text": "Kopieren oder downloaden Sie eine Datei, die die Definition Ihres Dashboards enthält",
"title": "Dashboard exportieren"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Integración",
"label-notification-settings": "Ajustes de notificaciones",
"label-section": "Ajustes de {{name}} opcionales",
"test": "Prueba",
"tooltip-legacy-version": ""
"test": "Prueba"
},
"classic-condition-viewer": {
"of": "DE",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Este recurso se ha aprovisionado a través de {{provenance}} y no se puede editar mediante la interfaz de usuario",
"badge-tooltip-standard": "Este recurso se ha aprovisionado y no se puede editar a través de la interfaz de usuario",
"body-imported": "",
"body-provisioned": "Este {{resource}} se ha aprovisionado, lo que significa que ha sido creado por la configuración. Ponte en contacto con el administrador del servidor para actualizar este {{resource}}.",
"title-imported": "",
"title-provisioned": "Este {{resource}} no se puede editar a través de la interfaz de usuario"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Provisionado"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Clásico",
"json": "JSON",
"v1-resource": "Recurso V1",
"v2-resource": "Recurso V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "¿Seguro que quieres restaurar el dashboard a la versión {{version}}? Todos los cambios no guardados se perderán.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Fuente de datos de Drone",
"git-lab-integration-and-datasource": "Integración y fuente de datos de GitLab",
"honeycomb-integration-and-datasource": "Integración y fuente de datos de Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Integración y fuente de datos de Jira",
"logic-monitor-devices-datasource": "Fuente de datos de dispositivos de LogicMonitor",
"mongo-db-integration-and-data-source": "Integración y fuente de datos de MongoDB",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Exportar el panel de control para utilizarlo en otra instancia",
"export-format": "Formato",
"export-mode": "Modelo",
"export-remove-ds-refs": "Eliminar detalles de implementación",
"info-text": "Copiar o descargar un archivo que contenga la definición de su dashboard",
"title": "Exportar panel"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Intégration",
"label-notification-settings": "Paramètres de notification",
"label-section": "Paramètres facultatifs : {{name}}",
"test": "Test",
"tooltip-legacy-version": ""
"test": "Test"
},
"classic-condition-viewer": {
"of": "DE",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Cette ressource a été mise en service via {{provenance}} et ne peut pas être modifiée via l'interface utilisateur",
"badge-tooltip-standard": "Cette ressource a été mise en service et ne peut pas être modifiée via l'interface utilisateur",
"body-imported": "",
"body-provisioned": "Cette {{resource}} a été mise en service, cela signifie qu'elle a été créée par configuration. Veuillez contacter votre administrateur de serveur pour mettre à jour cette {{resource}}.",
"title-imported": "",
"title-provisioned": "Cette {{resource}} ne peut pas être modifiée via l'interface utilisateur"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Mis en service"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Classique",
"json": "JSON",
"v1-resource": "Ressource V1",
"v2-resource": "Ressource V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Voulez-vous vraiment restaurer le tableau de bord dans sa version {{version}} ? Toutes les modifications non enregistrées seront perdues.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Source de données Drone",
"git-lab-integration-and-datasource": "Intégration et source de données GitLab",
"honeycomb-integration-and-datasource": "Intégration et source de données Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Intégration et source de données Jira",
"logic-monitor-devices-datasource": "Source de données LogicMonitor Devices",
"mongo-db-integration-and-data-source": "Intégration et source de données MongoDB",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Exporter le tableau de bord pour l'utiliser dans une autre instance",
"export-format": "Format",
"export-mode": "Modèle",
"export-remove-ds-refs": "Supprimer les détails du déploiement",
"info-text": "Copier ou télécharger un fichier contenant la définition de votre tableau de bord",
"title": "Exporter le tableau de bord"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Integráció",
"label-notification-settings": "Értesítési beállítások",
"label-section": "Opcionális {{name}}-beállítások",
"test": "Teszt",
"tooltip-legacy-version": ""
"test": "Teszt"
},
"classic-condition-viewer": {
"of": "OF",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Ez az erőforrás ki van építve ({{provenance}}), és nem szerkeszthető a felhasználói felületen keresztül",
"badge-tooltip-standard": "Ez az erőforrás ki van építve, és nem szerkeszthető a felhasználói felületen keresztül",
"body-imported": "",
"body-provisioned": "Ez a(z) {{resource}} ki van építve, ami azt jelenti, hogy konfiguráció hozta létre. Ezen {{resource}} frissítéshez forduljon a kiszolgáló rendszergazdájához.",
"title-imported": "",
"title-provisioned": "Ez a(z) {{resource}} nem szerkeszthető a felhasználói felületen keresztül"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Kiépítve"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Klasszikus",
"json": "JSON",
"v1-resource": "V1 erőforrás",
"v2-resource": "V2 erőforrás",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Biztosan visszaállítja az irányítópultot a(z) {{version}} verzióra? Az összes nem mentett módosítás elveszik.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Drone-adatforrás",
"git-lab-integration-and-datasource": "GitLab-integráció és -adatforrás",
"honeycomb-integration-and-datasource": "Honeycomb-integráció és -adatforrás",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira-integráció és -adatforrás",
"logic-monitor-devices-datasource": "LogicMonitor Devices-adatforrás",
"mongo-db-integration-and-data-source": "MongoDB-integráció és -adatforrás",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Exportálja az irányítópultot egy másik példányban való használathoz",
"export-format": "Formátum",
"export-mode": "Modell",
"export-remove-ds-refs": "Üzembehelyezési részletek eltávolítása",
"info-text": "Másolja vagy töltse le az irányítópult definícióját tartalmazó fájlt",
"title": "Irányítópult exportálása"
},
+4 -10
@@ -804,8 +804,7 @@
"label-integration": "Integrasi",
"label-notification-settings": "Pengaturan notifikasi",
"label-section": "Pengaturan {{name}} opsional",
"test": "Tes",
"tooltip-legacy-version": ""
"test": "Tes"
},
"classic-condition-viewer": {
"of": "DARI",
@@ -2169,14 +2168,11 @@
"provisioning": {
"badge-tooltip-provenance": "Sumber daya ini telah disediakan melalui {{provenance}} dan tidak dapat diedit melalui UI",
"badge-tooltip-standard": "Sumber daya ini telah disediakan dan tidak dapat diedit melalui UI",
"body-imported": "",
"body-provisioned": "{{resource}} ini telah disediakan, artinya, ini dibuat melalui konfigurasi. Hubungi admin server Anda untuk memperbarui {{resource}} ini.",
"title-imported": "",
"title-provisioned": "{{resource}} ini tidak dapat diedit melalui UI"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Disediakan"
}
},
@@ -2670,6 +2666,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6361,15 +6358,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Klasik",
"json": "JSON",
"v1-resource": "Sumber Daya V1",
"v2-resource": "Sumber Daya V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Anda yakin ingin memulihkan dasbor ke versi {{version}}? Semua perubahan yang belum disimpan akan hilang.",
@@ -6962,7 +6956,6 @@
"drone-datasource": "Sumber data Drone",
"git-lab-integration-and-datasource": "Integrasi dan sumber data GitLab",
"honeycomb-integration-and-datasource": "Integrasi dan sumber data Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Integrasi dan sumber data Jira",
"logic-monitor-devices-datasource": "Sumber data Perangkat LogicMonitor",
"mongo-db-integration-and-data-source": "Integrasi dan sumber data MongoDB",
@@ -7819,6 +7812,7 @@
"export-externally-label": "Ekspor dasbor untuk digunakan di instans lain",
"export-format": "Format",
"export-mode": "Model",
"export-remove-ds-refs": "Hapus detail penyebaran",
"info-text": "Salin atau unduh file yang berisi definisi dasbor Anda",
"title": "Ekspor dasbor"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Integrazione",
"label-notification-settings": "Impostazioni delle notifiche",
"label-section": "Impostazioni {{name}} facoltative",
"test": "Prova",
"tooltip-legacy-version": ""
"test": "Prova"
},
"classic-condition-viewer": {
"of": "DI",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Questa risorsa è stata sottoposta a provisioning tramite {{provenance}} e non può essere modificata tramite l'interfaccia utente",
"badge-tooltip-standard": "Questa risorsa è stata sottoposta a provisioning e non può essere modificata tramite l'interfaccia utente",
"body-imported": "",
"body-provisioned": "Questa {{resource}} è stata sottoposta a provisioning, il che significa che è stata creata tramite configurazione. Contatta l'amministratore del server per aggiornare questa {{resource}}.",
"title-imported": "",
"title-provisioned": "Questa {{resource}} non può essere modificata tramite l'interfaccia utente"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Fornito"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Classico",
"json": "JSON",
"v1-resource": "Risorsa V1",
"v2-resource": "Risorsa V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Desideri davvero ripristinare la dashboard alla versione {{version}}? Tutte le modifiche non salvate andranno perse.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Origine dati Drone",
"git-lab-integration-and-datasource": "Integrazione e origine dati GitLab",
"honeycomb-integration-and-datasource": "Integrazione e origine dati Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Integrazione e origine dati Jira",
"logic-monitor-devices-datasource": "Origine dati dispositivi LogicMonitor",
"mongo-db-integration-and-data-source": "Integrazione e origine dati MongoDB",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Esporta il dashboard da utilizzare in un'altra istanza",
"export-format": "Formato",
"export-mode": "Modello",
"export-remove-ds-refs": "Rimuovi i dettagli della distribuzione",
"info-text": "Copia o scarica un file contenente la definizione della tua dashboard",
"title": "Esporta dashboard"
},
+4 -10
@@ -804,8 +804,7 @@
"label-integration": "統合",
"label-notification-settings": "通知設定",
"label-section": "{{name}}のオプション設定",
"test": "テスト",
"tooltip-legacy-version": ""
"test": "テスト"
},
"classic-condition-viewer": {
"of": "の",
@@ -2169,14 +2168,11 @@
"provisioning": {
"badge-tooltip-provenance": "このリソースは{{provenance}}を介してプロビジョニングされており、UIから編集することはできません",
"badge-tooltip-standard": "このリソースはプロビジョニングされており、UIから編集することはできません",
"body-imported": "",
"body-provisioned": "この{{resource}}はプロビジョニングされたものです。つまり、設定によって作成されました。この{{resource}}を更新するには、サーバー管理者にお問い合わせください。",
"title-imported": "",
"title-provisioned": "この{{resource}}はUI上で編集できません"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "プロビジョニング済み"
}
},
@@ -2670,6 +2666,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6361,15 +6358,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "クラシック",
"json": "JSON",
"v1-resource": "V1リソース",
"v2-resource": "V2リソース",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "ダッシュボードをバージョン{{version}}に復元してもよろしいですか?未保存の変更はすべて失われます。",
@@ -6962,7 +6956,6 @@
"drone-datasource": "Droneデータソース",
"git-lab-integration-and-datasource": "GitLab統合・データソース",
"honeycomb-integration-and-datasource": "Honeycomb統合・データソース",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira統合・データソース",
"logic-monitor-devices-datasource": "LogicMonitorデバイスデータソース",
"mongo-db-integration-and-data-source": "MongoDB統合・データソース",
@@ -7819,6 +7812,7 @@
"export-externally-label": "ダッシュボードをエクスポートして別のインスタンスで使用する",
"export-format": "形式",
"export-mode": "モデル",
"export-remove-ds-refs": "デプロイの詳細を削除",
"info-text": "ダッシュボードの定義を含むファイルをコピーまたはダウンロード",
"title": "ダッシュボードをエクスポート"
},
+4 -10
@@ -804,8 +804,7 @@
"label-integration": "통합",
"label-notification-settings": "알림 설정",
"label-section": "{{name}} 설정(선택 사항)",
"test": "테스트",
"tooltip-legacy-version": ""
"test": "테스트"
},
"classic-condition-viewer": {
"of": "의",
@@ -2169,14 +2168,11 @@
"provisioning": {
"badge-tooltip-provenance": "이 리소스는 {{provenance}}을(를) 통해 프로비저닝되었으며 UI를 통해 편집할 수 없습니다.",
"badge-tooltip-standard": "이 리소스는 프로비저닝되었으며 UI를 통해 편집할 수 없습니다.",
"body-imported": "",
"body-provisioned": "이 {{resource}}이(가) 프로비저닝되었으며, 이는 구성에 의해 생성되었음을 의미합니다. 이 {{resource}}을(를) 업데이트하려면 서버 관리자에게 문의하세요.",
"title-imported": "",
"title-provisioned": "이 {{resource}}은(는) UI에서 편집할 수 없습니다"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "프로비저닝됨"
}
},
@@ -2670,6 +2666,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6361,15 +6358,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "클래식",
"json": "JSON",
"v1-resource": "V1 리소스",
"v2-resource": "V2 리소스",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "정말 대시보드를 {{version}} 버전으로 복원하시겠어요? 저장하지 않은 변경 사항은 모두 손실됩니다.",
@@ -6962,7 +6956,6 @@
"drone-datasource": "Drone 데이터 소스",
"git-lab-integration-and-datasource": "GitLab 통합 및 데이터 소스",
"honeycomb-integration-and-datasource": "Honeycomb 통합 및 데이터 소스",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira 통합 및 데이터 소스",
"logic-monitor-devices-datasource": "LogicMonitor 장치 데이터 소스",
"mongo-db-integration-and-data-source": "MongoDB 통합 및 데이터 소스",
@@ -7819,6 +7812,7 @@
"export-externally-label": "다른 인스턴스에서 사용할 대시보드 내보내기",
"export-format": "형식",
"export-mode": "모델",
"export-remove-ds-refs": "배포 세부 정보 제거",
"info-text": "대시보드의 정의가 포함된 파일을 복사하거나 다운로드",
"title": "대시보드 내보내기"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Integratie",
"label-notification-settings": "Instellingen van meldingen",
"label-section": "Optionele instellingen voor {{name}}",
"test": "Testen",
"tooltip-legacy-version": ""
"test": "Testen"
},
"classic-condition-viewer": {
"of": "VAN",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Deze bron is geprovisioneerd via {{provenance}} en kan niet worden bewerkt via de gebruikersinterface",
"badge-tooltip-standard": "Deze bron is geprovisioneerd en kan niet worden bewerkt via de gebruikersinterface",
"body-imported": "",
"body-provisioned": "Deze {{resource}} is ingericht, dat betekent dat het is gemaakt door configuratie. Neem contact op met je serverbeheerder om deze {{resource}} bij te werken.",
"title-imported": "",
"title-provisioned": "Deze {{resource}} kan niet worden bewerkt via de gebruikersinterface"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Provisioned"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Klassiek",
"json": "JSON",
"v1-resource": "V1-bron",
"v2-resource": "V2-bron",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Weet je zeker dat je het dashboard naar versie {{version}} wilt herstellen? Alle niet opgeslagen wijzigingen zullen verloren gaan.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Drone-gegevensbron",
"git-lab-integration-and-datasource": "GitLab-integratie en -gegevensbron",
"honeycomb-integration-and-datasource": "Honeycomb-integratie en -gegevensbron",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira-integratie en -gegevensbron",
"logic-monitor-devices-datasource": "LogicMonitor Devices-gegevensbron",
"mongo-db-integration-and-data-source": "MongoDB-integratie en -gegevensbron",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Exporteer het dashboard om het in een andere instantie te gebruiken",
"export-format": "Formaat",
"export-mode": "Model",
"export-remove-ds-refs": "Implementatiegegevens verwijderen",
"info-text": "Kopieer of download een JSON-bestand met de JSON van je dashboard",
"title": "Dashboard exporteren"
},
+4 -10
@@ -813,8 +813,7 @@
"label-integration": "Integracja",
"label-notification-settings": "Ustawienia powiadomień",
"label-section": "Ustawienia opcjonalne: {{name}}",
"test": "Test",
"tooltip-legacy-version": ""
"test": "Test"
},
"classic-condition-viewer": {
"of": "OF",
@@ -2193,14 +2192,11 @@
"provisioning": {
"badge-tooltip-provenance": "Ten zasób został skonfigurowany za pośrednictwem {{provenance}} i nie można go edytować z poziomu interfejsu użytkownika",
"badge-tooltip-standard": "Ten zasób został skonfigurowany i nie można go edytować z poziomu interfejsu użytkownika",
"body-imported": "",
"body-provisioned": "Zasób {{resource}} został aprowizowany, co oznacza, że został utworzony przez konfigurację. Skontaktuj się z administratorem serwera, aby zaktualizować zasób {{resource}}.",
"title-imported": "",
"title-provisioned": "Tego zasobu {{resource}} nie można edytować z poziomu interfejsu użytkownika"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Po aprowizacji"
}
},
@@ -2703,6 +2699,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6427,15 +6424,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Klasyczny",
"json": "JSON",
"v1-resource": "Zasób V1",
"v2-resource": "Zasób V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Czy na pewno chcesz przywrócić pulpit w wersji {{version}}? Wszystkie niezapisane zmiany zostaną utracone.",
@@ -7034,7 +7028,6 @@
"drone-datasource": "Źródło danych Drone",
"git-lab-integration-and-datasource": "Integracja i źródło danych GitLab",
"honeycomb-integration-and-datasource": "Integracja i źródło danych Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Integracja i źródło danych Jira",
"logic-monitor-devices-datasource": "Źródło danych urządzeń LogicMonitor",
"mongo-db-integration-and-data-source": "Integracja i źródło danych MongoDB",
@@ -7897,6 +7890,7 @@
"export-externally-label": "Eksportuj pulpit, aby użyć go w innej instancji",
"export-format": "Format",
"export-mode": "Model",
"export-remove-ds-refs": "Usuń szczegóły wdrożenia",
"info-text": "Skopiuj lub pobierz plik zawierający definicję pulpitu",
"title": "Eksportuj pulpit"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Integração",
"label-notification-settings": "Configurações de notificações",
"label-section": "Configurações de {{name}} opcionais",
"test": "Teste",
"tooltip-legacy-version": ""
"test": "Teste"
},
"classic-condition-viewer": {
"of": "DE",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Este recurso foi provisionado via {{provenance}} e não pode ser editado por meio da interface do usuário",
"badge-tooltip-standard": "Este recurso foi provisionado e não pode ser editado por meio da interface do usuário",
"body-imported": "",
"body-provisioned": "Este recurso ({{resource}}) foi provisionado — ou seja, foi criado por meio de uma configuração. Entre em contato com o administrador do servidor para atualizar este {{resource}}.",
"title-imported": "",
"title-provisioned": "Este recurso ({{resource}}) não pode ser editado por meio da interface de usuário"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Provisionado"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Clássico",
"json": "JSON",
"v1-resource": "Recurso V1",
"v2-resource": "Recurso V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Tem certeza de que deseja restaurar o painel para a versão {{version}}? Todas as alterações não salvas serão perdidas.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Fonte de dados do Drone",
"git-lab-integration-and-datasource": "Fonte de dados e integração do GitLab",
"honeycomb-integration-and-datasource": "Fonte de dados e integração do Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Fonte de dados e integração do Jira",
"logic-monitor-devices-datasource": "Fonte de dados de dispositivos do LogicMonitor",
"mongo-db-integration-and-data-source": "Fonte de dados e integração do MongoDB",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Exporte o painel de controle para usá-lo em outra instância",
"export-format": "Formato",
"export-mode": "Modelo",
"export-remove-ds-refs": "Remover detalhes da implantação",
"info-text": "Copie ou baixe um arquivo contendo a definição do seu painel",
"title": "Exportar painel"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Integração",
"label-notification-settings": "Definições de notificação",
"label-section": "Definições de {{name}} opcionais",
"test": "Teste",
"tooltip-legacy-version": ""
"test": "Teste"
},
"classic-condition-viewer": {
"of": "DE",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Este recurso foi provisionado através de {{provenance}} e não pode ser editado através da interface do utilizador",
"badge-tooltip-standard": "Este recurso foi provisionado e não pode ser editado através da interface do utilizador",
"body-imported": "",
"body-provisioned": "Este {{resource}} foi aprovisionado, o que significa que foi criado por configuração. Entre em contacto com o seu administrador do servidor para atualizar este {{resource}}.",
"title-imported": "",
"title-provisioned": "Este {{resource}} não pode ser editado através da interface do utilizador"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Aprovisionado"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Clássico",
"json": "JSON",
"v1-resource": "Recurso V1",
"v2-resource": "Recurso V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Tem a certeza de que pretende restaurar o painel de controlo para a versão {{version}}? Todas as alterações não guardadas serão perdidas.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Origem de dados do Drone",
"git-lab-integration-and-datasource": "Integração e origem de dados do GitLab",
"honeycomb-integration-and-datasource": "Integração e origem de dados do Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Integração e origem de dados do Jira",
"logic-monitor-devices-datasource": "Origem de dados de dispositivos do LogicMonitor",
"mongo-db-integration-and-data-source": "Integração e origem de dados do MongoDB",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Exportar o painel de controlo para utilizar noutra instância",
"export-format": "Formato",
"export-mode": "Modelo",
"export-remove-ds-refs": "Remover detalhes de implementação",
"info-text": "Copiar ou descarregar um ficheiro que contém a definição do seu painel de controlo",
"title": "Exportar painel de controlo"
},
+4 -10
@@ -813,8 +813,7 @@
"label-integration": "Интеграция",
"label-notification-settings": "Параметры уведомлений",
"label-section": "Дополнительные параметры {{name}}",
"test": "Тестирование",
"tooltip-legacy-version": ""
"test": "Тестирование"
},
"classic-condition-viewer": {
"of": "ИЗ",
@@ -2193,14 +2192,11 @@
"provisioning": {
"badge-tooltip-provenance": "Ресурс был подготовлен через {{provenance}} и не может быть изменен через пользовательский интерфейс",
"badge-tooltip-standard": "Ресурс был подготовлен и не может быть изменен через пользовательский интерфейс",
"body-imported": "",
"body-provisioned": "{{resource}} подготовлен, то есть был создан с помощью конфигурации. Чтобы обновить {{resource}}, обратитесь к администратору сервера.",
"title-imported": "",
"title-provisioned": "Этот {{resource}} невозможно отредактировать через пользовательский интерфейс"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Подготовлено"
}
},
@@ -2703,6 +2699,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6427,15 +6424,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Классический",
"json": "JSON",
"v1-resource": "Ресурс V1",
"v2-resource": "Ресурс V2",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Действительно восстановить дашборд до версии {{version}}? Все несохраненные изменения будут потеряны.",
@@ -7034,7 +7028,6 @@
"drone-datasource": "Источник данных Drone",
"git-lab-integration-and-datasource": "Интеграция и источник данных GitLab",
"honeycomb-integration-and-datasource": "Интеграция и источник данных Honeycomb",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Интеграция и источник данных Jira",
"logic-monitor-devices-datasource": "Источник данных LogicMonitor Devices",
"mongo-db-integration-and-data-source": "Интеграция и источник данных MongoDB",
@@ -7897,6 +7890,7 @@
"export-externally-label": "Экспорт дашборда для использования в другом экземпляре",
"export-format": "Формат",
"export-mode": "Модель",
"export-remove-ds-refs": "Удалить сведения об использовании",
"info-text": "Скопируйте или загрузите файл, содержащий параметры вашего дашборда",
"title": "Экспорт дашборда"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Integration",
"label-notification-settings": "Aviseringsinställningar",
"label-section": "Valfria inställningar för {{name}}",
"test": "Test",
"tooltip-legacy-version": ""
"test": "Test"
},
"classic-condition-viewer": {
"of": "AV",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Den här resursen har etablerats via {{provenance}} och kan inte redigeras via användargränssnittet",
"badge-tooltip-standard": "Denna resurs har etablerats och kan inte redigeras via användargränssnittet",
"body-imported": "",
"body-provisioned": "Denna {{resource}} har provisionerats, vilket betyder att den skapades genom konfigurering. Kontakta serveradministratören om du vill uppdatera denna {{resource}}.",
"title-imported": "",
"title-provisioned": "Denna {{resource}} kan inte redigeras via användargränssnittet"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Provisionerad"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Classic",
"json": "JSON",
"v1-resource": "V1-resurs",
"v2-resource": "V2-resurs",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Är du säker på att du vill återställa instrumentpanelen till version {{version}}? Alla ändringar som inte sparats kommer att gå förlorade.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Drone-datakälla",
"git-lab-integration-and-datasource": "GitLab-integrering och datakälla",
"honeycomb-integration-and-datasource": "Honeycomb-integrering och datakälla",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira-integrering och datakälla",
"logic-monitor-devices-datasource": "LogicMonitor Devices-datakälla",
"mongo-db-integration-and-data-source": "MongoDB-integrering och datakälla",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Exportera instrumentpanelen om du vill använda i en annan instans",
"export-format": "Format",
"export-mode": "Modell",
"export-remove-ds-refs": "Ta bort distributionsinformation",
"info-text": "Kopiera eller ladda ner en fil som innehåller instrumentpanelens definition",
"title": "Exportera instrumentpanelen"
},
+4 -10
@@ -807,8 +807,7 @@
"label-integration": "Entegrasyon",
"label-notification-settings": "Bildirim ayarları",
"label-section": "İsteğe bağlı {{name}} ayarları",
"test": "Test",
"tooltip-legacy-version": ""
"test": "Test"
},
"classic-condition-viewer": {
"of": "-",
@@ -2177,14 +2176,11 @@
"provisioning": {
"badge-tooltip-provenance": "Bu kaynak {{provenance}} aracılığıyla sağlanmıştır ve kullanıcı arayüzü üzerinden düzenlenemez",
"badge-tooltip-standard": "Bu kaynak sağlanmıştır ve kullanıcı arayüzü üzerinden düzenlenemez",
"body-imported": "",
"body-provisioned": "Bu {{resource}} sağlanmış; yani yapılandırma tarafından oluşturulmuş. Bu {{resource}} ögesini güncellemek için sunucu yöneticinize başvurun.",
"title-imported": "",
"title-provisioned": "Bu {{resource}} kullanıcı arayüzü üzerinden düzenlenemez"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "Sağlanan"
}
},
@@ -2681,6 +2677,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6383,15 +6380,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "Klasik",
"json": "JSON",
"v1-resource": "V1 Kaynağı",
"v2-resource": "V2 Kaynağı",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "Panoyu {{version}} sürümüne geri yüklemek istediğinize emin misiniz? Kaydedilmemiş tüm değişiklikler kaybolacaktır.",
@@ -6986,7 +6980,6 @@
"drone-datasource": "Drone veri kaynağı",
"git-lab-integration-and-datasource": "GitLab entegrasyonu ve veri kaynağı",
"honeycomb-integration-and-datasource": "Honeycomb entegrasyonu ve veri kaynağı",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira entegrasyonu ve veri kaynağı",
"logic-monitor-devices-datasource": "LogicMonitor Devices veri kaynağı",
"mongo-db-integration-and-data-source": "MongoDB entegrasyonu ve veri kaynağı",
@@ -7845,6 +7838,7 @@
"export-externally-label": "Başka bir oturumda kullanmak için panoyu dışa aktarın",
"export-format": "Biçim",
"export-mode": "Model",
"export-remove-ds-refs": "Dağıtım ayrıntılarını kaldır",
"info-text": "Panonuzun tanımını içeren bir dosyayı kopyalayın veya indirin",
"title": "Panoyu dışa aktar"
},
+4 -10
@@ -804,8 +804,7 @@
"label-integration": "集成",
"label-notification-settings": "通知设置",
"label-section": "可选{{name}}设置",
"test": "测试",
"tooltip-legacy-version": ""
"test": "测试"
},
"classic-condition-viewer": {
"of": "OF",
@@ -2169,14 +2168,11 @@
"provisioning": {
"badge-tooltip-provenance": "此资源已通过 {{provenance}} 配置,无法通过用户界面编辑",
"badge-tooltip-standard": "此资源已配置,无法通过用户界面编辑",
"body-imported": "",
"body-provisioned": "此 {{resource}} 已预置,这意味着它是由配置创建的。请联系服务器管理员以更新此 {{resource}}。",
"title-imported": "",
"title-provisioned": "此 {{resource}} 无法通过用户界面编辑"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "已预置"
}
},
@@ -2670,6 +2666,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6361,15 +6358,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "经典版",
"json": "JSON",
"v1-resource": "V1 资源",
"v2-resource": "V2 资源",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "您确定要将数据面板还原到版本吗 {{version}}?所有未保存的更改都将丢失。",
@@ -6962,7 +6956,6 @@
"drone-datasource": "Drone 数据源",
"git-lab-integration-and-datasource": "GitLab 集成和数据源",
"honeycomb-integration-and-datasource": "Honeycomb 集成和数据源",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira 集成和数据源",
"logic-monitor-devices-datasource": "LogicMonitor Devices 数据源",
"mongo-db-integration-and-data-source": "MongoDB 集成和数据源",
@@ -7819,6 +7812,7 @@
"export-externally-label": "导出数据面板以便在其他实例中使用",
"export-format": "格式",
"export-mode": "模型",
"export-remove-ds-refs": "移除部署详情",
"info-text": "复制或下载包含数据面板定义的文件",
"title": "导出数据面板"
},
+4 -10
@@ -804,8 +804,7 @@
"label-integration": "整合",
"label-notification-settings": "通知設定",
"label-section": "可選{{name}}設定",
"test": "測試",
"tooltip-legacy-version": ""
"test": "測試"
},
"classic-condition-viewer": {
"of": "的",
@@ -2169,14 +2168,11 @@
"provisioning": {
"badge-tooltip-provenance": "此資源已透過 {{provenance}} 設定,無法透過使用者介面編輯",
"badge-tooltip-standard": "此資源已設定,無法透過使用者介面編輯",
"body-imported": "",
"body-provisioned": "此{{resource}}已佈建,這表示其是透過設定所建立。請聯絡您的伺服器管理員以更新此{{resource}}。",
"title-imported": "",
"title-provisioned": "此{{resource}}無法透過使用者介面編輯"
},
"provisioning-badge": {
"badge": {
"text-converted-prometheus": "",
"text-provisioned": "已佈建"
}
},
@@ -2670,6 +2666,7 @@
},
"saved-searches": {
"actions-aria-label": "",
"apply-aria-label": "",
"apply-tooltip": "",
"button-label": "",
"cancel": "",
@@ -6361,15 +6358,12 @@
},
"resource-export": {
"label": {
"advanced-options": "",
"classic": "經典",
"json": "JSON",
"v1-resource": "V1 資源",
"v2-resource": "V2 資源",
"yaml": "YAML"
},
"share-externally": "",
"share-externally-tooltip": ""
}
},
"revert-dashboard-modal": {
"body-restore-version": "確定要將儀表板還原到版本{{version}}嗎?所有未儲存變更將會遺失。",
@@ -6962,7 +6956,6 @@
"drone-datasource": "Drone 資料來源",
"git-lab-integration-and-datasource": "GitLab 整合與資料來源",
"honeycomb-integration-and-datasource": "Honeycomb 整合與資料來源",
"ibmdb2-datasource": "",
"jira-integration-and-datasource": "Jira 整合與資料來源",
"logic-monitor-devices-datasource": "LogicMonitor 裝置資料來源",
"mongo-db-integration-and-data-source": "MongoDB 整合與資料來源",
@@ -7819,6 +7812,7 @@
"export-externally-label": "匯出儀表板以便在另一個執行個體中使用",
"export-format": "格式",
"export-mode": "模式",
"export-remove-ds-refs": "移除部署詳細資料",
"info-text": "複製或下載包含儀表板定義的檔案",
"title": "匯出儀表板"
},