Compare commits: main...wb/pkg-plu (7 commits: 2c6e870a3a, 975480f10e, 3445cd3378, df790a4279, f3e28cf440, f8ab6ecc79, 6a955eb6d1)
@@ -4,8 +4,7 @@ comments: |
This file is used in the following visualizations: candlestick, heatmap, state timeline, status history, time series.

---

You can pan the panel time range left and right, and zoom it in and out.
This, in turn, changes the dashboard time range.
You can zoom the panel time range in and out, which, in turn, changes the dashboard time range.

**Zoom in** - Click and drag on the panel to zoom in on a particular time range.

@@ -17,9 +16,4 @@ For example, if the original time range is from 9:00 to 9:59, the time range cha
- Next range: 8:30 - 10:29
- Next range: 7:30 - 11:29

**Pan** - Click and drag the x-axis area of the panel to pan the time range.

The time range shifts by the distance you drag.
For example, if the original time range is from 9:00 to 9:59 and you drag 30 minutes to the right, the time range changes to 9:30 to 10:29.

For screen recordings showing these interactions, refer to the [Panel overview documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/panel-overview/#pan-and-zoom-panel-time-range).
For screen recordings showing these interactions, refer to the [Panel overview documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/panel-overview/#zoom-panel-time-range).
@@ -304,8 +304,7 @@ When things go bad, it often helps if you understand the context in which the fa

In the next part of the tutorial, we simulate some common use cases that someone would add annotations for.

1. To manually add an annotation, click anywhere on a graph line to open the data tooltip, then click **Add annotation**.
   You can also press `Ctrl` or `Command` and click anywhere in the graph to open the **Add annotation** dialog box.
1. To manually add an annotation, click anywhere in your graph, then click **Add annotation**.
   Note: you might need to save the dashboard first.
1. In **Description**, enter **Migrated user database**.
1. Click **Save**.
@@ -317,16 +317,13 @@ Click the **Copy time range to clipboard** icon to copy the current time range t

You can also copy and paste a time range using the keyboard shortcuts `t+c` and `t+v` respectively.

#### Zoom out
#### Zoom out (Cmd+Z or Ctrl+Z)

- Click the **Zoom out** icon to view a larger time range in the dashboard or panel visualizations
- Double click on the panel graph area (time series family visualizations only)
- Type the `t-` keyboard shortcut
Click the **Zoom out** icon to view a larger time range in the dashboard or panel visualization.

#### Zoom in
#### Zoom in (only applicable to graph visualizations)

- Click and drag horizontally in the panel graph area to select a time range (time series family visualizations only)
- Type the `t+` keyboard shortcut
Click and drag to select the time range in the visualization that you want to view.

#### Refresh dashboard
@@ -146,7 +146,7 @@ To create a variable, follow these steps:

- Variable drop-down lists are displayed in the order in which they're listed in **Variables** in the dashboard settings, so put the variables that you change often at the top so that they appear first (far left on the dashboard).
- By default, variables don't have a default value. This means that the topmost value in the drop-down list is always preselected. If you want to pre-populate a variable with an empty value, you can use the following workaround in the variable settings:
  1. Select the **Include All Option** checkbox.
  2. In the **Custom all value** field, enter a value like `.+`.
  2. In the **Custom all value** field, enter a value like `+`.

## Add a query variable
@@ -175,10 +175,9 @@ By hovering over a panel with the mouse you can use some shortcuts that will tar

- `pl`: Hide or show legend
- `pr`: Remove Panel

## Pan and zoom panel time range
## Zoom panel time range

You can pan the panel time range left and right, and zoom it in and out.
This, in turn, changes the dashboard time range.
You can zoom the panel time range in and out, which, in turn, changes the dashboard time range.

This feature is supported for the following visualizations:

@@ -192,7 +191,7 @@ This feature is supported for the following visualizations:

Click and drag on the panel to zoom in on a particular time range.

The following screen recordings show this interaction in the time series and candlestick visualizations:
The following screen recordings show this interaction in the time series and x visualizations:

Time series

@@ -212,7 +211,7 @@ For example, if the original time range is from 9:00 to 9:59, the time range cha

- Next range: 8:30 - 10:29
- Next range: 7:30 - 11:29

The following screen recordings demonstrate the preceding example in the time series and heatmap visualizations:
The following screen recordings demonstrate the preceding example in the time series and x visualizations:

Time series

@@ -222,19 +221,6 @@ Heatmap

{{< video-embed src="/media/docs/grafana/panels-visualizations/recording-heatmap-panel-time-zoom-out-mouse.mp4" >}}

### Pan

Click and drag the x-axis area of the panel to pan the time range.

The time range shifts by the distance you drag.
For example, if the original time range is from 9:00 to 9:59 and you drag 30 minutes to the right, the time range changes to 9:30 to 10:29.

The following screen recordings show this interaction in the time series visualization:

Time series

{{< video-embed src="/media/docs/grafana/panels-visualizations/recording-ts-time-pan-mouse.mp4" >}}

## Add a panel

To add a panel in a new dashboard click **+ Add visualization** in the middle of the dashboard:
@@ -92,9 +92,9 @@ The data is converted as follows:

{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-candles-volume-v11.6.png" max-width="750px" alt="A candlestick visualization showing the price movements of specific asset." >}}

## Pan and zoom panel time range
## Zoom panel time range

{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options

@@ -79,9 +79,9 @@ The data is converted as follows:

{{< figure src="/static/img/docs/heatmap-panel/heatmap.png" max-width="1025px" alt="A heatmap visualization showing the random walk distribution over time" >}}

## Pan and zoom panel time range
## Zoom panel time range

{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options

@@ -93,9 +93,9 @@ You can also create a state timeline visualization using time series data. To do



## Pan and zoom panel time range
## Zoom panel time range

{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options

@@ -85,9 +85,9 @@ The data is converted as follows:

{{< figure src="/static/img/docs/status-history-panel/status_history.png" max-width="1025px" alt="A status history panel with two time columns showing the status of two servers" >}}

## Pan and zoom panel time range
## Zoom panel time range

{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options

@@ -167,9 +167,9 @@ The following example shows three series: Min, Max, and Value. The Min and Max s

{{< docs/shared lookup="visualizations/multiple-y-axes.md" source="grafana" version="<GRAFANA_VERSION>" leveloffset="+2" >}}

## Pan and zoom panel time range
## Zoom panel time range

{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}

## Configuration options
@@ -117,44 +117,6 @@ export const MyComponent = () => {
};
```

### Custom Header Rendering

Column headers can be customized using strings, React elements, or renderer functions. The `header` property accepts any value that matches React Table's `Renderer` type.

**Important:** When using custom header content, prefer inline elements (like `<span>`) over block elements (like `<div>`) to avoid layout issues. Block-level elements can cause extra spacing and alignment problems in table headers because they disrupt the table's inline flow. Use `display: inline-flex` or `display: inline-block` when you need flexbox or block-like behavior.

```tsx
const columns: Array<Column<TableData>> = [
  // React element header
  {
    id: 'checkbox',
    header: (
      <>
        <label htmlFor="select-all" className="sr-only">
          Select all rows
        </label>
        <Checkbox id="select-all" />
      </>
    ),
    cell: () => <Checkbox aria-label="Select row" />,
  },

  // Function renderer header
  {
    id: 'firstName',
    header: () => (
      <span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
        <Icon name="user" size="sm" />
        <span>First Name</span>
      </span>
    ),
  },

  // String header
  { id: 'lastName', header: 'Last name' },
];
```

### Custom Cell Rendering

Individual cells can be rendered using custom content by defining a `cell` property on the column definition.
@@ -3,11 +3,8 @@ import { useCallback, useMemo, useState } from 'react';
import { CellProps } from 'react-table';

import { LinkButton } from '../Button/Button';
import { Checkbox } from '../Forms/Checkbox';
import { Field } from '../Forms/Field';
import { Icon } from '../Icon/Icon';
import { Input } from '../Input/Input';
import { Text } from '../Text/Text';

import { FetchDataArgs, InteractiveTable, InteractiveTableHeaderTooltip } from './InteractiveTable';
import mdx from './InteractiveTable.mdx';

@@ -300,40 +297,4 @@ export const WithControlledSort: StoryFn<typeof InteractiveTable> = (args) => {
  return <InteractiveTable {...args} data={data} pageSize={15} fetchData={fetchData} />;
};

export const WithCustomHeader: TableStoryObj = {
  args: {
    columns: [
      // React element header
      {
        id: 'checkbox',
        header: (
          <>
            <label htmlFor="select-all" className="sr-only">
              Select all rows
            </label>
            <Checkbox id="select-all" />
          </>
        ),
        cell: () => <Checkbox aria-label="Select row" />,
      },
      // Function renderer header
      {
        id: 'firstName',
        header: () => (
          <span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
            <Icon name="user" size="sm" />
            <Text element="span">First Name</Text>
          </span>
        ),
        sortType: 'string',
      },
      // String header
      { id: 'lastName', header: 'Last name', sortType: 'string' },
      { id: 'car', header: 'Car', sortType: 'string' },
      { id: 'age', header: 'Age', sortType: 'number' },
    ],
    data: pageableData.slice(0, 10),
    getRowId: (r) => r.id,
  },
};
export default meta;
@@ -2,9 +2,6 @@ import { render, screen, within } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import * as React from 'react';

import { Checkbox } from '../Forms/Checkbox';
import { Icon } from '../Icon/Icon';

import { InteractiveTable } from './InteractiveTable';
import { Column } from './types';

@@ -250,104 +247,4 @@ describe('InteractiveTable', () => {
    expect(fetchData).toHaveBeenCalledWith({ sortBy: [{ id: 'id', desc: false }] });
  });
});

describe('custom header rendering', () => {
  it('should render string headers', () => {
    const columns: Array<Column<TableData>> = [{ id: 'id', header: 'ID' }];
    const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
    render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);

    expect(screen.getByRole('columnheader', { name: 'ID' })).toBeInTheDocument();
  });

  it('should render React element headers', () => {
    const columns: Array<Column<TableData>> = [
      {
        id: 'checkbox',
        header: (
          <>
            <label htmlFor="select-all" className="sr-only">
              Select all rows
            </label>
            <Checkbox id="select-all" data-testid="header-checkbox" />
          </>
        ),
        cell: () => <Checkbox data-testid="cell-checkbox" aria-label="Select row" />,
      },
    ];
    const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
    render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);

    expect(screen.getByTestId('header-checkbox')).toBeInTheDocument();
    expect(screen.getByTestId('cell-checkbox')).toBeInTheDocument();
    expect(screen.getByLabelText('Select all rows')).toBeInTheDocument();
    expect(screen.getByLabelText('Select row')).toBeInTheDocument();
    expect(screen.getByText('Select all rows')).toBeInTheDocument();
  });

  it('should render function renderer headers', () => {
    const columns: Array<Column<TableData>> = [
      {
        id: 'firstName',
        header: () => (
          <span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
            <Icon name="user" size="sm" data-testid="header-icon" />
            <span>First Name</span>
          </span>
        ),
        sortType: 'string',
      },
    ];
    const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
    render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);

    expect(screen.getByTestId('header-icon')).toBeInTheDocument();
    expect(screen.getByRole('columnheader', { name: /first name/i })).toBeInTheDocument();
  });

  it('should render all header types together', () => {
    const columns: Array<Column<TableData>> = [
      {
        id: 'checkbox',
        header: (
          <>
            <label htmlFor="select-all" className="sr-only">
              Select all rows
            </label>
            <Checkbox id="select-all" data-testid="header-checkbox" />
          </>
        ),
        cell: () => <Checkbox aria-label="Select row" />,
      },
      {
        id: 'id',
        header: () => (
          <span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
            <Icon name="user" size="sm" data-testid="header-icon" />
            <span>ID</span>
          </span>
        ),
        sortType: 'string',
      },
      { id: 'country', header: 'Country', sortType: 'string' },
      { id: 'value', header: 'Value' },
    ];
    const data: TableData[] = [
      { id: '1', value: 'Value 1', country: 'Sweden' },
      { id: '2', value: 'Value 2', country: 'Norway' },
    ];
    render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);

    expect(screen.getByTestId('header-checkbox')).toBeInTheDocument();
    expect(screen.getByTestId('header-icon')).toBeInTheDocument();
    expect(screen.getByRole('columnheader', { name: 'Country' })).toBeInTheDocument();
    expect(screen.getByRole('columnheader', { name: 'Value' })).toBeInTheDocument();

    // Verify data is rendered
    expect(screen.getByText('Sweden')).toBeInTheDocument();
    expect(screen.getByText('Norway')).toBeInTheDocument();
    expect(screen.getByText('Value 1')).toBeInTheDocument();
    expect(screen.getByText('Value 2')).toBeInTheDocument();
  });
});
});
@@ -1,5 +1,5 @@
import { ReactNode } from 'react';
import { CellProps, DefaultSortTypes, HeaderProps, IdType, Renderer, SortByFn } from 'react-table';
import { CellProps, DefaultSortTypes, IdType, SortByFn } from 'react-table';

export interface Column<TableData extends object> {
  /**
@@ -11,9 +11,9 @@ export interface Column<TableData extends object> {
   */
  cell?: (props: CellProps<TableData>) => ReactNode;
  /**
   * Header name. Can be a string, renderer function, or undefined. If `undefined` the header will be empty. Useful for action columns.
   * Header name. If `undefined` the header will be empty. Useful for action columns.
   */
  header?: Renderer<HeaderProps<TableData>>;
  header?: string;
  /**
   * Column sort type. If `undefined` the column will not be sortable.
   */
@@ -76,27 +76,21 @@ func (hs *HTTPServer) CreateDashboardSnapshot(c *contextmodel.ReqContext) {
		return
	}

	cfg := snapshot.SnapshotSharingOptions{
	// Do not check permissions when the instance snapshot public mode is enabled
	if !hs.Cfg.SnapshotPublicMode {
		evaluator := ac.EvalAll(ac.EvalPermission(dashboards.ActionSnapshotsCreate), ac.EvalPermission(dashboards.ActionDashboardsRead, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(cmd.Dashboard.GetNestedString("uid"))))
		if canSave, err := hs.AccessControl.Evaluate(c.Req.Context(), c.SignedInUser, evaluator); err != nil || !canSave {
			c.JsonApiErr(http.StatusForbidden, "forbidden", err)
			return
		}
	}

	dashboardsnapshots.CreateDashboardSnapshot(c, snapshot.SnapshotSharingOptions{
		SnapshotsEnabled:     hs.Cfg.SnapshotEnabled,
		ExternalEnabled:      hs.Cfg.ExternalEnabled,
		ExternalSnapshotName: hs.Cfg.ExternalSnapshotName,
		ExternalSnapshotURL:  hs.Cfg.ExternalSnapshotUrl,
	}

	if hs.Cfg.SnapshotPublicMode {
		// Public mode: no user or dashboard validation needed
		dashboardsnapshots.CreateDashboardSnapshotPublic(c, cfg, cmd, hs.dashboardsnapshotsService)
		return
	}

	// Regular mode: check permissions
	evaluator := ac.EvalAll(ac.EvalPermission(dashboards.ActionSnapshotsCreate), ac.EvalPermission(dashboards.ActionDashboardsRead, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(cmd.Dashboard.GetNestedString("uid"))))
	if canSave, err := hs.AccessControl.Evaluate(c.Req.Context(), c.SignedInUser, evaluator); err != nil || !canSave {
		c.JsonApiErr(http.StatusForbidden, "forbidden", err)
		return
	}

	dashboardsnapshots.CreateDashboardSnapshot(c, cfg, cmd, hs.dashboardsnapshotsService)
	}, cmd, hs.dashboardsnapshotsService)
}

// GET /api/snapshots/:key
@@ -219,6 +213,13 @@ func (hs *HTTPServer) DeleteDashboardSnapshot(c *contextmodel.ReqContext) respon
		return response.Error(http.StatusUnauthorized, "OrgID mismatch", nil)
	}

	if queryResult.External {
		err := dashboardsnapshots.DeleteExternalDashboardSnapshot(queryResult.ExternalDeleteURL)
		if err != nil {
			return response.Error(http.StatusInternalServerError, "Failed to delete external dashboard", err)
		}
	}

	// Dashboard can be empty (creation error or external snapshot). This means that the mustInt here returns a 0,
	// which before RBAC would result in a dashboard which has no ACL. A dashboard without an ACL would fall back
	// to the user’s org role, which for editors and admins would essentially always be allowed here. With RBAC,
@@ -238,13 +239,6 @@ func (hs *HTTPServer) DeleteDashboardSnapshot(c *contextmodel.ReqContext) respon
	}
	}

	if queryResult.External {
		err := dashboardsnapshots.DeleteExternalDashboardSnapshot(queryResult.ExternalDeleteURL)
		if err != nil {
			return response.Error(http.StatusInternalServerError, "Failed to delete external dashboard", err)
		}
	}

	cmd := &dashboardsnapshots.DeleteDashboardSnapshotCommand{DeleteKey: queryResult.DeleteKey}

	if err := hs.dashboardsnapshotsService.DeleteDashboardSnapshot(c.Req.Context(), cmd); err != nil {
@@ -20,8 +20,10 @@ import (
	"github.com/grafana/grafana/pkg/plugins"
	"github.com/grafana/grafana/pkg/plugins/config"
	"github.com/grafana/grafana/pkg/plugins/manager/pluginfakes"
	"github.com/grafana/grafana/pkg/plugins/manager/registry"
	"github.com/grafana/grafana/pkg/plugins/manager/signature"
	"github.com/grafana/grafana/pkg/plugins/manager/signature/statickey"
	"github.com/grafana/grafana/pkg/plugins/pluginassets/modulehash"
	"github.com/grafana/grafana/pkg/plugins/pluginscdn"
	accesscontrolmock "github.com/grafana/grafana/pkg/services/accesscontrol/mock"
	"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"

@@ -80,7 +82,8 @@ func setupTestEnvironment(t *testing.T, cfg *setting.Cfg, features featuremgmt.F
	var pluginsAssets = passets
	if pluginsAssets == nil {
		sig := signature.ProvideService(pluginsCfg, statickey.New())
		pluginsAssets = pluginassets.ProvideService(pluginsCfg, pluginsCDN, sig, pluginStore)
		calc := modulehash.NewCalculator(pluginsCfg, registry.NewInMemory(), pluginsCDN, sig)
		pluginsAssets = pluginassets.ProvideService(pluginsCfg, pluginsCDN, calc)
	}

	hs := &HTTPServer{

@@ -714,6 +717,8 @@ func newPluginAssets() func() *pluginassets.Service {

func newPluginAssetsWithConfig(pCfg *config.PluginManagementCfg) func() *pluginassets.Service {
	return func() *pluginassets.Service {
		return pluginassets.ProvideService(pCfg, pluginscdn.ProvideService(pCfg), signature.ProvideService(pCfg, statickey.New()), &pluginstore.FakePluginStore{})
		cdn := pluginscdn.ProvideService(pCfg)
		calc := modulehash.NewCalculator(pCfg, registry.NewInMemory(), cdn, signature.ProvideService(pCfg, statickey.New()))
		return pluginassets.ProvideService(pCfg, cdn, calc)
	}
}
@@ -32,8 +32,6 @@ import (
var (
	logger = glog.New("data-proxy-log")
	client = newHTTPClient()

	errPluginProxyRouteAccessDenied = errors.New("plugin proxy route access denied")
)

type DataSourceProxy struct {

@@ -310,21 +308,12 @@ func (proxy *DataSourceProxy) validateRequest() error {
	if err != nil {
		return err
	}
	// issues/116273: When we have an empty input route (or input that becomes relative to "."), we do not want it
	// to be ".". This is because the `CleanRelativePath` function will never return "./" prefixes, and as such,
	// the common prefix we need is an empty string.
	if r1 == "." && proxy.proxyPath != "." {
		r1 = ""
	}
	if r2 == "." && route.Path != "." {
		r2 = ""
	}
	if !strings.HasPrefix(r1, r2) {
		continue
	}

	if !proxy.hasAccessToRoute(route) {
		return errPluginProxyRouteAccessDenied
		return errors.New("plugin proxy route access denied")
	}

	proxy.matchedRoute = route
@@ -673,94 +673,6 @@ func TestIntegrationDataSourceProxy_routeRule(t *testing.T) {
			runDatasourceAuthTest(t, secretsService, secretsStore, cfg, test)
		}
	})

	t.Run("Regression of 116273: Fallback routes should apply fallback route roles", func(t *testing.T) {
		for _, tc := range []struct {
			InputPath         string
			ConfigurationPath string
			ExpectError       bool
		}{
			{
				InputPath:         "api/v2/leak-ur-secrets",
				ConfigurationPath: "",
				ExpectError:       true,
			},
			{
				InputPath:         "",
				ConfigurationPath: "",
				ExpectError:       true,
			},
			{
				InputPath:         ".",
				ConfigurationPath: ".",
				ExpectError:       true,
			},
			{
				InputPath:         "",
				ConfigurationPath: ".",
				ExpectError:       false,
			},
			{
				InputPath:         "api",
				ConfigurationPath: ".",
				ExpectError:       false,
			},
		} {
			orEmptyStr := func(s string) string {
				if s == "" {
					return "<empty>"
				}
				return s
			}
			t.Run(
				fmt.Sprintf("with inputPath=%s, configurationPath=%s, expectError=%v",
					orEmptyStr(tc.InputPath), orEmptyStr(tc.ConfigurationPath), tc.ExpectError),
				func(t *testing.T) {
					ds := &datasources.DataSource{
						UID:      "dsUID",
						JsonData: simplejson.New(),
					}
					routes := []*plugins.Route{
						{
							Path:    tc.ConfigurationPath,
							ReqRole: org.RoleAdmin,
							Method:  "GET",
						},
						{
							Path:    tc.ConfigurationPath,
							ReqRole: org.RoleAdmin,
							Method:  "POST",
						},
						{
							Path:    tc.ConfigurationPath,
							ReqRole: org.RoleAdmin,
							Method:  "PUT",
						},
						{
							Path:    tc.ConfigurationPath,
							ReqRole: org.RoleAdmin,
							Method:  "DELETE",
						},
					}

					req, err := http.NewRequestWithContext(t.Context(), "GET", "http://localhost/"+tc.InputPath, nil)
					require.NoError(t, err, "failed to create HTTP request")
					ctx := &contextmodel.ReqContext{
						Context:      &web.Context{Req: req},
						SignedInUser: &user.SignedInUser{OrgRole: org.RoleViewer},
					}
					proxy, err := setupDSProxyTest(t, ctx, ds, routes, tc.InputPath)
					require.NoError(t, err, "failed to setup proxy test")
					err = proxy.validateRequest()
					if tc.ExpectError {
						require.ErrorIs(t, err, errPluginProxyRouteAccessDenied, "request was not denied due to access denied?")
					} else {
						require.NoError(t, err, "request was unexpectedly denied access")
					}
				},
			)
		}
	})
}

// test DataSourceProxy request handling.
@@ -30,6 +30,7 @@ import (
	"github.com/grafana/grafana/pkg/plugins/manager/registry"
	"github.com/grafana/grafana/pkg/plugins/manager/signature"
	"github.com/grafana/grafana/pkg/plugins/manager/signature/statickey"
	"github.com/grafana/grafana/pkg/plugins/pluginassets/modulehash"
	"github.com/grafana/grafana/pkg/plugins/pluginerrs"
	"github.com/grafana/grafana/pkg/plugins/pluginscdn"
	ac "github.com/grafana/grafana/pkg/services/accesscontrol"

@@ -849,7 +850,8 @@ func Test_PluginsSettings(t *testing.T) {
	pCfg := &config.PluginManagementCfg{}
	pluginCDN := pluginscdn.ProvideService(pCfg)
	sig := signature.ProvideService(pCfg, statickey.New())
	hs.pluginAssets = pluginassets.ProvideService(pCfg, pluginCDN, sig, hs.pluginStore)
	calc := modulehash.NewCalculator(pCfg, registry.NewInMemory(), pluginCDN, sig)
	hs.pluginAssets = pluginassets.ProvideService(pCfg, pluginCDN, calc)
	hs.pluginErrorResolver = pluginerrs.ProvideStore(errTracker)
	hs.pluginsUpdateChecker, err = updatemanager.ProvidePluginsService(
		hs.Cfg,
pkg/plugins/pluginassets/modulehash/modulehash.go (new file, 144 lines)
@@ -0,0 +1,144 @@
package modulehash

import (
	"context"
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"path"
	"path/filepath"
	"sync"

	"github.com/grafana/grafana/pkg/plugins"
	"github.com/grafana/grafana/pkg/plugins/config"
	"github.com/grafana/grafana/pkg/plugins/log"
	"github.com/grafana/grafana/pkg/plugins/manager/registry"
	"github.com/grafana/grafana/pkg/plugins/manager/signature"
	"github.com/grafana/grafana/pkg/plugins/pluginscdn"
)

type Calculator struct {
	reg       registry.Service
	cfg       *config.PluginManagementCfg
	cdn       *pluginscdn.Service
	signature *signature.Signature
	log       log.Logger

	moduleHashCache sync.Map
}

func NewCalculator(cfg *config.PluginManagementCfg, reg registry.Service, cdn *pluginscdn.Service, signature *signature.Signature) *Calculator {
	return &Calculator{
		cfg:       cfg,
		reg:       reg,
		cdn:       cdn,
		signature: signature,
		log:       log.New("modulehash"),
	}
}

// ModuleHash returns the module.js SHA256 hash for a plugin in the format expected by the browser for SRI checks.
// The module hash is read from the plugin's MANIFEST.txt file.
// The plugin can also be a nested plugin.
// If the plugin is unsigned, an empty string is returned.
// The results are cached to avoid repeated reads from the MANIFEST.txt file.
func (c *Calculator) ModuleHash(ctx context.Context, pluginID, pluginVersion string) string {
	p, ok := c.reg.Plugin(ctx, pluginID, pluginVersion)
	if !ok {
		c.log.Error("Failed to calculate module hash as plugin is not registered", "pluginId", pluginID)
		return ""
	}
	k := c.moduleHashCacheKey(pluginID, pluginVersion)
	cachedValue, ok := c.moduleHashCache.Load(k)
	if ok {
		return cachedValue.(string)
	}
	mh, err := c.moduleHash(ctx, p, "")
	if err != nil {
		c.log.Error("Failed to calculate module hash", "pluginId", p.ID, "error", err)
	}
	c.moduleHashCache.Store(k, mh)
	return mh
}

// moduleHash is the underlying function for ModuleHash. See its documentation for more information.
// If the plugin is not a CDN plugin, the function will return an empty string.
// It will read the module hash from the MANIFEST.txt in the [plugins.FS] of the provided plugin.
// If childFSBase is provided, the function will try to get the hash from MANIFEST.txt for the provided children's
// module.js file, rather than for the provided plugin.
func (c *Calculator) moduleHash(ctx context.Context, p *plugins.Plugin, childFSBase string) (r string, err error) {
	if !c.cfg.Features.SriChecksEnabled {
		return "", nil
	}

	// Ignore unsigned plugins
	if !p.Signature.IsValid() {
		return "", nil
	}

	if p.Parent != nil {
		// The module hash is contained within the parent's MANIFEST.txt file.
		// For example, the parent's MANIFEST.txt will contain an entry similar to this:
		//
		// ```
		// "datasource/module.js": "1234567890abcdef..."
		// ```
		//
		// Recursively call moduleHash with the parent plugin and with the children plugin folder path
		// to get the correct module hash for the nested plugin.
		if childFSBase == "" {
			childFSBase = p.FS.Base()
		}
		return c.moduleHash(ctx, p.Parent, childFSBase)
	}

	// Only CDN plugins are supported for SRI checks.
	// CDN plugins have the version as part of the URL, which acts as a cache-buster.
	// Needed due to: https://github.com/grafana/plugin-tools/pull/1426
	// FS plugins built before this change will have SRI mismatch issues.
	if !c.cdnEnabled(p.ID, p.FS) {
		return "", nil
	}

	manifest, err := c.signature.ReadPluginManifestFromFS(ctx, p.FS)
	if err != nil {
		return "", fmt.Errorf("read plugin manifest: %w", err)
	}
	if !manifest.IsV2() {
		return "", nil
	}

	var childPath string
	if childFSBase != "" {
		// Calculate the relative path of the child plugin folder from the parent plugin folder.
		childPath, err = p.FS.Rel(childFSBase)
		if err != nil {
			return "", fmt.Errorf("rel path: %w", err)
		}
		// MANIFEST.txt uses forward slashes as path separators.
		childPath = filepath.ToSlash(childPath)
	}
	moduleHash, ok := manifest.Files[path.Join(childPath, "module.js")]
	if !ok {
		return "", nil
	}
	return convertHashForSRI(moduleHash)
}

func (c *Calculator) cdnEnabled(pluginID string, fs plugins.FS) bool {
	return c.cdn.PluginSupported(pluginID) || fs.Type().CDN()
}

// convertHashForSRI takes a SHA256 hash string and returns it as expected by the browser for SRI checks.
|
||||
func convertHashForSRI(h string) (string, error) {
|
||||
hb, err := hex.DecodeString(h)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("hex decode string: %w", err)
|
||||
}
|
||||
return "sha256-" + base64.StdEncoding.EncodeToString(hb), nil
|
||||
}
|
||||
|
||||
// moduleHashCacheKey returns a unique key for the module hash cache.
|
||||
func (c *Calculator) moduleHashCacheKey(pluginId, pluginVersion string) string {
|
||||
return pluginId + ":" + pluginVersion
|
||||
}
|
||||
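The hex-to-SRI conversion performed by `convertHashForSRI` above can be reproduced standalone. The sketch below (with a hypothetical helper name `sriFromHex`) shows the same transformation: decode the hex digest stored in MANIFEST.txt, then re-encode it as base64 with the `sha256-` prefix that browsers expect in a script tag's `integrity` attribute. The input/output pair is taken from the test table in this diff.

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// sriFromHex converts a hex-encoded SHA256 digest (as stored in MANIFEST.txt)
// into the base64 "sha256-..." form used for Subresource Integrity checks.
// Hypothetical standalone equivalent of convertHashForSRI, for illustration.
func sriFromHex(h string) (string, error) {
	hb, err := hex.DecodeString(h)
	if err != nil {
		return "", fmt.Errorf("hex decode string: %w", err)
	}
	return "sha256-" + base64.StdEncoding.EncodeToString(hb), nil
}

func main() {
	// Known pair from TestConvertHashFromSRI in this change set.
	s, err := sriFromHex("ddfcb449445064e6c39f0c20b15be3cb6a55837cf4781df23d02de005f436811")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // sha256-3fy0SURQZObDnwwgsVvjy2pVg3z0eB3yPQLeAF9DaBE=
}
```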
459 pkg/plugins/pluginassets/modulehash/modulehash_test.go Normal file
@@ -0,0 +1,459 @@
package modulehash

import (
	"context"
	"fmt"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"

	"github.com/grafana/grafana/pkg/plugins"
	"github.com/grafana/grafana/pkg/plugins/config"
	"github.com/grafana/grafana/pkg/plugins/manager/registry"
	"github.com/grafana/grafana/pkg/plugins/manager/signature"
	"github.com/grafana/grafana/pkg/plugins/manager/signature/statickey"
	"github.com/grafana/grafana/pkg/plugins/pluginscdn"
)

func Test_ModuleHash(t *testing.T) {
	const (
		pluginID       = "grafana-test-datasource"
		parentPluginID = "grafana-test-app"
	)
	for _, tc := range []struct {
		name     string
		features *config.Features
		registry []*plugins.Plugin

		// Can be used to configure plugin's fs
		// fs cdn type = loaded from CDN with no files on disk
		// fs local type = files on disk but served from CDN only if cdn=true
		plugin string

		// When true, set cdn=true in config
		cdn           bool
		expModuleHash string
	}{
		{
			name:          "unsigned should not return module hash",
			plugin:        pluginID,
			registry:      []*plugins.Plugin{newPlugin(pluginID, withSignatureStatus(plugins.SignatureStatusUnsigned))},
			cdn:           false,
			features:      &config.Features{SriChecksEnabled: false},
			expModuleHash: "",
		},
		{
			plugin: pluginID,
			registry: []*plugins.Plugin{newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid"))),
				withClass(plugins.ClassExternal),
			)},
			cdn:           true,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: newSRIHash(t, "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"),
		},
		{
			plugin: pluginID,
			registry: []*plugins.Plugin{newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid"))),
				withClass(plugins.ClassExternal),
			)},
			cdn:           true,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: newSRIHash(t, "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"),
		},
		{
			plugin: pluginID,
			registry: []*plugins.Plugin{newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid"))),
			)},
			cdn:           false,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: "",
		},
		{
			plugin: pluginID,
			registry: []*plugins.Plugin{newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid"))),
			)},
			cdn:           true,
			features:      &config.Features{SriChecksEnabled: false},
			expModuleHash: "",
		},
		{
			plugin: pluginID,
			registry: []*plugins.Plugin{newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid"))),
			)},
			cdn:           false,
			features:      &config.Features{SriChecksEnabled: false},
			expModuleHash: "",
		},
		{
			// parentPluginID (/)
			// └── pluginID (/datasource)
			name:   "nested plugin should return module hash from parent MANIFEST.txt",
			plugin: pluginID,
			registry: []*plugins.Plugin{
				newPlugin(
					pluginID,
					withSignatureStatus(plugins.SignatureStatusValid),
					withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-nested", "datasource"))),
					withParent(newPlugin(
						parentPluginID,
						withSignatureStatus(plugins.SignatureStatusValid),
						withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-nested"))),
					)),
				),
			},
			cdn:           true,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: newSRIHash(t, "04d70db091d96c4775fb32ba5a8f84cc22893eb43afdb649726661d4425c6711"),
		},
		{
			// parentPluginID (/)
			// └── pluginID (/panels/one)
			name:   "nested plugin deeper than one subfolder should return module hash from parent MANIFEST.txt",
			plugin: pluginID,
			registry: []*plugins.Plugin{
				newPlugin(
					pluginID,
					withSignatureStatus(plugins.SignatureStatusValid),
					withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-nested", "panels", "one"))),
					withParent(newPlugin(
						parentPluginID,
						withSignatureStatus(plugins.SignatureStatusValid),
						withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-nested"))),
					)),
				),
			},
			cdn:           true,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: newSRIHash(t, "cbd1ac2284645a0e1e9a8722a729f5bcdd2b831222728709c6360beecdd6143f"),
		},
		{
			// grand-parent-app (/)
			// ├── parent-datasource (/datasource)
			// │   └── child-panel (/datasource/panels/one)
			name: "nested plugin of a nested plugin should return module hash from parent MANIFEST.txt",
			registry: []*plugins.Plugin{
				newPlugin(
					"child-panel",
					withSignatureStatus(plugins.SignatureStatusValid),
					withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-deeply-nested", "datasource", "panels", "one"))),
					withParent(newPlugin(
						"parent-datasource",
						withSignatureStatus(plugins.SignatureStatusValid),
						withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-deeply-nested", "datasource"))),
						withParent(newPlugin(
							"grand-parent-app",
							withSignatureStatus(plugins.SignatureStatusValid),
							withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-deeply-nested"))),
						)),
					),
					),
				),
			},
			plugin:        "child-panel",
			cdn:           true,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: newSRIHash(t, "cbd1ac2284645a0e1e9a8722a729f5bcdd2b831222728709c6360beecdd6143f"),
		},
		{
			name:   "nested plugin should not return module hash from parent if it's not registered in the registry",
			plugin: pluginID,
			registry: []*plugins.Plugin{newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-nested", "panels", "one"))),
				withParent(newPlugin(
					parentPluginID,
					withSignatureStatus(plugins.SignatureStatusValid),
					withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-nested"))),
				)),
			)},
			cdn:           false,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: "",
		},
		{
			name:   "missing module.js entry from MANIFEST.txt should not return module hash",
			plugin: pluginID,
			registry: []*plugins.Plugin{newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-no-module-js"))),
			)},
			cdn:           false,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: "",
		},
		{
			name:   "signed status but missing MANIFEST.txt should not return module hash",
			plugin: pluginID,
			registry: []*plugins.Plugin{newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-no-manifest-txt"))),
			)},
			cdn:           false,
			features:      &config.Features{SriChecksEnabled: true},
			expModuleHash: "",
		},
	} {
		if tc.name == "" {
			var expS string
			if tc.expModuleHash == "" {
				expS = "should not return module hash"
			} else {
				expS = "should return module hash"
			}
			tc.name = fmt.Sprintf("feature=%v, cdn_config=%v %s", tc.features.SriChecksEnabled, tc.cdn, expS)
		}

		t.Run(tc.name, func(t *testing.T) {
			var pluginSettings config.PluginSettings
			if tc.cdn {
				pluginSettings = config.PluginSettings{
					pluginID: {
						"cdn": "true",
					},
					parentPluginID: map[string]string{
						"cdn": "true",
					},
					"grand-parent-app": map[string]string{
						"cdn": "true",
					},
				}
			}
			features := tc.features
			if features == nil {
				features = &config.Features{}
			}
			pCfg := &config.PluginManagementCfg{
				PluginsCDNURLTemplate: "http://cdn.example.com",
				PluginSettings:        pluginSettings,
				Features:              *features,
			}

			svc := NewCalculator(
				pCfg,
				newPluginRegistry(t, tc.registry...),
				pluginscdn.ProvideService(pCfg),
				signature.ProvideService(pCfg, statickey.New()),
			)
			mh := svc.ModuleHash(context.Background(), tc.plugin, "")
			require.Equal(t, tc.expModuleHash, mh)
		})
	}
}

func Test_ModuleHash_Cache(t *testing.T) {
	pCfg := &config.PluginManagementCfg{
		PluginSettings: config.PluginSettings{},
		Features:       config.Features{SriChecksEnabled: true},
	}
	svc := NewCalculator(
		pCfg,
		newPluginRegistry(t),
		pluginscdn.ProvideService(pCfg),
		signature.ProvideService(pCfg, statickey.New()),
	)
	const pluginID = "grafana-test-datasource"

	t.Run("cache key", func(t *testing.T) {
		t.Run("with version", func(t *testing.T) {
			const pluginVersion = "1.0.0"
			p := newPlugin(pluginID, withInfo(plugins.Info{Version: pluginVersion}))
			k := svc.moduleHashCacheKey(p.ID, p.Info.Version)
			require.Equal(t, pluginID+":"+pluginVersion, k, "cache key should be correct")
		})

		t.Run("without version", func(t *testing.T) {
			p := newPlugin(pluginID)
			k := svc.moduleHashCacheKey(p.ID, p.Info.Version)
			require.Equal(t, pluginID+":", k, "cache key should be correct")
		})
	})

	t.Run("ModuleHash usage", func(t *testing.T) {
		pV1 := newPlugin(
			pluginID,
			withInfo(plugins.Info{Version: "1.0.0"}),
			withSignatureStatus(plugins.SignatureStatusValid),
			withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid"))),
		)

		pCfg = &config.PluginManagementCfg{
			PluginsCDNURLTemplate: "https://cdn.grafana.com",
			PluginSettings: config.PluginSettings{
				pluginID: {
					"cdn": "true",
				},
			},
			Features: config.Features{SriChecksEnabled: true},
		}
		reg := newPluginRegistry(t, pV1)
		svc = NewCalculator(
			pCfg,
			reg,
			pluginscdn.ProvideService(pCfg),
			signature.ProvideService(pCfg, statickey.New()),
		)

		k := svc.moduleHashCacheKey(pV1.ID, pV1.Info.Version)

		_, ok := svc.moduleHashCache.Load(k)
		require.False(t, ok, "cache should initially be empty")

		mhV1 := svc.ModuleHash(context.Background(), pV1.ID, pV1.Info.Version)
		pV1Exp := newSRIHash(t, "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03")
		require.Equal(t, pV1Exp, mhV1, "returned value should be correct")

		cachedMh, ok := svc.moduleHashCache.Load(k)
		require.True(t, ok)
		require.Equal(t, pV1Exp, cachedMh, "cache should contain the returned value")

		t.Run("different version uses different cache key", func(t *testing.T) {
			pV2 := newPlugin(
				pluginID,
				withInfo(plugins.Info{Version: "2.0.0"}),
				withSignatureStatus(plugins.SignatureStatusValid),
				// different fs for different hash
				withFS(plugins.NewLocalFS(filepath.Join("../testdata", "module-hash-valid-nested"))),
			)
			err := reg.Add(context.Background(), pV2)
			require.NoError(t, err)

			mhV2 := svc.ModuleHash(context.Background(), pV2.ID, pV2.Info.Version)
			require.NotEqual(t, mhV2, mhV1, "different version should have different hash")
			require.Equal(t, newSRIHash(t, "266c19bc148b22ddef2a288fc5f8f40855bda22ccf60be53340b4931e469ae2a"), mhV2)
		})

		t.Run("cache should be used", func(t *testing.T) {
			// edit cache directly
			svc.moduleHashCache.Store(k, "hax")
			require.Equal(t, "hax", svc.ModuleHash(context.Background(), pV1.ID, pV1.Info.Version))
		})
	})
}

func TestConvertHashFromSRI(t *testing.T) {
	for _, tc := range []struct {
		hash    string
		expHash string
		expErr  bool
	}{
		{
			hash:    "ddfcb449445064e6c39f0c20b15be3cb6a55837cf4781df23d02de005f436811",
			expHash: "sha256-3fy0SURQZObDnwwgsVvjy2pVg3z0eB3yPQLeAF9DaBE=",
		},
		{
			hash:   "not-a-valid-hash",
			expErr: true,
		},
	} {
		t.Run(tc.hash, func(t *testing.T) {
			r, err := convertHashForSRI(tc.hash)
			if tc.expErr {
				require.Error(t, err)
			} else {
				require.NoError(t, err)
				require.Equal(t, tc.expHash, r)
			}
		})
	}
}

func newPlugin(pluginID string, cbs ...func(p *plugins.Plugin) *plugins.Plugin) *plugins.Plugin {
	p := &plugins.Plugin{
		JSONData: plugins.JSONData{
			ID: pluginID,
		},
	}
	for _, cb := range cbs {
		p = cb(p)
	}
	return p
}

func withInfo(info plugins.Info) func(p *plugins.Plugin) *plugins.Plugin {
	return func(p *plugins.Plugin) *plugins.Plugin {
		p.Info = info
		return p
	}
}

func withFS(fs plugins.FS) func(p *plugins.Plugin) *plugins.Plugin {
	return func(p *plugins.Plugin) *plugins.Plugin {
		p.FS = fs
		return p
	}
}

func withSignatureStatus(status plugins.SignatureStatus) func(p *plugins.Plugin) *plugins.Plugin {
	return func(p *plugins.Plugin) *plugins.Plugin {
		p.Signature = status
		return p
	}
}

func withParent(parent *plugins.Plugin) func(p *plugins.Plugin) *plugins.Plugin {
	return func(p *plugins.Plugin) *plugins.Plugin {
		p.Parent = parent
		return p
	}
}

func withClass(class plugins.Class) func(p *plugins.Plugin) *plugins.Plugin {
	return func(p *plugins.Plugin) *plugins.Plugin {
		p.Class = class
		return p
	}
}

func newSRIHash(t *testing.T, s string) string {
	r, err := convertHashForSRI(s)
	require.NoError(t, err)
	return r
}

type pluginRegistry struct {
	registry.Service

	reg map[string]*plugins.Plugin
}

func newPluginRegistry(t *testing.T, ps ...*plugins.Plugin) *pluginRegistry {
	reg := &pluginRegistry{
		reg: make(map[string]*plugins.Plugin),
	}
	for _, p := range ps {
		err := reg.Add(context.Background(), p)
		require.NoError(t, err)
	}
	return reg
}

func (f *pluginRegistry) Plugin(_ context.Context, id, version string) (*plugins.Plugin, bool) {
	key := fmt.Sprintf("%s-%s", id, version)
	p, exists := f.reg[key]
	return p, exists
}

func (f *pluginRegistry) Add(_ context.Context, p *plugins.Plugin) error {
	key := fmt.Sprintf("%s-%s", p.ID, p.Info.Version)
	f.reg[key] = p
	return nil
}
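The nested-plugin cases above exercise the path logic in `moduleHash`: the child's folder path relative to the parent, converted to forward slashes, is joined with `module.js` to form the key into the parent's MANIFEST.txt file map. A minimal sketch of that lookup, using a hypothetical helper name and a toy manifest map:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

// lookupNestedModuleHash mimics how moduleHash resolves a nested plugin's
// module.js entry inside its parent's MANIFEST.txt: the child folder path
// relative to the parent becomes the key prefix. Illustration only, not the
// actual Grafana API.
func lookupNestedModuleHash(manifestFiles map[string]string, relChildPath string) (string, bool) {
	// MANIFEST.txt uses forward slashes as path separators.
	key := path.Join(filepath.ToSlash(relChildPath), "module.js")
	h, ok := manifestFiles[key]
	return h, ok
}

func main() {
	manifest := map[string]string{
		"module.js":             "hash-of-parent-module",
		"datasource/module.js":  "hash-of-nested-datasource",
		"panels/one/module.js":  "hash-of-nested-panel",
	}
	// Empty relative path resolves the parent's own module.js.
	h, ok := lookupNestedModuleHash(manifest, "")
	fmt.Println(h, ok) // hash-of-parent-module true
	// A nested plugin one folder down.
	h, ok = lookupNestedModuleHash(manifest, "datasource")
	fmt.Println(h, ok) // hash-of-nested-datasource true
}
```

This is why the deeply nested test case resolves `panels/one/module.js` against the top-level parent's manifest rather than reading any manifest in the child's own folder.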
@@ -1,602 +0,0 @@
package models

import (
	"context"
	"testing"
	"time"

	"github.com/grafana/grafana-plugin-sdk-go/backend"
	"github.com/stretchr/testify/require"
	"go.opentelemetry.io/otel"

	"github.com/grafana/grafana/pkg/promlib/intervalv2"
)

var (
	testNow                = time.Now()
	testIntervalCalculator = intervalv2.NewCalculator()
	testTracer             = otel.Tracer("test/interval")
)

func TestCalculatePrometheusInterval(t *testing.T) {
	_, span := testTracer.Start(context.Background(), "test")
	defer span.End()

	tests := []struct {
		name             string
		queryInterval    string
		dsScrapeInterval string
		intervalMs       int64
		intervalFactor   int64
		query            backend.DataQuery
		want             time.Duration
		wantErr          bool
	}{
		{
			name:             "min step 2m with 300000 intervalMs",
			queryInterval:    "2m",
			dsScrapeInterval: "",
			intervalMs:       300000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval:      5 * time.Minute,
				MaxDataPoints: 761,
			},
			want:    2 * time.Minute,
			wantErr: false,
		},
		{
			name:             "min step 2m with 900000 intervalMs",
			queryInterval:    "2m",
			dsScrapeInterval: "",
			intervalMs:       900000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval:      15 * time.Minute,
				MaxDataPoints: 175,
			},
			want:    2 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with step parameter",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(12 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    30 * time.Second,
			wantErr: false,
		},
		{
			name:             "without step parameter",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    15 * time.Second,
			wantErr: false,
		},
		{
			name:             "with high intervalFactor",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   10,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    20 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with low intervalFactor",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    2 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with specified scrape-interval in data source",
			queryInterval:    "",
			dsScrapeInterval: "240s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    4 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with zero intervalFactor defaults to 1",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   0,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    15 * time.Second,
			wantErr: false,
		},
		{
			name:             "with $__interval variable",
			queryInterval:    "$__interval",
			dsScrapeInterval: "15s",
			intervalMs:       60000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    120 * time.Second,
			wantErr: false,
		},
		{
			name:             "with ${__interval} variable",
			queryInterval:    "${__interval}",
			dsScrapeInterval: "15s",
			intervalMs:       60000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    120 * time.Second,
			wantErr: false,
		},
		{
			name:             "with ${__interval} variable and explicit interval",
			queryInterval:    "1m",
			dsScrapeInterval: "15s",
			intervalMs:       60000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    1 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with $__rate_interval variable",
			queryInterval:    "$__rate_interval",
			dsScrapeInterval: "30s",
			intervalMs:       100000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      100 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    130 * time.Second,
			wantErr: false,
		},
		{
			name:             "with ${__rate_interval} variable",
			queryInterval:    "${__rate_interval}",
			dsScrapeInterval: "30s",
			intervalMs:       100000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      100 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    130 * time.Second,
			wantErr: false,
		},
		{
			name:             "intervalMs 100s, minStep override 150s and scrape interval 30s",
			queryInterval:    "150s",
			dsScrapeInterval: "30s",
			intervalMs:       100000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      100 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    150 * time.Second,
			wantErr: false,
		},
		{
			name:             "intervalMs 120s, minStep override 150s and ds scrape interval 30s",
			queryInterval:    "150s",
			dsScrapeInterval: "30s",
			intervalMs:       120000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      120 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    150 * time.Second,
			wantErr: false,
		},
		{
			name:             "intervalMs 120s, minStep auto (interval not overridden) and ds scrape interval 30s",
			queryInterval:    "120s",
			dsScrapeInterval: "30s",
			intervalMs:       120000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      120 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    120 * time.Second,
			wantErr: false,
		},
		{
			name:             "interval and minStep are automatically calculated and ds scrape interval 30s and time range 1 hour",
			queryInterval:    "30s",
			dsScrapeInterval: "30s",
			intervalMs:       30000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval:      30 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    30 * time.Second,
			wantErr: false,
		},
		{
			name:             "minStep is $__rate_interval and ds scrape interval 30s and time range 1 hour",
			queryInterval:    "$__rate_interval",
			dsScrapeInterval: "30s",
			intervalMs:       30000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval:      30 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    2 * time.Minute,
			wantErr: false,
		},
		{
			name:             "minStep is $__rate_interval and ds scrape interval 30s and time range 2 days",
			queryInterval:    "$__rate_interval",
			dsScrapeInterval: "30s",
			intervalMs:       120000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      120 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    150 * time.Second,
			wantErr: false,
		},
		{
			name:             "minStep is $__interval and ds scrape interval 15s and time range 2 days",
			queryInterval:    "$__interval",
			dsScrapeInterval: "15s",
			intervalMs:       120000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(2 * 24 * time.Hour),
				},
				Interval:      120 * time.Second,
				MaxDataPoints: 12384,
			},
			want:    120 * time.Second,
			wantErr: false,
		},
		{
			name:             "with empty dsScrapeInterval defaults to 15s",
			queryInterval:    "",
			dsScrapeInterval: "",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    15 * time.Second,
			wantErr: false,
		},
		{
			name:             "with very short time range",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Minute),
				},
				Interval: 1 * time.Minute,
			},
			want:    15 * time.Second,
			wantErr: false,
		},
		{
			name:             "with very long time range",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(30 * 24 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    30 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with manual interval override",
			queryInterval:    "5m",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    5 * time.Minute,
			wantErr: false,
		},
		{
			name:             "minStep is auto and ds scrape interval 30s and time range 1 hour",
			queryInterval:    "",
			dsScrapeInterval: "30s",
			intervalMs:       30000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval:      30 * time.Second,
				MaxDataPoints: 1613,
			},
			want:    30 * time.Second,
			wantErr: false,
		},
		{
			name:             "minStep is auto and ds scrape interval 15s and time range 5 minutes",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       15000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(5 * time.Minute),
				},
				Interval:      15 * time.Second,
				MaxDataPoints: 1055,
			},
			want:    15 * time.Second,
			wantErr: false,
		},
		// Additional test cases for better coverage
		{
			name:             "with $__interval_ms variable",
			queryInterval:    "$__interval_ms",
			dsScrapeInterval: "15s",
			intervalMs:       60000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    120 * time.Second,
			wantErr: false,
		},
		{
			name:             "with ${__interval_ms} variable",
			queryInterval:    "${__interval_ms}",
			dsScrapeInterval: "15s",
			intervalMs:       60000,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    120 * time.Second,
			wantErr: false,
		},
		{
			name:             "with MaxDataPoints zero",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval:      1 * time.Minute,
				MaxDataPoints: 0,
			},
			want:    15 * time.Second,
			wantErr: false,
		},
		{
			name:             "with negative intervalFactor",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   -5,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    -10 * time.Minute,
			wantErr: false,
		},
		{
			name:             "with invalid interval string that fails parsing",
			queryInterval:    "invalid-interval",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(48 * time.Hour),
				},
				Interval: 1 * time.Minute,
			},
			want:    time.Duration(0),
			wantErr: true,
		},
		{
			name:             "with very small MaxDataPoints",
			queryInterval:    "",
			dsScrapeInterval: "15s",
			intervalMs:       0,
			intervalFactor:   1,
			query: backend.DataQuery{
				TimeRange: backend.TimeRange{
					From: testNow,
					To:   testNow.Add(1 * time.Hour),
				},
				Interval:      1 * time.Minute,
				MaxDataPoints: 10,
			},
			want:    5 * time.Minute,
			wantErr: false,
		},
		{
			name: "when safeInterval is larger than calculatedInterval",
|
||||
queryInterval: "",
|
||||
dsScrapeInterval: "15s",
|
||||
intervalMs: 0,
|
||||
intervalFactor: 1,
|
||||
query: backend.DataQuery{
|
||||
TimeRange: backend.TimeRange{
|
||||
From: testNow,
|
||||
To: testNow.Add(1 * time.Hour),
|
||||
},
|
||||
Interval: 1 * time.Minute,
|
||||
MaxDataPoints: 10000,
|
||||
},
|
||||
want: 15 * time.Second,
|
||||
wantErr: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
got, err := calculatePrometheusInterval(
|
||||
tt.queryInterval,
|
||||
tt.dsScrapeInterval,
|
||||
tt.intervalMs,
|
||||
tt.intervalFactor,
|
||||
tt.query,
|
||||
testIntervalCalculator,
|
||||
)
|
||||
|
||||
if tt.wantErr {
|
||||
require.Error(t, err)
|
||||
return
|
||||
}
|
||||
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.want, got)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -92,6 +92,7 @@ const (
)

// Internal interval and range variables with {} syntax
// Repetitive code, we should have functionality to unify these
const (
    varIntervalAlt   = "${__interval}"
    varIntervalMsAlt = "${__interval_ms}"
@@ -111,16 +112,8 @@ const (
    UnknownQueryType TimeSeriesQueryType = "unknown"
)

// safeResolution is the maximum number of data points to prevent excessive resolution.
// This ensures queries don't exceed reasonable data point limits, improving performance
// and preventing potential memory issues. The value of 11000 provides a good balance
// between resolution and performance for most use cases.
var safeResolution = 11000

// rateIntervalMultiplier is the minimum multiplier for rate interval calculation.
// Rate intervals should be at least 4x the scrape interval to ensure accurate rate calculations.
const rateIntervalMultiplier = 4

// QueryModel includes both the common and specific values
// NOTE: this struct may have issues when decoding JSON that requires the special handling
// registered in https://github.com/grafana/grafana-plugin-sdk-go/blob/v0.228.0/experimental/apis/data/v0alpha1/query.go#L298
@@ -161,7 +154,7 @@ type Query struct {
// may be either a string or DataSourceRef
type internalQueryModel struct {
    PrometheusQueryProperties `json:",inline"`
    // sdkapi.CommonQueryProperties `json:",inline"`
    //sdkapi.CommonQueryProperties `json:",inline"`
    IntervalMS float64 `json:"intervalMs,omitempty"`

    // The following properties may be part of the request payload, however they are not saved in panel JSON
@@ -279,121 +272,44 @@ func (query *Query) TimeRange() TimeRange {
    }
}

// isRateIntervalVariable checks if the interval string is a rate interval variable
// ($__rate_interval, ${__rate_interval}, $__rate_interval_ms, or ${__rate_interval_ms})
func isRateIntervalVariable(interval string) bool {
    return interval == varRateInterval ||
        interval == varRateIntervalAlt ||
        interval == varRateIntervalMs ||
        interval == varRateIntervalMsAlt
}

// replaceVariable replaces both $__variable and ${__variable} formats in the expression
func replaceVariable(expr, dollarFormat, altFormat, replacement string) string {
    expr = strings.ReplaceAll(expr, dollarFormat, replacement)
    expr = strings.ReplaceAll(expr, altFormat, replacement)
    return expr
}

// isManualIntervalOverride checks if the interval is a manually specified non-variable value
// that should override the calculated interval
func isManualIntervalOverride(interval string) bool {
    return interval != "" &&
        interval != varInterval &&
        interval != varIntervalAlt &&
        interval != varIntervalMs &&
        interval != varIntervalMsAlt
}

// maxDuration returns the maximum of two durations
func maxDuration(a, b time.Duration) time.Duration {
    if a > b {
        return a
    }
    return b
}

// normalizeIntervalFactor ensures intervalFactor is at least 1
func normalizeIntervalFactor(factor int64) int64 {
    if factor == 0 {
        return 1
    }
    return factor
}

// calculatePrometheusInterval calculates the optimal step interval for a Prometheus query.
//
// The function determines the query step interval by considering multiple factors:
// - The minimum step specified in the query (queryInterval)
// - The data source scrape interval (dsScrapeInterval)
// - The requested interval in milliseconds (intervalMs)
// - The time range and maximum data points from the query
// - The interval factor multiplier
//
// Special handling:
// - Variable intervals ($__interval, $__rate_interval, etc.) are replaced with calculated values
// - Rate interval variables ($__rate_interval, ${__rate_interval}) use calculateRateInterval for proper rate() function support
// - Manual interval overrides (non-variable strings) take precedence over calculated values
// - The final interval ensures safe resolution limits are not exceeded
//
// Parameters:
// - queryInterval: The minimum step interval string (may contain variables like $__interval or $__rate_interval)
// - dsScrapeInterval: The data source scrape interval (e.g., "15s", "30s")
// - intervalMs: The requested interval in milliseconds
// - intervalFactor: Multiplier for the calculated interval (defaults to 1 if 0)
// - query: The backend data query containing time range and max data points
// - intervalCalculator: Calculator for determining optimal intervals
//
// Returns:
// - The calculated step interval as a time.Duration
// - An error if the interval cannot be calculated (e.g., invalid interval string)
func calculatePrometheusInterval(
    queryInterval, dsScrapeInterval string,
    intervalMs, intervalFactor int64,
    query backend.DataQuery,
    intervalCalculator intervalv2.Calculator,
) (time.Duration, error) {
    // Preserve the original interval for later comparison, as it may be modified below
    // we need to compare the original query model after it is overwritten below to variables so that we can
    // calculate the rateInterval if it is equal to $__rate_interval or ${__rate_interval}
    originalQueryInterval := queryInterval

    // If we are using a variable for minStep, replace it with empty string
    // so that the interval calculation proceeds with the default logic
    // If we are using variable for interval/step, we will replace it with calculated interval
    if isVariableInterval(queryInterval) {
        queryInterval = ""
    }

    // Get the minimum interval from various sources (dsScrapeInterval, queryInterval, intervalMs)
    minInterval, err := gtime.GetIntervalFrom(dsScrapeInterval, queryInterval, intervalMs, 15*time.Second)
    if err != nil {
        return time.Duration(0), err
    }

    // Calculate the optimal interval based on time range and max data points
    calculatedInterval := intervalCalculator.Calculate(query.TimeRange, minInterval, query.MaxDataPoints)
    // Calculate the safe interval to prevent too many data points
    safeInterval := intervalCalculator.CalculateSafeInterval(query.TimeRange, int64(safeResolution))

    // Use the larger of calculated or safe interval to ensure we don't exceed resolution limits
    adjustedInterval := maxDuration(calculatedInterval.Value, safeInterval.Value)
    adjustedInterval := safeInterval.Value
    if calculatedInterval.Value > safeInterval.Value {
        adjustedInterval = calculatedInterval.Value
    }

    // Handle rate interval variables: these require special calculation
    if isRateIntervalVariable(originalQueryInterval) {
    // here is where we compare for $__rate_interval or ${__rate_interval}
    if originalQueryInterval == varRateInterval || originalQueryInterval == varRateIntervalAlt {
        // Rate interval is final and is not affected by resolution
        return calculateRateInterval(adjustedInterval, dsScrapeInterval), nil
    }

    // Handle manual interval override: if user specified a non-variable interval,
    // it takes precedence over calculated values
    if isManualIntervalOverride(originalQueryInterval) {
        if parsedInterval, err := gtime.ParseIntervalStringToTimeDuration(originalQueryInterval); err == nil {
            return parsedInterval, nil
        } else {
        queryIntervalFactor := intervalFactor
        if queryIntervalFactor == 0 {
            queryIntervalFactor = 1
        }
        // If parsing fails, fall through to calculated interval with factor
        return time.Duration(int64(adjustedInterval) * queryIntervalFactor), nil
        }

    // Apply interval factor to the adjusted interval
    normalizedFactor := normalizeIntervalFactor(intervalFactor)
    return time.Duration(int64(adjustedInterval) * normalizedFactor), nil
}

// calculateRateInterval calculates the $__rate_interval value
@@ -415,8 +331,7 @@ func calculateRateInterval(
        return time.Duration(0)
    }

    minRateInterval := rateIntervalMultiplier * scrapeIntervalDuration
    rateInterval := maxDuration(queryInterval+scrapeIntervalDuration, minRateInterval)
    rateInterval := time.Duration(int64(math.Max(float64(queryInterval+scrapeIntervalDuration), float64(4)*float64(scrapeIntervalDuration))))
    return rateInterval
}

@@ -451,33 +366,34 @@ func InterpolateVariables(
        rateInterval = calculateRateInterval(queryInterval, requestedMinStep)
    }

    // Replace interval variables (both $__var and ${__var} formats)
    expr = replaceVariable(expr, varIntervalMs, varIntervalMsAlt, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
    expr = replaceVariable(expr, varInterval, varIntervalAlt, gtime.FormatInterval(calculatedStep))

    // Replace range variables (both $__var and ${__var} formats)
    expr = replaceVariable(expr, varRangeMs, varRangeMsAlt, strconv.FormatInt(rangeMs, 10))
    expr = replaceVariable(expr, varRangeS, varRangeSAlt, strconv.FormatInt(rangeSRounded, 10))
    expr = replaceVariable(expr, varRange, varRangeAlt, strconv.FormatInt(rangeSRounded, 10)+"s")

    // Replace rate interval variables (both $__var and ${__var} formats)
    expr = replaceVariable(expr, varRateIntervalMs, varRateIntervalMsAlt, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
    expr = replaceVariable(expr, varRateInterval, varRateIntervalAlt, rateInterval.String())
    expr = strings.ReplaceAll(expr, varIntervalMs, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
    expr = strings.ReplaceAll(expr, varInterval, gtime.FormatInterval(calculatedStep))
    expr = strings.ReplaceAll(expr, varRangeMs, strconv.FormatInt(rangeMs, 10))
    expr = strings.ReplaceAll(expr, varRangeS, strconv.FormatInt(rangeSRounded, 10))
    expr = strings.ReplaceAll(expr, varRange, strconv.FormatInt(rangeSRounded, 10)+"s")
    expr = strings.ReplaceAll(expr, varRateIntervalMs, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
    expr = strings.ReplaceAll(expr, varRateInterval, rateInterval.String())

    // Repetitive code, we should have functionality to unify these
    expr = strings.ReplaceAll(expr, varIntervalMsAlt, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
    expr = strings.ReplaceAll(expr, varIntervalAlt, gtime.FormatInterval(calculatedStep))
    expr = strings.ReplaceAll(expr, varRangeMsAlt, strconv.FormatInt(rangeMs, 10))
    expr = strings.ReplaceAll(expr, varRangeSAlt, strconv.FormatInt(rangeSRounded, 10))
    expr = strings.ReplaceAll(expr, varRangeAlt, strconv.FormatInt(rangeSRounded, 10)+"s")
    expr = strings.ReplaceAll(expr, varRateIntervalMsAlt, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
    expr = strings.ReplaceAll(expr, varRateIntervalAlt, rateInterval.String())
    return expr
}

// isVariableInterval checks if the interval string is a variable interval
// (any of $__interval, ${__interval}, $__interval_ms, ${__interval_ms}, $__rate_interval, ${__rate_interval}, etc.)
func isVariableInterval(interval string) bool {
    return interval == varInterval ||
        interval == varIntervalAlt ||
        interval == varIntervalMs ||
        interval == varIntervalMsAlt ||
        interval == varRateInterval ||
        interval == varRateIntervalAlt ||
        interval == varRateIntervalMs ||
        interval == varRateIntervalMsAlt
    if interval == varInterval || interval == varIntervalMs || interval == varRateInterval || interval == varRateIntervalMs {
        return true
    }
    // Repetitive code, we should have functionality to unify these
    if interval == varIntervalAlt || interval == varIntervalMsAlt || interval == varRateIntervalAlt || interval == varRateIntervalMsAlt {
        return true
    }
    return false
}

// AlignTimeRange aligns query range to step and handles the time offset.
@@ -494,7 +410,7 @@ func AlignTimeRange(t time.Time, step time.Duration, offset int64) time.Time {
//go:embed query.types.json
var f embed.FS

// QueryTypeDefinitionListJSON returns the query type definitions
// QueryTypeDefinitionsJSON returns the query type definitions
func QueryTypeDefinitionListJSON() (json.RawMessage, error) {
    return f.ReadFile("query.types.json")
}
@@ -2,6 +2,7 @@ package models_test

import (
    "context"
    "fmt"
    "reflect"
    "testing"
    "time"
@@ -13,7 +14,6 @@ import (
    "go.opentelemetry.io/otel"

    "github.com/grafana/grafana-plugin-sdk-go/backend/log"

    "github.com/grafana/grafana/pkg/promlib/intervalv2"
    "github.com/grafana/grafana/pkg/promlib/models"
)
@@ -50,6 +50,95 @@ func TestParse(t *testing.T) {
        require.Equal(t, false, res.ExemplarQuery)
    })

    t.Run("parsing query model with step", func(t *testing.T) {
        timeRange := backend.TimeRange{
            From: now,
            To: now.Add(12 * time.Hour),
        }

        q := queryContext(`{
            "expr": "go_goroutines",
            "format": "time_series",
            "refId": "A"
        }`, timeRange, time.Duration(1)*time.Minute)

        res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
        require.NoError(t, err)
        require.Equal(t, time.Second*30, res.Step)
    })

    t.Run("parsing query model without step parameter", func(t *testing.T) {
        timeRange := backend.TimeRange{
            From: now,
            To: now.Add(1 * time.Hour),
        }

        q := queryContext(`{
            "expr": "go_goroutines",
            "format": "time_series",
            "intervalFactor": 1,
            "refId": "A"
        }`, timeRange, time.Duration(1)*time.Minute)

        res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
        require.NoError(t, err)
        require.Equal(t, time.Second*15, res.Step)
    })

    t.Run("parsing query model with high intervalFactor", func(t *testing.T) {
        timeRange := backend.TimeRange{
            From: now,
            To: now.Add(48 * time.Hour),
        }

        q := queryContext(`{
            "expr": "go_goroutines",
            "format": "time_series",
            "intervalFactor": 10,
            "refId": "A"
        }`, timeRange, time.Duration(1)*time.Minute)

        res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
        require.NoError(t, err)
        require.Equal(t, time.Minute*20, res.Step)
    })

    t.Run("parsing query model with low intervalFactor", func(t *testing.T) {
        timeRange := backend.TimeRange{
            From: now,
            To: now.Add(48 * time.Hour),
        }

        q := queryContext(`{
            "expr": "go_goroutines",
            "format": "time_series",
            "intervalFactor": 1,
            "refId": "A"
        }`, timeRange, time.Duration(1)*time.Minute)

        res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
        require.NoError(t, err)
        require.Equal(t, time.Minute*2, res.Step)
    })

    t.Run("parsing query model specified scrape-interval in the data source", func(t *testing.T) {
        timeRange := backend.TimeRange{
            From: now,
            To: now.Add(48 * time.Hour),
        }

        q := queryContext(`{
            "expr": "go_goroutines",
            "format": "time_series",
            "intervalFactor": 1,
            "refId": "A"
        }`, timeRange, time.Duration(1)*time.Minute)

        res, err := models.Parse(context.Background(), log.New(), span, q, "240s", intervalCalculator, false)
        require.NoError(t, err)
        require.Equal(t, time.Minute*4, res.Step)
    })

    t.Run("parsing query model with $__interval variable", func(t *testing.T) {
        timeRange := backend.TimeRange{
            From: now,
@@ -87,7 +176,7 @@ func TestParse(t *testing.T) {

        res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
        require.NoError(t, err)
        require.Equal(t, "rate(ALERTS{job=\"test\" [1m]})", res.Expr)
        require.Equal(t, "rate(ALERTS{job=\"test\" [2m]})", res.Expr)
    })

    t.Run("parsing query model with $__interval_ms variable", func(t *testing.T) {
@@ -444,6 +533,232 @@ func TestParse(t *testing.T) {
    })
}

func TestRateInterval(t *testing.T) {
    _, span := tracer.Start(context.Background(), "operation")
    defer span.End()
    type args struct {
        expr string
        interval string
        intervalMs int64
        dsScrapeInterval string
        timeRange *backend.TimeRange
    }
    tests := []struct {
        name string
        args args
        want *models.Query
    }{
        {
            name: "intervalMs 100s, minStep override 150s and scrape interval 30s",
            args: args{
                expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
                interval: "150s",
                intervalMs: 100000,
                dsScrapeInterval: "30s",
            },
            want: &models.Query{
                Expr: "rate(rpc_durations_seconds_count[10m0s])",
                Step: time.Second * 150,
            },
        },
        {
            name: "intervalMs 120s, minStep override 150s and ds scrape interval 30s",
            args: args{
                expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
                interval: "150s",
                intervalMs: 120000,
                dsScrapeInterval: "30s",
            },
            want: &models.Query{
                Expr: "rate(rpc_durations_seconds_count[10m0s])",
                Step: time.Second * 150,
            },
        },
        {
            name: "intervalMs 120s, minStep auto (interval not overridden) and ds scrape interval 30s",
            args: args{
                expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
                interval: "120s",
                intervalMs: 120000,
                dsScrapeInterval: "30s",
            },
            want: &models.Query{
                Expr: "rate(rpc_durations_seconds_count[8m0s])",
                Step: time.Second * 120,
            },
        },
        {
            name: "interval and minStep are automatically calculated and ds scrape interval 30s and time range 1 hour",
            args: args{
                expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
                interval: "30s",
                intervalMs: 30000,
                dsScrapeInterval: "30s",
                timeRange: &backend.TimeRange{
                    From: now,
                    To: now.Add(1 * time.Hour),
                },
            },
            want: &models.Query{
                Expr: "rate(rpc_durations_seconds_count[2m0s])",
                Step: time.Second * 30,
            },
        },
        {
            name: "minStep is $__rate_interval and ds scrape interval 30s and time range 1 hour",
            args: args{
                expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
                interval: "$__rate_interval",
                intervalMs: 30000,
                dsScrapeInterval: "30s",
                timeRange: &backend.TimeRange{
                    From: now,
                    To: now.Add(1 * time.Hour),
                },
            },
            want: &models.Query{
                Expr: "rate(rpc_durations_seconds_count[2m0s])",
                Step: time.Minute * 2,
            },
        },
        {
            name: "minStep is $__rate_interval and ds scrape interval 30s and time range 2 days",
            args: args{
                expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
                interval: "$__rate_interval",
                intervalMs: 120000,
                dsScrapeInterval: "30s",
                timeRange: &backend.TimeRange{
                    From: now,
                    To: now.Add(2 * 24 * time.Hour),
                },
            },
            want: &models.Query{
                Expr: "rate(rpc_durations_seconds_count[2m30s])",
                Step: time.Second * 150,
            },
        },
        {
            name: "minStep is $__rate_interval and ds scrape interval 15s and time range 2 days",
            args: args{
                expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
                interval: "$__interval",
                intervalMs: 120000,
                dsScrapeInterval: "15s",
                timeRange: &backend.TimeRange{
                    From: now,
                    To: now.Add(2 * 24 * time.Hour),
                },
            },
            want: &models.Query{
                Expr: "rate(rpc_durations_seconds_count[8m0s])",
                Step: time.Second * 120,
            },
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            q := mockQuery(tt.args.expr, tt.args.interval, tt.args.intervalMs, tt.args.timeRange)
            q.MaxDataPoints = 12384
            res, err := models.Parse(context.Background(), log.New(), span, q, tt.args.dsScrapeInterval, intervalCalculator, false)
            require.NoError(t, err)
            require.Equal(t, tt.want.Expr, res.Expr)
            require.Equal(t, tt.want.Step, res.Step)
        })
    }

    t.Run("minStep is auto and ds scrape interval 30s and time range 1 hour", func(t *testing.T) {
        query := backend.DataQuery{
            RefID: "G",
            QueryType: "",
            MaxDataPoints: 1613,
            Interval: 30 * time.Second,
            TimeRange: backend.TimeRange{
                From: now,
                To: now.Add(1 * time.Hour),
            },
            JSON: []byte(`{
                "datasource":{"type":"prometheus","uid":"zxS5e5W4k"},
                "datasourceId":38,
                "editorMode":"code",
                "exemplar":false,
                "expr":"sum(rate(process_cpu_seconds_total[$__rate_interval]))",
                "instant":false,
                "interval":"",
                "intervalMs":30000,
                "key":"Q-f96b6729-c47a-4ea8-8f71-a79774cf9bd5-0",
                "legendFormat":"__auto",
                "maxDataPoints":1613,
                "range":true,
                "refId":"G",
                "requestId":"1G",
                "utcOffsetSec":3600
            }`),
        }
        res, err := models.Parse(context.Background(), log.New(), span, query, "30s", intervalCalculator, false)
        require.NoError(t, err)
        require.Equal(t, "sum(rate(process_cpu_seconds_total[2m0s]))", res.Expr)
        require.Equal(t, 30*time.Second, res.Step)
    })

    t.Run("minStep is auto and ds scrape interval 15s and time range 5 minutes", func(t *testing.T) {
        query := backend.DataQuery{
            RefID: "A",
            QueryType: "",
            MaxDataPoints: 1055,
            Interval: 15 * time.Second,
            TimeRange: backend.TimeRange{
                From: now,
                To: now.Add(5 * time.Minute),
            },
            JSON: []byte(`{
                "datasource": {
                    "type": "prometheus",
                    "uid": "2z9d6ElGk"
                },
                "editorMode": "code",
                "expr": "sum(rate(cache_requests_total[$__rate_interval]))",
                "legendFormat": "__auto",
                "range": true,
                "refId": "A",
                "exemplar": false,
                "requestId": "1A",
                "utcOffsetSec": 0,
                "interval": "",
                "datasourceId": 508,
                "intervalMs": 15000,
                "maxDataPoints": 1055
            }`),
        }
        res, err := models.Parse(context.Background(), log.New(), span, query, "15s", intervalCalculator, false)
        require.NoError(t, err)
        require.Equal(t, "sum(rate(cache_requests_total[1m0s]))", res.Expr)
        require.Equal(t, 15*time.Second, res.Step)
    })
}

func mockQuery(expr string, interval string, intervalMs int64, timeRange *backend.TimeRange) backend.DataQuery {
    if timeRange == nil {
        timeRange = &backend.TimeRange{
            From: now,
            To: now.Add(1 * time.Hour),
        }
    }
    return backend.DataQuery{
        Interval: time.Duration(intervalMs) * time.Millisecond,
        JSON: []byte(fmt.Sprintf(`{
            "expr": "%s",
            "format": "time_series",
            "interval": "%s",
            "intervalMs": %v,
            "intervalFactor": 1,
            "refId": "A"
        }`, expr, interval, intervalMs)),
        TimeRange: *timeRange,
        RefID: "A",
    }
}

func queryContext(json string, timeRange backend.TimeRange, queryInterval time.Duration) backend.DataQuery {
    return backend.DataQuery{
        Interval: queryInterval,
@@ -453,6 +768,11 @@ func queryContext(json string, timeRange backend.TimeRange, queryInterval time.D
    }
}

// AlignTimeRange aligns query range to step and handles the time offset.
// It rounds start and end down to a multiple of step.
// Prometheus caching is dependent on the range being aligned with the step.
// Rounding to the step can significantly change the start and end of the range for larger steps, i.e. a week.
// In rounding the range to a 1w step the range will always start on a Thursday.
func TestAlignTimeRange(t *testing.T) {
    type args struct {
        t time.Time
@@ -381,102 +381,6 @@ func TestPrometheus_parseTimeSeriesResponse(t *testing.T) {
    })
}

func TestPrometheus_executedQueryString(t *testing.T) {
    t.Run("executedQueryString should match expected format with intervalMs 300_000", func(t *testing.T) {
        values := []p.SamplePair{
            {Value: 1, Timestamp: 1000},
            {Value: 2, Timestamp: 2000},
        }
        result := queryResult{
            Type: p.ValMatrix,
            Result: p.Matrix{
                &p.SampleStream{
                    Metric: p.Metric{"app": "Application"},
                    Values: values,
                },
            },
        }

        queryJSON := `{
            "expr": "test_metric",
            "format": "time_series",
            "intervalFactor": 1,
            "interval": "2m",
            "intervalMs": 300000,
            "maxDataPoints": 761,
            "refId": "A",
            "range": true
        }`

        now := time.Now()
        query := backend.DataQuery{
            RefID: "A",
            MaxDataPoints: 761,
            Interval: 300000 * time.Millisecond,
            TimeRange: backend.TimeRange{
                From: now,
                To: now.Add(48 * time.Hour),
            },
            JSON: []byte(queryJSON),
        }
        tctx, err := setup()
        require.NoError(t, err)
        res, err := execute(tctx, query, result, nil)
        require.NoError(t, err)

        require.Len(t, res, 1)
        require.NotNil(t, res[0].Meta)
        require.Equal(t, "Expr: test_metric\nStep: 2m0s", res[0].Meta.ExecutedQueryString)
    })

    t.Run("executedQueryString should match expected format with intervalMs 900_000", func(t *testing.T) {
        values := []p.SamplePair{
            {Value: 1, Timestamp: 1000},
            {Value: 2, Timestamp: 2000},
        }
        result := queryResult{
            Type: p.ValMatrix,
            Result: p.Matrix{
                &p.SampleStream{
                    Metric: p.Metric{"app": "Application"},
                    Values: values,
                },
            },
        }

        queryJSON := `{
            "expr": "test_metric",
            "format": "time_series",
            "intervalFactor": 1,
            "interval": "2m",
            "intervalMs": 900000,
            "maxDataPoints": 175,
            "refId": "A",
            "range": true
        }`

        now := time.Now()
        query := backend.DataQuery{
            RefID: "A",
            MaxDataPoints: 175,
            Interval: 900000 * time.Millisecond,
            TimeRange: backend.TimeRange{
                From: now,
                To: now.Add(48 * time.Hour),
            },
            JSON: []byte(queryJSON),
        }
        tctx, err := setup()
        require.NoError(t, err)
        res, err := execute(tctx, query, result, nil)
        require.NoError(t, err)

        require.Len(t, res, 1)
        require.NotNil(t, res[0].Meta)
        require.Equal(t, "Expr: test_metric\nStep: 2m0s", res[0].Meta.ExecutedQueryString)
    })
}

type queryResult struct {
    Type p.ValueType `json:"resultType"`
    Result any `json:"result"`
pkg/server/wire_gen.go (generated, 6 changes)
@@ -715,7 +715,8 @@ func Initialize(ctx context.Context, cfg *setting.Cfg, opts Options, apiOpts api
        return nil, err
    }
    pluginscdnService := pluginscdn.ProvideService(pluginManagementCfg)
    pluginassetsService := pluginassets2.ProvideService(pluginManagementCfg, pluginscdnService, signatureSignature, pluginstoreService)
    calculator := pluginassets2.ProvideModuleHashCalculator(pluginManagementCfg, pluginscdnService, signatureSignature, inMemory)
    pluginassetsService := pluginassets2.ProvideService(pluginManagementCfg, pluginscdnService, calculator)
    avatarCacheServer := avatar.ProvideAvatarCacheServer(cfg)
    prefService := prefimpl.ProvideService(sqlStore, cfg)
    dashboardPermissionsService, err := ossaccesscontrol.ProvideDashboardPermissions(cfg, featureToggles, routeRegisterImpl, sqlStore, accessControl, ossLicensingService, dashboardService, folderimplService, acimplService, teamService, userService, actionSetService, dashboardServiceImpl, eventualRestConfigProvider)
@@ -1383,7 +1384,8 @@ func InitializeForTest(ctx context.Context, t sqlutil.ITestDB, testingT interfac
        return nil, err
    }
    pluginscdnService := pluginscdn.ProvideService(pluginManagementCfg)
    pluginassetsService := pluginassets2.ProvideService(pluginManagementCfg, pluginscdnService, signatureSignature, pluginstoreService)
    calculator := pluginassets2.ProvideModuleHashCalculator(pluginManagementCfg, pluginscdnService, signatureSignature, inMemory)
    pluginassetsService := pluginassets2.ProvideService(pluginManagementCfg, pluginscdnService, calculator)
    avatarCacheServer := avatar.ProvideAvatarCacheServer(cfg)
    prefService := prefimpl.ProvideService(sqlStore, cfg)
    dashboardPermissionsService, err := ossaccesscontrol.ProvideDashboardPermissions(cfg, featureToggles, routeRegisterImpl, sqlStore, accessControl, ossLicensingService, dashboardService, folderimplService, acimplService, teamService, userService, actionSetService, dashboardServiceImpl, eventualRestConfigProvider)
@@ -36,9 +36,6 @@ var client = &http.Client{
	Transport: &http.Transport{Proxy: http.ProxyFromEnvironment},
}

// CreateDashboardSnapshot creates a snapshot when running Grafana in regular mode.
// It validates the user and dashboard exist before creating the snapshot.
// This mode supports both local and external snapshots.
func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSharingOptions, cmd CreateDashboardSnapshotCommand, svc Service) {
	if !cfg.SnapshotsEnabled {
		c.JsonApiErr(http.StatusForbidden, "Dashboard Snapshots are disabled", nil)
@@ -46,7 +43,6 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
	}

	uid := cmd.Dashboard.GetNestedString("uid")

	user, err := identity.GetRequester(c.Req.Context())
	if err != nil {
		c.JsonApiErr(http.StatusBadRequest, "missing user in context", nil)
@@ -63,18 +59,21 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
		return
	}

	cmd.ExternalURL = ""
	cmd.OrgID = user.GetOrgID()
	cmd.UserID, _ = identity.UserIdentifier(user.GetID())

	if cmd.Name == "" {
		cmd.Name = "Unnamed snapshot"
	}

	var snapshotURL string
	var snapshotUrl string
	cmd.ExternalURL = ""
	cmd.OrgID = user.GetOrgID()
	cmd.UserID, _ = identity.UserIdentifier(user.GetID())
	originalDashboardURL, err := createOriginalDashboardURL(&cmd)
	if err != nil {
		c.JsonApiErr(http.StatusInternalServerError, "Invalid app URL", err)
		return
	}

	if cmd.External {
		// Handle external snapshot creation
		if !cfg.ExternalEnabled {
			c.JsonApiErr(http.StatusForbidden, "External dashboard creation is disabled", nil)
			return
@@ -86,83 +85,40 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
			return
		}

		snapshotUrl = resp.Url
		cmd.Key = resp.Key
		cmd.DeleteKey = resp.DeleteKey
		cmd.ExternalURL = resp.Url
		cmd.ExternalDeleteURL = resp.DeleteUrl
		cmd.Dashboard = &common.Unstructured{}
		snapshotURL = resp.Url

		metrics.MApiDashboardSnapshotExternal.Inc()
	} else {
		// Handle local snapshot creation
		originalDashboardURL, err := createOriginalDashboardURL(&cmd)
		if err != nil {
			c.JsonApiErr(http.StatusInternalServerError, "Invalid app URL", err)
			return
		cmd.Dashboard.SetNestedField(originalDashboardURL, "snapshot", "originalUrl")

		if cmd.Key == "" {
			var err error
			cmd.Key, err = util.GetRandomString(32)
			if err != nil {
				c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
				return
			}
		}

		snapshotURL, err = prepareLocalSnapshot(&cmd, originalDashboardURL)
		if err != nil {
			c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
			return
		if cmd.DeleteKey == "" {
			var err error
			cmd.DeleteKey, err = util.GetRandomString(32)
			if err != nil {
				c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
				return
			}
		}

		snapshotUrl = setting.ToAbsUrl("dashboard/snapshot/" + cmd.Key)

		metrics.MApiDashboardSnapshotCreate.Inc()
	}

	saveAndRespond(c, svc, cmd, snapshotURL)
}

// CreateDashboardSnapshotPublic creates a snapshot when running Grafana in public mode.
// In public mode, there is no user or dashboard information to validate.
// Only local snapshots are supported (external snapshots are not available).
func CreateDashboardSnapshotPublic(c *contextmodel.ReqContext, cfg snapshot.SnapshotSharingOptions, cmd CreateDashboardSnapshotCommand, svc Service) {
	if !cfg.SnapshotsEnabled {
		c.JsonApiErr(http.StatusForbidden, "Dashboard Snapshots are disabled", nil)
		return
	}

	if cmd.Name == "" {
		cmd.Name = "Unnamed snapshot"
	}

	snapshotURL, err := prepareLocalSnapshot(&cmd, "")
	if err != nil {
		c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
		return
	}

	metrics.MApiDashboardSnapshotCreate.Inc()

	saveAndRespond(c, svc, cmd, snapshotURL)
}

// prepareLocalSnapshot prepares the command for a local snapshot and returns the snapshot URL.
func prepareLocalSnapshot(cmd *CreateDashboardSnapshotCommand, originalDashboardURL string) (string, error) {
	cmd.Dashboard.SetNestedField(originalDashboardURL, "snapshot", "originalUrl")

	if cmd.Key == "" {
		key, err := util.GetRandomString(32)
		if err != nil {
			return "", err
		}
		cmd.Key = key
	}

	if cmd.DeleteKey == "" {
		deleteKey, err := util.GetRandomString(32)
		if err != nil {
			return "", err
		}
		cmd.DeleteKey = deleteKey
	}

	return setting.ToAbsUrl("dashboard/snapshot/" + cmd.Key), nil
}

// saveAndRespond saves the snapshot and sends the response.
func saveAndRespond(c *contextmodel.ReqContext, svc Service, cmd CreateDashboardSnapshotCommand, snapshotURL string) {
	result, err := svc.CreateDashboardSnapshot(c.Req.Context(), &cmd)
	if err != nil {
		c.JsonApiErr(http.StatusInternalServerError, "Failed to create snapshot", err)
@@ -172,7 +128,7 @@ func saveAndRespond(c *contextmodel.ReqContext, svc Service, cmd CreateDashboard
	c.JSON(http.StatusOK, snapshot.DashboardCreateResponse{
		Key: result.Key,
		DeleteKey: result.DeleteKey,
		URL: snapshotURL,
		URL: snapshotUrl,
		DeleteURL: setting.ToAbsUrl("api/snapshots-delete/" + result.DeleteKey),
	})
}

@@ -20,30 +20,40 @@ import (
	"github.com/grafana/grafana/pkg/web"
)

func createTestDashboard(t *testing.T) *common.Unstructured {
	t.Helper()
	dashboard := &common.Unstructured{}
	dashboardData := map[string]any{
		"uid": "test-dashboard-uid",
		"id": 123,
func TestCreateDashboardSnapshot_DashboardNotFound(t *testing.T) {
	mockService := &MockService{}
	cfg := snapshot.SnapshotSharingOptions{
		SnapshotsEnabled: true,
		ExternalEnabled: false,
	}
	dashboardBytes, _ := json.Marshal(dashboardData)
	_ = json.Unmarshal(dashboardBytes, dashboard)
	return dashboard
}

func createTestUser() *user.SignedInUser {
	return &user.SignedInUser{
	testUser := &user.SignedInUser{
		UserID: 1,
		OrgID: 1,
		Login: "testuser",
		Name: "Test User",
		Email: "test@example.com",
	}
}
	dashboard := &common.Unstructured{}
	dashboardData := map[string]interface{}{
		"uid": "test-dashboard-uid",
		"id": 123,
	}
	dashboardBytes, _ := json.Marshal(dashboardData)
	_ = json.Unmarshal(dashboardBytes, dashboard)

	cmd := CreateDashboardSnapshotCommand{
		DashboardCreateCommand: snapshot.DashboardCreateCommand{
			Dashboard: dashboard,
			Name: "Test Snapshot",
		},
	}

	mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
		Return(dashboards.ErrDashboardNotFound)

	req, _ := http.NewRequest("POST", "/api/snapshots", nil)
	req = req.WithContext(identity.WithRequester(req.Context(), testUser))

func createReqContext(t *testing.T, req *http.Request, testUser *user.SignedInUser) (*contextmodel.ReqContext, *httptest.ResponseRecorder) {
	t.Helper()
	recorder := httptest.NewRecorder()
	ctx := &contextmodel.ReqContext{
		Context: &web.Context{
@@ -53,319 +63,13 @@ func createReqContext(t *testing.T, req *http.Request, testUser *user.SignedInUs
		SignedInUser: testUser,
		Logger: log.NewNopLogger(),
	}
	return ctx, recorder
}

// TestCreateDashboardSnapshot tests snapshot creation in regular mode (non-public instance).
// These tests cover scenarios when Grafana is running as a regular server with user authentication.
func TestCreateDashboardSnapshot(t *testing.T) {
	t.Run("should return error when dashboard not found", func(t *testing.T) {
		mockService := &MockService{}
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
			ExternalEnabled: false,
		}
		testUser := createTestUser()
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name: "Test Snapshot",
			},
		}

		mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
			Return(dashboards.ErrDashboardNotFound)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		req = req.WithContext(identity.WithRequester(req.Context(), testUser))
		ctx, recorder := createReqContext(t, req, testUser)

		CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusBadRequest, recorder.Code)
		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "Dashboard not found", response["message"])
	})

	t.Run("should create external snapshot when external is enabled", func(t *testing.T) {
		externalServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			assert.Equal(t, "/api/snapshots", r.URL.Path)
			assert.Equal(t, "POST", r.Method)

			response := map[string]any{
				"key": "external-key",
				"deleteKey": "external-delete-key",
				"url": "https://external.example.com/dashboard/snapshot/external-key",
				"deleteUrl": "https://external.example.com/api/snapshots-delete/external-delete-key",
			}
			w.Header().Set("Content-Type", "application/json")
			_ = json.NewEncoder(w).Encode(response)
		}))
		defer externalServer.Close()

		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
			ExternalEnabled: true,
			ExternalSnapshotURL: externalServer.URL,
		}
		testUser := createTestUser()
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name: "Test External Snapshot",
				External: true,
			},
		}

		mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
			Return(nil)
		mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
			Return(&DashboardSnapshot{
				Key: "external-key",
				DeleteKey: "external-delete-key",
			}, nil)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		req = req.WithContext(identity.WithRequester(req.Context(), testUser))
		ctx, recorder := createReqContext(t, req, testUser)

		CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusOK, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "external-key", response["key"])
		assert.Equal(t, "external-delete-key", response["deleteKey"])
		assert.Equal(t, "https://external.example.com/dashboard/snapshot/external-key", response["url"])
	})

	t.Run("should return forbidden when external is disabled", func(t *testing.T) {
		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
			ExternalEnabled: false,
		}
		testUser := createTestUser()
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name: "Test External Snapshot",
				External: true,
			},
		}

		mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
			Return(nil)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		req = req.WithContext(identity.WithRequester(req.Context(), testUser))
		ctx, recorder := createReqContext(t, req, testUser)

		CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusForbidden, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "External dashboard creation is disabled", response["message"])
	})

	t.Run("should create local snapshot", func(t *testing.T) {
		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
		}
		testUser := createTestUser()
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name: "Test Local Snapshot",
			},
			Key: "local-key",
			DeleteKey: "local-delete-key",
		}

		mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
			Return(nil)
		mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
			Return(&DashboardSnapshot{
				Key: "local-key",
				DeleteKey: "local-delete-key",
			}, nil)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		req = req.WithContext(identity.WithRequester(req.Context(), testUser))
		ctx, recorder := createReqContext(t, req, testUser)

		CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusOK, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "local-key", response["key"])
		assert.Equal(t, "local-delete-key", response["deleteKey"])
		assert.Contains(t, response["url"], "dashboard/snapshot/local-key")
		assert.Contains(t, response["deleteUrl"], "api/snapshots-delete/local-delete-key")
	})
}

// TestCreateDashboardSnapshotPublic tests snapshot creation in public mode.
// These tests cover scenarios when Grafana is running as a public snapshot server
// where no user authentication or dashboard validation is required.
func TestCreateDashboardSnapshotPublic(t *testing.T) {
	t.Run("should create local snapshot without user context", func(t *testing.T) {
		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: true,
		}
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name: "Test Snapshot",
			},
			Key: "test-key",
			DeleteKey: "test-delete-key",
		}

		mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
			Return(&DashboardSnapshot{
				Key: "test-key",
				DeleteKey: "test-delete-key",
			}, nil)

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		recorder := httptest.NewRecorder()
		ctx := &contextmodel.ReqContext{
			Context: &web.Context{
				Req: req,
				Resp: web.NewResponseWriter("POST", recorder),
			},
			Logger: log.NewNopLogger(),
		}

		CreateDashboardSnapshotPublic(ctx, cfg, cmd, mockService)

		mockService.AssertExpectations(t)
		assert.Equal(t, http.StatusOK, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "test-key", response["key"])
		assert.Equal(t, "test-delete-key", response["deleteKey"])
		assert.Contains(t, response["url"], "dashboard/snapshot/test-key")
		assert.Contains(t, response["deleteUrl"], "api/snapshots-delete/test-delete-key")
	})

	t.Run("should return forbidden when snapshots are disabled", func(t *testing.T) {
		mockService := NewMockService(t)
		cfg := snapshot.SnapshotSharingOptions{
			SnapshotsEnabled: false,
		}
		dashboard := createTestDashboard(t)

		cmd := CreateDashboardSnapshotCommand{
			DashboardCreateCommand: snapshot.DashboardCreateCommand{
				Dashboard: dashboard,
				Name: "Test Snapshot",
			},
		}

		req, _ := http.NewRequest("POST", "/api/snapshots", nil)
		recorder := httptest.NewRecorder()
		ctx := &contextmodel.ReqContext{
			Context: &web.Context{
				Req: req,
				Resp: web.NewResponseWriter("POST", recorder),
			},
			Logger: log.NewNopLogger(),
		}

		CreateDashboardSnapshotPublic(ctx, cfg, cmd, mockService)

		assert.Equal(t, http.StatusForbidden, recorder.Code)

		var response map[string]any
		err := json.Unmarshal(recorder.Body.Bytes(), &response)
		require.NoError(t, err)
		assert.Equal(t, "Dashboard Snapshots are disabled", response["message"])
	})
}

// TestDeleteExternalDashboardSnapshot tests deletion of external snapshots.
// This function is called in public mode and doesn't require user context.
func TestDeleteExternalDashboardSnapshot(t *testing.T) {
	t.Run("should return nil on successful deletion", func(t *testing.T) {
		server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			assert.Equal(t, "GET", r.Method)
			w.WriteHeader(http.StatusOK)
		}))
		defer server.Close()

		err := DeleteExternalDashboardSnapshot(server.URL)
		assert.NoError(t, err)
	})

	t.Run("should gracefully handle already deleted snapshot", func(t *testing.T) {
		server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusInternalServerError)
			response := map[string]any{
				"message": "Failed to get dashboard snapshot",
			}
			_ = json.NewEncoder(w).Encode(response)
		}))
		defer server.Close()

		err := DeleteExternalDashboardSnapshot(server.URL)
		assert.NoError(t, err)
	})

	t.Run("should return error on unexpected status code", func(t *testing.T) {
		server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusNotFound)
		}))
		defer server.Close()

		err := DeleteExternalDashboardSnapshot(server.URL)
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "unexpected response when deleting external snapshot")
		assert.Contains(t, err.Error(), "404")
	})

	t.Run("should return error on 500 with different message", func(t *testing.T) {
		server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusInternalServerError)
			response := map[string]any{
				"message": "Some other error",
			}
			_ = json.NewEncoder(w).Encode(response)
		}))
		defer server.Close()

		err := DeleteExternalDashboardSnapshot(server.URL)
		assert.Error(t, err)
		assert.Contains(t, err.Error(), "500")
	})

	CreateDashboardSnapshot(ctx, cfg, cmd, mockService)

	mockService.AssertExpectations(t)
	assert.Equal(t, http.StatusBadRequest, recorder.Code)
	var response map[string]interface{}
	err := json.Unmarshal(recorder.Body.Bytes(), &response)
	require.NoError(t, err)
	assert.Equal(t, "Dashboard not found", response["message"])
}

@@ -2,19 +2,15 @@ package pluginassets

import (
	"context"
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"path"
	"path/filepath"
	"sync"

	"github.com/Masterminds/semver/v3"

	"github.com/grafana/grafana/pkg/infra/log"
	"github.com/grafana/grafana/pkg/plugins"
	"github.com/grafana/grafana/pkg/plugins/config"
	"github.com/grafana/grafana/pkg/plugins/manager/registry"
	"github.com/grafana/grafana/pkg/plugins/manager/signature"
	"github.com/grafana/grafana/pkg/plugins/pluginassets/modulehash"
	"github.com/grafana/grafana/pkg/plugins/pluginscdn"
	"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginstore"
)
@@ -28,24 +24,27 @@ var (
	scriptLoadingMinSupportedVersion = semver.MustParse(CreatePluginVersionScriptSupportEnabled)
)

func ProvideService(cfg *config.PluginManagementCfg, cdn *pluginscdn.Service, sig *signature.Signature, store pluginstore.Store) *Service {
func ProvideService(cfg *config.PluginManagementCfg, cdn *pluginscdn.Service,
	calc *modulehash.Calculator) *Service {
	return &Service{
		cfg: cfg,
		cdn: cdn,
		signature: sig,
		store: store,
		log: log.New("pluginassets"),
		cfg: cfg,
		cdn: cdn,
		log: log.New("pluginassets"),
		calc: calc,
	}
}

type Service struct {
	cfg *config.PluginManagementCfg
	cdn *pluginscdn.Service
	signature *signature.Signature
	store pluginstore.Store
	log log.Logger
func ProvideModuleHashCalculator(cfg *config.PluginManagementCfg, cdn *pluginscdn.Service,
	signature *signature.Signature, reg registry.Service) *modulehash.Calculator {
	return modulehash.NewCalculator(cfg, reg, cdn, signature)
}

	moduleHashCache sync.Map
type Service struct {
	cfg *config.PluginManagementCfg
	cdn *pluginscdn.Service
	calc *modulehash.Calculator

	log log.Logger
}

// LoadingStrategy calculates the loading strategy for a plugin.
@@ -83,92 +82,8 @@ func (s *Service) LoadingStrategy(_ context.Context, p pluginstore.Plugin) plugi
}

// ModuleHash returns the module.js SHA256 hash for a plugin in the format expected by the browser for SRI checks.
// The module hash is read from the plugin's MANIFEST.txt file.
// The plugin can also be a nested plugin.
// If the plugin is unsigned, an empty string is returned.
// The results are cached to avoid repeated reads from the MANIFEST.txt file.
func (s *Service) ModuleHash(ctx context.Context, p pluginstore.Plugin) string {
	k := s.moduleHashCacheKey(p)
	cachedValue, ok := s.moduleHashCache.Load(k)
	if ok {
		return cachedValue.(string)
	}
	mh, err := s.moduleHash(ctx, p, "")
	if err != nil {
		s.log.Error("Failed to calculate module hash", "plugin", p.ID, "error", err)
	}
	s.moduleHashCache.Store(k, mh)
	return mh
}

// moduleHash is the underlying function for ModuleHash. See its documentation for more information.
// If the plugin is not a CDN plugin, the function will return an empty string.
// It will read the module hash from the MANIFEST.txt in the [[plugins.FS]] of the provided plugin.
// If childFSBase is provided, the function will try to get the hash from MANIFEST.txt for the provided children's
// module.js file, rather than for the provided plugin.
func (s *Service) moduleHash(ctx context.Context, p pluginstore.Plugin, childFSBase string) (r string, err error) {
	if !s.cfg.Features.SriChecksEnabled {
		return "", nil
	}

	// Ignore unsigned plugins
	if !p.Signature.IsValid() {
		return "", nil
	}

	if p.Parent != nil {
		// Nested plugin
		parent, ok := s.store.Plugin(ctx, p.Parent.ID)
		if !ok {
			return "", fmt.Errorf("parent plugin %q for child plugin %q not found", p.Parent.ID, p.ID)
		}

		// The module hash is contained within the parent's MANIFEST.txt file.
		// For example, the parent's MANIFEST.txt will contain an entry similar to this:
		//
		// ```
		// "datasource/module.js": "1234567890abcdef..."
		// ```
		//
		// Recursively call moduleHash with the parent plugin and with the children plugin folder path
		// to get the correct module hash for the nested plugin.
		if childFSBase == "" {
			childFSBase = p.Base()
		}
		return s.moduleHash(ctx, parent, childFSBase)
	}

	// Only CDN plugins are supported for SRI checks.
	// CDN plugins have the version as part of the URL, which acts as a cache-buster.
	// Needed due to: https://github.com/grafana/plugin-tools/pull/1426
	// FS plugins built before this change will have SRI mismatch issues.
	if !s.cdnEnabled(p.ID, p.FS) {
		return "", nil
	}

	manifest, err := s.signature.ReadPluginManifestFromFS(ctx, p.FS)
	if err != nil {
		return "", fmt.Errorf("read plugin manifest: %w", err)
	}
	if !manifest.IsV2() {
		return "", nil
	}

	var childPath string
	if childFSBase != "" {
		// Calculate the relative path of the child plugin folder from the parent plugin folder.
		childPath, err = p.FS.Rel(childFSBase)
		if err != nil {
			return "", fmt.Errorf("rel path: %w", err)
		}
		// MANIFEST.txt uses forward slashes as path separators.
		childPath = filepath.ToSlash(childPath)
	}
	moduleHash, ok := manifest.Files[path.Join(childPath, "module.js")]
	if !ok {
		return "", nil
	}
	return convertHashForSRI(moduleHash)
	return s.calc.ModuleHash(ctx, p.ID, p.Info.Version)
}

func (s *Service) compatibleCreatePluginVersion(ps map[string]string) bool {
@@ -188,17 +103,3 @@ func (s *Service) compatibleCreatePluginVersion(ps map[string]string) bool {
func (s *Service) cdnEnabled(pluginID string, fs plugins.FS) bool {
	return s.cdn.PluginSupported(pluginID) || fs.Type().CDN()
}

// convertHashForSRI takes a SHA256 hash string and returns it as expected by the browser for SRI checks.
func convertHashForSRI(h string) (string, error) {
	hb, err := hex.DecodeString(h)
	if err != nil {
		return "", fmt.Errorf("hex decode string: %w", err)
	}
	return "sha256-" + base64.StdEncoding.EncodeToString(hb), nil
}

// moduleHashCacheKey returns a unique key for the module hash cache.
func (s *Service) moduleHashCacheKey(p pluginstore.Plugin) string {
	return p.ID + ":" + p.Info.Version
}

@@ -2,19 +2,14 @@ package pluginassets

import (
	"context"
	"fmt"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/grafana/grafana/pkg/infra/log"
	"github.com/grafana/grafana/pkg/plugins"
	"github.com/grafana/grafana/pkg/plugins/config"
	"github.com/grafana/grafana/pkg/plugins/manager/pluginfakes"
	"github.com/grafana/grafana/pkg/plugins/manager/signature"
	"github.com/grafana/grafana/pkg/plugins/manager/signature/statickey"
	"github.com/grafana/grafana/pkg/plugins/pluginscdn"
	"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginstore"
)
@@ -179,349 +174,6 @@ func TestService_Calculate(t *testing.T) {
	}
}

func TestService_ModuleHash(t *testing.T) {
	const (
		pluginID = "grafana-test-datasource"
		parentPluginID = "grafana-test-app"
	)
	for _, tc := range []struct {
		name string
		features *config.Features
		store []pluginstore.Plugin

		// Can be used to configure plugin's fs
		// fs cdn type = loaded from CDN with no files on disk
		// fs local type = files on disk but served from CDN only if cdn=true
		plugin pluginstore.Plugin

		// When true, set cdn=true in config
		cdn bool
		expModuleHash string
	}{
		{
			name: "unsigned should not return module hash",
			plugin: newPlugin(pluginID, withSignatureStatus(plugins.SignatureStatusUnsigned)),
			cdn: false,
			features: &config.Features{SriChecksEnabled: false},
			expModuleHash: "",
		},
		{
			plugin: newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid"))),
				withClass(plugins.ClassExternal),
			),
			cdn: true,
			features: &config.Features{SriChecksEnabled: true},
			expModuleHash: newSRIHash(t, "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"),
		},
		{
			plugin: newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid"))),
				withClass(plugins.ClassExternal),
			),
			cdn: true,
			features: &config.Features{SriChecksEnabled: true},
			expModuleHash: newSRIHash(t, "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"),
		},
		{
			plugin: newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid"))),
			),
			cdn: false,
			features: &config.Features{SriChecksEnabled: true},
			expModuleHash: "",
		},
		{
			plugin: newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid"))),
			),
			cdn: true,
			features: &config.Features{SriChecksEnabled: false},
			expModuleHash: "",
		},
		{
			plugin: newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid"))),
			),
			cdn: false,
			features: &config.Features{SriChecksEnabled: false},
			expModuleHash: "",
		},
		{
			// parentPluginID (/)
			// └── pluginID (/datasource)
			name: "nested plugin should return module hash from parent MANIFEST.txt",
			store: []pluginstore.Plugin{
				newPlugin(
					parentPluginID,
					withSignatureStatus(plugins.SignatureStatusValid),
					withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-nested"))),
				),
			},
			plugin: newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-nested", "datasource"))),
				withParent(parentPluginID),
			),
			cdn: true,
			features: &config.Features{SriChecksEnabled: true},
			expModuleHash: newSRIHash(t, "04d70db091d96c4775fb32ba5a8f84cc22893eb43afdb649726661d4425c6711"),
		},
		{
			// parentPluginID (/)
			// └── pluginID (/panels/one)
			name: "nested plugin deeper than one subfolder should return module hash from parent MANIFEST.txt",
			store: []pluginstore.Plugin{
				newPlugin(
					parentPluginID,
					withSignatureStatus(plugins.SignatureStatusValid),
					withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-nested"))),
				),
			},
			plugin: newPlugin(
				pluginID,
				withSignatureStatus(plugins.SignatureStatusValid),
				withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-nested", "panels", "one"))),
				withParent(parentPluginID),
			),
			cdn: true,
			features: &config.Features{SriChecksEnabled: true},
|
||||
expModuleHash: newSRIHash(t, "cbd1ac2284645a0e1e9a8722a729f5bcdd2b831222728709c6360beecdd6143f"),
|
||||
},
|
||||
{
|
||||
// grand-parent-app (/)
|
||||
// ├── parent-datasource (/datasource)
|
||||
// │ └── child-panel (/datasource/panels/one)
|
||||
name: "nested plugin of a nested plugin should return module hash from parent MANIFEST.txt",
|
||||
store: []pluginstore.Plugin{
|
||||
newPlugin(
|
||||
"grand-parent-app",
|
||||
withSignatureStatus(plugins.SignatureStatusValid),
|
||||
withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-deeply-nested"))),
|
||||
),
|
||||
newPlugin(
|
||||
"parent-datasource",
|
||||
withSignatureStatus(plugins.SignatureStatusValid),
|
||||
withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-deeply-nested", "datasource"))),
|
||||
withParent("grand-parent-app"),
|
||||
),
|
||||
},
|
||||
plugin: newPlugin(
|
||||
"child-panel",
|
||||
withSignatureStatus(plugins.SignatureStatusValid),
|
||||
withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-deeply-nested", "datasource", "panels", "one"))),
|
||||
withParent("parent-datasource"),
|
||||
),
|
||||
cdn: true,
|
||||
features: &config.Features{SriChecksEnabled: true},
|
||||
expModuleHash: newSRIHash(t, "cbd1ac2284645a0e1e9a8722a729f5bcdd2b831222728709c6360beecdd6143f"),
|
||||
},
|
||||
{
|
||||
name: "nested plugin should not return module hash from parent if it's not registered in the store",
|
||||
store: []pluginstore.Plugin{},
|
||||
plugin: newPlugin(
|
||||
pluginID,
|
||||
withSignatureStatus(plugins.SignatureStatusValid),
|
||||
withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-nested", "panels", "one"))),
|
||||
withParent(parentPluginID),
|
||||
),
|
||||
cdn: false,
|
||||
features: &config.Features{SriChecksEnabled: true},
|
||||
expModuleHash: "",
|
||||
},
|
||||
{
|
||||
name: "missing module.js entry from MANIFEST.txt should not return module hash",
|
||||
plugin: newPlugin(
|
||||
pluginID,
|
||||
withSignatureStatus(plugins.SignatureStatusValid),
|
||||
withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-no-module-js"))),
|
||||
),
|
||||
cdn: false,
|
||||
features: &config.Features{SriChecksEnabled: true},
|
||||
expModuleHash: "",
|
||||
},
|
||||
{
|
||||
name: "signed status but missing MANIFEST.txt should not return module hash",
|
||||
plugin: newPlugin(
|
||||
pluginID,
|
||||
withSignatureStatus(plugins.SignatureStatusValid),
|
||||
withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-no-manifest-txt"))),
|
||||
),
|
||||
cdn: false,
|
||||
features: &config.Features{SriChecksEnabled: true},
|
||||
expModuleHash: "",
|
||||
},
|
||||
} {
|
||||
if tc.name == "" {
|
||||
var expS string
|
||||
if tc.expModuleHash == "" {
|
||||
expS = "should not return module hash"
|
||||
} else {
|
||||
expS = "should return module hash"
|
||||
}
|
||||
tc.name = fmt.Sprintf("feature=%v, cdn_config=%v, class=%v %s", tc.features.SriChecksEnabled, tc.cdn, tc.plugin.Class, expS)
|
||||
}
|
||||
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
var pluginSettings config.PluginSettings
|
||||
if tc.cdn {
|
||||
pluginSettings = config.PluginSettings{
|
||||
pluginID: {
|
||||
"cdn": "true",
|
||||
},
|
||||
parentPluginID: map[string]string{
|
||||
"cdn": "true",
|
||||
},
|
||||
"grand-parent-app": map[string]string{
|
||||
"cdn": "true",
|
||||
},
|
||||
}
|
||||
}
|
||||
features := tc.features
|
||||
if features == nil {
|
||||
features = &config.Features{}
|
||||
}
|
||||
pCfg := &config.PluginManagementCfg{
|
||||
PluginsCDNURLTemplate: "http://cdn.example.com",
|
||||
PluginSettings: pluginSettings,
|
||||
Features: *features,
|
||||
}
|
||||
svc := ProvideService(
|
||||
pCfg,
|
||||
pluginscdn.ProvideService(pCfg),
|
||||
signature.ProvideService(pCfg, statickey.New()),
|
||||
pluginstore.NewFakePluginStore(tc.store...),
|
||||
)
|
||||
mh := svc.ModuleHash(context.Background(), tc.plugin)
|
||||
require.Equal(t, tc.expModuleHash, mh)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestService_ModuleHash_Cache(t *testing.T) {
	pCfg := &config.PluginManagementCfg{
		PluginSettings: config.PluginSettings{},
		Features:       config.Features{SriChecksEnabled: true},
	}
	svc := ProvideService(
		pCfg,
		pluginscdn.ProvideService(pCfg),
		signature.ProvideService(pCfg, statickey.New()),
		pluginstore.NewFakePluginStore(),
	)
	const pluginID = "grafana-test-datasource"

	t.Run("cache key", func(t *testing.T) {
		t.Run("with version", func(t *testing.T) {
			const pluginVersion = "1.0.0"
			p := newPlugin(pluginID, withInfo(plugins.Info{Version: pluginVersion}))
			k := svc.moduleHashCacheKey(p)
			require.Equal(t, pluginID+":"+pluginVersion, k, "cache key should be correct")
		})

		t.Run("without version", func(t *testing.T) {
			p := newPlugin(pluginID)
			k := svc.moduleHashCacheKey(p)
			require.Equal(t, pluginID+":", k, "cache key should be correct")
		})
	})

	t.Run("ModuleHash usage", func(t *testing.T) {
		pV1 := newPlugin(
			pluginID,
			withInfo(plugins.Info{Version: "1.0.0"}),
			withSignatureStatus(plugins.SignatureStatusValid),
			withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid"))),
		)

		pCfg = &config.PluginManagementCfg{
			PluginsCDNURLTemplate: "https://cdn.grafana.com",
			PluginSettings: config.PluginSettings{
				pluginID: {
					"cdn": "true",
				},
			},
			Features: config.Features{SriChecksEnabled: true},
		}
		svc = ProvideService(
			pCfg,
			pluginscdn.ProvideService(pCfg),
			signature.ProvideService(pCfg, statickey.New()),
			pluginstore.NewFakePluginStore(),
		)

		k := svc.moduleHashCacheKey(pV1)

		_, ok := svc.moduleHashCache.Load(k)
		require.False(t, ok, "cache should initially be empty")

		mhV1 := svc.ModuleHash(context.Background(), pV1)
		pV1Exp := newSRIHash(t, "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03")
		require.Equal(t, pV1Exp, mhV1, "returned value should be correct")

		cachedMh, ok := svc.moduleHashCache.Load(k)
		require.True(t, ok)
		require.Equal(t, pV1Exp, cachedMh, "cache should contain the returned value")

		t.Run("different version uses different cache key", func(t *testing.T) {
			pV2 := newPlugin(
				pluginID,
				withInfo(plugins.Info{Version: "2.0.0"}),
				withSignatureStatus(plugins.SignatureStatusValid),
				// different fs for different hash
				withFS(plugins.NewLocalFS(filepath.Join("testdata", "module-hash-valid-nested"))),
			)
			mhV2 := svc.ModuleHash(context.Background(), pV2)
			require.NotEqual(t, mhV2, mhV1, "different version should have different hash")
			require.Equal(t, newSRIHash(t, "266c19bc148b22ddef2a288fc5f8f40855bda22ccf60be53340b4931e469ae2a"), mhV2)
		})

		t.Run("cache should be used", func(t *testing.T) {
			// edit cache directly
			svc.moduleHashCache.Store(k, "hax")
			require.Equal(t, "hax", svc.ModuleHash(context.Background(), pV1))
		})
	})
}

func TestConvertHashFromSRI(t *testing.T) {
	for _, tc := range []struct {
		hash    string
		expHash string
		expErr  bool
	}{
		{
			hash:    "ddfcb449445064e6c39f0c20b15be3cb6a55837cf4781df23d02de005f436811",
			expHash: "sha256-3fy0SURQZObDnwwgsVvjy2pVg3z0eB3yPQLeAF9DaBE=",
		},
		{
			hash:   "not-a-valid-hash",
			expErr: true,
		},
	} {
		t.Run(tc.hash, func(t *testing.T) {
			r, err := convertHashForSRI(tc.hash)
			if tc.expErr {
				require.Error(t, err)
			} else {
				require.NoError(t, err)
				require.Equal(t, tc.expHash, r)
			}
		})
	}
}
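The test above exercises `convertHashForSRI`, which turns the hex-encoded sha256 digest recorded in `MANIFEST.txt` into a Subresource Integrity string (`sha256-` plus standard base64 of the raw digest bytes). A minimal standalone sketch of the same transformation (`hexToSRI` is a hypothetical name, not the repo's API):

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// hexToSRI decodes a hex-encoded sha256 digest and re-encodes it as an
// SRI string: "sha256-" followed by standard base64 of the raw bytes.
func hexToSRI(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", fmt.Errorf("invalid hex hash: %w", err)
	}
	return "sha256-" + base64.StdEncoding.EncodeToString(raw), nil
}

func main() {
	s, err := hexToSRI("ddfcb449445064e6c39f0c20b15be3cb6a55837cf4781df23d02de005f436811")
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // sha256-3fy0SURQZObDnwwgsVvjy2pVg3z0eB3yPQLeAF9DaBE=
}
```

The invalid-input case in the table corresponds to `hex.DecodeString` failing on non-hex characters.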

func newPlugin(pluginID string, cbs ...func(p pluginstore.Plugin) pluginstore.Plugin) pluginstore.Plugin {
	p := pluginstore.Plugin{
		JSONData: plugins.JSONData{
@@ -534,13 +186,6 @@ func newPlugin(pluginID string, cbs ...func(p pluginstore.Plugin) pluginstore.Pl
	return p
}

func withInfo(info plugins.Info) func(p pluginstore.Plugin) pluginstore.Plugin {
	return func(p pluginstore.Plugin) pluginstore.Plugin {
		p.Info = info
		return p
	}
}

func withFS(fs plugins.FS) func(p pluginstore.Plugin) pluginstore.Plugin {
	return func(p pluginstore.Plugin) pluginstore.Plugin {
		p.FS = fs
@@ -548,13 +193,6 @@ func withFS(fs plugins.FS) func(p pluginstore.Plugin) pluginstore.Plugin {
	}
}

func withSignatureStatus(status plugins.SignatureStatus) func(p pluginstore.Plugin) pluginstore.Plugin {
	return func(p pluginstore.Plugin) pluginstore.Plugin {
		p.Signature = status
		return p
	}
}

func withAngular(angular bool) func(p pluginstore.Plugin) pluginstore.Plugin {
	return func(p pluginstore.Plugin) pluginstore.Plugin {
		p.Angular = plugins.AngularMeta{Detected: angular}
@@ -562,13 +200,6 @@ func withAngular(angular bool) func(p pluginstore.Plugin) pluginstore.Plugin {
	}
}

func withParent(parentID string) func(p pluginstore.Plugin) pluginstore.Plugin {
	return func(p pluginstore.Plugin) pluginstore.Plugin {
		p.Parent = &pluginstore.ParentPlugin{ID: parentID}
		return p
	}
}

func withClass(class plugins.Class) func(p pluginstore.Plugin) pluginstore.Plugin {
	return func(p pluginstore.Plugin) pluginstore.Plugin {
		p.Class = class
@@ -587,9 +218,3 @@ func newPluginSettings(pluginID string, kv map[string]string) config.PluginSetti
		pluginID: kv,
	}
}

func newSRIHash(t *testing.T, s string) string {
	r, err := convertHashForSRI(s)
	require.NoError(t, err)
	return r
}

@@ -131,6 +131,7 @@ var WireSet = wire.NewSet(
 	plugincontext.ProvideBaseService,
 	wire.Bind(new(plugincontext.BasePluginContextProvider), new(*plugincontext.BaseProvider)),
 	plugininstaller.ProvideService,
+	pluginassets.ProvideModuleHashCalculator,
 	pluginassets.ProvideService,
 	pluginchecker.ProvidePreinstall,
 	wire.Bind(new(pluginchecker.Preinstall), new(*pluginchecker.PreinstallImpl)),

@@ -14,7 +14,6 @@ import (
 	"github.com/grafana/grafana/pkg/apimachinery/validation"
 	"github.com/grafana/grafana/pkg/storage/unified/sql/db"
 	"github.com/grafana/grafana/pkg/storage/unified/sql/dbutil"
-	"github.com/grafana/grafana/pkg/storage/unified/sql/rvmanager"
 	"github.com/grafana/grafana/pkg/storage/unified/sql/sqltemplate"
 	gocache "github.com/patrickmn/go-cache"
 )

@@ -869,18 +868,10 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
 	if key.Action == DataActionDeleted {
 		generation = 0
 	}

-	// In compatibility mode, the previous RV, when available, is saved as a microsecond
-	// timestamp, as is done in the SQL backend.
-	previousRV := event.PreviousRV
-	if event.PreviousRV > 0 && isSnowflake(event.PreviousRV) {
-		previousRV = rvmanager.RVFromSnowflake(event.PreviousRV)
-	}
-
 	_, err := dbutil.Exec(ctx, tx, sqlKVUpdateLegacyResourceHistory, sqlKVLegacyUpdateHistoryRequest{
 		SQLTemplate: sqltemplate.New(kv.dialect),
 		GUID:        key.GUID,
-		PreviousRV:  previousRV,
+		PreviousRV:  event.PreviousRV,
 		Generation:  generation,
 	})

@@ -909,7 +900,7 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
 		Name:       key.Name,
 		Action:     action,
 		Folder:     key.Folder,
-		PreviousRV: previousRV,
+		PreviousRV: event.PreviousRV,
 	})

 	if err != nil {
@@ -925,7 +916,7 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
 		Name:       key.Name,
 		Action:     action,
 		Folder:     key.Folder,
-		PreviousRV: previousRV,
+		PreviousRV: event.PreviousRV,
 	})

 	if err != nil {
@@ -947,15 +938,3 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T

 	return nil
 }
-
-// isSnowflake returns whether the argument passed is a snowflake ID (new) or a microsecond timestamp (old).
-// We try to interpret the number as a microsecond timestamp first. If it represents a time in the past,
-// it is considered a microsecond timestamp. Snowflake IDs are much larger integers and would lead
-// to dates in the future if interpreted as a microsecond timestamp.
-func isSnowflake(rv int64) bool {
-	ts := time.UnixMicro(rv)
-	oneHourFromNow := time.Now().Add(time.Hour)
-	isMicroSecRV := ts.Before(oneHourFromNow)
-
-	return !isMicroSecRV
-}
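The `isSnowflake` helper removed above relies on the magnitude gap between the two encodings: a microsecond timestamp interpreted as a time lands in the past, while a snowflake ID is a far larger integer that would land in the distant future. A standalone sketch of the same heuristic (the sample values are illustrative, not from the repo):

```go
package main

import (
	"fmt"
	"time"
)

// isSnowflake mirrors the heuristic: interpret the value as a microsecond
// timestamp; anything that lands more than an hour in the future must be a
// snowflake ID, since those are much larger integers.
func isSnowflake(rv int64) bool {
	ts := time.UnixMicro(rv)
	return !ts.Before(time.Now().Add(time.Hour))
}

func main() {
	microRV := int64(1600000000000000) // 2020-09-13 in microseconds: a plain timestamp
	snowflakeID := int64(1) << 62      // far beyond any plausible microsecond timestamp
	fmt.Println(isSnowflake(microRV), isSnowflake(snowflakeID)) // false true
}
```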

@@ -19,18 +19,13 @@ const (
 	defaultBufferSize = 10000
 )

-type notifier interface {
-	Watch(context.Context, watchOptions) <-chan Event
-}
-
-type pollingNotifier struct {
+type notifier struct {
 	eventStore *eventStore
 	log        logging.Logger
 }

 type notifierOptions struct {
-	log                logging.Logger
-	useChannelNotifier bool
+	log logging.Logger
 }

 type watchOptions struct {
@@ -49,26 +44,15 @@ func defaultWatchOptions() watchOptions {
 	}
 }

-func newNotifier(eventStore *eventStore, opts notifierOptions) notifier {
+func newNotifier(eventStore *eventStore, opts notifierOptions) *notifier {
 	if opts.log == nil {
 		opts.log = &logging.NoOpLogger{}
 	}

-	if opts.useChannelNotifier {
-		return &channelNotifier{}
-	}
-
-	return &pollingNotifier{eventStore: eventStore, log: opts.log}
-}
-
-type channelNotifier struct{}
-
-func (cn *channelNotifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
-	return nil
+	return &notifier{eventStore: eventStore, log: opts.log}
 }

 // Return the last resource version from the event store
-func (n *pollingNotifier) lastEventResourceVersion(ctx context.Context) (int64, error) {
+func (n *notifier) lastEventResourceVersion(ctx context.Context) (int64, error) {
 	e, err := n.eventStore.LastEventKey(ctx)
 	if err != nil {
 		return 0, err
@@ -76,11 +60,11 @@ func (n *pollingNotifier) lastEventResourceVersion(ctx context.Context) (int64,
 	return e.ResourceVersion, nil
 }

-func (n *pollingNotifier) cacheKey(evt Event) string {
+func (n *notifier) cacheKey(evt Event) string {
 	return fmt.Sprintf("%s~%s~%s~%s~%d", evt.Namespace, evt.Group, evt.Resource, evt.Name, evt.ResourceVersion)
 }

-func (n *pollingNotifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
+func (n *notifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
 	if opts.MinBackoff <= 0 {
 		opts.MinBackoff = defaultMinBackoff
 	}

@@ -13,7 +13,7 @@ import (
 	"github.com/stretchr/testify/require"
 )

-func setupTestNotifier(t *testing.T) (*pollingNotifier, *eventStore) {
+func setupTestNotifier(t *testing.T) (*notifier, *eventStore) {
 	db := setupTestBadgerDB(t)
 	t.Cleanup(func() {
 		err := db.Close()
@@ -22,10 +22,10 @@ func setupTestNotifier(t *testing.T) (*pollingNotifier, *eventStore) {
 	kv := NewBadgerKV(db)
 	eventStore := newEventStore(kv)
 	notifier := newNotifier(eventStore, notifierOptions{log: &logging.NoOpLogger{}})
-	return notifier.(*pollingNotifier), eventStore
+	return notifier, eventStore
 }

-func setupTestNotifierSqlKv(t *testing.T) (*pollingNotifier, *eventStore) {
+func setupTestNotifierSqlKv(t *testing.T) (*notifier, *eventStore) {
 	dbstore := db.InitTestDB(t)
 	eDB, err := dbimpl.ProvideResourceDB(dbstore, setting.NewCfg(), nil)
 	require.NoError(t, err)
@@ -33,7 +33,7 @@ func setupTestNotifierSqlKv(t *testing.T) (*pollingNotifier, *eventStore) {
 	require.NoError(t, err)
 	eventStore := newEventStore(kv)
 	notifier := newNotifier(eventStore, notifierOptions{log: &logging.NoOpLogger{}})
-	return notifier.(*pollingNotifier), eventStore
+	return notifier, eventStore
 }

 func TestNewNotifier(t *testing.T) {
@@ -49,7 +49,7 @@ func TestDefaultWatchOptions(t *testing.T) {
 	assert.Equal(t, defaultBufferSize, opts.BufferSize)
 }

-func runNotifierTestWith(t *testing.T, storeName string, newStoreFn func(*testing.T) (*pollingNotifier, *eventStore), testFn func(*testing.T, context.Context, *pollingNotifier, *eventStore)) {
+func runNotifierTestWith(t *testing.T, storeName string, newStoreFn func(*testing.T) (*notifier, *eventStore), testFn func(*testing.T, context.Context, *notifier, *eventStore)) {
 	t.Run(storeName, func(t *testing.T) {
 		ctx := context.Background()
 		notifier, eventStore := newStoreFn(t)
@@ -62,7 +62,7 @@ func TestNotifier_lastEventResourceVersion(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierLastEventResourceVersion)
 }

-func testNotifierLastEventResourceVersion(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
+func testNotifierLastEventResourceVersion(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
 	// Test with no events
 	rv, err := notifier.lastEventResourceVersion(ctx)
 	assert.Error(t, err)
@@ -113,7 +113,7 @@ func TestNotifier_cachekey(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierCachekey)
 }

-func testNotifierCachekey(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
+func testNotifierCachekey(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
 	tests := []struct {
 		name  string
 		event Event
@@ -167,7 +167,7 @@ func TestNotifier_Watch_NoEvents(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchNoEvents)
 }

-func testNotifierWatchNoEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
+func testNotifierWatchNoEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
 	ctx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
 	defer cancel()

@@ -208,7 +208,7 @@ func TestNotifier_Watch_WithExistingEvents(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchWithExistingEvents)
 }

-func testNotifierWatchWithExistingEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
+func testNotifierWatchWithExistingEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
 	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
 	defer cancel()

@@ -282,7 +282,7 @@ func TestNotifier_Watch_EventDeduplication(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchEventDeduplication)
 }

-func testNotifierWatchEventDeduplication(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
+func testNotifierWatchEventDeduplication(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
 	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
 	defer cancel()

@@ -348,7 +348,7 @@ func TestNotifier_Watch_ContextCancellation(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchContextCancellation)
 }

-func testNotifierWatchContextCancellation(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
+func testNotifierWatchContextCancellation(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
 	ctx, cancel := context.WithCancel(ctx)

 	// Add an initial event so that lastEventResourceVersion doesn't return ErrNotFound
@@ -394,7 +394,7 @@ func TestNotifier_Watch_MultipleEvents(t *testing.T) {
 	runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchMultipleEvents)
 }

-func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
+func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
 	ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
 	defer cancel()
 	rv := time.Now().UnixNano()
@@ -456,27 +456,33 @@ func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier
 		},
 	}

-	errCh := make(chan error)
 	go func() {
 		for _, event := range testEvents {
-			errCh <- eventStore.Save(ctx, event)
+			err := eventStore.Save(ctx, event)
+			require.NoError(t, err)
 		}
 	}()

 	// Receive events
-	receivedEvents := make([]string, 0, len(testEvents))
-	for len(receivedEvents) != len(testEvents) {
+	receivedEvents := make([]Event, 0, len(testEvents))
+	for i := 0; i < len(testEvents); i++ {
 		select {
 		case event := <-events:
-			receivedEvents = append(receivedEvents, event.Name)
-		case err := <-errCh:
-			require.NoError(t, err)
+			receivedEvents = append(receivedEvents, event)
 		case <-time.After(1 * time.Second):
-			t.Fatalf("Timed out waiting for event %d", len(receivedEvents)+1)
+			t.Fatalf("Timed out waiting for event %d", i+1)
 		}
 	}

 	// Verify all events were received
 	assert.Len(t, receivedEvents, len(testEvents))

 	// Verify the events match and ordered by resource version
+	receivedNames := make([]string, len(receivedEvents))
+	for i, event := range receivedEvents {
+		receivedNames[i] = event.Name
+	}
+
 	expectedNames := []string{"test-resource-1", "test-resource-2", "test-resource-3"}
-	assert.ElementsMatch(t, expectedNames, receivedEvents)
+	assert.ElementsMatch(t, expectedNames, receivedNames)
 }

@@ -473,6 +473,8 @@ func (k *sqlKV) Delete(ctx context.Context, section string, key string) error {
 		return ErrNotFound
 	}

+	// TODO reflect change to resource table
+
 	return nil
 }

@@ -61,7 +61,7 @@ type kvStorageBackend struct {
 	bulkLock   *BulkLock
 	dataStore  *dataStore
 	eventStore *eventStore
-	notifier   notifier
+	notifier   *notifier
 	builder    DocumentBuilder
 	log        logging.Logger
 	withPruner bool
@@ -91,7 +91,6 @@ type KVBackendOptions struct {
 	Tracer trace.Tracer          // TODO add tracing
 	Reg    prometheus.Registerer // TODO add metrics

-	UseChannelNotifier bool
 	// Adding RvManager overrides the RV generated with snowflake in order to keep backwards compatibility with
 	// unified/sql
 	RvManager *rvmanager.ResourceVersionManager
@@ -122,7 +121,7 @@ func NewKVStorageBackend(opts KVBackendOptions) (KVBackend, error) {
 		bulkLock:   NewBulkLock(),
 		dataStore:  newDataStore(kv),
 		eventStore: eventStore,
-		notifier:   newNotifier(eventStore, notifierOptions{useChannelNotifier: opts.UseChannelNotifier}),
+		notifier:   newNotifier(eventStore, notifierOptions{}),
 		snowflake:  s,
 		builder:    StandardDocumentBuilder(), // For now we use the standard document builder.
 		log:        &logging.NoOpLogger{},     // Make this configurable
@@ -347,7 +346,7 @@ func (k *kvStorageBackend) WriteEvent(ctx context.Context, event WriteEvent) (in
 			return 0, fmt.Errorf("failed to write data: %w", err)
 		}

-		rv = rvmanager.SnowflakeFromRV(rv)
+		rv = rvmanager.SnowflakeFromRv(rv)
 		dataKey.ResourceVersion = rv
 	} else {
 		err := k.dataStore.Save(ctx, dataKey, bytes.NewReader(event.Value))
@@ -689,6 +688,9 @@ func validateListHistoryRequest(req *resourcepb.ListRequest) error {
 	if key.Namespace == "" {
 		return fmt.Errorf("namespace is required")
 	}
+	if key.Name == "" {
+		return fmt.Errorf("name is required")
+	}
 	return nil
 }

@@ -307,7 +307,7 @@ func (m *ResourceVersionManager) execBatch(ctx context.Context, group, resource
 	// Allocate the RVs
 	for i, guid := range guids {
 		guidToRV[guid] = rv
-		guidToSnowflakeRV[guid] = SnowflakeFromRV(rv)
+		guidToSnowflakeRV[guid] = SnowflakeFromRv(rv)
 		rvs[i] = rv
 		rv++
 	}
@@ -364,20 +364,12 @@ func (m *ResourceVersionManager) execBatch(ctx context.Context, group, resource
 	}
 }

-// takes a unix microsecond RV and transforms into a snowflake format. The timestamp is converted from microsecond to
+// takes a unix microsecond rv and transforms into a snowflake format. The timestamp is converted from microsecond to
 // millisecond (the integer division) and the remainder is saved in the stepbits section. machine id is always 0
-func SnowflakeFromRV(rv int64) int64 {
+func SnowflakeFromRv(rv int64) int64 {
 	return (((rv / 1000) - snowflake.Epoch) << (snowflake.NodeBits + snowflake.StepBits)) + (rv % 1000)
 }

-// It is generally not possible to convert from a snowflakeID to a microsecond RV due to the loss in precision
-// (snowflake ID stores timestamp in milliseconds). However, this implementation stores the microsecond fraction
-// in the step bits (see SnowflakeFromRV), allowing us to compute the microsecond timestamp.
-func RVFromSnowflake(snowflakeID int64) int64 {
-	microSecFraction := snowflakeID & ((1 << snowflake.StepBits) - 1)
-	return ((snowflakeID>>(snowflake.NodeBits+snowflake.StepBits))+snowflake.Epoch)*1000 + microSecFraction
-}
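The `SnowflakeFromRV`/`RVFromSnowflake` pair above packs a microsecond RV into a snowflake layout: the millisecond part goes into the timestamp field and the sub-millisecond remainder is stashed in the step bits, which is what makes the conversion reversible. A self-contained sketch of that packing, using constants matching the defaults of `github.com/bwmarrin/snowflake` (an assumption here; the real code reads them from the library):

```go
package main

import "fmt"

// Assumed to match github.com/bwmarrin/snowflake defaults:
// Twitter epoch in milliseconds, 10 node bits, 12 step bits.
const (
	epoch    = int64(1288834974657)
	nodeBits = 10
	stepBits = 12
)

// snowflakeFromRV packs a microsecond RV: milliseconds since the epoch go into
// the timestamp field, and the microsecond remainder lands in the step bits.
func snowflakeFromRV(rv int64) int64 {
	return ((rv/1000 - epoch) << (nodeBits + stepBits)) + rv%1000
}

// rvFromSnowflake reverses the packing: recover the millisecond timestamp from
// the high bits and add back the microsecond fraction stored in the step bits.
func rvFromSnowflake(id int64) int64 {
	microFraction := id & ((1 << stepBits) - 1)
	return (id>>(nodeBits+stepBits)+epoch)*1000 + microFraction
}

func main() {
	rv := int64(1768246438806211) // a microsecond timestamp
	id := snowflakeFromRV(rv)
	fmt.Println(rvFromSnowflake(id) == rv) // true
}
```

The round trip works because the remainder `rv % 1000` is always below 1000, well under the 4096 values the 12 step bits can hold.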
|
||||
|
||||
// helper utility to compare two RVs. The first RV must be in snowflake format. Will convert rv2 to snowflake and retry
|
||||
// if comparison fails
|
||||
func IsRvEqual(rv1, rv2 int64) bool {
|
||||
@@ -385,7 +377,7 @@ func IsRvEqual(rv1, rv2 int64) bool {
|
||||
return true
|
||||
}
|
||||
|
||||
return rv1 == SnowflakeFromRV(rv2)
|
||||
return rv1 == SnowflakeFromRv(rv2)
|
||||
}
|
||||
|
||||
// Lock locks the resource version for the given key
|
||||
|
||||
@@ -63,13 +63,3 @@ func TestResourceVersionManager(t *testing.T) {
|
||||
require.Equal(t, rv, int64(200))
|
||||
})
|
||||
}
|
||||
|
||||
func TestSnowflakeFromRVRoundtrips(t *testing.T) {
|
||||
// 2026-01-12 19:33:58.806211 +0000 UTC
|
||||
offset := int64(1768246438806211) // in microseconds
|
||||
|
||||
for n := range int64(100) {
|
||||
ts := offset + n
|
||||
require.Equal(t, ts, RVFromSnowflake(SnowflakeFromRV(ts)))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -99,9 +99,6 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
isHA := isHighAvailabilityEnabled(opts.Cfg.SectionWithEnvOverrides("database"),
|
||||
opts.Cfg.SectionWithEnvOverrides("resource_api"))
|
||||
|
||||
if opts.Cfg.EnableSQLKVBackend {
|
||||
sqlkv, err := resource.NewSQLKV(eDB)
|
||||
if err != nil {
|
||||
@@ -109,10 +106,9 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
|
||||
}
|
||||
|
||||
kvBackendOpts := resource.KVBackendOptions{
|
||||
KvStore: sqlkv,
|
||||
Tracer: opts.Tracer,
|
||||
Reg: opts.Reg,
|
||||
UseChannelNotifier: !isHA,
|
||||
KvStore: sqlkv,
|
||||
Tracer: opts.Tracer,
|
||||
Reg: opts.Reg,
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
@@ -144,6 +140,9 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
|
||||
serverOptions.Backend = kvBackend
|
||||
serverOptions.Diagnostics = kvBackend
|
||||
} else {
|
||||
isHA := isHighAvailabilityEnabled(opts.Cfg.SectionWithEnvOverrides("database"),
|
||||
opts.Cfg.SectionWithEnvOverrides("resource_api"))
|
||||
|
||||
backend, err := NewBackend(BackendOptions{
|
||||
DBProvider: eDB,
|
||||
Reg: opts.Reg,
|
||||
|
||||
@@ -23,7 +23,6 @@ import (
"github.com/grafana/authlib/types"

"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
sqldb "github.com/grafana/grafana/pkg/storage/unified/sql/db"
@@ -100,10 +99,6 @@ func RunStorageBackendTest(t *testing.T, newBackend NewBackendFunc, opts *TestOp
}

t.Run(tc.name, func(t *testing.T) {
if db.IsTestDbSQLite() {
t.Skip("Skipping tests on sqlite until channel notifier is implemented")
}

tc.fn(t, newBackend(context.Background()), opts.NSPrefix)
})
}
@@ -1171,7 +1166,7 @@ func runTestIntegrationBackendCreateNewResource(t *testing.T, backend resource.S
}))

server := newServer(t, backend)
ns := nsPrefix + "-create-rsrce" // create-resource
ns := nsPrefix + "-create-resource"
ctx = request.WithNamespace(ctx, ns)

request := &resourcepb.CreateRequest{
@@ -1612,7 +1607,7 @@ func (s *sliceBulkRequestIterator) RollbackRequested() bool {

func runTestIntegrationBackendOptimisticLocking(t *testing.T, backend resource.StorageBackend, nsPrefix string) {
ctx := testutil.NewTestContext(t, time.Now().Add(30*time.Second))
ns := nsPrefix + "-optimis-lock" // optimistic-locking. need to cut down on characters to not exceed namespace character limit (40)
ns := nsPrefix + "-optimistic-locking"

t.Run("concurrent updates with same RV - only one succeeds", func(t *testing.T) {
// Create initial resource with rv0 (no previous RV)

@@ -36,10 +36,6 @@ func NewTestSqlKvBackend(t *testing.T, ctx context.Context, withRvManager bool)
KvStore: kv,
}

if db.DriverName() == "sqlite3" {
kvOpts.UseChannelNotifier = true
}

if withRvManager {
dialect := sqltemplate.DialectForDriver(db.DriverName())
rvManager, err := rvmanager.NewResourceVersionManager(rvmanager.ResourceManagerOptions{
@@ -204,7 +200,7 @@ func verifyKeyPath(t *testing.T, db sqldb.DB, ctx context.Context, key *resource
var keyPathRV int64
if isSqlBackend {
// Convert microsecond RV to snowflake for key_path construction
keyPathRV = rvmanager.SnowflakeFromRV(resourceVersion)
keyPathRV = rvmanager.SnowflakeFromRv(resourceVersion)
} else {
// KV backend already provides snowflake RV
keyPathRV = resourceVersion
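The hunks above convert between two resource-version formats: the comments state that the SQL path stores RVs as microsecond timestamps while the KV path hands out snowflake-style IDs, so `SnowflakeFromRv` and `IsRvEqual` bridge the two. The diff does not show their implementations; the sketch below is a toy model that assumes the classic snowflake layout (timestamp in the high bits above a 22-bit shift) purely to illustrate why an equality check must compare at the shared precision:

```go
package main

import "fmt"

// Toy model of the two RV formats discussed in the diff. The 22-bit
// timestamp shift is the classic snowflake layout and is an assumption
// here; the real rvmanager encoding may differ.
const snowflakeTimestampShift = 22

// snowflakeFromRV converts a microsecond RV into a snowflake-style ID,
// keeping millisecond precision in the high bits.
func snowflakeFromRV(microRV int64) int64 {
	return (microRV / 1000) << snowflakeTimestampShift
}

// isRvEqual reports whether a snowflake RV and a microsecond RV refer to
// the same instant, comparing at the millisecond precision both share.
func isRvEqual(snowflakeRV, microRV int64) bool {
	return snowflakeRV>>snowflakeTimestampShift == microRV/1000
}

func main() {
	microRV := int64(1730000000123456) // microseconds since epoch
	sf := snowflakeFromRV(microRV)
	fmt.Println(isRvEqual(sf, microRV))         // same instant
	fmt.Println(isRvEqual(sf, microRV+2000000)) // two seconds apart
}
```

This is why the test code cannot compare the raw int64 values directly: a snowflake and a microsecond RV for the same write are numerically unrelated until one side is converted.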
@@ -438,6 +434,9 @@ func verifyResourceHistoryTable(t *testing.T, db sqldb.DB, namespace string, res

rows, err := db.QueryContext(ctx, query, namespace)
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()

var records []ResourceHistoryRecord
for rows.Next() {
@@ -461,34 +460,33 @@ func verifyResourceHistoryTable(t *testing.T, db sqldb.DB, namespace string, res
for resourceIdx, res := range resources {
// Check create record (action=1, generation=1)
createRecord := records[recordIndex]
verifyResourceHistoryRecord(t, createRecord, namespace, res, resourceIdx, 1, 0, 1, resourceVersions[resourceIdx][0])
verifyResourceHistoryRecord(t, createRecord, res, resourceIdx, 1, 0, 1, resourceVersions[resourceIdx][0])
recordIndex++
}

for resourceIdx, res := range resources {
// Check update record (action=2, generation=2)
updateRecord := records[recordIndex]
verifyResourceHistoryRecord(t, updateRecord, namespace, res, resourceIdx, 2, resourceVersions[resourceIdx][0], 2, resourceVersions[resourceIdx][1])
verifyResourceHistoryRecord(t, updateRecord, res, resourceIdx, 2, resourceVersions[resourceIdx][0], 2, resourceVersions[resourceIdx][1])
recordIndex++
}

for resourceIdx, res := range resources[:2] {
// Check delete record (action=3, generation=0) - only first 2 resources were deleted
deleteRecord := records[recordIndex]
verifyResourceHistoryRecord(t, deleteRecord, namespace, res, resourceIdx, 3, resourceVersions[resourceIdx][1], 0, resourceVersions[resourceIdx][2])
verifyResourceHistoryRecord(t, deleteRecord, res, resourceIdx, 3, resourceVersions[resourceIdx][1], 0, resourceVersions[resourceIdx][2])
recordIndex++
}
}

// verifyResourceHistoryRecord validates a single resource_history record
func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, namespace string, expectedRes struct{ name, folder string }, resourceIdx, expectedAction int, expectedPrevRV int64, expectedGeneration int, expectedRV int64) {
func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, expectedRes struct{ name, folder string }, resourceIdx, expectedAction int, expectedPrevRV int64, expectedGeneration int, expectedRV int64) {
// Validate GUID (should be non-empty)
require.NotEmpty(t, record.GUID, "GUID should not be empty")

// Validate group/resource/namespace/name
require.Equal(t, "playlist.grafana.app", record.Group)
require.Equal(t, "playlists", record.Resource)
require.Equal(t, namespace, record.Namespace)
require.Equal(t, expectedRes.name, record.Name)

// Validate value contains expected JSON - server modifies/formats the JSON differently for different operations
@@ -515,12 +513,8 @@ func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, nam
// For KV backend operations, expectedPrevRV is now in snowflake format (returned by KV backend)
// but resource_history table stores microsecond RV, so we need to use IsRvEqual for comparison
if strings.Contains(record.Namespace, "-kv") {
if expectedPrevRV == 0 {
require.Zero(t, record.PreviousResourceVersion)
} else {
require.Equal(t, expectedPrevRV, rvmanager.SnowflakeFromRV(record.PreviousResourceVersion),
"Previous resource version should match (KV backend snowflake format)")
}
require.True(t, rvmanager.IsRvEqual(expectedPrevRV, record.PreviousResourceVersion),
"Previous resource version should match (KV backend snowflake format)")
} else {
require.Equal(t, expectedPrevRV, record.PreviousResourceVersion)
}
@@ -552,6 +546,9 @@ func verifyResourceTable(t *testing.T, db sqldb.DB, namespace string, resources

rows, err := db.QueryContext(ctx, query, namespace)
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()

var records []ResourceRecord
for rows.Next() {
@@ -615,6 +612,9 @@ func verifyResourceVersionTable(t *testing.T, db sqldb.DB, namespace string, res
// Check that we have exactly one entry for playlist.grafana.app/playlists
rows, err := db.QueryContext(ctx, query, "playlist.grafana.app", "playlists")
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()

var records []ResourceVersionRecord
for rows.Next() {
@@ -649,7 +649,7 @@ func verifyResourceVersionTable(t *testing.T, db sqldb.DB, namespace string, res
isKvBackend := strings.Contains(namespace, "-kv")
recordResourceVersion := record.ResourceVersion
if isKvBackend {
recordResourceVersion = rvmanager.SnowflakeFromRV(record.ResourceVersion)
recordResourceVersion = rvmanager.SnowflakeFromRv(record.ResourceVersion)
}

require.Less(t, recordResourceVersion, int64(9223372036854775807), "resource_version should be reasonable")
@@ -841,20 +841,24 @@ func runMixedConcurrentOperations(t *testing.T, sqlServer, kvServer resource.Res
}

// SQL backend operations
wg.Go(func() {
wg.Add(1)
go func() {
defer wg.Done()
<-startBarrier // Wait for signal to start
if err := runBackendOperationsWithCounts(ctx, sqlServer, namespace+"-sql", "sql", opCounts); err != nil {
errors <- fmt.Errorf("SQL backend operations failed: %w", err)
}
})
}()

// KV backend operations
wg.Go(func() {
wg.Add(1)
go func() {
defer wg.Done()
<-startBarrier // Wait for signal to start
if err := runBackendOperationsWithCounts(ctx, kvServer, namespace+"-kv", "kv", opCounts); err != nil {
errors <- fmt.Errorf("KV backend operations failed: %w", err)
}
})
}()

// Start both goroutines simultaneously
close(startBarrier)

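The `runMixedConcurrentOperations` hunk above trades `wg.Go(...)` (the helper added to `sync.WaitGroup` in Go 1.25) for the portable `wg.Add(1)` / `go func() { defer wg.Done() ... }()` form; both variants pair with a closed-channel start barrier so the SQL and KV workloads begin at the same instant. A self-contained sketch of that pattern, with `runBackendOperations` as a stand-in for the real workload:

```go
package main

import (
	"fmt"
	"sync"
)

// runBackendOperations stands in for the real per-backend workload.
func runBackendOperations(name string) error { return nil }

func main() {
	startBarrier := make(chan struct{})
	errs := make(chan error, 2) // buffered so workers never block on send
	var wg sync.WaitGroup

	for _, name := range []string{"sql", "kv"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			<-startBarrier // park until every worker is registered
			if err := runBackendOperations(name); err != nil {
				errs <- fmt.Errorf("%s backend operations failed: %w", name, err)
			}
		}(name)
	}

	close(startBarrier) // release all workers at once
	wg.Wait()
	close(errs)
	for err := range errs {
		fmt.Println(err)
	}
	fmt.Println("done")
}
```

Closing the barrier channel (rather than sending on it) wakes every waiting goroutine simultaneously, which is what makes the run a genuine concurrency test instead of an accidental sequential one.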
@@ -8,7 +8,6 @@ import (
"github.com/stretchr/testify/require"

"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/util/testutil"
)

func TestBadgerKVStorageBackend(t *testing.T) {
@@ -37,13 +36,19 @@ func TestBadgerKVStorageBackend(t *testing.T) {
})
}

func TestIntegrationSQLKVStorageBackend(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)

func TestSQLKVStorageBackend(t *testing.T) {
skipTests := map[string]bool{
TestWatchWriteEvents: true,
TestList: true,
TestBlobSupport: true,
TestGetResourceStats: true,
TestListHistory: true,
TestListHistoryErrorReporting: true,
TestListModifiedSince: true,
TestListTrash: true,
TestCreateNewResource: true,
TestGetResourceLastImportTime: true,
TestOptimisticLocking: true,
}

t.Run("Without RvManager", func(t *testing.T) {
@@ -51,7 +56,7 @@ func TestIntegrationSQLKVStorageBackend(t *testing.T) {
backend, _ := NewTestSqlKvBackend(t, ctx, false)
return backend
}, &TestOptions{
NSPrefix: "sqlkvstoragetest",
NSPrefix: "sqlkvstorage-test",
SkipTests: skipTests,
})
})
@@ -61,7 +66,7 @@ func TestIntegrationSQLKVStorageBackend(t *testing.T) {
backend, _ := NewTestSqlKvBackend(t, ctx, true)
return backend
}, &TestOptions{
NSPrefix: "sqlkvstoragetest-rvmanager",
NSPrefix: "sqlkvstorage-withrvmanager-test",
SkipTests: skipTests,
})
})

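`TestSQLKVStorageBackend` above hands a `SkipTests` map to the shared suite through `TestOptions`, letting one backend opt out of cases another backend must still run. The runner's side of that contract is not shown in this diff; a plausible sketch of how such a map is consulted (all names here are illustrative, and the real runner would call `t.Skip` instead of `continue`):

```go
package main

import "fmt"

// runSuite mimics a shared storage-backend suite honoring a per-backend
// skip map: tests whose names are marked true are never executed.
func runSuite(tests []string, skip map[string]bool, run func(name string)) []string {
	var executed []string
	for _, name := range tests {
		if skip[name] {
			continue // the real runner would call t.Skip here
		}
		run(name)
		executed = append(executed, name)
	}
	return executed
}

func main() {
	skip := map[string]bool{"TestWatchWriteEvents": true, "TestList": true}
	all := []string{"TestWatchWriteEvents", "TestList", "TestOptimisticLocking"}
	ran := runSuite(all, skip, func(name string) {})
	fmt.Println(ran)
}
```

Keeping the skip set as data, rather than branching inside each test, lets every backend share one suite definition while documenting exactly which cases it does not yet support.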
@@ -10,10 +10,10 @@ import (

"github.com/grafana/alerting/notify"
"github.com/grafana/alerting/receivers/schema"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"

"github.com/grafana/grafana/apps/alerting/notifications/pkg/apis/alertingnotifications/v0alpha1"
"github.com/grafana/grafana/pkg/services/featuremgmt"
@@ -21,6 +21,7 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
"github.com/grafana/grafana/pkg/tests/api/alerting"
"github.com/grafana/grafana/pkg/tests/apis"
test_common "github.com/grafana/grafana/pkg/tests/apis/alerting/notifications/common"
"github.com/grafana/grafana/pkg/tests/testinfra"
)

@@ -33,8 +34,7 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
},
})

receiverClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
receiverClient := test_common.NewReceiverClient(t, helper.Org1.Admin)

cliCfg := helper.Org1.Admin.NewRestConfig()
alertingApi := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
@@ -58,9 +58,9 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
response := alertingApi.ConvertPrometheusPostAlertmanagerConfig(t, amConfig, headers)
require.Equal(t, "success", response.Status)

receiversRaw, err := receiverClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
receiversRaw, err := receiverClient.Client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
raw, err := json.Marshal(receiversRaw)
raw, err := receiversRaw.MarshalJSON()
require.NoError(t, err)

expectedBytes, err := os.ReadFile(path.Join("test-data", "imported-expected-snapshot.json"))
@@ -74,7 +74,7 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
require.NoError(t, err)
}

receivers, err := receiverClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
receivers, err := receiverClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
t.Run("secure fields should be properly masked", func(t *testing.T) {
for _, receiver := range receivers.Items {
@@ -114,14 +114,14 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
toUpdate := receivers.Items[1]
toUpdate.Spec.Title = "another title"

_, err = receiverClient.Update(ctx, &toUpdate, resource.UpdateOptions{})
_, err = receiverClient.Update(ctx, &toUpdate, v1.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})

t.Run("should not be able to delete", func(t *testing.T) {
toDelete := receivers.Items[1]

err = receiverClient.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: toDelete.Name}, resource.DeleteOptions{})
err = receiverClient.Delete(ctx, toDelete.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
}

@@ -15,12 +15,12 @@ import (
"github.com/grafana/alerting/notify/notifytest"
"github.com/grafana/alerting/receivers/line"
"github.com/grafana/alerting/receivers/schema"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"

"github.com/grafana/alerting/notify"

@@ -65,8 +65,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {

ctx := context.Background()
helper := getTestHelper(t)
client, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := test_common.NewReceiverClient(t, helper.Org1.Admin)
newResource := &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -78,42 +77,42 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}

t.Run("create should fail if object name is specified", func(t *testing.T) {
receiver := newResource.Copy().(*v0alpha1.Receiver)
receiver.Name = "new-receiver"
_, err := client.Create(ctx, receiver, resource.CreateOptions{})
resource := newResource.Copy().(*v0alpha1.Receiver)
resource.Name = "new-receiver"
_, err := client.Create(ctx, resource, v1.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})

var resourceID resource.Identifier
var resourceID string
t.Run("create should succeed and provide resource name", func(t *testing.T) {
actual, err := client.Create(ctx, newResource, resource.CreateOptions{})
actual, err := client.Create(ctx, newResource, v1.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
resourceID = actual.GetStaticMetadata().Identifier()
resourceID = actual.Name
})

t.Run("resource should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resourceID)
actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newResource.Spec, actual.Spec)
})

t.Run("update should rename receiver if name in the specification changes", func(t *testing.T) {
existing, err := client.Get(ctx, resourceID)
existing, err := client.Get(ctx, resourceID, v1.GetOptions{})
require.NoError(t, err)

updated := existing.Copy().(*v0alpha1.Receiver)
updated.Spec.Title = "another-newReceiver"

actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
require.NotEqualf(t, updated.ResourceVersion, actual.ResourceVersion, "Update should change the resource version but it didn't")

resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, actual.Spec, resource.Spec)
require.Equal(t, actual.Name, resource.Name)
@@ -141,8 +140,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {
admin := org1.Admin
viewer := org1.Viewer
editor := org1.Editor
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
adminClient := test_common.NewReceiverClient(t, admin)

writeACMetadata := []string{"canWrite", "canDelete"}
allACMetadata := []string{"canWrite", "canDelete", "canReadSecrets", "canAdmin", "canModifyProtected"}
@@ -294,10 +292,8 @@ func TestIntegrationResourcePermissions(t *testing.T) {
},
} {
t.Run(tc.name, func(t *testing.T) {
createClient, err := v0alpha1.NewReceiverClientFromGenerator(tc.creatingUser.GetClientRegistry())
require.NoError(t, err)
client, err := v0alpha1.NewReceiverClientFromGenerator(tc.testUser.GetClientRegistry())
require.NoError(t, err)
createClient := test_common.NewReceiverClient(t, tc.creatingUser)
client := test_common.NewReceiverClient(t, tc.testUser)

var created = &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
@@ -312,12 +308,12 @@ func TestIntegrationResourcePermissions(t *testing.T) {
require.NoError(t, err)

// Create receiver with creatingUser
created, err = createClient.Create(ctx, created, resource.CreateOptions{})
created, err = createClient.Create(ctx, created, v1.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, created)

defer func() {
_ = adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
_ = adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
}()

// Assign resource permissions
@@ -342,7 +338,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {

// Obtain expected responses using admin client as source of truth.
expectedGetWithMetadata, expectedListWithMetadata := func() (*v0alpha1.Receiver, *v0alpha1.Receiver) {
expectedGet, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
expectedGet, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
require.NotNil(t, expectedGet)

@@ -356,7 +352,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {
expectedGetWithMetadata.SetAccessControl(ac)
}

expectedList, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
expectedList, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
expectedListWithMetadata := extractReceiverFromList(expectedList, created.Name)
require.NotNil(t, expectedListWithMetadata)
@@ -372,26 +368,26 @@ func TestIntegrationResourcePermissions(t *testing.T) {
}()

t.Run("should be able to list receivers", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
listedReceiver := extractReceiverFromList(list, created.Name)
assert.Equalf(t, expectedListWithMetadata, listedReceiver, "Expected %v but got %v", expectedListWithMetadata, listedReceiver)
})

t.Run("should be able to read receiver by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expectedGetWithMetadata.GetStaticMetadata().Identifier())
got, err := client.Get(ctx, expectedGetWithMetadata.Name, v1.GetOptions{})
require.NoError(t, err)
assert.Equalf(t, expectedGetWithMetadata, got, "Expected %v but got %v", expectedGetWithMetadata, got)
})
} else {
t.Run("list receivers should be empty", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Emptyf(t, list.Items, "Expected no receivers but got %v", list.Items)
})

t.Run("should be forbidden to read receiver by name", func(t *testing.T) {
_, err := client.Get(ctx, created.GetStaticMetadata().Identifier())
_, err := client.Get(ctx, created.Name, v1.GetOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -563,12 +559,10 @@ func TestIntegrationAccessControl(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
|
||||
for _, tc := range testCases {
|
||||
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
|
||||
client, err := v0alpha1.NewReceiverClientFromGenerator(tc.user.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
client := test_common.NewReceiverClient(t, tc.user)
|
||||
|
||||
var expected = &v0alpha1.Receiver{
|
||||
ObjectMeta: v1.ObjectMeta{
|
||||
@@ -586,29 +580,29 @@ func TestIntegrationAccessControl(t *testing.T) {
|
||||
newReceiver.Spec.Title = fmt.Sprintf("receiver-2-%s", tc.user.Identity.GetLogin())
|
||||
if tc.canCreate {
|
||||
t.Run("should be able to create receiver", func(t *testing.T) {
|
||||
actual, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
|
||||
actual, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
|
||||
require.NoErrorf(t, err, "Payload %s", string(d))
|
||||
|
||||
require.Equal(t, newReceiver.Spec, actual.Spec)
|
||||
|
||||
t.Run("should fail if already exists", func(t *testing.T) {
|
||||
_, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
|
||||
_, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "expected bad request but got %s", err)
|
||||
})
|
||||
|
||||
// Cleanup.
|
||||
require.NoError(t, adminClient.Delete(ctx, actual.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
|
||||
require.NoError(t, adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{}))
|
||||
})
|
||||
} else {
|
||||
t.Run("should be forbidden to create", func(t *testing.T) {
|
||||
_, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
|
||||
_, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
|
||||
})
|
||||
}
|
||||
|
||||
// create resource to proceed with other tests. We don't use the one created above because the user will always
|
||||
// have admin permissions on it.
|
||||
expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
|
||||
expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
|
||||
require.NoErrorf(t, err, "Payload %s", string(d))
|
||||
require.NotNil(t, expected)
|
||||
|
||||
@@ -633,34 +627,34 @@ func TestIntegrationAccessControl(t *testing.T) {
|
||||
expectedWithMetadata.SetAccessControl("canAdmin")
|
||||
}
|
||||
t.Run("should be able to list receivers", func(t *testing.T) {
|
||||
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
list, err := client.List(ctx, v1.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 2) // default + created
|
||||
})
|
||||
|
||||
t.Run("should be able to read receiver by resource identifier", func(t *testing.T) {
|
||||
got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
|
||||
got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, expectedWithMetadata, got)
|
||||
|
||||
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
|
||||
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
|
||||
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
|
||||
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
|
||||
})
|
||||
})
|
||||
} else {
|
||||
t.Run("list receivers should be empty", func(t *testing.T) {
|
||||
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
list, err := client.List(ctx, v1.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Emptyf(t, list.Items, "Expected no receivers but got %v", list.Items)
|
||||
})
|
||||
|
||||
t.Run("should be forbidden to read receiver by name", func(t *testing.T) {
|
||||
_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
|
||||
_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
|
||||
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
|
||||
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
|
||||
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
})
|
||||
@@ -674,7 +668,7 @@ func TestIntegrationAccessControl(t *testing.T) {
|
||||
|
||||
if tc.canUpdate {
|
||||
t.Run("should be able to update receiver", func(t *testing.T) {
|
||||
updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
|
||||
updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
|
||||
require.NoErrorf(t, err, "Payload %s", string(d))
|
||||
|
||||
expected = updated
|
||||
@@ -682,7 +676,7 @@ func TestIntegrationAccessControl(t *testing.T) {
|
||||
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
|
||||
up := updatedExpected.Copy().(*v0alpha1.Receiver)
|
||||
up.Name = "notFound"
|
||||
_, err := client.Update(ctx, up, resource.UpdateOptions{})
|
||||
_, err := client.Update(ctx, up, v1.UpdateOptions{})
|
||||
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
|
||||
})
|
||||
})
|
||||
@@ -692,7 +686,7 @@ func TestIntegrationAccessControl(t *testing.T) {
|
||||
createIntegration(t, "webhook"),
|
||||
}
|
||||
|
||||
expected, err = adminClient.Update(ctx, updatedExpected, resource.UpdateOptions{})
|
||||
expected, err = adminClient.Update(ctx, updatedExpected, v1.UpdateOptions{})
|
||||
require.NoErrorf(t, err, "Payload %s", string(d))
|
||||
require.NotNil(t, expected)
|
||||
|
||||
@@ -701,62 +695,60 @@ func TestIntegrationAccessControl(t *testing.T) {
|
||||
|
||||
if tc.canUpdateProtected {
|
||||
t.Run("should be able to update protected fields of the receiver", func(t *testing.T) {
|
||||
updated, err := client.Update(ctx, updatedProtected, resource.UpdateOptions{})
|
||||
updated, err := client.Update(ctx, updatedProtected, v1.UpdateOptions{})
|
||||
require.NoErrorf(t, err, "Payload %s", string(d))
|
||||
require.NotNil(t, updated)
|
||||
expected = updated
|
||||
})
|
||||
} else {
|
||||
t.Run("should be forbidden to edit protected fields of the receiver", func(t *testing.T) {
|
||||
_, err := client.Update(ctx, updatedProtected, resource.UpdateOptions{})
|
||||
_, err := client.Update(ctx, updatedProtected, v1.UpdateOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
}
|
||||
} else {
|
||||
t.Run("should be forbidden to update receiver", func(t *testing.T) {
|
||||
_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
|
||||
_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
|
||||
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
|
||||
up := updatedExpected.Copy().(*v0alpha1.Receiver)
|
||||
up.Name = "notFound"
|
||||
_, err := client.Update(ctx, up, resource.UpdateOptions{
|
||||
ResourceVersion: up.ResourceVersion,
|
||||
})
|
||||
_, err := client.Update(ctx, up, v1.UpdateOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
})
|
||||
require.Falsef(t, tc.canUpdateProtected, "Invalid combination of assertions. CanUpdateProtected should be false")
|
||||
}
|
||||
|
||||
deleteOptions := resource.DeleteOptions{Preconditions: resource.DeleteOptionsPreconditions{ResourceVersion: expected.ResourceVersion}}
|
||||
deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
|
||||
|
||||
if tc.canDelete {
|
||||
t.Run("should be able to delete receiver", func(t *testing.T) {
|
||||
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), deleteOptions)
|
||||
err := client.Delete(ctx, expected.Name, deleteOptions)
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
|
||||
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
|
||||
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
|
||||
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
|
||||
})
|
||||
})
|
||||
} else {
|
||||
t.Run("should be forbidden to delete receiver", func(t *testing.T) {
|
||||
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), deleteOptions)
|
||||
err := client.Delete(ctx, expected.Name, deleteOptions)
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
|
||||
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
|
||||
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
|
||||
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
|
||||
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
|
||||
})
|
||||
})
|
||||
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
|
||||
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
|
||||
}
|
||||
|
||||
if tc.canRead {
|
||||
t.Run("should get empty list if no receivers", func(t *testing.T) {
|
||||
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
|
||||
list, err := client.List(ctx, v1.ListOptions{})
|
||||
require.NoError(t, err)
|
||||
require.Len(t, list.Items, 1)
|
||||
})
|
||||
@@ -774,8 +766,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)

- adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
+ adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
// Prepare environment and create notification policy and rule that use receiver
alertmanagerRaw, err := testData.ReadFile(path.Join("test-data", "notification-settings.json"))
require.NoError(t, err)
@@ -822,7 +813,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {

requestReceivers := func(t *testing.T, title string) (v0alpha1.Receiver, v0alpha1.Receiver) {
t.Helper()
- receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+ receivers, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 2)
idx := slices.IndexFunc(receivers.Items, func(interval v0alpha1.Receiver) bool {
@@ -830,7 +821,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
})
receiverListed := receivers.Items[idx]

- receiverGet, err := adminClient.Get(ctx, receiverListed.GetStaticMetadata().Identifier())
+ receiverGet, err := adminClient.Get(ctx, receiverListed.Name, v1.GetOptions{})
require.NoError(t, err)

return receiverListed, *receiverGet
@@ -855,9 +846,8 @@ func TestIntegrationInUseMetadata(t *testing.T) {
amConfig.AlertmanagerConfig.Route.Routes = amConfig.AlertmanagerConfig.Route.Routes[:1]
v1Route, err := routingtree.ConvertToK8sResource(helper.Org1.AdminServiceAccount.OrgId, *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
- routeAdminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
- _, err = routeAdminClient.Update(ctx, v1Route, resource.UpdateOptions{})
+ routeAdminClient := test_common.NewRoutingTreeClient(t, helper.Org1.Admin)
+ _, err = routeAdminClient.Update(ctx, v1Route, v1.UpdateOptions{})
require.NoError(t, err)

receiverListed, receiverGet = requestReceivers(t, "user-defined")
@@ -878,7 +868,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
amConfig.AlertmanagerConfig.Route.Routes = nil
v1route, err := routingtree.ConvertToK8sResource(1, *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
- _, err = routeAdminClient.Update(ctx, v1route, resource.UpdateOptions{})
+ _, err = routeAdminClient.Update(ctx, v1route, v1.UpdateOptions{})
require.NoError(t, err)

// Remove the remaining rules.
@@ -902,8 +892,7 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1

admin := org.Admin
- adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
+ adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
db, err := store.ProvideDBStore(env.Cfg, env.FeatureToggles, env.SQLStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac, bus.ProvideBus(tracing.InitializeTracerForTest()))
@@ -919,7 +908,7 @@ func TestIntegrationProvisioning(t *testing.T) {
createIntegration(t, "email"),
},
},
- }, resource.CreateOptions{})
+ }, v1.CreateOptions{})
require.NoError(t, err)
require.Equal(t, "none", created.GetProvenanceStatus())

@@ -928,23 +917,23 @@ func TestIntegrationProvisioning(t *testing.T) {
UID: *created.Spec.Integrations[0].Uid,
}, admin.Identity.GetOrgID(), "API"))

- got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+ got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})

t.Run("should not let update if provisioned", func(t *testing.T) {
- got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+ got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)
updated := got.Copy().(*v0alpha1.Receiver)
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "email"))

- _, err = adminClient.Update(ctx, updated, resource.UpdateOptions{})
+ _, err = adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})

t.Run("should not let delete if provisioned", func(t *testing.T) {
- err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
+ err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -955,10 +944,7 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)

- adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
- oldClient := test_common.NewReceiverClient(t, helper.Org1.Admin) // TODO replace with regular client once Delete works
-
+ adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
receiver := v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -969,22 +955,21 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
},
}

- created, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
+ created, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, created)
require.NotEmpty(t, created.ResourceVersion)

- t.Run("should conflict if version does not match", func(t *testing.T) {
+ t.Run("should forbid if version does not match", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
- _, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
- ResourceVersion: "test",
- })
+ updated.ResourceVersion = "test"
+ _, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "email"))
- actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
+ actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
for i, integration := range actualUpdated.Spec.Integrations {
updated.Spec.Integrations[i].Uid = integration.Uid
@@ -996,25 +981,25 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.ResourceVersion = ""
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "webhook"))
- _, err := oldClient.Update(ctx, updated, v1.UpdateOptions{})
+ _, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err) // TODO Change that? K8s returns 400 instead.
})
t.Run("should fail to delete if version does not match", func(t *testing.T) {
- actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+ actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)

- err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+ err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer("something"),
},
})
- require.Truef(t, errors.IsConflict(err), "should get conflict error but got %s", err)
+ require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
})
t.Run("should succeed if version matches", func(t *testing.T) {
- actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+ actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
require.NoError(t, err)

- err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+ err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -1022,10 +1007,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
require.NoError(t, err)
})
t.Run("should succeed if version is empty", func(t *testing.T) {
- actual, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
+ actual, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
require.NoError(t, err)

- err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
+ err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -1040,8 +1025,7 @@ func TestIntegrationPatch(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)

- adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
+ adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
receiver := v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -1056,40 +1040,40 @@ func TestIntegrationPatch(t *testing.T) {
},
}

- current, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
+ current, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, current)

t.Run("should patch with json patch", func(t *testing.T) {
- current, err := adminClient.Get(ctx, current.GetStaticMetadata().Identifier())
+ current, err := adminClient.Get(ctx, current.Name, v1.GetOptions{})
require.NoError(t, err)

index := slices.IndexFunc(current.Spec.Integrations, func(t v0alpha1.ReceiverIntegration) bool {
return t.Type == "webhook"
})

- patch := []resource.PatchOperation{
+ patch := []map[string]any{
{
- Operation: "remove",
- Path: fmt.Sprintf("/spec/integrations/%d/settings/username", index),
+ "op": "remove",
+ "path": fmt.Sprintf("/spec/integrations/%d/settings/username", index),
},
{
- Operation: "remove",
- Path: fmt.Sprintf("/spec/integrations/%d/secureFields/password", index),
+ "op": "remove",
+ "path": fmt.Sprintf("/spec/integrations/%d/secureFields/password", index),
},
{
- Operation: "replace",
- Path: fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
- Value: "bearer",
+ "op": "replace",
+ "path": fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
+ "value": "bearer",
},
{
- Operation: "add",
- Path: fmt.Sprintf("/spec/integrations/%d/settings/authorization_credentials", index),
- Value: "authz-token",
+ "op": "add",
+ "path": fmt.Sprintf("/spec/integrations/%d/settings/authorization_credentials", index),
+ "value": "authz-token",
},
{
- Operation: "remove",
- Path: fmt.Sprintf("/spec/integrations/%d/secureFields/authorization_credentials", index),
+ "op": "remove",
+ "path": fmt.Sprintf("/spec/integrations/%d/secureFields/authorization_credentials", index),
},
}

@@ -1100,7 +1084,10 @@ func TestIntegrationPatch(t *testing.T) {
delete(expected.SecureFields, "password")
expected.SecureFields["authorization_credentials"] = true

- result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
+ patchData, err := json.Marshal(patch)
+ require.NoError(t, err)
+
+ result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
require.NoError(t, err)

require.EqualValues(t, expected, result.Spec.Integrations[index])
@@ -1140,8 +1127,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)

- adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
+ adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
// Prepare environment and create notification policy and rule that use time receiver
alertmanagerRaw, err := testData.ReadFile(path.Join("test-data", "notification-settings.json"))
require.NoError(t, err)
@@ -1160,7 +1146,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
_, status, data := legacyCli.PostRulesGroupWithStatus(t, folderUID, &ruleGroup, false)
require.Equalf(t, http.StatusAccepted, status, "Failed to post Rule: %s", data)

- receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+ receivers, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 2)
idx := slices.IndexFunc(receivers.Items, func(interval v0alpha1.Receiver) bool {
@@ -1178,7 +1164,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
expectedTitle := renamed.Spec.Title + "-new"
renamed.Spec.Title = expectedTitle

- actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
+ actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.NoError(t, err)

updatedRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
@@ -1192,7 +1178,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
assert.Equalf(t, expectedTitle, route.Receiver, "time receiver in routes should have been renamed but it did not")
}

- actual, err = adminClient.Get(ctx, actual.GetStaticMetadata().Identifier())
+ actual, err = adminClient.Get(ctx, actual.Name, v1.GetOptions{})
require.NoError(t, err)

receiver = *actual
@@ -1208,20 +1194,20 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, &currentRoute, orgID))
})
- actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
+ actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})

t.Run("provisioned rules", func(t *testing.T) {
ruleUid := currentRuleGroup.Rules[0].GrafanaManagedAlert.UID
- rule := &ngmodels.AlertRule{UID: ruleUid}
- require.NoError(t, db.SetProvenance(ctx, rule, orgID, "API"))
+ resource := &ngmodels.AlertRule{UID: ruleUid}
+ require.NoError(t, db.SetProvenance(ctx, resource, orgID, "API"))
t.Cleanup(func() {
- require.NoError(t, db.DeleteProvenance(ctx, rule, orgID))
+ require.NoError(t, db.DeleteProvenance(ctx, resource, orgID))
})

- actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
+ actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -1230,7 +1216,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {

t.Run("Delete", func(t *testing.T) {
t.Run("should fail to delete if receiver is used in rule and routes", func(t *testing.T) {
- err := adminClient.Delete(ctx, receiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
+ err := adminClient.Delete(ctx, receiver.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})

@@ -1239,7 +1225,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
route.Routes[0].Receiver = ""
legacyCli.UpdateRoute(t, route, true)

- err = adminClient.Delete(ctx, receiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
+ err = adminClient.Delete(ctx, receiver.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
})
@@ -1251,11 +1237,10 @@ func TestIntegrationCRUD(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)

- adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
+ adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
var defaultReceiver *v0alpha1.Receiver
t.Run("should list the default receiver", func(t *testing.T) {
- items, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+ items, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
assert.Len(t, items.Items, 1)
defaultReceiver = &items.Items[0]
@@ -1264,7 +1249,7 @@ func TestIntegrationCRUD(t *testing.T) {
assert.NotEmpty(t, defaultReceiver.Name)
assert.NotEmpty(t, defaultReceiver.ResourceVersion)

- defaultReceiver, err = adminClient.Get(ctx, defaultReceiver.GetStaticMetadata().Identifier())
+ defaultReceiver, err = adminClient.Get(ctx, defaultReceiver.Name, v1.GetOptions{})
require.NoError(t, err)
assert.NotEmpty(t, defaultReceiver.UID)
assert.NotEmpty(t, defaultReceiver.Name)
@@ -1277,7 +1262,7 @@ func TestIntegrationCRUD(t *testing.T) {
newDefault := defaultReceiver.Copy().(*v0alpha1.Receiver)
newDefault.Spec.Integrations = append(newDefault.Spec.Integrations, createIntegration(t, line.Type))

- updatedReceiver, err := adminClient.Update(ctx, newDefault, resource.UpdateOptions{})
+ updatedReceiver, err := adminClient.Update(ctx, newDefault, v1.UpdateOptions{})
require.NoError(t, err)

expected := newDefault.Copy().(*v0alpha1.Receiver)
@@ -1305,12 +1290,12 @@ func TestIntegrationCRUD(t *testing.T) {
Integrations: []v0alpha1.ReceiverIntegration{},
},
}
- _, err := adminClient.Create(ctx, newReceiver, resource.CreateOptions{})
+ _, err := adminClient.Create(ctx, newReceiver, v1.CreateOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})

t.Run("should not let delete default receiver", func(t *testing.T) {
- err := adminClient.Delete(ctx, defaultReceiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
+ err := adminClient.Delete(ctx, defaultReceiver.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})

@@ -1332,7 +1317,7 @@ func TestIntegrationCRUD(t *testing.T) {
Title: "all-receivers",
Integrations: integrations,
},
- }, resource.CreateOptions{})
+ }, v1.CreateOptions{})
require.NoError(t, err)
require.Len(t, receiver.Spec.Integrations, len(integrations))

@@ -1357,7 +1342,7 @@ func TestIntegrationCRUD(t *testing.T) {
})

t.Run("should be able read what it is created", func(t *testing.T) {
- get, err := adminClient.Get(ctx, receiver.GetStaticMetadata().Identifier())
+ get, err := adminClient.Get(ctx, receiver.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, receiver, get)
t.Run("should return secrets in secureFields but not settings", func(t *testing.T) {
@@ -1409,7 +1394,7 @@ func TestIntegrationCRUD(t *testing.T) {
Title: fmt.Sprintf("invalid-%s", key),
Integrations: []v0alpha1.ReceiverIntegration{integration},
},
- }, resource.CreateOptions{})
+ }, v1.CreateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", receiver)
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest, got: %s", err)
})
@@ -1423,8 +1408,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)

- adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
+ adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
recv1 := &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -1436,7 +1420,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
},
},
}
- recv1, err = adminClient.Create(ctx, recv1, resource.CreateOptions{})
+ recv1, err := adminClient.Create(ctx, recv1, v1.CreateOptions{})
require.NoError(t, err)

recv2 := &v0alpha1.Receiver{
@@ -1450,7 +1434,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
},
},
}
- recv2, err = adminClient.Create(ctx, recv2, resource.CreateOptions{})
+ recv2, err = adminClient.Create(ctx, recv2, v1.CreateOptions{})
require.NoError(t, err)

env := helper.GetEnv()
@@ -1460,20 +1444,18 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.EmbeddedContactPoint{
UID: *recv2.Spec.Integrations[0].Uid,
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
- recv2, err = adminClient.Get(ctx, recv2.GetStaticMetadata().Identifier())
+ recv2, err = adminClient.Get(ctx, recv2.Name, v1.GetOptions{})

require.NoError(t, err)

- receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+ receivers, err := adminClient.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 3) // Includes default.

t.Run("should filter by receiver name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
- list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
- FieldSelectors: []string{
- "spec.title=" + recv1.Spec.Title,
- },
+ list, err := adminClient.List(ctx, v1.ListOptions{
+ FieldSelector: "spec.title=" + recv1.Spec.Title,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1481,10 +1463,8 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
})

t.Run("should filter by metadata name", func(t *testing.T) {
- list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
- FieldSelectors: []string{
- "metadata.name=" + recv2.Name,
- },
+ list, err := adminClient.List(ctx, v1.ListOptions{
+ FieldSelector: "metadata.name=" + recv2.Name,
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1493,10 +1473,8 @@ func TestIntegrationReceiverListSelector(t *testing.T) {

t.Run("should filter by multiple filters", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
- list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
- FieldSelectors: []string{
- fmt.Sprintf("metadata.name=%s,spec.title=%s", recv2.Name, recv2.Spec.Title),
- },
+ list, err := adminClient.List(ctx, v1.ListOptions{
+ FieldSelector: fmt.Sprintf("metadata.name=%s,spec.title=%s", recv2.Name, recv2.Spec.Title),
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1504,10 +1482,8 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
})

t.Run("should be empty when filter does not match", func(t *testing.T) {
- list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
- FieldSelectors: []string{
- fmt.Sprintf("metadata.name=%s", "unknown"),
- },
+ list, err := adminClient.List(ctx, v1.ListOptions{
+ FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
})
require.NoError(t, err)
require.Empty(t, list.Items)
@@ -1521,8 +1497,7 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)

helper := getTestHelper(t)

- receiverClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
+ receiverClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
for _, receiver := range amConfig.AlertmanagerConfig.Receivers {
if receiver.Name == "grafana-default-email" {
continue
@@ -1548,7 +1523,7 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
})
}

- created, err := receiverClient.Create(ctx, &toCreate, resource.CreateOptions{})
+ created, err := receiverClient.Create(ctx, &toCreate, v1.CreateOptions{})
require.NoError(t, err)

for i, integration := range created.Spec.Integrations {
@@ -1558,11 +1533,10 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)

nsMapper := func(_ int64) string { return "default" }

- routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
- require.NoError(t, err)
+ routeClient := test_common.NewRoutingTreeClient(t, helper.Org1.Admin)
v1route, err := routingtree.ConvertToK8sResource(helper.Org1.AdminServiceAccount.OrgId, *amConfig.AlertmanagerConfig.Route, "", nsMapper)
require.NoError(t, err)
- _, err = routeClient.Update(ctx, v1route, resource.UpdateOptions{})
+ _, err = routeClient.Update(ctx, v1route, v1.UpdateOptions{})
require.NoError(t, err)
}
@@ -1,14 +1,10 @@
{
"kind": "ReceiverList",
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"metadata": {},
"items": [
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
- "name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWls",
- "namespace": "default",
- "uid": "zyXFk301pvwNz4HRPrTMKPMFO2934cPB7H1ZXmyM1TUX",
- "resourceVersion": "a82b34036bdabbc4",
"annotations": {
"grafana.com/access/canAdmin": "true",
"grafana.com/access/canDelete": "true",
@@ -19,49 +15,53 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "none"
- }
+ },
+ "name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWls",
+ "namespace": "default",
+ "resourceVersion": "a82b34036bdabbc4",
+ "uid": "zyXFk301pvwNz4HRPrTMKPMFO2934cPB7H1ZXmyM1TUX"
},
"spec": {
- "title": "grafana-default-email",
"integrations": [
{
- "uid": "",
- "type": "email",
- "version": "v1",
"disableResolveMessage": false,
"settings": {
"addresses": "\u003cexample@email.com\u003e"
- }
+ },
+ "type": "email",
+ "uid": "",
+ "version": "v1"
}
- ]
+ ],
+ "title": "grafana-default-email"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
+ "annotations": {
+ "grafana.com/access/canModifyProtected": "true",
+ "grafana.com/access/canReadSecrets": "true",
+ "grafana.com/canUse": "false",
+ "grafana.com/inUse/routes": "0",
+ "grafana.com/inUse/rules": "0",
+ "grafana.com/provenance": "converted_prometheus"
+ },
"name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWlsdGVzdC1jcmVhdGUtZ2V0LWNvbmZpZw",
"namespace": "default",
- "uid": "JzW6DIlcxj4sRN8A2ULcwTXAmm0Vs0Z68aEBqXSvxK0X",
"resourceVersion": "b2823b50ffa1eff6",
- "annotations": {
- "grafana.com/access/canModifyProtected": "true",
- "grafana.com/access/canReadSecrets": "true",
- "grafana.com/canUse": "false",
- "grafana.com/inUse/routes": "0",
- "grafana.com/inUse/rules": "0",
- "grafana.com/provenance": "converted_prometheus"
- }
+ "uid": "JzW6DIlcxj4sRN8A2ULcwTXAmm0Vs0Z68aEBqXSvxK0X"
},
"spec": {
- "title": "grafana-default-emailtest-create-get-config",
- "integrations": []
+ "integrations": [],
+ "title": "grafana-default-emailtest-create-get-config"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
- "name": "ZGlzY29yZA",
- "namespace": "default",
- "uid": "8cH8Ql2S6VhPEVUhwlQEKYWyPbRJS7YKj2lEXdrehH8X",
- "resourceVersion": "06e437697f62ac59",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -69,16 +69,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
- }
+ },
+ "name": "ZGlzY29yZA",
+ "namespace": "default",
+ "resourceVersion": "06e437697f62ac59",
+ "uid": "8cH8Ql2S6VhPEVUhwlQEKYWyPbRJS7YKj2lEXdrehH8X"
},
"spec": {
- "title": "discord",
"integrations": [
{
- "uid": "",
- "type": "discord",
- "version": "v0mimir1",
"disableResolveMessage": false,
+ "secureFields": {
+ "webhook_url": true
+ },
"settings": {
"http_config": {
"enable_http2": true,
@@ -92,19 +95,18 @@
"send_resolved": true,
"title": "{{ template \"discord.default.title\" . }}"
},
- "secureFields": {
- "webhook_url": true
- }
+ "type": "discord",
+ "uid": "",
+ "version": "v0mimir1"
}
- ]
+ ],
+ "title": "discord"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
- "name": "ZW1haWw",
- "namespace": "default",
- "uid": "bhlvlN758xmnwVrHVPX0c5XvFHepenUbOXP0fuE6eUMX",
- "resourceVersion": "9b3ffed277cee189",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -112,16 +114,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
- }
+ },
+ "name": "ZW1haWw",
+ "namespace": "default",
+ "resourceVersion": "9b3ffed277cee189",
+ "uid": "bhlvlN758xmnwVrHVPX0c5XvFHepenUbOXP0fuE6eUMX"
},
"spec": {
- "title": "email",
"integrations": [
{
- "uid": "",
- "type": "email",
- "version": "v0mimir1",
"disableResolveMessage": false,
+ "secureFields": {
+ "auth_password": true
+ },
"settings": {
"auth_username": "alertmanager",
"from": "alertmanager@example.com",
@@ -139,19 +144,18 @@
},
"to": "team@example.com"
},
- "secureFields": {
- "auth_password": true
- }
+ "type": "email",
+ "uid": "",
+ "version": "v0mimir1"
}
- ]
+ ],
+ "title": "email"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "amlyYQ",
"namespace": "default",
"uid": "7Pu4xcRXbvw4XEX279SoqyO8Ibo8cMl0vAJyYTsJ0NEX",
"resourceVersion": "deae9d34f8554205",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -159,16 +163,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "amlyYQ",
"namespace": "default",
"resourceVersion": "deae9d34f8554205",
"uid": "7Pu4xcRXbvw4XEX279SoqyO8Ibo8cMl0vAJyYTsJ0NEX"
},
"spec": {
"title": "jira",
"integrations": [
{
"uid": "",
"type": "jira",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.basic_auth.password": true
},
"settings": {
"api_url": "http://localhost/jira",
"custom_fields": {
@@ -196,19 +203,18 @@
"send_resolved": true,
"summary": "{{ template \"jira.default.summary\" . }}"
},
"secureFields": {
"http_config.basic_auth.password": true
}
"type": "jira",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "jira"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "bXN0ZWFtcw",
"namespace": "default",
"uid": "z7xTMDjrk1HAHXPEx78tQb63LXYA6ivXLOtz2Z09ucIX",
"resourceVersion": "95c8d082d65466a3",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -216,16 +222,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "bXN0ZWFtcw",
"namespace": "default",
"resourceVersion": "95c8d082d65466a3",
"uid": "z7xTMDjrk1HAHXPEx78tQb63LXYA6ivXLOtz2Z09ucIX"
},
"spec": {
"title": "msteams",
"integrations": [
{
"uid": "",
"type": "teams",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"webhook_url": true
},
"settings": {
"http_config": {
"enable_http2": true,
@@ -240,19 +249,18 @@
"text": "{{ template \"msteams.default.text\" . }}",
"title": "{{ template \"msteams.default.title\" . }}"
},
"secureFields": {
"webhook_url": true
}
"type": "teams",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "msteams"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "b3BzZ2VuaWU",
"namespace": "default",
"uid": "XmkZ214Dj030hvynYiwNLq8i6uRCjUYXMXjE5m19OKAX",
"resourceVersion": "8ee2957ba150ba16",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -260,16 +268,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "b3BzZ2VuaWU",
"namespace": "default",
"resourceVersion": "8ee2957ba150ba16",
"uid": "XmkZ214Dj030hvynYiwNLq8i6uRCjUYXMXjE5m19OKAX"
},
"spec": {
"title": "opsgenie",
"integrations": [
{
"uid": "",
"type": "opsgenie",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_key": true
},
"settings": {
"actions": "test actions",
"api_url": "http://localhost/opsgenie/",
@@ -300,19 +311,18 @@
"tags": "test-tags",
"update_alerts": true
},
"secureFields": {
"api_key": true
}
"type": "opsgenie",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "opsgenie"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "cGFnZXJkdXR5",
"namespace": "default",
"uid": "QNitkUCkwzrIc7WVCCJGGDyvXLyo9csSUVqfyStyctQX",
"resourceVersion": "fe673d5dcd67ccf0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -320,16 +330,20 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "cGFnZXJkdXR5",
"namespace": "default",
"resourceVersion": "fe673d5dcd67ccf0",
"uid": "QNitkUCkwzrIc7WVCCJGGDyvXLyo9csSUVqfyStyctQX"
},
"spec": {
"title": "pagerduty",
"integrations": [
{
"uid": "",
"type": "pagerduty",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"routing_key": true,
"service_key": true
},
"settings": {
"class": "test class",
"client": "Alertmanager",
@@ -369,20 +383,18 @@
"source": "test source",
"url": "http://localhost/pagerduty"
},
"secureFields": {
"routing_key": true,
"service_key": true
}
"type": "pagerduty",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "pagerduty"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "cHVzaG92ZXI",
"namespace": "default",
"uid": "t2TJSktI6vyGfdbLOKmxH4eBqgcIGsAuW8Qm9m0HRycX",
"resourceVersion": "6ae076725ab463e0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -390,16 +402,21 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "cHVzaG92ZXI",
"namespace": "default",
"resourceVersion": "6ae076725ab463e0",
"uid": "t2TJSktI6vyGfdbLOKmxH4eBqgcIGsAuW8Qm9m0HRycX"
},
"spec": {
"title": "pushover",
"integrations": [
{
"uid": "",
"type": "pushover",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.authorization.credentials": true,
"token": true,
"user_key": true
},
"settings": {
"expire": "1h0m0s",
"http_config": {
@@ -420,21 +437,18 @@
"title": "{{ template \"pushover.default.title\" . }}",
"url": "http://localhost/pushover"
},
"secureFields": {
"http_config.authorization.credentials": true,
"token": true,
"user_key": true
}
"type": "pushover",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "pushover"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "c2xhY2s",
"namespace": "default",
"uid": "xSB0hnoc9j1CnLCHR3VgeVGXdVXILM0p2dM64bbHN9oX",
"resourceVersion": "ec0e343029ff5d8b",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -442,16 +456,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "c2xhY2s",
"namespace": "default",
"resourceVersion": "ec0e343029ff5d8b",
"uid": "xSB0hnoc9j1CnLCHR3VgeVGXdVXILM0p2dM64bbHN9oX"
},
"spec": {
"title": "slack",
"integrations": [
{
"uid": "",
"type": "slack",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_url": true
},
"settings": {
"actions": [
{
@@ -505,19 +522,18 @@
"title_link": "http://localhost",
"username": "Alerting Team"
},
"secureFields": {
"api_url": true
}
"type": "slack",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "slack"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "c25z",
"namespace": "default",
"uid": "vSP8NtFr23hnqZqLxRgzUKfr1wOemOvZm1S6MYkfRI4X",
"resourceVersion": "77d734ad4c196d36",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -525,16 +541,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "c25z",
"namespace": "default",
"resourceVersion": "77d734ad4c196d36",
"uid": "vSP8NtFr23hnqZqLxRgzUKfr1wOemOvZm1S6MYkfRI4X"
},
"spec": {
"title": "sns",
"integrations": [
{
"uid": "",
"type": "sns",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"sigv4.SecretKey": true
},
"settings": {
"attributes": {
"key1": "value1"
@@ -558,19 +577,18 @@
"subject": "{{ template \"sns.default.subject\" . }}",
"topic_arn": "arn:aws:sns:us-east-1:123456789012:alerts"
},
"secureFields": {
"sigv4.SecretKey": true
}
"type": "sns",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "sns"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "dGVsZWdyYW0",
"namespace": "default",
"uid": "XLWjtmYcjP5PiqBCwZXX3YKHV1G8niRtpCakIpcHqoYX",
"resourceVersion": "d9850878a33e302e",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -578,16 +596,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "dGVsZWdyYW0",
"namespace": "default",
"resourceVersion": "d9850878a33e302e",
"uid": "XLWjtmYcjP5PiqBCwZXX3YKHV1G8niRtpCakIpcHqoYX"
},
"spec": {
"title": "telegram",
"integrations": [
{
"uid": "",
"type": "telegram",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"token": true
},
"settings": {
"api_url": "http://localhost/telegram-default",
"chat": -1001234567890,
@@ -603,19 +624,18 @@
"parse_mode": "MarkdownV2",
"send_resolved": true
},
"secureFields": {
"token": true
}
"type": "telegram",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "telegram"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "dmljdG9yb3Bz",
"namespace": "default",
"uid": "EWiwQ6TIW0GpEo46WusW7Nvg0HuD4QAbHf0JZ2OSOhEX",
"resourceVersion": "1e6886531440afc2",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -623,16 +643,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "dmljdG9yb3Bz",
"namespace": "default",
"resourceVersion": "1e6886531440afc2",
"uid": "EWiwQ6TIW0GpEo46WusW7Nvg0HuD4QAbHf0JZ2OSOhEX"
},
"spec": {
"title": "victorops",
"integrations": [
{
"uid": "",
"type": "victorops",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_key": true
},
"settings": {
"api_url": "http://localhost/victorops-default/",
"entity_display_name": "{{ template \"victorops.default.entity_display_name\" . }}",
@@ -651,19 +674,18 @@
"send_resolved": true,
"state_message": "{{ template \"victorops.default.state_message\" . }}"
},
"secureFields": {
"api_key": true
}
"type": "victorops",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "victorops"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2ViZXg",
"namespace": "default",
"uid": "wDNufI44UXHWq4ERRYenZ7XgXVV3Tjxaokz9IjMRZ54X",
"resourceVersion": "08fc955a08dfe9c0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -671,16 +693,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "d2ViZXg",
"namespace": "default",
"resourceVersion": "08fc955a08dfe9c0",
"uid": "wDNufI44UXHWq4ERRYenZ7XgXVV3Tjxaokz9IjMRZ54X"
},
"spec": {
"title": "webex",
"integrations": [
{
"uid": "",
"type": "webex",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.authorization.credentials": true
},
"settings": {
"api_url": "http://localhost/webes-default",
"http_config": {
@@ -698,19 +723,18 @@
"room_id": "Y2lzY29zcGFyazovL3VzL1JPT00v12345678",
"send_resolved": true
},
"secureFields": {
"http_config.authorization.credentials": true
}
"type": "webex",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "webex"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2ViaG9vaw",
"namespace": "default",
"uid": "aKzigXATPp6HOh20yTrlTcuF2Y9IrPHridGIcWrJygsX",
"resourceVersion": "494392f899a7b410",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -718,16 +742,19 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "d2ViaG9vaw",
"namespace": "default",
"resourceVersion": "494392f899a7b410",
"uid": "aKzigXATPp6HOh20yTrlTcuF2Y9IrPHridGIcWrJygsX"
},
"spec": {
"title": "webhook",
"integrations": [
{
"uid": "",
"type": "webhook",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"url": true
},
"settings": {
"http_config": {
"enable_http2": true,
@@ -742,19 +769,18 @@
"timeout": "0s",
"url_file": ""
},
"secureFields": {
"url": true
}
"type": "webhook",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "webhook"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2VjaGF0",
"namespace": "default",
"uid": "jkXCvNrNVw7XX5nmYFyrGiA4ckAvJ282u2scW8KZq7IX",
"resourceVersion": "135913515cbc156b",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -762,16 +788,19 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"name": "d2VjaGF0",
"namespace": "default",
"resourceVersion": "135913515cbc156b",
"uid": "jkXCvNrNVw7XX5nmYFyrGiA4ckAvJ282u2scW8KZq7IX"
},
"spec": {
"title": "wechat",
"integrations": [
{
"uid": "",
"type": "wechat",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_secret": true
},
"settings": {
"agent_id": "1000002",
"api_url": "http://localhost/wechat/",
@@ -791,12 +820,15 @@
"to_tag": "tag1",
"to_user": "user1"
},
"secureFields": {
"api_secret": true
}
"type": "wechat",
"uid": "",
"version": "v0mimir1"
}
]
],
"title": "wechat"
}
}
]
}
],
"kind": "ReceiverList",
"metadata": {}
}

@@ -8,7 +8,6 @@ import (
"testing"
"time"

"github.com/grafana/grafana-app-sdk/resource"
"github.com/prometheus/alertmanager/config"
"github.com/prometheus/alertmanager/pkg/labels"
"github.com/prometheus/common/model"
@@ -40,11 +39,6 @@ import (
"github.com/grafana/grafana/pkg/util/testutil"
)

var defaultTreeIdentifier = resource.Identifier{
Namespace: apis.DefaultNamespace,
Name: v0alpha1.UserDefinedRoutingTreeName,
}

func TestMain(m *testing.M) {
testsuite.Run(m)
}
@@ -58,8 +52,7 @@ func TestIntegrationNotAllowedMethods(t *testing.T) {

ctx := context.Background()
helper := getTestHelper(t)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)

route := &v0alpha1.RoutingTree{
ObjectMeta: v1.ObjectMeta{
@@ -67,7 +60,11 @@ func TestIntegrationNotAllowedMethods(t *testing.T) {
},
Spec: v0alpha1.RoutingTreeSpec{},
}
_, err = client.Create(ctx, route, resource.CreateOptions{})
_, err := client.Create(ctx, route, v1.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsMethodNotSupported(err), "Expected MethodNotSupported but got %s", err)

err = client.Client.DeleteCollection(ctx, v1.DeleteOptions{}, v1.ListOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsMethodNotSupported(err), "Expected MethodNotSupported but got %s", err)
}
@@ -157,52 +154,50 @@ func TestIntegrationAccessControl(t *testing.T) {
}

admin := org1.Admin
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewRoutingTreeClient(t, admin)

for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
client := common.NewRoutingTreeClient(t, tc.user)

if tc.canRead {
t.Run("should be able to list routing trees", func(t *testing.T) {
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
list, err := client.List(ctx, v1.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
require.Equal(t, v0alpha1.UserDefinedRoutingTreeName, list.Items[0].Name)
})

t.Run("should be able to read routing trees by resource identifier", func(t *testing.T) {
_, err := client.Get(ctx, defaultTreeIdentifier)
_, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)

t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to list routing trees", func(t *testing.T) {
_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
_, err := client.List(ctx, v1.ListOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})

t.Run("should be forbidden to read routing tree by name", func(t *testing.T) {
_, err := client.Get(ctx, defaultTreeIdentifier)
_, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
}

current, err := adminClient.Get(ctx, defaultTreeIdentifier)
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
expected := current.Copy().(*v0alpha1.RoutingTree)
expected.Spec.Routes = []v0alpha1.RoutingTreeRoute{
@@ -222,7 +217,7 @@ func TestIntegrationAccessControl(t *testing.T) {

if tc.canUpdate {
t.Run("should be able to update routing tree", func(t *testing.T) {
updated, err := client.Update(ctx, expected, resource.UpdateOptions{})
updated, err := client.Update(ctx, expected, v1.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))

expected = updated
@@ -230,23 +225,21 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := expected.Copy().(*v0alpha1.RoutingTree)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to update routing tree", func(t *testing.T) {
_, err := client.Update(ctx, expected, resource.UpdateOptions{})
_, err := client.Update(ctx, expected, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := expected.Copy().(*v0alpha1.RoutingTree)
up.Name = "notFound"
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
_, err := client.Update(ctx, up, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
@@ -255,32 +248,32 @@

if tc.canUpdate {
t.Run("should be able to reset routing tree", func(t *testing.T) {
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := client.Delete(ctx, expected.Name, v1.DeleteOptions{})
require.NoError(t, err)

t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to reset routing tree", func(t *testing.T) {
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := client.Delete(ctx, expected.Name, v1.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
}
})

err := adminClient.Delete(ctx, defaultTreeIdentifier, resource.DeleteOptions{})
err := adminClient.Delete(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.DeleteOptions{})
require.NoError(t, err)
}
}
@@ -294,22 +287,21 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1

admin := org.Admin
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewRoutingTreeClient(t, admin)

env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
db, err := store.ProvideDBStore(env.Cfg, env.FeatureToggles, env.SQLStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac, bus.ProvideBus(tracing.InitializeTracerForTest()))
require.NoError(t, err)

current, err := adminClient.Get(ctx, defaultTreeIdentifier)
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, "none", current.GetProvenanceStatus())

t.Run("should provide provenance status", func(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.Route{}, admin.Identity.GetOrgID(), "API"))

got, err := adminClient.Get(ctx, current.GetStaticMetadata().Identifier())
got, err := adminClient.Get(ctx, current.Name, v1.GetOptions{})
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
@@ -327,13 +319,13 @@ func TestIntegrationProvisioning(t *testing.T) {
},
}

_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})

t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, current.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
err := adminClient.Delete(ctx, current.Name, v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -344,37 +336,35 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)

adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
adminClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)

current, err := adminClient.Get(ctx, defaultTreeIdentifier)
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
require.NotEmpty(t, current.ResourceVersion)

t.Run("should forbid if version does not match", func(t *testing.T) {
updated := current.Copy().(*v0alpha1.RoutingTree)
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.Spec.Defaults.GroupBy = append(updated.Spec.Defaults.GroupBy, "data")
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
})
t.Run("should update if version is empty", func(t *testing.T) {
current, err = adminClient.Get(ctx, defaultTreeIdentifier)
current, err = adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.ResourceVersion = ""
updated.Spec.Routes = append(updated.Spec.Routes, v0alpha1.RoutingTreeRoute{Continue: true})

actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, current.ResourceVersion, actualUpdated.ResourceVersion)
@@ -390,22 +380,20 @@ func TestIntegrationDataConsistency(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)

client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)

receiver := "grafana-default-email"
timeInterval := "test-time-interval"
createRoute := func(t *testing.T, route definitions.Route) {
t.Helper()
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
routeClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
v1Route, err := routingtree.ConvertToK8sResource(helper.Org1.Admin.Identity.GetOrgID(), route, "", func(int64) string { return "default" })
require.NoError(t, err)
_, err = routeClient.Update(ctx, v1Route, resource.UpdateOptions{})
_, err = routeClient.Update(ctx, v1Route, v1.UpdateOptions{})
require.NoError(t, err)
}

_, err = common.NewTimeIntervalClient(t, helper.Org1.Admin).Create(ctx, &v0alpha1.TimeInterval{
_, err := common.NewTimeIntervalClient(t, helper.Org1.Admin).Create(ctx, &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
},
@@ -447,7 +435,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
},
}
createRoute(t, route)
tree, err := client.Get(ctx, defaultTreeIdentifier)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
expected := []v0alpha1.RoutingTreeMatcher{
{
@@ -515,9 +503,9 @@ func TestIntegrationDataConsistency(t *testing.T) {
ensureMatcher(t, labels.MatchNotEqual, "matchers", "v"),
}

tree, err := client.Get(ctx, defaultTreeIdentifier)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
require.NoError(t, err)
_, err = client.Update(ctx, tree, resource.UpdateOptions{})
|
||||
_, err = client.Update(ctx, tree, v1.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
cfg, _, _ = legacyCli.GetAlertmanagerConfigWithStatus(t)
|
||||
@@ -554,7 +542,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
|
||||
createRoute(t, route)
|
||||
|
||||
t.Run("correctly reads all fields", func(t *testing.T) {
|
||||
tree, err := client.Get(ctx, defaultTreeIdentifier)
|
||||
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, v0alpha1.RoutingTreeRouteDefaults{
|
||||
Receiver: receiver,
|
||||
@@ -601,10 +589,10 @@ func TestIntegrationDataConsistency(t *testing.T) {
|
||||
t.Run("correctly save all fields", func(t *testing.T) {
|
||||
before, status, body := legacyCli.GetAlertmanagerConfigWithStatus(t)
|
||||
require.Equalf(t, http.StatusOK, status, body)
|
||||
tree, err := client.Get(ctx, defaultTreeIdentifier)
|
||||
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
|
||||
tree.Spec.Defaults.GroupBy = []string{"test-123", "test-456", "test-789"}
|
||||
require.NoError(t, err)
|
||||
_, err = client.Update(ctx, tree, resource.UpdateOptions{})
|
||||
_, err = client.Update(ctx, tree, v1.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
before.AlertmanagerConfig.Route.GroupByStr = []string{"test-123", "test-456", "test-789"}
|
||||
@@ -652,7 +640,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
|
||||
}
|
||||
|
||||
createRoute(t, route)
|
||||
tree, err := client.Get(ctx, defaultTreeIdentifier)
|
||||
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "foo🙂", tree.Spec.Routes[0].GroupBy[0])
|
||||
expected := []v0alpha1.RoutingTreeMatcher{
|
||||
@@ -678,8 +666,7 @@ func TestIntegrationExtraConfigsConflicts(t *testing.T) {
|
||||
cliCfg := helper.Org1.Admin.NewRestConfig()
|
||||
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
|
||||
|
||||
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
|
||||
|
||||
// Now upload a new extra config
|
||||
testAlertmanagerConfigYAML := `
|
||||
@@ -704,7 +691,7 @@ receivers:
|
||||
}, headers)
|
||||
require.Equal(t, "success", response.Status)
|
||||
|
||||
current, err := client.Get(ctx, defaultTreeIdentifier)
|
||||
current, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
|
||||
require.NoError(t, err)
|
||||
updated := current.Copy().(*v0alpha1.RoutingTree)
|
||||
updated.Spec.Routes = append(updated.Spec.Routes, v0alpha1.RoutingTreeRoute{
|
||||
@@ -717,7 +704,7 @@ receivers:
|
||||
},
|
||||
})
|
||||
|
||||
_, err = client.Update(ctx, updated, resource.UpdateOptions{})
|
||||
_, err = client.Update(ctx, updated, v1.UpdateOptions{})
|
||||
require.Error(t, err)
|
||||
require.Truef(t, errors.IsBadRequest(err), "Should get BadRequest error but got: %s", err)
|
||||
|
||||
@@ -725,6 +712,6 @@ receivers:
|
||||
legacyCli.ConvertPrometheusDeleteAlertmanagerConfig(t, headers)
|
||||
|
||||
// and try again
|
||||
_, err = client.Update(ctx, updated, resource.UpdateOptions{})
|
||||
_, err = client.Update(ctx, updated, v1.UpdateOptions{})
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
@@ -6,7 +6,6 @@ import (
 	"path"
 	"testing"

-	"github.com/grafana/grafana-app-sdk/resource"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"go.yaml.in/yaml/v3"
@@ -19,6 +18,7 @@ import (
 	"github.com/grafana/grafana/pkg/services/ngalert/models"
 	"github.com/grafana/grafana/pkg/tests/api/alerting"
 	"github.com/grafana/grafana/pkg/tests/apis"
+	"github.com/grafana/grafana/pkg/tests/apis/alerting/notifications/common"
 	"github.com/grafana/grafana/pkg/tests/testinfra"
 	"github.com/grafana/grafana/pkg/util/testutil"
 )
@@ -35,8 +35,7 @@ func TestIntegrationImportedTemplates(t *testing.T) {
 		},
 	})

-	client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	client := common.NewTemplateGroupClient(t, helper.Org1.Admin)

 	cliCfg := helper.Org1.Admin.NewRestConfig()
 	alertingApi := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
@@ -58,7 +57,7 @@ func TestIntegrationImportedTemplates(t *testing.T) {
 	response := alertingApi.ConvertPrometheusPostAlertmanagerConfig(t, amConfig, headers)
 	require.Equal(t, "success", response.Status)

-	templates, err := client.List(context.Background(), apis.DefaultNamespace, resource.ListOptions{})
+	templates, err := client.List(context.Background(), metav1.ListOptions{})

 	require.NoError(t, err)
 	require.Len(t, templates.Items, 3)
@@ -91,12 +90,12 @@ func TestIntegrationImportedTemplates(t *testing.T) {
 	t.Run("should not be able to update", func(t *testing.T) {
 		tpl := templates.Items[1]
 		tpl.Spec.Content = "new content"
-		_, err := client.Update(context.Background(), &tpl, resource.UpdateOptions{})
+		_, err := client.Update(context.Background(), &tpl, metav1.UpdateOptions{})
 		require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 	})

 	t.Run("should not be able to delete", func(t *testing.T) {
-		err := client.Delete(context.Background(), templates.Items[1].GetStaticMetadata().Identifier(), resource.DeleteOptions{})
+		err := client.Delete(context.Background(), templates.Items[1].Name, metav1.DeleteOptions{})
 		require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 	})

@@ -109,14 +108,14 @@ func TestIntegrationImportedTemplates(t *testing.T) {
 		}
 		tpl.Spec.Kind = v0alpha1.TemplateGroupTemplateKindGrafana

-		created, err := client.Create(context.Background(), &tpl, resource.CreateOptions{})
+		created, err := client.Create(context.Background(), &tpl, metav1.CreateOptions{})
 		require.NoError(t, err)

 		assert.NotEqual(t, templates.Items[1].Name, created.Name)
 	})

 	t.Run("sort by kind and then name", func(t *testing.T) {
-		templates, err := client.List(context.Background(), apis.DefaultNamespace, resource.ListOptions{})
+		templates, err := client.List(context.Background(), metav1.ListOptions{})

 		require.NoError(t, err)
 		require.Len(t, templates.Items, 4)
@@ -7,7 +7,6 @@ import (
 	"testing"

 	"github.com/grafana/alerting/templates"
-	"github.com/grafana/grafana-app-sdk/resource"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"k8s.io/apimachinery/pkg/api/errors"
@@ -46,8 +45,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {

 	ctx := context.Background()
 	helper := getTestHelper(t)
-	client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	client := common.NewTemplateGroupClient(t, helper.Org1.Admin)

 	newTemplate := &v0alpha1.TemplateGroup{
 		ObjectMeta: v1.ObjectMeta{
@@ -63,23 +61,23 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 	t.Run("create should fail if object name is specified", func(t *testing.T) {
 		template := newTemplate.Copy().(*v0alpha1.TemplateGroup)
 		template.Name = "new-templateGroup"
-		_, err := client.Create(ctx, template, resource.CreateOptions{})
+		_, err := client.Create(ctx, template, v1.CreateOptions{})
 		assert.Error(t, err)
 		require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
 	})

-	var resourceID resource.Identifier
+	var resourceID string
 	t.Run("create should succeed and provide resource name", func(t *testing.T) {
-		actual, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
+		actual, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
 		require.NoError(t, err)
 		require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
 		require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
-		resourceID = actual.GetStaticMetadata().Identifier()
+		resourceID = actual.Name
 	})

 	var existingTemplateGroup *v0alpha1.TemplateGroup
 	t.Run("resource should be available by the identifier", func(t *testing.T) {
-		actual, err := client.Get(ctx, resourceID)
+		actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
 		require.NoError(t, err)
 		require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
 		require.Equal(t, newTemplate.Spec, actual.Spec)
@@ -92,12 +90,12 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 		}
 		updated := existingTemplateGroup.Copy().(*v0alpha1.TemplateGroup)
 		updated.Spec.Title = "another-templateGroup"
-		actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
+		actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
 		require.NoError(t, err)
 		require.Equal(t, updated.Spec, actual.Spec)
 		require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")

-		resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
+		resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
 		require.NoError(t, err)
 		require.Equal(t, actual, resource)

@@ -106,7 +104,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {

 	var defaultTemplateGroup *v0alpha1.TemplateGroup
 	t.Run("default template should be available by the identifier", func(t *testing.T) {
-		actual, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: templates.DefaultTemplateName})
+		actual, err := client.Get(ctx, templates.DefaultTemplateName, v1.GetOptions{})
 		require.NoError(t, err)
 		require.NotEmptyf(t, actual.Name, "Resource name should not be empty")

@@ -124,7 +122,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 	t.Run("create with reserved default title should work", func(t *testing.T) {
 		template := newTemplate.Copy().(*v0alpha1.TemplateGroup)
 		template.Spec.Title = defaultTemplateGroup.Spec.Title
-		actual, err := client.Create(ctx, template, resource.CreateOptions{})
+		actual, err := client.Create(ctx, template, v1.CreateOptions{})
 		require.NoError(t, err)
 		require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
 		require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
@@ -132,7 +130,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 	})

 	t.Run("default template should not be available by calculated UID", func(t *testing.T) {
-		actual, err := client.Get(ctx, newTemplateWithOverlappingName.GetStaticMetadata().Identifier())
+		actual, err := client.Get(ctx, newTemplateWithOverlappingName.Name, v1.GetOptions{})
 		require.NoError(t, err)
 		require.NotEmptyf(t, actual.Name, "Resource name should not be empty")

@@ -217,13 +215,11 @@ func TestIntegrationAccessControl(t *testing.T) {
 		},
 	}

-	adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	adminClient := common.NewTemplateGroupClient(t, org1.Admin)

 	for _, tc := range testCases {
 		t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
-			client, err := v0alpha1.NewTemplateGroupClientFromGenerator(tc.user.GetClientRegistry())
-			require.NoError(t, err)
+			client := common.NewTemplateGroupClient(t, tc.user)

 			var expected = &v0alpha1.TemplateGroup{
 				ObjectMeta: v1.ObjectMeta{
@@ -241,12 +237,12 @@ func TestIntegrationAccessControl(t *testing.T) {

 			if tc.canCreate {
 				t.Run("should be able to create template group", func(t *testing.T) {
-					actual, err := client.Create(ctx, expected, resource.CreateOptions{})
+					actual, err := client.Create(ctx, expected, v1.CreateOptions{})
 					require.NoErrorf(t, err, "Payload %s", string(d))
 					require.Equal(t, expected.Spec, actual.Spec)

 					t.Run("should fail if already exists", func(t *testing.T) {
-						_, err := client.Create(ctx, actual, resource.CreateOptions{})
+						_, err := client.Create(ctx, actual, v1.CreateOptions{})
 						require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 					})

@@ -254,45 +250,45 @@ func TestIntegrationAccessControl(t *testing.T) {
 				})
 			} else {
 				t.Run("should be forbidden to create", func(t *testing.T) {
-					_, err := client.Create(ctx, expected, resource.CreateOptions{})
+					_, err := client.Create(ctx, expected, v1.CreateOptions{})
 					require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
 				})

 				// create resource to proceed with other tests
-				expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
+				expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
 				require.NoErrorf(t, err, "Payload %s", string(d))
 				require.NotNil(t, expected)
 			}

 			if tc.canRead {
 				t.Run("should be able to list template groups", func(t *testing.T) {
-					list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+					list, err := client.List(ctx, v1.ListOptions{})
 					require.NoError(t, err)
 					require.Len(t, list.Items, 2) // Includes default template.
 				})

 				t.Run("should be able to read template group by resource identifier", func(t *testing.T) {
-					got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
+					got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
 					require.NoError(t, err)
 					require.Equal(t, expected.Spec, got.Spec)
 					require.Equal(t, expected, got)

 					t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
-						_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
+						_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
 						require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 					})
 				})
 			} else {
 				t.Run("should be forbidden to list template groups", func(t *testing.T) {
-					_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+					_, err := client.List(ctx, v1.ListOptions{})
 					require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 				})

 				t.Run("should be forbidden to read template group by name", func(t *testing.T) {
-					_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
+					_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
 					require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

 					t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
-						_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
+						_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
 						require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 					})
 				})
@@ -306,7 +302,7 @@ func TestIntegrationAccessControl(t *testing.T) {

 			if tc.canUpdate {
 				t.Run("should be able to update template group", func(t *testing.T) {
-					updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
+					updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
 					require.NoErrorf(t, err, "Payload %s", string(d))

 					expected = updated
@@ -314,54 +310,52 @@ func TestIntegrationAccessControl(t *testing.T) {
 					t.Run("should get NotFound if name does not exist", func(t *testing.T) {
 						up := updatedExpected.Copy().(*v0alpha1.TemplateGroup)
 						up.Name = "notFound"
-						_, err := client.Update(ctx, up, resource.UpdateOptions{})
+						_, err := client.Update(ctx, up, v1.UpdateOptions{})
 						require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 					})
 				})
 			} else {
 				t.Run("should be forbidden to update template group", func(t *testing.T) {
-					_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
+					_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
 					require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

 					t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
 						up := updatedExpected.Copy().(*v0alpha1.TemplateGroup)
 						up.Name = "notFound"
-						_, err := client.Update(ctx, up, resource.UpdateOptions{
-							ResourceVersion: up.ResourceVersion,
-						})
+						_, err := client.Update(ctx, up, v1.UpdateOptions{})
 						require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 					})
 				})
 			}

 			deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
-			oldClient := common.NewTemplateGroupClient(t, tc.user) // TODO replace with normal client once delete is fixed

 			if tc.canDelete {
 				t.Run("should be able to delete template group", func(t *testing.T) {
-					err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
+					err := client.Delete(ctx, expected.Name, deleteOptions)
 					require.NoError(t, err)

 					t.Run("should get NotFound if name does not exist", func(t *testing.T) {
-						err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
+						err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
 						require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 					})
 				})
 			} else {
 				t.Run("should be forbidden to delete template group", func(t *testing.T) {
-					err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
+					err := client.Delete(ctx, expected.Name, deleteOptions)
 					require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)

 					t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
-						err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
+						err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
 						require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 					})
 				})
-				require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
+				require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
 			}

 			if tc.canRead {
 				t.Run("should get list with just default template if no template groups", func(t *testing.T) {
-					list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+					list, err := client.List(ctx, v1.ListOptions{})
 					require.NoError(t, err)
 					require.Len(t, list.Items, 1)
 					require.Equal(t, templates.DefaultTemplateName, list.Items[0].Name)
@@ -380,8 +374,7 @@ func TestIntegrationProvisioning(t *testing.T) {
 	org := helper.Org1

 	admin := org.Admin
-	adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(admin.GetClientRegistry())
-	require.NoError(t, err)
+	adminClient := common.NewTemplateGroupClient(t, admin)

 	env := helper.GetEnv()
 	ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -397,7 +390,7 @@ func TestIntegrationProvisioning(t *testing.T) {
 			Content: `{{ define "test" }} test {{ end }}`,
 			Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
 		},
-	}, resource.CreateOptions{})
+	}, v1.CreateOptions{})
 	require.NoError(t, err)
 	require.Equal(t, "none", created.GetProvenanceStatus())

@@ -406,7 +399,7 @@ func TestIntegrationProvisioning(t *testing.T) {
 			Name: created.Spec.Title,
 		}, admin.Identity.GetOrgID(), "API"))

-		got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+		got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
 		require.NoError(t, err)
 		require.Equal(t, "API", got.GetProvenanceStatus())
 	})
@@ -414,12 +407,12 @@ func TestIntegrationProvisioning(t *testing.T) {
 		updated := created.Copy().(*v0alpha1.TemplateGroup)
 		updated.Spec.Content = `{{ define "another-test" }} test {{ end }}`

-		_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
+		_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
 		require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 	})

 	t.Run("should not let delete if provisioned", func(t *testing.T) {
-		err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
+		err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
 		require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 	})
 }
@@ -430,9 +423,8 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 	ctx := context.Background()
 	helper := getTestHelper(t)

-	adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
-	oldClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
+	adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)

 	template := v0alpha1.TemplateGroup{
 		ObjectMeta: v1.ObjectMeta{
 			Namespace: "default",
@@ -444,22 +436,21 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 		},
 	}

-	created, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
+	created, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
 	require.NoError(t, err)
 	require.NotNil(t, created)
 	require.NotEmpty(t, created.ResourceVersion)

 	t.Run("should forbid if version does not match", func(t *testing.T) {
 		updated := created.Copy().(*v0alpha1.TemplateGroup)
-		_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
-			ResourceVersion: "test",
-		})
+		updated.ResourceVersion = "test"
+		_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
 		require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
 	})
 	t.Run("should update if version matches", func(t *testing.T) {
 		updated := created.Copy().(*v0alpha1.TemplateGroup)
 		updated.Spec.Content = `{{ define "test-another" }} test {{ end }}`
-		actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
+		actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
 		require.NoError(t, err)
 		require.EqualValues(t, updated.Spec, actualUpdated.Spec)
 		require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
@@ -469,16 +460,16 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 		updated.ResourceVersion = ""
 		updated.Spec.Content = `{{ define "test-another-2" }} test {{ end }}`

-		actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
+		actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
 		require.NoError(t, err)
 		require.EqualValues(t, updated.Spec, actualUpdated.Spec)
 		require.NotEqual(t, created.ResourceVersion, actualUpdated.ResourceVersion)
 	})
 	t.Run("should fail to delete if version does not match", func(t *testing.T) {
-		actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+		actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
 		require.NoError(t, err)

-		err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
+		err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
 			Preconditions: &v1.Preconditions{
 				ResourceVersion: util.Pointer("something"),
 			},
@@ -486,10 +477,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 		require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
 	})
 	t.Run("should succeed if version matches", func(t *testing.T) {
-		actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+		actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
 		require.NoError(t, err)

-		err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
+		err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
 			Preconditions: &v1.Preconditions{
 				ResourceVersion: util.Pointer(actual.ResourceVersion),
 			},
@@ -497,10 +488,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
 		require.NoError(t, err)
 	})
 	t.Run("should succeed if version is empty", func(t *testing.T) {
-		actual, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
+		actual, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
 		require.NoError(t, err)

-		err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
+		err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
 			Preconditions: &v1.Preconditions{
 				ResourceVersion: util.Pointer(actual.ResourceVersion),
 			},
@@ -515,8 +506,7 @@ func TestIntegrationPatch(t *testing.T) {
 	ctx := context.Background()
 	helper := getTestHelper(t)

-	adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)

 	template := v0alpha1.TemplateGroup{
 		ObjectMeta: v1.ObjectMeta{
@@ -529,10 +519,8 @@ func TestIntegrationPatch(t *testing.T) {
 		},
 	}

-	current, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
+	current, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
 	require.NoError(t, err)
-	oldClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)

 	require.NotNil(t, current)
 	require.NotEmpty(t, current.ResourceVersion)

@@ -543,7 +531,7 @@ func TestIntegrationPatch(t *testing.T) {
 		}
 	}`

-	result, err := oldClient.Patch(ctx, current.GetStaticMetadata().Identifier().Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
+	result, err := adminClient.Patch(ctx, current.Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
 	require.NoError(t, err)
 	require.Equal(t, `{{ define "test-another" }} test {{ end }}`, result.Spec.Content)
 	current = result
@@ -552,15 +540,18 @@ func TestIntegrationPatch(t *testing.T) {
 	t.Run("should patch with json patch", func(t *testing.T) {
 		expected := `{{ define "test-json-patch" }} test {{ end }}`

-		patch := []resource.PatchOperation{
+		patch := []map[string]interface{}{
 			{
-				Operation: "replace",
-				Path: "/spec/content",
-				Value: expected,
+				"op": "replace",
+				"path": "/spec/content",
+				"value": expected,
 			},
 		}

-		result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
+		patchData, err := json.Marshal(patch)
+		require.NoError(t, err)
+
+		result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
 		require.NoError(t, err)
 		expectedSpec := current.Spec
 		expectedSpec.Content = expected
@@ -574,8 +565,7 @@ func TestIntegrationListSelector(t *testing.T) {
 
 	ctx := context.Background()
 	helper := getTestHelper(t)
-	adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
 
 	template1 := &v0alpha1.TemplateGroup{
 		ObjectMeta: v1.ObjectMeta{
@@ -587,7 +577,7 @@ func TestIntegrationListSelector(t *testing.T) {
 			Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
 		},
 	}
-	template1, err = adminClient.Create(ctx, template1, resource.CreateOptions{})
+	template1, err := adminClient.Create(ctx, template1, v1.CreateOptions{})
 	require.NoError(t, err)
 
 	template2 := &v0alpha1.TemplateGroup{
@@ -600,7 +590,7 @@ func TestIntegrationListSelector(t *testing.T) {
 			Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
 		},
 	}
-	template2, err = adminClient.Create(ctx, template2, resource.CreateOptions{})
+	template2, err = adminClient.Create(ctx, template2, v1.CreateOptions{})
 	require.NoError(t, err)
 	env := helper.GetEnv()
 	ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -609,18 +599,18 @@ func TestIntegrationListSelector(t *testing.T) {
 	require.NoError(t, db.SetProvenance(ctx, &definitions.NotificationTemplate{
 		Name: template2.Spec.Title,
 	}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
-	template2, err = adminClient.Get(ctx, template2.GetStaticMetadata().Identifier())
+	template2, err = adminClient.Get(ctx, template2.Name, v1.GetOptions{})
 
 	require.NoError(t, err)
 
-	tmpls, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+	tmpls, err := adminClient.List(ctx, v1.ListOptions{})
 	require.NoError(t, err)
 	require.Len(t, tmpls.Items, 3) // Includes default template.
 
 	t.Run("should filter by template name", func(t *testing.T) {
 		t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{"spec.title=" + template1.Spec.Title},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: "spec.title=" + template1.Spec.Title,
 		})
 		require.NoError(t, err)
 		require.Len(t, list.Items, 1)
@@ -628,8 +618,8 @@ func TestIntegrationListSelector(t *testing.T) {
 	})
 
 	t.Run("should filter by template metadata name", func(t *testing.T) {
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{"metadata.name=" + template2.Name},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: "metadata.name=" + template2.Name,
 		})
 		require.NoError(t, err)
 		require.Len(t, list.Items, 1)
@@ -638,8 +628,8 @@ func TestIntegrationListSelector(t *testing.T) {
 
 	t.Run("should filter by multiple filters", func(t *testing.T) {
 		t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{fmt.Sprintf("metadata.name=%s,spec.title=%s", template2.Name, template2.Spec.Title)},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: fmt.Sprintf("metadata.name=%s,spec.title=%s", template2.Name, template2.Spec.Title),
 		})
 		require.NoError(t, err)
 		require.Len(t, list.Items, 1)
@@ -647,8 +637,8 @@ func TestIntegrationListSelector(t *testing.T) {
 	})
 
 	t.Run("should be empty when filter does not match", func(t *testing.T) {
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", "unknown")},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
 		})
 		require.NoError(t, err)
 		require.Empty(t, list.Items)
@@ -656,17 +646,17 @@ func TestIntegrationListSelector(t *testing.T) {
 
 	t.Run("should filter by default template name", func(t *testing.T) {
 		t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{"spec.title=" + v0alpha1.DefaultTemplateTitle},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: "spec.title=" + v0alpha1.DefaultTemplateTitle,
 		})
 		require.NoError(t, err)
 		require.Len(t, list.Items, 1)
 		require.Equal(t, templates.DefaultTemplateName, list.Items[0].Name)
 
 		// Now just non-default templates
-		list, err = adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{"spec.title!=" + v0alpha1.DefaultTemplateTitle}},
-		)
+		list, err = adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: "spec.title!=" + v0alpha1.DefaultTemplateTitle,
+		})
 		require.NoError(t, err)
 		require.Len(t, list.Items, 2)
 		require.NotEqualf(t, templates.DefaultTemplateName, list.Items[0].Name, "Expected non-default template but got %s", list.Items[0].Name)
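The selector hunks above collapse the old `FieldSelectors` string slice into the single comma-separated `FieldSelector` string that Kubernetes `ListOptions` expects (e.g. `metadata.name=x,spec.title=y`). A minimal sketch of composing such a selector with the standard library; `joinFieldSelectors` is a hypothetical helper, not part of this change:

```go
package main

import (
	"fmt"
	"strings"
)

// joinFieldSelectors joins individual requirements into the single
// comma-separated string Kubernetes ListOptions.FieldSelector expects.
func joinFieldSelectors(reqs ...string) string {
	return strings.Join(reqs, ",")
}

func main() {
	sel := joinFieldSelectors(
		fmt.Sprintf("metadata.name=%s", "template-1"),
		fmt.Sprintf("spec.title=%s", "My template"),
	)
	fmt.Println(sel)
	// metadata.name=template-1,spec.title=My template
}
```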
@@ -679,8 +669,7 @@ func TestIntegrationKinds(t *testing.T) {
 
 	ctx := context.Background()
 	helper := getTestHelper(t)
-	client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
 
 	newTemplate := &v0alpha1.TemplateGroup{
 		ObjectMeta: v1.ObjectMeta{
@@ -694,17 +683,17 @@ func TestIntegrationKinds(t *testing.T) {
 	}
 
 	t.Run("should not let create Mimir template", func(t *testing.T) {
-		_, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
+		_, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
 		require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 	})
 
 	t.Run("should not let change kind", func(t *testing.T) {
 		newTemplate.Spec.Kind = v0alpha1.TemplateGroupTemplateKindGrafana
-		created, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
+		created, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
 		require.NoError(t, err)
 
 		created.Spec.Kind = v0alpha1.TemplateGroupTemplateKindMimir
-		_, err = client.Update(ctx, created, resource.UpdateOptions{})
+		_, err = client.Update(ctx, created, v1.UpdateOptions{})
 		require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 	})
 }

@@ -10,7 +10,6 @@ import (
 	"slices"
 	"testing"
 
-	"github.com/grafana/grafana-app-sdk/resource"
 	"github.com/prometheus/alertmanager/config"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
@@ -58,8 +57,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 
 	ctx := context.Background()
 	helper := getTestHelper(t)
-	client, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	client := common.NewTimeIntervalClient(t, helper.Org1.Admin)
 
 	newInterval := &v0alpha1.TimeInterval{
 		ObjectMeta: v1.ObjectMeta{
@@ -74,22 +72,22 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 	t.Run("create should fail if object name is specified", func(t *testing.T) {
 		interval := newInterval.Copy().(*v0alpha1.TimeInterval)
 		interval.Name = "time-newInterval"
-		_, err := client.Create(ctx, interval, resource.CreateOptions{})
+		_, err := client.Create(ctx, interval, v1.CreateOptions{})
 		require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
 	})
 
-	var resourceID resource.Identifier
+	var resourceID string
 	t.Run("create should succeed and provide resource name", func(t *testing.T) {
-		actual, err := client.Create(ctx, newInterval, resource.CreateOptions{})
+		actual, err := client.Create(ctx, newInterval, v1.CreateOptions{})
 		require.NoError(t, err)
 		require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
 		require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
-		resourceID = actual.GetStaticMetadata().Identifier()
+		resourceID = actual.Name
 	})
 
 	var existingInterval *v0alpha1.TimeInterval
 	t.Run("resource should be available by the identifier", func(t *testing.T) {
-		actual, err := client.Get(ctx, resourceID)
+		actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
 		require.NoError(t, err)
 		require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
 		require.Equal(t, newInterval.Spec, actual.Spec)
@@ -102,13 +100,13 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
 		}
 		updated := existingInterval.Copy().(*v0alpha1.TimeInterval)
 		updated.Spec.Name = "another-newInterval"
-		actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
+		actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
 		require.NoError(t, err)
 		require.Equal(t, updated.Spec, actual.Spec)
 		require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
 		require.NotEqualf(t, updated.ResourceVersion, actual.ResourceVersion, "Update should change the resource version but it didn't")
 
-		resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
+		resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
 		require.NoError(t, err)
 		require.Equal(t, actual, resource)
 	})
@@ -191,13 +189,11 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
 		},
 	}
 
-	adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
 
 	for _, tc := range testCases {
 		t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
-			client, err := v0alpha1.NewTimeIntervalClientFromGenerator(tc.user.GetClientRegistry())
-			require.NoError(t, err)
+			client := common.NewTimeIntervalClient(t, tc.user)
 			var expected = &v0alpha1.TimeInterval{
 				ObjectMeta: v1.ObjectMeta{
 					Namespace: "default",
@@ -213,12 +209,12 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
 
 			if tc.canCreate {
 				t.Run("should be able to create time interval", func(t *testing.T) {
-					actual, err := client.Create(ctx, expected, resource.CreateOptions{})
+					actual, err := client.Create(ctx, expected, v1.CreateOptions{})
 					require.NoErrorf(t, err, "Payload %s", string(d))
 					require.Equal(t, expected.Spec, actual.Spec)
 
 					t.Run("should fail if already exists", func(t *testing.T) {
-						_, err := client.Create(ctx, actual, resource.CreateOptions{})
+						_, err := client.Create(ctx, actual, v1.CreateOptions{})
 						require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
 					})
 
@@ -226,45 +222,45 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
 				})
 			} else {
 				t.Run("should be forbidden to create", func(t *testing.T) {
-					_, err := client.Create(ctx, expected, resource.CreateOptions{})
+					_, err := client.Create(ctx, expected, v1.CreateOptions{})
 					require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
 				})
 
 				// create resource to proceed with other tests
-				expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
+				expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
 				require.NoErrorf(t, err, "Payload %s", string(d))
 				require.NotNil(t, expected)
 			}
 
 			if tc.canRead {
 				t.Run("should be able to list time intervals", func(t *testing.T) {
-					list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+					list, err := client.List(ctx, v1.ListOptions{})
 					require.NoError(t, err)
 					require.Len(t, list.Items, 1)
 				})
 
 				t.Run("should be able to read time interval by resource identifier", func(t *testing.T) {
-					got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
+					got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
 					require.NoError(t, err)
 					require.Equal(t, expected.Spec, got.Spec)
 					require.Equal(t, expected, got)
 
 					t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
-						_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
+						_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
 						require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 					})
 				})
 			} else {
 				t.Run("should be forbidden to list time intervals", func(t *testing.T) {
-					_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+					_, err := client.List(ctx, v1.ListOptions{})
 					require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 				})
 
 				t.Run("should be forbidden to read time interval by name", func(t *testing.T) {
-					_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
+					_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
 					require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 
 					t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
-						_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
+						_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
 						require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 					})
 				})
@@ -278,7 +274,7 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
 
 			if tc.canUpdate {
 				t.Run("should be able to update time interval", func(t *testing.T) {
-					updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
+					updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
 					require.NoErrorf(t, err, "Payload %s", string(d))
 
 					expected = updated
@@ -286,54 +282,52 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
 					t.Run("should get NotFound if name does not exist", func(t *testing.T) {
 						up := updatedExpected.Copy().(*v0alpha1.TimeInterval)
 						up.Name = "notFound"
-						_, err := client.Update(ctx, up, resource.UpdateOptions{})
+						_, err := client.Update(ctx, up, v1.UpdateOptions{})
 						require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 					})
 				})
 			} else {
 				t.Run("should be forbidden to update time interval", func(t *testing.T) {
-					_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
+					_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
 					require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 
 					t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
 						up := updatedExpected.Copy().(*v0alpha1.TimeInterval)
 						up.Name = "notFound"
-						_, err := client.Update(ctx, up, resource.UpdateOptions{
-							ResourceVersion: up.ResourceVersion,
-						})
+						_, err := client.Update(ctx, up, v1.UpdateOptions{})
 						require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 					})
 				})
 			}
 
 			deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
-			oldClient := common.NewTimeIntervalClient(t, tc.user)
 
 			if tc.canDelete {
 				t.Run("should be able to delete time interval", func(t *testing.T) {
-					err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
+					err := client.Delete(ctx, expected.Name, deleteOptions)
 					require.NoError(t, err)
 
 					t.Run("should get NotFound if name does not exist", func(t *testing.T) {
-						err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
+						err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
 						require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
 					})
 				})
 			} else {
 				t.Run("should be forbidden to delete time interval", func(t *testing.T) {
-					err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
+					err := client.Delete(ctx, expected.Name, deleteOptions)
 					require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 
 					t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
-						err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
+						err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
 						require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 					})
 				})
-				require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
+				require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
 			}
 
 			if tc.canRead {
 				t.Run("should get empty list if no mute timings", func(t *testing.T) {
-					list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+					list, err := client.List(ctx, v1.ListOptions{})
 					require.NoError(t, err)
 					require.Len(t, list.Items, 0)
 				})
@@ -351,8 +345,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
 	org := helper.Org1
 
 	admin := org.Admin
-	adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
 
 	env := helper.GetEnv()
 	ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -367,7 +360,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
 			Name:          "time-interval-1",
 			TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
 		},
-	}, resource.CreateOptions{})
+	}, v1.CreateOptions{})
 	require.NoError(t, err)
 	require.Equal(t, "none", created.GetProvenanceStatus())
 
@@ -378,7 +371,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
 			},
 		}, admin.Identity.GetOrgID(), "API"))
 
-		got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+		got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
 		require.NoError(t, err)
 		require.Equal(t, "API", got.GetProvenanceStatus())
 	})
@@ -386,12 +379,12 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
 		updated := created.Copy().(*v0alpha1.TimeInterval)
 		updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
 
-		_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
+		_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
 		require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 	})
 
 	t.Run("should not let delete if provisioned", func(t *testing.T) {
-		err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
+		err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
 		require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
 	})
 }
@@ -402,9 +395,7 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
 	ctx := context.Background()
 	helper := getTestHelper(t)
 
-	adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
-	oldClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
+	adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
 
 	interval := v0alpha1.TimeInterval{
 		ObjectMeta: v1.ObjectMeta{
@@ -416,22 +407,21 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
 		},
 	}
 
-	created, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
+	created, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
 	require.NoError(t, err)
 	require.NotNil(t, created)
 	require.NotEmpty(t, created.ResourceVersion)
 
 	t.Run("should forbid if version does not match", func(t *testing.T) {
 		updated := created.Copy().(*v0alpha1.TimeInterval)
-		_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
-			ResourceVersion: "test",
-		})
+		updated.ResourceVersion = "test"
+		_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
 		require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
 	})
 	t.Run("should update if version matches", func(t *testing.T) {
 		updated := created.Copy().(*v0alpha1.TimeInterval)
 		updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
-		actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
+		actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
 		require.NoError(t, err)
 		require.EqualValues(t, updated.Spec, actualUpdated.Spec)
 		require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
@@ -441,16 +431,16 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
 		updated.ResourceVersion = ""
 		updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
 
-		actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
+		actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
 		require.NoError(t, err)
 		require.EqualValues(t, updated.Spec, actualUpdated.Spec)
 		require.NotEqual(t, created.ResourceVersion, actualUpdated.ResourceVersion)
 	})
 	t.Run("should fail to delete if version does not match", func(t *testing.T) {
-		actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+		actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
 		require.NoError(t, err)
 
-		err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
+		err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
 			Preconditions: &v1.Preconditions{
 				ResourceVersion: util.Pointer("something"),
 			},
@@ -458,10 +448,10 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
 		require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
 	})
 	t.Run("should succeed if version matches", func(t *testing.T) {
-		actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
+		actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
 		require.NoError(t, err)
 
-		err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
+		err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
 			Preconditions: &v1.Preconditions{
 				ResourceVersion: util.Pointer(actual.ResourceVersion),
 			},
@@ -469,10 +459,10 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
 		require.NoError(t, err)
 	})
 	t.Run("should succeed if version is empty", func(t *testing.T) {
-		actual, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
+		actual, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
 		require.NoError(t, err)
 
-		err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
+		err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
 			Preconditions: &v1.Preconditions{
 				ResourceVersion: util.Pointer(actual.ResourceVersion),
 			},
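The delete calls above build `v1.Preconditions` with `util.Pointer(...)` because `ResourceVersion` is a `*string`, where nil means "no precondition". Assuming `util.Pointer` is a plain generic address-of helper (its actual definition is not part of this diff), it can be sketched as:

```go
package main

import "fmt"

// Pointer returns a pointer to v. This mirrors what a helper like
// util.Pointer presumably does: a string literal cannot be addressed
// directly, so it is copied into a variable and its address returned.
func Pointer[T any](v T) *T {
	return &v
}

// Preconditions stands in for v1.Preconditions: a nil ResourceVersion
// means the delete is unconditional.
type Preconditions struct {
	ResourceVersion *string
}

func main() {
	pre := Preconditions{ResourceVersion: Pointer("12345")}
	fmt.Println(*pre.ResourceVersion)
	// 12345
}
```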
@@ -487,9 +477,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
 	ctx := context.Background()
 	helper := getTestHelper(t)
 
-	adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
-	oldClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
+	adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
 
 	interval := v0alpha1.TimeInterval{
 		ObjectMeta: v1.ObjectMeta{
@@ -501,7 +489,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
 		},
 	}
 
-	current, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
+	current, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
 	require.NoError(t, err)
 	require.NotNil(t, current)
 	require.NotEmpty(t, current.ResourceVersion)
@@ -513,7 +501,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
 		}
 	}`
 
-		result, err := oldClient.Patch(ctx, current.GetStaticMetadata().Identifier().Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
+		result, err := adminClient.Patch(ctx, current.Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
 		require.NoError(t, err)
 		require.Empty(t, result.Spec.TimeIntervals)
 		current = result
@@ -522,15 +510,18 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
 	t.Run("should patch with json patch", func(t *testing.T) {
 		expected := fakes.IntervalGenerator{}.Generate()
 
-		patch := []resource.PatchOperation{
+		patch := []map[string]interface{}{
 			{
-				Operation: "add",
-				Path:      "/spec/time_intervals/-",
-				Value:     expected,
+				"op":    "add",
+				"path":  "/spec/time_intervals/-",
+				"value": expected,
 			},
 		}
 
-		result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
+		patchData, err := json.Marshal(patch)
+		require.NoError(t, err)
+
+		result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
 		require.NoError(t, err)
 		expectedSpec := v0alpha1.TimeIntervalSpec{
 			Name: current.Spec.Name,
@@ -549,8 +540,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
 	ctx := context.Background()
 	helper := getTestHelper(t)
 
-	adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
 
 	interval1 := &v0alpha1.TimeInterval{
 		ObjectMeta: v1.ObjectMeta{
@@ -561,7 +551,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
 			TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
 		},
 	}
-	interval1, err = adminClient.Create(ctx, interval1, resource.CreateOptions{})
+	interval1, err := adminClient.Create(ctx, interval1, v1.CreateOptions{})
 	require.NoError(t, err)
 
 	interval2 := &v0alpha1.TimeInterval{
@@ -573,7 +563,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
 			TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
 		},
 	}
-	interval2, err = adminClient.Create(ctx, interval2, resource.CreateOptions{})
+	interval2, err = adminClient.Create(ctx, interval2, v1.CreateOptions{})
 	require.NoError(t, err)
 	env := helper.GetEnv()
 	ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -584,18 +574,18 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
 			Name: interval2.Spec.Name,
 		},
 	}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
-	interval2, err = adminClient.Get(ctx, interval2.GetStaticMetadata().Identifier())
+	interval2, err = adminClient.Get(ctx, interval2.Name, v1.GetOptions{})
 
 	require.NoError(t, err)
 
-	intervals, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+	intervals, err := adminClient.List(ctx, v1.ListOptions{})
 	require.NoError(t, err)
 	require.Len(t, intervals.Items, 2)
 
 	t.Run("should filter by interval name", func(t *testing.T) {
 		t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{"spec.name=" + interval1.Spec.Name},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: "spec.name=" + interval1.Spec.Name,
 		})
 		require.NoError(t, err)
 		require.Len(t, list.Items, 1)
@@ -603,8 +593,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
 	})
 
 	t.Run("should filter by interval metadata name", func(t *testing.T) {
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{"metadata.name=" + interval2.Name},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: "metadata.name=" + interval2.Name,
 		})
 		require.NoError(t, err)
 		require.Len(t, list.Items, 1)
@@ -613,8 +603,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
 
 	t.Run("should filter by multiple filters", func(t *testing.T) {
 		t.Skip("disabled until app installer supports it")
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", interval2.Name), fmt.Sprintf("spec.name=%s", interval2.Spec.Name)},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: fmt.Sprintf("metadata.name=%s,spec.name=%s", interval2.Name, interval2.Spec.Name),
 		})
 		require.NoError(t, err)
 		require.Len(t, list.Items, 1)
@@ -622,8 +612,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
 	})
 
 	t.Run("should be empty when filter does not match", func(t *testing.T) {
-		list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
-			FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", "unknown")},
+		list, err := adminClient.List(ctx, v1.ListOptions{
+			FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
 		})
 		require.NoError(t, err)
 		require.Empty(t, list.Items)
@@ -657,20 +647,18 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
 		})
 	}
 
-	adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
 	v1intervals, err := timeinterval.ConvertToK8sResources(orgID, mtis, func(int64) string { return "default" }, nil)
 	require.NoError(t, err)
 	for _, interval := range v1intervals.Items {
-		_, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
+		_, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
 		require.NoError(t, err)
 	}
 
-	routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
-	require.NoError(t, err)
+	routeClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
 	v1route, err := routingtree.ConvertToK8sResource(helper.Org1.Admin.Identity.GetOrgID(), *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
 	require.NoError(t, err)
-	_, err = routeClient.Update(ctx, v1route, resource.UpdateOptions{})
+	_, err = routeClient.Update(ctx, v1route, v1.UpdateOptions{})
 	require.NoError(t, err)
 
 	postGroupRaw, err := testData.ReadFile(path.Join("test-data", "rulegroup-1.json"))
@@ -687,7 +675,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
 	currentRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
 	require.Equal(t, http.StatusAccepted, status)
 
-	intervals, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
+	intervals, err := adminClient.List(ctx, v1.ListOptions{})
 	require.NoError(t, err)
 	require.Len(t, intervals.Items, 3)
 	intervalIdx := slices.IndexFunc(intervals.Items, func(interval v0alpha1.TimeInterval) bool {
@@ -712,7 +700,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
 		renamed := interval.Copy().(*v0alpha1.TimeInterval)
 		renamed.Spec.Name += "-new"
 
-		actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
+		actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
 		require.NoError(t, err)
 
 		updatedRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
@@ -744,20 +732,20 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
 		t.Cleanup(func() {
 			require.NoError(t, db.DeleteProvenance(ctx, &currentRoute, orgID))
 		})
-		actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
+		actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
 		require.Errorf(t, err, "Expected error but got successful result: %v", actual)
 		require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
 	})
 
 	t.Run("provisioned rules", func(t *testing.T) {
 		ruleUid := currentRuleGroup.Rules[0].GrafanaManagedAlert.UID
-		rule := &ngmodels.AlertRule{UID: ruleUid}
-		require.NoError(t, db.SetProvenance(ctx, rule, orgID, "API"))
+		resource := &ngmodels.AlertRule{UID: ruleUid}
+		require.NoError(t, db.SetProvenance(ctx, resource, orgID, "API"))
|
||||
t.Cleanup(func() {
|
||||
require.NoError(t, db.DeleteProvenance(ctx, rule, orgID))
|
||||
require.NoError(t, db.DeleteProvenance(ctx, resource, orgID))
|
||||
})
|
||||
|
||||
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
|
||||
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
|
||||
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
@@ -766,7 +754,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
|
||||
t.Run("Delete", func(t *testing.T) {
|
||||
t.Run("should fail to delete if time interval is used in rule and routes", func(t *testing.T) {
|
||||
err := adminClient.Delete(ctx, interval.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
err := adminClient.Delete(ctx, interval.Name, v1.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
@@ -775,7 +763,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
route.Routes[0].MuteTimeIntervals = nil
|
||||
legacyCli.UpdateRoute(t, route, true)
|
||||
|
||||
err = adminClient.Delete(ctx, interval.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
err = adminClient.Delete(ctx, interval.Name, v1.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
|
||||
@@ -785,7 +773,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
|
||||
})
|
||||
intervalToDelete := intervals.Items[idx]
|
||||
|
||||
err = adminClient.Delete(ctx, intervalToDelete.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
|
||||
err = adminClient.Delete(ctx, intervalToDelete.Name, v1.DeleteOptions{})
|
||||
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
|
||||
})
|
||||
})
|
||||
@@ -797,8 +785,7 @@ func TestIntegrationTimeIntervalValidation(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
helper := getTestHelper(t)
|
||||
|
||||
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
|
||||
require.NoError(t, err)
|
||||
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
|
||||
|
||||
testCases := []struct {
|
||||
name string
|
||||
@@ -832,7 +819,7 @@ func TestIntegrationTimeIntervalValidation(t *testing.T) {
|
||||
},
|
||||
Spec: tc.interval,
|
||||
}
|
||||
_, err := adminClient.Create(ctx, i, resource.CreateOptions{})
|
||||
_, err := adminClient.Create(ctx, i, v1.CreateOptions{})
|
||||
require.Error(t, err)
|
||||
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest, got: %s", err)
|
||||
})
|
||||
|
||||
@@ -14,7 +14,7 @@ import (
"testing"
"time"

appsdk_k8s "github.com/grafana/grafana-app-sdk/k8s"
githubConnection "github.com/grafana/grafana/apps/provisioning/pkg/connection/github"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
@@ -28,8 +28,6 @@ import (
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"

githubConnection "github.com/grafana/grafana/apps/provisioning/pkg/connection/github"

"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/configprovider"
@@ -59,8 +57,6 @@ import (
const (
Org1 = "Org1"
Org2 = "OrgB"

DefaultNamespace = "default"
)

var (
@@ -449,11 +445,6 @@ func (c *User) RESTClient(t *testing.T, gv *schema.GroupVersion) *rest.RESTClien
return client
}

func (c *User) GetClientRegistry() *appsdk_k8s.ClientRegistry {
restConfig := c.NewRestConfig()
return appsdk_k8s.NewClientRegistry(*restConfig, appsdk_k8s.DefaultClientConfig())
}

type RequestParams struct {
User User
Method string // GET, POST, PATCH, etc
@@ -25,10 +25,6 @@ export class ExportAsCode extends ShareExportTab {
public getTabLabel(): string {
return t('export.json.title', 'Export dashboard');
}

public getSubtitle(): string | undefined {
return t('export.json.info-text', 'Copy or download a file containing the definition of your dashboard');
}
}

function ExportAsCodeRenderer({ model }: SceneComponentProps<ExportAsCode>) {
@@ -57,6 +53,12 @@ function ExportAsCodeRenderer({ model }: SceneComponentProps<ExportAsCode>) {

return (
<div data-testid={selector.container} className={styles.container}>
<p>
<Trans i18nKey="export.json.info-text">
Copy or download a file containing the definition of your dashboard
</Trans>
</p>

{config.featureToggles.kubernetesDashboards ? (
<ResourceExport
dashboardJson={dashboardJson}
@@ -1,189 +0,0 @@
import { render, screen, within } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { AsyncState } from 'react-use/lib/useAsync';

import { selectors as e2eSelectors } from '@grafana/e2e-selectors';
import { Dashboard } from '@grafana/schema';
import { Spec as DashboardV2Spec } from '@grafana/schema/dist/esm/schema/dashboard/v2';

import { ExportMode, ResourceExport } from './ResourceExport';

type DashboardJsonState = AsyncState<{
json: Dashboard | DashboardV2Spec | { error: unknown };
hasLibraryPanels?: boolean;
initialSaveModelVersion: 'v1' | 'v2';
}>;

const selector = e2eSelectors.pages.ExportDashboardDrawer.ExportAsJson;

const createDefaultProps = (overrides?: Partial<Parameters<typeof ResourceExport>[0]>) => {
const defaultProps: Parameters<typeof ResourceExport>[0] = {
dashboardJson: {
loading: false,
value: {
json: { title: 'Test Dashboard' } as Dashboard,
hasLibraryPanels: false,
initialSaveModelVersion: 'v1',
},
} as DashboardJsonState,
isSharingExternally: false,
exportMode: ExportMode.Classic,
isViewingYAML: false,
onExportModeChange: jest.fn(),
onShareExternallyChange: jest.fn(),
onViewYAML: jest.fn(),
};

return { ...defaultProps, ...overrides };
};

const createV2DashboardJson = (hasLibraryPanels = false): DashboardJsonState => ({
loading: false,
value: {
json: {
title: 'Test V2 Dashboard',
spec: {
elements: {},
},
} as unknown as DashboardV2Spec,
hasLibraryPanels,
initialSaveModelVersion: 'v2',
},
});

const expandOptions = async () => {
const button = screen.getByRole('button', { expanded: false });
await userEvent.click(button);
};

describe('ResourceExport', () => {
describe('export mode options for v1 dashboard', () => {
it('should show three export mode options in correct order: Classic, V1 Resource, V2 Resource', async () => {
render(<ResourceExport {...createDefaultProps()} />);
await expandOptions();

const radioGroup = screen.getByRole('radiogroup', { name: /model/i });
const labels = within(radioGroup)
.getAllByRole('radio')
.map((radio) => radio.parentElement?.textContent?.trim());

expect(labels).toHaveLength(3);
expect(labels).toEqual(['Classic', 'V1 Resource', 'V2 Resource']);
});

it('should have first option selected by default when exportMode is Classic', async () => {
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic })} />);
await expandOptions();

const radioGroup = screen.getByRole('radiogroup', { name: /model/i });
const radios = within(radioGroup).getAllByRole('radio');
expect(radios[0]).toBeChecked();
});

it('should call onExportModeChange when export mode is changed', async () => {
const onExportModeChange = jest.fn();
render(<ResourceExport {...createDefaultProps({ onExportModeChange })} />);
await expandOptions();

const radioGroup = screen.getByRole('radiogroup', { name: /model/i });
const radios = within(radioGroup).getAllByRole('radio');
await userEvent.click(radios[1]); // V1 Resource
expect(onExportModeChange).toHaveBeenCalledWith(ExportMode.V1Resource);
});
});

describe('export mode options for v2 dashboard', () => {
it('should not show export mode options', async () => {
render(<ResourceExport {...createDefaultProps({ dashboardJson: createV2DashboardJson() })} />);
await expandOptions();

expect(screen.queryByRole('radiogroup', { name: /model/i })).not.toBeInTheDocument();
});
});

describe('format options', () => {
it('should not show format options when export mode is Classic', async () => {
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic })} />);
await expandOptions();

expect(screen.getByRole('radiogroup', { name: /model/i })).toBeInTheDocument();
expect(screen.queryByRole('radiogroup', { name: /format/i })).not.toBeInTheDocument();
});

it.each([ExportMode.V1Resource, ExportMode.V2Resource])(
'should show format options when export mode is %s',
async (exportMode) => {
render(<ResourceExport {...createDefaultProps({ exportMode })} />);
await expandOptions();

expect(screen.getByRole('radiogroup', { name: /model/i })).toBeInTheDocument();
expect(screen.getByRole('radiogroup', { name: /format/i })).toBeInTheDocument();
}
);

it('should have first format option selected when isViewingYAML is false', async () => {
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.V1Resource, isViewingYAML: false })} />);
await expandOptions();

const formatGroup = screen.getByRole('radiogroup', { name: /format/i });
const formatRadios = within(formatGroup).getAllByRole('radio');
expect(formatRadios[0]).toBeChecked(); // JSON
});

it('should have second format option selected when isViewingYAML is true', async () => {
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.V1Resource, isViewingYAML: true })} />);
await expandOptions();

const formatGroup = screen.getByRole('radiogroup', { name: /format/i });
const formatRadios = within(formatGroup).getAllByRole('radio');
expect(formatRadios[1]).toBeChecked(); // YAML
});

it('should call onViewYAML when format is changed', async () => {
const onViewYAML = jest.fn();
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.V1Resource, onViewYAML })} />);
await expandOptions();

const formatGroup = screen.getByRole('radiogroup', { name: /format/i });
const formatRadios = within(formatGroup).getAllByRole('radio');
await userEvent.click(formatRadios[1]); // YAML
expect(onViewYAML).toHaveBeenCalled();
});
});

describe('share externally switch', () => {
it('should show share externally switch for Classic mode', () => {
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic })} />);

expect(screen.getByTestId(selector.exportExternallyToggle)).toBeInTheDocument();
});

it('should show share externally switch for V2Resource mode with V2 dashboard', () => {
render(
<ResourceExport
{...createDefaultProps({
dashboardJson: createV2DashboardJson(),
exportMode: ExportMode.V2Resource,
})}
/>
);

expect(screen.getByTestId(selector.exportExternallyToggle)).toBeInTheDocument();
});

it('should call onShareExternallyChange when switch is toggled', async () => {
const onShareExternallyChange = jest.fn();
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic, onShareExternallyChange })} />);

const switchElement = screen.getByTestId(selector.exportExternallyToggle);
await userEvent.click(switchElement);
expect(onShareExternallyChange).toHaveBeenCalled();
});

it('should reflect isSharingExternally value in switch', () => {
render(<ResourceExport {...createDefaultProps({ exportMode: ExportMode.Classic, isSharingExternally: true })} />);

expect(screen.getByTestId(selector.exportExternallyToggle)).toBeChecked();
});
});
});
@@ -4,8 +4,7 @@ import { selectors as e2eSelectors } from '@grafana/e2e-selectors';
import { Trans, t } from '@grafana/i18n';
import { Dashboard } from '@grafana/schema';
import { Spec as DashboardV2Spec } from '@grafana/schema/dist/esm/schema/dashboard/v2';
import { Alert, Icon, Label, RadioButtonGroup, Stack, Switch, Box, Tooltip } from '@grafana/ui';
import { QueryOperationRow } from 'app/core/components/QueryOperationRow/QueryOperationRow';
import { Alert, Label, RadioButtonGroup, Stack, Switch } from '@grafana/ui';
import { DashboardJson } from 'app/features/manage-dashboards/types';

import { ExportableResource } from '../ShareExportTab';
@@ -49,90 +48,80 @@ export function ResourceExport({

const switchExportLabel =
exportMode === ExportMode.V2Resource
? t('dashboard-scene.resource-export.share-externally', 'Share dashboard with another instance')
: t('share-modal.export.share-externally-label', 'Export for sharing externally');
const switchExportTooltip = t(
'dashboard-scene.resource-export.share-externally-tooltip',
'Removes all instance-specific metadata and data source references from the resource before export.'
);
? t('export.json.export-remove-ds-refs', 'Remove deployment details')
: t('share-modal.export.share-externally-label', `Export for sharing externally`);
const switchExportModeLabel = t('export.json.export-mode', 'Model');
const switchExportFormatLabel = t('export.json.export-format', 'Format');

const exportResourceOptions = [
{
label: t('dashboard-scene.resource-export.label.classic', 'Classic'),
value: ExportMode.Classic,
},
{
label: t('dashboard-scene.resource-export.label.v1-resource', 'V1 Resource'),
value: ExportMode.V1Resource,
},
{
label: t('dashboard-scene.resource-export.label.v2-resource', 'V2 Resource'),
value: ExportMode.V2Resource,
},
];

return (
<>
<QueryOperationRow
id="Advanced options"
index={0}
title={t('dashboard-scene.resource-export.label.advanced-options', 'Advanced options')}
isOpen={false}
>
<Box marginTop={2}>
<Stack gap={1} direction="column">
{initialSaveModelVersion === 'v1' && (
<Stack gap={1} alignItems="center">
<Label>{switchExportModeLabel}</Label>
<RadioButtonGroup
options={exportResourceOptions}
value={exportMode}
onChange={(value) => onExportModeChange(value)}
aria-label={switchExportModeLabel}
/>
</Stack>
)}

{exportMode !== ExportMode.Classic && (
<Stack gap={1} alignItems="center">
<Label>{switchExportFormatLabel}</Label>
<RadioButtonGroup
options={[
{ label: t('dashboard-scene.resource-export.label.json', 'JSON'), value: 'json' },
{ label: t('dashboard-scene.resource-export.label.yaml', 'YAML'), value: 'yaml' },
]}
value={isViewingYAML ? 'yaml' : 'json'}
onChange={onViewYAML}
aria-label={switchExportFormatLabel}
/>
</Stack>
)}
<Stack gap={2} direction="column">
<Stack gap={1} direction="column">
{initialSaveModelVersion === 'v1' && (
<Stack alignItems="center">
<Label>{switchExportModeLabel}</Label>
<RadioButtonGroup
options={[
{ label: t('dashboard-scene.resource-export.label.classic', 'Classic'), value: ExportMode.Classic },
{
label: t('dashboard-scene.resource-export.label.v1-resource', 'V1 Resource'),
value: ExportMode.V1Resource,
},
{
label: t('dashboard-scene.resource-export.label.v2-resource', 'V2 Resource'),
value: ExportMode.V2Resource,
},
]}
value={exportMode}
onChange={(value) => onExportModeChange(value)}
/>
</Stack>
</Box>
</QueryOperationRow>

{(isV2Dashboard ||
exportMode === ExportMode.Classic ||
(initialSaveModelVersion === 'v2' && exportMode === ExportMode.V1Resource)) && (
<Stack gap={1} alignItems="start">
<Label>
<Stack gap={0.5} alignItems="center">
<Tooltip content={switchExportTooltip} placement="bottom">
<Icon name="info-circle" size="sm" />
</Tooltip>
{switchExportLabel}
</Stack>
</Label>
<Switch
label={switchExportLabel}
value={isSharingExternally}
onChange={onShareExternallyChange}
data-testid={selector.exportExternallyToggle}
/>
</Stack>
)}
)}
{initialSaveModelVersion === 'v2' && (
<Stack alignItems="center">
<Label>{switchExportModeLabel}</Label>
<RadioButtonGroup
options={[
{
label: t('dashboard-scene.resource-export.label.v2-resource', 'V2 Resource'),
value: ExportMode.V2Resource,
},
{
label: t('dashboard-scene.resource-export.label.v1-resource', 'V1 Resource'),
value: ExportMode.V1Resource,
},
]}
value={exportMode}
onChange={(value) => onExportModeChange(value)}
/>
</Stack>
)}
{exportMode !== ExportMode.Classic && (
<Stack gap={1} alignItems="center">
<Label>{switchExportFormatLabel}</Label>
<RadioButtonGroup
options={[
{ label: t('dashboard-scene.resource-export.label.json', 'JSON'), value: 'json' },
{ label: t('dashboard-scene.resource-export.label.yaml', 'YAML'), value: 'yaml' },
]}
value={isViewingYAML ? 'yaml' : 'json'}
onChange={onViewYAML}
/>
</Stack>
)}
{(isV2Dashboard ||
exportMode === ExportMode.Classic ||
(initialSaveModelVersion === 'v2' && exportMode === ExportMode.V1Resource)) && (
<Stack gap={1} alignItems="start">
<Label>{switchExportLabel}</Label>
<Switch
label={switchExportLabel}
value={isSharingExternally}
onChange={onShareExternallyChange}
data-testid={selector.exportExternallyToggle}
/>
</Stack>
)}
</Stack>

{showV2LibPanelAlert && (
<Alert
@@ -141,7 +130,6 @@ export function ResourceExport({
'Library panels will be converted to regular panels'
)}
severity="warning"
topSpacing={2}
>
<Trans i18nKey="dashboard-scene.save-dashboard-form.schema-v2-library-panels-export">
Due to limitations in the new dashboard schema (V2), library panels will be converted to regular panels with
@@ -149,6 +137,6 @@ export function ResourceExport({
</Trans>
</Alert>
)}
</>
</Stack>
);
}
@@ -66,12 +66,7 @@ function ShareDrawerRenderer({ model }: SceneComponentProps<ShareDrawer>) {
const dashboard = getDashboardSceneFor(model);

return (
<Drawer
title={activeShare?.getTabLabel()}
subtitle={activeShare?.getSubtitle?.()}
onClose={model.onDismiss}
size="md"
>
<Drawer title={activeShare?.getTabLabel()} onClose={model.onDismiss} size="md">
<ShareDrawerContext.Provider value={{ dashboard, onDismiss: model.onDismiss }}>
{activeShare && <activeShare.Component model={activeShare} />}
</ShareDrawerContext.Provider>
@@ -66,10 +66,6 @@ export class ShareExportTab extends SceneObjectBase<ShareExportTabState> impleme
return t('share-modal.tab-title.export', 'Export');
}

public getSubtitle(): string | undefined {
return undefined;
}

public onShareExternallyChange = () => {
this.setState({
isSharingExternally: !this.state.isSharingExternally,
@@ -15,6 +15,5 @@ export interface SceneShareTab<T extends SceneShareTabState = SceneShareTabState

export interface ShareView extends SceneObject {
getTabLabel(): string;
getSubtitle?(): string | undefined;
onDismiss?: () => void;
}
@@ -2,9 +2,8 @@ import { render, screen } from '@testing-library/react';
import { defaultsDeep } from 'lodash';
import { Provider } from 'react-redux';

import { CoreApp, EventBusSrv, FieldType, getDefaultTimeRange, LoadingState } from '@grafana/data';
import { config, PanelDataErrorViewProps } from '@grafana/runtime';
import { usePanelContext } from '@grafana/ui';
import { FieldType, getDefaultTimeRange, LoadingState } from '@grafana/data';
import { PanelDataErrorViewProps } from '@grafana/runtime';
import { configureStore } from 'app/store/configureStore';

import { PanelDataErrorView } from './PanelDataErrorView';
@@ -17,24 +16,7 @@ jest.mock('app/features/dashboard/services/DashboardSrv', () => ({
},
}));

jest.mock('@grafana/ui', () => ({
...jest.requireActual('@grafana/ui'),
usePanelContext: jest.fn(),
}));

const mockUsePanelContext = jest.mocked(usePanelContext);
const RUN_QUERY_MESSAGE = 'Run a query to visualize it here or go to all visualizations to add other panel types';
const panelContextRoot = {
app: CoreApp.Dashboard,
eventsScope: 'global',
eventBus: new EventBusSrv(),
};

describe('PanelDataErrorView', () => {
beforeEach(() => {
mockUsePanelContext.mockReturnValue(panelContextRoot);
});

it('show No data when there is no data', () => {
renderWithProps();

@@ -88,45 +70,6 @@ describe('PanelDataErrorView', () => {

expect(screen.getByText('Query returned nothing')).toBeInTheDocument();
});

it('should show "Run a query..." message when no query is configured and feature toggle is enabled', () => {
mockUsePanelContext.mockReturnValue(panelContextRoot);

const originalFeatureToggle = config.featureToggles.newVizSuggestions;
config.featureToggles.newVizSuggestions = true;

renderWithProps({
data: {
state: LoadingState.Done,
series: [],
timeRange: getDefaultTimeRange(),
},
});

expect(screen.getByText(RUN_QUERY_MESSAGE)).toBeInTheDocument();

config.featureToggles.newVizSuggestions = originalFeatureToggle;
});

it('should show "No data" message when feature toggle is disabled even without queries', () => {
mockUsePanelContext.mockReturnValue(panelContextRoot);

const originalFeatureToggle = config.featureToggles.newVizSuggestions;
config.featureToggles.newVizSuggestions = false;

renderWithProps({
data: {
state: LoadingState.Done,
series: [],
timeRange: getDefaultTimeRange(),
},
});

expect(screen.getByText('No data')).toBeInTheDocument();
expect(screen.queryByText(RUN_QUERY_MESSAGE)).not.toBeInTheDocument();

config.featureToggles.newVizSuggestions = originalFeatureToggle;
});
});

function renderWithProps(overrides?: Partial<PanelDataErrorViewProps>) {
@@ -5,15 +5,14 @@ import {
FieldType,
getPanelDataSummary,
GrafanaTheme2,
PanelData,
PanelDataSummary,
PanelPluginVisualizationSuggestion,
} from '@grafana/data';
import { selectors } from '@grafana/e2e-selectors';
import { t, Trans } from '@grafana/i18n';
import { PanelDataErrorViewProps, locationService, config } from '@grafana/runtime';
import { PanelDataErrorViewProps, locationService } from '@grafana/runtime';
import { VizPanel } from '@grafana/scenes';
import { Icon, usePanelContext, useStyles2 } from '@grafana/ui';
import { usePanelContext, useStyles2 } from '@grafana/ui';
import { CardButton } from 'app/core/components/CardButton';
import { LS_VISUALIZATION_SELECT_TAB_KEY } from 'app/core/constants';
import store from 'app/core/store';
@@ -25,11 +24,6 @@ import { findVizPanelByKey, getVizPanelKeyForPanelId } from 'app/features/dashbo
import { useDispatch } from 'app/types/store';

import { changePanelPlugin } from '../state/actions';
import { hasData } from '../suggestions/utils';

function hasNoQueryConfigured(data: PanelData): boolean {
return !data.request?.targets || data.request.targets.length === 0;
}

export function PanelDataErrorView(props: PanelDataErrorViewProps) {
const styles = useStyles2(getStyles);
@@ -99,14 +93,8 @@ export function PanelDataErrorView(props: PanelDataErrorViewProps) {
}
};

const noData = !hasData(props.data);
const noQueryConfigured = hasNoQueryConfigured(props.data);
const showEmptyState =
config.featureToggles.newVizSuggestions && context.app === CoreApp.PanelEditor && noQueryConfigured && noData;

return (
<div className={styles.wrapper}>
{showEmptyState && <Icon name="chart-line" size="xxxl" className={styles.emptyStateIcon} />}
<div className={styles.message} data-testid={selectors.components.Panels.Panel.PanelDataErrorMessage}>
{message}
</div>
@@ -143,17 +131,7 @@ function getMessageFor(
return message;
}

const noData = !hasData(data);
const noQueryConfigured = hasNoQueryConfigured(data);

if (config.featureToggles.newVizSuggestions && noQueryConfigured && noData) {
return t(
'dashboard.new-panel.empty-state-message',
'Run a query to visualize it here or go to all visualizations to add other panel types'
);
}

if (noData) {
if (!data.series || data.series.length === 0 || data.series.every((frame) => frame.length === 0)) {
return fieldConfig?.defaults.noValue ?? t('panel.panel-data-error-view.no-value.default', 'No data');
}

@@ -198,9 +176,5 @@ const getStyles = (theme: GrafanaTheme2) => {
width: '100%',
maxWidth: '600px',
}),
emptyStateIcon: css({
color: theme.colors.text.secondary,
marginBottom: theme.spacing(2),
}),
};
};
@@ -1,26 +1,29 @@
import { SelectableValue } from '@grafana/data';
import { RadioButtonGroup } from '@grafana/ui';

import { useDispatch } from '../../hooks/useStatelessReducer';
import { EditorType } from '../../types';

import { useQuery } from './ElasticsearchQueryContext';
import { changeEditorTypeAndResetQuery } from './state';

const BASE_OPTIONS: Array<SelectableValue<EditorType>> = [
{ value: 'builder', label: 'Builder' },
{ value: 'code', label: 'Code' },
];

interface Props {
value: EditorType;
onChange: (editorType: EditorType) => void;
}
export const EditorTypeSelector = () => {
const query = useQuery();
const dispatch = useDispatch();

// Default to 'builder' if editorType is empty
const editorType: EditorType = query.editorType === 'code' ? 'code' : 'builder';

const onChange = (newEditorType: EditorType) => {
dispatch(changeEditorTypeAndResetQuery(newEditorType));
};

export const EditorTypeSelector = ({ value, onChange }: Props) => {
return (
<RadioButtonGroup<EditorType>
data-testid="elasticsearch-editor-type-toggle"
size="sm"
options={BASE_OPTIONS}
value={value}
onChange={onChange}
/>
<RadioButtonGroup<EditorType> fullWidth={false} options={BASE_OPTIONS} value={editorType} onChange={onChange} />
);
};
@@ -10,13 +10,9 @@ interface Props {
  onRunQuery: () => void;
}

// This offset was chosen by testing to match Prometheus behavior
const EDITOR_HEIGHT_OFFSET = 2;

export function RawQueryEditor({ value, onChange, onRunQuery }: Props) {
  const styles = useStyles2(getStyles);
  const editorRef = useRef<monacoTypes.editor.IStandaloneCodeEditor | null>(null);
  const containerRef = useRef<HTMLDivElement | null>(null);

  const handleEditorDidMount = useCallback(
    (editor: monacoTypes.editor.IStandaloneCodeEditor, monaco: Monaco) => {
@@ -26,22 +22,6 @@ export function RawQueryEditor({ value, onChange, onRunQuery }: Props) {
      editor.addCommand(monaco.KeyMod.CtrlCmd | monaco.KeyCode.Enter, () => {
        onRunQuery();
      });

      // Make the editor resize itself so that the content fits (grows taller when necessary)
      // this code comes from the Prometheus query editor.
      // We may wish to consider abstracting it into the grafana/ui repo in the future
      const updateElementHeight = () => {
        const containerDiv = containerRef.current;
        if (containerDiv !== null) {
          const pixelHeight = editor.getContentHeight();
          containerDiv.style.height = `${pixelHeight + EDITOR_HEIGHT_OFFSET}px`;
          const pixelWidth = containerDiv.clientWidth;
          editor.layout({ width: pixelWidth, height: pixelHeight });
        }
      };

      editor.onDidContentSizeChange(updateElementHeight);
      updateElementHeight();
    },
    [onRunQuery]
  );
@@ -85,17 +65,7 @@ export function RawQueryEditor({ value, onChange, onRunQuery }: Props) {

  return (
    <Box>
      <div ref={containerRef} className={styles.editorContainer}>
        <CodeEditor
          value={value ?? ''}
          language="json"
          width="100%"
          onBlur={handleQueryChange}
          monacoOptions={monacoOptions}
          onEditorDidMount={handleEditorDidMount}
        />
      </div>
      <div className={styles.footer}>
      <div className={styles.header}>
        <Stack gap={1}>
          <Button
            size="sm"
@@ -106,8 +76,20 @@ export function RawQueryEditor({ value, onChange, onRunQuery }: Props) {
          >
            Format
          </Button>
          <Button size="sm" variant="primary" icon="play" onClick={onRunQuery} tooltip="Run query (Ctrl/Cmd+Enter)">
            Run
          </Button>
        </Stack>
      </div>
      <CodeEditor
        value={value ?? ''}
        language="json"
        height={200}
        width="100%"
        onBlur={handleQueryChange}
        monacoOptions={monacoOptions}
        onEditorDidMount={handleEditorDidMount}
      />
    </Box>
  );
}
@@ -118,11 +100,7 @@ const getStyles = (theme: GrafanaTheme2) => ({
    flexDirection: 'column',
    gap: theme.spacing(1),
  }),
  editorContainer: css({
    width: '100%',
    overflow: 'hidden',
  }),
  footer: css({
  header: css({
    display: 'flex',
    justifyContent: 'flex-end',
    padding: theme.spacing(0.5, 0),

@@ -1,16 +1,16 @@
import { css } from '@emotion/css';
import { useCallback, useEffect, useId, useState } from 'react';
import { useEffect, useId, useState } from 'react';
import { SemVer } from 'semver';

import { getDefaultTimeRange, GrafanaTheme2, QueryEditorProps } from '@grafana/data';
import { config } from '@grafana/runtime';
import { Alert, ConfirmModal, InlineField, InlineLabel, Input, QueryField, useStyles2 } from '@grafana/ui';
import { Alert, InlineField, InlineLabel, Input, QueryField, useStyles2 } from '@grafana/ui';

import { ElasticsearchDataQuery } from '../../dataquery.gen';
import { ElasticDatasource } from '../../datasource';
import { useNextId } from '../../hooks/useNextId';
import { useDispatch } from '../../hooks/useStatelessReducer';
import { EditorType, ElasticsearchOptions } from '../../types';
import { ElasticsearchOptions } from '../../types';
import { isSupportedVersion, isTimeSeriesQuery, unsupportedVersionMessage } from '../../utils';

import { BucketAggregationsEditor } from './BucketAggregationsEditor';
@@ -20,7 +20,7 @@ import { MetricAggregationsEditor } from './MetricAggregationsEditor';
import { metricAggregationConfig } from './MetricAggregationsEditor/utils';
import { QueryTypeSelector } from './QueryTypeSelector';
import { RawQueryEditor } from './RawQueryEditor';
import { changeAliasPattern, changeEditorTypeAndResetQuery, changeQuery, changeRawDSLQuery } from './state';
import { changeAliasPattern, changeQuery, changeRawDSLQuery } from './state';

export type ElasticQueryEditorProps = QueryEditorProps<ElasticDatasource, ElasticsearchDataQuery, ElasticsearchOptions>;

@@ -97,61 +97,31 @@ const QueryEditorForm = ({ value, onRunQuery }: Props & { onRunQuery: () => void
  const inputId = useId();
  const styles = useStyles2(getStyles);

  const [switchModalOpen, setSwitchModalOpen] = useState(false);
  const [pendingEditorType, setPendingEditorType] = useState<EditorType | null>(null);

  const isTimeSeries = isTimeSeriesQuery(value);

  const isCodeEditor = value.editorType === 'code';
  const rawDSLFeatureEnabled = config.featureToggles.elasticsearchRawDSLQuery;

  // Default to 'builder' if editorType is empty
  const currentEditorType: EditorType = value.editorType === 'code' ? 'code' : 'builder';

  const showBucketAggregationsEditor = value.metrics?.every(
    (metric) => metricAggregationConfig[metric.type].impliedQueryType === 'metrics'
  );

  const onEditorTypeChange = useCallback((newEditorType: EditorType) => {
    // Show warning modal when switching modes
    setPendingEditorType(newEditorType);
    setSwitchModalOpen(true);
  }, []);

  const confirmEditorTypeChange = useCallback(() => {
    if (pendingEditorType) {
      dispatch(changeEditorTypeAndResetQuery(pendingEditorType));
    }
    setSwitchModalOpen(false);
    setPendingEditorType(null);
  }, [dispatch, pendingEditorType]);

  const cancelEditorTypeChange = useCallback(() => {
    setSwitchModalOpen(false);
    setPendingEditorType(null);
  }, []);

  return (
    <>
      <ConfirmModal
        isOpen={switchModalOpen}
        title="Switch editor"
        body="Switching between editors will reset your query. Are you sure you want to continue?"
        confirmText="Continue"
        onConfirm={confirmEditorTypeChange}
        onDismiss={cancelEditorTypeChange}
      />
      <div className={styles.root}>
        <InlineLabel width={17}>Query type</InlineLabel>
        <div className={styles.queryItem}>
          <QueryTypeSelector />
        </div>
        {rawDSLFeatureEnabled && (
          <div style={{ marginLeft: 'auto' }}>
            <EditorTypeSelector value={currentEditorType} onChange={onEditorTypeChange} />
          </div>
        )}
      </div>
      {rawDSLFeatureEnabled && (
        <div className={styles.root}>
          <InlineLabel width={17}>Editor type</InlineLabel>
          <div className={styles.queryItem}>
            <EditorTypeSelector />
          </div>
        </div>
      )}

      {isCodeEditor && rawDSLFeatureEnabled && (
        <RawQueryEditor

@@ -6383,15 +6383,12 @@
    },
    "resource-export": {
      "label": {
        "advanced-options": "Advanced options",
        "classic": "Classic",
        "json": "JSON",
        "v1-resource": "V1 Resource",
        "v2-resource": "V2 Resource",
        "yaml": "YAML"
      },
      "share-externally": "Share dashboard with another instance",
      "share-externally-tooltip": "Removes all instance-specific metadata and data source references from the resource before export."
    }
  },
  "revert-dashboard-modal": {
    "body-restore-version": "Are you sure you want to restore the dashboard to version {{version}}? All unsaved changes will be lost.",
@@ -7845,6 +7842,7 @@
    "export-externally-label": "Export the dashboard to use in another instance",
    "export-format": "Format",
    "export-mode": "Model",
    "export-remove-ds-refs": "Remove deployment details",
    "info-text": "Copy or download a file containing the definition of your dashboard",
    "title": "Export dashboard"
  },
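For reviewers unfamiliar with the auto-resize logic kept in `handleEditorDidMount` above, the rule is simple: the container's height tracks the editor's content height plus a small pixel offset, and the editor is re-laid-out at the container's current width. A minimal sketch of just that calculation, with stub interfaces standing in for the Monaco editor and the DOM container (hypothetical names, not Grafana or Monaco APIs):

```typescript
// Mirrors the constant in the diff; chosen by testing to match Prometheus behavior.
const EDITOR_HEIGHT_OFFSET = 2;

// Minimal stand-ins for the Monaco editor instance and the container div.
interface EditorLike {
  getContentHeight(): number;
  layout(size: { width: number; height: number }): void;
}

interface ContainerLike {
  clientWidth: number;
  styleHeight: string; // stands in for container.style.height
}

// Same logic as updateElementHeight in the diff: grow the container to fit
// the content, then re-layout the editor at the container's width.
function updateElementHeight(editor: EditorLike, container: ContainerLike): void {
  const pixelHeight = editor.getContentHeight();
  container.styleHeight = `${pixelHeight + EDITOR_HEIGHT_OFFSET}px`;
  editor.layout({ width: container.clientWidth, height: pixelHeight });
}
```

In the real component this function is wired to `editor.onDidContentSizeChange`, so the container grows whenever the query text gets taller.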