Compare commits

...

28 Commits

Author SHA1 Message Date
Ryan McKinley
aa945c8e2a always return a value 2025-12-18 09:23:41 +03:00
grafana-pr-automation[bot]
0b233d20dd I18n: Download translations from Crowdin (#115527)
New Crowdin translations by GitHub Action

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-18 00:40:27 +00:00
Kevin Minehart
b6d567b429 Docs: remove software-properties-common; it is unused and not available in debian:13 (#115482) 2025-12-17 15:55:59 -06:00
Yuri Tseretyan
db3503fb32 Alerting: Support for imported Templates (#114196)
* refactor template service to construct notification templates in one place; get provenance before creating and calculate resource version after.
* refactor get by UID and name

* introduce template kind in NotificationTemplate
* introduce includeImported flag and use in the k8s api
* support imported templates
* add kind to template uid
* tests for imported templates
* update API model
* set kind to default templates
2025-12-17 20:26:22 +00:00
Renato Costa
370d5c2dc2 unified-storage: add Keys support to the sqlkv implementation (#115510)
* unified-storage: add `Keys` support to the sqlkv implementation

* add validation for sort option

* Revert sort order validation, assume desc when invalid
2025-12-17 15:03:59 -05:00
Will Assis
5861b6c0d5 unified-storage: fix modes 1/2 pagination in dashboard list view (#115511)
* unified-storage: fix modes 1/2 pagination in dashboard list view
2025-12-17 14:51:07 -05:00
Jesse David Peterson
fbd5fe4bd2 Docs: Add a "DO NOT MODIFY" warning to the public/img/* source code directory (#115502)
* docs(public-img): add DO NOT MODIFY warning

* docs(typo): use US English spelling of behaviour

Co-authored-by: Isabel Matwawana <76437239+imatwawana@users.noreply.github.com>

---------

Co-authored-by: Isabel Matwawana <76437239+imatwawana@users.noreply.github.com>
2025-12-17 15:48:04 -04:00
Dominik Prokop
973523fd1f V2: Fix ad hoc filter defaultKeys incorrectly set to static mode (#115508)
* V2: Fix ad hoc filter defaultKeys incorrectly set to static mode

* Fixture update
2025-12-17 19:47:09 +00:00
Sergej-Vlasov
f3d4181cf2 V2 -> V1 conversion: include empty properties array when converting overrides (#115495)
* adjust conversion file to include empty properties array in overrides

* fix lint error

* add test case for empty properties and fix incorrect regex to v1 conversion
2025-12-17 18:07:06 +01:00
Ida Štambuk
a44e839033 Dynamic Dashboards: Decrease min height of first grid child (#115497) 2025-12-17 17:44:20 +01:00
Roberto Jiménez Sánchez
7e45a300b9 Provisioning: Remove migration from legacy storage (#112505)
* Deprecate Legacy Storage Migration in Backend

* Change the messaging around legacy storage

* Disable cards to connect

* Commit import changes

* Block repository creation if resources are in legacy storage

* Update error message

* Prettify

* chore: uncomment unified migration

* chore: adapt and fix tests

* Remove legacy storage migration from frontend

* Refactor provisioning job options by removing legacy storage and history fields

- Removed the `History` field from `MigrateJobOptions` and related references in the codebase.
- Eliminated the `LegacyStorage` field from `RepositoryViewList` and its associated comments.
- Updated tests and generated OpenAPI schema to reflect these changes.
- Simplified the `MigrationWorker` by removing dependencies on legacy storage checks.

* Refactor OpenAPI schema and tests to remove deprecated fields

- Removed the `history` field from `MigrateJobOptions` and updated the OpenAPI schema accordingly.
- Eliminated the `legacyStorage` field from `RepositoryViewList` and its associated comments in the schema.
- Updated integration tests to reflect the removal of these fields.

* Fix typescript errors

* Refactor provisioning code to remove legacy storage dependencies

- Eliminated references to `dualwrite.Service` and related legacy storage checks across multiple files.
- Updated `APIBuilder`, `RepositoryController`, and `SyncWorker` to streamline resource handling without legacy storage considerations.
- Adjusted tests to reflect the removal of legacy storage mocks and dependencies, ensuring cleaner and more maintainable code.

* Fix unit tests

* Remove more references to legacy

* Enhance provisioning wizard with migration options

- Added a checkbox for migrating existing resources in the BootstrapStep component.
- Updated the form context to track the new migration option.
- Adjusted the SynchronizeStep and useCreateSyncJob hook to incorporate the migration logic.
- Enhanced localization with new descriptions and labels for migration features.

* Remove unused variable and dualwrite reference in provisioning code

- Eliminated an unused variable declaration in `provisioning_manifest.go`.
- Removed the `nil` reference for dualwrite in `repo_operator.go`, aligning with the standalone operator's assumption of unified storage.

* Update go.mod and go.sum to include new dependencies

- Added `github.com/grafana/grafana-app-sdk` version `0.48.5` and several indirect dependencies including `github.com/getkin/kin-openapi`, `github.com/hashicorp/errwrap`, and others.
- Updated `go.sum` to reflect the new dependencies and their respective versions.

* Refactor provisioning components for improved readability

- Simplified the import statement in HomePage.tsx by removing unnecessary line breaks.
- Consolidated props in the SynchronizeStep component for cleaner code.
- Enhanced the layout of the ProvisioningWizard component by streamlining the rendering of the SynchronizeStep.

* Deprecate MigrationWorker and clean up related comments

- Removed the deprecated MigrationWorker implementation and its associated comments from the provisioning code.
- This change reflects the ongoing effort to eliminate legacy components and improve code maintainability.

* Fix linting issues

* Add explicit comment

* Update useResourceStats hook in BootstrapStep component to accept selected target

- Modified the BootstrapStep component to pass the selected target to the useResourceStats hook.
- Updated related tests to reflect the change in expected arguments for the useResourceStats hook.

* fix(provisioning): Update migrate tests to match export-then-sync behavior for all repository types

Updates test expectations for folder-type repositories to match the
implementation changes where both folder and instance repository types
now run export followed by sync. Only the namespace cleaner is skipped
for folder-type repositories.

Changes:
- Update "should run export and sync for folder-type repositories" test to include export mocks
- Update "should fail when sync job fails for folder-type repositories" test to include export mocks
- Rename test to clarify that both export and sync run for folder types
- Add proper mock expectations for SetMessage, StrictMaxErrors, Process, and ResetResults

All migrate package tests now pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Update provisioning wizard text and improve resource counting display

- Enhanced descriptions for migrating existing resources to clarify that unmanaged resources will also be included.
- Refactored BootstrapStepResourceCounting component to simplify the rendering logic and ensure both external storage and unmanaged resources are displayed correctly.
- Updated alert messages in SynchronizeStep to reflect accurate information regarding resource management during migration.
- Adjusted localization strings for consistency with the new descriptions.

* Update provisioning wizard alert messages for clarity and accuracy

- Revised alert points to indicate that resources can still be modified during migration, with a note on potential export issues.
- Clarified that resources will be marked as managed post-provisioning and that dashboards remain accessible throughout the process.

* Fix issue with trigger wrong type of job

* Fix export failure when folder already exists in repository

When exporting resources to a repository, if a folder already exists,
the Read() method would fail with "path component is empty" error.

This occurred because:
1. Folders are identified by trailing slash (e.g., "Legacy Folder/")
2. The Read() method passes this path directly to GetTreeByPath()
3. GetTreeByPath() splits the path by "/" creating empty components
4. This causes the "path component is empty" error

The fix strips the trailing slash before calling GetTreeByPath() to
avoid empty path components, while still using the trailing slash
convention to identify directories.

The Create() method already handles this correctly by appending
".keep" to directory paths, which is why the first export succeeded
but subsequent exports failed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
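The trailing-slash fix described above can be sketched as follows. `normalizeTreePath` is a hypothetical helper, not the actual Grafana code, but it shows the idea: trim the trailing slash before splitting so no empty path component is produced, while still using the trailing-slash convention to identify directories.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeTreePath splits a repository path into components for a tree
// lookup. Folders are identified by a trailing slash (e.g. "Legacy Folder/"),
// but splitting that path on "/" yields an empty final component, which is
// what produced the "path component is empty" error. Trimming the slash
// first avoids the empty component; the directory flag is derived before
// trimming, so callers can still tell folders from files.
func normalizeTreePath(p string) (components []string, isDir bool) {
	isDir = strings.HasSuffix(p, "/")
	trimmed := strings.TrimSuffix(p, "/")
	return strings.Split(trimmed, "/"), isDir
}

func main() {
	c, dir := normalizeTreePath("Legacy Folder/")
	fmt.Println(c, dir) // [Legacy Folder] true
	c, dir = normalizeTreePath("a/b/c")
	fmt.Println(c, dir) // [a b c] false
}
```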

* Fix folder tree not updated when folder already exists in repository

When exporting resources and a folder already exists in the repository,
the folder was not being added to the FolderManager's tree. This caused
subsequent dashboard exports to fail with "folder NOT found in tree".

The fix adds the folder to fm.tree even when it already exists in the
repository, ensuring all folders are available for resource lookups.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Revert "Merge remote-tracking branch 'origin/uncomment-unified-migration-code' into cleanup/deprecate-legacy-storage-migration-in-provisioning"

This reverts commit 6440fae342, reversing
changes made to ec39fb04f2.

* fix: handle empty folder titles in path construction

- Skip folders with empty titles in dirPath to avoid empty path components
- Skip folders with empty paths before checking if they exist in repository
- Fix unit tests to properly check useResourceStats hook calls with type annotations

* Update workspace

* Fix BootstrapStep tests after reverting unified migration merge

Updated test expectations to match the current component behavior where
resource counts are displayed for both instance and folder sync options.

- Changed 'Empty' count expectation from 3 to 4 (2 cards × 2 counts each)
- Changed '7 resources' test to use findAllByText instead of findByText
  since the count appears in multiple cards

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Remove bubbletea deps

* Fix workspace

* provisioning: update error message to reference enableMigration config

Update the error message when provisioning cannot be used due to
incompatible data format to instruct users to enable data migration
for folders and dashboards using the enableMigration configuration
introduced in PR #114857.

Also update the test helper to include EnableMigration: true for both
dashboards and folders to match the new configuration pattern.

* provisioning: add comment explaining Mode5 and EnableMigration requirement

Add a comment in the integration test helper explaining that Provisioning
requires Mode5 (unified storage) and EnableMigration (data migration) as
it expects resources to be fully migrated to unified storage.

* Remove migrate resources checkbox from folder type provisioning wizard

- Remove checkbox UI for migrating existing resources in folder type
- Remove migrateExistingResources from migration logic
- Simplify migration to only use requiresMigration flag
- Remove unused translation keys
- Update i18n strings

* Fix linting

* Remove unnecessary React Fragment wrapper in BootstrapStep

* Address comments

---------

Co-authored-by: Rafael Paulovic <rafael.paulovic@grafana.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-17 17:22:17 +01:00
Paul Marbach
60298fb02a Gauge: Update labelling to include new gauge (#115499) 2025-12-17 10:46:25 -05:00
Ihor Yeromin
d8b3462406 SQL Expression: Always create new SQL Expression block from Transform with SQL tile (#114510)
* feat(sql-expression): new-block-on-each-click

* remove unused function
2025-12-17 16:27:47 +01:00
Will Browne
e0711d9d1d Plugins: Local provider for meta (#114474) 2025-12-17 16:25:54 +01:00
Dan83
eb392b6149 Forms: Remove gf-form from DataSourceLoadError.tsx (#113021)
* Forms: Remove gf-form from DataSourceLoadError.tsx

* Forms: Remove gf-form from DataSourceLoadError.tsx
2025-12-17 14:47:38 +00:00
Yunwen Zheng
3672d9c41d DashListItem: Added DashListItem shared component (#115384)
* DashListItem: Add DashListItem shared component and shared with DashList and RecentlyViewedDashboards
2025-12-17 09:40:04 -05:00
Peter Štibraný
8a160a8ca1 Convert unique keys in 3 tables to primary keys (#115421)
* Added a method for adding migrations that convert a unique key to a primary key.

Based on the existing migrations for the `file` table (in `db_file_storage.go`).

* Added better default migration names. Added ability to override migration name.

* Use ConvertUniqueKeyToPrimaryKey for cloud_migration_snapshot_partition table.

* Convert resource_version UQE to PK.

* Convert secret_encrypted_value UQE to PK.

* Removed extra test.

* Removed testdata.

* Remove support for renaming migrations for now. We can bring it in later, when we want to convert existing migrations for file, file_meta and setting tables.

* Revert removal of ColumnName to ease backporting, since this field is referenced from enterprise code.

* Use quoted identifiers in Postgres statement.
2025-12-17 15:37:49 +01:00
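The `ConvertUniqueKeyToPrimaryKey` migrations above boil down to dropping the unique constraint and promoting its columns to a primary key. A hedged Go sketch of the Postgres statements such a migration has to issue; the helper, table, constraint, and column names are illustrative, not the actual Grafana migrator code. Identifiers are double-quoted, matching the "use quoted identifiers in Postgres" bullet.

```go
package main

import (
	"fmt"
	"strings"
)

// convertUniqueToPrimary returns hypothetical Postgres DDL that replaces a
// unique constraint with a primary key over the same columns. Real
// migrations also need the columns to be NOT NULL; that is omitted here.
func convertUniqueToPrimary(table, uniqueConstraint string, cols []string) []string {
	quoted := make([]string, len(cols))
	for i, c := range cols {
		// %q double-quotes the identifier, e.g. group -> "group"
		quoted[i] = fmt.Sprintf("%q", c)
	}
	return []string{
		fmt.Sprintf(`ALTER TABLE %q DROP CONSTRAINT %q;`, table, uniqueConstraint),
		fmt.Sprintf(`ALTER TABLE %q ADD PRIMARY KEY (%s);`, table, strings.Join(quoted, ", ")),
	}
}

func main() {
	for _, s := range convertUniqueToPrimary("resource_version", "uqe_resource_version", []string{"group", "resource"}) {
		fmt.Println(s)
	}
}
```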
Todd Treece
33e53db53a Plugins: Add tracing to pipeline (#115448) 2025-12-17 09:08:17 -05:00
Victor Cinaglia
af85563527 ServiceAccounts: Fix token expiration display & show date on hover (#115449) 2025-12-17 10:22:27 -03:00
Victor Marin
a1a665e26b Dashboards: Add values recommendations support for AdHocFilters and GroupBy variables (#114849)
* drilldown recommendations

* cleanup + tests

* refactor

* canary scenes

* update type

* canary scenes

* refactor types

* refactor

* do not pass userId

* canary scenes

* canary scenes

* bump scenes

* export recommendation type
2025-12-17 14:52:59 +02:00
Mustafa Sencer Özcan
40976bb1e4 fix: preserve the order when migrating the playlists (#115485)
fix: preserve the order
2025-12-17 13:48:19 +01:00
Victor Cinaglia
fe49ae05c0 Auth: Disable login prompt option for Google OAuth when "use_refresh_token" is enabled (#115367)
* Auth: Google OAuth consent prompt takes precedence when use_refresh_token is true

* Auth: Disable login prompt option for Google OAuth when use_refresh_token is true

* yarn run prettier:check --write

* feedback: validate login prompt when use_refresh_token is true
2025-12-17 09:03:29 -03:00
Ryan McKinley
d02b2a35cd Provisioning: Ignore dashboard change warning after save (#115401) 2025-12-17 10:17:57 +00:00
Kevin Minehart
e4202db28f CI: enable branch cleanup workflow (#115470)
enable branch cleanup workflow
2025-12-17 10:44:06 +01:00
Oleg Zaytsev
015219e49f Logs Panel: Integrate client-side search with Popover Menu (#114653)
* Explore: Add custom text highlighting to logs panel

Add ability to select text in log lines and highlight all occurrences
with persistent colors. Highlights are stored in URL state and cycle
through the theme's visualization palette.

- Add CustomHighlight type to ExploreLogsPanelState
- Implement LogListHighlightContext for state management
- Generate custom highlight grammar using Prism.js tokens
- Add "Highlight occurrences" option to popover menu
- Add "Reset highlights" control when highlights exist
- Fix pruneObject to preserve colorIndex: 0 in URL state

* Fix CI failures: formatting and i18n extraction

- Run prettier on LogLine.tsx
- Run i18n-extract to update translation strings

* Fix lint errors

- Use theme.shape.radius.default instead of literal '2px' in LogLine.tsx
- Remove unnecessary type assertion in grammar.ts

* Fix TypeScript error in grammar.ts

Use Record<string, GrammarValue> type for dynamic grammar object to allow string indexing without type assertions.

* Replace hardcoded HIGHLIGHT_COLOR_COUNT with actual theme palette length

Use useTheme2() hook to dynamically get the palette length instead of
hardcoding it to 50. This ensures the color cycling works correctly
regardless of the actual theme palette size.

* Backtrack to a stable point and revert changes

* Implement using search

* New translations

* LogListSearch: refactor search state

* PopoverMenu: add divider

* LogLine: remove padding and update border radius

* LogListSearch: add missing tooltips

* Refactor keybindings

* More cleanup

* LogListSearch: don't autoscroll with filterLogs

---------

Co-authored-by: Matias Chomicki <matyax@gmail.com>
2025-12-17 10:43:50 +01:00
Matias Chomicki
1b9e0fae8d Logs: more analytics (#115330)
* Logs: more analytics

* LogLineDetails: collect fields data

* analytics: report length count

* Prettier

* LogLineDetailsHeader: track details mode toggle
2025-12-17 10:25:28 +01:00
Ashley Harrison
fc4c699d85 Chore: More backwards compatible changes needed for react 19 (#115422)
backwards compatible changes needed for react 19
2025-12-17 09:21:39 +00:00
Rafael Bortolon Paulovic
aa3b9dc4da Unified: Run resource data migrations at startup (#114857)
* chore: uncomment unified migration

* chore: adapt and fix tests

* chore: dynamically bump max conns if needed during migration

* chore: copilot suggestions

* chore: pass ctx in RegisterMigration

* chore: make playlists opt-out and dashboards opt-in

* chore: adjust dashboard test

* chore: disable enable log in test

* chore: address review comments

- do not use pointer config
- add migration registry

* chore: more consistent naming

* chore: fix playlist discovery test
2025-12-17 10:09:57 +01:00
251 changed files with 7849 additions and 6378 deletions

View File

@@ -365,7 +365,9 @@
 "type": "changedfiles",
 "matches": [
 "public/app/plugins/panel/gauge/**/*",
-"/packages/grafana-ui/src/components/Gauge/**/*"
+"public/app/plugins/panel/radialbar/**/*",
+"/packages/grafana-ui/src/components/Gauge/**/*",
+"/packages/grafana-ui/src/components/RadialGauge/**/*"
 ],
 "action": "updateLabel",
 "addLabel": "area/panel/gauge"

View File

@@ -14,5 +14,5 @@ jobs:
 - uses: actions/checkout@v5
 - uses: grafana/shared-workflows/actions/cleanup-branches@cleanup-branches/v0.2.1
   with:
-    dry-run: true
+    dry-run: false
     max-date: "1 month ago"

View File

@@ -165,6 +165,7 @@ require (
 github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1 // indirect
 github.com/grafana/grafana/apps/provisioning v0.0.0 // indirect
 github.com/grafana/grafana/pkg/apiserver v0.0.0 // indirect
+github.com/grafana/grafana/pkg/semconv v0.0.0-20250804150913-990f1c69ecc2 // indirect
 github.com/grafana/otel-profiling-go v0.5.1 // indirect
 github.com/grafana/pyroscope-go/godeltaprof v0.1.9 // indirect
 github.com/grafana/sqlds/v4 v4.2.7 // indirect

View File

@@ -1,6 +1,9 @@
 package v0alpha1
 
+TemplateKind: *"grafana" | "mimir"
+
 TemplateGroupSpec: {
 	title:   string
 	content: string
+	kind:    TemplateKind
 }

View File

@@ -2,13 +2,24 @@
 package v0alpha1
 
+// +k8s:openapi-gen=true
+type TemplateGroupTemplateKind string
+
+const (
+	TemplateGroupTemplateKindGrafana TemplateGroupTemplateKind = "grafana"
+	TemplateGroupTemplateKindMimir   TemplateGroupTemplateKind = "mimir"
+)
+
 // +k8s:openapi-gen=true
 type TemplateGroupSpec struct {
-	Title   string `json:"title"`
-	Content string `json:"content"`
+	Title   string                    `json:"title"`
+	Content string                    `json:"content"`
+	Kind    TemplateGroupTemplateKind `json:"kind"`
 }
 
 // NewTemplateGroupSpec creates a new TemplateGroupSpec object.
 func NewTemplateGroupSpec() *TemplateGroupSpec {
-	return &TemplateGroupSpec{}
+	return &TemplateGroupSpec{
+		Kind: TemplateGroupTemplateKindGrafana,
+	}
 }
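Assuming the generated types in the diff above, the constructor change means a freshly created spec now defaults to the Grafana kind. A minimal, self-contained reproduction:

```go
package main

import "fmt"

// Types reproduced from the generated v0alpha1 diff above.
type TemplateGroupTemplateKind string

const (
	TemplateGroupTemplateKindGrafana TemplateGroupTemplateKind = "grafana"
	TemplateGroupTemplateKindMimir   TemplateGroupTemplateKind = "mimir"
)

type TemplateGroupSpec struct {
	Title   string                    `json:"title"`
	Content string                    `json:"content"`
	Kind    TemplateGroupTemplateKind `json:"kind"`
}

// NewTemplateGroupSpec creates a new TemplateGroupSpec object,
// defaulting Kind to "grafana" as in the diff.
func NewTemplateGroupSpec() *TemplateGroupSpec {
	return &TemplateGroupSpec{
		Kind: TemplateGroupTemplateKindGrafana,
	}
}

func main() {
	spec := NewTemplateGroupSpec()
	fmt.Println(spec.Kind) // grafana
}
```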

View File

@@ -26,7 +26,7 @@ var (
rawSchemaRoutingTreev0alpha1 = []byte(`{"Matcher":{"additionalProperties":false,"properties":{"label":{"type":"string"},"type":{"enum":["=","!=","=~","!~"],"type":"string"},"value":{"type":"string"}},"required":["type","label","value"],"type":"object"},"Route":{"additionalProperties":false,"properties":{"active_time_intervals":{"items":{"type":"string"},"type":"array"},"continue":{"type":"boolean"},"group_by":{"items":{"type":"string"},"type":"array"},"group_interval":{"type":"string"},"group_wait":{"type":"string"},"matchers":{"items":{"$ref":"#/components/schemas/Matcher"},"type":"array"},"mute_time_intervals":{"items":{"type":"string"},"type":"array"},"receiver":{"type":"string"},"repeat_interval":{"type":"string"},"routes":{"items":{"$ref":"#/components/schemas/Route"},"type":"array"}},"required":["continue"],"type":"object"},"RouteDefaults":{"additionalProperties":false,"properties":{"group_by":{"items":{"type":"string"},"type":"array"},"group_interval":{"type":"string"},"group_wait":{"type":"string"},"receiver":{"type":"string"},"repeat_interval":{"type":"string"}},"required":["receiver"],"type":"object"},"RoutingTree":{"properties":{"spec":{"$ref":"#/components/schemas/spec"}},"required":["spec"]},"spec":{"additionalProperties":false,"properties":{"defaults":{"$ref":"#/components/schemas/RouteDefaults"},"routes":{"items":{"$ref":"#/components/schemas/Route"},"type":"array"}},"required":["defaults","routes"],"type":"object"}}`)
versionSchemaRoutingTreev0alpha1 app.VersionSchema
_ = json.Unmarshal(rawSchemaRoutingTreev0alpha1, &versionSchemaRoutingTreev0alpha1)
-	rawSchemaTemplateGroupv0alpha1 = []byte(`{"TemplateGroup":{"properties":{"spec":{"$ref":"#/components/schemas/spec"}},"required":["spec"]},"spec":{"additionalProperties":false,"properties":{"content":{"type":"string"},"title":{"type":"string"}},"required":["title","content"],"type":"object"}}`)
+	rawSchemaTemplateGroupv0alpha1 = []byte(`{"TemplateGroup":{"properties":{"spec":{"$ref":"#/components/schemas/spec"}},"required":["spec"]},"TemplateKind":{"enum":["grafana","mimir"],"type":"string"},"spec":{"additionalProperties":false,"properties":{"content":{"type":"string"},"kind":{"$ref":"#/components/schemas/TemplateKind","default":"grafana"},"title":{"type":"string"}},"required":["title","content","kind"],"type":"object"}}`)
versionSchemaTemplateGroupv0alpha1 app.VersionSchema
_ = json.Unmarshal(rawSchemaTemplateGroupv0alpha1, &versionSchemaTemplateGroupv0alpha1)
rawSchemaTimeIntervalv0alpha1 = []byte(`{"Interval":{"additionalProperties":false,"properties":{"days_of_month":{"items":{"type":"string"},"type":"array"},"location":{"type":"string"},"months":{"items":{"type":"string"},"type":"array"},"times":{"items":{"$ref":"#/components/schemas/TimeRange"},"type":"array"},"weekdays":{"items":{"type":"string"},"type":"array"},"years":{"items":{"type":"string"},"type":"array"}},"type":"object"},"TimeInterval":{"properties":{"spec":{"$ref":"#/components/schemas/spec"}},"required":["spec"]},"TimeRange":{"additionalProperties":false,"properties":{"end_time":{"type":"string"},"start_time":{"type":"string"}},"required":["start_time","end_time"],"type":"object"},"spec":{"additionalProperties":false,"properties":{"name":{"type":"string"},"time_intervals":{"items":{"$ref":"#/components/schemas/Interval"},"type":"array"}},"required":["name","time_intervals"],"type":"object"}}`)

View File

@@ -0,0 +1,864 @@
{
"kind": "DashboardWithAccessInfo",
"apiVersion": "dashboard.grafana.app/v2beta1",
"metadata": {
"name": "value-mapping-test",
"namespace": "default",
"uid": "value-mapping-test",
"resourceVersion": "1765384157199094",
"generation": 2,
"creationTimestamp": "2025-11-19T20:09:28Z",
"labels": {
"grafana.app/deprecatedInternalID": "646372978987008"
}
},
"spec": {
"annotations": [
{
"kind": "AnnotationQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "grafana",
"version": "v0",
"datasource": {
"name": "-- Grafana --"
},
"spec": {}
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations \u0026 Alerts",
"builtIn": true,
"legacyOptions": {
"type": "dashboard"
}
}
}
],
"cursorSync": "Off",
"description": "Test dashboard for all value mapping types and override matcher types",
"editable": true,
"elements": {
"panel-1": {
"kind": "Panel",
"spec": {
"id": 1,
"title": "ValueMap Example",
"description": "Panel with ValueMap mapping type - maps specific text values to colors and display text",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "prometheus",
"version": "v0",
"datasource": {
"name": "prometheus-uid"
},
"spec": {
"expr": "up"
}
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "VizConfig",
"group": "stat",
"version": "",
"spec": {
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "value",
"options": {
"critical": {
"text": "Critical!",
"color": "red",
"index": 0
},
"ok": {
"text": "OK",
"color": "green",
"index": 2
},
"warning": {
"text": "Warning",
"color": "orange",
"index": 1
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "status"
},
"properties": [
{
"id": "custom.width",
"value": 100
},
{
"id": "custom.align",
"value": "center"
}
]
}
]
}
}
}
}
},
"panel-2": {
"kind": "Panel",
"spec": {
"id": 2,
"title": "RangeMap Example",
"description": "Panel with RangeMap mapping type - maps numerical ranges to colors and display text",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "prometheus",
"version": "v0",
"datasource": {
"name": "prometheus-uid"
},
"spec": {
"expr": "cpu_usage_percent"
}
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "VizConfig",
"group": "gauge",
"version": "",
"spec": {
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "range",
"options": {
"from": 0,
"to": 50,
"result": {
"text": "Low",
"color": "green",
"index": 0
}
}
},
{
"type": "range",
"options": {
"from": 50,
"to": 80,
"result": {
"text": "Medium",
"color": "orange",
"index": 1
}
}
},
{
"type": "range",
"options": {
"from": 80,
"to": 100,
"result": {
"text": "High",
"color": "red",
"index": 2
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/^cpu_/"
},
"properties": [
{
"id": "unit",
"value": "percent"
},
{
"id": "decimals",
"value": 2
}
]
}
]
}
}
}
}
},
"panel-3": {
"kind": "Panel",
"spec": {
"id": 3,
"title": "RegexMap Example",
"description": "Panel with RegexMap mapping type - maps values matching regex patterns to colors",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "prometheus",
"version": "v0",
"datasource": {
"name": "prometheus-uid"
},
"spec": {
"expr": "log_level"
}
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "VizConfig",
"group": "stat",
"version": "",
"spec": {
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "regex",
"options": {
"pattern": "/^error.*/",
"result": {
"text": "Error",
"color": "red",
"index": 0
}
}
},
{
"type": "regex",
"options": {
"pattern": "/^warn.*/",
"result": {
"text": "Warning",
"color": "orange",
"index": 1
}
}
},
{
"type": "regex",
"options": {
"pattern": "/^info.*/",
"result": {
"text": "Info",
"color": "blue",
"index": 2
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byType",
"options": "string"
},
"properties": [
{
"id": "custom.cellOptions",
"value": {
"type": "color-text"
}
}
]
}
]
}
}
}
}
},
"panel-4": {
"kind": "Panel",
"spec": {
"id": 4,
"title": "SpecialValueMap Example",
"description": "Panel with SpecialValueMap mapping type - maps special values like null, NaN, true, false to display text",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "prometheus",
"version": "v0",
"datasource": {
"name": "prometheus-uid"
},
"spec": {
"expr": "some_metric"
}
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "VizConfig",
"group": "stat",
"version": "",
"spec": {
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "special",
"options": {
"match": "null",
"result": {
"text": "No Data",
"color": "gray",
"index": 0
}
}
},
{
"type": "special",
"options": {
"match": "nan",
"result": {
"text": "Not a Number",
"color": "gray",
"index": 1
}
}
},
{
"type": "special",
"options": {
"match": "null+nan",
"result": {
"text": "N/A",
"color": "gray",
"index": 2
}
}
},
{
"type": "special",
"options": {
"match": "true",
"result": {
"text": "Yes",
"color": "green",
"index": 3
}
}
},
{
"type": "special",
"options": {
"match": "false",
"result": {
"text": "No",
"color": "red",
"index": 4
}
}
},
{
"type": "special",
"options": {
"match": "empty",
"result": {
"text": "Empty",
"color": "gray",
"index": 5
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byFrameRefID",
"options": "A"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "blue",
"mode": "fixed"
}
}
]
}
]
}
}
}
}
},
"panel-6": {
"kind": "Panel",
"spec": {
"id": 6,
"title": "Empty Properties Override Example",
"description": "Panel with override that has empty properties array - tests conversion of overrides without any property modifications",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "prometheus",
"version": "v0",
"datasource": {
"name": "prometheus-uid"
},
"spec": {
"expr": "empty_override_metric"
}
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "VizConfig",
"group": "stat",
"version": "",
"spec": {
"options": {},
"fieldConfig": {
"defaults": {},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "field_with_empty_override"
},
"properties": []
}
]
}
}
}
}
},
"panel-5": {
"kind": "Panel",
"spec": {
"id": 5,
"title": "Combined Mappings and Overrides Example",
"description": "Panel with all mapping types combined - demonstrates mixing different mapping types and multiple override matchers",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "prometheus",
"version": "v0",
"datasource": {
"name": "prometheus-uid"
},
"spec": {
"expr": "combined_metric"
}
},
"refId": "A",
"hidden": false
}
},
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "DataQuery",
"group": "prometheus",
"version": "v0",
"datasource": {
"name": "prometheus-uid"
},
"spec": {
"expr": "secondary_metric"
}
},
"refId": "B",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "VizConfig",
"group": "table",
"version": "",
"spec": {
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "value",
"options": {
"failure": {
"text": "Failure",
"color": "red",
"index": 1
},
"success": {
"text": "Success",
"color": "green",
"index": 0
}
}
},
{
"type": "range",
"options": {
"from": 0,
"to": 100,
"result": {
"text": "In Range",
"color": "blue",
"index": 2
}
}
},
{
"type": "regex",
"options": {
"pattern": "/^[A-Z]{3}-\\d+$/",
"result": {
"text": "ID Format",
"color": "purple",
"index": 3
}
}
},
{
"type": "special",
"options": {
"match": "null",
"result": {
"text": "Missing",
"color": "gray",
"index": 4
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "status"
},
"properties": [
{
"id": "custom.width",
"value": 120
},
{
"id": "custom.cellOptions",
"value": {
"type": "color-background"
}
}
]
},
{
"matcher": {
"id": "byRegexp",
"options": "/^value_/"
},
"properties": [
{
"id": "unit",
"value": "short"
},
{
"id": "min",
"value": 0
},
{
"id": "max",
"value": 100
}
]
},
{
"matcher": {
"id": "byType",
"options": "number"
},
"properties": [
{
"id": "decimals",
"value": 2
},
{
"id": "thresholds",
"value": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 50
},
{
"color": "red",
"value": 80
}
]
}
}
]
},
{
"matcher": {
"id": "byFrameRefID",
"options": "B"
},
"properties": [
{
"id": "displayName",
"value": "Secondary Query"
}
]
},
{
"matcher": {
"id": "byValue",
"options": {
"op": "gte",
"reducer": "allIsNull",
"value": 0
}
},
"properties": [
{
"id": "custom.hidden",
"value": true
}
]
}
]
}
}
}
}
}
},
"layout": {
"kind": "GridLayout",
"spec": {
"items": [
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 0,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-1"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 12,
"y": 0,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-2"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 8,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-3"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 12,
"y": 8,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-4"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 16,
"width": 24,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-5"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 24,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-6"
}
}
}
]
}
},
"links": [],
"liveNow": false,
"preload": false,
"tags": [
"value-mapping",
"overrides",
"test"
],
"timeSettings": {
"timezone": "browser",
"from": "now-6h",
"to": "now",
"autoRefresh": "",
"autoRefreshIntervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"hideTimepicker": false,
"fiscalYearStartMonth": 0
},
"title": "Value Mapping and Overrides Test",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v1beta1"
}
}
}


@@ -0,0 +1,640 @@
{
"kind": "DashboardWithAccessInfo",
"apiVersion": "dashboard.grafana.app/v0alpha1",
"metadata": {
"name": "value-mapping-test",
"namespace": "default",
"uid": "value-mapping-test",
"resourceVersion": "1765384157199094",
"generation": 2,
"creationTimestamp": "2025-11-19T20:09:28Z",
"labels": {
"grafana.app/deprecatedInternalID": "646372978987008"
}
},
"spec": {
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations \u0026 Alerts",
"type": "dashboard"
}
]
},
"description": "Test dashboard for all value mapping types and override matcher types",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"liveNow": false,
"panels": [
{
"description": "Panel with ValueMap mapping type - maps specific text values to colors and display text",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"critical": {
"color": "red",
"index": 0,
"text": "Critical!"
},
"ok": {
"color": "green",
"index": 2,
"text": "OK"
},
"warning": {
"color": "orange",
"index": 1,
"text": "Warning"
}
},
"type": "value"
}
]
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "status"
},
"properties": [
{
"id": "custom.width",
"value": 100
},
{
"id": "custom.align",
"value": "center"
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"id": 1,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "up",
"refId": "A"
}
],
"title": "ValueMap Example",
"type": "stat"
},
{
"description": "Panel with RangeMap mapping type - maps numerical ranges to colors and display text",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"from": 0,
"result": {
"color": "green",
"index": 0,
"text": "Low"
},
"to": 50
},
"type": "range"
},
{
"options": {
"from": 50,
"result": {
"color": "orange",
"index": 1,
"text": "Medium"
},
"to": 80
},
"type": "range"
},
{
"options": {
"from": 80,
"result": {
"color": "red",
"index": 2,
"text": "High"
},
"to": 100
},
"type": "range"
}
]
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/^cpu_/"
},
"properties": [
{
"id": "unit",
"value": "percent"
},
{
"id": "decimals",
"value": 2
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"id": 2,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "cpu_usage_percent",
"refId": "A"
}
],
"title": "RangeMap Example",
"type": "gauge"
},
{
"description": "Panel with RegexMap mapping type - maps values matching regex patterns to colors",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"pattern": "/^error.*/",
"result": {
"color": "red",
"index": 0,
"text": "Error"
}
},
"type": "regex"
},
{
"options": {
"pattern": "/^warn.*/",
"result": {
"color": "orange",
"index": 1,
"text": "Warning"
}
},
"type": "regex"
},
{
"options": {
"pattern": "/^info.*/",
"result": {
"color": "blue",
"index": 2,
"text": "Info"
}
},
"type": "regex"
}
]
},
"overrides": [
{
"matcher": {
"id": "byType",
"options": "string"
},
"properties": [
{
"id": "custom.cellOptions",
"value": {
"type": "color-text"
}
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"id": 3,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "log_level",
"refId": "A"
}
],
"title": "RegexMap Example",
"type": "stat"
},
{
"description": "Panel with SpecialValueMap mapping type - maps special values like null, NaN, true, false to display text",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"match": "null",
"result": {
"color": "gray",
"index": 0,
"text": "No Data"
}
},
"type": "special"
},
{
"options": {
"match": "nan",
"result": {
"color": "gray",
"index": 1,
"text": "Not a Number"
}
},
"type": "special"
},
{
"options": {
"match": "null+nan",
"result": {
"color": "gray",
"index": 2,
"text": "N/A"
}
},
"type": "special"
},
{
"options": {
"match": "true",
"result": {
"color": "green",
"index": 3,
"text": "Yes"
}
},
"type": "special"
},
{
"options": {
"match": "false",
"result": {
"color": "red",
"index": 4,
"text": "No"
}
},
"type": "special"
},
{
"options": {
"match": "empty",
"result": {
"color": "gray",
"index": 5,
"text": "Empty"
}
},
"type": "special"
}
]
},
"overrides": [
{
"matcher": {
"id": "byFrameRefID",
"options": "A"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "blue",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"id": 4,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "some_metric",
"refId": "A"
}
],
"title": "SpecialValueMap Example",
"type": "stat"
},
{
"description": "Panel with all mapping types combined - demonstrates mixing different mapping types and multiple override matchers",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"failure": {
"color": "red",
"index": 1,
"text": "Failure"
},
"success": {
"color": "green",
"index": 0,
"text": "Success"
}
},
"type": "value"
},
{
"options": {
"from": 0,
"result": {
"color": "blue",
"index": 2,
"text": "In Range"
},
"to": 100
},
"type": "range"
},
{
"options": {
"pattern": "/^[A-Z]{3}-\\d+$/",
"result": {
"color": "purple",
"index": 3,
"text": "ID Format"
}
},
"type": "regex"
},
{
"options": {
"match": "null",
"result": {
"color": "gray",
"index": 4,
"text": "Missing"
}
},
"type": "special"
}
]
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "status"
},
"properties": [
{
"id": "custom.width",
"value": 120
},
{
"id": "custom.cellOptions",
"value": {
"type": "color-background"
}
}
]
},
{
"matcher": {
"id": "byRegexp",
"options": "/^value_/"
},
"properties": [
{
"id": "unit",
"value": "short"
},
{
"id": "min",
"value": 0
},
{
"id": "max",
"value": 100
}
]
},
{
"matcher": {
"id": "byType",
"options": "number"
},
"properties": [
{
"id": "decimals",
"value": 2
},
{
"id": "thresholds",
"value": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 50
},
{
"color": "red",
"value": 80
}
]
}
}
]
},
{
"matcher": {
"id": "byFrameRefID",
"options": "B"
},
"properties": [
{
"id": "displayName",
"value": "Secondary Query"
}
]
},
{
"matcher": {
"id": "byValue",
"options": {
"op": "gte",
"reducer": "allIsNull",
"value": 0
}
},
"properties": [
{
"id": "custom.hidden",
"value": true
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 16
},
"id": 5,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "combined_metric",
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "secondary_metric",
"refId": "B"
}
],
"title": "Combined Mappings and Overrides Example",
"type": "table"
},
{
"description": "Panel with override that has empty properties array - tests conversion of overrides without any property modifications",
"fieldConfig": {
"overrides": [
{
"matcher": {
"id": "byName",
"options": "field_with_empty_override"
},
"properties": []
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"id": 6,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "empty_override_metric",
"refId": "A"
}
],
"title": "Empty Properties Override Example",
"type": "stat"
}
],
"preload": false,
"refresh": "",
"schemaVersion": 42,
"tags": [
"value-mapping",
"overrides",
"test"
],
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "browser",
"title": "Value Mapping and Overrides Test"
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v2beta1"
}
}
}


@@ -0,0 +1,640 @@
{
"kind": "DashboardWithAccessInfo",
"apiVersion": "dashboard.grafana.app/v1beta1",
"metadata": {
"name": "value-mapping-test",
"namespace": "default",
"uid": "value-mapping-test",
"resourceVersion": "1765384157199094",
"generation": 2,
"creationTimestamp": "2025-11-19T20:09:28Z",
"labels": {
"grafana.app/deprecatedInternalID": "646372978987008"
}
},
"spec": {
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations \u0026 Alerts",
"type": "dashboard"
}
]
},
"description": "Test dashboard for all value mapping types and override matcher types",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"liveNow": false,
"panels": [
{
"description": "Panel with ValueMap mapping type - maps specific text values to colors and display text",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"critical": {
"color": "red",
"index": 0,
"text": "Critical!"
},
"ok": {
"color": "green",
"index": 2,
"text": "OK"
},
"warning": {
"color": "orange",
"index": 1,
"text": "Warning"
}
},
"type": "value"
}
]
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "status"
},
"properties": [
{
"id": "custom.width",
"value": 100
},
{
"id": "custom.align",
"value": "center"
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 0
},
"id": 1,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "up",
"refId": "A"
}
],
"title": "ValueMap Example",
"type": "stat"
},
{
"description": "Panel with RangeMap mapping type - maps numerical ranges to colors and display text",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"from": 0,
"result": {
"color": "green",
"index": 0,
"text": "Low"
},
"to": 50
},
"type": "range"
},
{
"options": {
"from": 50,
"result": {
"color": "orange",
"index": 1,
"text": "Medium"
},
"to": 80
},
"type": "range"
},
{
"options": {
"from": 80,
"result": {
"color": "red",
"index": 2,
"text": "High"
},
"to": 100
},
"type": "range"
}
]
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/^cpu_/"
},
"properties": [
{
"id": "unit",
"value": "percent"
},
{
"id": "decimals",
"value": 2
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 0
},
"id": 2,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "cpu_usage_percent",
"refId": "A"
}
],
"title": "RangeMap Example",
"type": "gauge"
},
{
"description": "Panel with RegexMap mapping type - maps values matching regex patterns to colors",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"pattern": "/^error.*/",
"result": {
"color": "red",
"index": 0,
"text": "Error"
}
},
"type": "regex"
},
{
"options": {
"pattern": "/^warn.*/",
"result": {
"color": "orange",
"index": 1,
"text": "Warning"
}
},
"type": "regex"
},
{
"options": {
"pattern": "/^info.*/",
"result": {
"color": "blue",
"index": 2,
"text": "Info"
}
},
"type": "regex"
}
]
},
"overrides": [
{
"matcher": {
"id": "byType",
"options": "string"
},
"properties": [
{
"id": "custom.cellOptions",
"value": {
"type": "color-text"
}
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 8
},
"id": 3,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "log_level",
"refId": "A"
}
],
"title": "RegexMap Example",
"type": "stat"
},
{
"description": "Panel with SpecialValueMap mapping type - maps special values like null, NaN, true, false to display text",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"match": "null",
"result": {
"color": "gray",
"index": 0,
"text": "No Data"
}
},
"type": "special"
},
{
"options": {
"match": "nan",
"result": {
"color": "gray",
"index": 1,
"text": "Not a Number"
}
},
"type": "special"
},
{
"options": {
"match": "null+nan",
"result": {
"color": "gray",
"index": 2,
"text": "N/A"
}
},
"type": "special"
},
{
"options": {
"match": "true",
"result": {
"color": "green",
"index": 3,
"text": "Yes"
}
},
"type": "special"
},
{
"options": {
"match": "false",
"result": {
"color": "red",
"index": 4,
"text": "No"
}
},
"type": "special"
},
{
"options": {
"match": "empty",
"result": {
"color": "gray",
"index": 5,
"text": "Empty"
}
},
"type": "special"
}
]
},
"overrides": [
{
"matcher": {
"id": "byFrameRefID",
"options": "A"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "blue",
"mode": "fixed"
}
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 8
},
"id": 4,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "some_metric",
"refId": "A"
}
],
"title": "SpecialValueMap Example",
"type": "stat"
},
{
"description": "Panel with all mapping types combined - demonstrates mixing different mapping types and multiple override matchers",
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"failure": {
"color": "red",
"index": 1,
"text": "Failure"
},
"success": {
"color": "green",
"index": 0,
"text": "Success"
}
},
"type": "value"
},
{
"options": {
"from": 0,
"result": {
"color": "blue",
"index": 2,
"text": "In Range"
},
"to": 100
},
"type": "range"
},
{
"options": {
"pattern": "/^[A-Z]{3}-\\d+$/",
"result": {
"color": "purple",
"index": 3,
"text": "ID Format"
}
},
"type": "regex"
},
{
"options": {
"match": "null",
"result": {
"color": "gray",
"index": 4,
"text": "Missing"
}
},
"type": "special"
}
]
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "status"
},
"properties": [
{
"id": "custom.width",
"value": 120
},
{
"id": "custom.cellOptions",
"value": {
"type": "color-background"
}
}
]
},
{
"matcher": {
"id": "byRegexp",
"options": "/^value_/"
},
"properties": [
{
"id": "unit",
"value": "short"
},
{
"id": "min",
"value": 0
},
{
"id": "max",
"value": 100
}
]
},
{
"matcher": {
"id": "byType",
"options": "number"
},
"properties": [
{
"id": "decimals",
"value": 2
},
{
"id": "thresholds",
"value": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 50
},
{
"color": "red",
"value": 80
}
]
}
}
]
},
{
"matcher": {
"id": "byFrameRefID",
"options": "B"
},
"properties": [
{
"id": "displayName",
"value": "Secondary Query"
}
]
},
{
"matcher": {
"id": "byValue",
"options": {
"op": "gte",
"reducer": "allIsNull",
"value": 0
}
},
"properties": [
{
"id": "custom.hidden",
"value": true
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 16
},
"id": 5,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "combined_metric",
"refId": "A"
},
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "secondary_metric",
"refId": "B"
}
],
"title": "Combined Mappings and Overrides Example",
"type": "table"
},
{
"description": "Panel with override that has empty properties array - tests conversion of overrides without any property modifications",
"fieldConfig": {
"overrides": [
{
"matcher": {
"id": "byName",
"options": "field_with_empty_override"
},
"properties": []
}
]
},
"gridPos": {
"h": 8,
"w": 12,
"x": 0,
"y": 24
},
"id": 6,
"options": {},
"pluginVersion": "",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"expr": "empty_override_metric",
"refId": "A"
}
],
"title": "Empty Properties Override Example",
"type": "stat"
}
],
"preload": false,
"refresh": "",
"schemaVersion": 42,
"tags": [
"value-mapping",
"overrides",
"test"
],
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
]
},
"timezone": "browser",
"title": "Value Mapping and Overrides Test"
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v2beta1"
}
}
}


@@ -0,0 +1,850 @@
{
"kind": "DashboardWithAccessInfo",
"apiVersion": "dashboard.grafana.app/v2alpha1",
"metadata": {
"name": "value-mapping-test",
"namespace": "default",
"uid": "value-mapping-test",
"resourceVersion": "1765384157199094",
"generation": 2,
"creationTimestamp": "2025-11-19T20:09:28Z",
"labels": {
"grafana.app/deprecatedInternalID": "646372978987008"
}
},
"spec": {
"annotations": [
{
"kind": "AnnotationQuery",
"spec": {
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"query": {
"kind": "grafana",
"spec": {}
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations \u0026 Alerts",
"builtIn": true,
"legacyOptions": {
"type": "dashboard"
}
}
}
],
"cursorSync": "Off",
"description": "Test dashboard for all value mapping types and override matcher types",
"editable": true,
"elements": {
"panel-1": {
"kind": "Panel",
"spec": {
"id": 1,
"title": "ValueMap Example",
"description": "Panel with ValueMap mapping type - maps specific text values to colors and display text",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {
"expr": "up"
}
},
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "stat",
"spec": {
"pluginVersion": "",
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "value",
"options": {
"critical": {
"text": "Critical!",
"color": "red",
"index": 0
},
"ok": {
"text": "OK",
"color": "green",
"index": 2
},
"warning": {
"text": "Warning",
"color": "orange",
"index": 1
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "status"
},
"properties": [
{
"id": "custom.width",
"value": 100
},
{
"id": "custom.align",
"value": "center"
}
]
}
]
}
}
}
}
},
"panel-2": {
"kind": "Panel",
"spec": {
"id": 2,
"title": "RangeMap Example",
"description": "Panel with RangeMap mapping type - maps numerical ranges to colors and display text",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {
"expr": "cpu_usage_percent"
}
},
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "gauge",
"spec": {
"pluginVersion": "",
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "range",
"options": {
"from": 0,
"to": 50,
"result": {
"text": "Low",
"color": "green",
"index": 0
}
}
},
{
"type": "range",
"options": {
"from": 50,
"to": 80,
"result": {
"text": "Medium",
"color": "orange",
"index": 1
}
}
},
{
"type": "range",
"options": {
"from": 80,
"to": 100,
"result": {
"text": "High",
"color": "red",
"index": 2
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byRegexp",
"options": "/^cpu_/"
},
"properties": [
{
"id": "unit",
"value": "percent"
},
{
"id": "decimals",
"value": 2
}
]
}
]
}
}
}
}
},
"panel-3": {
"kind": "Panel",
"spec": {
"id": 3,
"title": "RegexMap Example",
"description": "Panel with RegexMap mapping type - maps values matching regex patterns to colors",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {
"expr": "log_level"
}
},
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "stat",
"spec": {
"pluginVersion": "",
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "regex",
"options": {
"pattern": "/^error.*/",
"result": {
"text": "Error",
"color": "red",
"index": 0
}
}
},
{
"type": "regex",
"options": {
"pattern": "/^warn.*/",
"result": {
"text": "Warning",
"color": "orange",
"index": 1
}
}
},
{
"type": "regex",
"options": {
"pattern": "/^info.*/",
"result": {
"text": "Info",
"color": "blue",
"index": 2
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byType",
"options": "string"
},
"properties": [
{
"id": "custom.cellOptions",
"value": {
"type": "color-text"
}
}
]
}
]
}
}
}
}
},
"panel-4": {
"kind": "Panel",
"spec": {
"id": 4,
"title": "SpecialValueMap Example",
"description": "Panel with SpecialValueMap mapping type - maps special values like null, NaN, true, false to display text",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {
"expr": "some_metric"
}
},
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "stat",
"spec": {
"pluginVersion": "",
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "special",
"options": {
"match": "null",
"result": {
"text": "No Data",
"color": "gray",
"index": 0
}
}
},
{
"type": "special",
"options": {
"match": "nan",
"result": {
"text": "Not a Number",
"color": "gray",
"index": 1
}
}
},
{
"type": "special",
"options": {
"match": "null+nan",
"result": {
"text": "N/A",
"color": "gray",
"index": 2
}
}
},
{
"type": "special",
"options": {
"match": "true",
"result": {
"text": "Yes",
"color": "green",
"index": 3
}
}
},
{
"type": "special",
"options": {
"match": "false",
"result": {
"text": "No",
"color": "red",
"index": 4
}
}
},
{
"type": "special",
"options": {
"match": "empty",
"result": {
"text": "Empty",
"color": "gray",
"index": 5
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byFrameRefID",
"options": "A"
},
"properties": [
{
"id": "color",
"value": {
"fixedColor": "blue",
"mode": "fixed"
}
}
]
}
]
}
}
}
}
},
"panel-5": {
"kind": "Panel",
"spec": {
"id": 5,
"title": "Combined Mappings and Overrides Example",
"description": "Panel with all mapping types combined - demonstrates mixing different mapping types and multiple override matchers",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {
"expr": "combined_metric"
}
},
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"refId": "A",
"hidden": false
}
},
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {
"expr": "secondary_metric"
}
},
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"refId": "B",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "table",
"spec": {
"pluginVersion": "",
"options": {},
"fieldConfig": {
"defaults": {
"mappings": [
{
"type": "value",
"options": {
"failure": {
"text": "Failure",
"color": "red",
"index": 1
},
"success": {
"text": "Success",
"color": "green",
"index": 0
}
}
},
{
"type": "range",
"options": {
"from": 0,
"to": 100,
"result": {
"text": "In Range",
"color": "blue",
"index": 2
}
}
},
{
"type": "regex",
"options": {
"pattern": "/^[A-Z]{3}-\\d+$/",
"result": {
"text": "ID Format",
"color": "purple",
"index": 3
}
}
},
{
"type": "special",
"options": {
"match": "null",
"result": {
"text": "Missing",
"color": "gray",
"index": 4
}
}
}
]
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "status"
},
"properties": [
{
"id": "custom.width",
"value": 120
},
{
"id": "custom.cellOptions",
"value": {
"type": "color-background"
}
}
]
},
{
"matcher": {
"id": "byRegexp",
"options": "/^value_/"
},
"properties": [
{
"id": "unit",
"value": "short"
},
{
"id": "min",
"value": 0
},
{
"id": "max",
"value": 100
}
]
},
{
"matcher": {
"id": "byType",
"options": "number"
},
"properties": [
{
"id": "decimals",
"value": 2
},
{
"id": "thresholds",
"value": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 50
},
{
"color": "red",
"value": 80
}
]
}
}
]
},
{
"matcher": {
"id": "byFrameRefID",
"options": "B"
},
"properties": [
{
"id": "displayName",
"value": "Secondary Query"
}
]
},
{
"matcher": {
"id": "byValue",
"options": {
"op": "gte",
"reducer": "allIsNull",
"value": 0
}
},
"properties": [
{
"id": "custom.hidden",
"value": true
}
]
}
]
}
}
}
}
},
"panel-6": {
"kind": "Panel",
"spec": {
"id": 6,
"title": "Empty Properties Override Example",
"description": "Panel with override that has empty properties array - tests conversion of overrides without any property modifications",
"links": [],
"data": {
"kind": "QueryGroup",
"spec": {
"queries": [
{
"kind": "PanelQuery",
"spec": {
"query": {
"kind": "prometheus",
"spec": {
"expr": "empty_override_metric"
}
},
"datasource": {
"type": "prometheus",
"uid": "prometheus-uid"
},
"refId": "A",
"hidden": false
}
}
],
"transformations": [],
"queryOptions": {}
}
},
"vizConfig": {
"kind": "stat",
"spec": {
"pluginVersion": "",
"options": {},
"fieldConfig": {
"defaults": {},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "field_with_empty_override"
},
"properties": []
}
]
}
}
}
}
}
},
"layout": {
"kind": "GridLayout",
"spec": {
"items": [
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 0,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-1"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 12,
"y": 0,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-2"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 8,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-3"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 12,
"y": 8,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-4"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 16,
"width": 24,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-5"
}
}
},
{
"kind": "GridLayoutItem",
"spec": {
"x": 0,
"y": 24,
"width": 12,
"height": 8,
"element": {
"kind": "ElementReference",
"name": "panel-6"
}
}
}
]
}
},
"links": [],
"liveNow": false,
"preload": false,
"tags": [
"value-mapping",
"overrides",
"test"
],
"timeSettings": {
"timezone": "browser",
"from": "now-6h",
"to": "now",
"autoRefresh": "",
"autoRefreshIntervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"hideTimepicker": false,
"fiscalYearStartMonth": 0
},
"title": "Value Mapping and Overrides Test",
"variables": []
},
"status": {
"conversion": {
"failed": false,
"storedVersion": "v2beta1"
}
}
}


@@ -1973,16 +1973,16 @@ func convertFieldConfigOverridesToV1(overrides []dashv2alpha1.DashboardV2alpha1F
"options": override.Matcher.Options,
}
properties := make([]map[string]interface{}, 0, len(override.Properties))
if len(override.Properties) > 0 {
properties := make([]map[string]interface{}, 0, len(override.Properties))
for _, prop := range override.Properties {
properties = append(properties, map[string]interface{}{
"id": prop.Id,
"value": prop.Value,
})
}
overrideMap["properties"] = properties
}
overrideMap["properties"] = properties
result = append(result, overrideMap)
}
@@ -2074,11 +2074,9 @@ func convertRegexMapToV1(regexMap *dashv2alpha1.DashboardRegexMap) map[string]in
 		return nil
 	}
-	options := []map[string]interface{}{
-		{
-			"pattern": regexMap.Options.Pattern,
-			"result": convertValueMappingResultToV1(regexMap.Options.Result),
-		},
+	options := map[string]interface{}{
+		"pattern": regexMap.Options.Pattern,
+		"result": convertValueMappingResultToV1(regexMap.Options.Result),
 	}
 	return map[string]interface{}{


@@ -24,6 +24,7 @@ require (
require (
cel.dev/expr v0.25.1 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/ProtonMail/go-crypto v1.1.6 // indirect
github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
github.com/apache/arrow-go/v18 v18.4.1 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
@@ -35,16 +36,21 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.8 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 // indirect
github.com/aws/smithy-go v1.23.1 // indirect
github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver v3.5.1+incompatible // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/bluele/gcache v0.0.2 // indirect
github.com/bradfitz/gomemcache v0.0.0-20250403215159-8d39553ac7cf // indirect
github.com/bwmarrin/snowflake v0.3.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cheekybits/genny v1.0.0 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.6.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/diegoholiveira/jsonlogic/v3 v3.7.4 // indirect
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
@@ -94,6 +100,7 @@ require (
github.com/grafana/grafana-plugin-sdk-go v0.284.0 // indirect
github.com/grafana/grafana/pkg/apimachinery v0.0.0 // indirect
github.com/grafana/grafana/pkg/apiserver v0.0.0 // indirect
github.com/grafana/grafana/pkg/semconv v0.0.0-20250804150913-990f1c69ecc2 // indirect
github.com/grafana/otel-profiling-go v0.5.1 // indirect
github.com/grafana/pyroscope-go/godeltaprof v0.1.9 // indirect
github.com/grafana/sqlds/v4 v4.2.7 // indirect
@@ -142,11 +149,15 @@ require (
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/nikunjy/rules v1.5.0 // indirect
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 // indirect
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 // indirect
github.com/oklog/run v1.1.0 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/olekukonko/tablewriter v0.0.5 // indirect
github.com/open-feature/go-sdk v1.16.0 // indirect
github.com/open-feature/go-sdk-contrib/providers/go-feature-flag v0.2.6 // indirect
github.com/open-feature/go-sdk-contrib/providers/ofrep v0.1.6 // indirect
github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
github.com/perimeterx/marshmallow v1.1.5 // indirect
github.com/pierrec/lz4/v4 v4.1.22 // indirect
@@ -165,6 +176,7 @@ require (
github.com/spf13/pflag v1.0.10 // indirect
github.com/stoewer/go-strcase v1.3.1 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/thomaspoignant/go-feature-flag v1.42.0 // indirect
github.com/tjhop/slog-gokit v0.1.5 // indirect
github.com/woodsbury/decimal128 v1.4.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
@@ -179,6 +191,7 @@ require (
go.opentelemetry.io/contrib/propagators/jaeger v1.38.0 // indirect
go.opentelemetry.io/contrib/samplers/jaegerremote v0.32.0 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/exporters/jaeger v1.17.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.39.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.39.0 // indirect
@@ -186,6 +199,8 @@ require (
go.opentelemetry.io/otel/sdk v1.39.0 // indirect
go.opentelemetry.io/otel/trace v1.39.0 // indirect
go.opentelemetry.io/proto/otlp v1.9.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/mock v0.6.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.1 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect


@@ -4,9 +4,13 @@ cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMT
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cqUQ3I=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/ProtonMail/go-crypto v1.1.6 h1:ZcV+Ropw6Qn0AX9brlQLAUXfqLBc7Bl+f/DmNxpLfdw=
github.com/ProtonMail/go-crypto v1.1.6/go.mod h1:rA3QumHc/FZ8pAHreoekgiAbzpNsfQAosU5td4SnOrE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
@@ -38,12 +42,18 @@ github.com/aws/aws-sdk-go-v2/service/sts v1.38.5 h1:+LVB0xBqEgjQoqr9bGZbRzvg212B
github.com/aws/aws-sdk-go-v2/service/sts v1.38.5/go.mod h1:xoaxeqnnUaZjPjaICgIy5B+MHCSb/ZSOn4MvkFNOUA0=
github.com/aws/smithy-go v1.23.1 h1:sLvcH6dfAFwGkHLZ7dGiYF7aK6mg4CgKA/iDKjLDt9M=
github.com/aws/smithy-go v1.23.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df h1:GSoSVRLoBaFpOOds6QyY1L8AX7uoY+Ln3BHc22W40X0=
github.com/barkimedes/go-deepcopy v0.0.0-20220514131651-17c30cfc62df/go.mod h1:hiVxq5OP2bUGBRNS3Z/bt/reCLFNbdcST6gISi1fiOM=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/blang/semver v3.5.1+incompatible h1:cQNTCjp13qL8KC3Nbxr/y2Bqb63oX6wdnnjpJbkM4JQ=
github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw=
github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0=
github.com/bradfitz/gomemcache v0.0.0-20250403215159-8d39553ac7cf h1:TqhNAT4zKbTdLa62d2HDBFdvgSbIGB3eJE8HqhgiL9I=
github.com/bradfitz/gomemcache v0.0.0-20250403215159-8d39553ac7cf/go.mod h1:r5xuitiExdLAJ09PR7vBVENGvp4ZuTBeWTGtxuX3K+c=
github.com/bufbuild/protocompile v0.14.1 h1:iA73zAf/fyljNjQKwYzUHD6AD4R8KMasmwa/FBatYVw=
@@ -60,6 +70,8 @@ github.com/cheekybits/genny v1.0.0/go.mod h1:+tQajlRqAUrPI7DOSpB0XAqZYtQakVtB7wX
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
github.com/coreos/go-systemd/v22 v22.6.0 h1:aGVa/v8B7hpb0TKl0MWoAavPDmHvobFe5R5zn0bCJWo=
@@ -69,6 +81,8 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/diegoholiveira/jsonlogic/v3 v3.7.4 h1:92HSmB9bwM/o0ZvrCpcvTP2EsPXSkKtAniIr2W/dcIM=
github.com/diegoholiveira/jsonlogic/v3 v3.7.4/go.mod h1:OYRb6FSTVmMM+MNQ7ElmMsczyNSepw+OU4Z8emDSi4w=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/emicklei/go-restful/v3 v3.13.0 h1:C4Bl2xDndpU6nJ4bc1jXd+uTmYPVUwkD6bFY/oTyCes=
@@ -219,6 +233,8 @@ github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1 h1:FFcEA01tW+SmuJIuDbHOdgUBL+d
github.com/grafana/grafana-azure-sdk-go/v2 v2.3.1/go.mod h1:Oi4anANlCuTCc66jCyqIzfVbgLXFll8Wja+Y4vfANlc=
github.com/grafana/grafana-plugin-sdk-go v0.284.0 h1:1bK7eWsnPBLUWDcWJWe218Ik5ad0a5JpEL4mH9ry7Ws=
github.com/grafana/grafana-plugin-sdk-go v0.284.0/go.mod h1:lHPniaSxq3SL5MxDIPy04TYB1jnTp/ivkYO+xn5Rz3E=
github.com/grafana/grafana/pkg/semconv v0.0.0-20250804150913-990f1c69ecc2 h1:A65jWgLk4Re28gIuZcpC0aTh71JZ0ey89hKGE9h543s=
github.com/grafana/grafana/pkg/semconv v0.0.0-20250804150913-990f1c69ecc2/go.mod h1:2HRzUK/xQEYc+8d5If/XSusMcaYq9IptnBSHACiQcOQ=
github.com/grafana/otel-profiling-go v0.5.1 h1:stVPKAFZSa7eGiqbYuG25VcqYksR6iWvF3YH66t4qL8=
github.com/grafana/otel-profiling-go v0.5.1/go.mod h1:ftN/t5A/4gQI19/8MoWurBEtC6gFw8Dns1sJZ9W4Tls=
github.com/grafana/prometheus-alertmanager v0.25.1-0.20250911094103-5456b6e45604 h1:aXfUhVN/Ewfpbko2CCtL65cIiGgwStOo4lWH2b6gw2U=
@@ -366,6 +382,8 @@ github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8m
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nikunjy/rules v1.5.0 h1:KJDSLOsFhwt7kcXUyZqwkgrQg5YoUwj+TVu6ItCQShw=
github.com/nikunjy/rules v1.5.0/go.mod h1:TlZtZdBChrkqi8Lr2AXocme8Z7EsbxtFdDoKeI6neBQ=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 h1:G7ERwszslrBzRxj//JalHPu/3yz+De2J+4aLtSRlHiY=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037/go.mod h1:2bpvgLBZEtENV5scfDFEtB/5+1M4hkQhDQrccEJ/qGw=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 h1:bQx3WeLcUWy+RletIKwUIt4x3t8n2SxavmoclizMb8c=
@@ -380,6 +398,12 @@ github.com/onsi/ginkgo/v2 v2.22.2 h1:/3X8Panh8/WwhU/3Ssa6rCKqPLuAkVY2I0RoyDLySlU
github.com/onsi/ginkgo/v2 v2.22.2/go.mod h1:oeMosUL+8LtarXBHu/c0bx2D/K9zyQ6uX3cTyztHwsk=
github.com/onsi/gomega v1.36.2 h1:koNYke6TVk6ZmnyHrCXba/T/MoLBXFjeC1PtvYgw0A8=
github.com/onsi/gomega v1.36.2/go.mod h1:DdwyADRjrc825LhMEkD76cHR5+pUnjhUN8GlHlRPHzY=
github.com/open-feature/go-sdk v1.16.0 h1:5NCHYv5slvNBIZhYXAzAufo0OI59OACZ5tczVqSE+Tg=
github.com/open-feature/go-sdk v1.16.0/go.mod h1:EIF40QcoYT1VbQkMPy2ZJH4kvZeY+qGUXAorzSWgKSo=
github.com/open-feature/go-sdk-contrib/providers/go-feature-flag v0.2.6 h1:megzzlQGjsRVWDX8oJnLaa5eEcsAHekiL4Uvl3jSAcY=
github.com/open-feature/go-sdk-contrib/providers/go-feature-flag v0.2.6/go.mod h1:K1gDKvt76CGFLSUMHUydd5ba2V5Cv69gQZsdbnXhAm8=
github.com/open-feature/go-sdk-contrib/providers/ofrep v0.1.6 h1:WinefYxeVx5rV0uQmuWbxQf8iACu/JiRubo5w0saToc=
github.com/open-feature/go-sdk-contrib/providers/ofrep v0.1.6/go.mod h1:Dwcaoma6lZVqYwyfVlY7eB6RXbG+Ju3b9cnpTlUN+Hc=
github.com/pascaldekloe/goe v0.1.0 h1:cBOtyMzM9HTpWjXfbbunk26uA6nG3a8n06Wieeh0MwY=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
@@ -465,6 +489,10 @@ github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/thejerf/slogassert v0.3.4 h1:VoTsXixRbXMrRSSxDjYTiEDCM4VWbsYPW5rB/hX24kM=
github.com/thejerf/slogassert v0.3.4/go.mod h1:0zn9ISLVKo1aPMTqcGfG1o6dWwt+Rk574GlUxHD4rs8=
github.com/thomaspoignant/go-feature-flag v1.42.0 h1:C7embmOTzaLyRki+OoU2RvtVjJE9IrvgBA2C1mRN1lc=
github.com/thomaspoignant/go-feature-flag v1.42.0/go.mod h1:y0QiWH7chHWhGATb/+XqwAwErORmPSH2MUsQlCmmWlM=
github.com/tjhop/slog-gokit v0.1.5 h1:ayloIUi5EK2QYB8eY4DOPO95/mRtMW42lUkp3quJohc=
github.com/tjhop/slog-gokit v0.1.5/go.mod h1:yA48zAHvV+Sg4z4VRyeFyFUNNXd3JY5Zg84u3USICq0=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75 h1:6fotK7otjonDflCTK0BCfls4SPy3NcCVb5dqqmbRknE=
@@ -513,6 +541,8 @@ go.opentelemetry.io/contrib/samplers/jaegerremote v0.32.0/go.mod h1:B9Oka5QVD0bn
go.opentelemetry.io/otel v1.21.0/go.mod h1:QZzNPQPm1zLX4gZK4cMi+71eaorMSGT3A4znnUvNNEo=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel/exporters/jaeger v1.17.0 h1:D7UpUy2Xc2wsi1Ras6V40q806WM07rqoCWzXu7Sqy+4=
go.opentelemetry.io/otel/exporters/jaeger v1.17.0/go.mod h1:nPCqOnEH9rNLKqH/+rrUjiMzHJdV1BlpKcTwRTyKkKI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0 h1:f0cb2XPmrqn4XMy9PNliTgRKJgS5WcL/u0/WRYGz4t0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.39.0/go.mod h1:vnakAaFckOMiMtOIhFI2MNH4FYrZzXCYxmb1LlhoGz8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.39.0 h1:in9O8ESIOlwJAEGTkkf34DesGRAc/Pn8qJ7k3r/42LM=
@@ -532,8 +562,12 @@ go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
go.uber.org/mock v0.6.0/go.mod h1:KiVJ4BqZJaMj4svdfmHM0AUx4NJYO8ZNpPnZn1Z+BBU=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=


@@ -5,7 +5,25 @@ metaV0Alpha1: {
scope: "Namespaced"
schema: {
spec: {
pluginJSON: #JSONData,
pluginJson: #JSONData
class: "core" | "external"
module?: {
path: string
hash?: string
loadingStrategy?: "fetch" | "script"
}
baseURL?: string
signature?: {
status: "internal" | "valid" | "invalid" | "modified" | "unsigned"
type?: "grafana" | "commercial" | "community" | "private" | "private-glob"
org?: string
}
angular?: {
detected: bool
}
translations?: [string]: string
// +listType=atomic
children?: [...string]
}
}
}
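The optional fields added to the meta spec above (`module`, `baseURL`, `signature`, `angular`) are pointer-typed in the generated Go code further down, so they are dropped from serialized objects when unset. A minimal, self-contained sketch of how `omitempty` interacts with the optional `signature` block — note these are local stand-in types for illustration, not the real generated `pluginsv0alpha1` types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local mirrors of the generated spec types; field names and JSON tags
// follow the schema above, but these are illustrative stand-ins.
type SpecSignature struct {
	Status string  `json:"status"`
	Type   *string `json:"type,omitempty"`
	Org    *string `json:"org,omitempty"`
}

type MetaSpec struct {
	Class     string         `json:"class"`
	BaseURL   *string        `json:"baseURL,omitempty"`
	Signature *SpecSignature `json:"signature,omitempty"`
}

// encodeSpec marshals a spec to its JSON wire form.
func encodeSpec(s MetaSpec) string {
	b, _ := json.Marshal(s)
	return string(b)
}

func main() {
	// With no optional fields set, only "class" appears on the wire.
	fmt.Println(encodeSpec(MetaSpec{Class: "core"}))

	// With a signature set, nested optional fields are likewise omitted
	// unless populated (Org is nil here, so it disappears).
	sigType := "grafana"
	fmt.Println(encodeSpec(MetaSpec{
		Class:     "external",
		Signature: &SpecSignature{Status: "valid", Type: &sigType},
	}))
}
```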


@@ -9,7 +9,6 @@ pluginV0Alpha1: {
id: string
version: string
url?: string
class: "core" | "external"
}
}
}


@@ -208,13 +208,21 @@ func NewMetaExtensions() *MetaExtensions {
// +k8s:openapi-gen=true
type MetaSpec struct {
PluginJSON MetaJSONData `json:"pluginJSON"`
PluginJson MetaJSONData `json:"pluginJson"`
Class MetaSpecClass `json:"class"`
Module *MetaV0alpha1SpecModule `json:"module,omitempty"`
BaseURL *string `json:"baseURL,omitempty"`
Signature *MetaV0alpha1SpecSignature `json:"signature,omitempty"`
Angular *MetaV0alpha1SpecAngular `json:"angular,omitempty"`
Translations map[string]string `json:"translations,omitempty"`
// +listType=atomic
Children []string `json:"children,omitempty"`
}
// NewMetaSpec creates a new MetaSpec object.
func NewMetaSpec() *MetaSpec {
return &MetaSpec{
PluginJSON: *NewMetaJSONData(),
PluginJson: *NewMetaJSONData(),
}
}
@@ -412,6 +420,40 @@ func NewMetaV0alpha1ExtensionsExtensionPoints() *MetaV0alpha1ExtensionsExtension
return &MetaV0alpha1ExtensionsExtensionPoints{}
}
// +k8s:openapi-gen=true
type MetaV0alpha1SpecModule struct {
Path string `json:"path"`
Hash *string `json:"hash,omitempty"`
LoadingStrategy *MetaV0alpha1SpecModuleLoadingStrategy `json:"loadingStrategy,omitempty"`
}
// NewMetaV0alpha1SpecModule creates a new MetaV0alpha1SpecModule object.
func NewMetaV0alpha1SpecModule() *MetaV0alpha1SpecModule {
return &MetaV0alpha1SpecModule{}
}
// +k8s:openapi-gen=true
type MetaV0alpha1SpecSignature struct {
Status MetaV0alpha1SpecSignatureStatus `json:"status"`
Type *MetaV0alpha1SpecSignatureType `json:"type,omitempty"`
Org *string `json:"org,omitempty"`
}
// NewMetaV0alpha1SpecSignature creates a new MetaV0alpha1SpecSignature object.
func NewMetaV0alpha1SpecSignature() *MetaV0alpha1SpecSignature {
return &MetaV0alpha1SpecSignature{}
}
// +k8s:openapi-gen=true
type MetaV0alpha1SpecAngular struct {
Detected bool `json:"detected"`
}
// NewMetaV0alpha1SpecAngular creates a new MetaV0alpha1SpecAngular object.
func NewMetaV0alpha1SpecAngular() *MetaV0alpha1SpecAngular {
return &MetaV0alpha1SpecAngular{}
}
// +k8s:openapi-gen=true
type MetaJSONDataType string
@@ -464,6 +506,14 @@ const (
MetaIncludeRoleViewer MetaIncludeRole = "Viewer"
)
// +k8s:openapi-gen=true
type MetaSpecClass string
const (
MetaSpecClassCore MetaSpecClass = "core"
MetaSpecClassExternal MetaSpecClass = "external"
)
// +k8s:openapi-gen=true
type MetaV0alpha1DependenciesPluginsType string
@@ -472,3 +522,33 @@ const (
MetaV0alpha1DependenciesPluginsTypeDatasource MetaV0alpha1DependenciesPluginsType = "datasource"
MetaV0alpha1DependenciesPluginsTypePanel MetaV0alpha1DependenciesPluginsType = "panel"
)
// +k8s:openapi-gen=true
type MetaV0alpha1SpecModuleLoadingStrategy string
const (
MetaV0alpha1SpecModuleLoadingStrategyFetch MetaV0alpha1SpecModuleLoadingStrategy = "fetch"
MetaV0alpha1SpecModuleLoadingStrategyScript MetaV0alpha1SpecModuleLoadingStrategy = "script"
)
// +k8s:openapi-gen=true
type MetaV0alpha1SpecSignatureStatus string
const (
MetaV0alpha1SpecSignatureStatusInternal MetaV0alpha1SpecSignatureStatus = "internal"
MetaV0alpha1SpecSignatureStatusValid MetaV0alpha1SpecSignatureStatus = "valid"
MetaV0alpha1SpecSignatureStatusInvalid MetaV0alpha1SpecSignatureStatus = "invalid"
MetaV0alpha1SpecSignatureStatusModified MetaV0alpha1SpecSignatureStatus = "modified"
MetaV0alpha1SpecSignatureStatusUnsigned MetaV0alpha1SpecSignatureStatus = "unsigned"
)
// +k8s:openapi-gen=true
type MetaV0alpha1SpecSignatureType string
const (
MetaV0alpha1SpecSignatureTypeGrafana MetaV0alpha1SpecSignatureType = "grafana"
MetaV0alpha1SpecSignatureTypeCommercial MetaV0alpha1SpecSignatureType = "commercial"
MetaV0alpha1SpecSignatureTypeCommunity MetaV0alpha1SpecSignatureType = "community"
MetaV0alpha1SpecSignatureTypePrivate MetaV0alpha1SpecSignatureType = "private"
MetaV0alpha1SpecSignatureTypePrivateGlob MetaV0alpha1SpecSignatureType = "private-glob"
)


@@ -4,21 +4,12 @@ package v0alpha1
// +k8s:openapi-gen=true
type PluginSpec struct {
Id string `json:"id"`
Version string `json:"version"`
Url *string `json:"url,omitempty"`
Class PluginSpecClass `json:"class"`
Id string `json:"id"`
Version string `json:"version"`
Url *string `json:"url,omitempty"`
}
// NewPluginSpec creates a new PluginSpec object.
func NewPluginSpec() *PluginSpec {
return &PluginSpec{}
}
// +k8s:openapi-gen=true
type PluginSpecClass string
const (
PluginSpecClassCore PluginSpecClass = "core"
PluginSpecClassExternal PluginSpecClass = "external"
)

File diff suppressed because one or more lines are too long


@@ -15,16 +15,6 @@ const (
PluginInstallSourceAnnotation = "plugins.grafana.app/install-source"
)
// Class represents the plugin class type in an unversioned internal format.
// This intentionally duplicates the versioned API type (PluginInstallSpecClass) to decouple
// internal code from API version changes, making it easier to support multiple API versions.
type Class = string
const (
ClassCore Class = "core"
ClassExternal Class = "external"
)
type Source = string
const (
@@ -36,7 +26,6 @@ type PluginInstall struct {
ID string
Version string
URL string
Class Class
Source Source
}
@@ -57,7 +46,6 @@ func (p *PluginInstall) ToPluginInstallV0Alpha1(namespace string) *pluginsv0alph
Id: p.ID,
Version: p.Version,
Url: url,
Class: pluginsv0alpha1.PluginSpecClass(p.Class),
},
}
}
@@ -70,9 +58,6 @@ func (p *PluginInstall) ShouldUpdate(existing *pluginsv0alpha1.Plugin) bool {
if existing.Spec.Version != update.Spec.Version {
return true
}
if existing.Spec.Class != update.Spec.Class {
return true // this should never really happen
}
if !equalStringPointers(existing.Spec.Url, update.Spec.Url) {
return true
}
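With `Class` gone, `ShouldUpdate` compares only version and the optional URL, the latter via `equalStringPointers`. The diff calls that helper but does not show its body; a plausible reconstruction of nil-aware pointer comparison looks like this (a sketch, not the actual implementation):

```go
package main

import "fmt"

// equalStringPointers reports whether two optional strings hold the same
// value, treating nil as "unset": two nils are equal, nil never equals a
// set value. (The diff uses this helper but omits its body; this is a
// plausible reconstruction.)
func equalStringPointers(a, b *string) bool {
	if a == nil || b == nil {
		return a == b // equal only when both are nil
	}
	return *a == *b
}

func main() {
	u := "https://example.com/plugin.zip"
	fmt.Println(equalStringPointers(nil, nil)) // both unset: equal
	fmt.Println(equalStringPointers(&u, nil))  // set vs unset: not equal
}
```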


@@ -26,14 +26,12 @@ func TestPluginInstall_ShouldUpdate(t *testing.T) {
Spec: pluginsv0alpha1.PluginSpec{
Id: "plugin-1",
Version: "1.0.0",
Class: pluginsv0alpha1.PluginSpecClass(ClassExternal),
},
}
baseInstall := PluginInstall{
ID: "plugin-1",
Version: "1.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
}
@@ -54,13 +52,6 @@ func TestPluginInstall_ShouldUpdate(t *testing.T) {
},
expectUpdate: true,
},
{
name: "class differs",
modifyInstall: func(pi *PluginInstall) {
pi.Class = ClassCore
},
expectUpdate: true,
},
{
name: "url differs",
modifyInstall: func(pi *PluginInstall) {
@@ -109,7 +100,6 @@ func TestInstallRegistrar_Register(t *testing.T) {
install: &PluginInstall{
ID: "plugin-1",
Version: "1.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
},
existingErr: errorsK8s.NewNotFound(pluginGroupResource(), "plugin-1"),
@@ -120,7 +110,6 @@ func TestInstallRegistrar_Register(t *testing.T) {
install: &PluginInstall{
ID: "plugin-1",
Version: "2.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
},
existing: &pluginsv0alpha1.Plugin{
@@ -135,7 +124,6 @@ func TestInstallRegistrar_Register(t *testing.T) {
Spec: pluginsv0alpha1.PluginSpec{
Id: "plugin-1",
Version: "1.0.0",
Class: pluginsv0alpha1.PluginSpecClass(ClassExternal),
},
},
expectedUpdates: 1,
@@ -145,7 +133,6 @@ func TestInstallRegistrar_Register(t *testing.T) {
install: &PluginInstall{
ID: "plugin-1",
Version: "1.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
},
existing: &pluginsv0alpha1.Plugin{
@@ -160,7 +147,6 @@ func TestInstallRegistrar_Register(t *testing.T) {
Spec: pluginsv0alpha1.PluginSpec{
Id: "plugin-1",
Version: "1.0.0",
Class: pluginsv0alpha1.PluginSpecClass(ClassExternal),
},
},
},
@@ -169,7 +155,6 @@ func TestInstallRegistrar_Register(t *testing.T) {
install: &PluginInstall{
ID: "plugin-err",
Version: "1.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
},
existingErr: errorsK8s.NewInternalError(errors.New("boom")),
@@ -410,7 +395,6 @@ func TestPluginInstall_ToPluginInstallV0Alpha1(t *testing.T) {
install: PluginInstall{
ID: "plugin-1",
Version: "1.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
},
namespace: "org-1",
@@ -424,7 +408,6 @@ func TestPluginInstall_ToPluginInstallV0Alpha1(t *testing.T) {
ID: "plugin-1",
Version: "1.0.0",
URL: "https://example.com/plugin.zip",
Class: ClassExternal,
Source: SourcePluginStore,
},
namespace: "org-1",
@@ -433,25 +416,11 @@ func TestPluginInstall_ToPluginInstallV0Alpha1(t *testing.T) {
require.Equal(t, "https://example.com/plugin.zip", *p.Spec.Url)
},
},
{
name: "core class is mapped correctly",
install: PluginInstall{
ID: "plugin-core",
Version: "2.0.0",
Class: ClassCore,
Source: SourcePluginStore,
},
namespace: "org-2",
validate: func(t *testing.T, p *pluginsv0alpha1.Plugin) {
require.Equal(t, pluginsv0alpha1.PluginSpecClass(ClassCore), p.Spec.Class)
},
},
{
name: "source annotation is set correctly",
install: PluginInstall{
ID: "plugin-1",
Version: "1.0.0",
Class: ClassExternal,
Source: SourceUnknown,
},
namespace: "org-1",
@@ -464,7 +433,6 @@ func TestPluginInstall_ToPluginInstallV0Alpha1(t *testing.T) {
install: PluginInstall{
ID: "my-plugin",
Version: "1.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
},
namespace: "my-namespace",
@@ -556,7 +524,6 @@ func TestPluginInstall_ShouldUpdate_URLTransitions(t *testing.T) {
ID: "plugin-1",
Version: "1.0.0",
URL: newURL,
Class: ClassExternal,
Source: SourcePluginStore,
},
existingURL: nil,
@@ -568,7 +535,6 @@ func TestPluginInstall_ShouldUpdate_URLTransitions(t *testing.T) {
ID: "plugin-1",
Version: "1.0.0",
URL: "",
Class: ClassExternal,
Source: SourcePluginStore,
},
existingURL: &existingURL,
@@ -580,7 +546,6 @@ func TestPluginInstall_ShouldUpdate_URLTransitions(t *testing.T) {
ID: "plugin-1",
Version: "1.0.0",
URL: "",
Class: ClassExternal,
Source: SourcePluginStore,
},
existingURL: nil,
@@ -592,7 +557,6 @@ func TestPluginInstall_ShouldUpdate_URLTransitions(t *testing.T) {
ID: "plugin-1",
Version: "1.0.0",
URL: existingURL,
Class: ClassExternal,
Source: SourcePluginStore,
},
existingURL: &existingURL,
@@ -614,7 +578,6 @@ func TestPluginInstall_ShouldUpdate_URLTransitions(t *testing.T) {
Id: "plugin-1",
Version: "1.0.0",
Url: tt.existingURL,
Class: pluginsv0alpha1.PluginSpecClass(ClassExternal),
},
}
@@ -670,7 +633,6 @@ func TestInstallRegistrar_Register_ErrorCases(t *testing.T) {
install: &PluginInstall{
ID: "plugin-1",
Version: "1.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
},
setupClient: func(fc *fakePluginInstallClient) {
@@ -688,7 +650,6 @@ func TestInstallRegistrar_Register_ErrorCases(t *testing.T) {
install: &PluginInstall{
ID: "plugin-1",
Version: "2.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
},
setupClient: func(fc *fakePluginInstallClient) {
@@ -705,7 +666,6 @@ func TestInstallRegistrar_Register_ErrorCases(t *testing.T) {
Spec: pluginsv0alpha1.PluginSpec{
Id: "plugin-1",
Version: "1.0.0",
Class: pluginsv0alpha1.PluginSpecClass(ClassExternal),
},
}, nil
}
@@ -876,7 +836,6 @@ func TestInstallRegistrar_GetClientError(t *testing.T) {
install := &PluginInstall{
ID: "plugin-1",
Version: "1.0.0",
Class: ClassExternal,
Source: SourcePluginStore,
}


@@ -10,8 +10,6 @@ import (
"time"
"github.com/grafana/grafana-app-sdk/logging"
pluginsv0alpha1 "github.com/grafana/grafana/apps/plugins/pkg/apis/plugins/v0alpha1"
)
const (
@@ -87,45 +85,9 @@ func (p *CatalogProvider) GetMeta(ctx context.Context, pluginID, version string)
return nil, fmt.Errorf("failed to decode response: %w", err)
}
metaSpec := grafanaComPluginVersionMetaToMetaSpec(gcomMeta)
return &Result{
Meta: gcomMeta.JSON,
Meta: metaSpec,
TTL: p.ttl,
}, nil
}
// grafanaComPluginVersionMeta represents the response from grafana.com API
// GET /api/plugins/{pluginId}/versions/{version}
type grafanaComPluginVersionMeta struct {
PluginID string `json:"pluginSlug"`
Version string `json:"version"`
URL string `json:"url"`
Commit string `json:"commit"`
Description string `json:"description"`
Keywords []string `json:"keywords"`
CreatedAt time.Time `json:"createdAt"`
UpdatedAt time.Time `json:"updatedAt"`
JSON pluginsv0alpha1.MetaJSONData `json:"json"`
Readme string `json:"readme"`
Downloads int `json:"downloads"`
Verified bool `json:"verified"`
Status string `json:"status"`
StatusContext string `json:"statusContext"`
DownloadSlug string `json:"downloadSlug"`
SignatureType string `json:"signatureType"`
SignedByOrg string `json:"signedByOrg"`
SignedByOrgName string `json:"signedByOrgName"`
Packages struct {
Any struct {
Md5 string `json:"md5"`
Sha256 string `json:"sha256"`
PackageName string `json:"packageName"`
DownloadURL string `json:"downloadUrl"`
} `json:"any"`
} `json:"packages"`
Links []struct {
Rel string `json:"rel"`
Href string `json:"href"`
} `json:"links"`
AngularDetected bool `json:"angularDetected"`
Scopes []string `json:"scopes"`
}
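The hunk above stops returning the raw `gcomMeta.JSON` payload and instead returns a constructed `metaSpec`; the converter `grafanaComPluginVersionMetaToMetaSpec` itself is not shown in this diff. A hypothetical sketch of what such a conversion could look like, lifting the signature fields from the grafana.com response into the spec — all types here are local stand-ins, and the function body is an assumption, not the real converter:

```go
package main

import "fmt"

// Stand-ins for the generated API types; illustrative only.
type MetaJSONData struct {
	Id   string
	Name string
}

type SpecSignature struct {
	Type *string
	Org  *string
}

type MetaSpec struct {
	PluginJson MetaJSONData
	Signature  *SpecSignature
}

// gcomVersionMeta mirrors a few fields of the grafana.com version
// response shown in the struct above.
type gcomVersionMeta struct {
	JSON          MetaJSONData
	SignatureType string
	SignedByOrg   string
}

// toMetaSpec is a hypothetical converter: it embeds the plugin.json
// payload and lifts the optional signature fields into the spec only
// when they are present in the response.
func toMetaSpec(m gcomVersionMeta) MetaSpec {
	spec := MetaSpec{PluginJson: m.JSON}
	if m.SignatureType != "" || m.SignedByOrg != "" {
		sig := &SpecSignature{}
		if m.SignatureType != "" {
			sig.Type = &m.SignatureType
		}
		if m.SignedByOrg != "" {
			sig.Org = &m.SignedByOrg
		}
		spec.Signature = sig
	}
	return spec
}

func main() {
	spec := toMetaSpec(gcomVersionMeta{
		JSON:          MetaJSONData{Id: "my-datasource", Name: "My Datasource"},
		SignatureType: "community",
	})
	fmt.Println(spec.PluginJson.Id, *spec.Signature.Type)
}
```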


@@ -49,7 +49,7 @@ func TestCatalogProvider_GetMeta(t *testing.T) {
require.NoError(t, err)
require.NotNil(t, result)
assert.Equal(t, expectedMeta, result.Meta)
assert.Equal(t, expectedMeta, result.Meta.PluginJson)
assert.Equal(t, defaultCatalogTTL, result.TTL)
})


@@ -0,0 +1,744 @@
package meta
import (
"encoding/json"
"time"
pluginsv0alpha1 "github.com/grafana/grafana/apps/plugins/pkg/apis/plugins/v0alpha1"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginstore"
)
// jsonDataToMetaJSONData converts a plugins.JSONData to a pluginsv0alpha1.MetaJSONData.
// nolint:gocyclo
func jsonDataToMetaJSONData(jsonData plugins.JSONData) pluginsv0alpha1.MetaJSONData {
meta := pluginsv0alpha1.MetaJSONData{
Id: jsonData.ID,
Name: jsonData.Name,
}
// Map plugin type
switch jsonData.Type {
case plugins.TypeApp:
meta.Type = pluginsv0alpha1.MetaJSONDataTypeApp
case plugins.TypeDataSource:
meta.Type = pluginsv0alpha1.MetaJSONDataTypeDatasource
case plugins.TypePanel:
meta.Type = pluginsv0alpha1.MetaJSONDataTypePanel
case plugins.TypeRenderer:
meta.Type = pluginsv0alpha1.MetaJSONDataTypeRenderer
}
// Map Info
meta.Info = pluginsv0alpha1.MetaInfo{
Keywords: jsonData.Info.Keywords,
Logos: pluginsv0alpha1.MetaV0alpha1InfoLogos{
Small: jsonData.Info.Logos.Small,
Large: jsonData.Info.Logos.Large,
},
Updated: jsonData.Info.Updated,
Version: jsonData.Info.Version,
}
if jsonData.Info.Description != "" {
meta.Info.Description = &jsonData.Info.Description
}
if jsonData.Info.Author.Name != "" || jsonData.Info.Author.URL != "" {
author := &pluginsv0alpha1.MetaV0alpha1InfoAuthor{}
if jsonData.Info.Author.Name != "" {
author.Name = &jsonData.Info.Author.Name
}
if jsonData.Info.Author.URL != "" {
author.Url = &jsonData.Info.Author.URL
}
meta.Info.Author = author
}
if len(jsonData.Info.Links) > 0 {
meta.Info.Links = make([]pluginsv0alpha1.MetaV0alpha1InfoLinks, 0, len(jsonData.Info.Links))
for _, link := range jsonData.Info.Links {
v0Link := pluginsv0alpha1.MetaV0alpha1InfoLinks{}
if link.Name != "" {
v0Link.Name = &link.Name
}
if link.URL != "" {
v0Link.Url = &link.URL
}
meta.Info.Links = append(meta.Info.Links, v0Link)
}
}
if len(jsonData.Info.Screenshots) > 0 {
meta.Info.Screenshots = make([]pluginsv0alpha1.MetaV0alpha1InfoScreenshots, 0, len(jsonData.Info.Screenshots))
for _, screenshot := range jsonData.Info.Screenshots {
v0Screenshot := pluginsv0alpha1.MetaV0alpha1InfoScreenshots{}
if screenshot.Name != "" {
v0Screenshot.Name = &screenshot.Name
}
if screenshot.Path != "" {
v0Screenshot.Path = &screenshot.Path
}
meta.Info.Screenshots = append(meta.Info.Screenshots, v0Screenshot)
}
}
// Map Dependencies
meta.Dependencies = pluginsv0alpha1.MetaDependencies{
GrafanaDependency: jsonData.Dependencies.GrafanaDependency,
}
if jsonData.Dependencies.GrafanaVersion != "" {
meta.Dependencies.GrafanaVersion = &jsonData.Dependencies.GrafanaVersion
}
if len(jsonData.Dependencies.Plugins) > 0 {
meta.Dependencies.Plugins = make([]pluginsv0alpha1.MetaV0alpha1DependenciesPlugins, 0, len(jsonData.Dependencies.Plugins))
for _, dep := range jsonData.Dependencies.Plugins {
var depType pluginsv0alpha1.MetaV0alpha1DependenciesPluginsType
switch dep.Type {
case "app":
depType = pluginsv0alpha1.MetaV0alpha1DependenciesPluginsTypeApp
case "datasource":
depType = pluginsv0alpha1.MetaV0alpha1DependenciesPluginsTypeDatasource
case "panel":
depType = pluginsv0alpha1.MetaV0alpha1DependenciesPluginsTypePanel
}
meta.Dependencies.Plugins = append(meta.Dependencies.Plugins, pluginsv0alpha1.MetaV0alpha1DependenciesPlugins{
Id: dep.ID,
Type: depType,
Name: dep.Name,
})
}
}
if len(jsonData.Dependencies.Extensions.ExposedComponents) > 0 {
meta.Dependencies.Extensions = &pluginsv0alpha1.MetaV0alpha1DependenciesExtensions{
ExposedComponents: jsonData.Dependencies.Extensions.ExposedComponents,
}
}
// Map optional boolean fields
if jsonData.Alerting {
meta.Alerting = &jsonData.Alerting
}
if jsonData.Annotations {
meta.Annotations = &jsonData.Annotations
}
if jsonData.AutoEnabled {
meta.AutoEnabled = &jsonData.AutoEnabled
}
if jsonData.Backend {
meta.Backend = &jsonData.Backend
}
if jsonData.BuiltIn {
meta.BuiltIn = &jsonData.BuiltIn
}
if jsonData.HideFromList {
meta.HideFromList = &jsonData.HideFromList
}
if jsonData.Logs {
meta.Logs = &jsonData.Logs
}
if jsonData.Metrics {
meta.Metrics = &jsonData.Metrics
}
if jsonData.MultiValueFilterOperators {
meta.MultiValueFilterOperators = &jsonData.MultiValueFilterOperators
}
if jsonData.Preload {
meta.Preload = &jsonData.Preload
}
if jsonData.SkipDataQuery {
meta.SkipDataQuery = &jsonData.SkipDataQuery
}
if jsonData.Streaming {
meta.Streaming = &jsonData.Streaming
}
if jsonData.Tracing {
meta.Tracing = &jsonData.Tracing
}
// Map category
if jsonData.Category != "" {
var category pluginsv0alpha1.MetaJSONDataCategory
switch jsonData.Category {
case "tsdb":
category = pluginsv0alpha1.MetaJSONDataCategoryTsdb
case "logging":
category = pluginsv0alpha1.MetaJSONDataCategoryLogging
case "cloud":
category = pluginsv0alpha1.MetaJSONDataCategoryCloud
case "tracing":
category = pluginsv0alpha1.MetaJSONDataCategoryTracing
case "profiling":
category = pluginsv0alpha1.MetaJSONDataCategoryProfiling
case "sql":
category = pluginsv0alpha1.MetaJSONDataCategorySql
case "enterprise":
category = pluginsv0alpha1.MetaJSONDataCategoryEnterprise
case "iot":
category = pluginsv0alpha1.MetaJSONDataCategoryIot
case "other":
category = pluginsv0alpha1.MetaJSONDataCategoryOther
default:
category = pluginsv0alpha1.MetaJSONDataCategoryOther
}
meta.Category = &category
}
// Map state
if jsonData.State != "" {
var state pluginsv0alpha1.MetaJSONDataState
switch jsonData.State {
case plugins.ReleaseStateAlpha:
state = pluginsv0alpha1.MetaJSONDataStateAlpha
case plugins.ReleaseStateBeta:
state = pluginsv0alpha1.MetaJSONDataStateBeta
default:
}
if state != "" {
meta.State = &state
}
}
// Map executable
if jsonData.Executable != "" {
meta.Executable = &jsonData.Executable
}
// Map QueryOptions
if len(jsonData.QueryOptions) > 0 {
queryOptions := &pluginsv0alpha1.MetaQueryOptions{}
if val, ok := jsonData.QueryOptions["maxDataPoints"]; ok {
queryOptions.MaxDataPoints = &val
}
if val, ok := jsonData.QueryOptions["minInterval"]; ok {
queryOptions.MinInterval = &val
}
if val, ok := jsonData.QueryOptions["cacheTimeout"]; ok {
queryOptions.CacheTimeout = &val
}
meta.QueryOptions = queryOptions
}
// Map Includes
if len(jsonData.Includes) > 0 {
meta.Includes = make([]pluginsv0alpha1.MetaInclude, 0, len(jsonData.Includes))
for _, include := range jsonData.Includes {
v0Include := pluginsv0alpha1.MetaInclude{}
if include.UID != "" {
v0Include.Uid = &include.UID
}
if include.Type != "" {
var includeType pluginsv0alpha1.MetaIncludeType
switch include.Type {
case "dashboard":
includeType = pluginsv0alpha1.MetaIncludeTypeDashboard
case "page":
includeType = pluginsv0alpha1.MetaIncludeTypePage
case "panel":
includeType = pluginsv0alpha1.MetaIncludeTypePanel
case "datasource":
includeType = pluginsv0alpha1.MetaIncludeTypeDatasource
}
v0Include.Type = &includeType
}
if include.Name != "" {
v0Include.Name = &include.Name
}
if include.Component != "" {
v0Include.Component = &include.Component
}
if include.Role != "" {
var role pluginsv0alpha1.MetaIncludeRole
switch include.Role {
case "Admin":
role = pluginsv0alpha1.MetaIncludeRoleAdmin
case "Editor":
role = pluginsv0alpha1.MetaIncludeRoleEditor
case "Viewer":
role = pluginsv0alpha1.MetaIncludeRoleViewer
}
v0Include.Role = &role
}
if include.Action != "" {
v0Include.Action = &include.Action
}
if include.Path != "" {
v0Include.Path = &include.Path
}
if include.AddToNav {
v0Include.AddToNav = &include.AddToNav
}
if include.DefaultNav {
v0Include.DefaultNav = &include.DefaultNav
}
if include.Icon != "" {
v0Include.Icon = &include.Icon
}
meta.Includes = append(meta.Includes, v0Include)
}
}
// Map Routes
if len(jsonData.Routes) > 0 {
meta.Routes = make([]pluginsv0alpha1.MetaRoute, 0, len(jsonData.Routes))
for _, route := range jsonData.Routes {
v0Route := pluginsv0alpha1.MetaRoute{}
if route.Path != "" {
v0Route.Path = &route.Path
}
if route.Method != "" {
v0Route.Method = &route.Method
}
if route.URL != "" {
v0Route.Url = &route.URL
}
if route.ReqRole != "" {
reqRole := string(route.ReqRole)
v0Route.ReqRole = &reqRole
}
if route.ReqAction != "" {
v0Route.ReqAction = &route.ReqAction
}
if len(route.Headers) > 0 {
headers := make([]string, 0, len(route.Headers))
for _, header := range route.Headers {
headers = append(headers, header.Name+": "+header.Content)
}
v0Route.Headers = headers
}
if len(route.URLParams) > 0 {
v0Route.UrlParams = make([]pluginsv0alpha1.MetaV0alpha1RouteUrlParams, 0, len(route.URLParams))
for _, param := range route.URLParams {
v0Param := pluginsv0alpha1.MetaV0alpha1RouteUrlParams{}
if param.Name != "" {
v0Param.Name = &param.Name
}
if param.Content != "" {
v0Param.Content = &param.Content
}
v0Route.UrlParams = append(v0Route.UrlParams, v0Param)
}
}
if route.TokenAuth != nil {
v0Route.TokenAuth = &pluginsv0alpha1.MetaV0alpha1RouteTokenAuth{}
if route.TokenAuth.Url != "" {
v0Route.TokenAuth.Url = &route.TokenAuth.Url
}
if len(route.TokenAuth.Scopes) > 0 {
v0Route.TokenAuth.Scopes = route.TokenAuth.Scopes
}
if len(route.TokenAuth.Params) > 0 {
v0Route.TokenAuth.Params = make(map[string]interface{})
for k, v := range route.TokenAuth.Params {
v0Route.TokenAuth.Params[k] = v
}
}
}
if route.JwtTokenAuth != nil {
v0Route.JwtTokenAuth = &pluginsv0alpha1.MetaV0alpha1RouteJwtTokenAuth{}
if route.JwtTokenAuth.Url != "" {
v0Route.JwtTokenAuth.Url = &route.JwtTokenAuth.Url
}
if len(route.JwtTokenAuth.Scopes) > 0 {
v0Route.JwtTokenAuth.Scopes = route.JwtTokenAuth.Scopes
}
if len(route.JwtTokenAuth.Params) > 0 {
v0Route.JwtTokenAuth.Params = make(map[string]interface{})
for k, v := range route.JwtTokenAuth.Params {
v0Route.JwtTokenAuth.Params[k] = v
}
}
}
if len(route.Body) > 0 {
var bodyMap map[string]interface{}
if err := json.Unmarshal(route.Body, &bodyMap); err == nil {
v0Route.Body = bodyMap
}
}
meta.Routes = append(meta.Routes, v0Route)
}
}
// Map Extensions
if len(jsonData.Extensions.AddedLinks) > 0 || len(jsonData.Extensions.AddedComponents) > 0 ||
len(jsonData.Extensions.ExposedComponents) > 0 || len(jsonData.Extensions.ExtensionPoints) > 0 {
extensions := &pluginsv0alpha1.MetaExtensions{}
if len(jsonData.Extensions.AddedLinks) > 0 {
extensions.AddedLinks = make([]pluginsv0alpha1.MetaV0alpha1ExtensionsAddedLinks, 0, len(jsonData.Extensions.AddedLinks))
for _, link := range jsonData.Extensions.AddedLinks {
v0Link := pluginsv0alpha1.MetaV0alpha1ExtensionsAddedLinks{
Targets: link.Targets,
Title: link.Title,
}
if link.Description != "" {
v0Link.Description = &link.Description
}
extensions.AddedLinks = append(extensions.AddedLinks, v0Link)
}
}
if len(jsonData.Extensions.AddedComponents) > 0 {
extensions.AddedComponents = make([]pluginsv0alpha1.MetaV0alpha1ExtensionsAddedComponents, 0, len(jsonData.Extensions.AddedComponents))
for _, comp := range jsonData.Extensions.AddedComponents {
v0Comp := pluginsv0alpha1.MetaV0alpha1ExtensionsAddedComponents{
Targets: comp.Targets,
Title: comp.Title,
}
if comp.Description != "" {
v0Comp.Description = &comp.Description
}
extensions.AddedComponents = append(extensions.AddedComponents, v0Comp)
}
}
if len(jsonData.Extensions.ExposedComponents) > 0 {
extensions.ExposedComponents = make([]pluginsv0alpha1.MetaV0alpha1ExtensionsExposedComponents, 0, len(jsonData.Extensions.ExposedComponents))
for _, comp := range jsonData.Extensions.ExposedComponents {
v0Comp := pluginsv0alpha1.MetaV0alpha1ExtensionsExposedComponents{
Id: comp.Id,
}
if comp.Title != "" {
v0Comp.Title = &comp.Title
}
if comp.Description != "" {
v0Comp.Description = &comp.Description
}
extensions.ExposedComponents = append(extensions.ExposedComponents, v0Comp)
}
}
if len(jsonData.Extensions.ExtensionPoints) > 0 {
extensions.ExtensionPoints = make([]pluginsv0alpha1.MetaV0alpha1ExtensionsExtensionPoints, 0, len(jsonData.Extensions.ExtensionPoints))
for _, point := range jsonData.Extensions.ExtensionPoints {
v0Point := pluginsv0alpha1.MetaV0alpha1ExtensionsExtensionPoints{
Id: point.Id,
}
if point.Title != "" {
v0Point.Title = &point.Title
}
if point.Description != "" {
v0Point.Description = &point.Description
}
extensions.ExtensionPoints = append(extensions.ExtensionPoints, v0Point)
}
}
meta.Extensions = extensions
}
// Map Roles
if len(jsonData.Roles) > 0 {
meta.Roles = make([]pluginsv0alpha1.MetaRole, 0, len(jsonData.Roles))
for _, role := range jsonData.Roles {
v0Role := pluginsv0alpha1.MetaRole{
Grants: role.Grants,
}
if role.Role.Name != "" || role.Role.Description != "" || len(role.Role.Permissions) > 0 {
v0RoleRole := &pluginsv0alpha1.MetaV0alpha1RoleRole{}
if role.Role.Name != "" {
v0RoleRole.Name = &role.Role.Name
}
if role.Role.Description != "" {
v0RoleRole.Description = &role.Role.Description
}
if len(role.Role.Permissions) > 0 {
v0RoleRole.Permissions = make([]pluginsv0alpha1.MetaV0alpha1RoleRolePermissions, 0, len(role.Role.Permissions))
for _, perm := range role.Role.Permissions {
v0Perm := pluginsv0alpha1.MetaV0alpha1RoleRolePermissions{}
if perm.Action != "" {
v0Perm.Action = &perm.Action
}
if perm.Scope != "" {
v0Perm.Scope = &perm.Scope
}
v0RoleRole.Permissions = append(v0RoleRole.Permissions, v0Perm)
}
}
v0Role.Role = v0RoleRole
}
meta.Roles = append(meta.Roles, v0Role)
}
}
// Map IAM
if jsonData.IAM != nil && len(jsonData.IAM.Permissions) > 0 {
iam := &pluginsv0alpha1.MetaIAM{
Permissions: make([]pluginsv0alpha1.MetaV0alpha1IAMPermissions, 0, len(jsonData.IAM.Permissions)),
}
for _, perm := range jsonData.IAM.Permissions {
v0Perm := pluginsv0alpha1.MetaV0alpha1IAMPermissions{}
if perm.Action != "" {
v0Perm.Action = &perm.Action
}
if perm.Scope != "" {
v0Perm.Scope = &perm.Scope
}
iam.Permissions = append(iam.Permissions, v0Perm)
}
meta.Iam = iam
}
return meta
}
// pluginStorePluginToMeta converts a pluginstore.Plugin to a pluginsv0alpha1.MetaSpec.
// This is similar to pluginToMetaSpec but works with the plugin store DTO.
// loadingStrategy and moduleHash are optionally provided calculated values; zero values are omitted.
func pluginStorePluginToMeta(plugin pluginstore.Plugin, loadingStrategy plugins.LoadingStrategy, moduleHash string) pluginsv0alpha1.MetaSpec {
metaSpec := pluginsv0alpha1.MetaSpec{
PluginJson: jsonDataToMetaJSONData(plugin.JSONData),
}
// Set Class - default to External if not specified
var c pluginsv0alpha1.MetaSpecClass
if plugin.Class == plugins.ClassCore {
c = pluginsv0alpha1.MetaSpecClassCore
} else {
c = pluginsv0alpha1.MetaSpecClassExternal
}
metaSpec.Class = c
if plugin.Module != "" {
module := &pluginsv0alpha1.MetaV0alpha1SpecModule{
Path: plugin.Module,
}
if moduleHash != "" {
module.Hash = &moduleHash
}
if loadingStrategy != "" {
var ls pluginsv0alpha1.MetaV0alpha1SpecModuleLoadingStrategy
switch loadingStrategy {
case plugins.LoadingStrategyFetch:
ls = pluginsv0alpha1.MetaV0alpha1SpecModuleLoadingStrategyFetch
case plugins.LoadingStrategyScript:
ls = pluginsv0alpha1.MetaV0alpha1SpecModuleLoadingStrategyScript
}
module.LoadingStrategy = &ls
}
metaSpec.Module = module
}
if plugin.BaseURL != "" {
metaSpec.BaseURL = &plugin.BaseURL
}
if plugin.Signature != "" {
signature := &pluginsv0alpha1.MetaV0alpha1SpecSignature{
Status: convertSignatureStatus(plugin.Signature),
}
if plugin.SignatureType != "" {
sigType := convertSignatureType(plugin.SignatureType)
signature.Type = &sigType
}
if plugin.SignatureOrg != "" {
signature.Org = &plugin.SignatureOrg
}
metaSpec.Signature = signature
}
if len(plugin.Children) > 0 {
metaSpec.Children = plugin.Children
}
metaSpec.Angular = &pluginsv0alpha1.MetaV0alpha1SpecAngular{
Detected: plugin.Angular.Detected,
}
if len(plugin.Translations) > 0 {
metaSpec.Translations = plugin.Translations
}
return metaSpec
}
// convertSignatureStatus converts plugins.SignatureStatus to pluginsv0alpha1.MetaV0alpha1SpecSignatureStatus.
func convertSignatureStatus(status plugins.SignatureStatus) pluginsv0alpha1.MetaV0alpha1SpecSignatureStatus {
switch status {
case plugins.SignatureStatusInternal:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureStatusInternal
case plugins.SignatureStatusValid:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureStatusValid
case plugins.SignatureStatusInvalid:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureStatusInvalid
case plugins.SignatureStatusModified:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureStatusModified
case plugins.SignatureStatusUnsigned:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureStatusUnsigned
default:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureStatusUnsigned
}
}
// convertSignatureType converts plugins.SignatureType to pluginsv0alpha1.MetaV0alpha1SpecSignatureType.
func convertSignatureType(sigType plugins.SignatureType) pluginsv0alpha1.MetaV0alpha1SpecSignatureType {
switch sigType {
case plugins.SignatureTypeGrafana:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureTypeGrafana
case plugins.SignatureTypeCommercial:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureTypeCommercial
case plugins.SignatureTypeCommunity:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureTypeCommunity
case plugins.SignatureTypePrivate:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureTypePrivate
case plugins.SignatureTypePrivateGlob:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureTypePrivateGlob
default:
return pluginsv0alpha1.MetaV0alpha1SpecSignatureTypeGrafana
}
}
// pluginToMetaSpec converts a fully loaded *plugins.Plugin to a pluginsv0alpha1.MetaSpec.
func pluginToMetaSpec(plugin *plugins.Plugin) pluginsv0alpha1.MetaSpec {
metaSpec := pluginsv0alpha1.MetaSpec{
PluginJson: jsonDataToMetaJSONData(plugin.JSONData),
}
// Set Class - default to External if not specified
var c pluginsv0alpha1.MetaSpecClass
if plugin.Class == plugins.ClassCore {
c = pluginsv0alpha1.MetaSpecClassCore
} else {
c = pluginsv0alpha1.MetaSpecClassExternal
}
metaSpec.Class = c
// Set module information
if plugin.Module != "" {
module := &pluginsv0alpha1.MetaV0alpha1SpecModule{
Path: plugin.Module,
}
// The strategy is fixed to Script here; pluginStorePluginToMeta accepts it as a parameter instead.
loadingStrategy := pluginsv0alpha1.MetaV0alpha1SpecModuleLoadingStrategyScript
module.LoadingStrategy = &loadingStrategy
metaSpec.Module = module
}
// Set BaseURL
if plugin.BaseURL != "" {
metaSpec.BaseURL = &plugin.BaseURL
}
// Set signature information
signature := &pluginsv0alpha1.MetaV0alpha1SpecSignature{
Status: convertSignatureStatus(plugin.Signature),
}
if plugin.SignatureType != "" {
sigType := convertSignatureType(plugin.SignatureType)
signature.Type = &sigType
}
if plugin.SignatureOrg != "" {
signature.Org = &plugin.SignatureOrg
}
metaSpec.Signature = signature
if len(plugin.Children) > 0 {
children := make([]string, 0, len(plugin.Children))
for _, child := range plugin.Children {
children = append(children, child.ID)
}
metaSpec.Children = children
}
metaSpec.Angular = &pluginsv0alpha1.MetaV0alpha1SpecAngular{
Detected: plugin.Angular.Detected,
}
if len(plugin.Translations) > 0 {
metaSpec.Translations = plugin.Translations
}
return metaSpec
}
// grafanaComPluginVersionMeta represents the response from the grafana.com API endpoint
// GET /api/plugins/{pluginId}/versions/{version}
type grafanaComPluginVersionMeta struct {
PluginID string `json:"pluginSlug"`
Version string `json:"version"`
URL string `json:"url"`
Commit string `json:"commit"`
Description string `json:"description"`
Keywords []string `json:"keywords"`
CreatedAt time.Time `json:"createdAt"`
UpdatedAt time.Time `json:"updatedAt"`
JSON pluginsv0alpha1.MetaJSONData `json:"json"`
Readme string `json:"readme"`
Downloads int `json:"downloads"`
Verified bool `json:"verified"`
Status string `json:"status"`
StatusContext string `json:"statusContext"`
DownloadSlug string `json:"downloadSlug"`
SignatureType string `json:"signatureType"`
SignedByOrg string `json:"signedByOrg"`
SignedByOrgName string `json:"signedByOrgName"`
Packages struct {
Any struct {
Md5 string `json:"md5"`
Sha256 string `json:"sha256"`
PackageName string `json:"packageName"`
DownloadURL string `json:"downloadUrl"`
} `json:"any"`
} `json:"packages"`
Links []struct {
Rel string `json:"rel"`
Href string `json:"href"`
} `json:"links"`
AngularDetected bool `json:"angularDetected"`
Scopes []string `json:"scopes"`
}
// grafanaComPluginVersionMetaToMetaSpec converts a grafanaComPluginVersionMeta to a pluginsv0alpha1.MetaSpec.
func grafanaComPluginVersionMetaToMetaSpec(gcomMeta grafanaComPluginVersionMeta) pluginsv0alpha1.MetaSpec {
metaSpec := pluginsv0alpha1.MetaSpec{
PluginJson: gcomMeta.JSON,
Class: pluginsv0alpha1.MetaSpecClassExternal,
}
if gcomMeta.SignatureType != "" {
signature := &pluginsv0alpha1.MetaV0alpha1SpecSignature{
Status: pluginsv0alpha1.MetaV0alpha1SpecSignatureStatusValid,
}
switch gcomMeta.SignatureType {
case "grafana":
sigType := pluginsv0alpha1.MetaV0alpha1SpecSignatureTypeGrafana
signature.Type = &sigType
case "commercial":
sigType := pluginsv0alpha1.MetaV0alpha1SpecSignatureTypeCommercial
signature.Type = &sigType
case "community":
sigType := pluginsv0alpha1.MetaV0alpha1SpecSignatureTypeCommunity
signature.Type = &sigType
case "private":
sigType := pluginsv0alpha1.MetaV0alpha1SpecSignatureTypePrivate
signature.Type = &sigType
case "private-glob":
sigType := pluginsv0alpha1.MetaV0alpha1SpecSignatureTypePrivateGlob
signature.Type = &sigType
}
if gcomMeta.SignedByOrg != "" {
signature.Org = &gcomMeta.SignedByOrg
}
metaSpec.Signature = signature
}
// Set angular info
metaSpec.Angular = &pluginsv0alpha1.MetaV0alpha1SpecAngular{
Detected: gcomMeta.AngularDetected,
}
return metaSpec
}
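The converters above repeat one pattern many times: a switch-based mapping from a free-form `plugin.json` string to a generated enum, with a pointer set only when the input is non-empty and a fallback for unknown values. A minimal, self-contained sketch of that pattern (the `Category` type and constants here are illustrative stand-ins, not the generated `pluginsv0alpha1` types):

```go
package main

import "fmt"

// Category is a stand-in for a generated enum such as
// pluginsv0alpha1.MetaJSONDataCategory (names here are illustrative).
type Category string

const (
	CategoryTsdb    Category = "tsdb"
	CategoryLogging Category = "logging"
	CategoryOther   Category = "other"
)

// mapCategory converts a free-form category string to the enum. It
// returns nil for an empty input (so the optional field is omitted)
// and falls back to "other" for unrecognized values, matching the
// shape of the converters above.
func mapCategory(s string) *Category {
	if s == "" {
		return nil // omit the optional field entirely
	}
	var c Category
	switch s {
	case "tsdb":
		c = CategoryTsdb
	case "logging":
		c = CategoryLogging
	default:
		c = CategoryOther
	}
	return &c
}

func main() {
	fmt.Println(*mapCategory("tsdb"), *mapCategory("weird"), mapCategory("") == nil)
}
```

Returning a pointer rather than the zero value keeps "not set" distinguishable from "set to the default" in the resulting API object.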

View File

@@ -2,7 +2,6 @@ package meta
import (
"context"
"encoding/json"
"errors"
"os"
"path/filepath"
@@ -13,7 +12,15 @@ import (
pluginsv0alpha1 "github.com/grafana/grafana/apps/plugins/pkg/apis/plugins/v0alpha1"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/plugins/config"
pluginsLoader "github.com/grafana/grafana/pkg/plugins/manager/loader"
"github.com/grafana/grafana/pkg/plugins/manager/pipeline/bootstrap"
"github.com/grafana/grafana/pkg/plugins/manager/pipeline/discovery"
"github.com/grafana/grafana/pkg/plugins/manager/pipeline/initialization"
"github.com/grafana/grafana/pkg/plugins/manager/pipeline/termination"
"github.com/grafana/grafana/pkg/plugins/manager/pipeline/validation"
"github.com/grafana/grafana/pkg/plugins/manager/sources"
"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginerrs"
)
const (
@@ -23,9 +30,10 @@ const (
// CoreProvider retrieves plugin metadata for core plugins.
type CoreProvider struct {
mu sync.RWMutex
loadedPlugins map[string]pluginsv0alpha1.MetaJSONData
loadedPlugins map[string]pluginsv0alpha1.MetaSpec
initialized bool
ttl time.Duration
loader pluginsLoader.Service
}
// NewCoreProvider creates a new CoreProvider for core plugins.
@@ -35,9 +43,13 @@ func NewCoreProvider() *CoreProvider {
// NewCoreProviderWithTTL creates a new CoreProvider with a custom TTL.
func NewCoreProviderWithTTL(ttl time.Duration) *CoreProvider {
cfg := &config.PluginManagementCfg{
Features: config.Features{},
}
return &CoreProvider{
loadedPlugins: make(map[string]pluginsv0alpha1.MetaJSONData),
loadedPlugins: make(map[string]pluginsv0alpha1.MetaSpec),
ttl: ttl,
loader: createLoader(cfg),
}
}
@@ -76,9 +88,9 @@ func (p *CoreProvider) GetMeta(ctx context.Context, pluginID, _ string) (*Result
p.initialized = true
}
if meta, found := p.loadedPlugins[pluginID]; found {
if spec, found := p.loadedPlugins[pluginID]; found {
return &Result{
Meta: meta,
Meta: spec,
TTL: p.ttl,
}, nil
}
@@ -86,8 +98,8 @@ func (p *CoreProvider) GetMeta(ctx context.Context, pluginID, _ string) (*Result
return nil, ErrMetaNotFound
}
// loadPlugins discovers and caches all core plugins.
// Returns an error if the static root path cannot be found or if plugin discovery fails.
// loadPlugins discovers and caches all core plugins by fully loading them.
// Returns an error if the static root path cannot be found or if plugin loading fails.
// This error will be handled gracefully by GetMeta, which will return ErrMetaNotFound
// to allow other providers to handle the request.
func (p *CoreProvider) loadPlugins(ctx context.Context) error {
@@ -108,496 +120,51 @@ func (p *CoreProvider) loadPlugins(ctx context.Context) error {
panelPath := filepath.Join(staticRootPath, "app", "plugins", "panel")
src := sources.NewLocalSource(plugins.ClassCore, []string{datasourcePath, panelPath})
ps, err := src.Discover(ctx)
loadedPlugins, err := p.loader.Load(ctx, src)
if err != nil {
return err
}
if len(ps) == 0 {
logging.DefaultLogger.Warn("CoreProvider: no core plugins found during discovery")
if len(loadedPlugins) == 0 {
logging.DefaultLogger.Warn("CoreProvider: no core plugins found during loading")
return nil
}
for _, bundle := range ps {
meta := jsonDataToMetaJSONData(bundle.Primary.JSONData)
p.loadedPlugins[bundle.Primary.JSONData.ID] = meta
for _, plugin := range loadedPlugins {
metaSpec := pluginToMetaSpec(plugin)
p.loadedPlugins[plugin.ID] = metaSpec
}
return nil
}
// jsonDataToMetaJSONData converts a plugins.JSONData to a pluginsv0alpha1.MetaJSONData.
// nolint:gocyclo
func jsonDataToMetaJSONData(jsonData plugins.JSONData) pluginsv0alpha1.MetaJSONData {
meta := pluginsv0alpha1.MetaJSONData{
Id: jsonData.ID,
Name: jsonData.Name,
}
// Map plugin type
switch jsonData.Type {
case plugins.TypeApp:
meta.Type = pluginsv0alpha1.MetaJSONDataTypeApp
case plugins.TypeDataSource:
meta.Type = pluginsv0alpha1.MetaJSONDataTypeDatasource
case plugins.TypePanel:
meta.Type = pluginsv0alpha1.MetaJSONDataTypePanel
case plugins.TypeRenderer:
meta.Type = pluginsv0alpha1.MetaJSONDataTypeRenderer
}
// Map Info
meta.Info = pluginsv0alpha1.MetaInfo{
Keywords: jsonData.Info.Keywords,
Logos: pluginsv0alpha1.MetaV0alpha1InfoLogos{
Small: jsonData.Info.Logos.Small,
Large: jsonData.Info.Logos.Large,
// createLoader creates a loader service configured for core plugins.
func createLoader(cfg *config.PluginManagementCfg) pluginsLoader.Service {
d := discovery.New(cfg, discovery.Opts{
FilterFuncs: []discovery.FilterFunc{
// Allow all plugin types for core plugins
},
Updated: jsonData.Info.Updated,
Version: jsonData.Info.Version,
}
})
b := bootstrap.New(cfg, bootstrap.Opts{
DecorateFuncs: []bootstrap.DecorateFunc{}, // no decoration required for metadata
})
v := validation.New(cfg, validation.Opts{
ValidateFuncs: []validation.ValidateFunc{
// Skip validation for core plugins - they're trusted
},
})
i := initialization.New(cfg, initialization.Opts{
InitializeFuncs: []initialization.InitializeFunc{
// Skip initialization - we only need metadata, not running plugins
},
})
t, _ := termination.New(cfg, termination.Opts{
TerminateFuncs: []termination.TerminateFunc{
// No termination needed for metadata-only loading
},
})
if jsonData.Info.Description != "" {
meta.Info.Description = &jsonData.Info.Description
}
et := pluginerrs.ProvideErrorTracker()
if jsonData.Info.Author.Name != "" || jsonData.Info.Author.URL != "" {
author := &pluginsv0alpha1.MetaV0alpha1InfoAuthor{}
if jsonData.Info.Author.Name != "" {
author.Name = &jsonData.Info.Author.Name
}
if jsonData.Info.Author.URL != "" {
author.Url = &jsonData.Info.Author.URL
}
meta.Info.Author = author
}
if len(jsonData.Info.Links) > 0 {
meta.Info.Links = make([]pluginsv0alpha1.MetaV0alpha1InfoLinks, 0, len(jsonData.Info.Links))
for _, link := range jsonData.Info.Links {
v0Link := pluginsv0alpha1.MetaV0alpha1InfoLinks{}
if link.Name != "" {
v0Link.Name = &link.Name
}
if link.URL != "" {
v0Link.Url = &link.URL
}
meta.Info.Links = append(meta.Info.Links, v0Link)
}
}
if len(jsonData.Info.Screenshots) > 0 {
meta.Info.Screenshots = make([]pluginsv0alpha1.MetaV0alpha1InfoScreenshots, 0, len(jsonData.Info.Screenshots))
for _, screenshot := range jsonData.Info.Screenshots {
v0Screenshot := pluginsv0alpha1.MetaV0alpha1InfoScreenshots{}
if screenshot.Name != "" {
v0Screenshot.Name = &screenshot.Name
}
if screenshot.Path != "" {
v0Screenshot.Path = &screenshot.Path
}
meta.Info.Screenshots = append(meta.Info.Screenshots, v0Screenshot)
}
}
// Map Dependencies
meta.Dependencies = pluginsv0alpha1.MetaDependencies{
GrafanaDependency: jsonData.Dependencies.GrafanaDependency,
}
if jsonData.Dependencies.GrafanaVersion != "" {
meta.Dependencies.GrafanaVersion = &jsonData.Dependencies.GrafanaVersion
}
if len(jsonData.Dependencies.Plugins) > 0 {
meta.Dependencies.Plugins = make([]pluginsv0alpha1.MetaV0alpha1DependenciesPlugins, 0, len(jsonData.Dependencies.Plugins))
for _, dep := range jsonData.Dependencies.Plugins {
var depType pluginsv0alpha1.MetaV0alpha1DependenciesPluginsType
switch dep.Type {
case "app":
depType = pluginsv0alpha1.MetaV0alpha1DependenciesPluginsTypeApp
case "datasource":
depType = pluginsv0alpha1.MetaV0alpha1DependenciesPluginsTypeDatasource
case "panel":
depType = pluginsv0alpha1.MetaV0alpha1DependenciesPluginsTypePanel
}
meta.Dependencies.Plugins = append(meta.Dependencies.Plugins, pluginsv0alpha1.MetaV0alpha1DependenciesPlugins{
Id: dep.ID,
Type: depType,
Name: dep.Name,
})
}
}
if len(jsonData.Dependencies.Extensions.ExposedComponents) > 0 {
meta.Dependencies.Extensions = &pluginsv0alpha1.MetaV0alpha1DependenciesExtensions{
ExposedComponents: jsonData.Dependencies.Extensions.ExposedComponents,
}
}
// Map optional boolean fields
if jsonData.Alerting {
meta.Alerting = &jsonData.Alerting
}
if jsonData.Annotations {
meta.Annotations = &jsonData.Annotations
}
if jsonData.AutoEnabled {
meta.AutoEnabled = &jsonData.AutoEnabled
}
if jsonData.Backend {
meta.Backend = &jsonData.Backend
}
if jsonData.BuiltIn {
meta.BuiltIn = &jsonData.BuiltIn
}
if jsonData.HideFromList {
meta.HideFromList = &jsonData.HideFromList
}
if jsonData.Logs {
meta.Logs = &jsonData.Logs
}
if jsonData.Metrics {
meta.Metrics = &jsonData.Metrics
}
if jsonData.MultiValueFilterOperators {
meta.MultiValueFilterOperators = &jsonData.MultiValueFilterOperators
}
if jsonData.Preload {
meta.Preload = &jsonData.Preload
}
if jsonData.SkipDataQuery {
meta.SkipDataQuery = &jsonData.SkipDataQuery
}
if jsonData.Streaming {
meta.Streaming = &jsonData.Streaming
}
if jsonData.Tracing {
meta.Tracing = &jsonData.Tracing
}
// Map category
if jsonData.Category != "" {
var category pluginsv0alpha1.MetaJSONDataCategory
switch jsonData.Category {
case "tsdb":
category = pluginsv0alpha1.MetaJSONDataCategoryTsdb
case "logging":
category = pluginsv0alpha1.MetaJSONDataCategoryLogging
case "cloud":
category = pluginsv0alpha1.MetaJSONDataCategoryCloud
case "tracing":
category = pluginsv0alpha1.MetaJSONDataCategoryTracing
case "profiling":
category = pluginsv0alpha1.MetaJSONDataCategoryProfiling
case "sql":
category = pluginsv0alpha1.MetaJSONDataCategorySql
case "enterprise":
category = pluginsv0alpha1.MetaJSONDataCategoryEnterprise
case "iot":
category = pluginsv0alpha1.MetaJSONDataCategoryIot
case "other":
category = pluginsv0alpha1.MetaJSONDataCategoryOther
default:
category = pluginsv0alpha1.MetaJSONDataCategoryOther
}
meta.Category = &category
}
// Map state
if jsonData.State != "" {
var state pluginsv0alpha1.MetaJSONDataState
switch jsonData.State {
case plugins.ReleaseStateAlpha:
state = pluginsv0alpha1.MetaJSONDataStateAlpha
case plugins.ReleaseStateBeta:
state = pluginsv0alpha1.MetaJSONDataStateBeta
default:
}
if state != "" {
meta.State = &state
}
}
// Map executable
if jsonData.Executable != "" {
meta.Executable = &jsonData.Executable
}
// Map QueryOptions
if len(jsonData.QueryOptions) > 0 {
queryOptions := &pluginsv0alpha1.MetaQueryOptions{}
if val, ok := jsonData.QueryOptions["maxDataPoints"]; ok {
queryOptions.MaxDataPoints = &val
}
if val, ok := jsonData.QueryOptions["minInterval"]; ok {
queryOptions.MinInterval = &val
}
if val, ok := jsonData.QueryOptions["cacheTimeout"]; ok {
queryOptions.CacheTimeout = &val
}
meta.QueryOptions = queryOptions
}
// Map Includes
if len(jsonData.Includes) > 0 {
meta.Includes = make([]pluginsv0alpha1.MetaInclude, 0, len(jsonData.Includes))
for _, include := range jsonData.Includes {
v0Include := pluginsv0alpha1.MetaInclude{}
if include.UID != "" {
v0Include.Uid = &include.UID
}
if include.Type != "" {
var includeType pluginsv0alpha1.MetaIncludeType
switch include.Type {
case "dashboard":
includeType = pluginsv0alpha1.MetaIncludeTypeDashboard
case "page":
includeType = pluginsv0alpha1.MetaIncludeTypePage
case "panel":
includeType = pluginsv0alpha1.MetaIncludeTypePanel
case "datasource":
includeType = pluginsv0alpha1.MetaIncludeTypeDatasource
}
v0Include.Type = &includeType
}
if include.Name != "" {
v0Include.Name = &include.Name
}
if include.Component != "" {
v0Include.Component = &include.Component
}
if include.Role != "" {
var role pluginsv0alpha1.MetaIncludeRole
switch include.Role {
case "Admin":
role = pluginsv0alpha1.MetaIncludeRoleAdmin
case "Editor":
role = pluginsv0alpha1.MetaIncludeRoleEditor
case "Viewer":
role = pluginsv0alpha1.MetaIncludeRoleViewer
}
v0Include.Role = &role
}
if include.Action != "" {
v0Include.Action = &include.Action
}
if include.Path != "" {
v0Include.Path = &include.Path
}
if include.AddToNav {
v0Include.AddToNav = &include.AddToNav
}
if include.DefaultNav {
v0Include.DefaultNav = &include.DefaultNav
}
if include.Icon != "" {
v0Include.Icon = &include.Icon
}
meta.Includes = append(meta.Includes, v0Include)
}
}
// Map Routes
if len(jsonData.Routes) > 0 {
meta.Routes = make([]pluginsv0alpha1.MetaRoute, 0, len(jsonData.Routes))
for _, route := range jsonData.Routes {
v0Route := pluginsv0alpha1.MetaRoute{}
if route.Path != "" {
v0Route.Path = &route.Path
}
if route.Method != "" {
v0Route.Method = &route.Method
}
if route.URL != "" {
v0Route.Url = &route.URL
}
if route.ReqRole != "" {
reqRole := string(route.ReqRole)
v0Route.ReqRole = &reqRole
}
if route.ReqAction != "" {
v0Route.ReqAction = &route.ReqAction
}
if len(route.Headers) > 0 {
headers := make([]string, 0, len(route.Headers))
for _, header := range route.Headers {
headers = append(headers, header.Name+": "+header.Content)
}
v0Route.Headers = headers
}
if len(route.URLParams) > 0 {
v0Route.UrlParams = make([]pluginsv0alpha1.MetaV0alpha1RouteUrlParams, 0, len(route.URLParams))
for _, param := range route.URLParams {
v0Param := pluginsv0alpha1.MetaV0alpha1RouteUrlParams{}
if param.Name != "" {
v0Param.Name = &param.Name
}
if param.Content != "" {
v0Param.Content = &param.Content
}
v0Route.UrlParams = append(v0Route.UrlParams, v0Param)
}
}
if route.TokenAuth != nil {
v0Route.TokenAuth = &pluginsv0alpha1.MetaV0alpha1RouteTokenAuth{}
if route.TokenAuth.Url != "" {
v0Route.TokenAuth.Url = &route.TokenAuth.Url
}
if len(route.TokenAuth.Scopes) > 0 {
v0Route.TokenAuth.Scopes = route.TokenAuth.Scopes
}
if len(route.TokenAuth.Params) > 0 {
v0Route.TokenAuth.Params = make(map[string]interface{})
for k, v := range route.TokenAuth.Params {
v0Route.TokenAuth.Params[k] = v
}
}
}
if route.JwtTokenAuth != nil {
v0Route.JwtTokenAuth = &pluginsv0alpha1.MetaV0alpha1RouteJwtTokenAuth{}
if route.JwtTokenAuth.Url != "" {
v0Route.JwtTokenAuth.Url = &route.JwtTokenAuth.Url
}
if len(route.JwtTokenAuth.Scopes) > 0 {
v0Route.JwtTokenAuth.Scopes = route.JwtTokenAuth.Scopes
}
if len(route.JwtTokenAuth.Params) > 0 {
v0Route.JwtTokenAuth.Params = make(map[string]interface{})
for k, v := range route.JwtTokenAuth.Params {
v0Route.JwtTokenAuth.Params[k] = v
}
}
}
if len(route.Body) > 0 {
var bodyMap map[string]interface{}
if err := json.Unmarshal(route.Body, &bodyMap); err == nil {
v0Route.Body = bodyMap
}
}
meta.Routes = append(meta.Routes, v0Route)
}
}
// Map Extensions
if len(jsonData.Extensions.AddedLinks) > 0 || len(jsonData.Extensions.AddedComponents) > 0 ||
len(jsonData.Extensions.ExposedComponents) > 0 || len(jsonData.Extensions.ExtensionPoints) > 0 {
extensions := &pluginsv0alpha1.MetaExtensions{}
if len(jsonData.Extensions.AddedLinks) > 0 {
extensions.AddedLinks = make([]pluginsv0alpha1.MetaV0alpha1ExtensionsAddedLinks, 0, len(jsonData.Extensions.AddedLinks))
for _, link := range jsonData.Extensions.AddedLinks {
v0Link := pluginsv0alpha1.MetaV0alpha1ExtensionsAddedLinks{
Targets: link.Targets,
Title: link.Title,
}
if link.Description != "" {
v0Link.Description = &link.Description
}
extensions.AddedLinks = append(extensions.AddedLinks, v0Link)
}
}
if len(jsonData.Extensions.AddedComponents) > 0 {
extensions.AddedComponents = make([]pluginsv0alpha1.MetaV0alpha1ExtensionsAddedComponents, 0, len(jsonData.Extensions.AddedComponents))
for _, comp := range jsonData.Extensions.AddedComponents {
v0Comp := pluginsv0alpha1.MetaV0alpha1ExtensionsAddedComponents{
Targets: comp.Targets,
Title: comp.Title,
}
if comp.Description != "" {
v0Comp.Description = &comp.Description
}
extensions.AddedComponents = append(extensions.AddedComponents, v0Comp)
}
}
if len(jsonData.Extensions.ExposedComponents) > 0 {
extensions.ExposedComponents = make([]pluginsv0alpha1.MetaV0alpha1ExtensionsExposedComponents, 0, len(jsonData.Extensions.ExposedComponents))
for _, comp := range jsonData.Extensions.ExposedComponents {
v0Comp := pluginsv0alpha1.MetaV0alpha1ExtensionsExposedComponents{
Id: comp.Id,
}
if comp.Title != "" {
v0Comp.Title = &comp.Title
}
if comp.Description != "" {
v0Comp.Description = &comp.Description
}
extensions.ExposedComponents = append(extensions.ExposedComponents, v0Comp)
}
}
if len(jsonData.Extensions.ExtensionPoints) > 0 {
extensions.ExtensionPoints = make([]pluginsv0alpha1.MetaV0alpha1ExtensionsExtensionPoints, 0, len(jsonData.Extensions.ExtensionPoints))
for _, point := range jsonData.Extensions.ExtensionPoints {
v0Point := pluginsv0alpha1.MetaV0alpha1ExtensionsExtensionPoints{
Id: point.Id,
}
if point.Title != "" {
v0Point.Title = &point.Title
}
if point.Description != "" {
v0Point.Description = &point.Description
}
extensions.ExtensionPoints = append(extensions.ExtensionPoints, v0Point)
}
}
meta.Extensions = extensions
}
// Map Roles
if len(jsonData.Roles) > 0 {
meta.Roles = make([]pluginsv0alpha1.MetaRole, 0, len(jsonData.Roles))
for _, role := range jsonData.Roles {
v0Role := pluginsv0alpha1.MetaRole{
Grants: role.Grants,
}
if role.Role.Name != "" || role.Role.Description != "" || len(role.Role.Permissions) > 0 {
v0RoleRole := &pluginsv0alpha1.MetaV0alpha1RoleRole{}
if role.Role.Name != "" {
v0RoleRole.Name = &role.Role.Name
}
if role.Role.Description != "" {
v0RoleRole.Description = &role.Role.Description
}
if len(role.Role.Permissions) > 0 {
v0RoleRole.Permissions = make([]pluginsv0alpha1.MetaV0alpha1RoleRolePermissions, 0, len(role.Role.Permissions))
for _, perm := range role.Role.Permissions {
v0Perm := pluginsv0alpha1.MetaV0alpha1RoleRolePermissions{}
if perm.Action != "" {
v0Perm.Action = &perm.Action
}
if perm.Scope != "" {
v0Perm.Scope = &perm.Scope
}
v0RoleRole.Permissions = append(v0RoleRole.Permissions, v0Perm)
}
}
v0Role.Role = v0RoleRole
}
meta.Roles = append(meta.Roles, v0Role)
}
}
// Map IAM
if jsonData.IAM != nil && len(jsonData.IAM.Permissions) > 0 {
iam := &pluginsv0alpha1.MetaIAM{
Permissions: make([]pluginsv0alpha1.MetaV0alpha1IAMPermissions, 0, len(jsonData.IAM.Permissions)),
}
for _, perm := range jsonData.IAM.Permissions {
v0Perm := pluginsv0alpha1.MetaV0alpha1IAMPermissions{}
if perm.Action != "" {
v0Perm.Action = &perm.Action
}
if perm.Scope != "" {
v0Perm.Scope = &perm.Scope
}
iam.Permissions = append(iam.Permissions, v0Perm)
}
meta.Iam = iam
}
return meta
return pluginsLoader.New(cfg, d, b, v, i, t, et)
}
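The mapping code above repeatedly promotes non-empty strings to optional pointer fields (`Title`, `Description`, `Action`, `Scope`). That recurring pattern can be factored into a tiny helper, sketched here standalone — the helper name is illustrative, not from the source:

```go
package main

import "fmt"

// strPtrIfSet returns a pointer to s when s is non-empty, else nil,
// mirroring the "only set optional fields when present" mapping above.
func strPtrIfSet(s string) *string {
	if s == "" {
		return nil
	}
	return &s
}

func main() {
	fmt.Println(strPtrIfSet("") == nil) // → true
	fmt.Println(*strPtrIfSet("hello"))  // → hello
}
```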

View File

@@ -22,10 +22,12 @@ func TestCoreProvider_GetMeta(t *testing.T) {
t.Run("returns cached plugin when available", func(t *testing.T) {
provider := NewCoreProvider()
expectedMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expectedMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
provider.mu.Lock()
@@ -58,10 +60,12 @@ func TestCoreProvider_GetMeta(t *testing.T) {
t.Run("ignores version parameter", func(t *testing.T) {
provider := NewCoreProvider()
expectedMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expectedMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
provider.mu.Lock()
@@ -81,10 +85,12 @@ func TestCoreProvider_GetMeta(t *testing.T) {
customTTL := 2 * time.Hour
provider := NewCoreProviderWithTTL(customTTL)
expectedMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expectedMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
provider.mu.Lock()
@@ -226,8 +232,8 @@ func TestCoreProvider_loadPlugins(t *testing.T) {
if loaded {
result, err := provider.GetMeta(ctx, "test-datasource", "1.0.0")
require.NoError(t, err)
assert.Equal(t, "test-datasource", result.Meta.Id)
assert.Equal(t, "Test Datasource", result.Meta.Name)
assert.Equal(t, "test-datasource", result.Meta.PluginJson.Id)
assert.Equal(t, "Test Datasource", result.Meta.PluginJson.Name)
}
})
}

View File

@@ -0,0 +1,53 @@
package meta
import (
"context"
"time"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginstore"
)
const (
defaultLocalTTL = 1 * time.Hour
)
// PluginAssetsCalculator is an interface for calculating plugin asset information.
// LocalProvider requires this to calculate loading strategy and module hash.
type PluginAssetsCalculator interface {
LoadingStrategy(ctx context.Context, p pluginstore.Plugin) plugins.LoadingStrategy
ModuleHash(ctx context.Context, p pluginstore.Plugin) string
}
// LocalProvider retrieves plugin metadata for locally installed plugins.
// It uses the plugin store to access plugins that have already been loaded.
type LocalProvider struct {
store pluginstore.Store
pluginAssets PluginAssetsCalculator
}
// NewLocalProvider creates a new LocalProvider for locally installed plugins.
// pluginAssets is required for calculating loading strategy and module hash.
func NewLocalProvider(pluginStore pluginstore.Store, pluginAssets PluginAssetsCalculator) *LocalProvider {
return &LocalProvider{
store: pluginStore,
pluginAssets: pluginAssets,
}
}
// GetMeta retrieves plugin metadata for locally installed plugins.
func (p *LocalProvider) GetMeta(ctx context.Context, pluginID, version string) (*Result, error) {
plugin, exists := p.store.Plugin(ctx, pluginID)
if !exists {
return nil, ErrMetaNotFound
}
loadingStrategy := p.pluginAssets.LoadingStrategy(ctx, plugin)
moduleHash := p.pluginAssets.ModuleHash(ctx, plugin)
spec := pluginStorePluginToMeta(plugin, loadingStrategy, moduleHash)
return &Result{
Meta: spec,
TTL: defaultLocalTTL,
}, nil
}

View File

@@ -16,7 +16,7 @@ const (
// cachedMeta represents a cached metadata entry with expiration time
type cachedMeta struct {
meta pluginsv0alpha1.MetaJSONData
meta pluginsv0alpha1.MetaSpec
ttl time.Duration
expiresAt time.Time
}
@@ -84,7 +84,7 @@ func (pm *ProviderManager) GetMeta(ctx context.Context, pluginID, version string
if err == nil {
// Don't cache results with a zero TTL
if result.TTL == 0 {
continue
return result, nil
}
pm.cacheMu.Lock()

View File

@@ -35,10 +35,12 @@ func TestProviderManager_GetMeta(t *testing.T) {
ctx := context.Background()
t.Run("returns cached result when available and not expired", func(t *testing.T) {
cachedMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
cachedMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
provider := &mockProvider{
@@ -60,8 +62,10 @@ func TestProviderManager_GetMeta(t *testing.T) {
provider.getMetaFunc = func(ctx context.Context, pluginID, version string) (*Result, error) {
return &Result{
Meta: pluginsv0alpha1.MetaJSONData{Id: "different"},
TTL: time.Hour,
Meta: pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{Id: "different"},
},
TTL: time.Hour,
}, nil
}
@@ -73,10 +77,12 @@ func TestProviderManager_GetMeta(t *testing.T) {
})
t.Run("fetches from provider when not cached", func(t *testing.T) {
expectedMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expectedMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
expectedTTL := 2 * time.Hour
@@ -107,19 +113,16 @@ func TestProviderManager_GetMeta(t *testing.T) {
assert.Equal(t, expectedTTL, cached.ttl)
})
t.Run("does not cache result with zero TTL and tries next provider", func(t *testing.T) {
zeroTTLMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Zero TTL Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
}
expectedMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
t.Run("does not cache result with zero TTL", func(t *testing.T) {
zeroTTLMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Zero TTL Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
provider1 := &mockProvider{
provider := &mockProvider{
getMetaFunc: func(ctx context.Context, pluginID, version string) (*Result, error) {
return &Result{
Meta: zeroTTLMeta,
@@ -127,37 +130,30 @@ func TestProviderManager_GetMeta(t *testing.T) {
}, nil
},
}
provider2 := &mockProvider{
getMetaFunc: func(ctx context.Context, pluginID, version string) (*Result, error) {
return &Result{
Meta: expectedMeta,
TTL: time.Hour,
}, nil
},
}
pm := NewProviderManager(provider1, provider2)
pm := NewProviderManager(provider)
result, err := pm.GetMeta(ctx, "test-plugin", "1.0.0")
require.NoError(t, err)
require.NotNil(t, result)
assert.Equal(t, expectedMeta, result.Meta)
assert.Equal(t, zeroTTLMeta, result.Meta)
assert.Equal(t, time.Duration(0), result.TTL)
pm.cacheMu.RLock()
cached, exists := pm.cache["test-plugin:1.0.0"]
_, exists := pm.cache["test-plugin:1.0.0"]
pm.cacheMu.RUnlock()
assert.True(t, exists)
assert.Equal(t, expectedMeta, cached.meta)
assert.Equal(t, time.Hour, cached.ttl)
assert.False(t, exists, "zero TTL results should not be cached")
})
t.Run("tries next provider when first returns ErrMetaNotFound", func(t *testing.T) {
expectedMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expectedMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
provider1 := &mockProvider{
@@ -229,15 +225,19 @@ func TestProviderManager_GetMeta(t *testing.T) {
})
t.Run("skips expired cache entries", func(t *testing.T) {
expiredMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Expired Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expiredMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Expired Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
expectedMeta := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expectedMeta := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Test Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
callCount := 0
@@ -272,15 +272,19 @@ func TestProviderManager_GetMeta(t *testing.T) {
})
t.Run("uses first successful provider", func(t *testing.T) {
expectedMeta1 := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Provider 1 Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expectedMeta1 := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Provider 1 Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
expectedMeta2 := pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Provider 2 Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
expectedMeta2 := pluginsv0alpha1.MetaSpec{
PluginJson: pluginsv0alpha1.MetaJSONData{
Id: "test-plugin",
Name: "Provider 2 Plugin",
Type: pluginsv0alpha1.MetaJSONDataTypeDatasource,
},
}
provider1 := &mockProvider{
@@ -331,9 +335,9 @@ func TestProviderManager_Run(t *testing.T) {
func TestProviderManager_cleanupExpired(t *testing.T) {
t.Run("removes expired entries", func(t *testing.T) {
validMeta := pluginsv0alpha1.MetaJSONData{Id: "valid"}
expiredMeta1 := pluginsv0alpha1.MetaJSONData{Id: "expired1"}
expiredMeta2 := pluginsv0alpha1.MetaJSONData{Id: "expired2"}
validMeta := pluginsv0alpha1.MetaSpec{PluginJson: pluginsv0alpha1.MetaJSONData{Id: "valid"}}
expiredMeta1 := pluginsv0alpha1.MetaSpec{PluginJson: pluginsv0alpha1.MetaJSONData{Id: "expired1"}}
expiredMeta2 := pluginsv0alpha1.MetaSpec{PluginJson: pluginsv0alpha1.MetaJSONData{Id: "expired2"}}
provider := &mockProvider{
getMetaFunc: func(ctx context.Context, pluginID, version string) (*Result, error) {

View File

@@ -14,7 +14,7 @@ var (
// Result contains plugin metadata along with its recommended TTL.
type Result struct {
Meta pluginsv0alpha1.MetaJSONData
Meta pluginsv0alpha1.MetaSpec
TTL time.Duration
}

View File

@@ -121,8 +121,19 @@ func (s *MetaStorage) List(ctx context.Context, options *internalversion.ListOpt
continue
}
pluginMeta := createMetaFromMetaJSONData(result.Meta, plugin.Name, plugin.Namespace)
metaItems = append(metaItems, *pluginMeta)
pluginMeta := pluginsv0alpha1.Meta{
ObjectMeta: metav1.ObjectMeta{
Name: plugin.Name,
Namespace: plugin.Namespace,
},
Spec: result.Meta,
}
pluginMeta.SetGroupVersionKind(schema.GroupVersionKind{
Group: pluginsv0alpha1.APIGroup,
Version: pluginsv0alpha1.APIVersion,
Kind: pluginsv0alpha1.MetaKind().Kind(),
})
metaItems = append(metaItems, pluginMeta)
}
list := &pluginsv0alpha1.MetaList{
@@ -169,27 +180,18 @@ func (s *MetaStorage) Get(ctx context.Context, name string, options *metav1.GetO
return nil, apierrors.NewInternalError(fmt.Errorf("failed to fetch plugin metadata: %w", err))
}
return createMetaFromMetaJSONData(result.Meta, name, ns.Value), nil
}
// createMetaFromMetaJSONData creates a Meta k8s object from MetaJSONData and plugin metadata.
func createMetaFromMetaJSONData(pluginJSON pluginsv0alpha1.MetaJSONData, name, namespace string) *pluginsv0alpha1.Meta {
pluginMeta := &pluginsv0alpha1.Meta{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Spec: pluginsv0alpha1.MetaSpec{
PluginJSON: pluginJSON,
Name: plugin.Name,
Namespace: plugin.Namespace,
},
Spec: result.Meta,
}
// Set the GroupVersionKind
pluginMeta.SetGroupVersionKind(schema.GroupVersionKind{
Group: pluginsv0alpha1.APIGroup,
Version: pluginsv0alpha1.APIVersion,
Kind: pluginsv0alpha1.MetaKind().Kind(),
})
return pluginMeta
return pluginMeta, nil
}

View File

@@ -7,6 +7,7 @@ require (
github.com/google/go-github/v70 v70.0.0
github.com/google/uuid v1.6.0
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f
github.com/grafana/grafana-app-sdk v0.48.7
github.com/grafana/grafana-app-sdk/logging v0.48.7
github.com/grafana/grafana/apps/secret v0.0.0-20250902093454-b56b7add012f
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250804150913-990f1c69ecc2
@@ -28,6 +29,7 @@ require (
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.13.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/getkin/kin-openapi v0.133.0 // indirect
github.com/go-jose/go-jose/v3 v3.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
@@ -54,13 +56,20 @@ require (
github.com/gorilla/mux v1.8.1 // indirect
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // indirect
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 // indirect
github.com/grafana/grafana-app-sdk v0.48.7 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/mailru/easyjson v0.9.1 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 // indirect
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 // indirect
github.com/patrickmn/go-cache v2.1.0+incompatible // indirect
github.com/perimeterx/marshmallow v1.1.5 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.23.2 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
@@ -68,6 +77,7 @@ require (
github.com/prometheus/procfs v0.19.2 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/woodsbury/decimal128 v1.4.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect

View File

@@ -14,6 +14,8 @@ github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/getkin/kin-openapi v0.133.0 h1:pJdmNohVIJ97r4AUFtEXRXwESr8b0bD721u/Tz6k8PQ=
github.com/getkin/kin-openapi v0.133.0/go.mod h1:boAciF6cXk5FhPqe/NQeBTeenbjqU4LhWBf09ILVvWE=
github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY=
github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
@@ -59,6 +61,8 @@ github.com/go-openapi/testify/v2 v2.0.2 h1:X999g3jeLcoY8qctY/c/Z8iBHTbwLz7R2WXd6
github.com/go-openapi/testify/v2 v2.0.2/go.mod h1:HCPmvFFnheKK2BuwSA0TbbdxJ3I16pjwMkYkP4Ywn54=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-test/deep v1.1.1 h1:0r/53hagsehfO4bzD2Pgr/+RgHqhmf+k1Bpse2cTu1U=
github.com/go-test/deep v1.1.1/go.mod h1:5C2ZWiW0ErCdrYzpqxLbTX7MG14M9iiw8DgHncVwcsE=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
@@ -98,6 +102,13 @@ github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250804150913-990f1c69ecc2 h
github.com/grafana/grafana/pkg/apimachinery v0.0.0-20250804150913-990f1c69ecc2/go.mod h1:RRvSjHH12/PnQaXraMO65jUhVu8n59mzvhfIMBETnV4=
github.com/grafana/nanogit v0.3.0 h1:XNEef+4Vi+465ZITJs/g/xgnDRJbWhhJ7iQrAnWZ0oQ=
github.com/grafana/nanogit v0.3.0/go.mod h1:6s6CCTpyMOHPpcUZaLGI+rgBEKdmxVbhqSGgCK13j7Y=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
@@ -110,6 +121,8 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/mailru/easyjson v0.9.1 h1:LbtsOm5WAswyWbvTEOqhypdPeZzHavpZx96/n553mR8=
github.com/mailru/easyjson v0.9.1/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/migueleliasweb/go-github-mock v1.1.0 h1:GKaOBPsrPGkAKgtfuWY8MclS1xR6MInkx1SexJucMwE=
github.com/migueleliasweb/go-github-mock v1.1.0/go.mod h1:pYe/XlGs4BGMfRY4vmeixVsODHnVDDhJ9zoi0qzSMHc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -118,14 +131,22 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037 h1:G7ERwszslrBzRxj//JalHPu/3yz+De2J+4aLtSRlHiY=
github.com/oasdiff/yaml v0.0.0-20250309154309-f31be36b4037/go.mod h1:2bpvgLBZEtENV5scfDFEtB/5+1M4hkQhDQrccEJ/qGw=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90 h1:bQx3WeLcUWy+RletIKwUIt4x3t8n2SxavmoclizMb8c=
github.com/oasdiff/yaml3 v0.0.0-20250309153720-d2182401db90/go.mod h1:y5+oSEHCPT/DGrS++Wc/479ERge0zTFxaF8PbGKcg2o=
github.com/onsi/ginkgo/v2 v2.22.2 h1:/3X8Panh8/WwhU/3Ssa6rCKqPLuAkVY2I0RoyDLySlU=
github.com/onsi/ginkgo/v2 v2.22.2/go.mod h1:oeMosUL+8LtarXBHu/c0bx2D/K9zyQ6uX3cTyztHwsk=
github.com/onsi/gomega v1.36.2 h1:koNYke6TVk6ZmnyHrCXba/T/MoLBXFjeC1PtvYgw0A8=
github.com/onsi/gomega v1.36.2/go.mod h1:DdwyADRjrc825LhMEkD76cHR5+pUnjhUN8GlHlRPHzY=
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
github.com/perimeterx/marshmallow v1.1.5 h1:a2LALqQ1BlHM8PZblsDdidgv1mWi1DgC2UmX50IvK2s=
github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0Vchtcl8we9tYaXw=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -150,6 +171,10 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/woodsbury/decimal128 v1.4.0 h1:xJATj7lLu4f2oObouMt2tgGiElE5gO6mSWUjQsBgUlc=
github.com/woodsbury/decimal128 v1.4.0/go.mod h1:BP46FUrVjVhdTbKT+XuQh2xfQaGki9LMIRJSFuh6THU=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=

View File

@@ -1,9 +1,10 @@
package repository
manifest: {
appName: "provisioning"
groupOverride: "provisioning.grafana.app"
kinds: [
appName: "provisioning"
groupOverride: "provisioning.grafana.app"
preferredVersion: "v0alpha1"
kinds: [
repository,
connection
]

View File

@@ -80,7 +80,7 @@ repository: {
// Enabled must be saved as true before any sync job will run
enabled: bool
// Where values should be saved
target: "unified" | "legacy"
target: "instance" | "folder"
// When non-zero, the sync will run periodically
intervalSeconds?: int
}

View File

@@ -0,0 +1,92 @@
//
// This file is generated by grafana-app-sdk
// DO NOT EDIT
//
package manifestdata
import (
"fmt"
"strings"
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/resource"
"k8s.io/apimachinery/pkg/runtime"
)
var appManifestData = app.ManifestData{
AppName: "provisioning",
Group: "provisioning.grafana.app",
PreferredVersion: "v0alpha1",
Versions: []app.ManifestVersion{},
}
func LocalManifest() app.Manifest {
return app.NewEmbeddedManifest(appManifestData)
}
func RemoteManifest() app.Manifest {
return app.NewAPIServerManifest("provisioning")
}
var kindVersionToGoType = map[string]resource.Kind{}
// ManifestGoTypeAssociator returns the associated resource.Kind instance for a given Kind and Version, if one exists.
// If there is no association for the provided Kind and Version, exists will return false.
func ManifestGoTypeAssociator(kind, version string) (goType resource.Kind, exists bool) {
goType, exists = kindVersionToGoType[fmt.Sprintf("%s/%s", kind, version)]
return goType, exists
}
var customRouteToGoResponseType = map[string]any{}
// ManifestCustomRouteResponsesAssociator returns the associated response go type for a given kind, version, custom route path, and method, if one exists.
// kind may be empty for custom routes which are not kind subroutes. Leading slashes are removed from subroute paths.
// If there is no association for the provided kind, version, custom route path, and method, exists will return false.
// Resource routes (those without a kind) should prefix their route with "<namespace>/" if the route is namespaced (otherwise the route is assumed to be cluster-scope)
func ManifestCustomRouteResponsesAssociator(kind, version, path, verb string) (goType any, exists bool) {
if len(path) > 0 && path[0] == '/' {
path = path[1:]
}
goType, exists = customRouteToGoResponseType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
return goType, exists
}
var customRouteToGoParamsType = map[string]runtime.Object{}
func ManifestCustomRouteQueryAssociator(kind, version, path, verb string) (goType runtime.Object, exists bool) {
if len(path) > 0 && path[0] == '/' {
path = path[1:]
}
goType, exists = customRouteToGoParamsType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
return goType, exists
}
var customRouteToGoRequestBodyType = map[string]any{}
func ManifestCustomRouteRequestBodyAssociator(kind, version, path, verb string) (goType any, exists bool) {
if len(path) > 0 && path[0] == '/' {
path = path[1:]
}
goType, exists = customRouteToGoRequestBodyType[fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))]
return goType, exists
}
type GoTypeAssociator struct{}
func NewGoTypeAssociator() *GoTypeAssociator {
return &GoTypeAssociator{}
}
func (g *GoTypeAssociator) KindToGoType(kind, version string) (goType resource.Kind, exists bool) {
return ManifestGoTypeAssociator(kind, version)
}
func (g *GoTypeAssociator) CustomRouteReturnGoType(kind, version, path, verb string) (goType any, exists bool) {
return ManifestCustomRouteResponsesAssociator(kind, version, path, verb)
}
func (g *GoTypeAssociator) CustomRouteQueryGoType(kind, version, path, verb string) (goType runtime.Object, exists bool) {
return ManifestCustomRouteQueryAssociator(kind, version, path, verb)
}
func (g *GoTypeAssociator) CustomRouteRequestBodyGoType(kind, version, path, verb string) (goType any, exists bool) {
return ManifestCustomRouteRequestBodyAssociator(kind, version, path, verb)
}
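The associator maps above key custom routes as `version|kind|path|VERB`, upper-casing the verb and trimming a leading slash from the path. A minimal standalone sketch of that key normalization (the function name is illustrative, not from the generated file):

```go
package main

import (
	"fmt"
	"strings"
)

// customRouteKey mirrors the lookup-key format used by the generated
// associators: "<version>|<kind>|<path>|<VERB>", with any leading slash
// removed from the subroute path and the verb upper-cased.
func customRouteKey(kind, version, path, verb string) string {
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	return fmt.Sprintf("%s|%s|%s|%s", version, kind, path, strings.ToUpper(verb))
}

func main() {
	fmt.Println(customRouteKey("Repository", "v0alpha1", "/stats", "get"))
	// → v0alpha1|Repository|stats|GET
}
```

Note that kind may be empty for resource-level routes, which simply yields an empty middle segment in the key.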

View File

@@ -136,9 +136,6 @@ type ExportJobOptions struct {
}
type MigrateJobOptions struct {
// Preserve history (if possible)
History bool `json:"history,omitempty"`
// Message to use when committing the changes in a single commit
Message string `json:"message,omitempty"`
}

View File

@@ -9,11 +9,6 @@ import (
type RepositoryViewList struct {
metav1.TypeMeta `json:",inline"`
// The backend is using legacy storage
// FIXME: Not sure where this should be exposed... but we need it somewhere
// The UI should force the onboarding workflow when this is true
LegacyStorage bool `json:"legacyStorage,omitempty"`
// The valid targets (can disable instance or folder types)
AllowedTargets []SyncTargetType `json:"allowedTargets,omitempty"`

View File

@@ -1495,13 +1495,6 @@ func schema_pkg_apis_provisioning_v0alpha1_MigrateJobOptions(ref common.Referenc
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"history": {
SchemaProps: spec.SchemaProps{
Description: "Preserve history (if possible)",
Type: []string{"boolean"},
Format: "",
},
},
"message": {
SchemaProps: spec.SchemaProps{
Description: "Message to use when committing the changes in a single commit",
@@ -2119,13 +2112,6 @@ func schema_pkg_apis_provisioning_v0alpha1_RepositoryViewList(ref common.Referen
Format: "",
},
},
"legacyStorage": {
SchemaProps: spec.SchemaProps{
Description: "The backend is using legacy storage FIXME: Not sure where this should be exposed... but we need it somewhere The UI should force the onboarding workflow when this is true",
Type: []string{"boolean"},
Format: "",
},
},
"allowedTargets": {
SchemaProps: spec.SchemaProps{
Description: "The valid targets (can disable instance or folder types)",

View File

@@ -7,7 +7,6 @@ package v0alpha1
// MigrateJobOptionsApplyConfiguration represents a declarative configuration of the MigrateJobOptions type for use
// with apply.
type MigrateJobOptionsApplyConfiguration struct {
History *bool `json:"history,omitempty"`
Message *string `json:"message,omitempty"`
}
@@ -17,14 +16,6 @@ func MigrateJobOptions() *MigrateJobOptionsApplyConfiguration {
return &MigrateJobOptionsApplyConfiguration{}
}
// WithHistory sets the History field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the History field is set to the value of the last call.
func (b *MigrateJobOptionsApplyConfiguration) WithHistory(value bool) *MigrateJobOptionsApplyConfiguration {
b.History = &value
return b
}
// WithMessage sets the Message field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the Message field is set to the value of the last call.

View File

@@ -384,8 +384,7 @@ func TestValidateJob(t *testing.T) {
Action: provisioning.JobActionMigrate,
Repository: "test-repo",
Migrate: &provisioning.MigrateJobOptions{
History: true,
Message: "Migrate from legacy",
Message: "Migrate from unified",
},
},
},

View File

@@ -238,6 +238,8 @@ func (r *gitRepository) Read(ctx context.Context, filePath, ref string) (*reposi
// Check if the path represents a directory
if safepath.IsDir(filePath) {
// Strip trailing slash for git tree lookup to avoid empty path components
finalPath = strings.TrimSuffix(finalPath, "/")
tree, err := r.client.GetTreeByPath(ctx, commit.Tree, finalPath)
if err != nil {
if errors.Is(err, nanogit.ErrObjectNotFound) {

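The fix above trims the trailing slash before the git tree lookup because a directory path like `dashboards/` would otherwise split into an empty final path component. A small standalone sketch of the normalization (the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// treeLookupPath normalizes a directory path for a tree lookup:
// splitting "dashboards/" on "/" would yield an empty trailing
// component, so the trailing slash is stripped first.
func treeLookupPath(p string) []string {
	p = strings.TrimSuffix(p, "/")
	if p == "" {
		return nil
	}
	return strings.Split(p, "/")
}

func main() {
	fmt.Println(treeLookupPath("dashboards/")) // → [dashboards]
	fmt.Println(len(treeLookupPath("a/b/")))   // → 2
}
```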
View File

@@ -46,7 +46,7 @@ Complete the following steps to install Grafana from the APT repository:
1. Install the prerequisite packages:
```bash
sudo apt-get install -y apt-transport-https software-properties-common wget
sudo apt-get install -y apt-transport-https wget
```
1. Import the GPG key:

View File

@@ -763,11 +763,6 @@
"count": 1
}
},
"packages/grafana-ui/src/components/Select/resetSelectStyles.ts": {
"@typescript-eslint/no-explicit-any": {
"count": 1
}
},
"packages/grafana-ui/src/components/Select/types.ts": {
"@typescript-eslint/no-explicit-any": {
"count": 6
@@ -2402,11 +2397,6 @@
"count": 1
}
},
"public/app/features/datasources/components/DataSourceLoadError.tsx": {
"no-restricted-syntax": {
"count": 1
}
},
"public/app/features/datasources/components/DataSourcePluginState.tsx": {
"no-restricted-syntax": {
"count": 3

View File

@@ -13,6 +13,7 @@ cel.dev/expr v0.16.0/go.mod h1:TRSuuV7DlVCE/uwv5QbAiW/v8l5O8C4eEPHeu7gf7Sg=
cel.dev/expr v0.19.0/go.mod h1:MrpN08Q+lEBs+bGYdLxxHkZoUSsCp0nSKTs0nTymJgw=
cel.dev/expr v0.23.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cel.dev/expr v0.23.1/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cloud.google.com/go v0.82.0/go.mod h1:vlKccHJGuFBFufnAnuB08dfEH9Y3H7dzDzRECFdC2TA=
cloud.google.com/go v0.121.0/go.mod h1:rS7Kytwheu/y9buoDmu5EIpMMCI4Mb8ND4aeN4Vwj7Q=
cloud.google.com/go v0.121.1/go.mod h1:nRFlrHq39MNVWu+zESP2PosMWA0ryJw8KUBZ2iZpxbw=
@@ -329,6 +330,8 @@ github.com/KimMachineGun/automemlimit v0.7.1 h1:QcG/0iCOLChjfUweIMC3YL5Xy9C3VBeN
github.com/KimMachineGun/automemlimit v0.7.1/go.mod h1:QZxpHaGOQoYvFhv/r4u3U0JTC2ZcOwbSr11UZF46UBM=
github.com/KyleBanks/depth v1.2.1 h1:5h8fQADFrWtarTdtDudMmGsC7GPbOAu6RVB3ffsVFHc=
github.com/KyleBanks/depth v1.2.1/go.mod h1:jzSb9d0L43HxTQfT+oSA1EEp2q+ne2uh6XgeJcm8brE=
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
github.com/MarvinJWendt/testza v0.1.0/go.mod h1:7AxNvlfeHP7Z/hDQ5JtE3OKYT3XFUeLCDE2DQninSqs=
github.com/MarvinJWendt/testza v0.2.1/go.mod h1:God7bhG8n6uQxwdScay+gjm9/LnO4D3kkcZX4hv9Rp8=
github.com/MarvinJWendt/testza v0.2.8/go.mod h1:nwIcjmr0Zz+Rcwfh3/4UhBp7ePKVhuBExvZqnKYWlII=
@@ -410,6 +413,7 @@ github.com/apache/arrow/go/v15 v15.0.2/go.mod h1:DGXsR3ajT524njufqf95822i+KTh+ye
github.com/apache/thrift v0.21.0/go.mod h1:W1H8aR/QRtYNvrPeFXBtobyRkd0/YVhTc6i07XIAgDw=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e h1:QEF07wC0T1rKkctt1RINW/+RMTVmiwxETico2l3gxJA=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6 h1:G1bPvciwNyF7IUmKXNt9Ak3m6u9DE1rF+RmtIkBpVdA=
github.com/at-wat/mqtt-go v0.19.4/go.mod h1:AsiWc9kqVOhqq7LzUeWT/AkKUBfx3Sw5cEe8lc06fqA=
github.com/atc0005/go-teams-notify/v2 v2.13.0 h1:nbDeHy89NjYlF/PEfLVF6lsserY9O5SnN1iOIw3AxXw=
github.com/atc0005/go-teams-notify/v2 v2.13.0/go.mod h1:WSv9moolRsBcpZbwEf6gZxj7h0uJlJskJq5zkEWKO8Y=
github.com/atomicgo/cursor v0.0.1/go.mod h1:cBON2QmmrysudxNBFthvMtN32r3jxVRIvzkUiF/RuIk=
@@ -489,6 +493,8 @@ github.com/aws/smithy-go v1.22.5/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp
github.com/aws/smithy-go v1.23.0/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/awslabs/aws-lambda-go-api-proxy v0.16.2 h1:CJyGEyO1CIwOnXTU40urf0mchf6t3voxpvUDikOU9LY=
github.com/awslabs/aws-lambda-go-api-proxy v0.16.2/go.mod h1:vxxjwBHe/KbgFeNlAP/Tvp4SsVRL3WQamcWRxqVh0z0=
github.com/aymanbagabas/go-udiff v0.2.0 h1:TK0fH4MteXUDspT88n8CKzvK0X9O2xu9yQjWpi6yML8=
github.com/aymanbagabas/go-udiff v0.2.0/go.mod h1:RE4Ex0qsGkTAJoQdQQCA0uG+nAzJO/pI/QwceO5fgrA=
github.com/aymerick/douceur v0.2.0 h1:Mv+mAeH1Q+n9Fr+oyamOlAkUNPWPlA8PPGR0QAaYuPk=
github.com/aymerick/douceur v0.2.0/go.mod h1:wlT5vV2O3h55X9m7iVYN0TBM0NH/MmbLnd30/FjWUq4=
github.com/baidubce/bce-sdk-go v0.9.188 h1:8MA7ewe4VpX01uYl7Kic6ZvfIReUFdSKbY46ZqlQM7U=
@@ -527,6 +533,10 @@ github.com/campoy/embedmd v1.0.0 h1:V4kI2qTJJLf4J29RzI/MAt2c3Bl4dQSYPuflzwFH2hY=
github.com/campoy/embedmd v1.0.0/go.mod h1:oxyr9RCiSXg0M3VJ3ks0UGfp98BpSSGr0kpiX3MzVl8=
github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/census-instrumentation/opencensus-proto v0.4.1 h1:iKLQ0xPNFxR/2hzXZMrBo8f1j86j5WHzznCCQxV/b8g=
github.com/charmbracelet/harmonica v0.2.0 h1:8NxJWRWg/bzKqqEaaeFNipOu77YR5t8aSwG4pgaUBiQ=
github.com/charmbracelet/harmonica v0.2.0/go.mod h1:KSri/1RMQOZLbw7AHqgcBycp8pgJnQMYYT8QZRqZ1Ao=
github.com/charmbracelet/x/exp/golden v0.0.0-20241011142426-46044092ad91 h1:payRxjMjKgx2PaCWLZ4p3ro9y97+TVLZNaRZgJwSVDQ=
github.com/charmbracelet/x/exp/golden v0.0.0-20241011142426-46044092ad91/go.mod h1:wDlXFlCrmJ8J+swcL/MnGUuYnqgQdW9rhSD61oNMb6U=
github.com/centrifugal/centrifuge v0.37.2/go.mod h1:aj4iRJGhzi3SlL8iUtVezxway1Xf8g+hmNQkLLO7sS8=
github.com/centrifugal/protocol v0.16.2/go.mod h1:Q7OpS/8HMXDnL7f9DpNx24IhG96MP88WPpVTTCdrokI=
github.com/chenzhuoyu/base64x v0.0.0-20230717121745-296ad89f973d h1:77cEq6EriyTZ0g/qfRdp61a3Uu/AWrgIq2s0ClJV1g0=
@@ -562,6 +572,7 @@ github.com/coder/quartz v0.1.0 h1:cLL+0g5l7xTf6ordRnUMMiZtRE8Sq5LxpghS63vEXrQ=
github.com/coder/quartz v0.1.0/go.mod h1:vsiCc+AHViMKH2CQpGIpFgdHIEQsxwm8yCscqKmzbRA=
github.com/coder/websocket v1.8.12 h1:5bUXkEPPIbewrnkU8LTCLVaxi4N4J8ahufH2vlo4NAo=
github.com/coder/websocket v1.8.12/go.mod h1:LNVeNrXQZfe5qhS9ALED3uA+l5pPqvwXg3CKoDBB2gs=
github.com/coder/websocket v1.8.13/go.mod h1:LNVeNrXQZfe5qhS9ALED3uA+l5pPqvwXg3CKoDBB2gs=
github.com/containerd/btrfs/v2 v2.0.0/go.mod h1:swkD/7j9HApWpzl8OHfrHNxppPd9l44DFZdF94BUj9k=
github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
github.com/containerd/cgroups/v3 v3.0.2/go.mod h1:JUgITrzdFqp42uI2ryGA+ge0ap/nxzYgkGmIcetmErE=
@@ -684,6 +695,7 @@ github.com/eapache/go-resiliency v1.7.0/go.mod h1:5yPzW0MIvSe0JDsv0v+DvcjEv2FyD6
github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3 h1:Oy0F4ALJ04o5Qqpdz8XLIpNA3WM/iSIXqxtqo7UGVws=
github.com/eapache/go-xerial-snappy v0.0.0-20230731223053-c322873962e3/go.mod h1:YvSRo5mw33fLEx1+DlK6L2VV43tJt5Eyel9n9XBcR+0=
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/efficientgo/tools/core v0.0.0-20220225185207-fe763185946b h1:ZHiD4/yE4idlbqvAO6iYCOYRzOMRpxkW+FKasRA3tsQ=
github.com/efficientgo/tools/core v0.0.0-20220225185207-fe763185946b/go.mod h1:OmVcnJopJL8d3X3sSXTiypGoUSgFq1aDGmlrdi9dn/M=
github.com/elastic/elastic-transport-go/v8 v8.6.1 h1:h2jQRqH6eLGiBSN4eZbQnJLtL4bC5b4lfVFRjw2R4e4=
@@ -716,6 +728,7 @@ github.com/ericlagergren/decimal v0.0.0-20240411145413-00de7ca16731 h1:R/ZjJpjQK
github.com/ericlagergren/decimal v0.0.0-20240411145413-00de7ca16731/go.mod h1:M9R1FoZ3y//hwwnJtO51ypFGwm8ZfpxPT/ZLtO1mcgQ=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM=
github.com/expr-lang/expr v1.17.6/go.mod h1:8/vRC7+7HBzESEqt5kKpYXxrxkr31SaO8r40VO/1IT4=
github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
github.com/fatih/color v1.17.0/go.mod h1:YZ7TlrGPkiz6ku9fK3TLD/pl3CpsiFyu8N92HLgmosI=
@@ -780,6 +793,7 @@ github.com/go-openapi/loads v0.22.0/go.mod h1:yLsaTCS92mnSAZX5WWoxszLj0u+Ojl+Zs5
github.com/go-openapi/spec v0.21.0/go.mod h1:78u6VdPw81XU44qEWGhtr982gJ5BWg2c0I5XwVMotYk=
github.com/go-openapi/strfmt v0.23.0/go.mod h1:NrtIpfKtWIygRkKVsxh7XQMDQW5HKQl6S5ik2elW+K4=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-openapi/validate v0.24.0/go.mod h1:iyeX1sEufmv3nPbBdX3ieNviWnOZaJ1+zquzJEf2BAQ=
github.com/go-pdf/fpdf v0.6.0 h1:MlgtGIfsdMEEQJr2le6b/HNr1ZlQwxyWr77r2aj2U/8=
github.com/go-playground/assert/v2 v2.0.1 h1:MsBgLAaY856+nPRTKrp3/OZK38U/wa0CcBYNjji3q3A=
@@ -862,10 +876,13 @@ github.com/gorilla/handlers v1.5.2/go.mod h1:dX+xVpaxdSw+q0Qek8SSsl3dfMk3jNddUkM
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grafana/alerting v0.0.0-20250729175202-b4b881b7b263/go.mod h1:VKxaR93Gff0ZlO2sPcdPVob1a/UzArFEW5zx3Bpyhls=
github.com/grafana/alerting v0.0.0-20251009192429-9427c24835ae/go.mod h1:VGjS5gDwWEADPP6pF/drqLxEImgeuHlEW5u8E5EfIrM=
github.com/grafana/authlib v0.0.0-20250710201142-9542f2f28d43/go.mod h1:1fWkOiL+m32NBgRHZtlZGz2ji868tPZACYbqP3nBRJI=
github.com/grafana/authlib/types v0.0.0-20250710201142-9542f2f28d43/go.mod h1:qeWYbnWzaYGl88JlL9+DsP1GT2Cudm58rLtx13fKZdw=
github.com/grafana/authlib/types v0.0.0-20250926065801-df98203cff37/go.mod h1:qeWYbnWzaYGl88JlL9+DsP1GT2Cudm58rLtx13fKZdw=
github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2 h1:qhugDMdQ4Vp68H0tp/0iN17DM2ehRo1rLEdOFe/gB8I=
github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2/go.mod h1:w/aiO1POVIeXUQyl0VQSZjl5OAGDTL5aX+4v0RA1tcw=
github.com/grafana/codejen v0.0.4-0.20230321061741-77f656893a3d/go.mod h1:zmwwM/DRyQB7pfuBjTWII3CWtxcXh8LTwAYGfDfpR6s=
github.com/grafana/cog v0.0.43/go.mod h1:TDunc7TYF7EfzjwFOlC5AkMe3To/U2KqyyG3QVvrF38=
github.com/grafana/dskit v0.0.0-20250611075409-46f51e1ce914/go.mod h1:OiN4P4aC6LwLzLbEupH3Ue83VfQoNMfG48rsna8jI/E=
github.com/grafana/dskit v0.0.0-20250818234656-8ff9c6532e85/go.mod h1:kImsvJ1xnmeT9Z6StK+RdEKLzlpzBsKwJbEQfmBJdFs=
@@ -914,6 +931,7 @@ github.com/grafana/grafana-plugin-sdk-go v0.277.0/go.mod h1:mAUWg68w5+1f5TLDqagI
github.com/grafana/grafana-plugin-sdk-go v0.278.0/go.mod h1:+8NXT/XUJ/89GV6FxGQ366NZ3nU+cAXDMd0OUESF9H4=
github.com/grafana/grafana-plugin-sdk-go v0.279.0/go.mod h1:/7oGN6Z7DGTGaLHhgIYrRr6Wvmdsb3BLw5hL4Kbjy88=
github.com/grafana/grafana-plugin-sdk-go v0.280.0/go.mod h1:Z15Wiq3c4I0tzHYrLYpOqrO8u3+2RJ+HN2Q9uiZTILA=
github.com/grafana/grafana-plugin-sdk-go v0.281.0/go.mod h1:3I0g+v6jAwVmrt6BEjDUP4V6pkhGP5QKY5NkXY4Ayr4=
github.com/grafana/grafana-plugin-sdk-go v0.283.0/go.mod h1:20qhoYxIgbZRmwCEO1KMP8q2yq/Kge5+xE/99/hLEk0=
github.com/grafana/grafana/apps/advisor v0.0.0-20250123151950-b066a6313173/go.mod h1:goSDiy3jtC2cp8wjpPZdUHRENcoSUHae1/Px/MDfddA=
github.com/grafana/grafana/apps/advisor v0.0.0-20250220154326-6e5de80ef295/go.mod h1:9I1dKV3Dqr0NPR9Af0WJGxOytp5/6W3JLiNChOz8r+c=
@@ -958,11 +976,13 @@ github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1/go.mod h1:lXGCsh6c22WGtjr+qGHj1otzZpV/1kwTMAqkwZsnWRU=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0/go.mod h1:XKMd7iuf/RGPSMJ/U4HP0zS2Z9Fh8Ps9a+6X26m/tmI=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.0/go.mod h1:qOchhhIlmRcqk/O9uCo/puJlyo07YINaIqdZfZG3Jkc=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.2/go.mod h1:wd1YpapPLivG6nQgbf7ZkG1hhSOXDhhn4MLTknx2aAc=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0/go.mod h1:YN5jB8ie0yfIUg6VvR9Kz84aCaG7AsGZnLjhHbUqwPg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.19.1/go.mod h1:5SN9VR2LTsRFsrEC6FHgRbTWrTHu6tqPeKxEQv15giM=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1/go.mod h1:Zanoh4+gvIgluNqcfMVTJueD4wSS5hT7zTt4Mrutd90=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2/go.mod h1:pkJQ2tZHJ0aFOVEEot6oZmaVEZcRme73eIFmhiVuRWs=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645 h1:MJG/KsmcqMwFAkh8mTnAwhyKoB+sTAnY4CACC110tbU=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go.mod h1:6iZfnjpejD4L/4DwD7NryNaJyCQdzwWwH2MWhCA90Kw=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8=
@@ -1353,6 +1373,7 @@ github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkq
github.com/prometheus/common v0.64.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
github.com/prometheus/common v0.67.1/go.mod h1:RpmT9v35q2Y+lsieQsdOh5sXZ6ajUGC8NjZAmr8vb0Q=
github.com/prometheus/common v0.67.2/go.mod h1:63W3KZb1JOKgcjlIr64WW/LvFGAqKPj0atm+knVGEko=
github.com/prometheus/common/assets v0.2.0 h1:0P5OrzoHrYBOSM1OigWL3mY8ZvV2N4zIE/5AahrSrfM=
github.com/prometheus/exporter-toolkit v0.10.1-0.20230714054209-2f4150c63f97/go.mod h1:LoBCZeRh+5hX+fSULNyFnagYlQG/gBsyA/deNzROkq8=
@@ -1383,6 +1404,7 @@ github.com/richardartoul/molecule v1.0.0/go.mod h1:uvX/8buq8uVeiZiFht+0lqSLBHF+u
github.com/rivo/uniseg v0.4.4/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/fastuuid v1.2.0 h1:Ppwyp6VYCF1nvBTXL3trRso7mXMlRrw9ooo375wvi2s=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/russross/blackfriday v1.6.0 h1:KqfZb0pUVN2lYqZUYRddxF4OR8ZMURnJIG5Y3VRLtww=
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
@@ -1394,6 +1416,8 @@ github.com/sagikazarmark/crypt v0.6.0 h1:REOEXCs/NFY/1jOCEouMuT4zEniE5YoXbvpC5X/
github.com/sagikazarmark/locafero v0.9.0/go.mod h1:UBUyz37V+EdMS3hDF3QWIiVr/2dPrx49OMO0Bn0hJqk=
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
github.com/sahilm/fuzzy v0.1.1 h1:ceu5RHF8DGgoi+/dR5PsECjCDH1BE3Fnmpo7aVXOdRA=
github.com/sahilm/fuzzy v0.1.1/go.mod h1:VFvziUEIMCrT6A6tw2RFIXPXXmzXbOsSHF0DOI8ZK9Y=
github.com/samber/lo v1.47.0 h1:z7RynLwP5nbyRscyvcD043DWYoOcYRv3mV8lBeqOCLc=
github.com/samber/lo v1.47.0/go.mod h1:RmDH9Ct32Qy3gduHQuKJ3gW1fMHAnE/fAzQuf6He5cU=
github.com/samber/slog-common v0.18.1 h1:c0EipD/nVY9HG5shgm/XAs67mgpWDMF+MmtptdJNCkQ=
@@ -1599,6 +1623,7 @@ go.mongodb.org/mongo-driver v1.11.4/go.mod h1:PTSz5yu21bkT/wXpkS7WR5f0ddqw5queth
go.mongodb.org/mongo-driver v1.14.0/go.mod h1:Vzb0Mk/pa7e6cWw85R4F/endUC3u0U9jGcNU603k65c=
go.mongodb.org/mongo-driver v1.17.3/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/collector v0.121.0/go.mod h1:M4TlnmkjIgishm2DNCk9K3hMKTmAsY9w8cNFsp9EchM=
go.opentelemetry.io/collector v0.124.0/go.mod h1:QzERYfmHUedawjr8Ph/CBEEkVqWS8IlxRLAZt+KHlCg=
go.opentelemetry.io/collector/client v1.29.0/go.mod h1:LCUoEV2KCTKA1i+/txZaGsSPVWUcqeOV6wCfNsAippE=
@@ -1885,6 +1910,7 @@ go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v8
go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4=
go.opentelemetry.io/proto/otlp v1.6.0/go.mod h1:cicgGehlFuNdgZkcALOCh3VE6K/u2tAjzlRhDwmVpZc=
go.opentelemetry.io/proto/otlp v1.7.0/go.mod h1:fSKjH6YJ7HDlwzltzyMj036AJ3ejJLCgCSHGj4efDDo=
go.opentelemetry.io/proto/otlp v1.7.1/go.mod h1:b2rVh6rfI/s2pHWNlB7ILJcRALpcNDzKhACevjI+ZnE=
go.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
@@ -1930,6 +1956,7 @@ golang.org/x/image v0.25.0/go.mod h1:tCAmOEGthTtkalusGp1g3xa2gke8J6c2N565dTyl9Rs
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 h1:VLliZ0d+/avPrXXH+OakdXhpJuEoBZuwh1m2j7U6Iug=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028 h1:4+4C/Iv2U4fMZBiMCc98MG1In4gJY5YRhtpDNeDeHWs=
golang.org/x/mod v0.6.0-dev.0.20220818022119-ed83ed61efb9/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
golang.org/x/mod v0.13.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.18.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
@@ -1941,6 +1968,7 @@ golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc=
golang.org/x/mod v0.28.0/go.mod h1:yfB/L0NOf/kmEbXjzCPOx1iK1fRutOydrCMsqRhEBxI=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.0.0-20190921015927-1a5e07d1ff72/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
@@ -2035,6 +2063,7 @@ golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/time v0.13.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20190424220101-1e8e1cfdf96b/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
golang.org/x/tools v0.4.0/go.mod h1:UE5sM2OK9E/d67R0ANs2xJizIymRP5gJU295PvKXxjQ=
golang.org/x/tools v0.11.0/go.mod h1:anzJrxPjNtfgiYQYirP2CPGzGLxrH2u2QBhn6Bf3qY8=
golang.org/x/tools v0.14.0/go.mod h1:uYBEerGOWcJyEORxN+Ek8+TT266gXkNlHdJBwexUsBg=
@@ -2089,6 +2118,7 @@ google.golang.org/genproto/googleapis/api v0.0.0-20250728155136-f173205681a0/go.
google.golang.org/genproto/googleapis/api v0.0.0-20250804133106-a7a43d27e69b/go.mod h1:oDOGiMSXHL4sDTJvFvIB9nRQCGdLP1o/iVaqQK8zB+M=
google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c/go.mod h1:ea2MjsO70ssTfCjiwHgI0ZFqcw45Ksuk2ckf9G468GA=
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5/go.mod h1:j3QtIyytwqGr1JUDtYXwtMXWPKsEa5LtzIFN1Wn5WvE=
google.golang.org/genproto/googleapis/api v0.0.0-20250908214217-97024824d090/go.mod h1:U8EXRNSd8sUYyDfs/It7KVWodQr+Hf9xtxyxWudSwEw=
google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4/go.mod h1:NnuHhy+bxcg30o7FnVAZbXsPHUDQ9qKWAQKCD7VxFtk=
google.golang.org/genproto/googleapis/api v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:+rXWjjaukWZun3mLfjmVnQi18E1AsFbDN9QdJ5YXLto=
google.golang.org/genproto/googleapis/bytestream v0.0.0-20250603155806-513f23925822 h1:zWFRixYR5QlotL+Uv3YfsPRENIrQFXiGs+iwqel6fOQ=
@@ -2120,7 +2150,9 @@ google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5/go.
google.golang.org/genproto/googleapis/rpc v0.0.0-20250826171959-ef028d996bc1/go.mod h1:GmFNa4BdJZ2a8G+wCe9Bg3wwThLrJun751XstdJt5Og=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250908214217-97024824d090/go.mod h1:GmFNa4BdJZ2a8G+wCe9Bg3wwThLrJun751XstdJt5Og=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250929231259-57b25ae835d4/go.mod h1:HSkG/KdJWusxU1F6CNrwNDjBMgisKxGnc5dAZfT0mjQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251002232023-7c0ddcbb5797/go.mod h1:HSkG/KdJWusxU1F6CNrwNDjBMgisKxGnc5dAZfT0mjQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251014184007-4626949a642f/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251022142026-3a174f9686a8/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251124214823-79d6a2a48846/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251202230838-ff82c1b0f217/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
@@ -2145,6 +2177,7 @@ google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7E
google.golang.org/grpc v1.74.2/go.mod h1:CtQ+BGjaAIXHs/5YS3i473GqwBBa1zGQNevxdeBEXrM=
google.golang.org/grpc v1.75.0/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ=
google.golang.org/grpc v1.75.1/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ=
google.golang.org/grpc v1.76.0/go.mod h1:Ju12QI8M6iQJtbcsV+awF5a4hfJMLi4X0JLo94ULZ6c=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 h1:M1YKkFIboKNieVO5DLUEVzQfGwJD30Nv2jfUgzb5UcE=
google.golang.org/grpc/examples v0.0.0-20230224211313-3775f633ce20 h1:MLBCGN1O7GzIx+cBiwfYPwtmZ41U3Mn/cotLJciaArI=
google.golang.org/grpc/examples v0.0.0-20230224211313-3775f633ce20/go.mod h1:Nr5H8+MlGWr5+xX/STzdoEqJrO+YteqFbMyCsrb6mH0=
@@ -2188,12 +2221,14 @@ k8s.io/api v0.33.3/go.mod h1:01Y/iLUjNBM3TAvypct7DIj0M0NIZc+PzAHCIo0CYGE=
k8s.io/api v0.34.0/go.mod h1:YzgkIzOOlhl9uwWCZNqpw6RJy9L2FK4dlJeayUoydug=
k8s.io/api v0.34.1/go.mod h1:SB80FxFtXn5/gwzCoN6QCtPD7Vbu5w2n1S0J5gFfTYk=
k8s.io/apiextensions-apiserver v0.33.3/go.mod h1:oROuctgo27mUsyp9+Obahos6CWcMISSAPzQ77CAQGz8=
k8s.io/apiextensions-apiserver v0.34.1/go.mod h1:hP9Rld3zF5Ay2Of3BeEpLAToP+l4s5UlxiHfqRaRcMc=
k8s.io/apimachinery v0.26.2/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
k8s.io/apimachinery v0.33.3/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM=
k8s.io/apimachinery v0.34.0/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw=
k8s.io/apimachinery v0.34.1/go.mod h1:/GwIlEcWuTX9zKIg2mbw0LRFIsXwrfoVxn+ef0X13lw=
k8s.io/apiserver v0.26.2/go.mod h1:GHcozwXgXsPuOJ28EnQ/jXEM9QeG6HT22YxSNmpYNh8=
k8s.io/apiserver v0.33.3/go.mod h1:05632ifFEe6TxwjdAIrwINHWE2hLwyADFk5mBsQa15E=
k8s.io/apiserver v0.34.1/go.mod h1:eOOc9nrVqlBI1AFCvVzsob0OxtPZUCPiUJL45JOTBG0=
k8s.io/client-go v0.26.2/go.mod h1:u5EjOuSyBa09yqqyY7m3abZeovO/7D/WehVVlZ2qcqU=
k8s.io/client-go v0.33.3/go.mod h1:luqKBQggEf3shbxHY4uVENAxrDISLOarxpTKMiUuujg=
k8s.io/client-go v0.34.0/go.mod h1:ozgMnEKXkRjeMvBZdV1AijMHLTh3pbACPvK7zFR+QQY=
@@ -2202,6 +2237,7 @@ k8s.io/code-generator v0.34.3 h1:6ipJKsJZZ9q21BO8I2jEj4OLN3y8/1n4aihKN0xKmQk=
k8s.io/code-generator v0.34.3/go.mod h1:oW73UPYpGLsbRN8Ozkhd6ZzkF8hzFCiYmvEuWZDroI4=
k8s.io/component-base v0.26.2/go.mod h1:DxbuIe9M3IZPRxPIzhch2m1eT7uFrSBJUBuVCQEBivs=
k8s.io/component-base v0.33.3/go.mod h1:ktBVsBzkI3imDuxYXmVxZ2zxJnYTZ4HAsVj9iF09qp4=
k8s.io/component-base v0.34.1/go.mod h1:mknCpLlTSKHzAQJJnnHVKqjxR7gBeHRv0rPXA7gdtQ0=
k8s.io/cri-api v0.27.1/go.mod h1:+Ts/AVYbIo04S86XbTD73UPp/DkTiYxtsFeOFEu32L0=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6 h1:4s3/R4+OYYYUKptXPhZKjQ04WJ6EhQQVFdjOFvCazDk=
k8s.io/gengo/v2 v2.0.0-20250604051438-85fd79dbfd9f h1:SLb+kxmzfA87x4E4brQzB33VBbT2+x7Zq9ROIHmGn9Q=
@@ -2214,6 +2250,8 @@ k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/klog/v2 v2.90.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kms v0.34.1/go.mod h1:s1CFkLG7w9eaTYvctOxosx88fl4spqmixnNpys0JAtM=
k8s.io/kube-aggregator v0.34.1/go.mod h1:RU8j+5ERfp0h+gIvWtxRPfsa5nK7rboDm8RST8BJfYQ=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8=
k8s.io/kube-openapi v0.0.0-20250710124328-f3f2b991d03b/go.mod h1:UZ2yyWbFTpuhSbFhv24aGNOdoRdJZgsIObGBUaYVsts=
k8s.io/utils v0.0.0-20230220204549-a5ecb0141aa5/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
@@ -2261,6 +2299,7 @@ sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ih
sigs.k8s.io/structured-merge-diff/v4 v4.5.0 h1:nbCitCK2hfnhyiKo6uf2HxUPTCodY6Qaf85SbDIaMBk=
sigs.k8s.io/structured-merge-diff/v4 v4.5.0/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4=
sigs.k8s.io/structured-merge-diff/v6 v6.2.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
sigs.k8s.io/yaml v1.5.0/go.mod h1:wZs27Rbxoai4C0f8/9urLZtZtF3avA3gKvGyPdDqTO4=


@@ -295,8 +295,8 @@
"@grafana/plugin-ui": "^0.11.1",
"@grafana/prometheus": "workspace:*",
"@grafana/runtime": "workspace:*",
"@grafana/scenes": "6.49.0",
"@grafana/scenes-react": "6.49.0",
"@grafana/scenes": "6.50.0",
"@grafana/scenes-react": "6.50.0",
"@grafana/schema": "workspace:*",
"@grafana/sql": "workspace:*",
"@grafana/ui": "workspace:*",


@@ -1585,8 +1585,6 @@ export type DeleteJobOptions = {
resources?: ResourceRef[];
};
export type MigrateJobOptions = {
/** Preserve history (if possible) */
history?: boolean;
/** Message to use when committing the changes in a single commit */
message?: string;
};
@@ -2047,8 +2045,6 @@ export type RepositoryViewList = {
items: RepositoryView[];
/** Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds */
kind?: string;
/** The backend is using legacy storage FIXME: Not sure where this should be exposed... but we need it somewhere The UI should force the onboarding workflow when this is true */
legacyStorage?: boolean;
};
export type ManagerStats = {
/** Manager identity */


@@ -664,6 +664,7 @@ export {
type DataSourceGetTagKeysOptions,
type DataSourceGetTagValuesOptions,
type DataSourceGetDrilldownsApplicabilityOptions,
type DataSourceGetRecommendedDrilldownsOptions,
type MetadataInspectorProps,
type LegacyMetricFindQueryOptions,
type QueryEditorProps,
@@ -681,6 +682,7 @@ export {
type QueryHint,
type MetricFindValue,
type DrilldownsApplicability,
type DrilldownRecommendation,
type DataSourceJsonData,
type DataSourceSettings,
type DataSourceInstanceSettings,


@@ -313,6 +313,13 @@ abstract class DataSourceApi<
options?: DataSourceGetDrilldownsApplicabilityOptions<TQuery>
): Promise<DrilldownsApplicability[]>;
/**
* Get recommended drilldowns for a dashboard
*/
getRecommendedDrilldowns?(
options?: DataSourceGetRecommendedDrilldownsOptions<TQuery>
): Promise<DrilldownRecommendation>;
/**
* Get tag keys for adhoc filters
*/
@@ -398,13 +405,9 @@ abstract class DataSourceApi<
}
/**
* Options argument to DataSourceAPI.getTagKeys
* Base options shared across datasource filtering operations.
*/
export interface DataSourceGetTagKeysOptions<TQuery extends DataQuery = DataQuery> {
/**
* The other existing filters or base filters. New in v10.3
*/
filters: AdHocVariableFilter[];
export interface DataSourceFilteringRequestOptions<TQuery extends DataQuery = DataQuery> {
/**
* Context time range. New in v10.3
*/
@@ -413,21 +416,27 @@ export interface DataSourceGetTagKeysOptions<TQuery extends DataQuery = DataQuer
scopes?: Scope[] | undefined;
}
/**
* Options argument to DataSourceAPI.getTagKeys
*/
export interface DataSourceGetTagKeysOptions<TQuery extends DataQuery = DataQuery>
extends DataSourceFilteringRequestOptions<TQuery> {
/**
* The other existing filters or base filters. New in v10.3
*/
filters: AdHocVariableFilter[];
}
/**
* Options argument to DataSourceAPI.getTagValues
*/
export interface DataSourceGetTagValuesOptions<TQuery extends DataQuery = DataQuery> {
export interface DataSourceGetTagValuesOptions<TQuery extends DataQuery = DataQuery>
extends DataSourceFilteringRequestOptions<TQuery> {
key: string;
/**
* The other existing filters or base filters. New in v10.3
*/
filters: AdHocVariableFilter[];
/**
* Context time range. New in v10.3
*/
timeRange?: TimeRange;
queries?: TQuery[];
scopes?: Scope[] | undefined;
}
export interface MetadataInspectorProps<
@@ -646,12 +655,22 @@ export interface MetricFindValue {
properties?: Record<string, string>;
}
export interface DataSourceGetDrilldownsApplicabilityOptions<TQuery extends DataQuery = DataQuery> {
export interface DataSourceGetDrilldownsApplicabilityOptions<TQuery extends DataQuery = DataQuery>
extends DataSourceFilteringRequestOptions<TQuery> {
filters?: AdHocVariableFilter[];
groupByKeys?: string[];
}
export interface DataSourceGetRecommendedDrilldownsOptions<TQuery extends DataQuery = DataQuery>
extends DataSourceFilteringRequestOptions<TQuery> {
dashboardUid?: string;
filters?: AdHocVariableFilter[];
groupByKeys?: string[];
}
export interface DrilldownRecommendation {
filters?: AdHocVariableFilter[];
groupByKeys?: string[];
timeRange?: TimeRange;
queries?: TQuery[];
scopes?: Scope[] | undefined;
}
export interface DrilldownsApplicability {

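The diff above extracts the shared `timeRange`/`queries`/`scopes` fields into a `DataSourceFilteringRequestOptions` base that the tag-key, tag-value, and drilldown option types extend. A minimal standalone sketch of that pattern (local stand-in types, not Grafana's actual definitions):

```typescript
// Stand-ins for the real Grafana types, simplified for illustration.
interface TimeRange { from: string; to: string }
interface AdHocVariableFilter { key: string; operator: string; value: string }

// Shared base, as introduced in the diff above.
interface DataSourceFilteringRequestOptions {
  timeRange?: TimeRange;
}

// Specific option types only declare what is unique to them.
interface DataSourceGetTagKeysOptions extends DataSourceFilteringRequestOptions {
  filters: AdHocVariableFilter[];
}

const opts: DataSourceGetTagKeysOptions = {
  filters: [{ key: 'env', operator: '=', value: 'prod' }],
  timeRange: { from: 'now-6h', to: 'now' },
};
console.log(opts.filters.length); // 1
```

Callers that previously passed the duplicated fields keep working unchanged, since the extending interfaces have the same flattened shape.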

@@ -373,6 +373,10 @@ export interface FeatureToggles {
*/
unlimitedLayoutsNesting?: boolean;
/**
* Enables showing recently used drilldowns or recommendations given by the datasource in the AdHocFilters and GroupBy variables
*/
drilldownRecommendations?: boolean;
/**
* Enables viewing non-applicable drilldowns on a panel level
*/
perPanelNonApplicableDrilldowns?: boolean;


@@ -1,5 +1,5 @@
import { fireEvent, render, screen } from '@testing-library/react';
import * as React from 'react';
import { type ComponentProps, useRef } from 'react';
import { createDataFrame } from '@grafana/data';
@@ -16,14 +16,14 @@ jest.mock('react-use', () => {
return {
...reactUse,
useMeasure: () => {
const ref = React.useRef();
const ref = useRef(null);
return [ref, { width: 1600 }];
},
};
});
describe('FlameGraph', () => {
function setup(props?: Partial<React.ComponentProps<typeof FlameGraph>>) {
function setup(props?: Partial<ComponentProps<typeof FlameGraph>>) {
const flameGraphData = createDataFrame(data);
const container = new FlameGraphDataContainer(flameGraphData, { collapsing: true });


@@ -21,7 +21,7 @@ jest.mock('@grafana/assistant', () => ({
jest.mock('react-use', () => ({
...jest.requireActual('react-use'),
useMeasure: () => {
const ref = useRef();
const ref = useRef(null);
return [ref, { width: 1600 }];
},
}));


@@ -262,24 +262,18 @@ function createComponent<Props extends JSX.IntrinsicAttributes>(
pluginId?: string,
id?: string
): ComponentTypeWithExtensionMeta<Props> {
function ComponentWithMeta(props: Props) {
if (Implementation) {
return <Implementation {...props} />;
const ComponentWithMeta: ComponentTypeWithExtensionMeta<Props> = Object.assign(
Implementation || (() => <div>Test</div>),
{
meta: {
id: id ?? '',
pluginId: pluginId ?? '',
title: '',
description: '',
type: PluginExtensionTypes.component,
} satisfies PluginExtensionComponentMeta,
}
return <div>Test</div>;
}
ComponentWithMeta.displayName = '';
ComponentWithMeta.propTypes = {};
ComponentWithMeta.contextTypes = {};
ComponentWithMeta.meta = {
id: id ?? '',
pluginId: pluginId ?? '',
title: '',
description: '',
type: PluginExtensionTypes.component,
} satisfies PluginExtensionComponentMeta;
);
return ComponentWithMeta;
}

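The `createComponent` rewrite above replaces manual property assignment with a single `Object.assign` that attaches the `meta` object to the component function. A hedged sketch of that technique with simplified types (names here are illustrative, not Grafana's actual API):

```typescript
type Meta = { id: string; pluginId: string };

function withMeta<P>(render: (props: P) => string, meta: Meta) {
  // Object.assign returns the function augmented with the extra property,
  // so callers can both invoke the component and read `Component.meta`.
  return Object.assign(render, { meta });
}

const Component = withMeta((props: { name: string }) => `Hello ${props.name}`, {
  id: 'test-id',
  pluginId: 'test-plugin',
});

console.log(Component({ name: 'world' })); // "Hello world"
console.log(Component.meta.id); // "test-id"
```

The advantage over mutating the function afterwards is that TypeScript infers the intersection type (function and `{ meta: Meta }`) in one expression, so no cast or interim `any` is needed.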

@@ -1,9 +1,9 @@
import { useMemo } from 'react';
import { CSSObjectWithLabel } from 'react-select';
import { StylesConfig } from 'react-select';
import { GrafanaTheme2 } from '@grafana/data';
export default function resetSelectStyles(theme: GrafanaTheme2) {
export default function resetSelectStyles(theme: GrafanaTheme2): Partial<StylesConfig> {
return {
clearIndicator: () => ({}),
container: () => ({}),
@@ -13,7 +13,7 @@ export default function resetSelectStyles(theme: GrafanaTheme2) {
groupHeading: () => ({}),
indicatorsContainer: () => ({}),
indicatorSeparator: () => ({}),
input: function (originalStyles: CSSObjectWithLabel) {
input: function (originalStyles) {
return {
...originalStyles,
color: 'inherit',
@@ -27,7 +27,7 @@ export default function resetSelectStyles(theme: GrafanaTheme2) {
loadingIndicator: () => ({}),
loadingMessage: () => ({}),
menu: () => ({}),
menuList: ({ maxHeight }: { maxHeight: number }) => ({
menuList: ({ maxHeight }) => ({
maxHeight,
}),
multiValue: () => ({}),
@@ -38,7 +38,7 @@ export default function resetSelectStyles(theme: GrafanaTheme2) {
multiValueRemove: () => ({}),
noOptionsMessage: () => ({}),
option: () => ({}),
placeholder: (originalStyles: CSSObjectWithLabel) => ({
placeholder: (originalStyles) => ({
...originalStyles,
color: theme.colors.text.secondary,
}),
@@ -47,11 +47,11 @@ export default function resetSelectStyles(theme: GrafanaTheme2) {
};
}
export function useCustomSelectStyles(theme: GrafanaTheme2, width: number | string | undefined) {
export function useCustomSelectStyles(theme: GrafanaTheme2, width: number | string | undefined): Partial<StylesConfig> {
return useMemo(() => {
return {
...resetSelectStyles(theme),
menuPortal: (base: CSSObjectWithLabel) => {
menuPortal: (base) => {
// Would like to correct top position when menu is placed bottom, but props are not sent to this style function.
// Only state is. https://github.com/JedWatson/react-select/blob/master/packages/react-select/src/components/Menu.tsx#L605
return {
@@ -60,7 +60,7 @@ export function useCustomSelectStyles(theme: GrafanaTheme2, width: number | stri
};
},
//These are required for the menu positioning to function
menu: ({ top, bottom, position }: CSSObjectWithLabel) => {
menu: ({ top, bottom, position }) => {
return {
top,
bottom,
@@ -73,7 +73,7 @@ export function useCustomSelectStyles(theme: GrafanaTheme2, width: number | stri
width: width ? theme.spacing(width) : '100%',
display: width === 'auto' ? 'inline-flex' : 'flex',
}),
option: (provided: CSSObjectWithLabel, state: any) => ({
option: (provided, state) => ({
...provided,
opacity: state.isDisabled ? 0.5 : 1,
}),

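The `resetSelectStyles` change above drops the per-callback `CSSObjectWithLabel` annotations and instead annotates the function's return type as `Partial<StylesConfig>`, letting the parameter types be inferred contextually. A small sketch of why that works, using a local stand-in for react-select's `StylesConfig`:

```typescript
// Local stand-in for react-select's StylesConfig; illustrative only.
type StyleFn = (base: Record<string, unknown>) => Record<string, unknown>;
type StylesLike = { input?: StyleFn; placeholder?: StyleFn };

function resetStyles(color: string): StylesLike {
  return {
    // `base` is inferred from StylesLike, so no annotation is repeated here.
    input: (base) => ({ ...base, color: 'inherit' }),
    placeholder: (base) => ({ ...base, color }),
  };
}

const styles = resetStyles('gray');
console.log(styles.placeholder?.({}).color); // "gray"
```

One annotation at the boundary replaces many at the call sites, and a mismatched callback now fails type-checking at the return statement rather than silently widening.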

@@ -263,7 +263,16 @@ export const Footer: StoryFn<typeof Table> = (args) => {
);
};
export const Pagination: StoryFn<typeof Table> = (args) => <Basic {...args} />;
export const Pagination: StoryFn<typeof Table> = (args) => {
const theme = useTheme2();
const data = buildData(theme, {});
return (
<DashboardStoryCanvas>
<Table {...args} data={data} />
</DashboardStoryCanvas>
);
};
Pagination.args = {
enablePagination: true,
};


@@ -224,7 +224,7 @@ var (
MStatTotalRepositories prometheus.Gauge
// MUnifiedStorageMigrationStatus indicates the migration status for unified storage in this instance.
// Possible values: 0 (default/undefined), 1 (migration disabled), 2 (migration would run).
// Possible values: 0 (default/undefined), 1 (migration disabled), 2 (migration would run), 3 (migration will run).
MUnifiedStorageMigrationStatus prometheus.Gauge
)

View File

@@ -81,7 +81,15 @@ func (s *SocialGoogle) Validate(ctx context.Context, newSettings ssoModels.SSOSe
return validation.Validate(info, requester,
validation.MustBeEmptyValidator(info.AuthUrl, "Auth URL"),
validation.MustBeEmptyValidator(info.TokenUrl, "Token URL"),
validation.MustBeEmptyValidator(info.ApiUrl, "API URL"))
validation.MustBeEmptyValidator(info.ApiUrl, "API URL"),
loginPromptValidator)
}
func loginPromptValidator(info *social.OAuthInfo, requester identity.Requester) error {
if info.UseRefreshToken && !slices.Contains([]string{"", "consent"}, info.LoginPrompt) {
return ssosettings.ErrInvalidOAuthConfig("If provided, login_prompt must be set to consent when use_refresh_token is enabled.")
}
return nil
}
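The validator above encodes what appears to be a Google OAuth constraint: refresh tokens are generally only issued when the user is (re-)prompted for consent, so a configured `login_prompt` other than `consent` would silently break `use_refresh_token`. A standalone sketch of the same check (the helper name and error text here are illustrative, not Grafana's API):

```go
package main

import (
	"errors"
	"fmt"
	"slices"
)

// validateLoginPrompt mirrors loginPromptValidator: when refresh tokens are
// enabled, login_prompt must be unset or "consent", because any other prompt
// value is assumed to prevent Google from returning a refresh token.
func validateLoginPrompt(useRefreshToken bool, loginPrompt string) error {
	if useRefreshToken && !slices.Contains([]string{"", "consent"}, loginPrompt) {
		return errors.New("if provided, login_prompt must be set to consent when use_refresh_token is enabled")
	}
	return nil
}

func main() {
	fmt.Println(validateLoginPrompt(true, "login"))   // rejected
	fmt.Println(validateLoginPrompt(true, "consent")) // accepted: <nil>
	fmt.Println(validateLoginPrompt(false, "login"))  // accepted: <nil>
}
```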
func (s *SocialGoogle) Reload(ctx context.Context, settings ssoModels.SSOSettings) error {

View File

@@ -9,6 +9,7 @@ import (
"fmt"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"time"
@@ -18,6 +19,7 @@ import (
"github.com/stretchr/testify/require"
"golang.org/x/oauth2"
"github.com/grafana/grafana/pkg/apimachinery/errutil"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/login/social"
"github.com/grafana/grafana/pkg/services/featuremgmt"
@@ -871,6 +873,39 @@ func TestSocialGoogle_Validate(t *testing.T) {
},
wantErr: ssosettings.ErrBaseInvalidOAuthConfig,
},
{
name: "fails if use_refresh_token is enabled and login prompt is neither empty nor 'consent'",
settings: ssoModels.SSOSettings{
Settings: map[string]any{
"client_id": "client-id",
"use_refresh_token": "true",
"login_prompt": "login",
},
},
wantErr: ssosettings.ErrBaseInvalidOAuthConfig,
},
{
name: "succeeds if use_refresh_token is enabled and login prompt is empty",
settings: ssoModels.SSOSettings{
Settings: map[string]any{
"client_id": "client-id",
"use_refresh_token": "true",
"login_prompt": "",
},
},
wantErr: nil,
},
{
name: "succeeds if use_refresh_token is enabled and login prompt is consent",
settings: ssoModels.SSOSettings{
Settings: map[string]any{
"client_id": "client-id",
"use_refresh_token": "true",
"login_prompt": "consent",
},
},
wantErr: nil,
},
}
for _, tc := range testCases {
@@ -886,7 +921,13 @@ func TestSocialGoogle_Validate(t *testing.T) {
require.ErrorIs(t, err, tc.wantErr)
return
}
require.NoError(t, err)
if err != nil {
var e errutil.Error
require.True(t, errors.As(err, &e))
require.NoError(t, e, "expected no error, got %v", e.PublicMessage)
return
}
})
}
}
@@ -1024,3 +1065,102 @@ func TestIsHDAllowed(t *testing.T) {
})
}
}
func TestSocialGoogle_AuthCodeURL(t *testing.T) {
testCases := []struct {
name string
info *social.OAuthInfo
opts []oauth2.AuthCodeOption
state string
wantURL *url.URL
}{
{
name: "should return the correct auth code URL",
info: &social.OAuthInfo{
ClientId: "client-id",
ClientSecret: "client-secret",
AuthUrl: "https://example.com/auth",
LoginPrompt: "login",
Scopes: []string{"openid", "email", "profile"},
},
state: "test-state",
opts: []oauth2.AuthCodeOption{
oauth2.SetAuthURLParam("extra_param", "extra_value"),
},
wantURL: &url.URL{
Scheme: "https",
Host: "example.com",
Path: "/auth",
RawQuery: url.Values{
"state": {"test-state"},
"prompt": {"login"},
"response_type": {"code"},
"client_id": {"client-id"},
"redirect_uri": {"/login/google"},
"scope": {"openid email profile"},
"extra_param": {"extra_value"},
}.Encode(),
},
},
{
name: "should add access type offline and approval force if use refresh token is enabled",
info: &social.OAuthInfo{
ClientId: "client-id",
ClientSecret: "client-secret",
AuthUrl: "https://example.com/auth",
Scopes: []string{"openid", "email", "profile"},
UseRefreshToken: true,
},
state: "test-state",
wantURL: &url.URL{
Scheme: "https",
Host: "example.com",
Path: "/auth",
RawQuery: url.Values{
"state": {"test-state"},
"prompt": {"consent"},
"response_type": {"code"},
"client_id": {"client-id"},
"redirect_uri": {"/login/google"},
"scope": {"openid email profile"},
"access_type": {"offline"},
}.Encode(),
},
},
{
name: "should override configured login prompt if use refresh token is enabled",
info: &social.OAuthInfo{
ClientId: "client-id",
ClientSecret: "client-secret",
AuthUrl: "https://example.com/auth",
Scopes: []string{"openid", "email", "profile"},
UseRefreshToken: true,
},
state: "test-state",
wantURL: &url.URL{
Scheme: "https",
Host: "example.com",
Path: "/auth",
RawQuery: url.Values{
"state": {"test-state"},
"prompt": {"consent"},
"response_type": {"code"},
"client_id": {"client-id"},
"redirect_uri": {"/login/google"},
"scope": {"openid email profile"},
"access_type": {"offline"},
}.Encode(),
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
s := NewGoogleProvider(tc.info, &setting.Cfg{}, nil, ssosettingstests.NewFakeService(), featuremgmt.WithFeatures())
gotURL := s.AuthCodeURL(tc.state, tc.opts...)
parsedURL, err := url.Parse(gotURL)
require.NoError(t, err)
require.EqualValues(t, tc.wantURL, parsedURL)
})
}
}

View File

@@ -91,7 +91,11 @@ func (s *SocialBase) AuthCodeURL(state string, opts ...oauth2.AuthCodeOption) st
func (s *SocialBase) getAuthCodeURL(state string, opts ...oauth2.AuthCodeOption) string {
if s.info.LoginPrompt != "" {
promptOpt := oauth2.SetAuthURLParam("prompt", s.info.LoginPrompt)
opts = append(opts, promptOpt)
// Prepend the prompt option so that any caller-provided option that overrides the prompt,
// such as `oauth2.ApprovalForce`, is applied after it and takes precedence.
opts = append([]oauth2.AuthCodeOption{promptOpt}, opts...)
}
return s.Config.AuthCodeURL(state, opts...)

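The override behavior relies on how `oauth2.AuthCodeURL` applies its options: each option sets a query parameter, and when two options set the same key the one applied later wins. A minimal stdlib-only sketch of that ordering (the `option` type and `buildAuthURL` are stand-ins, not the real `oauth2.AuthCodeOption` API):

```go
package main

import (
	"fmt"
	"net/url"
)

// option mimics the observable effect of oauth2.SetAuthURLParam: each option
// sets one query parameter, and later options overwrite earlier values.
type option struct{ key, val string }

func buildAuthURL(base string, opts ...option) string {
	v := url.Values{}
	for _, o := range opts {
		v.Set(o.key, o.val) // later options win for the same key
	}
	return base + "?" + v.Encode()
}

func main() {
	promptOpt := option{"prompt", "login"}   // configured login_prompt, prepended
	callerOpt := option{"prompt", "consent"} // caller option, e.g. forced for refresh tokens

	// Because promptOpt comes first, the caller's value overrides it.
	fmt.Println(buildAuthURL("https://example.com/auth", promptOpt, callerOpt))
}
```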
View File

@@ -85,7 +85,6 @@ func RunRepoController(deps server.OperatorDependencies) error {
resourceLister,
controllerCfg.clients,
jobs,
nil, // dualwrite -- standalone operator assumes it is backed by unified storage
healthChecker,
statusPatcher,
deps.Registerer,

View File

@@ -3,11 +3,16 @@ package bootstrap
import (
"context"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/trace"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/plugins/config"
"github.com/grafana/grafana/pkg/plugins/log"
"github.com/grafana/grafana/pkg/plugins/manager/signature"
"github.com/grafana/grafana/pkg/plugins/pluginassets"
"github.com/grafana/grafana/pkg/semconv"
)
// Bootstrapper is responsible for the Bootstrap stage of the plugin loader pipeline.
@@ -34,6 +39,7 @@ type Bootstrap struct {
constructStep ConstructFunc
decorateSteps []DecorateFunc
log log.Logger
tracer trace.Tracer
}
type Opts struct {
@@ -55,14 +61,21 @@ func New(cfg *config.PluginManagementCfg, opts Opts) *Bootstrap {
constructStep: opts.ConstructFunc,
decorateSteps: opts.DecorateFuncs,
log: log.New("plugins.bootstrap"),
tracer: otel.Tracer("github.com/grafana/grafana/pkg/plugins/manager/pipeline/bootstrap"),
}
}
// Bootstrap will execute the Construct and Decorate steps of the Bootstrap stage.
func (b *Bootstrap) Bootstrap(ctx context.Context, src plugins.PluginSource, found *plugins.FoundBundle) ([]*plugins.Plugin, error) {
pluginClass := src.PluginClass(ctx)
ctx, span := b.tracer.Start(ctx, "bootstrap.Bootstrap", trace.WithAttributes(
semconv.PluginSourceClass(pluginClass),
))
defer span.End()
ps, err := b.constructStep(ctx, src, found)
if err != nil {
return nil, err
return nil, tracing.Error(span, err)
}
if len(b.decorateSteps) == 0 {
@@ -76,7 +89,7 @@ func (b *Bootstrap) Bootstrap(ctx context.Context, src plugins.PluginSource, fou
ip, err = decorate(ctx, p)
if err != nil {
b.log.Error("Could not decorate plugin", "pluginId", p.ID, "error", err)
return nil, err
return nil, tracing.Error(span, err)
}
}
bootstrappedPlugins = append(bootstrappedPlugins, ip)

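The pattern added across these pipeline stages (bootstrap, discovery, initialization, termination, validation) is uniform: start a span at the top of the stage, and on every error path record the error on the span before propagating it. A stripped-down sketch with a stand-in span type (the real code uses OpenTelemetry spans and Grafana's `tracing.Error` helper, which also sets span status):

```go
package main

import (
	"errors"
	"fmt"
)

// span is a minimal stand-in for an OpenTelemetry span.
type span struct{ recorded []error }

func (s *span) RecordError(err error) { s.recorded = append(s.recorded, err) }

// traceError records the error on the span and returns the same error, so
// call sites can write `return traceError(span, err)` in one line.
func traceError(s *span, err error) error {
	s.RecordError(err)
	return err
}

// doStep models one decorate/initialize/validate step in the pipeline.
func doStep(s *span, fail bool) error {
	if fail {
		return traceError(s, errors.New("decorate failed")) // record and propagate
	}
	return nil
}

func main() {
	s := &span{}
	fmt.Println(doStep(s, true), len(s.recorded))
}
```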
View File

@@ -3,6 +3,11 @@ package discovery
import (
"context"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/plugins/config"
"github.com/grafana/grafana/pkg/plugins/log"
@@ -26,6 +31,7 @@ type FilterFunc func(ctx context.Context, class plugins.Class, bundles []*plugin
type Discovery struct {
filterSteps []FilterFunc
log log.Logger
tracer trace.Tracer
}
type Opts struct {
@@ -41,29 +47,37 @@ func New(_ *config.PluginManagementCfg, opts Opts) *Discovery {
return &Discovery{
filterSteps: opts.FilterFuncs,
log: log.New("plugins.discovery"),
tracer: otel.Tracer("github.com/grafana/grafana/pkg/plugins/manager/pipeline/discovery"),
}
}
// Discover will execute the Filter step of the Discovery stage.
func (d *Discovery) Discover(ctx context.Context, src plugins.PluginSource) ([]*plugins.FoundBundle, error) {
pluginClass := src.PluginClass(ctx)
ctx, span := d.tracer.Start(ctx, "discovery.Discover", trace.WithAttributes(
attribute.String("grafana.plugins.class", string(pluginClass)),
))
defer span.End()
ctxLogger := d.log.FromContext(ctx)
// Use the source's own Discover method
found, err := src.Discover(ctx)
if err != nil {
d.log.Warn("Discovery source failed", "class", src.PluginClass(ctx), "error", err)
return nil, err
ctxLogger.Warn("Discovery source failed", "class", pluginClass, "error", err)
return nil, tracing.Error(span, err)
}
d.log.Debug("Found plugins", "class", src.PluginClass(ctx), "count", len(found))
ctxLogger.Debug("Found plugins", "class", pluginClass, "count", len(found))
// Apply filtering steps
result := found
for _, filter := range d.filterSteps {
result, err = filter(ctx, src.PluginClass(ctx), result)
if err != nil {
return nil, err
return nil, tracing.Error(span, err)
}
}
d.log.Debug("Discovery complete", "class", src.PluginClass(ctx), "found", len(found), "filtered", len(result))
ctxLogger.Debug("Discovery complete", "class", pluginClass, "found", len(found), "filtered", len(result))
return result, nil
}

View File

@@ -3,9 +3,14 @@ package initialization
import (
"context"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/trace"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/plugins/config"
"github.com/grafana/grafana/pkg/plugins/log"
"github.com/grafana/grafana/pkg/semconv"
)
// Initializer is responsible for the Initialization stage of the plugin loader pipeline.
@@ -20,6 +25,7 @@ type Initialize struct {
cfg *config.PluginManagementCfg
initializeSteps []InitializeFunc
log log.Logger
tracer trace.Tracer
}
type Opts struct {
@@ -36,11 +42,17 @@ func New(cfg *config.PluginManagementCfg, opts Opts) *Initialize {
cfg: cfg,
initializeSteps: opts.InitializeFuncs,
log: log.New("plugins.initialization"),
tracer: otel.Tracer("github.com/grafana/grafana/pkg/plugins/manager/pipeline/initialization"),
}
}
// Initialize will execute the Initialize steps of the Initialization stage.
func (i *Initialize) Initialize(ctx context.Context, ps *plugins.Plugin) (*plugins.Plugin, error) {
ctx, span := i.tracer.Start(ctx, "initialization.Initialize", trace.WithAttributes(
semconv.GrafanaPluginId(ps.ID),
))
defer span.End()
if len(i.initializeSteps) == 0 {
return ps, nil
}
@@ -51,7 +63,7 @@ func (i *Initialize) Initialize(ctx context.Context, ps *plugins.Plugin) (*plugi
ip, err = init(ctx, ps)
if err != nil {
i.log.Error("Could not initialize plugin", "pluginId", ps.ID, "error", err)
return nil, err
return nil, tracing.Error(span, err)
}
}

View File

@@ -3,9 +3,14 @@ package termination
import (
"context"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/trace"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/plugins/config"
"github.com/grafana/grafana/pkg/plugins/log"
"github.com/grafana/grafana/pkg/semconv"
)
// Terminator is responsible for the Termination stage of the plugin loader pipeline.
@@ -20,6 +25,7 @@ type Terminate struct {
cfg *config.PluginManagementCfg
terminateSteps []TerminateFunc
log log.Logger
tracer trace.Tracer
}
type Opts struct {
@@ -36,14 +42,20 @@ func New(cfg *config.PluginManagementCfg, opts Opts) (*Terminate, error) {
cfg: cfg,
terminateSteps: opts.TerminateFuncs,
log: log.New("plugins.termination"),
tracer: otel.Tracer("github.com/grafana/grafana/pkg/plugins/manager/pipeline/termination"),
}, nil
}
// Terminate will execute the Terminate steps of the Termination stage.
func (t *Terminate) Terminate(ctx context.Context, p *plugins.Plugin) (*plugins.Plugin, error) {
ctx, span := t.tracer.Start(ctx, "termination.Terminate", trace.WithAttributes(
semconv.GrafanaPluginId(p.ID),
))
defer span.End()
for _, terminate := range t.terminateSteps {
if err := terminate(ctx, p); err != nil {
return nil, err
return nil, tracing.Error(span, err)
}
}
return p, nil

View File

@@ -3,9 +3,14 @@ package validation
import (
"context"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/trace"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/plugins/config"
"github.com/grafana/grafana/pkg/plugins/log"
"github.com/grafana/grafana/pkg/semconv"
)
// Validator is responsible for the Validation stage of the plugin loader pipeline.
@@ -20,6 +25,7 @@ type Validate struct {
cfg *config.PluginManagementCfg
validateSteps []ValidateFunc
log log.Logger
tracer trace.Tracer
}
type Opts struct {
@@ -36,11 +42,17 @@ func New(cfg *config.PluginManagementCfg, opts Opts) *Validate {
cfg: cfg,
validateSteps: opts.ValidateFuncs,
log: log.New("plugins.validation"),
tracer: otel.Tracer("github.com/grafana/grafana/pkg/plugins/manager/pipeline/validation"),
}
}
// Validate will execute the Validate steps of the Validation stage.
func (v *Validate) Validate(ctx context.Context, ps *plugins.Plugin) error {
ctx, span := v.tracer.Start(ctx, "validation.Validate", trace.WithAttributes(
semconv.GrafanaPluginId(ps.ID),
))
defer span.End()
if len(v.validateSteps) == 0 {
return nil
}
@@ -49,7 +61,7 @@ func (v *Validate) Validate(ctx context.Context, ps *plugins.Plugin) error {
err := validate(ctx, ps)
if err != nil {
v.log.Error("Plugin validation failed", "pluginId", ps.ID, "error", err)
return err
return tracing.Error(span, err)
}
}

View File

@@ -8,6 +8,7 @@ import (
"k8s.io/apiserver/pkg/registry/rest"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana/pkg/apimachinery/utils"
grafanarest "github.com/grafana/grafana/pkg/apiserver/rest"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
@@ -33,8 +34,11 @@ func (d dashboardStorageWrapper) Update(ctx context.Context, name string, objInf
obj, created, err := d.Storage.Update(ctx, name, objInfo, createValidation, updateValidation, forceAllowCreate, options)
if err == nil && ns.OrgID > 0 && d.live != nil {
if err := d.live.DashboardSaved(ns.OrgID, name); err != nil {
logging.FromContext(ctx).Info("live dashboard update failed", "err", err)
m, err := utils.MetaAccessor(obj)
if err == nil {
if err := d.live.DashboardSaved(ns.OrgID, name, m.GetResourceVersion()); err != nil {
logging.FromContext(ctx).Info("live dashboard update failed", "err", err)
}
}
}
return obj, created, err

View File

@@ -154,12 +154,6 @@ func (a *dashboardSqlAccess) executeQuery(ctx context.Context, helper *legacysql
return nil
})
// Use transaction from unified storage if available in the context.
// This allows us to run migrations in a transaction which is specifically required for SQLite.
if tx == nil {
tx = resource.TransactionFromContext(ctx)
}
if tx != nil {
return tx.QueryContext(ctx, query, args...)
}
@@ -507,7 +501,7 @@ func (a *dashboardSqlAccess) MigratePlaylists(ctx context.Context, orgId int64,
return nil, err
}
// Group playlist items by playlist ID
// Group playlist items by playlist ID while preserving order
type playlistData struct {
id int64
uid string
@@ -518,7 +512,8 @@ func (a *dashboardSqlAccess) MigratePlaylists(ctx context.Context, orgId int64,
updatedAt int64
}
playlists := make(map[int64]*playlistData)
playlistIndex := make(map[int64]int) // maps playlist ID to index in playlists slice
playlists := []*playlistData{}
var currentID int64
var orgID int64
var uid, name, interval string
@@ -533,7 +528,8 @@ func (a *dashboardSqlAccess) MigratePlaylists(ctx context.Context, orgId int64,
}
// Get or create playlist entry
pl, exists := playlists[currentID]
idx, exists := playlistIndex[currentID]
var pl *playlistData
if !exists {
pl = &playlistData{
id: currentID,
@@ -544,7 +540,10 @@ func (a *dashboardSqlAccess) MigratePlaylists(ctx context.Context, orgId int64,
createdAt: createdAt,
updatedAt: updatedAt,
}
playlists[currentID] = pl
playlistIndex[currentID] = len(playlists)
playlists = append(playlists, pl)
} else {
pl = playlists[idx]
}
// Add item if it exists (LEFT JOIN can return NULL for playlists without items)
@@ -560,7 +559,7 @@ func (a *dashboardSqlAccess) MigratePlaylists(ctx context.Context, orgId int64,
return nil, err
}
// Convert to K8s objects and send to stream
// Convert to K8s objects and send to stream (order is preserved)
for _, pl := range playlists {
playlist := &playlistv0.Playlist{
TypeMeta: metav1.TypeMeta{

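The fix replaces grouping into a bare map (whose iteration order in Go is randomized) with a slice plus an ID-to-index map, so playlists stream out in the order their rows first arrive. The pattern in isolation, with illustrative types:

```go
package main

import "fmt"

type row struct {
	playlistID int64
	item       string
}

type playlist struct {
	id    int64
	items []string
}

// groupPreservingOrder groups rows by playlist ID while keeping the order in
// which each playlist first appears: the map only stores positions, and the
// slice is what gets iterated, so output order is deterministic.
func groupPreservingOrder(rows []row) []*playlist {
	index := make(map[int64]int) // playlist ID -> position in result
	result := []*playlist{}
	for _, r := range rows {
		i, ok := index[r.playlistID]
		if !ok {
			i = len(result)
			index[r.playlistID] = i
			result = append(result, &playlist{id: r.playlistID})
		}
		// A LEFT JOIN can yield playlists without items; skip empty items.
		if r.item != "" {
			result[i].items = append(result[i].items, r.item)
		}
	}
	return result
}

func main() {
	rows := []row{{2, "a"}, {1, "b"}, {2, "c"}, {3, ""}}
	for _, p := range groupPreservingOrder(rows) {
		fmt.Println(p.id, p.items)
	}
}
```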
View File

@@ -360,7 +360,17 @@ func (c *DashboardSearchClient) Search(ctx context.Context, req *resourcepb.Reso
})
}
list.TotalHits = int64(len(list.Results.Rows))
// The UI expects "TotalHits" to hold the total number of search hits, not the number of results returned.
// This is a workaround for the fact that legacy search doesn't expose a way of counting search hits:
// it fixes a bug, seen only in modes 1-2, that prevents pagination from working in the dashboard list view.
// Only a handful of instances still run in these modes, and that number is fast approaching zero, so the extra query is acceptable.
query.Limit = 0
query.Page = 1
res, err = c.dashboardStore.FindDashboards(ctx, query)
if err != nil {
return nil, err
}
list.TotalHits = int64(len(res))
return list, nil
}
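The workaround reads as: fetch the requested page, then re-run the identical query with `Limit: 0` and `Page: 1` to count every hit. A self-contained sketch with a fake store (all names here are illustrative; `find` stands in for `dashboardStore.FindDashboards`):

```go
package main

import "fmt"

// searchPage returns one page of results plus the total hit count. With no
// "count matches" API, the same query is re-run unrestricted (limit 0) and
// the total is simply the length of that result set.
func searchPage(find func(limit, page int) []string, limit, page int) ([]string, int) {
	results := find(limit, page)
	total := len(find(0, 1)) // Limit: 0, Page: 1 => return everything
	return results, total
}

// pagedFind builds a fake store over a fixed result set that honors
// limit/page, treating limit <= 0 as "no limit".
func pagedFind(all []string) func(limit, page int) []string {
	return func(limit, page int) []string {
		if limit <= 0 {
			return all
		}
		start := (page - 1) * limit
		if start >= len(all) {
			return nil
		}
		end := start + limit
		if end > len(all) {
			end = len(all)
		}
		return all[start:end]
	}
}

func main() {
	find := pagedFind([]string{"a", "b", "c", "d", "e"})
	res, total := searchPage(find, 2, 2)
	fmt.Println(res, total)
}
```

The obvious cost is that every search runs the query twice, which the comment above acknowledges as acceptable for the few remaining mode 1-2 instances.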

View File

@@ -48,6 +48,18 @@ func TestDashboardSearchClient_Search(t *testing.T) {
{ID: 2, UID: "uid2", Title: "Test Dashboard2", FolderUID: "folder2", Tags: []string{}},
}, nil).Once()
// Second call for total count with Page: 1, Limit: 0
mockStore.On("FindDashboards", mock.Anything, &dashboards.FindPersistedDashboardsQuery{
SignedInUser: user,
Type: "dash-db",
Sort: sorter,
Limit: 0,
Page: 1,
}).Return([]dashboards.DashboardSearchProjection{
{ID: 1, UID: "uid", Title: "Test Dashboard", FolderUID: "folder1", Tags: []string{"term"}},
{ID: 2, UID: "uid2", Title: "Test Dashboard2", FolderUID: "folder2", Tags: []string{}},
}, nil).Once()
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: dashboardKey,
@@ -124,6 +136,17 @@ func TestDashboardSearchClient_Search(t *testing.T) {
{ID: 1, UID: "uid", Title: "Test Dashboard", FolderUID: "folder", SortMeta: int64(50), Tags: []string{}},
}, nil).Once()
// Second call for total count with Page: 1, Limit: 0
mockStore.On("FindDashboards", mock.Anything, &dashboards.FindPersistedDashboardsQuery{
SignedInUser: user,
Type: "dash-db",
Sort: sortOptionAsc,
Limit: 0,
Page: 1,
}).Return([]dashboards.DashboardSearchProjection{
{ID: 1, UID: "uid", Title: "Test Dashboard", FolderUID: "folder", SortMeta: int64(50), Tags: []string{}},
}, nil).Once()
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: dashboardKey,
@@ -189,6 +212,17 @@ func TestDashboardSearchClient_Search(t *testing.T) {
{ID: 1, UID: "uid", Title: "Test Dashboard", FolderUID: "folder", SortMeta: int64(2), Tags: []string{}},
}, nil).Once()
// Second call for total count with Page: 1, Limit: 0
mockStore.On("FindDashboards", mock.Anything, &dashboards.FindPersistedDashboardsQuery{
SignedInUser: user,
Type: "dash-db",
Sort: sortOptionAsc,
Limit: 0,
Page: 1,
}).Return([]dashboards.DashboardSearchProjection{
{ID: 1, UID: "uid", Title: "Test Dashboard", FolderUID: "folder", SortMeta: int64(2), Tags: []string{}},
}, nil).Once()
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: dashboardKey,
@@ -293,6 +327,17 @@ func TestDashboardSearchClient_Search(t *testing.T) {
{UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
// Second call for total count with Page: 1, Limit: 0
mockStore.On("FindDashboards", mock.Anything, &dashboards.FindPersistedDashboardsQuery{
Title: "test",
SignedInUser: user,
Type: "dash-db",
Limit: 0,
Page: 1,
}).Return([]dashboards.DashboardSearchProjection{
{UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: dashboardKey,
@@ -319,6 +364,18 @@ func TestDashboardSearchClient_Search(t *testing.T) {
{UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
// Second call for total count with Page: 1, Limit: 0
mockStore.On("FindDashboards", mock.Anything, &dashboards.FindPersistedDashboardsQuery{
Title: "test",
TitleExactMatch: true,
SignedInUser: user,
Type: "dash-db",
Limit: 0,
Page: 1,
}).Return([]dashboards.DashboardSearchProjection{
{UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: dashboardKey,
@@ -350,6 +407,17 @@ func TestDashboardSearchClient_Search(t *testing.T) {
{UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
// Second call for total count with Page: 1, Limit: 0
mockStore.On("FindDashboards", mock.Anything, &dashboards.FindPersistedDashboardsQuery{
DashboardIds: []int64{1, 2},
SignedInUser: user,
Type: "dash-db",
Limit: 0,
Page: 1,
}).Return([]dashboards.DashboardSearchProjection{
{UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: dashboardKey,
@@ -383,6 +451,19 @@ func TestDashboardSearchClient_Search(t *testing.T) {
{UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
// Second call for total count with Page: 1, Limit: 0
mockStore.On("FindDashboards", mock.Anything, &dashboards.FindPersistedDashboardsQuery{
DashboardUIDs: []string{"uid1", "uid2"},
Tags: []string{"tag1", "tag2"},
FolderUIDs: []string{"general", "folder1"},
SignedInUser: user,
Type: "dash-db",
Limit: 0,
Page: 1,
}).Return([]dashboards.DashboardSearchProjection{
{UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: dashboardKey,
@@ -532,6 +613,17 @@ func TestDashboardSearchClient_Search(t *testing.T) {
{ID: 1, UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
// Second call for total count with Page: 1, Limit: 0
mockStore.On("FindDashboards", mock.Anything, &dashboards.FindPersistedDashboardsQuery{
SignedInUser: user,
Sort: sort.SortAlphaAsc,
Type: "dash-db",
Limit: 0,
Page: 1,
}).Return([]dashboards.DashboardSearchProjection{
{ID: 1, UID: "uid", Title: "Test Dashboard", FolderUID: "folder1"},
}, nil).Once()
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: dashboardKey,
@@ -552,7 +644,7 @@ func TestDashboardSearchClient_Search(t *testing.T) {
t.Run("Should set correct sort field when sorting by views", func(t *testing.T) {
mockStore.On("FindDashboards", mock.Anything, mock.Anything).Return([]dashboards.DashboardSearchProjection{
{ID: 1, UID: "uid", Title: "Test Dashboard", FolderUID: "folder1", SortMeta: 100},
}, nil).Once()
}, nil).Twice() // Will be called twice due to the total count call
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
@@ -584,7 +676,7 @@ func TestDashboardSearchClient_Search(t *testing.T) {
mockStore.On("FindDashboards", mock.Anything, mock.Anything).Return([]dashboards.DashboardSearchProjection{
{UID: "dashboard1", FolderUID: "folder1", ID: 1},
{UID: "dashboard2", FolderUID: "folder2", ID: 2},
}, nil).Once()
}, nil).Twice() // Will be called twice due to the total count call
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{

View File

@@ -26,7 +26,6 @@ import (
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"github.com/prometheus/client_golang/prometheus"
)
@@ -53,7 +52,6 @@ type RepositoryController struct {
repoLister listers.RepositoryLister
repoSynced cache.InformerSynced
logger logging.Logger
dualwrite dualwrite.Service
jobs interface {
jobs.Queue
@@ -86,7 +84,6 @@ func NewRepositoryController(
jobs.Queue
jobs.Store
},
dualwrite dualwrite.Service,
healthChecker *HealthChecker,
statusPatcher StatusPatcher,
registry prometheus.Registerer,
@@ -114,11 +111,10 @@ func NewRepositoryController(
metrics: &finalizerMetrics,
maxWorkers: parallelOperations,
},
jobs: jobs,
logger: logging.DefaultLogger.With("logger", loggerName),
dualwrite: dualwrite,
registry: registry,
tracer: tracer,
jobs: jobs,
logger: logging.DefaultLogger.With("logger", loggerName),
registry: registry,
tracer: tracer,
}
_, err := repoInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
@@ -356,9 +352,6 @@ func (rc *RepositoryController) determineSyncStrategy(ctx context.Context, obj *
case !healthStatus.Healthy:
logger.Info("skip sync for unhealthy repository")
return nil
case rc.dualwrite != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, rc.dualwrite):
logger.Info("skip sync as we are reading from legacy storage")
return nil
case healthStatus.Healthy != obj.Status.Health.Healthy:
logger.Info("repository became healthy, full resync")
return &provisioning.SyncJobOptions{}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package export

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package export

View File

@@ -49,7 +49,6 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
Path: "/exported",
},
Migrate: &provisioning.MigrateJobOptions{
History: true,
Message: "Migration test",
},
Delete: &provisioning.DeleteJobOptions{
@@ -101,7 +100,7 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
}
t.Run("jobToStream creates correct stream with all fields", func(t *testing.T) {
history := createTestLokiJobHistory(t)
history := createTestLokiJobHistory()
// Clean job copy like WriteJob does
jobCopy := job.DeepCopy()
delete(jobCopy.Labels, LabelJobClaim)
@@ -147,7 +146,6 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
assert.Equal(t, "main", deserializedJob.Spec.Push.Branch)
assert.Equal(t, "/exported", deserializedJob.Spec.Push.Path)
require.NotNil(t, deserializedJob.Spec.Migrate)
assert.True(t, deserializedJob.Spec.Migrate.History)
assert.Equal(t, "Migration test", deserializedJob.Spec.Migrate.Message)
require.NotNil(t, deserializedJob.Spec.Delete)
assert.Equal(t, "main", deserializedJob.Spec.Delete.Ref)
@@ -190,7 +188,7 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
})
t.Run("buildJobQuery creates correct LogQL", func(t *testing.T) {
history := createTestLokiJobHistory(t)
history := createTestLokiJobHistory()
query := history.buildJobQuery("test-ns", "test-repo")
@@ -199,7 +197,7 @@ func TestLokiJobHistory_WriteJob(t *testing.T) {
})
t.Run("getJobTimestamp returns correct timestamp", func(t *testing.T) {
history := createTestLokiJobHistory(t)
history := createTestLokiJobHistory()
// Test finished time priority
jobWithFinished := &provisioning.Job{
@@ -584,7 +582,7 @@ func TestLokiJobHistory_GetJob(t *testing.T) {
}
// createTestLokiJobHistory creates a LokiJobHistory for testing
func createTestLokiJobHistory(t *testing.T) *LokiJobHistory {
func createTestLokiJobHistory() *LokiJobHistory {
// Create test URLs
readURL, _ := url.Parse("http://localhost:3100")
writeURL, _ := url.Parse("http://localhost:3100")

View File

@@ -1,95 +0,0 @@
package migrate
import (
"context"
"errors"
"fmt"
"time"
"github.com/grafana/grafana-app-sdk/logging"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
)
type LegacyMigrator struct {
legacyMigrator LegacyResourcesMigrator
storageSwapper StorageSwapper
syncWorker jobs.Worker
wrapWithStageFn WrapWithStageFn
}
func NewLegacyMigrator(
legacyMigrator LegacyResourcesMigrator,
storageSwapper StorageSwapper,
syncWorker jobs.Worker,
wrapWithStageFn WrapWithStageFn,
) *LegacyMigrator {
return &LegacyMigrator{
legacyMigrator: legacyMigrator,
storageSwapper: storageSwapper,
syncWorker: syncWorker,
wrapWithStageFn: wrapWithStageFn,
}
}
func (m *LegacyMigrator) Migrate(ctx context.Context, rw repository.ReaderWriter, options provisioning.MigrateJobOptions, progress jobs.JobProgressRecorder) error {
namespace := rw.Config().Namespace
var stageMode repository.StageMode
if options.History {
// When History is true, we want to commit and push each file (previous PushOnWrites: true)
stageMode = repository.StageModeCommitAndPushOnEach
} else {
// When History is false, we want to commit only once (previous CommitOnlyOnce: true)
stageMode = repository.StageModeCommitOnlyOnce
}
stageOptions := repository.StageOptions{
Mode: stageMode,
CommitOnlyOnceMessage: options.Message,
// TODO: make this configurable
Timeout: 10 * time.Minute,
}
// Fail if migrating at least one
progress.StrictMaxErrors(1)
progress.SetMessage(ctx, "migrating legacy resources")
if err := m.wrapWithStageFn(ctx, rw, stageOptions, func(repo repository.Repository, staged bool) error {
rw, ok := repo.(repository.ReaderWriter)
if !ok {
return errors.New("migration job submitted targeting repository that is not a ReaderWriter")
}
return m.legacyMigrator.Migrate(ctx, rw, namespace, options, progress)
}); err != nil {
return fmt.Errorf("migrate from SQL: %w", err)
}
progress.SetMessage(ctx, "resetting unified storage")
if err := m.storageSwapper.WipeUnifiedAndSetMigratedFlag(ctx, namespace); err != nil {
return fmt.Errorf("unable to reset unified storage %w", err)
}
// Reset the results after the export as pull will operate on the same resources
progress.ResetResults()
// Delegate the import to a sync (from the already checked out go-git repository!)
progress.SetMessage(ctx, "pulling resources")
if err := m.syncWorker.Process(ctx, rw, provisioning.Job{
Spec: provisioning.JobSpec{
Pull: &provisioning.SyncJobOptions{
Incremental: false,
},
},
}, progress); err != nil { // this will have an error when too many errors exist
progress.SetMessage(ctx, "error importing resources, reverting")
if e2 := m.storageSwapper.StopReadingUnifiedStorage(ctx); e2 != nil {
logger := logging.FromContext(ctx)
logger.Warn("error trying to revert dual write settings after an error", "err", err)
}
return err
}
return nil
}

View File

@@ -1,265 +0,0 @@
package migrate
import (
"context"
"fmt"
"k8s.io/apimachinery/pkg/runtime/schema"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/registry/apis/dashboard/legacy"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs/export"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources/signature"
unifiedmigrations "github.com/grafana/grafana/pkg/storage/unified/migrations"
"github.com/grafana/grafana/pkg/storage/unified/parquet"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
var _ resource.BulkResourceWriter = (*legacyResourceResourceMigrator)(nil)
//go:generate mockery --name LegacyResourcesMigrator --structname MockLegacyResourcesMigrator --inpackage --filename mock_legacy_resources_migrator.go --with-expecter
type LegacyResourcesMigrator interface {
Migrate(ctx context.Context, rw repository.ReaderWriter, namespace string, opts provisioning.MigrateJobOptions, progress jobs.JobProgressRecorder) error
}
type legacyResourcesMigrator struct {
repositoryResources resources.RepositoryResourcesFactory
parsers resources.ParserFactory
dashboardAccess legacy.MigrationDashboardAccessor
signerFactory signature.SignerFactory
clients resources.ClientFactory
exportFn export.ExportFn
}
func NewLegacyResourcesMigrator(
repositoryResources resources.RepositoryResourcesFactory,
parsers resources.ParserFactory,
dashboardAccess legacy.MigrationDashboardAccessor,
signerFactory signature.SignerFactory,
clients resources.ClientFactory,
exportFn export.ExportFn,
) LegacyResourcesMigrator {
return &legacyResourcesMigrator{
repositoryResources: repositoryResources,
parsers: parsers,
dashboardAccess: dashboardAccess,
signerFactory: signerFactory,
clients: clients,
exportFn: exportFn,
}
}
func (m *legacyResourcesMigrator) Migrate(ctx context.Context, rw repository.ReaderWriter, namespace string, opts provisioning.MigrateJobOptions, progress jobs.JobProgressRecorder) error {
parser, err := m.parsers.GetParser(ctx, rw)
if err != nil {
return fmt.Errorf("get parser: %w", err)
}
repositoryResources, err := m.repositoryResources.Client(ctx, rw)
if err != nil {
return fmt.Errorf("get repository resources: %w", err)
}
// FIXME: signature is only relevant for repositories which support signature
// Not all repositories support history
signer, err := m.signerFactory.New(ctx, signature.SignOptions{
Namespace: namespace,
History: opts.History,
})
if err != nil {
return fmt.Errorf("get signer: %w", err)
}
progress.SetMessage(ctx, "migrate folders from SQL")
clients, err := m.clients.Clients(ctx, namespace)
if err != nil {
return err
}
// nothing special for the export for now
exportOpts := provisioning.ExportJobOptions{}
if err = m.exportFn(ctx, rw.Config().Name, exportOpts, clients, repositoryResources, progress); err != nil {
return fmt.Errorf("migrate folders from SQL: %w", err)
}
progress.SetMessage(ctx, "migrate resources from SQL")
for _, kind := range resources.SupportedProvisioningResources {
if kind == resources.FolderResource {
continue // folders have special handling
}
reader := newLegacyResourceMigrator(
rw,
m.dashboardAccess,
parser,
repositoryResources,
progress,
opts,
namespace,
kind.GroupResource(),
signer,
)
if err := reader.Migrate(ctx); err != nil {
return fmt.Errorf("migrate resource %s: %w", kind, err)
}
}
return nil
}
type legacyResourceResourceMigrator struct {
repo repository.ReaderWriter
dashboardAccess legacy.MigrationDashboardAccessor
parser resources.Parser
progress jobs.JobProgressRecorder
namespace string
kind schema.GroupResource
options provisioning.MigrateJobOptions
resources resources.RepositoryResources
signer signature.Signer
history map[string]string // UID >> file path
}
func newLegacyResourceMigrator(
repo repository.ReaderWriter,
dashboardAccess legacy.MigrationDashboardAccessor,
parser resources.Parser,
resources resources.RepositoryResources,
progress jobs.JobProgressRecorder,
options provisioning.MigrateJobOptions,
namespace string,
kind schema.GroupResource,
signer signature.Signer,
) *legacyResourceResourceMigrator {
var history map[string]string
if options.History {
history = make(map[string]string)
}
return &legacyResourceResourceMigrator{
repo: repo,
dashboardAccess: dashboardAccess,
parser: parser,
progress: progress,
options: options,
namespace: namespace,
kind: kind,
resources: resources,
signer: signer,
history: history,
}
}
// Close implements resource.BulkResourceWriter.
func (r *legacyResourceResourceMigrator) Close() error {
return nil
}
// CloseWithResults implements resource.BulkResourceWriter.
func (r *legacyResourceResourceMigrator) CloseWithResults() (*resourcepb.BulkResponse, error) {
return &resourcepb.BulkResponse{}, nil
}
// Write implements resource.BulkResourceWriter.
func (r *legacyResourceResourceMigrator) Write(ctx context.Context, key *resourcepb.ResourceKey, value []byte) error {
// Reuse the same parse+cleanup logic
parsed, err := r.parser.Parse(ctx, &repository.FileInfo{
Path: "", // empty path to ignore file system
Data: value,
})
if err != nil {
return fmt.Errorf("unmarshal unstructured: %w", err)
}
// clear anything so it will get written
parsed.Meta.SetManagerProperties(utils.ManagerProperties{})
parsed.Meta.SetSourceProperties(utils.SourceProperties{})
// Add author signature to the context
ctx, err = r.signer.Sign(ctx, parsed.Meta)
if err != nil {
return fmt.Errorf("add author signature: %w", err)
}
// TODO: this seems to be same logic as the export job
// TODO: we should use a kind safe manager here
fileName, err := r.resources.WriteResourceFileFromObject(ctx, parsed.Obj, resources.WriteOptions{
Path: "",
Ref: "",
})
// When replaying history, the path to the file may change over time
// This happens when the title or folder change
if r.history != nil && err == nil {
name := parsed.Meta.GetName()
previous := r.history[name]
if previous != "" && previous != fileName {
err = r.repo.Delete(ctx, previous, "", fmt.Sprintf("moved to: %s", fileName))
}
r.history[name] = fileName
}
result := jobs.JobResourceResult{
Name: parsed.Meta.GetName(),
Group: r.kind.Group,
Kind: parsed.GVK.Kind,
Action: repository.FileActionCreated,
Path: fileName,
}
if err != nil {
result.Error = fmt.Errorf("writing resource %s/%s %s to file %s: %w", r.kind.Group, r.kind.Resource, parsed.Meta.GetName(), fileName, err)
}
r.progress.Record(ctx, result)
if err := r.progress.TooManyErrors(); err != nil {
return err
}
return nil
}
func (r *legacyResourceResourceMigrator) Migrate(ctx context.Context) error {
r.progress.SetMessage(ctx, fmt.Sprintf("migrate %s resource", r.kind.Resource))
// Create a parquet migrator with this instance as the BulkResourceWriter
parquetClient := parquet.NewBulkResourceWriterClient(r)
migrator := unifiedmigrations.ProvideUnifiedMigratorParquet(
r.dashboardAccess,
parquetClient,
)
opts := legacy.MigrateOptions{
Namespace: r.namespace,
WithHistory: r.options.History,
Resources: []schema.GroupResource{r.kind},
OnlyCount: true, // first get the count
}
stats, err := migrator.Migrate(ctx, opts)
if err != nil {
return fmt.Errorf("unable to count legacy items: %w", err)
}
// The total is the larger of count and history: when history is included,
// each resource can contribute multiple versions, so the history figure
// is the number of items we will actually process.
if len(stats.Summary) > 0 {
count := stats.Summary[0].Count
history := stats.Summary[0].History
if history > count {
count = history
}
r.progress.SetTotal(ctx, int(count))
}
opts.OnlyCount = false // this time actually write
_, err = migrator.Migrate(ctx, opts)
if err != nil {
return fmt.Errorf("migrate legacy %s: %w", r.kind.Resource, err)
}
return nil
}


@@ -1,934 +0,0 @@
package migrate
import (
"context"
"errors"
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/registry/apis/dashboard/legacy"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs/export"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources/signature"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
func TestLegacyResourcesMigrator_Migrate(t *testing.T) {
t.Run("should fail when parser factory fails", func(t *testing.T) {
mockParserFactory := resources.NewMockParserFactory(t)
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
Return(nil, errors.New("parser factory error"))
signerFactory := signature.NewMockSignerFactory(t)
mockClientFactory := resources.NewMockClientFactory(t)
mockExportFn := export.NewMockExportFn(t)
migrator := NewLegacyResourcesMigrator(
nil,
mockParserFactory,
nil,
signerFactory,
mockClientFactory,
mockExportFn.Execute,
)
err := migrator.Migrate(context.Background(), nil, "test-namespace", provisioning.MigrateJobOptions{}, jobs.NewMockJobProgressRecorder(t))
require.Error(t, err)
require.EqualError(t, err, "get parser: parser factory error")
mockParserFactory.AssertExpectations(t)
mockExportFn.AssertExpectations(t)
mockClientFactory.AssertExpectations(t)
})
t.Run("should fail when repository resources factory fails", func(t *testing.T) {
mockParserFactory := resources.NewMockParserFactory(t)
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
Return(resources.NewMockParser(t), nil)
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
Return(nil, errors.New("repo resources factory error"))
signerFactory := signature.NewMockSignerFactory(t)
mockClientFactory := resources.NewMockClientFactory(t)
mockExportFn := export.NewMockExportFn(t)
migrator := NewLegacyResourcesMigrator(
mockRepoResourcesFactory,
mockParserFactory,
nil,
signerFactory,
mockClientFactory,
mockExportFn.Execute,
)
err := migrator.Migrate(context.Background(), nil, "test-namespace", provisioning.MigrateJobOptions{}, jobs.NewMockJobProgressRecorder(t))
require.Error(t, err)
require.EqualError(t, err, "get repository resources: repo resources factory error")
mockParserFactory.AssertExpectations(t)
mockRepoResourcesFactory.AssertExpectations(t)
mockExportFn.AssertExpectations(t)
mockClientFactory.AssertExpectations(t)
})
t.Run("should fail when resource migration fails", func(t *testing.T) {
mockParserFactory := resources.NewMockParserFactory(t)
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
Return(resources.NewMockParser(t), nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
Return(mockRepoResources, nil)
mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return opts.OnlyCount && opts.Namespace == "test-namespace"
})).Return(&resourcepb.BulkResponse{}, errors.New("legacy migrator error"))
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, mock.Anything).Return()
signer := signature.NewMockSigner(t)
signerFactory := signature.NewMockSignerFactory(t)
signerFactory.On("New", mock.Anything, mock.Anything).
Return(signer, nil)
mockClients := resources.NewMockResourceClients(t)
mockClientFactory := resources.NewMockClientFactory(t)
mockClientFactory.On("Clients", mock.Anything, "test-namespace").
Return(mockClients, nil)
mockExportFn := export.NewMockExportFn(t)
migrator := NewLegacyResourcesMigrator(
mockRepoResourcesFactory,
mockParserFactory,
mockDashboardAccess,
signerFactory,
mockClientFactory,
mockExportFn.Execute,
)
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
Name: "test-repo",
},
})
mockExportFn.On("Execute", mock.Anything, mock.Anything, provisioning.ExportJobOptions{}, mockClients, mockRepoResources, mock.Anything).
Return(nil)
err := migrator.Migrate(context.Background(), repo, "test-namespace", provisioning.MigrateJobOptions{}, progress)
require.Error(t, err)
require.Contains(t, err.Error(), "migrate resource")
mockParserFactory.AssertExpectations(t)
mockRepoResourcesFactory.AssertExpectations(t)
mockDashboardAccess.AssertExpectations(t)
progress.AssertExpectations(t)
mockExportFn.AssertExpectations(t)
mockClientFactory.AssertExpectations(t)
mockClients.AssertExpectations(t)
repo.AssertExpectations(t)
})
t.Run("should fail when client creation fails", func(t *testing.T) {
mockParserFactory := resources.NewMockParserFactory(t)
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
Return(resources.NewMockParser(t), nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
Return(mockRepoResources, nil)
mockSigner := signature.NewMockSigner(t)
mockSignerFactory := signature.NewMockSignerFactory(t)
mockSignerFactory.On("New", mock.Anything, mock.Anything).
Return(mockSigner, nil)
mockClientFactory := resources.NewMockClientFactory(t)
mockClientFactory.On("Clients", mock.Anything, "test-namespace").
Return(nil, errors.New("client creation error"))
mockExportFn := export.NewMockExportFn(t)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, "migrate folders from SQL").Return()
migrator := NewLegacyResourcesMigrator(
mockRepoResourcesFactory,
mockParserFactory,
nil,
mockSignerFactory,
mockClientFactory,
mockExportFn.Execute,
)
repo := repository.NewMockRepository(t)
err := migrator.Migrate(context.Background(), repo, "test-namespace", provisioning.MigrateJobOptions{}, progress)
require.Error(t, err)
require.EqualError(t, err, "client creation error")
mockParserFactory.AssertExpectations(t)
mockRepoResourcesFactory.AssertExpectations(t)
mockSignerFactory.AssertExpectations(t)
mockClientFactory.AssertExpectations(t)
progress.AssertExpectations(t)
mockExportFn.AssertExpectations(t)
repo.AssertExpectations(t)
})
t.Run("should fail when signer factory fails", func(t *testing.T) {
mockParserFactory := resources.NewMockParserFactory(t)
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
Return(resources.NewMockParser(t), nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
Return(mockRepoResources, nil)
mockSignerFactory := signature.NewMockSignerFactory(t)
mockSignerFactory.On("New", mock.Anything, signature.SignOptions{
Namespace: "test-namespace",
History: true,
}).Return(nil, fmt.Errorf("signer factory error"))
mockClientFactory := resources.NewMockClientFactory(t)
mockExportFn := export.NewMockExportFn(t)
progress := jobs.NewMockJobProgressRecorder(t)
migrator := NewLegacyResourcesMigrator(
mockRepoResourcesFactory,
mockParserFactory,
nil,
mockSignerFactory,
mockClientFactory,
mockExportFn.Execute,
)
err := migrator.Migrate(context.Background(), nil, "test-namespace", provisioning.MigrateJobOptions{
History: true,
}, progress)
require.Error(t, err)
require.EqualError(t, err, "get signer: signer factory error")
mockParserFactory.AssertExpectations(t)
mockRepoResourcesFactory.AssertExpectations(t)
mockSignerFactory.AssertExpectations(t)
mockClientFactory.AssertExpectations(t)
progress.AssertExpectations(t)
mockExportFn.AssertExpectations(t)
})
t.Run("should fail when folder export fails", func(t *testing.T) {
mockParser := resources.NewMockParser(t)
mockParserFactory := resources.NewMockParserFactory(t)
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
Return(mockParser, nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
Return(mockRepoResources, nil)
mockSigner := signature.NewMockSigner(t)
mockSignerFactory := signature.NewMockSignerFactory(t)
mockSignerFactory.On("New", mock.Anything, signature.SignOptions{
Namespace: "test-namespace",
History: false,
}).Return(mockSigner, nil)
mockClients := resources.NewMockResourceClients(t)
mockClientFactory := resources.NewMockClientFactory(t)
mockClientFactory.On("Clients", mock.Anything, "test-namespace").
Return(mockClients, nil)
mockExportFn := export.NewMockExportFn(t)
mockExportFn.On("Execute", mock.Anything, mock.Anything, provisioning.ExportJobOptions{}, mockClients, mockRepoResources, mock.Anything).
Return(fmt.Errorf("export error"))
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, "migrate folders from SQL").Return()
migrator := NewLegacyResourcesMigrator(
mockRepoResourcesFactory,
mockParserFactory,
nil,
mockSignerFactory,
mockClientFactory,
mockExportFn.Execute,
)
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
Name: "test-repo",
},
})
err := migrator.Migrate(context.Background(), repo, "test-namespace", provisioning.MigrateJobOptions{}, progress)
require.Error(t, err)
require.Contains(t, err.Error(), "migrate folders from SQL: export error")
mockParserFactory.AssertExpectations(t)
mockRepoResourcesFactory.AssertExpectations(t)
mockSignerFactory.AssertExpectations(t)
mockClientFactory.AssertExpectations(t)
mockExportFn.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should successfully migrate all resources", func(t *testing.T) {
mockParser := resources.NewMockParser(t)
mockParserFactory := resources.NewMockParserFactory(t)
mockParserFactory.On("GetParser", mock.Anything, mock.Anything).
Return(mockParser, nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
mockRepoResourcesFactory.On("Client", mock.Anything, mock.Anything).
Return(mockRepoResources, nil)
mockSigner := signature.NewMockSigner(t)
mockSignerFactory := signature.NewMockSignerFactory(t)
mockSignerFactory.On("New", mock.Anything, signature.SignOptions{
Namespace: "test-namespace",
History: true,
}).Return(mockSigner, nil)
mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
// Mock CountResources for the count phase
mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return opts.OnlyCount && opts.Namespace == "test-namespace"
})).Return(&resourcepb.BulkResponse{}, nil).Once()
// Mock MigrateDashboards for the actual migration phase (dashboards resource)
mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return !opts.OnlyCount && opts.Namespace == "test-namespace"
}), mock.Anything).Return(&legacy.BlobStoreInfo{
Count: 10,
Size: 5,
}, nil).Once()
mockClients := resources.NewMockResourceClients(t)
mockClientFactory := resources.NewMockClientFactory(t)
mockClientFactory.On("Clients", mock.Anything, "test-namespace").
Return(mockClients, nil)
mockExportFn := export.NewMockExportFn(t)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, "migrate folders from SQL").Return()
progress.On("SetMessage", mock.Anything, "migrate resources from SQL").Return()
progress.On("SetMessage", mock.Anything, "migrate dashboards resource").Return()
migrator := NewLegacyResourcesMigrator(
mockRepoResourcesFactory,
mockParserFactory,
mockDashboardAccess,
mockSignerFactory,
mockClientFactory,
mockExportFn.Execute,
)
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
Name: "test-repo",
},
})
mockExportFn.On("Execute", mock.Anything, mock.Anything, provisioning.ExportJobOptions{}, mockClients, mockRepoResources, mock.Anything).
Return(nil)
err := migrator.Migrate(context.Background(), repo, "test-namespace", provisioning.MigrateJobOptions{
History: true,
}, progress)
require.NoError(t, err)
mockParserFactory.AssertExpectations(t)
mockRepoResourcesFactory.AssertExpectations(t)
mockDashboardAccess.AssertExpectations(t)
mockClientFactory.AssertExpectations(t)
mockExportFn.AssertExpectations(t)
progress.AssertExpectations(t)
mockClients.AssertExpectations(t)
})
}
func TestLegacyResourceResourceMigrator_Write(t *testing.T) {
t.Run("should fail when parser fails", func(t *testing.T) {
mockParser := resources.NewMockParser(t)
mockParser.On("Parse", mock.Anything, mock.Anything).
Return(nil, errors.New("parser error"))
progress := jobs.NewMockJobProgressRecorder(t)
migrator := newLegacyResourceMigrator(
nil,
nil,
mockParser,
nil,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
signature.NewGrafanaSigner(),
)
err := migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
require.Error(t, err)
require.Contains(t, err.Error(), "unmarshal unstructured")
mockParser.AssertExpectations(t)
})
t.Run("records error when create resource file fails", func(t *testing.T) {
mockParser := resources.NewMockParser(t)
obj := &unstructured.Unstructured{
Object: map[string]any{
"metadata": map[string]any{
"name": "test",
},
},
}
meta, err := utils.MetaAccessor(obj)
require.NoError(t, err)
mockParser.On("Parse", mock.Anything, mock.Anything).
Return(&resources.ParsedResource{
Meta: meta,
Obj: obj,
}, nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResources.On("WriteResourceFileFromObject", mock.Anything, mock.Anything, mock.Anything).
Return("", errors.New("create file error"))
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("Record", mock.Anything, mock.MatchedBy(func(result jobs.JobResourceResult) bool {
return result.Action == repository.FileActionCreated &&
result.Name == "test" &&
result.Error != nil &&
result.Error.Error() == "writing resource test.grafana.app/tests test to file : create file error"
})).Return()
progress.On("TooManyErrors").Return(nil)
migrator := newLegacyResourceMigrator(
nil,
nil,
mockParser,
mockRepoResources,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
signature.NewGrafanaSigner(),
)
err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
require.NoError(t, err) // Error is recorded but not returned
mockParser.AssertExpectations(t)
mockRepoResources.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should fail when signer fails", func(t *testing.T) {
mockParser := resources.NewMockParser(t)
obj := &unstructured.Unstructured{
Object: map[string]any{
"metadata": map[string]any{
"name": "test",
},
},
}
meta, err := utils.MetaAccessor(obj)
require.NoError(t, err)
mockParser.On("Parse", mock.Anything, mock.Anything).
Return(&resources.ParsedResource{
Meta: meta,
Obj: obj,
}, nil)
mockSigner := signature.NewMockSigner(t)
mockSigner.On("Sign", mock.Anything, meta).
Return(nil, errors.New("signing error"))
progress := jobs.NewMockJobProgressRecorder(t)
migrator := newLegacyResourceMigrator(
nil,
nil,
mockParser,
nil,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
mockSigner,
)
err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
require.Error(t, err)
require.EqualError(t, err, "add author signature: signing error")
mockParser.AssertExpectations(t)
mockSigner.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should successfully add author signature", func(t *testing.T) {
mockParser := resources.NewMockParser(t)
obj := &unstructured.Unstructured{
Object: map[string]any{
"metadata": map[string]any{
"name": "test",
},
},
}
meta, err := utils.MetaAccessor(obj)
require.NoError(t, err)
mockParser.On("Parse", mock.Anything, mock.Anything).
Return(&resources.ParsedResource{
Meta: meta,
Obj: obj,
}, nil)
mockSigner := signature.NewMockSigner(t)
signedCtx := repository.WithAuthorSignature(context.Background(), repository.CommitSignature{
Name: "test-user",
Email: "test@example.com",
})
mockSigner.On("Sign", mock.Anything, meta).
Return(signedCtx, nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResources.On("WriteResourceFileFromObject", signedCtx, mock.Anything, mock.Anything).
Return("test/path", nil)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("Record", mock.Anything, mock.MatchedBy(func(result jobs.JobResourceResult) bool {
return result.Action == repository.FileActionCreated &&
result.Name == "test" &&
result.Error == nil &&
result.Path == "test/path"
})).Return()
progress.On("TooManyErrors").Return(nil)
migrator := newLegacyResourceMigrator(
nil,
nil,
mockParser,
mockRepoResources,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
mockSigner,
)
err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
require.NoError(t, err)
mockParser.AssertExpectations(t)
mockSigner.AssertExpectations(t)
mockRepoResources.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should maintain history", func(t *testing.T) {
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("Record", mock.Anything, mock.Anything).Return()
progress.On("TooManyErrors").Return(nil)
mockParser := resources.NewMockParser(t)
obj := &unstructured.Unstructured{
Object: map[string]any{
"metadata": map[string]any{
"name": "test",
},
},
}
meta, _ := utils.MetaAccessor(obj)
mockParser.On("Parse", mock.Anything, mock.Anything).
Return(&resources.ParsedResource{
Meta: meta,
Obj: obj,
}, nil)
mockRepo := repository.NewMockRepository(t)
mockRepoResources := resources.NewMockRepositoryResources(t)
writeResourceFileFromObject := mockRepoResources.On("WriteResourceFileFromObject", mock.Anything, mock.Anything, mock.Anything)
migrator := newLegacyResourceMigrator(
mockRepo,
nil,
mockParser,
mockRepoResources,
progress,
provisioning.MigrateJobOptions{
History: true,
},
"test-namespace",
schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
signature.NewGrafanaSigner(),
)
writeResourceFileFromObject.Return("aaaa.json", nil)
err := migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte(""))
require.NoError(t, err)
require.Equal(t, "aaaa.json", migrator.history["test"], "kept track of the old files")
// Change the result file name
writeResourceFileFromObject.Return("bbbb.json", nil)
mockRepo.On("Delete", mock.Anything, "aaaa.json", "", "moved to: bbbb.json").
Return(nil).Once()
err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte(""))
require.NoError(t, err)
require.Equal(t, "bbbb.json", migrator.history["test"], "kept track of the old files")
mockParser.AssertExpectations(t)
mockRepoResources.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should successfully write resource", func(t *testing.T) {
mockParser := resources.NewMockParser(t)
obj := &unstructured.Unstructured{
Object: map[string]any{
"metadata": map[string]any{
"name": "test",
},
},
}
meta, err := utils.MetaAccessor(obj)
require.NoError(t, err)
meta.SetManagerProperties(utils.ManagerProperties{
Kind: utils.ManagerKindRepo,
Identity: "test",
AllowsEdits: true,
Suspended: false,
})
meta.SetSourceProperties(utils.SourceProperties{
Path: "test",
Checksum: "test",
TimestampMillis: 1234567890,
})
mockParser.On("Parse", mock.Anything, mock.MatchedBy(func(info *repository.FileInfo) bool {
return info != nil && info.Path == "" && string(info.Data) == "test"
})).
Return(&resources.ParsedResource{
Meta: meta,
Obj: obj,
}, nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResources.On("WriteResourceFileFromObject", mock.Anything, mock.MatchedBy(func(obj *unstructured.Unstructured) bool {
if obj == nil {
return false
}
if obj.GetName() != "test" {
return false
}
meta, err := utils.MetaAccessor(obj)
require.NoError(t, err)
managerProps, _ := meta.GetManagerProperties()
sourceProps, _ := meta.GetSourceProperties()
return assert.Zero(t, sourceProps) && assert.Zero(t, managerProps)
}), resources.WriteOptions{
Path: "",
Ref: "",
}).
Return("test/path", nil)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("Record", mock.Anything, mock.MatchedBy(func(result jobs.JobResourceResult) bool {
return result.Action == repository.FileActionCreated &&
result.Name == "test" &&
result.Error == nil &&
result.Kind == "" && // empty kind
result.Group == "test.grafana.app" &&
result.Path == "test/path"
})).Return()
progress.On("TooManyErrors").Return(nil)
migrator := newLegacyResourceMigrator(
nil,
nil,
mockParser,
mockRepoResources,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
signature.NewGrafanaSigner(),
)
err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
require.NoError(t, err)
mockParser.AssertExpectations(t)
mockRepoResources.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should fail when too many errors", func(t *testing.T) {
mockParser := resources.NewMockParser(t)
obj := &unstructured.Unstructured{
Object: map[string]any{
"metadata": map[string]any{
"name": "test",
},
},
}
meta, err := utils.MetaAccessor(obj)
require.NoError(t, err)
mockParser.On("Parse", mock.Anything, mock.Anything).
Return(&resources.ParsedResource{
Meta: meta,
Obj: obj,
}, nil)
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResources.On("WriteResourceFileFromObject", mock.Anything, mock.Anything, resources.WriteOptions{}).
Return("test/path", nil)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("Record", mock.Anything, mock.Anything).Return()
progress.On("TooManyErrors").Return(errors.New("too many errors"))
migrator := newLegacyResourceMigrator(
nil,
nil,
mockParser,
mockRepoResources,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
signature.NewGrafanaSigner(),
)
err = migrator.Write(context.Background(), &resourcepb.ResourceKey{}, []byte("test"))
require.EqualError(t, err, "too many errors")
mockParser.AssertExpectations(t)
mockRepoResources.AssertExpectations(t)
progress.AssertExpectations(t)
})
}
func TestLegacyResourceResourceMigrator_Migrate(t *testing.T) {
t.Run("should fail when legacy migrate count fails", func(t *testing.T) {
mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return opts.OnlyCount && opts.Namespace == "test-namespace"
})).Return(&resourcepb.BulkResponse{}, errors.New("count error"))
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, mock.Anything).Return()
migrator := newLegacyResourceMigrator(
nil,
mockDashboardAccess,
nil,
nil,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "test.grafana.app", Resource: "tests"},
signature.NewGrafanaSigner(),
)
err := migrator.Migrate(context.Background())
require.Error(t, err)
require.Contains(t, err.Error(), "unable to count legacy items")
mockDashboardAccess.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should fail when legacy migrate write fails", func(t *testing.T) {
mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return opts.OnlyCount && opts.Namespace == "test-namespace"
})).Return(&resourcepb.BulkResponse{}, nil).Once() // Count phase
// Use the dashboards GroupResource here: an arbitrary test resource would fail earlier while
// mapping the resource type, since only dashboards/folders/librarypanels are supported.
mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return !opts.OnlyCount && opts.Namespace == "test-namespace"
}), mock.Anything).Return(nil, errors.New("write error")).Once() // Write phase
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, mock.Anything).Return()
migrator := newLegacyResourceMigrator(
nil,
mockDashboardAccess,
nil,
nil,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "dashboard.grafana.app", Resource: "dashboards"},
signature.NewGrafanaSigner(),
)
err := migrator.Migrate(context.Background())
require.Error(t, err)
require.Contains(t, err.Error(), "migrate legacy dashboards: write error")
mockDashboardAccess.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should successfully migrate resource", func(t *testing.T) {
mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return opts.OnlyCount && opts.Namespace == "test-namespace"
})).Return(&resourcepb.BulkResponse{}, nil).Once() // Count phase
mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return !opts.OnlyCount && opts.Namespace == "test-namespace"
}), mock.Anything).Return(&legacy.BlobStoreInfo{}, nil).Once() // Write phase
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, mock.Anything).Return()
migrator := newLegacyResourceMigrator(
nil,
mockDashboardAccess,
nil,
nil,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "dashboard.grafana.app", Resource: "dashboards"},
signature.NewGrafanaSigner(),
)
err := migrator.Migrate(context.Background())
require.NoError(t, err)
mockDashboardAccess.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should set total to history if history is greater than count", func(t *testing.T) {
mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return opts.OnlyCount && opts.Namespace == "test-namespace"
})).Return(&resourcepb.BulkResponse{
Summary: []*resourcepb.BulkResponse_Summary{
{
Group: "dashboard.grafana.app",
Resource: "dashboards",
Count: 1,
History: 100,
},
},
}, nil).Once() // Count phase
mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return !opts.OnlyCount && opts.Namespace == "test-namespace"
}), mock.Anything).Return(&legacy.BlobStoreInfo{}, nil).Once() // Write phase
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, mock.Anything).Return()
progress.On("SetTotal", mock.Anything, 100).Return()
migrator := newLegacyResourceMigrator(
nil,
mockDashboardAccess,
nil,
nil,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "dashboard.grafana.app", Resource: "dashboards"},
signature.NewGrafanaSigner(),
)
err := migrator.Migrate(context.Background())
require.NoError(t, err)
mockDashboardAccess.AssertExpectations(t)
progress.AssertExpectations(t)
})
t.Run("should set total to count if history is less than count", func(t *testing.T) {
mockDashboardAccess := legacy.NewMockMigrationDashboardAccessor(t)
mockDashboardAccess.On("CountResources", mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return opts.OnlyCount && opts.Namespace == "test-namespace"
})).Return(&resourcepb.BulkResponse{
Summary: []*resourcepb.BulkResponse_Summary{
{
Group: "dashboard.grafana.app",
Resource: "dashboards",
Count: 200,
History: 1,
},
},
}, nil).Once() // Count phase
mockDashboardAccess.On("MigrateDashboards", mock.Anything, mock.Anything, mock.MatchedBy(func(opts legacy.MigrateOptions) bool {
return !opts.OnlyCount && opts.Namespace == "test-namespace"
}), mock.Anything).Return(&legacy.BlobStoreInfo{}, nil).Once() // Write phase
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("SetMessage", mock.Anything, mock.Anything).Return()
progress.On("SetTotal", mock.Anything, 200).Return()
signer := signature.NewMockSigner(t)
migrator := newLegacyResourceMigrator(
nil,
mockDashboardAccess,
nil,
nil,
progress,
provisioning.MigrateJobOptions{},
"test-namespace",
schema.GroupResource{Group: "dashboard.grafana.app", Resource: "dashboards"},
signer,
)
err := migrator.Migrate(context.Background())
require.NoError(t, err)
mockDashboardAccess.AssertExpectations(t)
progress.AssertExpectations(t)
})
}
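The two "set total" cases above only pin down the expected SetTotal values (100 when history dominates, 200 when count dominates). Under the assumption that the migrator sums the larger of Count and History across summaries, the logic they exercise can be sketched with stdlib Go; this is an inferred sketch, not the production implementation, and the `summary` type is illustrative:

```go
package main

import "fmt"

// summary stands in for resourcepb.BulkResponse_Summary.
type summary struct{ count, history int64 }

// total takes the larger of count and history for each summary and sums
// them, matching the SetTotal expectations in the two tests above.
func total(ss []summary) int64 {
	var t int64
	for _, s := range ss {
		n := s.count
		if s.history > n {
			n = s.history
		}
		t += n
	}
	return t
}

func main() {
	fmt.Println(total([]summary{{count: 1, history: 100}})) // 100
	fmt.Println(total([]summary{{count: 200, history: 1}})) // 200
}
```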
func TestLegacyResourceResourceMigrator_Close(t *testing.T) {
t.Run("should return nil error", func(t *testing.T) {
migrator := &legacyResourceResourceMigrator{}
err := migrator.Close()
require.NoError(t, err)
})
}
func TestLegacyResourceResourceMigrator_CloseWithResults(t *testing.T) {
t.Run("should return empty bulk response and nil error", func(t *testing.T) {
migrator := &legacyResourceResourceMigrator{}
response, err := migrator.CloseWithResults()
require.NoError(t, err)
require.NotNil(t, response)
require.IsType(t, &resourcepb.BulkResponse{}, response)
require.Empty(t, response.Summary)
})
}


@@ -1,440 +0,0 @@
package migrate
import (
"context"
"errors"
"testing"
"time"
mock "github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
)
func TestWrapWithStageFn(t *testing.T) {
t.Run("should return error when repository is not a ReaderWriter", func(t *testing.T) {
// Setup
ctx := context.Background()
// Create the wrapper function that matches WrapWithCloneFn signature
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
// pass a reader to function call
repo := repository.NewMockReader(t)
return fn(repo, true)
}
legacyFoldersMigrator := NewLegacyMigrator(
NewMockLegacyResourcesMigrator(t),
NewMockStorageSwapper(t),
jobs.NewMockWorker(t),
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
// Execute
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyFoldersMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
// Assert
require.Error(t, err)
require.Contains(t, err.Error(), "migration job submitted targeting repository that is not a ReaderWriter")
})
}
func TestWrapWithCloneFn_Error(t *testing.T) {
t.Run("should return error when wrapFn fails", func(t *testing.T) {
// Setup
ctx := context.Background()
expectedErr := errors.New("clone failed")
// Create the wrapper function that returns an error
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
return expectedErr
}
legacyMigrator := NewLegacyMigrator(
NewMockLegacyResourcesMigrator(t),
NewMockStorageSwapper(t),
jobs.NewMockWorker(t),
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
// Execute
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
// Assert
require.Error(t, err)
require.Contains(t, err.Error(), "migrate from SQL: clone failed")
})
}
func TestLegacyMigrator_MigrateFails(t *testing.T) {
t.Run("should return error when legacyMigrator.Migrate fails", func(t *testing.T) {
// Setup
ctx := context.Background()
expectedErr := errors.New("migration failed")
mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
Return(expectedErr)
mockStorageSwapper := NewMockStorageSwapper(t)
mockWorker := jobs.NewMockWorker(t)
// Create a wrapper function that calls the provided function
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
return fn(rw, true)
}
legacyMigrator := NewLegacyMigrator(
mockLegacyMigrator,
mockStorageSwapper,
mockWorker,
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
// Execute
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
// Assert
require.Error(t, err)
require.Contains(t, err.Error(), "migrate from SQL: migration failed")
// Storage swapper should not be called when migration fails
mockStorageSwapper.AssertNotCalled(t, "WipeUnifiedAndSetMigratedFlag")
})
}
func TestLegacyMigrator_ResetUnifiedStorageFails(t *testing.T) {
t.Run("should return error when storage reset fails", func(t *testing.T) {
// Setup
ctx := context.Background()
expectedErr := errors.New("reset failed")
mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
Return(nil)
mockStorageSwapper := NewMockStorageSwapper(t)
mockStorageSwapper.On("WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace").
Return(expectedErr)
mockWorker := jobs.NewMockWorker(t)
// Create a wrapper function that calls the provided function
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
return fn(rw, true)
}
legacyMigrator := NewLegacyMigrator(
mockLegacyMigrator,
mockStorageSwapper,
mockWorker,
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
progress.On("SetMessage", mock.Anything, "resetting unified storage").Return()
// Execute
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
// Assert
require.Error(t, err)
require.Contains(t, err.Error(), "unable to reset unified storage")
// Sync worker should not be called when reset fails
mockWorker.AssertNotCalled(t, "Process")
})
}
func TestLegacyMigrator_SyncFails(t *testing.T) {
t.Run("should revert storage settings when sync fails", func(t *testing.T) {
// Setup
ctx := context.Background()
expectedErr := errors.New("sync failed")
mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
Return(nil)
mockStorageSwapper := NewMockStorageSwapper(t)
mockStorageSwapper.On("WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace").
Return(nil)
mockStorageSwapper.On("StopReadingUnifiedStorage", mock.Anything).
Return(nil)
mockWorker := jobs.NewMockWorker(t)
mockWorker.On("Process", mock.Anything, mock.Anything, mock.MatchedBy(func(job provisioning.Job) bool {
return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
}), mock.Anything).Return(expectedErr)
// Create a wrapper function that calls the provided function
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
return fn(rw, true)
}
legacyMigrator := NewLegacyMigrator(
mockLegacyMigrator,
mockStorageSwapper,
mockWorker,
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
progress.On("SetMessage", mock.Anything, "resetting unified storage").Return()
progress.On("ResetResults").Return()
progress.On("SetMessage", mock.Anything, "pulling resources").Return()
progress.On("SetMessage", mock.Anything, "error importing resources, reverting").Return()
// Execute
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
// Assert
require.Error(t, err)
require.Contains(t, err.Error(), "sync failed")
// Verify storage settings were reverted
mockStorageSwapper.AssertCalled(t, "StopReadingUnifiedStorage", mock.Anything)
})
t.Run("should handle revert failure after sync failure", func(t *testing.T) {
// Setup
ctx := context.Background()
syncErr := errors.New("sync failed")
revertErr := errors.New("revert failed")
mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
Return(nil)
mockStorageSwapper := NewMockStorageSwapper(t)
mockStorageSwapper.On("WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace").
Return(nil)
mockStorageSwapper.On("StopReadingUnifiedStorage", mock.Anything).
Return(revertErr)
mockWorker := jobs.NewMockWorker(t)
mockWorker.On("Process", mock.Anything, mock.Anything, mock.MatchedBy(func(job provisioning.Job) bool {
return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
}), mock.Anything).Return(syncErr)
// Create a wrapper function that calls the provided function
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
return fn(rw, true)
}
legacyMigrator := NewLegacyMigrator(
mockLegacyMigrator,
mockStorageSwapper,
mockWorker,
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
progress.On("SetMessage", mock.Anything, "resetting unified storage").Return()
progress.On("ResetResults").Return()
progress.On("SetMessage", mock.Anything, "pulling resources").Return()
progress.On("SetMessage", mock.Anything, "error importing resources, reverting").Return()
// Execute
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
// Assert
require.Error(t, err)
require.Contains(t, err.Error(), "sync failed")
// Verify both errors occurred
mockStorageSwapper.AssertCalled(t, "StopReadingUnifiedStorage", mock.Anything)
})
}
func TestLegacyMigrator_Success(t *testing.T) {
t.Run("should complete migration successfully", func(t *testing.T) {
// Setup
ctx := context.Background()
mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
mockLegacyMigrator.On("Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything).
Return(nil)
mockStorageSwapper := NewMockStorageSwapper(t)
mockStorageSwapper.On("WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace").
Return(nil)
mockWorker := jobs.NewMockWorker(t)
mockWorker.On("Process", mock.Anything, mock.Anything, mock.MatchedBy(func(job provisioning.Job) bool {
return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
}), mock.Anything).Return(nil)
// Create a wrapper function that calls the provided function
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
return fn(rw, true)
}
legacyMigrator := NewLegacyMigrator(
mockLegacyMigrator,
mockStorageSwapper,
mockWorker,
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
progress.On("SetMessage", mock.Anything, "resetting unified storage").Return()
progress.On("ResetResults").Return()
progress.On("SetMessage", mock.Anything, "pulling resources").Return()
// Execute
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyMigrator.Migrate(ctx, repo, provisioning.MigrateJobOptions{}, progress)
// Assert
require.NoError(t, err)
// Verify all expected operations were called in order
mockLegacyMigrator.AssertCalled(t, "Migrate", mock.Anything, mock.Anything, "test-namespace", mock.Anything, mock.Anything)
mockStorageSwapper.AssertCalled(t, "WipeUnifiedAndSetMigratedFlag", mock.Anything, "test-namespace")
mockWorker.AssertCalled(t, "Process", mock.Anything, mock.Anything, mock.Anything, mock.Anything)
})
}
func TestLegacyMigrator_BeforeFnExecution(t *testing.T) {
t.Run("should execute beforeFn functions", func(t *testing.T) {
// Setup
mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
mockStorageSwapper := NewMockStorageSwapper(t)
mockWorker := jobs.NewMockWorker(t)
// Create a wrapper function that calls the provided function
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
return errors.New("abort test here")
}
legacyMigrator := NewLegacyMigrator(
mockLegacyMigrator,
mockStorageSwapper,
mockWorker,
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
// Only the initial expectations are needed; wrapFn aborts before any later progress messages.
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
// Execute
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyMigrator.Migrate(context.Background(), repo, provisioning.MigrateJobOptions{}, progress)
require.EqualError(t, err, "migrate from SQL: abort test here")
})
}
func TestLegacyMigrator_ProgressScanner(t *testing.T) {
t.Run("should update progress with scanner", func(t *testing.T) {
mockLegacyMigrator := NewMockLegacyResourcesMigrator(t)
mockStorageSwapper := NewMockStorageSwapper(t)
mockWorker := jobs.NewMockWorker(t)
// Create a wrapper function that calls the provided function
wrapFn := func(ctx context.Context, rw repository.Repository, stageOpts repository.StageOptions, fn func(repository.Repository, bool) error) error {
return errors.New("abort test here")
}
legacyMigrator := NewLegacyMigrator(
mockLegacyMigrator,
mockStorageSwapper,
mockWorker,
wrapFn,
)
progress := jobs.NewMockJobProgressRecorder(t)
// Only the initial expectations are needed; wrapFn aborts before any later progress messages.
progress.On("StrictMaxErrors", 1).Return()
progress.On("SetMessage", mock.Anything, "migrating legacy resources").Return()
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-namespace",
},
})
err := legacyMigrator.Migrate(context.Background(), repo, provisioning.MigrateJobOptions{}, progress)
require.EqualError(t, err, "migrate from SQL: abort test here")
require.Eventually(t, func() bool {
// Confirm the configured expectations were satisfied
return progress.AssertExpectations(t)
}, time.Second, 10*time.Millisecond)
})
}
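Several tests above pass a `wrapFn` that either invokes the callback against a staged repository or short-circuits with an error, which the migrator wraps as "migrate from SQL: …". That higher-order staging pattern can be sketched in stdlib Go; the names `repo`, `stageFn`, and `migrate` are illustrative, not the real Grafana types:

```go
package main

import (
	"errors"
	"fmt"
)

// repo stands in for repository.Repository.
type repo struct{ name string }

// stageFn mirrors the shape of wrapFn in the tests above: it stages a
// repository, then hands control to the callback.
type stageFn func(r *repo, fn func(staged *repo, pushable bool) error) error

// realStage invokes the callback against the staged repository.
func realStage(r *repo, fn func(*repo, bool) error) error {
	return fn(r, true)
}

// failingStage short-circuits before the callback ever runs, which is the
// behavior TestWrapWithCloneFn_Error exercises.
func failingStage(r *repo, fn func(*repo, bool) error) error {
	return errors.New("clone failed")
}

func migrate(stage stageFn, r *repo) error {
	if err := stage(r, func(staged *repo, pushable bool) error {
		return nil // migration body runs against the staged copy
	}); err != nil {
		return fmt.Errorf("migrate from SQL: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(migrate(realStage, &repo{name: "r"}))
	fmt.Println(migrate(failingStage, &repo{name: "r"}))
}
```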


@@ -1,430 +0,0 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
package migrate
import (
context "context"
mock "github.com/stretchr/testify/mock"
metadata "google.golang.org/grpc/metadata"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
// BulkStore_BulkProcessClient is an autogenerated mock type for the BulkStore_BulkProcessClient type
type BulkStore_BulkProcessClient struct {
mock.Mock
}
type BulkStore_BulkProcessClient_Expecter struct {
mock *mock.Mock
}
func (_m *BulkStore_BulkProcessClient) EXPECT() *BulkStore_BulkProcessClient_Expecter {
return &BulkStore_BulkProcessClient_Expecter{mock: &_m.Mock}
}
// CloseAndRecv provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) CloseAndRecv() (*resourcepb.BulkResponse, error) {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for CloseAndRecv")
}
var r0 *resourcepb.BulkResponse
var r1 error
if rf, ok := ret.Get(0).(func() (*resourcepb.BulkResponse, error)); ok {
return rf()
}
if rf, ok := ret.Get(0).(func() *resourcepb.BulkResponse); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*resourcepb.BulkResponse)
}
}
if rf, ok := ret.Get(1).(func() error); ok {
r1 = rf()
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// BulkStore_BulkProcessClient_CloseAndRecv_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'CloseAndRecv'
type BulkStore_BulkProcessClient_CloseAndRecv_Call struct {
*mock.Call
}
// CloseAndRecv is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) CloseAndRecv() *BulkStore_BulkProcessClient_CloseAndRecv_Call {
return &BulkStore_BulkProcessClient_CloseAndRecv_Call{Call: _e.mock.On("CloseAndRecv")}
}
func (_c *BulkStore_BulkProcessClient_CloseAndRecv_Call) Run(run func()) *BulkStore_BulkProcessClient_CloseAndRecv_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *BulkStore_BulkProcessClient_CloseAndRecv_Call) Return(_a0 *resourcepb.BulkResponse, _a1 error) *BulkStore_BulkProcessClient_CloseAndRecv_Call {
_c.Call.Return(_a0, _a1)
return _c
}
func (_c *BulkStore_BulkProcessClient_CloseAndRecv_Call) RunAndReturn(run func() (*resourcepb.BulkResponse, error)) *BulkStore_BulkProcessClient_CloseAndRecv_Call {
_c.Call.Return(run)
return _c
}
// CloseSend provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) CloseSend() error {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for CloseSend")
}
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
} else {
r0 = ret.Error(0)
}
return r0
}
// BulkStore_BulkProcessClient_CloseSend_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'CloseSend'
type BulkStore_BulkProcessClient_CloseSend_Call struct {
*mock.Call
}
// CloseSend is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) CloseSend() *BulkStore_BulkProcessClient_CloseSend_Call {
return &BulkStore_BulkProcessClient_CloseSend_Call{Call: _e.mock.On("CloseSend")}
}
func (_c *BulkStore_BulkProcessClient_CloseSend_Call) Run(run func()) *BulkStore_BulkProcessClient_CloseSend_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *BulkStore_BulkProcessClient_CloseSend_Call) Return(_a0 error) *BulkStore_BulkProcessClient_CloseSend_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *BulkStore_BulkProcessClient_CloseSend_Call) RunAndReturn(run func() error) *BulkStore_BulkProcessClient_CloseSend_Call {
_c.Call.Return(run)
return _c
}
// Context provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) Context() context.Context {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for Context")
}
var r0 context.Context
if rf, ok := ret.Get(0).(func() context.Context); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(context.Context)
}
}
return r0
}
// BulkStore_BulkProcessClient_Context_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Context'
type BulkStore_BulkProcessClient_Context_Call struct {
*mock.Call
}
// Context is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) Context() *BulkStore_BulkProcessClient_Context_Call {
return &BulkStore_BulkProcessClient_Context_Call{Call: _e.mock.On("Context")}
}
func (_c *BulkStore_BulkProcessClient_Context_Call) Run(run func()) *BulkStore_BulkProcessClient_Context_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *BulkStore_BulkProcessClient_Context_Call) Return(_a0 context.Context) *BulkStore_BulkProcessClient_Context_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *BulkStore_BulkProcessClient_Context_Call) RunAndReturn(run func() context.Context) *BulkStore_BulkProcessClient_Context_Call {
_c.Call.Return(run)
return _c
}
// Header provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) Header() (metadata.MD, error) {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for Header")
}
var r0 metadata.MD
var r1 error
if rf, ok := ret.Get(0).(func() (metadata.MD, error)); ok {
return rf()
}
if rf, ok := ret.Get(0).(func() metadata.MD); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(metadata.MD)
}
}
if rf, ok := ret.Get(1).(func() error); ok {
r1 = rf()
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// BulkStore_BulkProcessClient_Header_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Header'
type BulkStore_BulkProcessClient_Header_Call struct {
*mock.Call
}
// Header is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) Header() *BulkStore_BulkProcessClient_Header_Call {
return &BulkStore_BulkProcessClient_Header_Call{Call: _e.mock.On("Header")}
}
func (_c *BulkStore_BulkProcessClient_Header_Call) Run(run func()) *BulkStore_BulkProcessClient_Header_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *BulkStore_BulkProcessClient_Header_Call) Return(_a0 metadata.MD, _a1 error) *BulkStore_BulkProcessClient_Header_Call {
_c.Call.Return(_a0, _a1)
return _c
}
func (_c *BulkStore_BulkProcessClient_Header_Call) RunAndReturn(run func() (metadata.MD, error)) *BulkStore_BulkProcessClient_Header_Call {
_c.Call.Return(run)
return _c
}
// RecvMsg provides a mock function with given fields: m
func (_m *BulkStore_BulkProcessClient) RecvMsg(m interface{}) error {
ret := _m.Called(m)
if len(ret) == 0 {
panic("no return value specified for RecvMsg")
}
var r0 error
if rf, ok := ret.Get(0).(func(interface{}) error); ok {
r0 = rf(m)
} else {
r0 = ret.Error(0)
}
return r0
}
// BulkStore_BulkProcessClient_RecvMsg_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'RecvMsg'
type BulkStore_BulkProcessClient_RecvMsg_Call struct {
*mock.Call
}
// RecvMsg is a helper method to define mock.On call
// - m interface{}
func (_e *BulkStore_BulkProcessClient_Expecter) RecvMsg(m interface{}) *BulkStore_BulkProcessClient_RecvMsg_Call {
return &BulkStore_BulkProcessClient_RecvMsg_Call{Call: _e.mock.On("RecvMsg", m)}
}
func (_c *BulkStore_BulkProcessClient_RecvMsg_Call) Run(run func(m interface{})) *BulkStore_BulkProcessClient_RecvMsg_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(interface{}))
})
return _c
}
func (_c *BulkStore_BulkProcessClient_RecvMsg_Call) Return(_a0 error) *BulkStore_BulkProcessClient_RecvMsg_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *BulkStore_BulkProcessClient_RecvMsg_Call) RunAndReturn(run func(interface{}) error) *BulkStore_BulkProcessClient_RecvMsg_Call {
_c.Call.Return(run)
return _c
}
// Send provides a mock function with given fields: _a0
func (_m *BulkStore_BulkProcessClient) Send(_a0 *resourcepb.BulkRequest) error {
ret := _m.Called(_a0)
if len(ret) == 0 {
panic("no return value specified for Send")
}
var r0 error
if rf, ok := ret.Get(0).(func(*resourcepb.BulkRequest) error); ok {
r0 = rf(_a0)
} else {
r0 = ret.Error(0)
}
return r0
}
// BulkStore_BulkProcessClient_Send_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Send'
type BulkStore_BulkProcessClient_Send_Call struct {
*mock.Call
}
// Send is a helper method to define mock.On call
// - _a0 *resourcepb.BulkRequest
func (_e *BulkStore_BulkProcessClient_Expecter) Send(_a0 interface{}) *BulkStore_BulkProcessClient_Send_Call {
return &BulkStore_BulkProcessClient_Send_Call{Call: _e.mock.On("Send", _a0)}
}
func (_c *BulkStore_BulkProcessClient_Send_Call) Run(run func(_a0 *resourcepb.BulkRequest)) *BulkStore_BulkProcessClient_Send_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(*resourcepb.BulkRequest))
})
return _c
}
func (_c *BulkStore_BulkProcessClient_Send_Call) Return(_a0 error) *BulkStore_BulkProcessClient_Send_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *BulkStore_BulkProcessClient_Send_Call) RunAndReturn(run func(*resourcepb.BulkRequest) error) *BulkStore_BulkProcessClient_Send_Call {
_c.Call.Return(run)
return _c
}
// SendMsg provides a mock function with given fields: m
func (_m *BulkStore_BulkProcessClient) SendMsg(m interface{}) error {
ret := _m.Called(m)
if len(ret) == 0 {
panic("no return value specified for SendMsg")
}
var r0 error
if rf, ok := ret.Get(0).(func(interface{}) error); ok {
r0 = rf(m)
} else {
r0 = ret.Error(0)
}
return r0
}
// BulkStore_BulkProcessClient_SendMsg_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'SendMsg'
type BulkStore_BulkProcessClient_SendMsg_Call struct {
*mock.Call
}
// SendMsg is a helper method to define mock.On call
// - m interface{}
func (_e *BulkStore_BulkProcessClient_Expecter) SendMsg(m interface{}) *BulkStore_BulkProcessClient_SendMsg_Call {
return &BulkStore_BulkProcessClient_SendMsg_Call{Call: _e.mock.On("SendMsg", m)}
}
func (_c *BulkStore_BulkProcessClient_SendMsg_Call) Run(run func(m interface{})) *BulkStore_BulkProcessClient_SendMsg_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(interface{}))
})
return _c
}
func (_c *BulkStore_BulkProcessClient_SendMsg_Call) Return(_a0 error) *BulkStore_BulkProcessClient_SendMsg_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *BulkStore_BulkProcessClient_SendMsg_Call) RunAndReturn(run func(interface{}) error) *BulkStore_BulkProcessClient_SendMsg_Call {
_c.Call.Return(run)
return _c
}
// Trailer provides a mock function with no fields
func (_m *BulkStore_BulkProcessClient) Trailer() metadata.MD {
ret := _m.Called()
if len(ret) == 0 {
panic("no return value specified for Trailer")
}
var r0 metadata.MD
if rf, ok := ret.Get(0).(func() metadata.MD); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(metadata.MD)
}
}
return r0
}
// BulkStore_BulkProcessClient_Trailer_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Trailer'
type BulkStore_BulkProcessClient_Trailer_Call struct {
*mock.Call
}
// Trailer is a helper method to define mock.On call
func (_e *BulkStore_BulkProcessClient_Expecter) Trailer() *BulkStore_BulkProcessClient_Trailer_Call {
return &BulkStore_BulkProcessClient_Trailer_Call{Call: _e.mock.On("Trailer")}
}
func (_c *BulkStore_BulkProcessClient_Trailer_Call) Run(run func()) *BulkStore_BulkProcessClient_Trailer_Call {
_c.Call.Run(func(args mock.Arguments) {
run()
})
return _c
}
func (_c *BulkStore_BulkProcessClient_Trailer_Call) Return(_a0 metadata.MD) *BulkStore_BulkProcessClient_Trailer_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *BulkStore_BulkProcessClient_Trailer_Call) RunAndReturn(run func() metadata.MD) *BulkStore_BulkProcessClient_Trailer_Call {
_c.Call.Return(run)
return _c
}
// NewBulkStore_BulkProcessClient creates a new instance of BulkStore_BulkProcessClient. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewBulkStore_BulkProcessClient(t interface {
mock.TestingT
Cleanup(func())
}) *BulkStore_BulkProcessClient {
mock := &BulkStore_BulkProcessClient{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
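
The generated mock above follows testify's expecter pattern: `On` records an expectation, `Return` stores the values, and `Called` looks them up, panicking when no return value was specified. A minimal, stdlib-only sketch of that dispatch (illustrative stand-in names; the real mocks are produced by mockery on top of github.com/stretchr/testify/mock):

```go
package main

import "fmt"

// call records one expected method invocation, mirroring mock.Call.
type call struct {
	method  string
	returns []interface{}
}

// miniMock is a hand-rolled stand-in for mock.Mock's call registry.
type miniMock struct {
	calls map[string]*call
}

func newMiniMock() *miniMock { return &miniMock{calls: map[string]*call{}} }

// On registers an expected method call, mirroring mock.On.
func (m *miniMock) On(method string) *call {
	c := &call{method: method}
	m.calls[method] = c
	return c
}

// Return sets the values handed back when the method is invoked.
func (c *call) Return(vals ...interface{}) *call {
	c.returns = vals
	return c
}

// Called looks up the recorded expectation; like the generated code,
// it panics when no return value was specified.
func (m *miniMock) Called(method string) []interface{} {
	c, ok := m.calls[method]
	if !ok || len(c.returns) == 0 {
		panic(fmt.Sprintf("no return value specified for %s", method))
	}
	return c.returns
}

func main() {
	m := newMiniMock()
	m.On("Trailer").Return("md")
	fmt.Println(m.Called("Trailer")[0]) // prints "md"
}
```

The typed wrapper structs (e.g. `BulkStore_BulkProcessClient_Trailer_Call`) exist only to give `Run`/`Return` compile-time signatures over this untyped registry.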


@@ -1,113 +0,0 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
package migrate
import (
context "context"
grpc "google.golang.org/grpc"
mock "github.com/stretchr/testify/mock"
resourcepb "github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
// MockBulkStoreClient is an autogenerated mock type for the BulkStoreClient type
type MockBulkStoreClient struct {
mock.Mock
}
type MockBulkStoreClient_Expecter struct {
mock *mock.Mock
}
func (_m *MockBulkStoreClient) EXPECT() *MockBulkStoreClient_Expecter {
return &MockBulkStoreClient_Expecter{mock: &_m.Mock}
}
// BulkProcess provides a mock function with given fields: ctx, opts
func (_m *MockBulkStoreClient) BulkProcess(ctx context.Context, opts ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error) {
_va := make([]interface{}, len(opts))
for _i := range opts {
_va[_i] = opts[_i]
}
var _ca []interface{}
_ca = append(_ca, ctx)
_ca = append(_ca, _va...)
ret := _m.Called(_ca...)
if len(ret) == 0 {
panic("no return value specified for BulkProcess")
}
var r0 resourcepb.BulkStore_BulkProcessClient
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error)); ok {
return rf(ctx, opts...)
}
if rf, ok := ret.Get(0).(func(context.Context, ...grpc.CallOption) resourcepb.BulkStore_BulkProcessClient); ok {
r0 = rf(ctx, opts...)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(resourcepb.BulkStore_BulkProcessClient)
}
}
if rf, ok := ret.Get(1).(func(context.Context, ...grpc.CallOption) error); ok {
r1 = rf(ctx, opts...)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// MockBulkStoreClient_BulkProcess_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'BulkProcess'
type MockBulkStoreClient_BulkProcess_Call struct {
*mock.Call
}
// BulkProcess is a helper method to define mock.On call
// - ctx context.Context
// - opts ...grpc.CallOption
func (_e *MockBulkStoreClient_Expecter) BulkProcess(ctx interface{}, opts ...interface{}) *MockBulkStoreClient_BulkProcess_Call {
return &MockBulkStoreClient_BulkProcess_Call{Call: _e.mock.On("BulkProcess",
append([]interface{}{ctx}, opts...)...)}
}
func (_c *MockBulkStoreClient_BulkProcess_Call) Run(run func(ctx context.Context, opts ...grpc.CallOption)) *MockBulkStoreClient_BulkProcess_Call {
_c.Call.Run(func(args mock.Arguments) {
variadicArgs := make([]grpc.CallOption, len(args)-1)
for i, a := range args[1:] {
if a != nil {
variadicArgs[i] = a.(grpc.CallOption)
}
}
run(args[0].(context.Context), variadicArgs...)
})
return _c
}
func (_c *MockBulkStoreClient_BulkProcess_Call) Return(_a0 resourcepb.BulkStore_BulkProcessClient, _a1 error) *MockBulkStoreClient_BulkProcess_Call {
_c.Call.Return(_a0, _a1)
return _c
}
func (_c *MockBulkStoreClient_BulkProcess_Call) RunAndReturn(run func(context.Context, ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error)) *MockBulkStoreClient_BulkProcess_Call {
_c.Call.Return(run)
return _c
}
// NewMockBulkStoreClient creates a new instance of MockBulkStoreClient. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockBulkStoreClient(t interface {
mock.TestingT
Cleanup(func())
}) *MockBulkStoreClient {
mock := &MockBulkStoreClient{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
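
Variadic methods like `BulkProcess(ctx, opts ...grpc.CallOption)` need extra handling in `Run`: testify hands the mock a flat `mock.Arguments` slice, and the generated code rebuilds the typed variadic tail from it, skipping nils. A self-contained sketch of that reconstruction (simplified stand-in types, not the actual grpc signatures):

```go
package main

import "fmt"

// rebuildVariadic mirrors the generated Run helper: the first element
// is the fixed argument, the remainder is re-typed into a variadic
// slice, with nil entries left at their zero value.
func rebuildVariadic(args []interface{}) (first string, rest []int) {
	first = args[0].(string)
	rest = make([]int, len(args)-1)
	for i, a := range args[1:] {
		if a != nil {
			rest[i] = a.(int)
		}
	}
	return first, rest
}

func main() {
	first, rest := rebuildVariadic([]interface{}{"ctx", 1, 2, 3})
	fmt.Println(first, rest) // ctx [1 2 3]
}
```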


@@ -1,91 +0,0 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
package migrate
import (
context "context"
jobs "github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
mock "github.com/stretchr/testify/mock"
repository "github.com/grafana/grafana/apps/provisioning/pkg/repository"
v0alpha1 "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
)
// MockLegacyResourcesMigrator is an autogenerated mock type for the LegacyResourcesMigrator type
type MockLegacyResourcesMigrator struct {
mock.Mock
}
type MockLegacyResourcesMigrator_Expecter struct {
mock *mock.Mock
}
func (_m *MockLegacyResourcesMigrator) EXPECT() *MockLegacyResourcesMigrator_Expecter {
return &MockLegacyResourcesMigrator_Expecter{mock: &_m.Mock}
}
// Migrate provides a mock function with given fields: ctx, rw, namespace, opts, progress
func (_m *MockLegacyResourcesMigrator) Migrate(ctx context.Context, rw repository.ReaderWriter, namespace string, opts v0alpha1.MigrateJobOptions, progress jobs.JobProgressRecorder) error {
ret := _m.Called(ctx, rw, namespace, opts, progress)
if len(ret) == 0 {
panic("no return value specified for Migrate")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, repository.ReaderWriter, string, v0alpha1.MigrateJobOptions, jobs.JobProgressRecorder) error); ok {
r0 = rf(ctx, rw, namespace, opts, progress)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockLegacyResourcesMigrator_Migrate_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Migrate'
type MockLegacyResourcesMigrator_Migrate_Call struct {
*mock.Call
}
// Migrate is a helper method to define mock.On call
// - ctx context.Context
// - rw repository.ReaderWriter
// - namespace string
// - opts v0alpha1.MigrateJobOptions
// - progress jobs.JobProgressRecorder
func (_e *MockLegacyResourcesMigrator_Expecter) Migrate(ctx interface{}, rw interface{}, namespace interface{}, opts interface{}, progress interface{}) *MockLegacyResourcesMigrator_Migrate_Call {
return &MockLegacyResourcesMigrator_Migrate_Call{Call: _e.mock.On("Migrate", ctx, rw, namespace, opts, progress)}
}
func (_c *MockLegacyResourcesMigrator_Migrate_Call) Run(run func(ctx context.Context, rw repository.ReaderWriter, namespace string, opts v0alpha1.MigrateJobOptions, progress jobs.JobProgressRecorder)) *MockLegacyResourcesMigrator_Migrate_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(context.Context), args[1].(repository.ReaderWriter), args[2].(string), args[3].(v0alpha1.MigrateJobOptions), args[4].(jobs.JobProgressRecorder))
})
return _c
}
func (_c *MockLegacyResourcesMigrator_Migrate_Call) Return(_a0 error) *MockLegacyResourcesMigrator_Migrate_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *MockLegacyResourcesMigrator_Migrate_Call) RunAndReturn(run func(context.Context, repository.ReaderWriter, string, v0alpha1.MigrateJobOptions, jobs.JobProgressRecorder) error) *MockLegacyResourcesMigrator_Migrate_Call {
_c.Call.Return(run)
return _c
}
// NewMockLegacyResourcesMigrator creates a new instance of MockLegacyResourcesMigrator. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockLegacyResourcesMigrator(t interface {
mock.TestingT
Cleanup(func())
}) *MockLegacyResourcesMigrator {
mock := &MockLegacyResourcesMigrator{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}


@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package migrate


@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package migrate


@@ -1,129 +0,0 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
package migrate
import (
context "context"
mock "github.com/stretchr/testify/mock"
)
// MockStorageSwapper is an autogenerated mock type for the StorageSwapper type
type MockStorageSwapper struct {
mock.Mock
}
type MockStorageSwapper_Expecter struct {
mock *mock.Mock
}
func (_m *MockStorageSwapper) EXPECT() *MockStorageSwapper_Expecter {
return &MockStorageSwapper_Expecter{mock: &_m.Mock}
}
// StopReadingUnifiedStorage provides a mock function with given fields: ctx
func (_m *MockStorageSwapper) StopReadingUnifiedStorage(ctx context.Context) error {
ret := _m.Called(ctx)
if len(ret) == 0 {
panic("no return value specified for StopReadingUnifiedStorage")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context) error); ok {
r0 = rf(ctx)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockStorageSwapper_StopReadingUnifiedStorage_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'StopReadingUnifiedStorage'
type MockStorageSwapper_StopReadingUnifiedStorage_Call struct {
*mock.Call
}
// StopReadingUnifiedStorage is a helper method to define mock.On call
// - ctx context.Context
func (_e *MockStorageSwapper_Expecter) StopReadingUnifiedStorage(ctx interface{}) *MockStorageSwapper_StopReadingUnifiedStorage_Call {
return &MockStorageSwapper_StopReadingUnifiedStorage_Call{Call: _e.mock.On("StopReadingUnifiedStorage", ctx)}
}
func (_c *MockStorageSwapper_StopReadingUnifiedStorage_Call) Run(run func(ctx context.Context)) *MockStorageSwapper_StopReadingUnifiedStorage_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(context.Context))
})
return _c
}
func (_c *MockStorageSwapper_StopReadingUnifiedStorage_Call) Return(_a0 error) *MockStorageSwapper_StopReadingUnifiedStorage_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *MockStorageSwapper_StopReadingUnifiedStorage_Call) RunAndReturn(run func(context.Context) error) *MockStorageSwapper_StopReadingUnifiedStorage_Call {
_c.Call.Return(run)
return _c
}
// WipeUnifiedAndSetMigratedFlag provides a mock function with given fields: ctx, namespace
func (_m *MockStorageSwapper) WipeUnifiedAndSetMigratedFlag(ctx context.Context, namespace string) error {
ret := _m.Called(ctx, namespace)
if len(ret) == 0 {
panic("no return value specified for WipeUnifiedAndSetMigratedFlag")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, string) error); ok {
r0 = rf(ctx, namespace)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'WipeUnifiedAndSetMigratedFlag'
type MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call struct {
*mock.Call
}
// WipeUnifiedAndSetMigratedFlag is a helper method to define mock.On call
// - ctx context.Context
// - namespace string
func (_e *MockStorageSwapper_Expecter) WipeUnifiedAndSetMigratedFlag(ctx interface{}, namespace interface{}) *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call {
return &MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call{Call: _e.mock.On("WipeUnifiedAndSetMigratedFlag", ctx, namespace)}
}
func (_c *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call) Run(run func(ctx context.Context, namespace string)) *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(context.Context), args[1].(string))
})
return _c
}
func (_c *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call) Return(_a0 error) *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call) RunAndReturn(run func(context.Context, string) error) *MockStorageSwapper_WipeUnifiedAndSetMigratedFlag_Call {
_c.Call.Return(run)
return _c
}
// NewMockStorageSwapper creates a new instance of MockStorageSwapper. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockStorageSwapper(t interface {
mock.TestingT
Cleanup(func())
}) *MockStorageSwapper {
mock := &MockStorageSwapper{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}


@@ -0,0 +1,86 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.
package migrate
import (
context "context"
repository "github.com/grafana/grafana/apps/provisioning/pkg/repository"
mock "github.com/stretchr/testify/mock"
)
// MockWrapWithStageFn is an autogenerated mock type for the WrapWithStageFn type
type MockWrapWithStageFn struct {
mock.Mock
}
type MockWrapWithStageFn_Expecter struct {
mock *mock.Mock
}
func (_m *MockWrapWithStageFn) EXPECT() *MockWrapWithStageFn_Expecter {
return &MockWrapWithStageFn_Expecter{mock: &_m.Mock}
}
// Execute provides a mock function with given fields: ctx, repo, stageOptions, fn
func (_m *MockWrapWithStageFn) Execute(ctx context.Context, repo repository.Repository, stageOptions repository.StageOptions, fn func(repository.Repository, bool) error) error {
ret := _m.Called(ctx, repo, stageOptions, fn)
if len(ret) == 0 {
panic("no return value specified for Execute")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, repository.Repository, repository.StageOptions, func(repository.Repository, bool) error) error); ok {
r0 = rf(ctx, repo, stageOptions, fn)
} else {
r0 = ret.Error(0)
}
return r0
}
// MockWrapWithStageFn_Execute_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Execute'
type MockWrapWithStageFn_Execute_Call struct {
*mock.Call
}
// Execute is a helper method to define mock.On call
// - ctx context.Context
// - repo repository.Repository
// - stageOptions repository.StageOptions
// - fn func(repository.Repository, bool) error
func (_e *MockWrapWithStageFn_Expecter) Execute(ctx interface{}, repo interface{}, stageOptions interface{}, fn interface{}) *MockWrapWithStageFn_Execute_Call {
return &MockWrapWithStageFn_Execute_Call{Call: _e.mock.On("Execute", ctx, repo, stageOptions, fn)}
}
func (_c *MockWrapWithStageFn_Execute_Call) Run(run func(ctx context.Context, repo repository.Repository, stageOptions repository.StageOptions, fn func(repository.Repository, bool) error)) *MockWrapWithStageFn_Execute_Call {
_c.Call.Run(func(args mock.Arguments) {
run(args[0].(context.Context), args[1].(repository.Repository), args[2].(repository.StageOptions), args[3].(func(repository.Repository, bool) error))
})
return _c
}
func (_c *MockWrapWithStageFn_Execute_Call) Return(_a0 error) *MockWrapWithStageFn_Execute_Call {
_c.Call.Return(_a0)
return _c
}
func (_c *MockWrapWithStageFn_Execute_Call) RunAndReturn(run func(context.Context, repository.Repository, repository.StageOptions, func(repository.Repository, bool) error) error) *MockWrapWithStageFn_Execute_Call {
_c.Call.Return(run)
return _c
}
// NewMockWrapWithStageFn creates a new instance of MockWrapWithStageFn. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockWrapWithStageFn(t interface {
mock.TestingT
Cleanup(func())
}) *MockWrapWithStageFn {
mock := &MockWrapWithStageFn{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}


@@ -1,102 +0,0 @@
package migrate
import (
"context"
"fmt"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
//go:generate mockery --name BulkStoreClient --structname MockBulkStoreClient --inpackage --filename mock_bulk_store_client.go --with-expecter
//go:generate mockery --name=BulkStore_BulkProcessClient --srcpkg=github.com/grafana/grafana/pkg/storage/unified/resource --output=. --outpkg=migrate --filename=mock_bulk_process_client.go --with-expecter
type BulkStoreClient interface {
BulkProcess(ctx context.Context, opts ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error)
}
//go:generate mockery --name StorageSwapper --structname MockStorageSwapper --inpackage --filename mock_storage_swapper.go --with-expecter
type StorageSwapper interface {
StopReadingUnifiedStorage(ctx context.Context) error
WipeUnifiedAndSetMigratedFlag(ctx context.Context, namespace string) error
}
type storageSwapper struct {
// Direct access to unified storage... use carefully!
bulk BulkStoreClient
dual dualwrite.Service
}
func NewStorageSwapper(bulk BulkStoreClient, dual dualwrite.Service) StorageSwapper {
return &storageSwapper{
bulk: bulk,
dual: dual,
}
}
func (s *storageSwapper) StopReadingUnifiedStorage(ctx context.Context) error {
// FIXME: dual writer is not namespaced which means that we would consider all namespaces migrated
// after one migrates
for _, gr := range resources.SupportedProvisioningResources {
status, _ := s.dual.Status(ctx, gr.GroupResource())
status.ReadUnified = false
status.Migrated = 0
status.Migrating = 0
_, err := s.dual.Update(ctx, status)
if err != nil {
return err
}
}
return nil
}
func (s *storageSwapper) WipeUnifiedAndSetMigratedFlag(ctx context.Context, namespace string) error {
for _, gr := range resources.SupportedProvisioningResources {
status, _ := s.dual.Status(ctx, gr.GroupResource())
if status.ReadUnified {
return fmt.Errorf("unexpected state - already using unified storage for: %s", gr)
}
if status.Migrating > 0 {
if time.Since(time.UnixMilli(status.Migrating)) < time.Second*30 {
return fmt.Errorf("another migration job is running for: %s", gr)
}
}
settings := resource.BulkSettings{
RebuildCollection: true, // wipes everything in the collection
Collection: []*resourcepb.ResourceKey{{
Namespace: namespace,
Group: gr.Group,
Resource: gr.Resource,
}},
}
ctx = metadata.NewOutgoingContext(ctx, settings.ToMD())
stream, err := s.bulk.BulkProcess(ctx)
if err != nil {
return fmt.Errorf("error clearing unified %s / %w", gr, err)
}
stats, err := stream.CloseAndRecv()
if err != nil {
return fmt.Errorf("error clearing unified %s / %w", gr, err)
}
logger := logging.FromContext(ctx)
logger.Error("cleared unified storage", "stats", stats)
status.Migrated = time.Now().UnixMilli() // but not really... since the sync is starting
status.ReadUnified = true
status.WriteLegacy = false // keep legacy "clean"
_, err = s.dual.Update(ctx, status)
if err != nil {
return err
}
}
return nil
}


@@ -1,204 +0,0 @@
package migrate
import (
"context"
"errors"
"testing"
"time"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
"google.golang.org/grpc/metadata"
)
func TestStorageSwapper_StopReadingUnifiedStorage(t *testing.T) {
tests := []struct {
name string
setupMocks func(*MockBulkStoreClient, *dualwrite.MockService)
expectedError string
}{
{
name: "should update status for all resources",
setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
for _, gr := range resources.SupportedProvisioningResources {
status := dualwrite.StorageStatus{
ReadUnified: true,
Migrated: 123,
Migrating: 456,
}
dual.On("Status", mock.Anything, gr.GroupResource()).Return(status, nil)
dual.On("Update", mock.Anything, mock.MatchedBy(func(status dualwrite.StorageStatus) bool {
return !status.ReadUnified && status.Migrated == 0 && status.Migrating == 0
})).Return(dualwrite.StorageStatus{}, nil)
}
},
},
{
name: "should fail if status update fails",
setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
gr := resources.SupportedProvisioningResources[0]
dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)
dual.On("Update", mock.Anything, mock.Anything).Return(dualwrite.StorageStatus{}, errors.New("update failed"))
},
expectedError: "update failed",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
bulk := NewMockBulkStoreClient(t)
dual := dualwrite.NewMockService(t)
if tt.setupMocks != nil {
tt.setupMocks(bulk, dual)
}
swapper := NewStorageSwapper(bulk, dual)
err := swapper.StopReadingUnifiedStorage(context.Background())
if tt.expectedError != "" {
require.Error(t, err)
require.Contains(t, err.Error(), tt.expectedError)
} else {
require.NoError(t, err)
}
})
}
}
func TestStorageSwapper_WipeUnifiedAndSetMigratedFlag(t *testing.T) {
tests := []struct {
name string
setupMocks func(*MockBulkStoreClient, *dualwrite.MockService)
expectedError string
}{
{
name: "should fail if already using unified storage",
setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
gr := resources.SupportedProvisioningResources[0]
status := dualwrite.StorageStatus{
ReadUnified: true,
}
dual.On("Status", mock.Anything, gr.GroupResource()).Return(status, nil)
},
expectedError: "unexpected state - already using unified storage",
},
{
name: "should fail if migration is in progress",
setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
gr := resources.SupportedProvisioningResources[0]
status := dualwrite.StorageStatus{
ReadUnified: false,
Migrating: time.Now().UnixMilli(),
}
dual.On("Status", mock.Anything, gr.GroupResource()).Return(status, nil)
},
expectedError: "another migration job is running",
},
{
name: "should fail if bulk process fails",
setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
gr := resources.SupportedProvisioningResources[0]
dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)
bulk.On("BulkProcess", mock.Anything, mock.Anything).Return(nil, errors.New("bulk process failed"))
},
expectedError: "error clearing unified",
},
{
name: "should fail if status update fails after bulk process",
setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
gr := resources.SupportedProvisioningResources[0]
dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)
mockStream := NewBulkStore_BulkProcessClient(t)
mockStream.On("CloseAndRecv").Return(&resourcepb.BulkResponse{}, nil)
bulk.On("BulkProcess", mock.Anything, mock.Anything).Return(mockStream, nil)
dual.On("Update", mock.Anything, mock.MatchedBy(func(status dualwrite.StorageStatus) bool {
return status.ReadUnified && !status.WriteLegacy && status.Migrated > 0
})).Return(dualwrite.StorageStatus{}, errors.New("update failed"))
},
expectedError: "update failed",
},
{
name: "should fail if bulk process stream close fails",
setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
gr := resources.SupportedProvisioningResources[0]
dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)
mockStream := NewBulkStore_BulkProcessClient(t)
mockStream.On("CloseAndRecv").Return(nil, errors.New("stream close failed"))
bulk.On("BulkProcess", mock.Anything, mock.Anything).Return(mockStream, nil)
},
expectedError: "error clearing unified",
},
{
name: "should succeed with complete workflow",
setupMocks: func(bulk *MockBulkStoreClient, dual *dualwrite.MockService) {
for _, gr := range resources.SupportedProvisioningResources {
dual.On("Status", mock.Anything, gr.GroupResource()).Return(dualwrite.StorageStatus{}, nil)
mockStream := NewBulkStore_BulkProcessClient(t)
mockStream.On("CloseAndRecv").Return(&resourcepb.BulkResponse{}, nil)
bulk.On("BulkProcess", mock.MatchedBy(func(ctx context.Context) bool {
md, ok := metadata.FromOutgoingContext(ctx)
if !ok {
return false
}
//nolint:errcheck // hits the err != nil gotcha
settings, _ := resource.NewBulkSettings(md)
if !settings.RebuildCollection {
return false
}
if len(settings.Collection) != 1 {
return false
}
if settings.Collection[0].Namespace != "test-namespace" {
return false
}
if settings.Collection[0].Group != gr.Group {
return false
}
if settings.Collection[0].Resource != gr.Resource {
return false
}
return true
}), mock.Anything).Return(mockStream, nil)
dual.On("Update", mock.Anything, mock.MatchedBy(func(status dualwrite.StorageStatus) bool {
return status.ReadUnified && !status.WriteLegacy && status.Migrated > 0
})).Return(dualwrite.StorageStatus{}, nil)
}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
bulk := NewMockBulkStoreClient(t)
dual := dualwrite.NewMockService(t)
if tt.setupMocks != nil {
tt.setupMocks(bulk, dual)
}
swapper := NewStorageSwapper(bulk, dual)
err := swapper.WipeUnifiedAndSetMigratedFlag(context.Background(), "test-namespace")
if tt.expectedError != "" {
require.Error(t, err)
require.Contains(t, err.Error(), tt.expectedError)
} else {
require.NoError(t, err)
}
})
}
}


@@ -33,23 +33,7 @@ func NewUnifiedStorageMigrator(
func (m *UnifiedStorageMigrator) Migrate(ctx context.Context, repo repository.ReaderWriter, options provisioning.MigrateJobOptions, progress jobs.JobProgressRecorder) error {
namespace := repo.Config().GetNamespace()
// For folder-type repositories, only run sync (skip export and cleaner)
if repo.Config().Spec.Sync.Target == provisioning.SyncTargetTypeFolder {
progress.SetMessage(ctx, "pull resources")
syncJob := provisioning.Job{
Spec: provisioning.JobSpec{
Pull: &provisioning.SyncJobOptions{
Incremental: false,
},
},
}
if err := m.syncWorker.Process(ctx, repo, syncJob, progress); err != nil {
return fmt.Errorf("pull resources: %w", err)
}
return nil
}
// For instance-type repositories, run the full workflow: export -> sync -> clean
// Export resources first (for both folder and instance sync)
progress.SetMessage(ctx, "export resources")
progress.StrictMaxErrors(1) // strict as we want the entire instance to be managed
@@ -67,6 +51,7 @@ func (m *UnifiedStorageMigrator) Migrate(ctx context.Context, repo repository.Re
// Reset the results after the export as pull will operate on the same resources
progress.ResetResults()
// Pull resources from the repository
progress.SetMessage(ctx, "pull resources")
syncJob := provisioning.Job{
Spec: provisioning.JobSpec{
@@ -79,9 +64,12 @@ func (m *UnifiedStorageMigrator) Migrate(ctx context.Context, repo repository.Re
return fmt.Errorf("pull resources: %w", err)
}
progress.SetMessage(ctx, "clean namespace")
if err := m.namespaceCleaner.Clean(ctx, namespace, progress); err != nil {
return fmt.Errorf("clean namespace: %w", err)
// For instance-type repositories, also clean the namespace
if repo.Config().Spec.Sync.Target != provisioning.SyncTargetTypeFolder {
progress.SetMessage(ctx, "clean namespace")
if err := m.namespaceCleaner.Clean(ctx, namespace, progress); err != nil {
return fmt.Errorf("clean namespace: %w", err)
}
}
return nil


@@ -134,7 +134,7 @@ func TestUnifiedStorageMigrator_Migrate(t *testing.T) {
expectedError: "",
},
{
name: "should only run sync for folder-type repositories",
name: "should run export and sync for folder-type repositories",
setupMocks: func(nc *MockNamespaceCleaner, ew *jobs.MockWorker, sw *jobs.MockWorker, pr *jobs.MockJobProgressRecorder, rw *repository.MockRepository) {
rw.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
@@ -147,9 +147,15 @@ func TestUnifiedStorageMigrator_Migrate(t *testing.T) {
},
},
})
// Export should be skipped - no export-related mocks
// Cleaner should also be skipped - no cleaner-related mocks
// Only sync job should run
// Export should run for folder-type repositories
pr.On("SetMessage", mock.Anything, "export resources").Return()
pr.On("StrictMaxErrors", 1).Return()
ew.On("Process", mock.Anything, rw, mock.MatchedBy(func(job provisioning.Job) bool {
return job.Spec.Push != nil
}), pr).Return(nil)
pr.On("ResetResults").Return()
// Cleaner should be skipped - no cleaner-related mocks
// Sync job should run
pr.On("SetMessage", mock.Anything, "pull resources").Return()
sw.On("Process", mock.Anything, rw, mock.MatchedBy(func(job provisioning.Job) bool {
return job.Spec.Pull != nil && !job.Spec.Pull.Incremental
@@ -171,7 +177,14 @@ func TestUnifiedStorageMigrator_Migrate(t *testing.T) {
},
},
})
// Only sync job should run and fail
// Export should run first
pr.On("SetMessage", mock.Anything, "export resources").Return()
pr.On("StrictMaxErrors", 1).Return()
ew.On("Process", mock.Anything, rw, mock.MatchedBy(func(job provisioning.Job) bool {
return job.Spec.Push != nil
}), pr).Return(nil)
pr.On("ResetResults").Return()
// Sync job should run and fail
pr.On("SetMessage", mock.Anything, "pull resources").Return()
sw.On("Process", mock.Anything, rw, mock.MatchedBy(func(job provisioning.Job) bool {
return job.Spec.Pull != nil && !job.Spec.Pull.Incremental


@@ -7,7 +7,6 @@ import (
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
)
//go:generate mockery --name Migrator --structname MockMigrator --inpackage --filename mock_migrator.go --with-expecter
@@ -16,8 +15,6 @@ type Migrator interface {
}
type MigrationWorker struct {
storageStatus dualwrite.Service
legacyMigrator Migrator
unifiedMigrator Migrator
}
@@ -27,16 +24,9 @@ func NewMigrationWorkerFromUnified(unifiedMigrator Migrator) *MigrationWorker {
}
}
// HACK: we should decouple the implementation of these two
func NewMigrationWorker(
legacyMigrator Migrator,
unifiedMigrator Migrator,
storageStatus dualwrite.Service,
) *MigrationWorker {
func NewMigrationWorker(unifiedMigrator Migrator) *MigrationWorker {
return &MigrationWorker{
unifiedMigrator: unifiedMigrator,
legacyMigrator: legacyMigrator,
storageStatus: storageStatus,
}
}
@@ -56,28 +46,5 @@ func (w *MigrationWorker) Process(ctx context.Context, repo repository.Repositor
return errors.New("migration job submitted targeting repository that is not a ReaderWriter")
}
if options.History {
if repo.Config().Spec.Type != provisioning.GitHubRepositoryType {
return errors.New("history is only supported for github repositories")
}
}
// Block migrate for legacy resources if repository type is folder
if repo.Config().Spec.Sync.Target == provisioning.SyncTargetTypeFolder {
// HACK: we should not have to check for storage existence here
if w.storageStatus != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, w.storageStatus) {
return errors.New("migration of legacy resources is not supported for folder-type repositories")
}
}
// HACK: we should not have to check for storage existence here
if w.storageStatus != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, w.storageStatus) {
return w.legacyMigrator.Migrate(ctx, rw, *options, progress)
}
if options.History {
return errors.New("history is not yet supported in unified storage")
}
return w.unifiedMigrator.Migrate(ctx, rw, *options, progress)
}


@@ -2,7 +2,6 @@ package migrate
import (
"context"
"errors"
"testing"
"github.com/stretchr/testify/assert"
@@ -11,9 +10,7 @@ import (
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/apps/provisioning/pkg/repository/local"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
)
func TestMigrationWorker_IsSupported(t *testing.T) {
@@ -42,7 +39,7 @@ func TestMigrationWorker_IsSupported(t *testing.T) {
},
}
worker := NewMigrationWorker(nil, nil, nil)
worker := NewMigrationWorker(nil)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -53,7 +50,7 @@ func TestMigrationWorker_IsSupported(t *testing.T) {
}
func TestMigrationWorker_ProcessNotReaderWriter(t *testing.T) {
worker := NewMigrationWorker(nil, nil, nil)
worker := NewMigrationWorker(NewMockMigrator(t))
job := provisioning.Job{
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
@@ -68,56 +65,13 @@ func TestMigrationWorker_ProcessNotReaderWriter(t *testing.T) {
require.EqualError(t, err, "migration job submitted targeting repository that is not a ReaderWriter")
}
func TestMigrationWorker_WithHistory(t *testing.T) {
fakeDualwrite := dualwrite.NewMockService(t)
fakeDualwrite.On("ReadFromUnified", mock.Anything, mock.Anything).
Maybe().Return(true, nil) // using unified storage
worker := NewMigrationWorker(nil, nil, fakeDualwrite)
job := provisioning.Job{
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Migrate: &provisioning.MigrateJobOptions{
History: true,
},
},
}
t.Run("fail local", func(t *testing.T) {
progressRecorder := jobs.NewMockJobProgressRecorder(t)
progressRecorder.On("SetTotal", mock.Anything, 10).Return()
repo := local.NewRepository(&provisioning.Repository{}, nil)
err := worker.Process(context.Background(), repo, job, progressRecorder)
require.EqualError(t, err, "history is only supported for github repositories")
})
t.Run("fail unified", func(t *testing.T) {
progressRecorder := jobs.NewMockJobProgressRecorder(t)
progressRecorder.On("SetTotal", mock.Anything, 10).Return()
repo := repository.NewMockRepository(t)
repo.On("Config").Return(&provisioning.Repository{
Spec: provisioning.RepositorySpec{
Type: provisioning.GitHubRepositoryType,
GitHub: &provisioning.GitHubRepositoryConfig{
URL: "empty", // not valid
},
},
})
err := worker.Process(context.Background(), repo, job, progressRecorder)
require.EqualError(t, err, "history is not yet supported in unified storage")
})
}
func TestMigrationWorker_Process(t *testing.T) {
tests := []struct {
name string
setupMocks func(*MockMigrator, *MockMigrator, *dualwrite.MockService, *jobs.MockJobProgressRecorder)
setupRepo func(*repository.MockRepository)
job provisioning.Job
expectedError string
isLegacyActive bool
name string
setupMocks func(*MockMigrator, *jobs.MockJobProgressRecorder)
setupRepo func(*repository.MockRepository)
job provisioning.Job
expectedError string
}{
{
name: "should fail when migrate settings are missing",
@@ -127,7 +81,7 @@ func TestMigrationWorker_Process(t *testing.T) {
Migrate: nil,
},
},
setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(um *MockMigrator, pr *jobs.MockJobProgressRecorder) {
},
setupRepo: func(repo *repository.MockRepository) {
// No Config() call expected since we fail before that
@@ -135,150 +89,35 @@ func TestMigrationWorker_Process(t *testing.T) {
expectedError: "missing migrate settings",
},
{
name: "should use legacy migrator when legacy storage is active",
name: "should use unified storage migrator for instance-type repositories",
job: provisioning.Job{
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Migrate: &provisioning.MigrateJobOptions{},
},
},
isLegacyActive: true,
setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(um *MockMigrator, pr *jobs.MockJobProgressRecorder) {
pr.On("SetTotal", mock.Anything, 10).Return()
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil)
lm.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
},
setupRepo: func(repo *repository.MockRepository) {
repo.On("Config").Return(&provisioning.Repository{
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Target: provisioning.SyncTargetTypeInstance,
},
},
})
},
},
{
name: "should use unified storage migrator when legacy storage is not active",
job: provisioning.Job{
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Migrate: &provisioning.MigrateJobOptions{},
},
},
isLegacyActive: false,
setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
pr.On("SetTotal", mock.Anything, 10).Return()
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil)
um.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
},
setupRepo: func(repo *repository.MockRepository) {
repo.On("Config").Return(&provisioning.Repository{
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Target: provisioning.SyncTargetTypeInstance,
},
},
})
// No Config() call needed anymore
},
},
{
name: "should propagate migrator errors",
name: "should allow migration for folder-type repositories",
job: provisioning.Job{
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Migrate: &provisioning.MigrateJobOptions{},
},
},
isLegacyActive: true,
setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(um *MockMigrator, pr *jobs.MockJobProgressRecorder) {
pr.On("SetTotal", mock.Anything, 10).Return()
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil)
lm.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(errors.New("migration failed"))
},
setupRepo: func(repo *repository.MockRepository) {
repo.On("Config").Return(&provisioning.Repository{
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Target: provisioning.SyncTargetTypeInstance,
},
},
})
},
expectedError: "migration failed",
},
{
name: "should block migration of legacy resources for folder-type repositories",
job: provisioning.Job{
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Migrate: &provisioning.MigrateJobOptions{},
},
},
isLegacyActive: true,
setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
pr.On("SetTotal", mock.Anything, 10).Return()
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil)
// legacyMigrator should not be called as we block before reaching it
},
setupRepo: func(repo *repository.MockRepository) {
repo.On("Config").Return(&provisioning.Repository{
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Target: provisioning.SyncTargetTypeFolder,
},
},
})
},
expectedError: "migration of legacy resources is not supported for folder-type repositories",
},
{
name: "should allow migration of legacy resources for instance-type repositories",
job: provisioning.Job{
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Migrate: &provisioning.MigrateJobOptions{},
},
},
isLegacyActive: true,
setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
pr.On("SetTotal", mock.Anything, 10).Return()
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil)
lm.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
},
setupRepo: func(repo *repository.MockRepository) {
repo.On("Config").Return(&provisioning.Repository{
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Target: provisioning.SyncTargetTypeInstance,
},
},
})
},
expectedError: "",
},
{
name: "should allow migration for folder-type repositories when legacy storage is not active",
job: provisioning.Job{
Spec: provisioning.JobSpec{
Action: provisioning.JobActionMigrate,
Migrate: &provisioning.MigrateJobOptions{},
},
},
isLegacyActive: false,
setupMocks: func(lm *MockMigrator, um *MockMigrator, ds *dualwrite.MockService, pr *jobs.MockJobProgressRecorder) {
pr.On("SetTotal", mock.Anything, 10).Return()
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil)
um.On("Migrate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
},
setupRepo: func(repo *repository.MockRepository) {
repo.On("Config").Return(&provisioning.Repository{
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Target: provisioning.SyncTargetTypeFolder,
},
},
})
// No Config() call needed anymore
},
expectedError: "",
},
@@ -286,15 +125,13 @@ func TestMigrationWorker_Process(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
legacyMigrator := NewMockMigrator(t)
unifiedMigrator := NewMockMigrator(t)
dualWriteService := dualwrite.NewMockService(t)
progressRecorder := jobs.NewMockJobProgressRecorder(t)
worker := NewMigrationWorker(legacyMigrator, unifiedMigrator, dualWriteService)
worker := NewMigrationWorker(unifiedMigrator)
if tt.setupMocks != nil {
tt.setupMocks(legacyMigrator, unifiedMigrator, dualWriteService, progressRecorder)
tt.setupMocks(unifiedMigrator, progressRecorder)
}
rw := repository.NewMockRepository(t)
@@ -310,7 +147,7 @@ func TestMigrationWorker_Process(t *testing.T) {
require.NoError(t, err)
}
mock.AssertExpectationsForObjects(t, legacyMigrator, unifiedMigrator, dualWriteService, progressRecorder, rw)
mock.AssertExpectationsForObjects(t, unifiedMigrator, progressRecorder, rw)
})
}
}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package jobs

View File

@@ -14,7 +14,7 @@ import (
//go:generate mockery --name FullSyncFn --structname MockFullSyncFn --inpackage --filename full_sync_fn_mock.go --with-expecter
type FullSyncFn func(ctx context.Context, repo repository.Reader, compare CompareFn, clients resources.ResourceClients, currentRef string, repositoryResources resources.RepositoryResources, progress jobs.JobProgressRecorder, tracer tracing.Tracer, maxSyncWorkers int, metrics jobs.JobMetrics) error
//go:generate mockery --name CompareFn --structname MockCompareFn --inpackage --filename compare_fn_mock.go --with-expecter
//go:generate mockery -name CompareFn --structname MockCompareFn --inpackage --filename compare_fn_mock.go --with-expecter
type CompareFn func(ctx context.Context, repo repository.Reader, repositoryResources resources.RepositoryResources, ref string) ([]ResourceFileChange, error)
//go:generate mockery --name IncrementalSyncFn --structname MockIncrementalSyncFn --inpackage --filename incremental_sync_fn_mock.go --with-expecter

View File

@@ -12,7 +12,6 @@ import (
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/utils"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
@@ -29,9 +28,6 @@ type SyncWorker struct {
// ResourceClients for the repository
repositoryResources resources.RepositoryResourcesFactory
// Check if the system is using unified storage
storageStatus dualwrite.Service
// Patch status for the repository
patchStatus RepositoryPatchFn
@@ -48,7 +44,6 @@ type SyncWorker struct {
func NewSyncWorker(
clients resources.ClientFactory,
repositoryResources resources.RepositoryResourcesFactory,
storageStatus dualwrite.Service,
patchStatus RepositoryPatchFn,
syncer Syncer,
metrics jobs.JobMetrics,
@@ -59,7 +54,6 @@ func NewSyncWorker(
clients: clients,
repositoryResources: repositoryResources,
patchStatus: patchStatus,
storageStatus: storageStatus,
syncer: syncer,
metrics: metrics,
tracer: tracer,
@@ -96,13 +90,6 @@ func (r *SyncWorker) Process(ctx context.Context, repo repository.Repository, jo
)
}()
// Check if we are onboarding from legacy storage
// HACK -- this should be handled outside of this worker
if r.storageStatus != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, r.storageStatus) {
err := fmt.Errorf("sync not supported until storage has migrated")
return tracing.Error(span, err)
}
rw, ok := repo.(repository.ReaderWriter)
if !ok {
err := fmt.Errorf("sync job submitted for repository that does not support read-write")

View File

@@ -10,7 +10,6 @@ import (
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
@@ -46,7 +45,7 @@ func TestSyncWorker_IsSupported(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
worker := NewSyncWorker(nil, nil, nil, nil, nil, metrics, tracing.NewNoopTracerService(), 10)
worker := NewSyncWorker(nil, nil, nil, nil, metrics, tracing.NewNoopTracerService(), 10)
result := worker.IsSupported(context.Background(), tt.job)
require.Equal(t, tt.expected, result)
})
@@ -63,9 +62,7 @@ func TestSyncWorker_ProcessNotReaderWriter(t *testing.T) {
Title: "test-repo",
},
})
fakeDualwrite := dualwrite.NewMockService(t)
fakeDualwrite.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
worker := NewSyncWorker(nil, nil, fakeDualwrite, nil, nil, jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()), tracing.NewNoopTracerService(), 10)
worker := NewSyncWorker(nil, nil, nil, nil, jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()), tracing.NewNoopTracerService(), 10)
err := worker.Process(context.Background(), repo, provisioning.Job{}, jobs.NewMockJobProgressRecorder(t))
require.EqualError(t, err, "sync job submitted for repository that does not support read-write")
}
@@ -73,31 +70,13 @@ func TestSyncWorker_ProcessNotReaderWriter(t *testing.T) {
func TestSyncWorker_Process(t *testing.T) {
tests := []struct {
name string
setupMocks func(*resources.MockClientFactory, *resources.MockRepositoryResourcesFactory, *dualwrite.MockService, *MockRepositoryPatchFn, *MockSyncer, *mockReaderWriter, *jobs.MockJobProgressRecorder)
setupMocks func(*resources.MockClientFactory, *resources.MockRepositoryResourcesFactory, *MockRepositoryPatchFn, *MockSyncer, *mockReaderWriter, *jobs.MockJobProgressRecorder)
expectedError string
expectedStatus *provisioning.SyncStatus
}{
{
name: "legacy storage not migrated",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
rw.MockRepository.On("Config").Return(&provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
},
Spec: provisioning.RepositorySpec{
Title: "test-repo",
},
})
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(false, nil).Twice()
},
expectedError: "sync not supported until storage has migrated",
},
{
name: "failed initial status patching",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
// Setup repository config with existing LastRef
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
@@ -132,7 +111,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "failed getting repository resources",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
// Setup repository config
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
@@ -149,9 +128,6 @@ func TestSyncWorker_Process(t *testing.T) {
}
rw.MockRepository.On("Config").Return(repoConfig)
// Storage is migrated
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status update succeeds - expect granular patches
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -168,7 +144,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "failed getting clients for namespace",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
// Setup repository config
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
@@ -186,9 +162,6 @@ func TestSyncWorker_Process(t *testing.T) {
}
rw.MockRepository.On("Config").Return(repoConfig)
// Storage is migrated
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status update succeeds - expect granular patches
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -208,7 +181,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "successful sync",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
@@ -222,9 +195,6 @@ func TestSyncWorker_Process(t *testing.T) {
}
rw.MockRepository.On("Config").Return(repoConfig)
// Storage is migrated
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status update - expect granular patches
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -261,7 +231,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "failed sync",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
@@ -275,9 +245,6 @@ func TestSyncWorker_Process(t *testing.T) {
}
rw.MockRepository.On("Config").Return(repoConfig)
// Storage is migrated
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status update - expect granular patches
pr.On("SetMessage", mock.Anything, "update sync status at start").Return()
rpf.On("Execute", mock.Anything, repoConfig, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -315,7 +282,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "stats call fails",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
@@ -323,7 +290,6 @@ func TestSyncWorker_Process(t *testing.T) {
},
}
rw.MockRepository.On("Config").Return(repoConfig)
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResources.On("Stats", mock.Anything).Return(nil, errors.New("stats error"))
@@ -344,7 +310,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "stats returns nil stats and nil error",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
@@ -352,7 +318,6 @@ func TestSyncWorker_Process(t *testing.T) {
},
}
rw.MockRepository.On("Config").Return(repoConfig)
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
mockRepoResources := resources.NewMockRepositoryResources(t)
mockRepoResources.On("Stats", mock.Anything).Return(nil, nil)
@@ -378,7 +343,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "stats returns one managed stats",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
@@ -386,7 +351,6 @@ func TestSyncWorker_Process(t *testing.T) {
},
}
rw.MockRepository.On("Config").Return(repoConfig)
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial patch with granular updates
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -439,7 +403,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "stats returns multiple managed stats",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
@@ -447,7 +411,6 @@ func TestSyncWorker_Process(t *testing.T) {
},
}
rw.MockRepository.On("Config").Return(repoConfig)
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
mockRepoResources := resources.NewMockRepositoryResources(t)
stats := &provisioning.ResourceStats{
@@ -495,7 +458,7 @@ func TestSyncWorker_Process(t *testing.T) {
},
{
name: "failed final status patch",
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, ds *dualwrite.MockService, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
setupMocks: func(cf *resources.MockClientFactory, rrf *resources.MockRepositoryResourcesFactory, rpf *MockRepositoryPatchFn, s *MockSyncer, rw *mockReaderWriter, pr *jobs.MockJobProgressRecorder) {
repoConfig := &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
@@ -503,7 +466,6 @@ func TestSyncWorker_Process(t *testing.T) {
},
}
rw.MockRepository.On("Config").Return(repoConfig)
ds.On("ReadFromUnified", mock.Anything, mock.Anything).Return(true, nil).Twice()
// Initial status patch succeeds - expect granular patches
rpf.On("Execute", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil).Once()
@@ -534,7 +496,6 @@ func TestSyncWorker_Process(t *testing.T) {
// Create mocks
clientFactory := resources.NewMockClientFactory(t)
repoResourcesFactory := resources.NewMockRepositoryResourcesFactory(t)
dualwriteService := dualwrite.NewMockService(t)
repositoryPatchFn := NewMockRepositoryPatchFn(t)
syncer := NewMockSyncer(t)
readerWriter := &mockReaderWriter{
@@ -544,13 +505,12 @@ func TestSyncWorker_Process(t *testing.T) {
progressRecorder := jobs.NewMockJobProgressRecorder(t)
// Setup mocks
tt.setupMocks(clientFactory, repoResourcesFactory, dualwriteService, repositoryPatchFn, syncer, readerWriter, progressRecorder)
tt.setupMocks(clientFactory, repoResourcesFactory, repositoryPatchFn, syncer, readerWriter, progressRecorder)
// Create worker
worker := NewSyncWorker(
clientFactory,
repoResourcesFactory,
dualwriteService,
repositoryPatchFn.Execute,
syncer,
jobs.RegisterJobMetrics(prometheus.NewPedanticRegistry()),

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package jobs

View File

@@ -28,8 +28,6 @@ import (
authlib "github.com/grafana/authlib/types"
"github.com/grafana/grafana-app-sdk/logging"
dashboard "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
folders "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
connectionvalidation "github.com/grafana/grafana/apps/provisioning/pkg/connection"
appcontroller "github.com/grafana/grafana/apps/provisioning/pkg/controller"
@@ -54,14 +52,12 @@ import (
movepkg "github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs/move"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/jobs/sync"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources/signature"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/usage"
"github.com/grafana/grafana/pkg/services/apiserver"
"github.com/grafana/grafana/pkg/services/apiserver/builder"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"github.com/grafana/grafana/pkg/storage/unified/migrations"
"github.com/grafana/grafana/pkg/storage/unified/resource"
)
@@ -112,7 +108,6 @@ type APIBuilder struct {
jobHistoryLoki *jobs.LokiJobHistory
resourceLister resources.ResourceLister
dashboardAccess legacy.MigrationDashboardAccessor
storageStatus dualwrite.Service
unified resource.ResourceClient
repoFactory repository.Factory
client client.ProvisioningV0alpha1Interface
@@ -159,9 +154,9 @@ func NewAPIBuilder(
} else {
clients = resources.NewClientFactory(configProvider)
}
parsers := resources.NewParserFactory(clients)
legacyMigrator := migrations.ProvideUnifiedMigrator(dashboardAccess, unified)
resourceLister := resources.NewResourceListerForMigrations(unified, legacyMigrator, storageStatus)
resourceLister := resources.NewResourceListerForMigrations(unified)
b := &APIBuilder{
onlyApiServer: onlyApiServer,
@@ -174,7 +169,6 @@ func NewAPIBuilder(
repositoryResources: resources.NewRepositoryResourcesFactory(parsers, clients, resourceLister),
resourceLister: resourceLister,
dashboardAccess: dashboardAccess,
storageStatus: storageStatus,
unified: unified,
access: access,
jobHistoryConfig: jobHistoryConfig,
@@ -250,6 +244,10 @@ func RegisterAPIService(
return nil, nil
}
if dualwrite.IsReadingLegacyDashboardsAndFolders(context.Background(), storageStatus) {
return nil, fmt.Errorf("resources are stored in an incompatible data format to use provisioning. Please enable data migration in settings for folders and dashboards by adding the following configuration:\n[unified_storage.folders.folder.grafana.app]\nenableMigration = true\n\n[unified_storage.dashboards.dashboard.grafana.app]\nenableMigration = true\n\nAlternatively, disable provisioning")
}
allowedTargets := []provisioning.SyncTargetType{}
for _, target := range cfg.ProvisioningAllowedTargets {
allowedTargets = append(allowedTargets, provisioning.SyncTargetType(target))
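The new startup guard above blocks provisioning registration until unified storage is the read path for folders and dashboards. For readability, the configuration the error string asks administrators to add is (reformatted directly from the message itself):

```ini
[unified_storage.folders.folder.grafana.app]
enableMigration = true

[unified_storage.dashboards.dashboard.grafana.app]
enableMigration = true
```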
@@ -762,12 +760,6 @@ func (b *APIBuilder) GetPostStartHooks() (map[string]genericapiserver.PostStartH
go repoInformer.Informer().Run(postStartHookCtx.Done())
go jobInformer.Informer().Run(postStartHookCtx.Done())
// When starting with an empty instance -- switch to "mode 4+"
err = b.tryRunningOnlyUnifiedStorage()
if err != nil {
return err
}
// Create the repository resources factory
repositoryListerWrapper := func(ctx context.Context) ([]provisioning.Repository, error) {
return GetRepositoriesInNamespace(ctx, b.store)
@@ -790,29 +782,12 @@ func (b *APIBuilder) GetPostStartHooks() (map[string]genericapiserver.PostStartH
syncWorker := sync.NewSyncWorker(
b.clients,
b.repositoryResources,
b.storageStatus,
b.statusPatcher.Patch,
syncer,
metrics,
b.tracer,
10,
)
signerFactory := signature.NewSignerFactory(b.clients)
legacyResources := migrate.NewLegacyResourcesMigrator(
b.repositoryResources,
b.parsers,
b.dashboardAccess,
signerFactory,
b.clients,
export.ExportAll,
)
storageSwapper := migrate.NewStorageSwapper(b.unified, b.storageStatus)
legacyMigrator := migrate.NewLegacyMigrator(
legacyResources,
storageSwapper,
syncWorker,
stageIfPossible,
)
cleaner := migrate.NewNamespaceCleaner(b.clients)
unifiedStorageMigrator := migrate.NewUnifiedStorageMigrator(
@@ -820,12 +795,7 @@ func (b *APIBuilder) GetPostStartHooks() (map[string]genericapiserver.PostStartH
exportWorker,
syncWorker,
)
migrationWorker := migrate.NewMigrationWorker(
legacyMigrator,
unifiedStorageMigrator,
b.storageStatus,
)
migrationWorker := migrate.NewMigrationWorker(unifiedStorageMigrator)
deleteWorker := deletepkg.NewWorker(syncWorker, stageIfPossible, b.repositoryResources, metrics)
moveWorker := movepkg.NewWorker(syncWorker, stageIfPossible, b.repositoryResources, metrics)
@@ -897,7 +867,6 @@ func (b *APIBuilder) GetPostStartHooks() (map[string]genericapiserver.PostStartH
b.resourceLister,
b.clients,
b.jobs,
b.storageStatus,
b.GetHealthChecker(),
b.statusPatcher,
b.registry,
@@ -1306,65 +1275,6 @@ spec:
return oas, nil
}
// FIXME: This logic does not belong in provisioning! (but required for now)
// When starting an empty instance, we shift so that we never reference legacy storage
// This should run somewhere else at startup by default (dual writer? dashboards?)
func (b *APIBuilder) tryRunningOnlyUnifiedStorage() error {
ctx := context.Background()
if !b.storageStatus.ShouldManage(dashboard.DashboardResourceInfo.GroupResource()) {
return nil // not enabled
}
if !dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, b.storageStatus) {
return nil
}
// Count how many things exist - create a migrator on-demand for this
legacyMigrator := migrations.ProvideUnifiedMigrator(b.dashboardAccess, b.unified)
rsp, err := legacyMigrator.Migrate(ctx, legacy.MigrateOptions{
Namespace: "default", // FIXME! this works for single org, but need to check multi-org
Resources: []schema.GroupResource{{
Group: dashboard.GROUP, Resource: dashboard.DASHBOARD_RESOURCE,
}, {
Group: folders.GROUP, Resource: folders.RESOURCE,
}},
OnlyCount: true,
})
if err != nil {
return fmt.Errorf("error getting legacy count: %w", err)
}
for _, stats := range rsp.Summary {
if stats.Count > 0 {
return nil // something exists we can not just switch
}
}
logger := logging.DefaultLogger.With("logger", "provisioning startup")
mode5 := func(gr schema.GroupResource) error {
status, _ := b.storageStatus.Status(ctx, gr)
if !status.ReadUnified {
status.ReadUnified = true
status.WriteLegacy = false
status.WriteUnified = true
status.Runtime = false
status.Migrated = time.Now().UnixMilli()
_, err = b.storageStatus.Update(ctx, status)
logger.Info("set unified storage access", "group", gr.Group, "resource", gr.Resource)
return err
}
return nil // already reading unified
}
if err = mode5(dashboard.DashboardResourceInfo.GroupResource()); err != nil {
return err
}
if err = mode5(folders.FolderResourceInfo.GroupResource()); err != nil {
return err
}
return nil
}
// Helpers for fetching valid Repository objects
// TODO: where should the helpers live?

View File

@@ -277,6 +277,16 @@ func (r *DualReadWriter) createOrUpdate(ctx context.Context, create bool, opts D
// FIXME: to make sure it behaves in the same way as in sync,
// we should refactor the code to use the same function.
if r.shouldUpdateGrafanaDB(opts, parsed) {
// HACK: get the hash from the repository -- this avoids an additional RV increment;
// we should change the signature of Create and Update to return FileInfo instead
info, _ = r.repo.Read(ctx, opts.Path, opts.Ref)
if info != nil {
parsed.Meta.SetSourceProperties(utils.SourceProperties{
Path: opts.Path,
Checksum: info.Hash,
})
}
if _, err := r.folders.EnsureFolderPathExist(ctx, opts.Path); err != nil {
return nil, fmt.Errorf("ensure folder path exists: %w", err)
}
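The hack above re-reads the file after writing so the stored resource already carries the repository path and checksum, avoiding an extra resource-version bump later. A sketch of that stamping step, with a hypothetical sourceProperties type standing in for utils.SourceProperties and a SHA-256 hash standing in for whatever the real repository computes:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sourceProperties mirrors utils.SourceProperties from the diff: the path
// and checksum that tie a Grafana resource back to its repository file.
// This type and the hash choice are assumptions for illustration.
type sourceProperties struct {
	Path     string
	Checksum string
}

// stampSource reproduces the idea in createOrUpdate: after writing, hash the
// file content and record it alongside the path, so the saved resource does
// not need a second update (and RV increment) to attach the checksum.
func stampSource(path string, content []byte) sourceProperties {
	sum := sha256.Sum256(content)
	return sourceProperties{Path: path, Checksum: hex.EncodeToString(sum[:8])}
}

func main() {
	p := stampSource("dashboards/home.json", []byte(`{"title":"Home"}`))
	fmt.Println(p.Path, len(p.Checksum))
}
```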


@@ -197,6 +197,8 @@ func (fm *FolderManager) EnsureFolderTreeExists(ctx context.Context, ref, path s
if err != nil && (!errors.Is(err, repository.ErrFileNotFound) && !apierrors.IsNotFound(err)) {
return fn(folder, false, fmt.Errorf("check if folder exists before writing: %w", err))
} else if err == nil {
// Folder already exists in repository, add it to tree so resources can find it
fm.tree.Add(folder, parent)
return fn(folder, false, nil)
}


@@ -4,15 +4,9 @@ import (
"context"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
dashboard "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
folders "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/registry/apis/dashboard/legacy"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"github.com/grafana/grafana/pkg/storage/unified/migrations"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
@@ -31,25 +25,16 @@ type ResourceStore interface {
}
type ResourceListerFromSearch struct {
store ResourceStore
migrator migrations.UnifiedMigrator
storageStatus dualwrite.Service
store ResourceStore
}
func NewResourceLister(store ResourceStore) ResourceLister {
return &ResourceListerFromSearch{store: store}
}
// FIXME: the logic about migration and storage should probably be separated from this
func NewResourceListerForMigrations(
store ResourceStore,
migrator migrations.UnifiedMigrator,
storageStatus dualwrite.Service,
) ResourceLister {
func NewResourceListerForMigrations(store ResourceStore) ResourceLister {
return &ResourceListerFromSearch{
store: store,
migrator: migrator,
storageStatus: storageStatus,
store: store,
}
}
@@ -133,37 +118,6 @@ func (o *ResourceListerFromSearch) Stats(ctx context.Context, namespace, reposit
return stats, nil
}
// Get the stats based on what a migration could support
if o.storageStatus != nil && o.migrator != nil && dualwrite.IsReadingLegacyDashboardsAndFolders(ctx, o.storageStatus) {
rsp, err := o.migrator.Migrate(ctx, legacy.MigrateOptions{
Namespace: namespace,
Resources: []schema.GroupResource{{
Group: dashboard.GROUP, Resource: dashboard.DASHBOARD_RESOURCE,
}, {
Group: folders.GROUP, Resource: folders.RESOURCE,
}},
WithHistory: false,
OnlyCount: true,
})
if err != nil {
return nil, err
}
for _, v := range rsp.Summary {
stats.Instance = append(stats.Instance, provisioning.ResourceCount{
Group: v.Group,
Resource: v.Resource,
Count: v.Count,
})
// Everything is unmanaged in legacy storage
stats.Unmanaged = append(stats.Unmanaged, provisioning.ResourceCount{
Group: v.Group,
Resource: v.Resource,
Count: v.Count,
})
}
return stats, nil
}
// Get full instance stats
info, err := o.store.GetStats(ctx, &resourcepb.ResourceStatsRequest{
Namespace: namespace,


@@ -180,7 +180,12 @@ func (r *ResourcesManager) WriteResourceFileFromObject(ctx context.Context, obj
var ok bool
fid, ok = r.folders.Tree().DirPath(folder, rootFolder)
if !ok {
return "", fmt.Errorf("folder %s NOT found in tree with root: %s", folder, rootFolder)
// HACK: fall back to resolving the folder path without the root folder
// TODO: should we build the tree in a different way?
fid, ok = r.folders.Tree().DirPath(folder, "")
if !ok {
return "", fmt.Errorf("folder %s NOT found in tree", folder)
}
}
}
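The fallback in this hunk retries the path lookup with an empty root before giving up. A minimal sketch of that two-step resolution, where dirPath is a hypothetical stand-in for folders.Tree().DirPath backed by a plain map:

```go
package main

import "fmt"

// dirPath is a hypothetical stand-in for folders.Tree().DirPath: it resolves
// a folder to a path relative to the given root, reporting ok=false when the
// folder is not reachable under that root.
func dirPath(paths map[string]string, folder, root string) (string, bool) {
	p, ok := paths[root+"/"+folder]
	return p, ok
}

// resolveFolder mirrors the fallback added in the diff: try the configured
// root folder first, then retry with an empty root before failing.
func resolveFolder(paths map[string]string, folder, root string) (string, error) {
	if p, ok := dirPath(paths, folder, root); ok {
		return p, nil
	}
	if p, ok := dirPath(paths, folder, ""); ok {
		return p, nil
	}
	return "", fmt.Errorf("folder %s NOT found in tree", folder)
}

func main() {
	// This folder is only known relative to an empty root, so the first
	// lookup misses and the fallback succeeds.
	paths := map[string]string{"/team-a": "team-a"}
	p, err := resolveFolder(paths, "team-a", "grafana")
	fmt.Println(p, err)
}
```

As the TODO in the diff suggests, a tree built with a consistent root would make the second lookup unnecessary.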


@@ -1,33 +0,0 @@
package signature
import (
"context"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
"github.com/grafana/grafana/pkg/apimachinery/utils"
)
type grafanaSigner struct{}
// FIXME: where should we use this default signature?
// NewGrafanaSigner returns a Signer that uses the grafana user as the author
func NewGrafanaSigner() Signer {
return &grafanaSigner{}
}
func (s *grafanaSigner) Sign(ctx context.Context, item utils.GrafanaMetaAccessor) (context.Context, error) {
sig := repository.CommitSignature{
Name: "grafana",
// TODO: should we add email?
// Email: "grafana@grafana.com",
}
t, err := item.GetUpdatedTimestamp()
if err == nil && t != nil {
sig.When = *t
} else {
sig.When = item.GetCreationTimestamp().Time
}
return repository.WithAuthorSignature(ctx, sig), nil
}
