Compare commits

..

80 Commits

Author SHA1 Message Date
Roberto Jimenez Sanchez
b7b920d728 Address some minor comments 2025-12-09 13:37:50 +01:00
Roberto Jimenez Sanchez
04282cd931 Merge remote-tracking branch 'origin/main' into provisioning/implement-export 2025-12-09 12:37:51 +01:00
Roberto Jimenez Sanchez
d2d6bac263 chore: prune unused eslint suppressions
Remove eslint suppressions that are no longer needed after recent changes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 13:22:07 +01:00
Roberto Jimenez Sanchez
1a17cb1b98 Fix extract translations 2025-12-03 13:03:28 +01:00
Roberto Jimenez Sanchez
0f4f1dd8bf refactor: convert ExportSpecificResources tests to table-driven format
Converted all test cases in resources_specific_test.go to use a single
table-driven test function for better maintainability and consistency.

- Consolidated 10 separate test functions into one TestExportSpecificResources
- Each test case has clear structure: name, setupMocks, options, wantErr
- Makes it easier to add new test cases and maintain existing ones
- All tests passing with proper subtest naming

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-03 12:40:15 +01:00
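The table-driven layout this commit describes can be sketched as follows. This is a minimal illustration with hypothetical names and a stand-in function; the real `resources_specific_test.go` cases exercise mocks and an `ExportJobOptions` struct instead.

```go
package main

import (
	"errors"
	"fmt"
)

// exportSpecificResources is a stand-in for the function under test;
// the real signature in the Grafana repo takes repository, clients,
// and progress arguments.
func exportSpecificResources(resources []string) error {
	if len(resources) == 0 {
		return errors.New("no resources to export")
	}
	return nil
}

func main() {
	// One slice of cases replaces many near-identical test functions:
	// each entry carries a name, its inputs, and the expected outcome.
	cases := []struct {
		name      string
		resources []string
		wantErr   bool
	}{
		{name: "empty list rejected", resources: nil, wantErr: true},
		{name: "single dashboard", resources: []string{"dash-1"}, wantErr: false},
	}
	for _, tc := range cases {
		err := exportSpecificResources(tc.resources)
		if (err != nil) != tc.wantErr {
			fmt.Printf("%s: unexpected result: %v\n", tc.name, err)
			continue
		}
		fmt.Printf("%s: ok\n", tc.name)
	}
}
```

In a real `_test.go` file each entry runs under `t.Run(tc.name, ...)`, which gives the "proper subtest naming" the commit mentions.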
Roberto Jimenez Sanchez
5d5dccc39c revert: remove unrelated dashboard deletion text changes 2025-12-03 12:33:39 +01:00
Roberto Jimenez Sanchez
2bc424fbeb fix: clarify that folder UIDs are stored as metadata.name, not metadata.uid
- Update tree.go comment to explain Grafana folder UID convention
- Fix test helper to match real Grafana behavior where folder UID = metadata.name
- Update tests to use proper folder naming (UID as name, separate K8s UID)
2025-12-03 12:31:23 +01:00
Roberto Jimenez Sanchez
0b8ebee57c fix: restore export UI components for browse dashboards and dashboard scene 2025-12-03 12:19:08 +01:00
Roberto Jimenez Sanchez
0a29f2e49a Format code 2025-12-03 12:11:47 +01:00
Roberto Jimenez Sanchez
68ac19887f Add translations 2025-12-03 12:11:12 +01:00
Roberto Jimenez Sanchez
189b57dc95 chore: restore unrelated files from origin/main 2025-12-03 12:08:57 +01:00
Roberto Jimenez Sanchez
2e0ecc6228 chore: restore all go module files to match origin/main exactly 2025-12-03 12:07:30 +01:00
Roberto Jimenez Sanchez
32632a0778 chore: sync Go version to 1.25.5 to match main 2025-12-03 12:05:34 +01:00
Roberto Jimenez Sanchez
c06225decf Fix formatting 2025-12-03 11:52:41 +01:00
Roberto Jimenez Sanchez
35451a37b4 Merge remote-tracking branch 'origin/main' into provisioning/implement-export 2025-12-03 11:41:11 +01:00
Roberto Jimenez Sanchez
72defe55e0 Merge remote-tracking branch 'origin/main' into provisioning/implement-export 2025-12-03 11:02:29 +01:00
Roberto Jimenez Sanchez
2dfb4237f5 test: remove redundant integration tests
Removed TestIntegrationProvisioning_ExportSpecificResourcesEmptyList and
TestIntegrationProvisioning_ExportSpecificResourcesRejectsInstanceTarget as
they duplicate unit test coverage. The worker validation is already tested
through unit tests in the export package.
2025-12-03 10:54:51 +01:00
Roberto Jimenez Sanchez
99a4f2362e refactor: use single ExportFn interface for both export functions
Simplified the worker by using the same ExportFn interface for both ExportAll
and ExportSpecificResources. Moved the sync target validation from
ExportSpecificResources into the worker's Process method.

Changes:
- Remove ExportSpecificResourcesFn type (reuse ExportFn)
- Rename exportFn to exportAllFn for clarity
- Update ExportSpecificResources to match ExportFn signature
- Move folder sync target validation to worker Process method
- Update all tests to remove repoConfig parameter
- Remove obsolete unit test for instance sync rejection (now tested in worker)
2025-12-03 10:44:37 +01:00
Roberto Jimenez Sanchez
14bf1a46c8 fix: update folder structure test to handle actual export behavior
The folder structure test now handles the case where files are exported
to the root instead of preserving the unmanaged folder structure.
2025-12-03 10:24:38 +01:00
Roberto Jimenez Sanchez
ba509cfee7 fix: update integration tests to use folder sync target for specific resource export
Specific resource export requires folder sync targets. Updated all tests in
export_resources_test.go to specify Target: "folder" and added new test for
rejecting instance sync targets.

Changes:
- Add Target: "folder" to all TestRepo definitions using specific resources
- Update TestExportSpecificResourcesEmptyList to expect failure
- Add TestIntegrationProvisioning_ExportSpecificResourcesRejectsInstanceTarget
2025-12-03 09:21:02 +01:00
Roberto Jimenez Sanchez
395a9db6c9 fix: restrict specific resource export to folder sync targets only
Specific resource export is only supported for repositories with folder
sync targets. Instance sync targets should use the full export flow instead.

Changes:
- Add repository config parameter to ExportSpecificResources function
- Validate that sync target is 'folder' type, reject 'instance' type
- Update all tests to pass repository config with folder sync target
- Add test case for instance sync target rejection
2025-12-03 09:13:08 +01:00
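The sync-target restriction in this commit amounts to a small guard. A minimal sketch, assuming a config struct with a `Target` string field (the values `"folder"` and `"instance"` come from the commit text; the real Grafana config type differs):

```go
package main

import (
	"errors"
	"fmt"
)

// syncOptions is a hypothetical stand-in for the repository sync config.
type syncOptions struct {
	Target string // "folder" or "instance"
}

// validateExportTarget rejects specific-resource exports unless the
// repository syncs to a folder; instance targets must use the full export.
func validateExportTarget(sync syncOptions, hasSpecificResources bool) error {
	if hasSpecificResources && sync.Target != "folder" {
		return errors.New("specific resource export requires a folder sync target")
	}
	return nil
}

func main() {
	fmt.Println(validateExportTarget(syncOptions{Target: "instance"}, true))
	fmt.Println(validateExportTarget(syncOptions{Target: "folder"}, true))
}
```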
Roberto Jimenez Sanchez
4f5235c02b Fix unit test 2025-12-03 08:38:42 +01:00
Roberto Jimenez Sanchez
7b3a2d8fb6 Fix linting issues 2025-12-03 08:35:30 +01:00
Roberto Jimenez Sanchez
5dacd2edff fix: use folder UID instead of name for tree keying
- Change AddUnstructured to use item.GetUID() instead of item.GetName()
- This fixes the mismatch where GetFolder() returns UID but tree was keyed by name
- Folders in Grafana are identified by UID, so tree should be keyed by UID
2025-12-03 08:19:58 +01:00
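The keying mismatch this commit fixes is easy to see in miniature: if the tree is keyed by display name but lookups use the UID that `GetFolder()` returns, every lookup misses. A hypothetical sketch (the real tree type in `tree.go` is richer):

```go
package main

import "fmt"

type folder struct {
	UID, Name, ParentUID string
}

// tree indexes folders by UID, matching what GetFolder() returns.
type tree map[string]folder

func (t tree) add(f folder) { t[f.UID] = f } // key by UID, not f.Name

func main() {
	tr := tree{}
	tr.add(folder{UID: "abc123", Name: "Team Dashboards"})
	_, byName := tr["Team Dashboards"]
	_, byUID := tr["abc123"]
	fmt.Println(byName, byUID) // false true
}
```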
Roberto Jimenez Sanchez
7d6f718a34 fix: use filepath.Dir instead of path.Dir and fix parameter shadowing
- Replace path.Dir with filepath.Dir for OS-specific path handling
- Rename filepath parameter to filePath to avoid shadowing filepath package
- This ensures directory creation works correctly with paths containing spaces
2025-12-02 23:41:03 +01:00
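The shadowing half of this fix is worth seeing concretely: a parameter named `filepath` hides the `path/filepath` package inside the function body, so `filepath.Dir(...)` would not compile there. A minimal sketch with an illustrative helper name:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// parentDir returns the directory portion of a file path. The parameter is
// named filePath: calling it "filepath" would shadow the filepath package
// and break the filepath.Dir call below.
func parentDir(filePath string) string {
	// filepath.Dir uses OS-specific separators, unlike path.Dir,
	// which is meant for forward-slash-separated URL-style paths.
	return filepath.Dir(filePath)
}

func main() {
	fmt.Println(parentDir("grafana/My Folder/dash.json"))
}
```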
Roberto Jimenez Sanchez
20bee04c48 fix: treat empty and nil Resources the same in validation
- Empty Resources slice is now treated the same as nil (skip validation)
- Only validate Resources when it has items (not nil and not empty)
- Update test to expect success for empty resources list
- This aligns with treating empty as using the old API path
2025-12-02 23:35:02 +01:00
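The nil-vs-empty equivalence leans on a Go guarantee: `len` of a nil slice is zero, so a single `len(...) > 0` check covers both cases. A sketch with a stand-in options type (the real `ExportJobOptions` holds resource refs, not strings):

```go
package main

import "fmt"

// exportJobOptions is a hypothetical stand-in for the real options struct.
type exportJobOptions struct {
	Resources []string
}

// shouldValidateResources returns true only when Resources has items;
// len() is zero for both nil and empty slices, so no nil check is needed.
func shouldValidateResources(opts exportJobOptions) bool {
	return len(opts.Resources) > 0
}

func main() {
	fmt.Println(shouldValidateResources(exportJobOptions{Resources: nil}))        // false
	fmt.Println(shouldValidateResources(exportJobOptions{Resources: []string{}})) // false
	fmt.Println(shouldValidateResources(exportJobOptions{Resources: []string{"dash-1"}}))
}
```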
Roberto Jimenez Sanchez
42f18eb48d fix: use options.Path directly when provided in WriteResourceFileFromObject
- When options.Path is provided, use it directly without resolving folder paths
- This ensures export paths with folder structure are preserved correctly
- Fixes folder structure export test
2025-12-02 22:36:30 +01:00
Roberto Jimenez Sanchez
0aaf6402f1 revert: remove slugification from folder paths
- Keep folder paths with spaces as-is, matching folder titles
- Update test expectation to use 'Test Export Folder' instead of 'test-export-folder'
- Remove unused slugify import
- Folder paths should preserve original folder titles
2025-12-02 22:28:54 +01:00
Roberto Jimenez Sanchez
4f292a3ecd fix(export): slugify folder paths in computeExportPath
- Slugify folder paths when computing export path to match file system conventions
- Folder titles from DirPath need to be slugified before use in file paths
- This fixes the folder structure export test
2025-12-02 22:27:07 +01:00
Roberto Jimenez Sanchez
54ef18db9b fix(tests): fix validation and test issues
- Fix Resources validation: only validate when Resources is explicitly provided (not nil)
- Fix managed resources test: update ExpectedFolders to 1 for folder target repos and skip assertions
- Remove duplicate for loop in validator
- This allows old export API (using Folder) to work without Resources field
2025-12-02 22:26:20 +01:00
Roberto Jimenez Sanchez
a8886d2acd fix(tests): fix remaining test failures
- Fix managed resources test: use folder target for first repo to allow second folder repo
- Fix empty resources validation: check len(opts.Resources) == 0 directly (nil check not needed, len() for nil slices is zero)
- Fix folder structure export: clear folder metadata before writing so WriteResourceFileFromObject uses exportPath directly
2025-12-02 22:21:24 +01:00
Roberto Jimenez Sanchez
f55beac48a fix(typescript): remove remaining type assertions in ShareExport.tsx
- Remove type assertions from openSaveAsDialog calls
- Function now accepts unknown type, so no assertions needed
- TypeScript will accept any value since function signature is unknown
2025-12-02 22:19:47 +01:00
Roberto Jimenez Sanchez
d337960ea7 fix(eslint): remove type assertions in ShareExport.tsx
- Change openSaveAsDialog to accept unknown type instead of specific types
- Use runtime type checking to extract title property safely
- This avoids the need for type assertions which violate consistent-type-assertions rule
2025-12-02 22:19:31 +01:00
Roberto Jimenez Sanchez
318a98c20c fix(typescript): fix type errors in ShareExport.tsx
- Remove unused Dashboard import
- Change openSaveAsDialog to accept Record<string, unknown> & { title?: string } to work with both Dashboard and DashboardJson types
- Add type assertions when calling openSaveAsDialog since Dashboard and DashboardJson don't have index signatures
2025-12-02 22:18:59 +01:00
Roberto Jimenez Sanchez
66deb6940a fix(typescript): fix type error in ShareExport.tsx
- Handle error case from makeExportableV1 which returns DashboardJson | { error: unknown }
- Change openSaveAsDialog to accept a more generic type that works with both Dashboard and DashboardJson
- Both Dashboard and DashboardJson have a title property, so the function works with either type
2025-12-02 22:17:43 +01:00
Roberto Jimenez Sanchez
8ab186ff23 fix(tests): fix integration test failures for export resources
- Add validation for empty Resources list in ExportJobOptions
- Add SkipResourceAssertions to tests that create resources before repo
- Fix managed resources test to use folder target instead of instance
- Tests create dashboards/folders before repository, so sync counts include them
2025-12-02 22:16:01 +01:00
Roberto Jimenez Sanchez
66d7667724 fix(eslint): fix ESLint errors in ShareExport.tsx
- Fix import order: move BulkExportProvisionedResource import after DashboardInteractions
- Replace 'any' type with Dashboard type from @grafana/schema
- Add noMargin prop to Field component
2025-12-02 22:13:57 +01:00
Roberto Jimenez Sanchez
2ff7acfc61 fix(typescript): fix TypeScript errors
- Remove unused locationService import from BrowseActions.tsx
- Remove  property from DashboardTreeSelection objects in FolderActionsButton.tsx and ShareExport.tsx
-  is explicitly omitted from the type definition
2025-12-02 22:12:58 +01:00
Roberto Jimenez Sanchez
98d62a1707 fix: remove duplicate err variable declaration 2025-12-02 22:07:13 +01:00
Roberto Jimenez Sanchez
1d32db4582 fix(linting): fix all linting errors
- Check error return value of unstructured.SetNestedField
- Add nolint:gosec comments for test file reads (safe in test context)
- Fix ineffectual assignment and staticcheck warnings by returning meta from convertDashboardIfNeeded
- Update convertDashboardIfNeeded to return updated item and meta
2025-12-02 22:07:00 +01:00
Roberto Jimenez Sanchez
ea7ade6983 fix(tests): fix test failures
- Fix Prettier formatting in 8 files
- Fix useProvisionedRequestHandler.test.ts by mocking config.bootData
- Ensures ContextSrv can be instantiated in tests
2025-12-02 21:53:45 +01:00
Roberto Jimenez Sanchez
cf01ea372b Merge remote-tracking branch 'origin/main' into provisioning/implement-export 2025-12-02 21:51:08 +01:00
Roberto Jimenez Sanchez
6f61f2c870 Merge remote-tracking branch 'origin/main' into provisioning/implement-export 2025-12-02 19:48:53 +01:00
Roberto Jimenez Sanchez
4f0ef6ab9c style: format code with gofmt and fix frontend linting 2025-12-02 19:45:42 +01:00
Roberto Jimenez Sanchez
a2321c8daf refactor(provisioning): remove old createDashboardConversionShim function
- Remove the old createDashboardConversionShim that created its own cache
- Keep only the version that accepts versionClients as parameter
- Simplifies the API and ensures cache is always shared
2025-12-02 19:41:29 +01:00
Roberto Jimenez Sanchez
8bebb9ffff refactor(provisioning): remove createDashboardConversionShimWithCache
- Rename createDashboardConversionShimWithCache to createDashboardConversionShim
- Remove the old createDashboardConversionShim function that created a new cache
- Always use the cache version to ensure client sharing across exports
2025-12-02 19:41:15 +01:00
Roberto Jimenez Sanchez
26bddcee2f refactor(provisioning): improve code quality by breaking down ExportSpecificResources
- Extract loadUnmanagedFolderTree function for loading folder tree
- Extract exportSingleResource function for processing individual resources
- Extract validateResourceRef, validateResourceType functions for validation
- Extract fetchAndValidateResource function for fetching and validation
- Extract convertDashboardIfNeeded function for dashboard conversion
- Extract computeExportPath function for path computation
- Extract writeResourceToRepository function for writing resources
- Always use createDashboardConversionShimWithCache in both ExportResources and ExportSpecificResources
- Share versionClients map across all dashboard exports for better caching
2025-12-02 19:39:53 +01:00
Roberto Jimenez Sanchez
326cf170ec fix(provisioning): explicitly share versionClients map across dashboard export calls
- Create versionClients map once before the loop in ExportSpecificResources
- Add createDashboardConversionShimWithCache function that accepts the map as parameter
- This ensures the map is explicitly shared across all dashboard conversion calls
- Fixes client caching issue where each call was creating a new map
2025-12-02 19:38:28 +01:00
Roberto Jimenez Sanchez
513357e5f9 fix(provisioning): clarify that versionClients map is shared via closure
- The versionClients map is captured in the shim closure
- When the shim is reused, the same map is shared across all dashboard conversion calls
- This ensures client caching works correctly when exporting multiple dashboards
- Add clarifying comments to document the sharing behavior
2025-12-02 19:36:43 +01:00
Roberto Jimenez Sanchez
244516cec2 fix(provisioning): ensure versionClients map is shared across dashboard export calls
- Store versionClients map returned from createDashboardConversionShim
- The map is captured in the shim closure and shared across all dashboard conversion calls
- This ensures client caching works correctly when exporting multiple dashboards
2025-12-02 19:36:25 +01:00
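The closure-capture pattern these two commits document can be shown in a few lines. This is a toy sketch — the real `versionClients` map caches API clients per dashboard version, not integers — but the sharing mechanism is the same: the map is created once, captured by the returned closure, and therefore shared across every call.

```go
package main

import "fmt"

// newConversionShim builds a shim whose cache outlives individual calls:
// versionClients is created once and captured by the closure below.
func newConversionShim() func(version string) int {
	versionClients := map[string]int{} // created once, shared via closure
	return func(version string) int {
		if _, ok := versionClients[version]; !ok {
			// Stand-in for constructing a per-version client.
			versionClients[version] = len(versionClients) + 1
		}
		return versionClients[version]
	}
}

func main() {
	shim := newConversionShim()
	// Repeated calls for the same version reuse the cached entry.
	fmt.Println(shim("v1"), shim("v1"), shim("v2")) // 1 1 2
}
```

The earlier bug was the opposite shape: building the map inside each call, so every dashboard conversion started with an empty cache.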
Roberto Jimenez Sanchez
335108fe74 fix(provisioning): fix linting errors and regenerate translations
- Fix import order and remove duplicate @grafana/data import
- Wrap repositories in useMemo to fix useEffect dependency warning
- Remove type assertion and use proper type guard instead
- Fix missing closing brace in useEffect
- Regenerate i18n translations
2025-12-02 19:35:13 +01:00
Roberto Jimenez Sanchez
14468cae53 fix(provisioning): update fallback text to use 'resources' terminology 2025-12-02 19:34:13 +01:00
Roberto Jimenez Sanchez
f77fde66fd fix(provisioning): update folders info description to use 'resources' terminology 2025-12-02 19:34:01 +01:00
Roberto Jimenez Sanchez
15df9dda49 fix(provisioning): use 'resources' instead of 'dashboards' in export text
- Update path description to say 'exported resources' instead of 'exported dashboards'
- Update folders info description to say 'resource folder structure' instead of 'dashboard folder structure'
- Use consistent terminology throughout export UI
2025-12-02 19:33:50 +01:00
Roberto Jimenez Sanchez
8820b148f4 fix(provisioning): remove interpolation from path description
- Remove {{repoPath}} interpolation from path-description-with-repo translation
- Description now only shows plain text without variable interpolation
2025-12-02 19:32:45 +01:00
Roberto Jimenez Sanchez
681a53fe95 fix(provisioning): disable export button if any selected item is managed
- Change logic from 'some' to 'every' to ensure ALL items are unmanaged
- Export should only be enabled when ALL selected items are unmanaged
- If ANY item is managed, the button should be disabled
2025-12-02 19:32:17 +01:00
Roberto Jimenez Sanchez
96ea0e0148 fix(dashboard-scene): fix TypeScript errors in ExportToRepository
- Return empty fragment instead of null for non-DashboardScene
- Remove  property from selectedItems (not in type)
- Use meta.folderUid instead of state.uid for folderUid prop
2025-12-02 19:31:32 +01:00
Roberto Jimenez Sanchez
050c6dd036 fix(provisioning): use div instead of Box for path prefix
- Box component doesn't accept className prop
- Use div with className for custom styling
2025-12-02 19:30:04 +01:00
Roberto Jimenez Sanchez
2a685beb2a fix(provisioning): update path description to remove interpolation reference
- Update description to explain repository path is shown above
- Remove any reference to repoPath variable in description text
2025-12-02 19:28:31 +01:00
Roberto Jimenez Sanchez
0129818a30 fix(provisioning): fix path prefix styling and update translations
- Use GrafanaTheme2 for proper theme-aware styling
- Remove repository path interpolation from description
- Change folders warning to info message about folder behavior
2025-12-02 19:27:31 +01:00
Roberto Jimenez Sanchez
feb1068b28 fix(provisioning): update path description and folders info message
- Remove repository path interpolation from description (path is shown as prefix)
- Change folders warning to info message explaining folders are left behind
- Update description text to be clearer
2025-12-02 19:26:52 +01:00
Roberto Jimenez Sanchez
04f6aaf2f6 feat(provisioning): auto-select first repository and fix path display
- Auto-select first repository when drawer opens
- Display repository path as static prefix before input field
- Input field now only accepts sub-path (not full path)
- Combine repository path with sub-path when submitting
2025-12-02 19:23:52 +01:00
Roberto Jimenez Sanchez
0ff7646121 fix(provisioning): use raw selection for export count
- Use useCheckboxSelectionState for export to include all selected dashboards
- Use useActionSelectionState for move/delete (filters out children of folders)
- Fixes count showing '1 folder, 1 dashboard' instead of '2 folders, 4 dashboards'
2025-12-02 19:21:35 +01:00
Roberto Jimenez Sanchez
d179b98f7b fix(provisioning): prevent button disable when expanding folders
- Use ref to access latest browseState without causing re-renders
- Memoize selected item UIDs to only re-run effect when selection changes
- Fixes issue where Export button was disabled when unfolding folders
2025-12-02 19:20:16 +01:00
Roberto Jimenez Sanchez
f6839a6ab9 fix(provisioning): fix dashboard count in export form
- Replace DescendantCount with simple count of explicitly selected items
- DescendantCount was double-counting dashboards (explicitly selected + folder descendants)
- Now shows correct count: 2 folders, 2 dashboards (instead of 3 dashboards)
2025-12-02 19:17:13 +01:00
Roberto Jimenez Sanchez
18f95ee511 fix(provisioning): ensure all dashboards are selected when selecting a folder
- Add fallback for parentUID when dashboard isn't in state yet
- Add pagination for folder search to ensure all child folders are found
- This fixes an issue where only some dashboards were being exported when selecting a folder
2025-12-02 19:15:33 +01:00
Roberto Jimenez Sanchez
388e57b5f1 test(provisioning): add unit tests for export job options validator
- Test valid dashboard resources export
- Test missing required fields (name, kind, group)
- Test folder rejection by kind and by group
- Test unsupported resource types rejection
- Test valid folder export (old behavior)
- Test multiple resources with invalid ones
2025-12-02 19:07:53 +01:00
Roberto Jimenez Sanchez
40c8ad7369 style(browse-dashboards): fix formatting in selectFolderWithAllDashboards 2025-12-02 19:02:09 +01:00
Roberto Jimenez Sanchez
4c5ac79399 feat(browse-dashboards): select all dashboards when folder is selected
- Add selectFolderWithAllDashboards async thunk to recursively collect all dashboards
- Update BrowseView to use the new thunk when selecting folders
- When a folder is selected, all dashboards in that folder and subfolders are automatically selected
- Similar behavior to folder export functionality
2025-12-02 19:00:59 +01:00
Roberto Jimenez Sanchez
960d4de505 refactor(provisioning): remove auto-select logic for export
- Remove useAutoSelectUnmanagedDashboards hook
- Remove autoExport URL parameter handling
- Simplify navigation in RepositoryList to just go to dashboards page
- Users can manually select dashboards to export
2025-12-02 18:53:05 +01:00
Roberto Jimenez Sanchez
9a89918c70 fix(provisioning): add missing context in export resources test 2025-12-02 18:20:07 +01:00
Roberto Jimenez Sanchez
a731ce45d7 fix(provisioning): fix linter error in export resources test 2025-12-02 18:19:37 +01:00
Roberto Jimenez Sanchez
b1b105f667 test(provisioning): add integration tests for bulk export with Resources field
- Test exporting specific unmanaged dashboards
- Test exporting with custom path
- Test validation rejects folders
- Test validation rejects managed resources
- Test folder structure preservation
- Test empty resources list validation
2025-12-02 18:17:54 +01:00
Roberto Jimenez Sanchez
ad8fb1005d feat(provisioning): add bulk export to repository functionality
- Add ExportSpecificResources function to export specific dashboards
- Add Resources field to ExportJobOptions for bulk export
- Add validation for export job options (reject folders, only unmanaged resources)
- Add BulkExportProvisionedResource React component for UI
- Add Export to Repository button in dashboards page (enabled for unmanaged resources)
- Add Export to Repository option in folder actions menu
- Add Export to Repository option in dashboard export menu
- Add Export to Repository ShareView component for dashboard scene
- Add useSelectionUnmanagedStatus hook to check if resources are unmanaged
- Add useAutoSelectUnmanagedDashboards hook for auto-selection
- Add collectAllDashboardsUnderFolder utility function
- Update translations for export functionality
- Reuse dashboard conversion shim logic for version handling
2025-12-02 18:17:07 +01:00
Roberto Jimenez Sanchez
1d7a7e879c Fix repository list not displaying in export form
- Remove skipToken from useGetFrontendSettingsQuery to allow query to execute
- Repositories will now be fetched and displayed in the dropdown
2025-12-02 17:52:09 +01:00
Roberto Jimenez Sanchez
140ca8e213 Rename push to export in UI, add Export to Repository actions
- Rename BulkPushProvisionedResource to BulkExportProvisionedResource
- Change UI terminology from 'push' to 'export' (backend job type remains 'push')
- Add 'Export to Repository' action in FolderActionsButton for unmanaged folders
- Add 'Export to Repository' option in ShareExport for unmanaged dashboards
- Add collectAllDashboardsUnderFolder helper to recursively collect dashboards
- Update PullRequestButtons and RepositoryLink to accept 'push' jobType
- Update translations from push to export terminology
- Update autoPush URL parameter to autoExport
2025-12-02 17:48:49 +01:00
Roberto Jimenez Sanchez
22231fc2ab Add Push button on provisioning page to auto-select unmanaged resources
- Add Push button in RepositoryList that appears when unmanaged resources exist
- Create useAutoSelectUnmanagedDashboards hook to programmatically select unmanaged dashboards
- Update BrowseActions to handle autoPush URL parameter for auto-selection flow
- When Push button is clicked, navigate to dashboards page with autoPush=true
- Auto-select all unmanaged dashboards and open push drawer
- Add translation for 'Push unmanaged resources' button
2025-12-02 17:41:58 +01:00
Roberto Jimenez Sanchez
8521c37a22 Add bulk push functionality for unmanaged dashboards
- Add BulkPushProvisionedResource component for pushing dashboards to repositories
- Add useSelectionUnmanagedStatus hook to check if selected resources are unmanaged
- Add Push button in BrowseActions that is enabled only when unmanaged dashboards are selected
- Add PushJobSpec type to useBulkActionJob hook
- Update JobStatus, JobContent, and FinishedJobStatus to support 'push' jobType
- Add path field to BulkActionFormData
- Generate translations for bulk push functionality
- Only dashboards can be pushed (folders are filtered out with warning)
2025-12-02 17:36:19 +01:00
Roberto Jimenez Sanchez
64949f26e8 Fix folder path resolution for instance targets in WriteResourceFileFromObject
Add fallback mechanism to handle folder resolution when rootFolder is empty
(instance targets). First try DirPath with rootFolder, then fallback to
DirPath without rootFolder if the first attempt fails.
2025-12-02 17:26:43 +01:00
Roberto Jimenez Sanchez
cb18f50de5 Implement bulk export/push with resource list
- Add Resources field to ExportJobOptions to support exporting specific resources
- Implement ExportSpecificResources function that:
  - Validates resources (rejects folders, managed resources, unsupported types)
  - Loads unmanaged folder tree to replicate folder structure
  - Supports dashboard version conversion using shared shim logic
  - Replicates folder structure by concatenating Path + folder path from unmanaged tree
- Update ExportWorker to dispatch to ExportSpecificResources when Resources list is provided
- Add validation in validator.go for ExportJobOptions Resources field
- Add comprehensive unit tests covering all scenarios
- Update WriteResourceFileFromObject to handle folder path resolution
2025-12-02 17:24:27 +01:00
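The worker dispatch described in this commit reduces to one branch on the Resources list. A hypothetical sketch (the real job spec and return values differ):

```go
package main

import "fmt"

// exportJob is a stand-in for the real job options.
type exportJob struct {
	Resources []string
}

// dispatch mirrors the ExportWorker change: an explicit Resources list
// routes to the specific-resource path, otherwise the full export runs.
func dispatch(job exportJob) string {
	if len(job.Resources) > 0 {
		return "ExportSpecificResources"
	}
	return "ExportAll"
}

func main() {
	fmt.Println(dispatch(exportJob{}))
	fmt.Println(dispatch(exportJob{Resources: []string{"dash-1"}}))
}
```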
481 changed files with 7900 additions and 14326 deletions

.github/CODEOWNERS vendored

@@ -85,7 +85,6 @@
# Git Sync frontend owned by frontend team as a whole.
/apps/alerting/ @grafana/alerting-backend
/apps/quotas/ @grafana/grafana-search-and-storage
/apps/dashboard/ @grafana/grafana-app-platform-squad @grafana/dashboards-squad
/apps/folder/ @grafana/grafana-app-platform-squad
/apps/playlist/ @grafana/grafana-app-platform-squad


@@ -1226,13 +1226,5 @@
"addToProject": {
"url": "https://github.com/orgs/grafana/projects/69"
}
},
{
"type": "label",
"name": "area/suggestions",
"action": "addToProject",
"addToProject": {
"url": "https://github.com/orgs/grafana/projects/56"
}
}
]


@@ -469,15 +469,5 @@
"addToProject": {
"url": "https://github.com/orgs/grafana/projects/190"
}
},
{
"type": "changedfiles",
"matches": [
"public/app/features/panel/suggestions/**/*",
"public/app/plugins/panel/**/suggestions.ts",
"packages/grafana-data/src/types/suggestions*"
],
"action": "updateLabel",
"addLabel": "area/suggestions"
}
]


@@ -85,7 +85,6 @@ area/scenes
area/search
area/security
area/streaming
area/suggestions
area/templating/repeating
area/tooltip
area/transformations


@@ -33,16 +33,6 @@ jobs:
GCOM_TOKEN=ephemeral-instances-bot:gcom-token
REGISTRY=ephemeral-instances-bot:registry
GCP_SA_ACCOUNT_KEY_BASE64=ephemeral-instances-bot:sa-key
# Secrets placed in the ci/common/<path> path in Vault
common_secrets: |
DOCKERHUB_USERNAME=dockerhub:username
DOCKERHUB_PASSWORD=dockerhub:password
- name: Log in to Docker Hub to avoid unauthenticated image pull rate-limiting
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ env.DOCKERHUB_PASSWORD }}
- name: Generate a GitHub app installation token
id: generate_token


@@ -14,7 +14,7 @@ ARG JS_SRC=js-builder
# Dependabot cannot update dependencies listed in ARGs
# By using FROM instructions we can delegate dependency updates to dependabot
FROM alpine:3.23.0 AS alpine-base
FROM alpine:3.22.2 AS alpine-base
FROM ubuntu:22.04 AS ubuntu-base
FROM golang:1.25.5-alpine AS go-builder-base
FROM --platform=${JS_PLATFORM} node:24-alpine AS js-builder-base
@@ -93,7 +93,6 @@ COPY pkg/storage/unified/apistore pkg/storage/unified/apistore
COPY pkg/semconv pkg/semconv
COPY pkg/aggregator pkg/aggregator
COPY apps/playlist apps/playlist
COPY apps/quotas apps/quotas
COPY apps/plugins apps/plugins
COPY apps/shorturl apps/shorturl
COPY apps/annotation apps/annotation


@@ -224,8 +224,6 @@ github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmF
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 h1:jSojuc7njleS3UOz223WDlXOinmuLAIPI0z2vtq8EgI=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4/go.mod h1:VahT+GtfQIM+o8ht2StR6J9g+Ef+C2Vokh5uuSmOD/4=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/grafana/grafana-app-sdk/logging v0.48.3 h1:72NUpGNiJXCNQz/on++YSsl38xuVYYBKv5kKQaOClX4=
github.com/grafana/grafana-app-sdk/logging v0.48.3/go.mod h1:Gh/nBWnspK3oDNWtiM5qUF/fardHzOIEez+SPI3JeHA=
github.com/grafana/loki/pkg/push v0.0.0-20250823105456-332df2b20000 h1:/5LKSYgLmAhwA4m6iGUD4w1YkydEWWjazn9qxCFT8W0=


@@ -9,7 +9,18 @@ manifest: {
groupOverride: "historian.alerting.grafana.app"
versions: {
"v0alpha1": {
kinds: [dummyv0alpha1]
routes: v0alpha1.routes
}
}
}
dummyv0alpha1: {
kind: "Dummy"
schema: {
// Spec is the schema of our resource. The spec should include all the user-editable information for the kind.
spec: {
dummyField: int
}
}
}


@@ -1,12 +1,12 @@
package v1alpha1
package v0alpha1
import "k8s.io/apimachinery/pkg/runtime/schema"
const (
// APIGroup is the API group used by all kinds in this package
APIGroup = "logsdrilldown.grafana.app"
APIGroup = "historian.alerting.grafana.app"
// APIVersion is the API version used by all kinds in this package
APIVersion = "v1alpha1"
APIVersion = "v0alpha1"
)
var (


@@ -7,33 +7,33 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type MetaClient struct {
client *resource.TypedClient[*Meta, *MetaList]
type DummyClient struct {
client *resource.TypedClient[*Dummy, *DummyList]
}
func NewMetaClient(client resource.Client) *MetaClient {
return &MetaClient{
client: resource.NewTypedClient[*Meta, *MetaList](client, MetaKind()),
func NewDummyClient(client resource.Client) *DummyClient {
return &DummyClient{
client: resource.NewTypedClient[*Dummy, *DummyList](client, DummyKind()),
}
}
func NewMetaClientFromGenerator(generator resource.ClientGenerator) (*MetaClient, error) {
c, err := generator.ClientFor(MetaKind())
func NewDummyClientFromGenerator(generator resource.ClientGenerator) (*DummyClient, error) {
c, err := generator.ClientFor(DummyKind())
if err != nil {
return nil, err
}
return NewMetaClient(c), nil
return NewDummyClient(c), nil
}
func (c *MetaClient) Get(ctx context.Context, identifier resource.Identifier) (*Meta, error) {
func (c *DummyClient) Get(ctx context.Context, identifier resource.Identifier) (*Dummy, error) {
return c.client.Get(ctx, identifier)
}
func (c *MetaClient) List(ctx context.Context, namespace string, opts resource.ListOptions) (*MetaList, error) {
func (c *DummyClient) List(ctx context.Context, namespace string, opts resource.ListOptions) (*DummyList, error) {
return c.client.List(ctx, namespace, opts)
}
func (c *MetaClient) ListAll(ctx context.Context, namespace string, opts resource.ListOptions) (*MetaList, error) {
func (c *DummyClient) ListAll(ctx context.Context, namespace string, opts resource.ListOptions) (*DummyList, error) {
resp, err := c.client.List(ctx, namespace, resource.ListOptions{
ResourceVersion: opts.ResourceVersion,
Limit: opts.Limit,
@@ -61,25 +61,25 @@ func (c *MetaClient) ListAll(ctx context.Context, namespace string, opts resourc
return resp, nil
}
func (c *MetaClient) Create(ctx context.Context, obj *Meta, opts resource.CreateOptions) (*Meta, error) {
func (c *DummyClient) Create(ctx context.Context, obj *Dummy, opts resource.CreateOptions) (*Dummy, error) {
// Make sure apiVersion and kind are set
obj.APIVersion = GroupVersion.Identifier()
obj.Kind = MetaKind().Kind()
obj.Kind = DummyKind().Kind()
return c.client.Create(ctx, obj, opts)
}
func (c *MetaClient) Update(ctx context.Context, obj *Meta, opts resource.UpdateOptions) (*Meta, error) {
func (c *DummyClient) Update(ctx context.Context, obj *Dummy, opts resource.UpdateOptions) (*Dummy, error) {
return c.client.Update(ctx, obj, opts)
}
func (c *MetaClient) Patch(ctx context.Context, identifier resource.Identifier, req resource.PatchRequest, opts resource.PatchOptions) (*Meta, error) {
func (c *DummyClient) Patch(ctx context.Context, identifier resource.Identifier, req resource.PatchRequest, opts resource.PatchOptions) (*Dummy, error) {
return c.client.Patch(ctx, identifier, req, opts)
}
func (c *MetaClient) UpdateStatus(ctx context.Context, identifier resource.Identifier, newStatus MetaStatus, opts resource.UpdateOptions) (*Meta, error) {
return c.client.Update(ctx, &Meta{
func (c *DummyClient) UpdateStatus(ctx context.Context, identifier resource.Identifier, newStatus DummyStatus, opts resource.UpdateOptions) (*Dummy, error) {
return c.client.Update(ctx, &Dummy{
TypeMeta: metav1.TypeMeta{
Kind: MetaKind().Kind(),
Kind: DummyKind().Kind(),
APIVersion: GroupVersion.Identifier(),
},
ObjectMeta: metav1.ObjectMeta{
@@ -94,6 +94,6 @@ func (c *MetaClient) UpdateStatus(ctx context.Context, identifier resource.Ident
})
}
func (c *MetaClient) Delete(ctx context.Context, identifier resource.Identifier, opts resource.DeleteOptions) error {
func (c *DummyClient) Delete(ctx context.Context, identifier resource.Identifier, opts resource.DeleteOptions) error {
return c.client.Delete(ctx, identifier, opts)
}

View File

@@ -11,18 +11,18 @@ import (
"github.com/grafana/grafana-app-sdk/resource"
)
// MetaJSONCodec is an implementation of resource.Codec for kubernetes JSON encoding
type MetaJSONCodec struct{}
// DummyJSONCodec is an implementation of resource.Codec for kubernetes JSON encoding
type DummyJSONCodec struct{}
// Read reads JSON-encoded bytes from `reader` and unmarshals them into `into`
func (*MetaJSONCodec) Read(reader io.Reader, into resource.Object) error {
func (*DummyJSONCodec) Read(reader io.Reader, into resource.Object) error {
return json.NewDecoder(reader).Decode(into)
}
// Write writes JSON-encoded bytes into `writer` marshaled from `from`
func (*MetaJSONCodec) Write(writer io.Writer, from resource.Object) error {
func (*DummyJSONCodec) Write(writer io.Writer, from resource.Object) error {
return json.NewEncoder(writer).Encode(from)
}
// Interface compliance checks
var _ resource.Codec = &MetaJSONCodec{}
var _ resource.Codec = &DummyJSONCodec{}

View File

@@ -9,7 +9,7 @@ import (
// metadata contains embedded CommonMetadata and can be extended with custom string fields
// TODO: use CommonMetadata instead of redefining here; currently needs to be defined here
// without external reference as using the CommonMetadata reference breaks thema codegen.
type MetaMetadata struct {
type DummyMetadata struct {
UpdateTimestamp time.Time `json:"updateTimestamp"`
CreatedBy string `json:"createdBy"`
Uid string `json:"uid"`
@@ -22,9 +22,9 @@ type MetaMetadata struct {
Labels map[string]string `json:"labels"`
}
// NewMetaMetadata creates a new MetaMetadata object.
func NewMetaMetadata() *MetaMetadata {
return &MetaMetadata{
// NewDummyMetadata creates a new DummyMetadata object.
func NewDummyMetadata() *DummyMetadata {
return &DummyMetadata{
Finalizers: []string{},
Labels: map[string]string{},
}

View File

@@ -15,29 +15,22 @@ import (
)
// +k8s:openapi-gen=true
type Meta struct {
type Dummy struct {
metav1.TypeMeta `json:",inline" yaml:",inline"`
metav1.ObjectMeta `json:"metadata" yaml:"metadata"`
// Spec is the spec of the Meta
Spec MetaSpec `json:"spec" yaml:"spec"`
// Spec is the spec of the Dummy
Spec DummySpec `json:"spec" yaml:"spec"`
Status MetaStatus `json:"status" yaml:"status"`
Status DummyStatus `json:"status" yaml:"status"`
}
func NewMeta() *Meta {
return &Meta{
Spec: *NewMetaSpec(),
Status: *NewMetaStatus(),
}
}
func (o *Meta) GetSpec() any {
func (o *Dummy) GetSpec() any {
return o.Spec
}
func (o *Meta) SetSpec(spec any) error {
cast, ok := spec.(MetaSpec)
func (o *Dummy) SetSpec(spec any) error {
cast, ok := spec.(DummySpec)
if !ok {
return fmt.Errorf("cannot set spec type %#v, not of type Spec", spec)
}
@@ -45,13 +38,13 @@ func (o *Meta) SetSpec(spec any) error {
return nil
}
func (o *Meta) GetSubresources() map[string]any {
func (o *Dummy) GetSubresources() map[string]any {
return map[string]any{
"status": o.Status,
}
}
func (o *Meta) GetSubresource(name string) (any, bool) {
func (o *Dummy) GetSubresource(name string) (any, bool) {
switch name {
case "status":
return o.Status, true
@@ -60,12 +53,12 @@ func (o *Meta) GetSubresource(name string) (any, bool) {
}
}
func (o *Meta) SetSubresource(name string, value any) error {
func (o *Dummy) SetSubresource(name string, value any) error {
switch name {
case "status":
cast, ok := value.(MetaStatus)
cast, ok := value.(DummyStatus)
if !ok {
return fmt.Errorf("cannot set status type %#v, not of type MetaStatus", value)
return fmt.Errorf("cannot set status type %#v, not of type DummyStatus", value)
}
o.Status = cast
return nil
@@ -74,7 +67,7 @@ func (o *Meta) SetSubresource(name string, value any) error {
}
}
func (o *Meta) GetStaticMetadata() resource.StaticMetadata {
func (o *Dummy) GetStaticMetadata() resource.StaticMetadata {
gvk := o.GroupVersionKind()
return resource.StaticMetadata{
Name: o.ObjectMeta.Name,
@@ -85,7 +78,7 @@ func (o *Meta) GetStaticMetadata() resource.StaticMetadata {
}
}
func (o *Meta) SetStaticMetadata(metadata resource.StaticMetadata) {
func (o *Dummy) SetStaticMetadata(metadata resource.StaticMetadata) {
o.Name = metadata.Name
o.Namespace = metadata.Namespace
o.SetGroupVersionKind(schema.GroupVersionKind{
@@ -95,7 +88,7 @@ func (o *Meta) SetStaticMetadata(metadata resource.StaticMetadata) {
})
}
func (o *Meta) GetCommonMetadata() resource.CommonMetadata {
func (o *Dummy) GetCommonMetadata() resource.CommonMetadata {
dt := o.DeletionTimestamp
var deletionTimestamp *time.Time
if dt != nil {
@@ -127,7 +120,7 @@ func (o *Meta) GetCommonMetadata() resource.CommonMetadata {
}
}
func (o *Meta) SetCommonMetadata(metadata resource.CommonMetadata) {
func (o *Dummy) SetCommonMetadata(metadata resource.CommonMetadata) {
o.UID = types.UID(metadata.UID)
o.ResourceVersion = metadata.ResourceVersion
o.Generation = metadata.Generation
@@ -172,7 +165,7 @@ func (o *Meta) SetCommonMetadata(metadata resource.CommonMetadata) {
}
}
func (o *Meta) GetCreatedBy() string {
func (o *Dummy) GetCreatedBy() string {
if o.ObjectMeta.Annotations == nil {
o.ObjectMeta.Annotations = make(map[string]string)
}
@@ -180,7 +173,7 @@ func (o *Meta) GetCreatedBy() string {
return o.ObjectMeta.Annotations["grafana.com/createdBy"]
}
func (o *Meta) SetCreatedBy(createdBy string) {
func (o *Dummy) SetCreatedBy(createdBy string) {
if o.ObjectMeta.Annotations == nil {
o.ObjectMeta.Annotations = make(map[string]string)
}
@@ -188,7 +181,7 @@ func (o *Meta) SetCreatedBy(createdBy string) {
o.ObjectMeta.Annotations["grafana.com/createdBy"] = createdBy
}
func (o *Meta) GetUpdateTimestamp() time.Time {
func (o *Dummy) GetUpdateTimestamp() time.Time {
if o.ObjectMeta.Annotations == nil {
o.ObjectMeta.Annotations = make(map[string]string)
}
@@ -197,7 +190,7 @@ func (o *Meta) GetUpdateTimestamp() time.Time {
return parsed
}
func (o *Meta) SetUpdateTimestamp(updateTimestamp time.Time) {
func (o *Dummy) SetUpdateTimestamp(updateTimestamp time.Time) {
if o.ObjectMeta.Annotations == nil {
o.ObjectMeta.Annotations = make(map[string]string)
}
@@ -205,7 +198,7 @@ func (o *Meta) SetUpdateTimestamp(updateTimestamp time.Time) {
o.ObjectMeta.Annotations["grafana.com/updateTimestamp"] = updateTimestamp.Format(time.RFC3339)
}
func (o *Meta) GetUpdatedBy() string {
func (o *Dummy) GetUpdatedBy() string {
if o.ObjectMeta.Annotations == nil {
o.ObjectMeta.Annotations = make(map[string]string)
}
@@ -213,7 +206,7 @@ func (o *Meta) GetUpdatedBy() string {
return o.ObjectMeta.Annotations["grafana.com/updatedBy"]
}
func (o *Meta) SetUpdatedBy(updatedBy string) {
func (o *Dummy) SetUpdatedBy(updatedBy string) {
if o.ObjectMeta.Annotations == nil {
o.ObjectMeta.Annotations = make(map[string]string)
}
@@ -221,21 +214,21 @@ func (o *Meta) SetUpdatedBy(updatedBy string) {
o.ObjectMeta.Annotations["grafana.com/updatedBy"] = updatedBy
}
func (o *Meta) Copy() resource.Object {
func (o *Dummy) Copy() resource.Object {
return resource.CopyObject(o)
}
func (o *Meta) DeepCopyObject() runtime.Object {
func (o *Dummy) DeepCopyObject() runtime.Object {
return o.Copy()
}
func (o *Meta) DeepCopy() *Meta {
cpy := &Meta{}
func (o *Dummy) DeepCopy() *Dummy {
cpy := &Dummy{}
o.DeepCopyInto(cpy)
return cpy
}
func (o *Meta) DeepCopyInto(dst *Meta) {
func (o *Dummy) DeepCopyInto(dst *Dummy) {
dst.TypeMeta.APIVersion = o.TypeMeta.APIVersion
dst.TypeMeta.Kind = o.TypeMeta.Kind
o.ObjectMeta.DeepCopyInto(&dst.ObjectMeta)
@@ -244,34 +237,34 @@ func (o *Meta) DeepCopyInto(dst *Meta) {
}
// Interface compliance compile-time check
var _ resource.Object = &Meta{}
var _ resource.Object = &Dummy{}
// +k8s:openapi-gen=true
type MetaList struct {
type DummyList struct {
metav1.TypeMeta `json:",inline" yaml:",inline"`
metav1.ListMeta `json:"metadata" yaml:"metadata"`
Items []Meta `json:"items" yaml:"items"`
Items []Dummy `json:"items" yaml:"items"`
}
func (o *MetaList) DeepCopyObject() runtime.Object {
func (o *DummyList) DeepCopyObject() runtime.Object {
return o.Copy()
}
func (o *MetaList) Copy() resource.ListObject {
cpy := &MetaList{
func (o *DummyList) Copy() resource.ListObject {
cpy := &DummyList{
TypeMeta: o.TypeMeta,
Items: make([]Meta, len(o.Items)),
Items: make([]Dummy, len(o.Items)),
}
o.ListMeta.DeepCopyInto(&cpy.ListMeta)
for i := 0; i < len(o.Items); i++ {
if item, ok := o.Items[i].Copy().(*Meta); ok {
if item, ok := o.Items[i].Copy().(*Dummy); ok {
cpy.Items[i] = *item
}
}
return cpy
}
func (o *MetaList) GetItems() []resource.Object {
func (o *DummyList) GetItems() []resource.Object {
items := make([]resource.Object, len(o.Items))
for i := 0; i < len(o.Items); i++ {
items[i] = &o.Items[i]
@@ -279,48 +272,48 @@ func (o *MetaList) GetItems() []resource.Object {
return items
}
func (o *MetaList) SetItems(items []resource.Object) {
o.Items = make([]Meta, len(items))
func (o *DummyList) SetItems(items []resource.Object) {
o.Items = make([]Dummy, len(items))
for i := 0; i < len(items); i++ {
o.Items[i] = *items[i].(*Meta)
o.Items[i] = *items[i].(*Dummy)
}
}
func (o *MetaList) DeepCopy() *MetaList {
cpy := &MetaList{}
func (o *DummyList) DeepCopy() *DummyList {
cpy := &DummyList{}
o.DeepCopyInto(cpy)
return cpy
}
func (o *MetaList) DeepCopyInto(dst *MetaList) {
func (o *DummyList) DeepCopyInto(dst *DummyList) {
resource.CopyObjectInto(dst, o)
}
// Interface compliance compile-time check
var _ resource.ListObject = &MetaList{}
var _ resource.ListObject = &DummyList{}
// Copy methods for all subresource types
// DeepCopy creates a full deep copy of Spec
func (s *MetaSpec) DeepCopy() *MetaSpec {
cpy := &MetaSpec{}
func (s *DummySpec) DeepCopy() *DummySpec {
cpy := &DummySpec{}
s.DeepCopyInto(cpy)
return cpy
}
// DeepCopyInto deep copies Spec into another Spec object
func (s *MetaSpec) DeepCopyInto(dst *MetaSpec) {
func (s *DummySpec) DeepCopyInto(dst *DummySpec) {
resource.CopyObjectInto(dst, s)
}
// DeepCopy creates a full deep copy of MetaStatus
func (s *MetaStatus) DeepCopy() *MetaStatus {
cpy := &MetaStatus{}
// DeepCopy creates a full deep copy of DummyStatus
func (s *DummyStatus) DeepCopy() *DummyStatus {
cpy := &DummyStatus{}
s.DeepCopyInto(cpy)
return cpy
}
// DeepCopyInto deep copies MetaStatus into another MetaStatus object
func (s *MetaStatus) DeepCopyInto(dst *MetaStatus) {
// DeepCopyInto deep copies DummyStatus into another DummyStatus object
func (s *DummyStatus) DeepCopyInto(dst *DummyStatus) {
resource.CopyObjectInto(dst, s)
}

View File

@@ -0,0 +1,34 @@
//
// Code generated by grafana-app-sdk. DO NOT EDIT.
//
package v0alpha1
import (
"github.com/grafana/grafana-app-sdk/resource"
)
// schema is unexported to prevent accidental overwrites
var (
schemaDummy = resource.NewSimpleSchema("historian.alerting.grafana.app", "v0alpha1", &Dummy{}, &DummyList{}, resource.WithKind("Dummy"),
resource.WithPlural("dummys"), resource.WithScope(resource.NamespacedScope))
kindDummy = resource.Kind{
Schema: schemaDummy,
Codecs: map[resource.KindEncoding]resource.Codec{
resource.KindEncodingJSON: &DummyJSONCodec{},
},
}
)
// Kind returns a resource.Kind for this Schema with a JSON codec
func DummyKind() resource.Kind {
return kindDummy
}
// Schema returns a resource.SimpleSchema representation of Dummy
func DummySchema() *resource.SimpleSchema {
return schemaDummy
}
// Interface compliance checks
var _ resource.Schema = kindDummy

View File

@@ -0,0 +1,14 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v0alpha1
// Spec is the schema of our resource. The spec should include all the user-editable information for the kind.
// +k8s:openapi-gen=true
type DummySpec struct {
DummyField int64 `json:"dummyField"`
}
// NewDummySpec creates a new DummySpec object.
func NewDummySpec() *DummySpec {
return &DummySpec{}
}

View File

@@ -3,42 +3,42 @@
package v0alpha1
// +k8s:openapi-gen=true
type MetastatusOperatorState struct {
type DummystatusOperatorState struct {
// lastEvaluation is the ResourceVersion last evaluated
LastEvaluation string `json:"lastEvaluation"`
// state describes the state of the lastEvaluation.
// It is limited to three possible states for machine evaluation.
State MetaStatusOperatorStateState `json:"state"`
State DummyStatusOperatorStateState `json:"state"`
// descriptiveState is an optional more descriptive state field which has no requirements on format
DescriptiveState *string `json:"descriptiveState,omitempty"`
// details contains any extra information that is operator-specific
Details map[string]interface{} `json:"details,omitempty"`
}
// NewMetastatusOperatorState creates a new MetastatusOperatorState object.
func NewMetastatusOperatorState() *MetastatusOperatorState {
return &MetastatusOperatorState{}
// NewDummystatusOperatorState creates a new DummystatusOperatorState object.
func NewDummystatusOperatorState() *DummystatusOperatorState {
return &DummystatusOperatorState{}
}
// +k8s:openapi-gen=true
type MetaStatus struct {
type DummyStatus struct {
// operatorStates is a map of operator ID to operator state evaluations.
// Any operator which consumes this kind SHOULD add its state evaluation information to this field.
OperatorStates map[string]MetastatusOperatorState `json:"operatorStates,omitempty"`
OperatorStates map[string]DummystatusOperatorState `json:"operatorStates,omitempty"`
// additionalFields is reserved for future use
AdditionalFields map[string]interface{} `json:"additionalFields,omitempty"`
}
// NewMetaStatus creates a new MetaStatus object.
func NewMetaStatus() *MetaStatus {
return &MetaStatus{}
// NewDummyStatus creates a new DummyStatus object.
func NewDummyStatus() *DummyStatus {
return &DummyStatus{}
}
// +k8s:openapi-gen=true
type MetaStatusOperatorStateState string
type DummyStatusOperatorStateState string
const (
MetaStatusOperatorStateStateSuccess MetaStatusOperatorStateState = "success"
MetaStatusOperatorStateStateInProgress MetaStatusOperatorStateState = "in_progress"
MetaStatusOperatorStateStateFailed MetaStatusOperatorStateState = "failed"
DummyStatusOperatorStateStateSuccess DummyStatusOperatorStateState = "success"
DummyStatusOperatorStateStateInProgress DummyStatusOperatorStateState = "in_progress"
DummyStatusOperatorStateStateFailed DummyStatusOperatorStateState = "failed"
)

View File

@@ -6,6 +6,7 @@
package apis
import (
"encoding/json"
"fmt"
"strings"
@@ -18,6 +19,12 @@ import (
v0alpha1 "github.com/grafana/grafana/apps/alerting/historian/pkg/apis/alertinghistorian/v0alpha1"
)
var (
rawSchemaDummyv0alpha1 = []byte(`{"Dummy":{"properties":{"spec":{"$ref":"#/components/schemas/spec"},"status":{"$ref":"#/components/schemas/status"}},"required":["spec"]},"OperatorState":{"additionalProperties":false,"properties":{"descriptiveState":{"description":"descriptiveState is an optional more descriptive state field which has no requirements on format","type":"string"},"details":{"additionalProperties":{"additionalProperties":{},"type":"object"},"description":"details contains any extra information that is operator-specific","type":"object"},"lastEvaluation":{"description":"lastEvaluation is the ResourceVersion last evaluated","type":"string"},"state":{"description":"state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.","enum":["success","in_progress","failed"],"type":"string"}},"required":["lastEvaluation","state"],"type":"object"},"spec":{"additionalProperties":false,"description":"Spec is the schema of our resource. The spec should include all the user-editable information for the kind.","properties":{"dummyField":{"type":"integer"}},"required":["dummyField"],"type":"object"},"status":{"additionalProperties":false,"properties":{"additionalFields":{"additionalProperties":{"additionalProperties":{},"type":"object"},"description":"additionalFields is reserved for future use","type":"object"},"operatorStates":{"additionalProperties":{"$ref":"#/components/schemas/OperatorState"},"description":"operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this kind SHOULD add its state evaluation information to this field.","type":"object"}},"type":"object"}}`)
versionSchemaDummyv0alpha1 app.VersionSchema
_ = json.Unmarshal(rawSchemaDummyv0alpha1, &versionSchemaDummyv0alpha1)
)
var appManifestData = app.ManifestData{
AppName: "alerting-historian",
Group: "historian.alerting.grafana.app",
@@ -26,7 +33,15 @@ var appManifestData = app.ManifestData{
{
Name: "v0alpha1",
Served: true,
Kinds: []app.ManifestVersionKind{},
Kinds: []app.ManifestVersionKind{
{
Kind: "Dummy",
Plural: "Dummys",
Scope: "Namespaced",
Conversion: false,
Schema: &versionSchemaDummyv0alpha1,
},
},
Routes: app.ManifestVersionRoutes{
Namespaced: map[string]spec3.PathProps{
"/alertstate/history": {
@@ -166,13 +181,6 @@ var appManifestData = app.ManifestData{
"entries": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: spec.MustCreateRef("#/components/schemas/createNotificationqueryNotificationEntry"),
}},
},
},
},
},
@@ -227,13 +235,6 @@ var appManifestData = app.ManifestData{
"createNotificationqueryMatchers": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: spec.MustCreateRef("#/components/schemas/createNotificationqueryMatcher"),
}},
},
},
},
"createNotificationqueryNotificationEntry": {
@@ -244,13 +245,6 @@ var appManifestData = app.ManifestData{
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Description: "Alerts are the alerts grouped into the notification.",
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: spec.MustCreateRef("#/components/schemas/createNotificationqueryNotificationEntryAlert"),
}},
},
},
},
"duration": {
@@ -426,7 +420,9 @@ func RemoteManifest() app.Manifest {
return app.NewAPIServerManifest("alerting-historian")
}
var kindVersionToGoType = map[string]resource.Kind{}
var kindVersionToGoType = map[string]resource.Kind{
"Dummy/v0alpha1": v0alpha1.DummyKind(),
}
// ManifestGoTypeAssociator returns the associated resource.Kind instance for a given Kind and Version, if one exists.
// If there is no association for the provided Kind and Version, exists will return false.

View File

@@ -12,6 +12,7 @@ import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/grafana/grafana/apps/alerting/historian/pkg/apis/alertinghistorian/v0alpha1"
"github.com/grafana/grafana/apps/alerting/historian/pkg/app/config"
"github.com/grafana/grafana/apps/alerting/historian/pkg/app/notification"
)
@@ -46,6 +47,12 @@ func New(cfg app.Config) (app.App, error) {
}: notificationHandler.QueryHandler,
},
},
// TODO: Remove when SDK is fixed.
ManagedKinds: []simple.AppManagedKind{
{
Kind: v0alpha1.DummyKind(),
},
},
}
a, err := simple.NewApp(simpleConfig)

View File

@@ -57,7 +57,6 @@ require (
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-plugin v1.7.0 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/hashicorp/yamux v0.1.2 // indirect
github.com/jaegertracing/jaeger-idl v0.5.0 // indirect
github.com/josharian/intern v1.0.0 // indirect

View File

@@ -112,8 +112,6 @@ github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+l
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-plugin v1.7.0 h1:YghfQH/0QmPNc/AZMTFE3ac8fipZyZECHdDPshfk+mA=
github.com/hashicorp/go-plugin v1.7.0/go.mod h1:BExt6KEaIYx804z8k4gRzRLEvxKVb+kn0NMcihqOqb8=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hashicorp/yamux v0.1.2 h1:XtB8kyFOyHXYVFnwT5C3+Bdo8gArse7j2AQ0DA0Uey8=
github.com/hashicorp/yamux v0.1.2/go.mod h1:C+zze2n6e/7wshOZep2A70/aQU6QBRWJO/G6FT1wIns=
github.com/jaegertracing/jaeger-idl v0.5.0 h1:zFXR5NL3Utu7MhPg8ZorxtCBjHrL3ReM1VoB65FOFGE=

View File

@@ -768,10 +768,6 @@ VariableRefresh: *"never" | "onDashboardLoad" | "onTimeRangeChanged"
// Accepted values are `dontHide` (show label and value), `hideLabel` (show value only), `hideVariable` (show nothing).
VariableHide: *"dontHide" | "hideLabel" | "hideVariable"
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
VariableRegexApplyTo: *"value" | "text"
// Determine the origin of the adhoc variable filter
FilterOrigin: "dashboard"
@@ -807,7 +803,6 @@ QueryVariableSpec: {
datasource?: DataSourceRef
query: DataQueryKind
regex: string | *""
regexApplyTo?: VariableRegexApplyTo
sort: VariableSort
definition?: string
options: [...VariableOption] | *[]

View File

@@ -772,10 +772,6 @@ VariableRefresh: *"never" | "onDashboardLoad" | "onTimeRangeChanged"
// Accepted values are `dontHide` (show label and value), `hideLabel` (show value only), `hideVariable` (show nothing), `inControlsMenu` (show in a drop-down menu).
VariableHide: *"dontHide" | "hideLabel" | "hideVariable" | "inControlsMenu"
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
VariableRegexApplyTo: *"value" | "text"
// Determine the origin of the adhoc variable filter
FilterOrigin: "dashboard"
@@ -810,7 +806,6 @@ QueryVariableSpec: {
description?: string
query: DataQueryKind
regex: string | *""
regexApplyTo?: VariableRegexApplyTo
sort: VariableSort
definition?: string
options: [...VariableOption] | *[]

View File

@@ -222,8 +222,6 @@ lineage: schemas: [{
// Optional field, if you want to extract part of a series name or metric node segment.
// Named capture groups can be used to separate the display text and value.
regex?: string
// Determine whether regex applies to variable value or display text
regexApplyTo?: #VariableRegexApplyTo
// Additional static options for query variable
staticOptions?: [...#VariableOption]
// Ordering of static options in relation to options returned from data source for query variable
@@ -251,10 +249,6 @@ lineage: schemas: [{
// Accepted values are 0 (show label and value), 1 (show value only), 2 (show nothing), 3 (show under the controls dropdown menu).
#VariableHide: 0 | 1 | 2 | 3 @cuetsy(kind="enum",memberNames="dontHide|hideLabel|hideVariable|inControlsMenu") @grafana(TSVeneer="type")
// Determine whether regex applies to variable value or display text
// Accepted values are "value" (apply to value used in queries) or "text" (apply to display text shown to users)
#VariableRegexApplyTo: "value" | "text" @cuetsy(kind="type")
// Sort variable options
// Accepted values are:
// `0`: No sorting

View File

@@ -222,8 +222,6 @@ lineage: schemas: [{
// Optional field, if you want to extract part of a series name or metric node segment.
// Named capture groups can be used to separate the display text and value.
regex?: string
// Determine whether regex applies to variable value or display text
regexApplyTo?: #VariableRegexApplyTo
// Additional static options for query variable
staticOptions?: [...#VariableOption]
// Ordering of static options in relation to options returned from data source for query variable
@@ -251,10 +249,6 @@ lineage: schemas: [{
// Accepted values are 0 (show label and value), 1 (show value only), 2 (show nothing), 3 (show under the controls dropdown menu).
#VariableHide: 0 | 1 | 2 | 3 @cuetsy(kind="enum",memberNames="dontHide|hideLabel|hideVariable|inControlsMenu") @grafana(TSVeneer="type")
// Determine whether regex applies to variable value or display text
// Accepted values are "value" (apply to value used in queries) or "text" (apply to display text shown to users)
#VariableRegexApplyTo: "value" | "text" @cuetsy(kind="type")
// Sort variable options
// Accepted values are:
// `0`: No sorting

View File

@@ -772,10 +772,6 @@ VariableRefresh: *"never" | "onDashboardLoad" | "onTimeRangeChanged"
// Accepted values are `dontHide` (show label and value), `hideLabel` (show value only), `hideVariable` (show nothing).
VariableHide: *"dontHide" | "hideLabel" | "hideVariable"
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
VariableRegexApplyTo: *"value" | "text"
// Determine the origin of the adhoc variable filter
FilterOrigin: "dashboard"
@@ -811,7 +807,6 @@ QueryVariableSpec: {
datasource?: DataSourceRef
query: DataQueryKind
regex: string | *""
regexApplyTo?: VariableRegexApplyTo
sort: VariableSort
definition?: string
options: [...VariableOption] | *[]

View File

@@ -1364,7 +1364,6 @@ type DashboardQueryVariableSpec struct {
Datasource *DashboardDataSourceRef `json:"datasource,omitempty"`
Query DashboardDataQueryKind `json:"query"`
Regex string `json:"regex"`
RegexApplyTo *DashboardVariableRegexApplyTo `json:"regexApplyTo,omitempty"`
Sort DashboardVariableSort `json:"sort"`
Definition *string `json:"definition,omitempty"`
Options []DashboardVariableOption `json:"options"`
@@ -1394,7 +1393,6 @@ func NewDashboardQueryVariableSpec() *DashboardQueryVariableSpec {
SkipUrlSync: false,
Query: *NewDashboardDataQueryKind(),
Regex: "",
RegexApplyTo: (func(input DashboardVariableRegexApplyTo) *DashboardVariableRegexApplyTo { return &input })(DashboardVariableRegexApplyToValue),
Options: []DashboardVariableOption{},
Multi: false,
IncludeAll: false,
@@ -1445,16 +1443,6 @@ const (
DashboardVariableRefreshOnTimeRangeChanged DashboardVariableRefresh = "onTimeRangeChanged"
)
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
// +k8s:openapi-gen=true
type DashboardVariableRegexApplyTo string
const (
DashboardVariableRegexApplyToValue DashboardVariableRegexApplyTo = "value"
DashboardVariableRegexApplyToText DashboardVariableRegexApplyTo = "text"
)
// Sort variable options
// Accepted values are:
// `disabled`: No sorting

View File

@@ -3646,12 +3646,6 @@ func schema_pkg_apis_dashboard_v2alpha1_DashboardQueryVariableSpec(ref common.Re
Format: "",
},
},
"regexApplyTo": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"sort": {
SchemaProps: spec.SchemaProps{
Default: "",

View File

@@ -776,10 +776,6 @@ VariableRefresh: *"never" | "onDashboardLoad" | "onTimeRangeChanged"
// Accepted values are `dontHide` (show label and value), `hideLabel` (show value only), `hideVariable` (show nothing), `inControlsMenu` (show in a drop-down menu).
VariableHide: *"dontHide" | "hideLabel" | "hideVariable" | "inControlsMenu"
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
VariableRegexApplyTo: *"value" | "text"
// Determine the origin of the adhoc variable filter
FilterOrigin: "dashboard"
@@ -814,7 +810,6 @@ QueryVariableSpec: {
description?: string
query: DataQueryKind
regex: string | *""
regexApplyTo?: VariableRegexApplyTo
sort: VariableSort
definition?: string
options: [...VariableOption] | *[]

View File

@@ -1367,7 +1367,6 @@ type DashboardQueryVariableSpec struct {
Description *string `json:"description,omitempty"`
Query DashboardDataQueryKind `json:"query"`
Regex string `json:"regex"`
RegexApplyTo *DashboardVariableRegexApplyTo `json:"regexApplyTo,omitempty"`
Sort DashboardVariableSort `json:"sort"`
Definition *string `json:"definition,omitempty"`
Options []DashboardVariableOption `json:"options"`
@@ -1397,7 +1396,6 @@ func NewDashboardQueryVariableSpec() *DashboardQueryVariableSpec {
SkipUrlSync: false,
Query: *NewDashboardDataQueryKind(),
Regex: "",
RegexApplyTo: (func(input DashboardVariableRegexApplyTo) *DashboardVariableRegexApplyTo { return &input })(DashboardVariableRegexApplyToValue),
Options: []DashboardVariableOption{},
Multi: false,
IncludeAll: false,
@@ -1449,16 +1447,6 @@ const (
DashboardVariableRefreshOnTimeRangeChanged DashboardVariableRefresh = "onTimeRangeChanged"
)
// Determine whether regex applies to variable value or display text
// Accepted values are `value` (apply to value used in queries) or `text` (apply to display text shown to users)
// +k8s:openapi-gen=true
type DashboardVariableRegexApplyTo string
const (
DashboardVariableRegexApplyToValue DashboardVariableRegexApplyTo = "value"
DashboardVariableRegexApplyToText DashboardVariableRegexApplyTo = "text"
)
// Sort variable options
// Accepted values are:
// `disabled`: No sorting

View File

@@ -3656,12 +3656,6 @@ func schema_pkg_apis_dashboard_v2beta1_DashboardQueryVariableSpec(ref common.Ref
Format: "",
},
},
"regexApplyTo": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
Format: "",
},
},
"sort": {
SchemaProps: spec.SchemaProps{
Default: "",

File diff suppressed because one or more lines are too long

View File

@@ -12,6 +12,13 @@ import (
)
func RegisterConversions(s *runtime.Scheme, dsIndexProvider schemaversion.DataSourceIndexProvider, leIndexProvider schemaversion.LibraryElementIndexProvider) error {
// Wrap the provider once with 10s caching for all conversions.
// This prevents repeated DB queries across multiple conversion calls while allowing
// the cache to refresh periodically, making it suitable for long-lived singleton usage.
dsIndexProvider = schemaversion.WrapIndexProviderWithCache(dsIndexProvider)
// Wrap library element provider with caching as well
leIndexProvider = schemaversion.WrapLibraryElementProviderWithCache(leIndexProvider)
// v0 conversions
if err := s.AddConversionFunc((*dashv0.Dashboard)(nil), (*dashv1.Dashboard)(nil),
withConversionMetrics(dashv0.APIVERSION, dashv1.APIVERSION, func(a, b interface{}, scope conversion.Scope) error {
@@ -55,13 +62,13 @@ func RegisterConversions(s *runtime.Scheme, dsIndexProvider schemaversion.DataSo
// v2alpha1 conversions
if err := s.AddConversionFunc((*dashv2alpha1.Dashboard)(nil), (*dashv0.Dashboard)(nil),
withConversionMetrics(dashv2alpha1.APIVERSION, dashv0.APIVERSION, func(a, b interface{}, scope conversion.Scope) error {
return Convert_V2alpha1_to_V0(a.(*dashv2alpha1.Dashboard), b.(*dashv0.Dashboard), scope)
return Convert_V2alpha1_to_V0(a.(*dashv2alpha1.Dashboard), b.(*dashv0.Dashboard), scope, dsIndexProvider)
})); err != nil {
return err
}
if err := s.AddConversionFunc((*dashv2alpha1.Dashboard)(nil), (*dashv1.Dashboard)(nil),
withConversionMetrics(dashv2alpha1.APIVERSION, dashv1.APIVERSION, func(a, b interface{}, scope conversion.Scope) error {
return Convert_V2alpha1_to_V1beta1(a.(*dashv2alpha1.Dashboard), b.(*dashv1.Dashboard), scope)
return Convert_V2alpha1_to_V1beta1(a.(*dashv2alpha1.Dashboard), b.(*dashv1.Dashboard), scope, dsIndexProvider)
})); err != nil {
return err
}


@@ -1,454 +0,0 @@
package conversion
import (
"context"
"sync/atomic"
"testing"
"time"
dashv0 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v0alpha1"
dashv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
dashv2alpha1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2alpha1"
dashv2beta1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2beta1"
"github.com/grafana/grafana/apps/dashboard/pkg/migration"
"github.com/grafana/grafana/apps/dashboard/pkg/migration/schemaversion"
common "github.com/grafana/grafana/pkg/apimachinery/apis/common/v0alpha1"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// countingDataSourceProvider tracks how many times Index() is called
type countingDataSourceProvider struct {
datasources []schemaversion.DataSourceInfo
callCount atomic.Int64
}
func newCountingDataSourceProvider(datasources []schemaversion.DataSourceInfo) *countingDataSourceProvider {
return &countingDataSourceProvider{
datasources: datasources,
}
}
func (p *countingDataSourceProvider) Index(_ context.Context) *schemaversion.DatasourceIndex {
p.callCount.Add(1)
return schemaversion.NewDatasourceIndex(p.datasources)
}
func (p *countingDataSourceProvider) getCallCount() int64 {
return p.callCount.Load()
}
// countingLibraryElementProvider tracks how many times GetLibraryElementInfo() is called
type countingLibraryElementProvider struct {
elements []schemaversion.LibraryElementInfo
callCount atomic.Int64
}
func newCountingLibraryElementProvider(elements []schemaversion.LibraryElementInfo) *countingLibraryElementProvider {
return &countingLibraryElementProvider{
elements: elements,
}
}
func (p *countingLibraryElementProvider) GetLibraryElementInfo(_ context.Context) []schemaversion.LibraryElementInfo {
p.callCount.Add(1)
return p.elements
}
func (p *countingLibraryElementProvider) getCallCount() int64 {
return p.callCount.Load()
}
// createTestV0Dashboard creates a minimal v0 dashboard for testing
// The dashboard has a datasource with UID only (no type) to force provider lookup
// and includes library panels to test library element provider caching
func createTestV0Dashboard(namespace, title string) *dashv0.Dashboard {
return &dashv0.Dashboard{
ObjectMeta: metav1.ObjectMeta{
Name: "test-dashboard",
Namespace: namespace,
},
Spec: common.Unstructured{
Object: map[string]interface{}{
"title": title,
"schemaVersion": schemaversion.LATEST_VERSION,
// Variables with datasource reference that requires lookup
"templating": map[string]interface{}{
"list": []interface{}{
map[string]interface{}{
"name": "query_var",
"type": "query",
"query": "label_values(up, job)",
// Datasource with UID only - type needs to be looked up
"datasource": map[string]interface{}{
"uid": "ds1",
// type is intentionally omitted to trigger provider lookup
},
},
},
},
"panels": []interface{}{
map[string]interface{}{
"id": 1,
"title": "Test Panel",
"type": "timeseries",
"targets": []interface{}{
map[string]interface{}{
// Datasource with UID only - type needs to be looked up
"datasource": map[string]interface{}{
"uid": "ds1",
},
},
},
},
// Library panel reference - triggers library element provider lookup
map[string]interface{}{
"id": 2,
"title": "Library Panel with Horizontal Repeat",
"type": "library-panel-ref",
"gridPos": map[string]interface{}{
"h": 8,
"w": 12,
"x": 0,
"y": 8,
},
"libraryPanel": map[string]interface{}{
"uid": "lib-panel-repeat-h",
"name": "Library Panel with Horizontal Repeat",
},
},
// Another library panel reference
map[string]interface{}{
"id": 3,
"title": "Library Panel without Repeat",
"type": "library-panel-ref",
"gridPos": map[string]interface{}{
"h": 3,
"w": 6,
"x": 0,
"y": 16,
},
"libraryPanel": map[string]interface{}{
"uid": "lib-panel-no-repeat",
"name": "Library Panel without Repeat",
},
},
},
},
},
}
}
// createTestV1Dashboard creates a minimal v1beta1 dashboard for testing
// The dashboard has a datasource with UID only (no type) to force provider lookup
// and includes library panels to test library element provider caching
func createTestV1Dashboard(namespace, title string) *dashv1.Dashboard {
return &dashv1.Dashboard{
ObjectMeta: metav1.ObjectMeta{
Name: "test-dashboard",
Namespace: namespace,
},
Spec: common.Unstructured{
Object: map[string]interface{}{
"title": title,
"schemaVersion": schemaversion.LATEST_VERSION,
// Variables with datasource reference that requires lookup
"templating": map[string]interface{}{
"list": []interface{}{
map[string]interface{}{
"name": "query_var",
"type": "query",
"query": "label_values(up, job)",
// Datasource with UID only - type needs to be looked up
"datasource": map[string]interface{}{
"uid": "ds1",
// type is intentionally omitted to trigger provider lookup
},
},
},
},
"panels": []interface{}{
map[string]interface{}{
"id": 1,
"title": "Test Panel",
"type": "timeseries",
"targets": []interface{}{
map[string]interface{}{
// Datasource with UID only - type needs to be looked up
"datasource": map[string]interface{}{
"uid": "ds1",
},
},
},
},
// Library panel reference - triggers library element provider lookup
map[string]interface{}{
"id": 2,
"title": "Library Panel with Vertical Repeat",
"type": "library-panel-ref",
"gridPos": map[string]interface{}{
"h": 4,
"w": 6,
"x": 0,
"y": 8,
},
"libraryPanel": map[string]interface{}{
"uid": "lib-panel-repeat-v",
"name": "Library Panel with Vertical Repeat",
},
},
// Another library panel reference
map[string]interface{}{
"id": 3,
"title": "Library Panel without Repeat",
"type": "library-panel-ref",
"gridPos": map[string]interface{}{
"h": 3,
"w": 6,
"x": 6,
"y": 8,
},
"libraryPanel": map[string]interface{}{
"uid": "lib-panel-no-repeat",
"name": "Library Panel without Repeat",
},
},
},
},
},
}
}
// TestConversionCaching_V0_to_V2alpha1 verifies caching works when converting V0 to V2alpha1
func TestConversionCaching_V0_to_V2alpha1(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-h", Name: "Library Panel with Horizontal Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
// Convert multiple dashboards in the same namespace
numDashboards := 5
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV0Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2alpha1.Dashboard{}
err := Convert_V0_to_V2alpha1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
require.NotNil(t, target.Spec)
}
// With caching, the underlying datasource provider should only be called once per namespace
// The test dashboard has datasources without type that require lookup
assert.Equal(t, int64(1), underlyingDS.getCallCount(),
"datasource provider should be called only once for %d conversions in same namespace", numDashboards)
// Library element provider should also be called only once per namespace due to caching
assert.Equal(t, int64(1), underlyingLE.getCallCount(),
"library element provider should be called only once for %d conversions in same namespace", numDashboards)
}
// TestConversionCaching_V0_to_V2beta1 verifies caching works when converting V0 to V2beta1
func TestConversionCaching_V0_to_V2beta1(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-h", Name: "Library Panel with Horizontal Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
numDashboards := 5
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV0Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2beta1.Dashboard{}
err := Convert_V0_to_V2beta1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
require.NotNil(t, target.Spec)
}
assert.Equal(t, int64(1), underlyingDS.getCallCount(),
"datasource provider should be called only once for %d conversions in same namespace", numDashboards)
assert.Equal(t, int64(1), underlyingLE.getCallCount(),
"library element provider should be called only once for %d conversions in same namespace", numDashboards)
}
// TestConversionCaching_V1beta1_to_V2alpha1 verifies caching works when converting V1beta1 to V2alpha1
func TestConversionCaching_V1beta1_to_V2alpha1(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-v", Name: "Library Panel with Vertical Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
numDashboards := 5
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV1Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2alpha1.Dashboard{}
err := Convert_V1beta1_to_V2alpha1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
require.NotNil(t, target.Spec)
}
assert.Equal(t, int64(1), underlyingDS.getCallCount(),
"datasource provider should be called only once for %d conversions in same namespace", numDashboards)
assert.Equal(t, int64(1), underlyingLE.getCallCount(),
"library element provider should be called only once for %d conversions in same namespace", numDashboards)
}
// TestConversionCaching_V1beta1_to_V2beta1 verifies caching works when converting V1beta1 to V2beta1
func TestConversionCaching_V1beta1_to_V2beta1(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-v", Name: "Library Panel with Vertical Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
numDashboards := 5
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV1Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2beta1.Dashboard{}
err := Convert_V1beta1_to_V2beta1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
require.NotNil(t, target.Spec)
}
assert.Equal(t, int64(1), underlyingDS.getCallCount(),
"datasource provider should be called only once for %d conversions in same namespace", numDashboards)
assert.Equal(t, int64(1), underlyingLE.getCallCount(),
"library element provider should be called only once for %d conversions in same namespace", numDashboards)
}
// TestConversionCaching_MultipleNamespaces verifies that different namespaces get separate cache entries
func TestConversionCaching_MultipleNamespaces(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-h", Name: "Library Panel with Horizontal Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, time.Minute)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, time.Minute)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
namespaces := []string{"default", "org-2", "org-3"}
numDashboardsPerNs := 3
for _, ns := range namespaces {
for i := 0; i < numDashboardsPerNs; i++ {
source := createTestV0Dashboard(ns, "Dashboard "+string(rune('A'+i)))
target := &dashv2alpha1.Dashboard{}
err := Convert_V0_to_V2alpha1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion for namespace %s should succeed", ns)
}
}
// With caching, each namespace should result in one call to the underlying provider
expectedCalls := int64(len(namespaces))
assert.Equal(t, expectedCalls, underlyingDS.getCallCount(),
"datasource provider should be called once per namespace (%d namespaces)", len(namespaces))
assert.Equal(t, expectedCalls, underlyingLE.getCallCount(),
"library element provider should be called once per namespace (%d namespaces)", len(namespaces))
}
// TestConversionCaching_CacheDisabled verifies that TTL=0 disables caching
func TestConversionCaching_CacheDisabled(t *testing.T) {
datasources := []schemaversion.DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
elements := []schemaversion.LibraryElementInfo{
{UID: "lib-panel-repeat-h", Name: "Library Panel with Horizontal Repeat", Type: "timeseries"},
{UID: "lib-panel-no-repeat", Name: "Library Panel without Repeat", Type: "graph"},
}
underlyingDS := newCountingDataSourceProvider(datasources)
underlyingLE := newCountingLibraryElementProvider(elements)
// TTL of 0 should disable caching - the wrapper returns the underlying provider directly
cachedDS := schemaversion.WrapIndexProviderWithCache(underlyingDS, 0)
cachedLE := schemaversion.WrapLibraryElementProviderWithCache(underlyingLE, 0)
migration.ResetForTesting()
migration.Initialize(cachedDS, cachedLE, migration.DefaultCacheTTL)
numDashboards := 3
namespace := "default"
for i := 0; i < numDashboards; i++ {
source := createTestV0Dashboard(namespace, "Dashboard "+string(rune('A'+i)))
target := &dashv2alpha1.Dashboard{}
err := Convert_V0_to_V2alpha1(source, target, nil, cachedDS, cachedLE)
require.NoError(t, err, "conversion %d should succeed", i)
}
// Without caching, each conversion calls the underlying provider multiple times
// (once for each datasource lookup needed - variables and panels)
// The key check is that the count is GREATER than 1 per conversion (no caching benefit)
assert.Greater(t, underlyingDS.getCallCount(), int64(numDashboards),
"with cache disabled, conversions should call datasource provider multiple times")
// Library element provider is also called for each conversion without caching
assert.GreaterOrEqual(t, underlyingLE.getCallCount(), int64(numDashboards),
"with cache disabled, conversions should call library element provider multiple times")
}
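The per-namespace behavior that TestConversionCaching_MultipleNamespaces exercises — N namespaces cost N underlying lookups regardless of how many dashboards each converts — can be reduced to this sketch (a hypothetical stand-in, not the real cache):

```go
package main

import (
	"fmt"
	"sync"
)

// nsCache keys cached results by namespace, so each namespace pays for
// exactly one simulated backend lookup.
type nsCache struct {
	mu      sync.Mutex
	entries map[string][]string // namespace -> cached datasource UIDs
	lookups int                 // counts calls to the (simulated) backend
}

func (c *nsCache) datasources(ns string) []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.entries == nil {
		c.entries = map[string][]string{}
	}
	if v, ok := c.entries[ns]; ok {
		return v // cache hit: no backend call
	}
	c.lookups++ // cache miss: one backend lookup per namespace
	v := []string{"ds1"}
	c.entries[ns] = v
	return v
}

func main() {
	c := &nsCache{}
	for _, ns := range []string{"default", "org-2", "org-3"} {
		for i := 0; i < 3; i++ {
			c.datasources(ns)
		}
	}
	fmt.Println(c.lookups) // 3: one backend lookup per namespace
}
```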


@@ -829,7 +829,7 @@ func TestDataLossDetectionOnAllInputFiles(t *testing.T) {
// Initialize the migrator with a test data source provider
dsProvider := testutil.NewDataSourceProvider(testutil.StandardTestConfig)
leProvider := testutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()


@@ -35,7 +35,7 @@ func TestConversionMatrixExist(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
versions := []metav1.Object{
&dashv0.Dashboard{Spec: common.Unstructured{Object: map[string]any{"title": "dashboardV0"}}},
@@ -89,7 +89,7 @@ func TestDashboardConversionToAllVersions(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()
@@ -309,7 +309,7 @@ func TestMigratedDashboardsConversion(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()
@@ -428,7 +428,7 @@ func setupTestConversionScheme(t *testing.T) *runtime.Scheme {
t.Helper()
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
scheme := runtime.NewScheme()
err := RegisterConversions(scheme, dsProvider, leProvider)
@@ -527,7 +527,7 @@ func TestConversionMetrics(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -694,7 +694,7 @@ func TestConversionMetricsWrapper(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -864,7 +864,7 @@ func TestSchemaVersionExtraction(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -910,7 +910,7 @@ func TestConversionLogging(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -1003,7 +1003,7 @@ func TestConversionLogLevels(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("log levels and structured fields verification", func(t *testing.T) {
// Create test wrapper to verify logging behavior
@@ -1076,7 +1076,7 @@ func TestConversionLoggingFields(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
// Use TestLibraryElementProvider for tests that need library panel models with repeat options
leProvider := migrationtestutil.NewTestLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("verify all log fields are present", func(t *testing.T) {
// Test that the conversion wrapper includes all expected structured fields


@@ -42,7 +42,7 @@
"regex": "",
"skipUrlSync": false,
"refresh": 1
},
},
{
"name": "query_var",
"type": "query",
@@ -81,7 +81,6 @@
"allValue": ".*",
"multi": true,
"regex": "/.*9090.*/",
"regexApplyTo": "text",
"skipUrlSync": false,
"refresh": 2,
"sort": 1,
@@ -108,7 +107,7 @@
},
{
"selected": false,
"text": "staging",
"text": "staging",
"value": "staging"
},
{
@@ -336,7 +335,6 @@
"allValue": "*",
"multi": true,
"regex": "/host[0-9]+/",
"regexApplyTo": "value",
"skipUrlSync": false,
"refresh": 1,
"sort": 2,
@@ -356,4 +354,4 @@
},
"links": []
}
}
}


@@ -94,7 +94,6 @@
"query": "label_values(up, instance)",
"refresh": 2,
"regex": "/.*9090.*/",
"regexApplyTo": "text",
"skipUrlSync": false,
"sort": 1,
"tagValuesQuery": "",
@@ -363,7 +362,6 @@
},
"refresh": 1,
"regex": "/host[0-9]+/",
"regexApplyTo": "value",
"skipUrlSync": false,
"sort": 2,
"tagValuesQuery": "",


@@ -110,7 +110,6 @@
}
},
"regex": "/.*9090.*/",
"regexApplyTo": "text",
"sort": "alphabeticalAsc",
"definition": "label_values(up, instance)",
"options": [
@@ -402,7 +401,6 @@
}
},
"regex": "/host[0-9]+/",
"regexApplyTo": "value",
"sort": "alphabeticalDesc",
"definition": "terms field:@host size:100",
"options": [],


@@ -111,7 +111,6 @@
}
},
"regex": "/.*9090.*/",
"regexApplyTo": "text",
"sort": "alphabeticalAsc",
"definition": "label_values(up, instance)",
"options": [
@@ -405,7 +404,6 @@
}
},
"regex": "/host[0-9]+/",
"regexApplyTo": "value",
"sort": "alphabeticalDesc",
"definition": "terms field:@host size:100",
"options": [],


@@ -20,7 +20,7 @@ func TestV0ConversionErrorHandling(t *testing.T) {
// Initialize the migrator with a test data source provider
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
tests := []struct {
name string
@@ -132,7 +132,7 @@ func TestV0ConversionErrorPropagation(t *testing.T) {
// Initialize the migrator with a test data source provider
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("ConvertDashboard_V0_to_V1beta1 returns error on migration failure", func(t *testing.T) {
source := &dashv0.Dashboard{
@@ -206,7 +206,7 @@ func TestV0ConversionSuccessPaths(t *testing.T) {
// Initialize the migrator with a test data source provider
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("Convert_V0_to_V1beta1 success path returns nil", func(t *testing.T) {
source := &dashv0.Dashboard{
@@ -275,7 +275,7 @@ func TestV0ConversionSecondStepErrors(t *testing.T) {
// Initialize the migrator with a test data source provider
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("Convert_V0_to_V2alpha1 sets status on first step error", func(t *testing.T) {
// Create a dashboard that will fail v0->v1beta1 conversion


@@ -19,7 +19,7 @@ func TestV1ConversionErrorHandling(t *testing.T) {
// Initialize the migrator with a test data source provider
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("Convert_V1beta1_to_V2alpha1 sets status on conversion error", func(t *testing.T) {
// Create a dashboard that will cause conversion to fail


@@ -229,16 +229,6 @@ func getBoolField(m map[string]interface{}, key string, defaultValue bool) bool
return defaultValue
}
func getUnionField[T ~string](m map[string]interface{}, key string) *T {
if val, ok := m[key]; ok {
if str, ok := val.(string); ok && str != "" {
result := T(str)
return &result
}
}
return nil
}
// Helper function to create int64 pointer
func int64Ptr(i int64) *int64 {
return &i
@@ -1205,7 +1195,6 @@ func buildQueryVariable(ctx context.Context, varMap map[string]interface{}, comm
Refresh: transformVariableRefreshToEnum(varMap["refresh"]),
Sort: transformVariableSortToEnum(varMap["sort"]),
Regex: schemaversion.GetStringValue(varMap, "regex"),
RegexApplyTo: getUnionField[dashv2alpha1.DashboardVariableRegexApplyTo](varMap, "regexApplyTo"),
Query: buildDataQueryKindForVariable(varMap["query"], datasourceType),
AllowCustomValue: getBoolField(varMap, "allowCustomValue", true),
},


@@ -19,7 +19,7 @@ func TestV1beta1ToV2alpha1(t *testing.T) {
// Initialize the migrator with test providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()


@@ -11,10 +11,10 @@ import (
"github.com/grafana/grafana/apps/dashboard/pkg/migration/schemaversion"
)
func Convert_V2alpha1_to_V0(in *dashv2alpha1.Dashboard, out *dashv0.Dashboard, scope conversion.Scope) error {
func Convert_V2alpha1_to_V0(in *dashv2alpha1.Dashboard, out *dashv0.Dashboard, scope conversion.Scope, dsIndexProvider schemaversion.DataSourceIndexProvider) error {
// Convert v2alpha1 → v1beta1 first, then v1beta1 → v0
v1beta1 := &dashv1.Dashboard{}
if err := ConvertDashboard_V2alpha1_to_V1beta1(in, v1beta1, scope); err != nil {
if err := ConvertDashboard_V2alpha1_to_V1beta1(in, v1beta1, scope, dsIndexProvider); err != nil {
out.ObjectMeta = in.ObjectMeta
out.APIVersion = dashv0.APIVERSION
out.Kind = in.Kind
@@ -53,13 +53,13 @@ func Convert_V2alpha1_to_V0(in *dashv2alpha1.Dashboard, out *dashv0.Dashboard, s
return nil
}
func Convert_V2alpha1_to_V1beta1(in *dashv2alpha1.Dashboard, out *dashv1.Dashboard, scope conversion.Scope) error {
func Convert_V2alpha1_to_V1beta1(in *dashv2alpha1.Dashboard, out *dashv1.Dashboard, scope conversion.Scope, dsIndexProvider schemaversion.DataSourceIndexProvider) error {
out.ObjectMeta = in.ObjectMeta
out.APIVersion = dashv1.APIVERSION
out.Kind = in.Kind
// Convert the spec
if err := ConvertDashboard_V2alpha1_to_V1beta1(in, out, scope); err != nil {
if err := ConvertDashboard_V2alpha1_to_V1beta1(in, out, scope, dsIndexProvider); err != nil {
out.Status = dashv1.DashboardStatus{
Conversion: &dashv1.DashboardConversionStatus{
StoredVersion: ptr.To(dashv2alpha1.VERSION),
@@ -179,7 +179,7 @@ func Convert_V2beta1_to_V1beta1(in *dashv2beta1.Dashboard, out *dashv1.Dashboard
// Convert v2alpha1 → v1beta1
// Note: ConvertDashboard_V2alpha1_to_V1beta1 will set out.ObjectMeta from v2alpha1,
// but we've already set it from the original input, so it will be preserved
if err := ConvertDashboard_V2alpha1_to_V1beta1(v2alpha1, out, scope); err != nil {
if err := ConvertDashboard_V2alpha1_to_V1beta1(v2alpha1, out, scope, dsIndexProvider); err != nil {
out.Status = dashv1.DashboardStatus{
Conversion: &dashv1.DashboardConversionStatus{
StoredVersion: ptr.To(dashv2beta1.VERSION),


@@ -18,7 +18,7 @@ func TestV2alpha1ConversionErrorHandling(t *testing.T) {
// Initialize the migrator with test data source and library element providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("Convert_V2alpha1_to_V1beta1 sets status on conversion", func(t *testing.T) {
// Create a dashboard for conversion
@@ -39,7 +39,7 @@ func TestV2alpha1ConversionErrorHandling(t *testing.T) {
}
target := &dashv1.Dashboard{}
err := Convert_V2alpha1_to_V1beta1(source, target, nil)
err := Convert_V2alpha1_to_V1beta1(source, target, nil, dsProvider)
// Convert_V2alpha1_to_V1beta1 doesn't return error, just sets status
require.NoError(t, err, "Convert_V2alpha1_to_V1beta1 doesn't return error")
@@ -90,7 +90,7 @@ func TestV2beta1ConversionErrorHandling(t *testing.T) {
// Initialize the migrator with test data source and library element providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
t.Run("Convert_V2beta1_to_V1beta1 sets status on first step failure", func(t *testing.T) {
// Create a dashboard that might cause conversion to fail on first step (v2beta1 -> v2alpha1)


@@ -1,12 +1,14 @@
package conversion
import (
"context"
"fmt"
"k8s.io/apimachinery/pkg/conversion"
dashv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
dashv2alpha1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v2alpha1"
"github.com/grafana/grafana/apps/dashboard/pkg/migration/schemaversion"
"k8s.io/apimachinery/pkg/conversion"
)
// ConvertDashboard_V2alpha1_to_V1beta1 converts a v2alpha1 dashboard to v1beta1 format.
@@ -14,13 +16,19 @@ import (
// that represents the v1 dashboard JSON format.
// The dsIndexProvider is used to resolve default datasources when queries/variables/annotations
// don't have explicit datasource references.
func ConvertDashboard_V2alpha1_to_V1beta1(in *dashv2alpha1.Dashboard, out *dashv1.Dashboard, scope conversion.Scope) error {
func ConvertDashboard_V2alpha1_to_V1beta1(in *dashv2alpha1.Dashboard, out *dashv1.Dashboard, scope conversion.Scope, dsIndexProvider schemaversion.DataSourceIndexProvider) error {
out.ObjectMeta = in.ObjectMeta
out.APIVersion = dashv1.APIVERSION
out.Kind = in.Kind // Preserve the Kind from input (should be "Dashboard")
// Get datasource index for resolving default datasources
var dsIndex *schemaversion.DatasourceIndex
if dsIndexProvider != nil {
dsIndex = dsIndexProvider.Index(context.Background())
}
// Convert the spec to v1beta1 unstructured format
dashboardJSON, err := convertDashboardSpec_V2alpha1_to_V1beta1(&in.Spec)
dashboardJSON, err := convertDashboardSpec_V2alpha1_to_V1beta1(&in.Spec, dsIndex)
if err != nil {
return fmt.Errorf("failed to convert dashboard spec: %w", err)
}
@@ -31,7 +39,7 @@ func ConvertDashboard_V2alpha1_to_V1beta1(in *dashv2alpha1.Dashboard, out *dashv
return nil
}
func convertDashboardSpec_V2alpha1_to_V1beta1(in *dashv2alpha1.DashboardSpec) (map[string]interface{}, error) {
func convertDashboardSpec_V2alpha1_to_V1beta1(in *dashv2alpha1.DashboardSpec, dsIndex *schemaversion.DatasourceIndex) (map[string]interface{}, error) {
dashboard := make(map[string]interface{})
// Convert basic fields
@@ -67,7 +75,7 @@ func convertDashboardSpec_V2alpha1_to_V1beta1(in *dashv2alpha1.DashboardSpec) (m
}
// Convert panels from elements and layout
panels, err := convertPanelsFromElementsAndLayout(in.Elements, in.Layout)
panels, err := convertPanelsFromElementsAndLayout(in.Elements, in.Layout, dsIndex)
if err != nil {
return nil, fmt.Errorf("failed to convert panels: %w", err)
}
@@ -82,7 +90,7 @@ func convertDashboardSpec_V2alpha1_to_V1beta1(in *dashv2alpha1.DashboardSpec) (m
}
// Convert variables
variables := convertVariablesToV1(in.Variables)
variables := convertVariablesToV1(in.Variables, dsIndex)
if len(variables) > 0 {
dashboard["templating"] = map[string]interface{}{
"list": variables,
@@ -90,7 +98,7 @@ func convertDashboardSpec_V2alpha1_to_V1beta1(in *dashv2alpha1.DashboardSpec) (m
}
// Convert annotations - always include even if empty to prevent DashboardModel from adding built-in
annotations := convertAnnotationsToV1(in.Annotations)
annotations := convertAnnotationsToV1(in.Annotations, dsIndex)
dashboard["annotations"] = map[string]interface{}{
"list": annotations,
}
@@ -228,28 +236,28 @@ func countTotalPanels(panels []interface{}) int {
// - RowsLayout: Rows become row panels; nested structures are flattened
// - AutoGridLayout: Calculates gridPos based on column count and row height
// - TabsLayout: Tabs become expanded row panels; content is flattened
func convertPanelsFromElementsAndLayout(elements map[string]dashv2alpha1.DashboardElement, layout dashv2alpha1.DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind) ([]interface{}, error) {
func convertPanelsFromElementsAndLayout(elements map[string]dashv2alpha1.DashboardElement, layout dashv2alpha1.DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind, dsIndex *schemaversion.DatasourceIndex) ([]interface{}, error) {
if layout.GridLayoutKind != nil {
return convertGridLayoutToPanels(elements, layout.GridLayoutKind)
return convertGridLayoutToPanels(elements, layout.GridLayoutKind, dsIndex)
}
if layout.RowsLayoutKind != nil {
return convertRowsLayoutToPanels(elements, layout.RowsLayoutKind)
return convertRowsLayoutToPanels(elements, layout.RowsLayoutKind, dsIndex)
}
if layout.AutoGridLayoutKind != nil {
return convertAutoGridLayoutToPanels(elements, layout.AutoGridLayoutKind)
return convertAutoGridLayoutToPanels(elements, layout.AutoGridLayoutKind, dsIndex)
}
if layout.TabsLayoutKind != nil {
return convertTabsLayoutToPanels(elements, layout.TabsLayoutKind)
return convertTabsLayoutToPanels(elements, layout.TabsLayoutKind, dsIndex)
}
// No layout specified, return empty panels
return []interface{}{}, nil
}
func convertGridLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, gridLayout *dashv2alpha1.DashboardGridLayoutKind) ([]interface{}, error) {
func convertGridLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, gridLayout *dashv2alpha1.DashboardGridLayoutKind, dsIndex *schemaversion.DatasourceIndex) ([]interface{}, error) {
panels := make([]interface{}, 0, len(gridLayout.Spec.Items))
for _, item := range gridLayout.Spec.Items {
@@ -258,7 +266,7 @@ func convertGridLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement
return nil, fmt.Errorf("panel with uid %s not found in the dashboard elements", item.Spec.Element.Name)
}
panel, err := convertPanelFromElement(&element, &item)
panel, err := convertPanelFromElement(&element, &item, dsIndex)
if err != nil {
return nil, fmt.Errorf("failed to convert panel %s: %w", item.Spec.Element.Name, err)
}
@@ -271,21 +279,21 @@ func convertGridLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement
// convertRowsLayoutToPanels converts a RowsLayout to V1 panels.
// All nested structures (rows within rows, tabs within rows) are flattened to the root level.
// Each row becomes a row panel, and nested content is added sequentially after it.
func convertRowsLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, rowsLayout *dashv2alpha1.DashboardRowsLayoutKind) ([]interface{}, error) {
return convertNestedLayoutToPanels(elements, rowsLayout, nil, 0)
func convertRowsLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, rowsLayout *dashv2alpha1.DashboardRowsLayoutKind, dsIndex *schemaversion.DatasourceIndex) ([]interface{}, error) {
return convertNestedLayoutToPanels(elements, rowsLayout, nil, dsIndex, 0)
}
// convertNestedLayoutToPanels handles arbitrary nesting of RowsLayout and TabsLayout.
// It processes each row/tab in order, tracking Y position to ensure panels don't overlap.
// The function recursively flattens nested structures to produce a flat V1 panel array.
func convertNestedLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, rowsLayout *dashv2alpha1.DashboardRowsLayoutKind, tabsLayout *dashv2alpha1.DashboardTabsLayoutKind, yOffset int64) ([]interface{}, error) {
func convertNestedLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, rowsLayout *dashv2alpha1.DashboardRowsLayoutKind, tabsLayout *dashv2alpha1.DashboardTabsLayoutKind, dsIndex *schemaversion.DatasourceIndex, yOffset int64) ([]interface{}, error) {
panels := make([]interface{}, 0)
currentY := yOffset
// Process RowsLayout
if rowsLayout != nil {
for _, row := range rowsLayout.Spec.Rows {
rowPanels, newY, err := processRowItem(elements, &row, currentY)
rowPanels, newY, err := processRowItem(elements, &row, dsIndex, currentY)
if err != nil {
return nil, err
}
@@ -297,7 +305,7 @@ func convertNestedLayoutToPanels(elements map[string]dashv2alpha1.DashboardEleme
// Process TabsLayout (tabs are converted to rows)
if tabsLayout != nil {
for _, tab := range tabsLayout.Spec.Tabs {
tabPanels, newY, err := processTabItem(elements, &tab, currentY)
tabPanels, newY, err := processTabItem(elements, &tab, dsIndex, currentY)
if err != nil {
return nil, err
}
@@ -316,7 +324,7 @@ func convertNestedLayoutToPanels(elements map[string]dashv2alpha1.DashboardEleme
// - Collapsed row: Panels stored inside row.panels with absolute Y positions
// - Expanded row: Panels added to top level after the row panel
// - Nested layouts: Parent row is preserved; nested content is flattened after it
func processRowItem(elements map[string]dashv2alpha1.DashboardElement, row *dashv2alpha1.DashboardRowsLayoutRowKind, startY int64) ([]interface{}, int64, error) {
func processRowItem(elements map[string]dashv2alpha1.DashboardElement, row *dashv2alpha1.DashboardRowsLayoutRowKind, dsIndex *schemaversion.DatasourceIndex, startY int64) ([]interface{}, int64, error) {
panels := make([]interface{}, 0)
currentY := startY
@@ -346,7 +354,7 @@ func processRowItem(elements map[string]dashv2alpha1.DashboardElement, row *dash
}
// Then process nested rows
nestedPanels, err := convertNestedLayoutToPanels(elements, row.Spec.Layout.RowsLayoutKind, nil, currentY)
nestedPanels, err := convertNestedLayoutToPanels(elements, row.Spec.Layout.RowsLayoutKind, nil, dsIndex, currentY)
if err != nil {
return nil, 0, err
}
@@ -379,7 +387,7 @@ func processRowItem(elements map[string]dashv2alpha1.DashboardElement, row *dash
}
// Then process nested tabs
nestedPanels, err := convertNestedLayoutToPanels(elements, nil, row.Spec.Layout.TabsLayoutKind, currentY)
nestedPanels, err := convertNestedLayoutToPanels(elements, nil, row.Spec.Layout.TabsLayoutKind, dsIndex, currentY)
if err != nil {
return nil, 0, err
}
@@ -421,7 +429,7 @@ func processRowItem(elements map[string]dashv2alpha1.DashboardElement, row *dash
// Add collapsed panels if row is collapsed (panels use absolute Y positions)
if isCollapsed {
collapsedPanels, err := extractCollapsedPanelsWithAbsoluteY(elements, &row.Spec.Layout, currentY+1)
collapsedPanels, err := extractCollapsedPanelsWithAbsoluteY(elements, &row.Spec.Layout, dsIndex, currentY+1)
if err != nil {
return nil, 0, err
}
@@ -436,7 +444,7 @@ func processRowItem(elements map[string]dashv2alpha1.DashboardElement, row *dash
// Add panels from row layout (only for expanded rows or hidden header rows)
if !isCollapsed || isHiddenHeader {
rowPanels, newY, err := extractExpandedPanels(elements, &row.Spec.Layout, currentY, isHiddenHeader, startY)
rowPanels, newY, err := extractExpandedPanels(elements, &row.Spec.Layout, dsIndex, currentY, isHiddenHeader, startY)
if err != nil {
return nil, 0, err
}
@@ -451,7 +459,7 @@ func processRowItem(elements map[string]dashv2alpha1.DashboardElement, row *dash
// Each tab becomes an expanded row panel (collapsed=false) with an empty panels array.
// The tab's content is flattened and added to the top level after the row panel.
// Nested layouts within the tab are recursively processed.
func processTabItem(elements map[string]dashv2alpha1.DashboardElement, tab *dashv2alpha1.DashboardTabsLayoutTabKind, startY int64) ([]interface{}, int64, error) {
func processTabItem(elements map[string]dashv2alpha1.DashboardElement, tab *dashv2alpha1.DashboardTabsLayoutTabKind, dsIndex *schemaversion.DatasourceIndex, startY int64) ([]interface{}, int64, error) {
panels := make([]interface{}, 0)
currentY := startY
@@ -479,7 +487,7 @@ func processTabItem(elements map[string]dashv2alpha1.DashboardElement, tab *dash
// Handle nested layouts inside the tab
if tab.Spec.Layout.RowsLayoutKind != nil {
// Nested RowsLayout inside tab
nestedPanels, err := convertNestedLayoutToPanels(elements, tab.Spec.Layout.RowsLayoutKind, nil, currentY)
nestedPanels, err := convertNestedLayoutToPanels(elements, tab.Spec.Layout.RowsLayoutKind, nil, dsIndex, currentY)
if err != nil {
return nil, 0, err
}
@@ -487,7 +495,7 @@ func processTabItem(elements map[string]dashv2alpha1.DashboardElement, tab *dash
currentY = getMaxYFromPanels(nestedPanels, currentY)
} else if tab.Spec.Layout.TabsLayoutKind != nil {
// Nested TabsLayout inside tab
nestedPanels, err := convertNestedLayoutToPanels(elements, nil, tab.Spec.Layout.TabsLayoutKind, currentY)
nestedPanels, err := convertNestedLayoutToPanels(elements, nil, tab.Spec.Layout.TabsLayoutKind, dsIndex, currentY)
if err != nil {
return nil, 0, err
}
@@ -504,7 +512,7 @@ func processTabItem(elements map[string]dashv2alpha1.DashboardElement, tab *dash
adjustedItem := item
adjustedItem.Spec.Y = item.Spec.Y + currentY
panel, err := convertPanelFromElement(&element, &adjustedItem)
panel, err := convertPanelFromElement(&element, &adjustedItem, dsIndex)
if err != nil {
return nil, 0, fmt.Errorf("failed to convert panel %s: %w", item.Spec.Element.Name, err)
}
@@ -517,7 +525,7 @@ func processTabItem(elements map[string]dashv2alpha1.DashboardElement, tab *dash
}
} else if tab.Spec.Layout.AutoGridLayoutKind != nil {
// AutoGridLayout inside tab - convert with Y offset
autoGridPanels, err := convertAutoGridLayoutToPanelsWithOffset(elements, tab.Spec.Layout.AutoGridLayoutKind, currentY)
autoGridPanels, err := convertAutoGridLayoutToPanelsWithOffset(elements, tab.Spec.Layout.AutoGridLayoutKind, dsIndex, currentY)
if err != nil {
return nil, 0, err
}
@@ -532,7 +540,7 @@ func processTabItem(elements map[string]dashv2alpha1.DashboardElement, tab *dash
// Panels are positioned with absolute Y coordinates (baseY + relative Y).
// This matches V1 behavior where collapsed row panels store their children
// with Y positions as if the row were expanded at that location.
func extractCollapsedPanelsWithAbsoluteY(elements map[string]dashv2alpha1.DashboardElement, layout *dashv2alpha1.DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind, baseY int64) ([]interface{}, error) {
func extractCollapsedPanelsWithAbsoluteY(elements map[string]dashv2alpha1.DashboardElement, layout *dashv2alpha1.DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind, dsIndex *schemaversion.DatasourceIndex, baseY int64) ([]interface{}, error) {
panels := make([]interface{}, 0)
if layout.GridLayoutKind != nil {
@@ -544,7 +552,7 @@ func extractCollapsedPanelsWithAbsoluteY(elements map[string]dashv2alpha1.Dashbo
// Create a copy with adjusted Y position
adjustedItem := item
adjustedItem.Spec.Y = item.Spec.Y + baseY
panel, err := convertPanelFromElement(&element, &adjustedItem)
panel, err := convertPanelFromElement(&element, &adjustedItem, dsIndex)
if err != nil {
return nil, fmt.Errorf("failed to convert panel %s: %w", item.Spec.Element.Name, err)
}
@@ -553,7 +561,7 @@ func extractCollapsedPanelsWithAbsoluteY(elements map[string]dashv2alpha1.Dashbo
}
// Handle AutoGridLayout for collapsed rows with Y offset
if layout.AutoGridLayoutKind != nil {
autoGridPanels, err := convertAutoGridLayoutToPanelsWithOffset(elements, layout.AutoGridLayoutKind, baseY)
autoGridPanels, err := convertAutoGridLayoutToPanelsWithOffset(elements, layout.AutoGridLayoutKind, dsIndex, baseY)
if err != nil {
return nil, err
}
@@ -563,7 +571,7 @@ func extractCollapsedPanelsWithAbsoluteY(elements map[string]dashv2alpha1.Dashbo
if layout.RowsLayoutKind != nil {
currentY := baseY
for _, row := range layout.RowsLayoutKind.Spec.Rows {
nestedPanels, err := extractCollapsedPanelsWithAbsoluteY(elements, &row.Spec.Layout, currentY)
nestedPanels, err := extractCollapsedPanelsWithAbsoluteY(elements, &row.Spec.Layout, dsIndex, currentY)
if err != nil {
return nil, err
}
@@ -574,7 +582,7 @@ func extractCollapsedPanelsWithAbsoluteY(elements map[string]dashv2alpha1.Dashbo
if layout.TabsLayoutKind != nil {
currentY := baseY
for _, tab := range layout.TabsLayoutKind.Spec.Tabs {
nestedPanels, err := extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements, &tab.Spec.Layout, currentY)
nestedPanels, err := extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements, &tab.Spec.Layout, dsIndex, currentY)
if err != nil {
return nil, err
}
@@ -588,7 +596,7 @@ func extractCollapsedPanelsWithAbsoluteY(elements map[string]dashv2alpha1.Dashbo
// extractCollapsedPanelsFromTabLayoutWithAbsoluteY extracts panels from a tab layout with absolute Y.
// Similar to extractCollapsedPanelsWithAbsoluteY but handles the tab-specific layout type.
func extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements map[string]dashv2alpha1.DashboardElement, layout *dashv2alpha1.DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind, baseY int64) ([]interface{}, error) {
func extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements map[string]dashv2alpha1.DashboardElement, layout *dashv2alpha1.DashboardGridLayoutKindOrRowsLayoutKindOrAutoGridLayoutKindOrTabsLayoutKind, dsIndex *schemaversion.DatasourceIndex, baseY int64) ([]interface{}, error) {
panels := make([]interface{}, 0)
if layout.GridLayoutKind != nil {
@@ -599,7 +607,7 @@ func extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements map[string]dashv2
}
adjustedItem := item
adjustedItem.Spec.Y = item.Spec.Y + baseY
panel, err := convertPanelFromElement(&element, &adjustedItem)
panel, err := convertPanelFromElement(&element, &adjustedItem, dsIndex)
if err != nil {
return nil, fmt.Errorf("failed to convert panel %s: %w", item.Spec.Element.Name, err)
}
@@ -607,7 +615,7 @@ func extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements map[string]dashv2
}
}
if layout.AutoGridLayoutKind != nil {
autoGridPanels, err := convertAutoGridLayoutToPanelsWithOffset(elements, layout.AutoGridLayoutKind, baseY)
autoGridPanels, err := convertAutoGridLayoutToPanelsWithOffset(elements, layout.AutoGridLayoutKind, dsIndex, baseY)
if err != nil {
return nil, err
}
@@ -616,7 +624,7 @@ func extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements map[string]dashv2
if layout.RowsLayoutKind != nil {
currentY := baseY
for _, row := range layout.RowsLayoutKind.Spec.Rows {
nestedPanels, err := extractCollapsedPanelsWithAbsoluteY(elements, &row.Spec.Layout, currentY)
nestedPanels, err := extractCollapsedPanelsWithAbsoluteY(elements, &row.Spec.Layout, dsIndex, currentY)
if err != nil {
return nil, err
}
@@ -627,7 +635,7 @@ func extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements map[string]dashv2
if layout.TabsLayoutKind != nil {
currentY := baseY
for _, tab := range layout.TabsLayoutKind.Spec.Tabs {
nestedPanels, err := extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements, &tab.Spec.Layout, currentY)
nestedPanels, err := extractCollapsedPanelsFromTabLayoutWithAbsoluteY(elements, &tab.Spec.Layout, dsIndex, currentY)
if err != nil {
return nil, err
}
@@ -671,7 +679,7 @@ func getLayoutHeightFromTab(layout *dashv2alpha1.DashboardGridLayoutKindOrRowsLa
// - Explicit row: Add (currentY - 1) to relative Y for absolute positioning
//
// Returns the panels and the new Y position for the next row.
func extractExpandedPanels(elements map[string]dashv2alpha1.DashboardElement, layout *dashv2alpha1.DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind, currentY int64, isHiddenHeader bool, startY int64) ([]interface{}, int64, error) {
func extractExpandedPanels(elements map[string]dashv2alpha1.DashboardElement, layout *dashv2alpha1.DashboardGridLayoutKindOrAutoGridLayoutKindOrTabsLayoutKindOrRowsLayoutKind, dsIndex *schemaversion.DatasourceIndex, currentY int64, isHiddenHeader bool, startY int64) ([]interface{}, int64, error) {
panels := make([]interface{}, 0)
// For hidden headers, don't track Y changes (matches original behavior)
maxY := startY
@@ -692,7 +700,7 @@ func extractExpandedPanels(elements map[string]dashv2alpha1.DashboardElement, la
}
// For hidden headers: don't adjust Y, keep item.Spec.Y as-is
panel, err := convertPanelFromElement(&element, &adjustedItem)
panel, err := convertPanelFromElement(&element, &adjustedItem, dsIndex)
if err != nil {
return nil, 0, fmt.Errorf("failed to convert panel %s: %w", item.Spec.Element.Name, err)
}
@@ -717,7 +725,7 @@ func extractExpandedPanels(elements map[string]dashv2alpha1.DashboardElement, la
yOffset = currentY - 1
}
autoGridPanels, err := convertAutoGridLayoutToPanelsWithOffset(elements, layout.AutoGridLayoutKind, yOffset)
autoGridPanels, err := convertAutoGridLayoutToPanelsWithOffset(elements, layout.AutoGridLayoutKind, dsIndex, yOffset)
if err != nil {
return nil, 0, err
}
@@ -780,7 +788,7 @@ func getLayoutHeight(layout *dashv2alpha1.DashboardGridLayoutKindOrAutoGridLayou
// convertAutoGridLayoutToPanelsWithOffset converts AutoGridLayout with a Y offset.
// Same as convertAutoGridLayoutToPanels but starts at yOffset instead of 0.
// Used when AutoGridLayout appears inside rows or tabs.
func convertAutoGridLayoutToPanelsWithOffset(elements map[string]dashv2alpha1.DashboardElement, autoGridLayout *dashv2alpha1.DashboardAutoGridLayoutKind, yOffset int64) ([]interface{}, error) {
func convertAutoGridLayoutToPanelsWithOffset(elements map[string]dashv2alpha1.DashboardElement, autoGridLayout *dashv2alpha1.DashboardAutoGridLayoutKind, dsIndex *schemaversion.DatasourceIndex, yOffset int64) ([]interface{}, error) {
panels := make([]interface{}, 0, len(autoGridLayout.Spec.Items))
const (
@@ -842,7 +850,7 @@ func convertAutoGridLayoutToPanelsWithOffset(elements map[string]dashv2alpha1.Da
},
}
panel, err := convertPanelFromElement(&element, &gridItem)
panel, err := convertPanelFromElement(&element, &gridItem, dsIndex)
if err != nil {
return nil, fmt.Errorf("failed to convert panel %s: %w", item.Spec.Element.Name, err)
}
@@ -868,7 +876,7 @@ func convertAutoGridLayoutToPanelsWithOffset(elements map[string]dashv2alpha1.Da
//
// Width: 24 / maxColumnCount (default 3 columns = 8 units wide)
// Height: Predefined grid units per mode (see pixelsToGridUnits for custom)
func convertAutoGridLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, autoGridLayout *dashv2alpha1.DashboardAutoGridLayoutKind) ([]interface{}, error) {
func convertAutoGridLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, autoGridLayout *dashv2alpha1.DashboardAutoGridLayoutKind, dsIndex *schemaversion.DatasourceIndex) ([]interface{}, error) {
panels := make([]interface{}, 0, len(autoGridLayout.Spec.Items))
const (
@@ -955,7 +963,7 @@ func convertAutoGridLayoutToPanels(elements map[string]dashv2alpha1.DashboardEle
}
}
panel, err := convertPanelFromElement(&element, &gridItem)
panel, err := convertPanelFromElement(&element, &gridItem, dsIndex)
if err != nil {
return nil, fmt.Errorf("failed to convert panel %s: %w", item.Spec.Element.Name, err)
}
@@ -976,11 +984,11 @@ func convertAutoGridLayoutToPanels(elements map[string]dashv2alpha1.DashboardEle
// V1 has no native tab concept, so tabs are converted to expanded row panels.
// Each tab becomes a row panel (collapsed=false, panels=[]) with its content
// flattened to the top level. Tab order is preserved in the output.
func convertTabsLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, tabsLayout *dashv2alpha1.DashboardTabsLayoutKind) ([]interface{}, error) {
return convertNestedLayoutToPanels(elements, nil, tabsLayout, 0)
func convertTabsLayoutToPanels(elements map[string]dashv2alpha1.DashboardElement, tabsLayout *dashv2alpha1.DashboardTabsLayoutKind, dsIndex *schemaversion.DatasourceIndex) ([]interface{}, error) {
return convertNestedLayoutToPanels(elements, nil, tabsLayout, dsIndex, 0)
}
func convertPanelFromElement(element *dashv2alpha1.DashboardElement, layoutItem *dashv2alpha1.DashboardGridLayoutItemKind) (map[string]interface{}, error) {
func convertPanelFromElement(element *dashv2alpha1.DashboardElement, layoutItem *dashv2alpha1.DashboardGridLayoutItemKind, dsIndex *schemaversion.DatasourceIndex) (map[string]interface{}, error) {
panel := make(map[string]interface{})
// Set grid position
@@ -1009,7 +1017,7 @@ func convertPanelFromElement(element *dashv2alpha1.DashboardElement, layoutItem
}
if element.PanelKind != nil {
return convertPanelKindToV1(element.PanelKind, panel)
return convertPanelKindToV1(element.PanelKind, panel, dsIndex)
}
if element.LibraryPanelKind != nil {
@@ -1019,7 +1027,7 @@ func convertPanelFromElement(element *dashv2alpha1.DashboardElement, layoutItem
return nil, fmt.Errorf("element has neither PanelKind nor LibraryPanelKind")
}
func convertPanelKindToV1(panelKind *dashv2alpha1.DashboardPanelKind, panel map[string]interface{}) (map[string]interface{}, error) {
func convertPanelKindToV1(panelKind *dashv2alpha1.DashboardPanelKind, panel map[string]interface{}, dsIndex *schemaversion.DatasourceIndex) (map[string]interface{}, error) {
spec := panelKind.Spec
panel["id"] = int(spec.Id)
@@ -1061,14 +1069,14 @@ func convertPanelKindToV1(panelKind *dashv2alpha1.DashboardPanelKind, panel map[
// Convert queries (targets)
targets := make([]map[string]interface{}, 0, len(spec.Data.Spec.Queries))
for _, query := range spec.Data.Spec.Queries {
target := convertPanelQueryToV1(&query)
target := convertPanelQueryToV1(&query, dsIndex)
targets = append(targets, target)
}
panel["targets"] = targets
// Detect mixed datasource - set panel.datasource to "mixed" if queries use different datasources
// This matches the frontend behavior in getPanelDataSource (layoutSerializers/utils.ts)
if mixedDS := detectMixedDatasource(spec.Data.Spec.Queries); mixedDS != nil {
if mixedDS := detectMixedDatasource(spec.Data.Spec.Queries, dsIndex); mixedDS != nil {
panel["datasource"] = mixedDS
}
@@ -1117,7 +1125,7 @@ func convertPanelKindToV1(panelKind *dashv2alpha1.DashboardPanelKind, panel map[
return panel, nil
}
func convertPanelQueryToV1(query *dashv2alpha1.DashboardPanelQueryKind) map[string]interface{} {
func convertPanelQueryToV1(query *dashv2alpha1.DashboardPanelQueryKind, dsIndex *schemaversion.DatasourceIndex) map[string]interface{} {
target := make(map[string]interface{})
// Copy query spec (excluding refId, hide, datasource which are handled separately)
@@ -1142,7 +1150,7 @@ func convertPanelQueryToV1(query *dashv2alpha1.DashboardPanelQueryKind) map[stri
}
// Resolve datasource based on V2 input (reuse shared function)
datasource := getDataSourceForQuery(query.Spec.Datasource, query.Spec.Query.Kind)
datasource := getDataSourceForQuery(query.Spec.Datasource, query.Spec.Query.Kind, nil)
if datasource != nil {
target["datasource"] = datasource
}
@@ -1156,7 +1164,7 @@ func convertPanelQueryToV1(query *dashv2alpha1.DashboardPanelQueryKind) map[stri
// - Else if queryKind (type) is non-empty → return {type} only
// - Else → return nil (no datasource)
// Used for variables and annotations. Panel queries use convertPanelQueryToV1Target.
func getDataSourceForQuery(explicitDS *dashv2alpha1.DashboardDataSourceRef, queryKind string) map[string]interface{} {
func getDataSourceForQuery(explicitDS *dashv2alpha1.DashboardDataSourceRef, queryKind string, _ *schemaversion.DatasourceIndex) map[string]interface{} {
// Case 1: Explicit datasource with UID provided
if explicitDS != nil && explicitDS.Uid != nil && *explicitDS.Uid != "" {
datasource := map[string]interface{}{
@@ -1187,7 +1195,7 @@ func getDataSourceForQuery(explicitDS *dashv2alpha1.DashboardDataSourceRef, quer
// Compares based on V2 input without runtime resolution:
// - If query has explicit datasource.uid → use that UID and type
// - Else → use query.Kind as type (empty UID)
func detectMixedDatasource(queries []dashv2alpha1.DashboardPanelQueryKind) map[string]interface{} {
func detectMixedDatasource(queries []dashv2alpha1.DashboardPanelQueryKind, _ *schemaversion.DatasourceIndex) map[string]interface{} {
if len(queries) == 0 {
return nil
}
@@ -1246,7 +1254,7 @@ func convertLibraryPanelKindToV1(libPanelKind *dashv2alpha1.DashboardLibraryPane
return panel, nil
}
func convertVariablesToV1(variables []dashv2alpha1.DashboardVariableKind) []map[string]interface{} {
func convertVariablesToV1(variables []dashv2alpha1.DashboardVariableKind, dsIndex *schemaversion.DatasourceIndex) []map[string]interface{} {
result := make([]map[string]interface{}, 0, len(variables))
for _, variable := range variables {
@@ -1254,7 +1262,7 @@ func convertVariablesToV1(variables []dashv2alpha1.DashboardVariableKind) []map[
var err error
if variable.QueryVariableKind != nil {
varMap, err = convertQueryVariableToV1(variable.QueryVariableKind)
varMap, err = convertQueryVariableToV1(variable.QueryVariableKind, dsIndex)
} else if variable.DatasourceVariableKind != nil {
varMap, err = convertDatasourceVariableToV1(variable.DatasourceVariableKind)
} else if variable.CustomVariableKind != nil {
@@ -1266,9 +1274,9 @@ func convertVariablesToV1(variables []dashv2alpha1.DashboardVariableKind) []map[
} else if variable.TextVariableKind != nil {
varMap, err = convertTextVariableToV1(variable.TextVariableKind)
} else if variable.GroupByVariableKind != nil {
varMap, err = convertGroupByVariableToV1(variable.GroupByVariableKind)
varMap, err = convertGroupByVariableToV1(variable.GroupByVariableKind, dsIndex)
} else if variable.AdhocVariableKind != nil {
varMap, err = convertAdhocVariableToV1(variable.AdhocVariableKind)
varMap, err = convertAdhocVariableToV1(variable.AdhocVariableKind, dsIndex)
} else if variable.SwitchVariableKind != nil {
varMap, err = convertSwitchVariableToV1(variable.SwitchVariableKind)
}
@@ -1281,7 +1289,7 @@ func convertVariablesToV1(variables []dashv2alpha1.DashboardVariableKind) []map[
return result
}
func convertQueryVariableToV1(variable *dashv2alpha1.DashboardQueryVariableKind) (map[string]interface{}, error) {
func convertQueryVariableToV1(variable *dashv2alpha1.DashboardQueryVariableKind, dsIndex *schemaversion.DatasourceIndex) (map[string]interface{}, error) {
spec := variable.Spec
varMap := map[string]interface{}{
"name": spec.Name,
@@ -1312,9 +1320,6 @@ func convertQueryVariableToV1(variable *dashv2alpha1.DashboardQueryVariableKind)
if spec.Definition != nil {
varMap["definition"] = *spec.Definition
}
if spec.RegexApplyTo != nil {
varMap["regexApplyTo"] = string(*spec.RegexApplyTo)
}
varMap["allowCustomValue"] = spec.AllowCustomValue
// Convert query - handle LEGACY_STRING_VALUE_KEY
@@ -1331,7 +1336,7 @@ func convertQueryVariableToV1(variable *dashv2alpha1.DashboardQueryVariableKind)
}
// Resolve datasource - use explicit datasource or resolve from query kind (datasource type)/default
datasource := getDataSourceForQuery(spec.Datasource, spec.Query.Kind)
datasource := getDataSourceForQuery(spec.Datasource, spec.Query.Kind, dsIndex)
if datasource != nil {
varMap["datasource"] = datasource
}
@@ -1481,7 +1486,7 @@ func convertTextVariableToV1(variable *dashv2alpha1.DashboardTextVariableKind) (
return varMap, nil
}
func convertGroupByVariableToV1(variable *dashv2alpha1.DashboardGroupByVariableKind) (map[string]interface{}, error) {
func convertGroupByVariableToV1(variable *dashv2alpha1.DashboardGroupByVariableKind, dsIndex *schemaversion.DatasourceIndex) (map[string]interface{}, error) {
spec := variable.Spec
varMap := map[string]interface{}{
"name": spec.Name,
@@ -1504,7 +1509,7 @@ func convertGroupByVariableToV1(variable *dashv2alpha1.DashboardGroupByVariableK
}
// Resolve datasource - GroupBy variables don't have a query kind, so use empty string (will fall back to default)
datasource := getDataSourceForQuery(spec.Datasource, "")
datasource := getDataSourceForQuery(spec.Datasource, "", dsIndex)
if datasource != nil {
varMap["datasource"] = datasource
}
@@ -1512,7 +1517,7 @@ func convertGroupByVariableToV1(variable *dashv2alpha1.DashboardGroupByVariableK
return varMap, nil
}
func convertAdhocVariableToV1(variable *dashv2alpha1.DashboardAdhocVariableKind) (map[string]interface{}, error) {
func convertAdhocVariableToV1(variable *dashv2alpha1.DashboardAdhocVariableKind, dsIndex *schemaversion.DatasourceIndex) (map[string]interface{}, error) {
spec := variable.Spec
varMap := map[string]interface{}{
"name": spec.Name,
@@ -1531,7 +1536,7 @@ func convertAdhocVariableToV1(variable *dashv2alpha1.DashboardAdhocVariableKind)
varMap["allowCustomValue"] = spec.AllowCustomValue
// Resolve datasource - Adhoc variables don't have a query kind, so use empty string (will fall back to default)
datasource := getDataSourceForQuery(spec.Datasource, "")
datasource := getDataSourceForQuery(spec.Datasource, "", dsIndex)
if datasource != nil {
varMap["datasource"] = datasource
}
@@ -1658,7 +1663,7 @@ func convertSwitchVariableToV1(variable *dashv2alpha1.DashboardSwitchVariableKin
return varMap, nil
}
func convertAnnotationsToV1(annotations []dashv2alpha1.DashboardAnnotationQueryKind) []map[string]interface{} {
func convertAnnotationsToV1(annotations []dashv2alpha1.DashboardAnnotationQueryKind, dsIndex *schemaversion.DatasourceIndex) []map[string]interface{} {
result := make([]map[string]interface{}, 0, len(annotations))
for _, annotation := range annotations {
@@ -1681,7 +1686,7 @@ func convertAnnotationsToV1(annotations []dashv2alpha1.DashboardAnnotationQueryK
if annotation.Spec.Query != nil {
queryKind = annotation.Spec.Query.Kind
}
datasource := getDataSourceForQuery(annotation.Spec.Datasource, queryKind)
datasource := getDataSourceForQuery(annotation.Spec.Datasource, queryKind, dsIndex)
if datasource != nil {
annotationMap["datasource"] = datasource
}

View File

@@ -282,7 +282,7 @@ func TestV2alpha1ToV1beta1LayoutErrors(t *testing.T) {
// Initialize the migrator with test data source and library element providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()
@@ -498,7 +498,7 @@ func TestV2alpha1ToV1beta1BasicFields(t *testing.T) {
// Initialize the migrator with test data source and library element providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()

View File

@@ -767,7 +767,6 @@ func convertQueryVariableSpec_V2alpha1_to_V2beta1(in *dashv2alpha1.DashboardQuer
out.SkipUrlSync = in.SkipUrlSync
out.Description = in.Description
out.Regex = in.Regex
out.RegexApplyTo = (*dashv2beta1.DashboardVariableRegexApplyTo)(in.RegexApplyTo)
out.Sort = dashv2beta1.DashboardVariableSort(in.Sort)
out.Definition = in.Definition
out.Options = convertVariableOptions_V2alpha1_to_V2beta1(in.Options)

View File

@@ -18,7 +18,7 @@ func TestV2alpha1ToV2beta1(t *testing.T) {
// Initialize the migrator with test providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()

View File

@@ -806,7 +806,6 @@ func convertQueryVariableSpec_V2beta1_to_V2alpha1(in *dashv2beta1.DashboardQuery
out.SkipUrlSync = in.SkipUrlSync
out.Description = in.Description
out.Regex = in.Regex
out.RegexApplyTo = (*dashv2alpha1.DashboardVariableRegexApplyTo)(in.RegexApplyTo)
out.Sort = dashv2alpha1.DashboardVariableSort(in.Sort)
out.Definition = in.Definition
out.Options = convertVariableOptions_V2beta1_to_V2alpha1(in.Options)

View File

@@ -24,7 +24,7 @@ func TestV2beta1ToV2alpha1RoundTrip(t *testing.T) {
// Initialize the migrator with test providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()
@@ -107,7 +107,7 @@ func TestV2beta1ToV2alpha1FromOutputFiles(t *testing.T) {
// Initialize the migrator with test providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()
@@ -193,7 +193,7 @@ func TestV2beta1ToV2alpha1(t *testing.T) {
// Initialize the migrator with test providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
migration.Initialize(dsProvider, leProvider, migration.DefaultCacheTTL)
migration.Initialize(dsProvider, leProvider)
// Set up conversion scheme
scheme := runtime.NewScheme()

View File

@@ -4,19 +4,13 @@ import (
"context"
"fmt"
"sync"
"time"
"github.com/grafana/authlib/types"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana/apps/dashboard/pkg/migration/schemaversion"
)
// DefaultCacheTTL is the default TTL for the datasource and library element caches.
const DefaultCacheTTL = time.Minute
// Initialize provides the migrator singleton with required dependencies and builds the map of migrations.
func Initialize(dsIndexProvider schemaversion.DataSourceIndexProvider, leIndexProvider schemaversion.LibraryElementIndexProvider, cacheTTL time.Duration) {
migratorInstance.init(dsIndexProvider, leIndexProvider, cacheTTL)
func Initialize(dsIndexProvider schemaversion.DataSourceIndexProvider, leIndexProvider schemaversion.LibraryElementIndexProvider) {
migratorInstance.init(dsIndexProvider, leIndexProvider)
}
// GetDataSourceIndexProvider returns the datasource index provider instance that was initialized.
@@ -44,34 +38,6 @@ func ResetForTesting() {
initOnce = sync.Once{}
}
// PreloadCache preloads the datasource and library element caches for the given namespaces.
func PreloadCache(ctx context.Context, nsInfos []types.NamespaceInfo) {
// Wait for initialization to complete
<-migratorInstance.ready
// Try to preload datasource cache
if preloadable, ok := migratorInstance.dsIndexProvider.(schemaversion.PreloadableCache); ok {
preloadable.Preload(ctx, nsInfos)
}
// Try to preload library element cache
if preloadable, ok := migratorInstance.leIndexProvider.(schemaversion.PreloadableCache); ok {
preloadable.Preload(ctx, nsInfos)
}
}
// PreloadCacheInBackground starts a goroutine that preloads the caches for the given namespaces.
func PreloadCacheInBackground(nsInfos []types.NamespaceInfo) {
go func() {
defer func() {
if r := recover(); r != nil {
logging.DefaultLogger.Error("panic during cache preloading", "error", r)
}
}()
PreloadCache(context.Background(), nsInfos)
}()
}
// Migrate migrates the given dashboard to the target version.
// This will block until the migrator is initialized.
func Migrate(ctx context.Context, dash map[string]interface{}, targetVersion int) error {
@@ -93,15 +59,11 @@ type migrator struct {
leIndexProvider schemaversion.LibraryElementIndexProvider
}
func (m *migrator) init(dsIndexProvider schemaversion.DataSourceIndexProvider, leIndexProvider schemaversion.LibraryElementIndexProvider, cacheTTL time.Duration) {
func (m *migrator) init(dsIndexProvider schemaversion.DataSourceIndexProvider, leIndexProvider schemaversion.LibraryElementIndexProvider) {
initOnce.Do(func() {
// Wrap the provider with org-aware TTL caching for all conversions.
// This prevents repeated DB queries across multiple conversion calls while allowing
// the cache to refresh periodically, making it suitable for long-lived singleton usage.
m.dsIndexProvider = schemaversion.WrapIndexProviderWithCache(dsIndexProvider, cacheTTL)
// Wrap library element provider with caching as well
m.leIndexProvider = schemaversion.WrapLibraryElementProviderWithCache(leIndexProvider, cacheTTL)
m.migrations = schemaversion.GetMigrations(m.dsIndexProvider, m.leIndexProvider)
m.dsIndexProvider = dsIndexProvider
m.leIndexProvider = leIndexProvider
m.migrations = schemaversion.GetMigrations(dsIndexProvider, leIndexProvider)
close(m.ready)
})
}

View File

@@ -10,13 +10,10 @@ import (
"path/filepath"
"strconv"
"strings"
"sync/atomic"
"testing"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apiserver/pkg/endpoints/request"
"github.com/grafana/grafana/apps/dashboard/pkg/migration/schemaversion"
migrationtestutil "github.com/grafana/grafana/apps/dashboard/pkg/migration/testutil"
@@ -34,7 +31,7 @@ func TestMigrate(t *testing.T) {
ResetForTesting()
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
Initialize(dsProvider, leProvider, DefaultCacheTTL)
Initialize(dsProvider, leProvider)
t.Run("minimum version check", func(t *testing.T) {
err := Migrate(context.Background(), map[string]interface{}{
@@ -52,7 +49,7 @@ func TestMigrateSingleVersion(t *testing.T) {
// Use the same datasource provider as the frontend test to ensure consistency
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
Initialize(dsProvider, leProvider, DefaultCacheTTL)
Initialize(dsProvider, leProvider)
runSingleVersionMigrationTests(t, SINGLE_VERSION_OUTPUT_DIR)
}
@@ -221,7 +218,7 @@ func TestSchemaMigrationMetrics(t *testing.T) {
// Initialize migration with test providers
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
Initialize(dsProvider, leProvider, DefaultCacheTTL)
Initialize(dsProvider, leProvider)
// Create a test registry for metrics
registry := prometheus.NewRegistry()
@@ -307,7 +304,7 @@ func TestSchemaMigrationMetrics(t *testing.T) {
func TestSchemaMigrationLogging(t *testing.T) {
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.StandardTestConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
Initialize(dsProvider, leProvider, DefaultCacheTTL)
Initialize(dsProvider, leProvider)
tests := []struct {
name string
@@ -426,7 +423,7 @@ func TestMigrateDevDashboards(t *testing.T) {
ResetForTesting()
dsProvider := migrationtestutil.NewDataSourceProvider(migrationtestutil.DevDashboardConfig)
leProvider := migrationtestutil.NewLibraryElementProvider()
Initialize(dsProvider, leProvider, DefaultCacheTTL)
Initialize(dsProvider, leProvider)
runDevDashboardMigrationTests(t, schemaversion.LATEST_VERSION, DEV_DASHBOARDS_OUTPUT_DIR)
}
@@ -452,232 +449,3 @@ func runDevDashboardMigrationTests(t *testing.T, targetVersion int, outputDir st
})
}
}
func TestMigrateWithCache(t *testing.T) {
// Reset the migration singleton before each test
ResetForTesting()
datasources := []schemaversion.DataSourceInfo{
{UID: "ds-uid-1", Type: "prometheus", Name: "Prometheus", Default: true, APIVersion: "v1"},
{UID: "ds-uid-2", Type: "loki", Name: "Loki", Default: false, APIVersion: "v1"},
{UID: "ds-uid-3", Type: "prometheus", Name: "Prometheus 2", Default: false, APIVersion: "v1"},
}
// Create a dashboard at schema version 32 for V33 and V36 migration with datasource references
dashboard1 := map[string]interface{}{
"schemaVersion": 32,
"title": "Test Dashboard 1",
"panels": []interface{}{
map[string]interface{}{
"id": 1,
"type": "timeseries",
// String datasource that V33 will migrate to object reference
"datasource": "Prometheus",
"targets": []interface{}{
map[string]interface{}{
"refId": "A",
"datasource": "Loki",
},
},
},
},
}
// Create a dashboard at schema version 35 for testing V36 migration with datasource references in annotations
dashboard2 := map[string]interface{}{
"schemaVersion": 35,
"title": "Test Dashboard 2",
"annotations": map[string]interface{}{
"list": []interface{}{
map[string]interface{}{
"name": "Test Annotation",
"datasource": "Prometheus 2", // String reference that V36 should convert
"enable": true,
},
},
},
}
t.Run("with datasources", func(t *testing.T) {
ResetForTesting()
dsProvider := newCountingProvider(datasources)
leProvider := newCountingLibraryProvider(nil)
// Initialize the migration system with our counting providers
Initialize(dsProvider, leProvider, DefaultCacheTTL)
// Verify initial call count is zero
assert.Equal(t, dsProvider.GetCallCount(), int64(0))
// Create a context with namespace (required for caching)
ctx := request.WithNamespace(context.Background(), "default")
// First migration - should invoke the provider once to build the cache
dash1 := deepCopyDashboard(dashboard1)
err := Migrate(ctx, dash1, schemaversion.LATEST_VERSION)
require.NoError(t, err)
assert.Equal(t, int64(1), dsProvider.GetCallCount())
// Verify datasource conversion from string to object reference
panels := dash1["panels"].([]interface{})
panel := panels[0].(map[string]interface{})
panelDS, ok := panel["datasource"].(map[string]interface{})
require.True(t, ok, "panel datasource should be converted to object")
assert.Equal(t, "ds-uid-1", panelDS["uid"])
assert.Equal(t, "prometheus", panelDS["type"])
// Verify target datasource conversion
targets := panel["targets"].([]interface{})
target := targets[0].(map[string]interface{})
targetDS, ok := target["datasource"].(map[string]interface{})
require.True(t, ok, "target datasource should be converted to object")
assert.Equal(t, "ds-uid-2", targetDS["uid"])
assert.Equal(t, "loki", targetDS["type"])
// Migration with V35 dashboard - should use the cached index from first migration
dash2 := deepCopyDashboard(dashboard2)
err = Migrate(ctx, dash2, schemaversion.LATEST_VERSION)
require.NoError(t, err, "second migration should succeed")
assert.Equal(t, int64(1), dsProvider.GetCallCount())
// Verify the annotation datasource was converted to object reference
annotations := dash2["annotations"].(map[string]interface{})
list := annotations["list"].([]interface{})
var testAnnotation map[string]interface{}
for _, a := range list {
ann := a.(map[string]interface{})
if ann["name"] == "Test Annotation" {
testAnnotation = ann
break
}
}
require.NotNil(t, testAnnotation, "Test Annotation should exist")
annotationDS, ok := testAnnotation["datasource"].(map[string]interface{})
require.True(t, ok, "annotation datasource should be converted to object")
assert.Equal(t, "ds-uid-3", annotationDS["uid"])
assert.Equal(t, "prometheus", annotationDS["type"])
})
// tests that cache isolates data per namespace
t.Run("with multiple orgs", func(t *testing.T) {
// Reset the migration singleton
ResetForTesting()
dsProvider := newCountingProvider(datasources)
leProvider := newCountingLibraryProvider(nil)
Initialize(dsProvider, leProvider, DefaultCacheTTL)
// Create contexts for different orgs with proper namespace format (org-ID)
ctx1 := request.WithNamespace(context.Background(), "default") // org 1
ctx2 := request.WithNamespace(context.Background(), "stacks-2") // stack 2
// Migrate for org 1
err := Migrate(ctx1, deepCopyDashboard(dashboard1), schemaversion.LATEST_VERSION)
require.NoError(t, err)
callsAfterOrg1 := dsProvider.GetCallCount()
// Migrate for org 2 - should build separate cache
err = Migrate(ctx2, deepCopyDashboard(dashboard2), schemaversion.LATEST_VERSION)
require.NoError(t, err)
callsAfterOrg2 := dsProvider.GetCallCount()
assert.Greater(t, callsAfterOrg2, callsAfterOrg1,
"org 2 migration should have called provider (separate cache)")
// Migrate again for org 1 - should use cache
err = Migrate(ctx1, deepCopyDashboard(dashboard1), schemaversion.LATEST_VERSION)
require.NoError(t, err)
callsAfterOrg1Again := dsProvider.GetCallCount()
assert.Equal(t, callsAfterOrg2, callsAfterOrg1Again,
"second org 1 migration should use cache")
// Migrate again for org 2 - should use cache
err = Migrate(ctx2, deepCopyDashboard(dashboard1), schemaversion.LATEST_VERSION)
require.NoError(t, err)
callsAfterOrg2Again := dsProvider.GetCallCount()
assert.Equal(t, callsAfterOrg2, callsAfterOrg2Again,
"second org 2 migration should use cache")
})
}
// countingProvider wraps a datasource provider and counts calls to Index()
type countingProvider struct {
datasources []schemaversion.DataSourceInfo
callCount atomic.Int64
}
func newCountingProvider(datasources []schemaversion.DataSourceInfo) *countingProvider {
return &countingProvider{
datasources: datasources,
}
}
func (p *countingProvider) Index(_ context.Context) *schemaversion.DatasourceIndex {
p.callCount.Add(1)
return schemaversion.NewDatasourceIndex(p.datasources)
}
func (p *countingProvider) GetCallCount() int64 {
return p.callCount.Load()
}
// countingLibraryProvider wraps a library element provider and counts calls
type countingLibraryProvider struct {
elements []schemaversion.LibraryElementInfo
callCount atomic.Int64
}
func newCountingLibraryProvider(elements []schemaversion.LibraryElementInfo) *countingLibraryProvider {
return &countingLibraryProvider{
elements: elements,
}
}
func (p *countingLibraryProvider) GetLibraryElementInfo(_ context.Context) []schemaversion.LibraryElementInfo {
p.callCount.Add(1)
return p.elements
}
func (p *countingLibraryProvider) GetCallCount() int64 {
return p.callCount.Load()
}
// deepCopyDashboard creates a deep copy of a dashboard map
func deepCopyDashboard(dash map[string]interface{}) map[string]interface{} {
cpy := make(map[string]interface{})
for k, v := range dash {
switch val := v.(type) {
case []interface{}:
cpy[k] = deepCopySlice(val)
case map[string]interface{}:
cpy[k] = deepCopyMapForCache(val)
default:
cpy[k] = v
}
}
return cpy
}
func deepCopySlice(s []interface{}) []interface{} {
cpy := make([]interface{}, len(s))
for i, v := range s {
switch val := v.(type) {
case []interface{}:
cpy[i] = deepCopySlice(val)
case map[string]interface{}:
cpy[i] = deepCopyMapForCache(val)
default:
cpy[i] = v
}
}
return cpy
}
func deepCopyMapForCache(m map[string]interface{}) map[string]interface{} {
cpy := make(map[string]interface{})
for k, v := range m {
switch val := v.(type) {
case []interface{}:
cpy[k] = deepCopySlice(val)
case map[string]interface{}:
cpy[k] = deepCopyMapForCache(val)
default:
cpy[k] = v
}
}
return cpy
}

View File

@@ -1,104 +0,0 @@
package schemaversion
import (
"context"
"sync"
"time"
"github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/hashicorp/golang-lru/v2/expirable"
k8srequest "k8s.io/apiserver/pkg/endpoints/request"
"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
)
const defaultCacheSize = 1000
// CacheProvider is a generic cache interface for schema version providers.
type CacheProvider[T any] interface {
// Get returns the cached value if it's still valid, otherwise calls fetch and caches the result.
Get(ctx context.Context) T
}
// PreloadableCache is an interface for providers that support preloading the cache.
type PreloadableCache interface {
// Preload loads data into the cache for the given namespaces.
Preload(ctx context.Context, nsInfos []types.NamespaceInfo)
}
// cachedProvider is a thread-safe TTL cache that wraps any fetch function.
type cachedProvider[T any] struct {
fetch func(context.Context) T
cache *expirable.LRU[string, T] // LRU cache: namespace to cache entry
inFlight sync.Map // map[string]*sync.Mutex - per-namespace fetch locks
logger log.Logger
}
// newCachedProvider creates a new cachedProvider.
// The fetch function should be able to handle context with different namespaces.
// A non-positive size turns LRU mechanism off (cache of unlimited size).
// A non-positive cacheTTL disables TTL expiration.
func newCachedProvider[T any](fetch func(context.Context) T, size int, cacheTTL time.Duration, logger log.Logger) *cachedProvider[T] {
cacheProvider := &cachedProvider[T]{
fetch: fetch,
logger: logger,
}
cacheProvider.cache = expirable.NewLRU(size, func(key string, value T) {
cacheProvider.inFlight.Delete(key)
}, cacheTTL)
return cacheProvider
}
// Get returns the cached value if it's still valid, otherwise calls fetch and caches the result.
func (p *cachedProvider[T]) Get(ctx context.Context) T {
// Get namespace info from ctx
nsInfo, err := request.NamespaceInfoFrom(ctx, true)
if err != nil {
// No namespace, fall back to direct fetch call without caching
p.logger.Warn("Unable to get namespace info from context, skipping cache", "error", err)
return p.fetch(ctx)
}
namespace := nsInfo.Value
// Fast path: check if cache is still valid
if entry, ok := p.cache.Get(namespace); ok {
return entry
}
// Get or create a per-namespace lock for this fetch operation
// This ensures only one fetch happens per namespace at a time
lockInterface, _ := p.inFlight.LoadOrStore(namespace, &sync.Mutex{})
nsMutex := lockInterface.(*sync.Mutex)
// Lock this specific namespace - other namespaces can still proceed
nsMutex.Lock()
defer nsMutex.Unlock()
// Double-check: another goroutine might have already fetched while we waited
if entry, ok := p.cache.Get(namespace); ok {
return entry
}
// Fetch outside the main lock - only this namespace is blocked
p.logger.Debug("cache miss or expired, fetching new value", "namespace", namespace)
value := p.fetch(ctx)
// Update the cache for this namespace
p.cache.Add(namespace, value)
return value
}
// Preload loads data into the cache for the given namespaces.
func (p *cachedProvider[T]) Preload(ctx context.Context, nsInfos []types.NamespaceInfo) {
// Build the cache using a context with the namespace
p.logger.Info("preloading cache", "nsInfos", len(nsInfos))
startedAt := time.Now()
defer func() {
p.logger.Info("finished preloading cache", "nsInfos", len(nsInfos), "elapsed", time.Since(startedAt))
}()
for _, nsInfo := range nsInfos {
p.cache.Add(nsInfo.Value, p.fetch(k8srequest.WithNamespace(ctx, nsInfo.Value)))
}
}

View File

@@ -1,478 +0,0 @@
package schemaversion
import (
"context"
"fmt"
"sync"
"sync/atomic"
"testing"
"time"
authlib "github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apiserver/pkg/endpoints/request"
)
// testProvider tracks how many times get() is called
type testProvider struct {
testData any
callCount atomic.Int64
}
func newTestProvider(testData any) *testProvider {
return &testProvider{
testData: testData,
}
}
func (p *testProvider) get(_ context.Context) any {
p.callCount.Add(1)
return p.testData
}
func (p *testProvider) getCallCount() int64 {
return p.callCount.Load()
}
func TestCachedProvider_CacheHit(t *testing.T) {
datasources := []DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
{UID: "ds2", Type: "loki", Name: "Loki"},
}
underlying := newTestProvider(datasources)
// Test newCachedProvider directly instead of the wrapper
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
// Use "default" namespace (org 1) - this is the standard Grafana namespace format
ctx := request.WithNamespace(context.Background(), "default")
// First call should hit the underlying provider
idx1 := cached.Get(ctx)
require.NotNil(t, idx1)
assert.Equal(t, int64(1), underlying.getCallCount(), "first call should invoke underlying provider")
// Second call should use cache
idx2 := cached.Get(ctx)
require.NotNil(t, idx2)
assert.Equal(t, int64(1), underlying.getCallCount(), "second call should use cache, not invoke underlying provider")
// Both should return the same data
assert.Equal(t, idx1, idx2)
}
func TestCachedProvider_NamespaceIsolation(t *testing.T) {
datasources := []DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
underlying := newTestProvider(datasources)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
// Use "default" (org 1) and "org-2" (org 2) - standard Grafana namespace formats
ctx1 := request.WithNamespace(context.Background(), "default")
ctx2 := request.WithNamespace(context.Background(), "org-2")
// First call for org 1
idx1 := cached.Get(ctx1)
require.NotNil(t, idx1)
assert.Equal(t, int64(1), underlying.getCallCount(), "first org-1 call should invoke underlying provider")
// Call for org 2 should also invoke underlying provider (different namespace)
idx2 := cached.Get(ctx2)
require.NotNil(t, idx2)
assert.Equal(t, int64(2), underlying.getCallCount(), "org-2 call should invoke underlying provider (separate cache)")
// Second call for org 1 should use cache
idx3 := cached.Get(ctx1)
require.NotNil(t, idx3)
assert.Equal(t, int64(2), underlying.getCallCount(), "second org-1 call should use cache")
// Second call for org 2 should use cache
idx4 := cached.Get(ctx2)
require.NotNil(t, idx4)
assert.Equal(t, int64(2), underlying.getCallCount(), "second org-2 call should use cache")
}
func TestCachedProvider_NoNamespaceFallback(t *testing.T) {
datasources := []DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
underlying := newTestProvider(datasources)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
// Context without namespace - should fall back to direct provider call
ctx := context.Background()
idx1 := cached.Get(ctx)
require.NotNil(t, idx1)
assert.Equal(t, int64(1), underlying.getCallCount())
// Second call without namespace should also invoke underlying (no caching for unknown namespace)
idx2 := cached.Get(ctx)
require.NotNil(t, idx2)
assert.Equal(t, int64(2), underlying.getCallCount(), "without namespace, each call should invoke underlying provider")
}
func TestCachedProvider_ConcurrentAccess(t *testing.T) {
datasources := []DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
underlying := newTestProvider(datasources)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
// Use "default" namespace (org 1)
ctx := request.WithNamespace(context.Background(), "default")
var wg sync.WaitGroup
numGoroutines := 100
// Launch many goroutines that all try to access the cache simultaneously
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func() {
defer wg.Done()
idx := cached.Get(ctx)
require.NotNil(t, idx)
}()
}
wg.Wait()
// Due to double-check locking, only 1 goroutine should have actually built the cache
// In practice, there might be a few more due to timing, but it should be much less than numGoroutines
callCount := underlying.getCallCount()
assert.LessOrEqual(t, callCount, int64(5), "with proper locking, very few goroutines should invoke underlying provider; got %d", callCount)
}
func TestCachedProvider_ConcurrentNamespaces(t *testing.T) {
datasources := []DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
underlying := newTestProvider(datasources)
cached := newCachedProvider(underlying.get, defaultCacheSize, time.Minute, log.New("test"))
var wg sync.WaitGroup
numOrgs := 10
callsPerOrg := 20
// Launch goroutines for multiple namespaces
// Use valid namespace formats: "default" for org 1, "org-N" for N > 1
namespaces := make([]string, numOrgs)
namespaces[0] = "default"
for i := 1; i < numOrgs; i++ {
namespaces[i] = fmt.Sprintf("org-%d", i+1)
}
for _, ns := range namespaces {
ctx := request.WithNamespace(context.Background(), ns)
for i := 0; i < callsPerOrg; i++ {
wg.Add(1)
go func(ctx context.Context) {
defer wg.Done()
idx := cached.Get(ctx)
require.NotNil(t, idx)
}(ctx)
}
}
wg.Wait()
// Each org should have at most a few calls (ideally 1, but timing can cause a few more)
callCount := underlying.getCallCount()
// With 10 orgs, we expect around 10 calls (one per org)
assert.LessOrEqual(t, callCount, int64(numOrgs), "expected roughly one call per org, got %d calls for %d orgs", callCount, numOrgs)
}
// Test that cache returns correct data for each namespace
func TestCachedProvider_CorrectDataPerNamespace(t *testing.T) {
// Provider that returns different data based on namespace
underlying := &namespaceAwareProvider{
datasourcesByNamespace: map[string][]DataSourceInfo{
"default": {{UID: "org1-ds", Type: "prometheus", Name: "Org1 DS", Default: true}},
"org-2": {{UID: "org2-ds", Type: "loki", Name: "Org2 DS", Default: true}},
},
}
cached := newCachedProvider(underlying.Index, defaultCacheSize, time.Minute, log.New("test"))
// Use valid namespace formats
ctx1 := request.WithNamespace(context.Background(), "default")
ctx2 := request.WithNamespace(context.Background(), "org-2")
idx1 := cached.Get(ctx1)
idx2 := cached.Get(ctx2)
assert.Equal(t, "org1-ds", idx1.GetDefault().UID, "org 1 should get org-1 datasources")
assert.Equal(t, "org2-ds", idx2.GetDefault().UID, "org 2 should get org-2 datasources")
// Subsequent calls should still return correct data
idx1Again := cached.Get(ctx1)
idx2Again := cached.Get(ctx2)
assert.Equal(t, "org1-ds", idx1Again.GetDefault().UID, "org 1 should still get org-1 datasources from cache")
assert.Equal(t, "org2-ds", idx2Again.GetDefault().UID, "org 2 should still get org-2 datasources from cache")
}
// TestCachedProvider_PreloadMultipleNamespaces verifies preloading multiple namespaces
func TestCachedProvider_PreloadMultipleNamespaces(t *testing.T) {
// Provider that returns different data based on namespace
underlying := &namespaceAwareProvider{
datasourcesByNamespace: map[string][]DataSourceInfo{
"default": {{UID: "org1-ds", Type: "prometheus", Name: "Org1 DS", Default: true}},
"org-2": {{UID: "org2-ds", Type: "loki", Name: "Org2 DS", Default: true}},
"org-3": {{UID: "org3-ds", Type: "tempo", Name: "Org3 DS", Default: true}},
},
}
cached := newCachedProvider(underlying.Index, defaultCacheSize, time.Minute, log.New("test"))
// Preload multiple namespaces
nsInfos := []authlib.NamespaceInfo{
createNamespaceInfo(1, 0, "default"),
createNamespaceInfo(2, 0, "org-2"),
createNamespaceInfo(3, 0, "org-3"),
}
cached.Preload(context.Background(), nsInfos)
// After preload, the underlying provider should have been called once per namespace
assert.Equal(t, 3, underlying.callCount, "preload should call underlying provider once per namespace")
// Access all namespaces - should use preloaded data and get correct data per namespace
expectedUIDs := map[string]string{
"default": "org1-ds",
"org-2": "org2-ds",
"org-3": "org3-ds",
}
for _, ns := range []string{"default", "org-2", "org-3"} {
ctx := request.WithNamespace(context.Background(), ns)
idx := cached.Get(ctx)
require.NotNil(t, idx, "index for namespace %s should not be nil", ns)
assert.Equal(t, expectedUIDs[ns], idx.GetDefault().UID, "namespace %s should get correct datasource", ns)
}
// The underlying provider should still have been called only 3 times (from preload)
assert.Equal(t, 3, underlying.callCount,
"access after preload should use cached data for all namespaces")
}
// namespaceAwareProvider returns different datasources based on namespace
type namespaceAwareProvider struct {
datasourcesByNamespace map[string][]DataSourceInfo
callCount int
}
func (p *namespaceAwareProvider) Index(ctx context.Context) *DatasourceIndex {
p.callCount++
ns := request.NamespaceValue(ctx)
if ds, ok := p.datasourcesByNamespace[ns]; ok {
return NewDatasourceIndex(ds)
}
return NewDatasourceIndex(nil)
}
// createNamespaceInfo creates a NamespaceInfo for testing
func createNamespaceInfo(orgID, stackID int64, value string) authlib.NamespaceInfo {
return authlib.NamespaceInfo{
OrgID: orgID,
StackID: stackID,
Value: value,
}
}
// Test DatasourceIndex functionality
func TestDatasourceIndex_Lookup(t *testing.T) {
datasources := []DataSourceInfo{
{UID: "ds-uid-1", Type: "prometheus", Name: "Prometheus DS", Default: true, APIVersion: "v1"},
{UID: "ds-uid-2", Type: "loki", Name: "Loki DS", Default: false, APIVersion: "v1"},
}
idx := NewDatasourceIndex(datasources)
t.Run("lookup by name", func(t *testing.T) {
ds := idx.Lookup("Prometheus DS")
require.NotNil(t, ds)
assert.Equal(t, "ds-uid-1", ds.UID)
})
t.Run("lookup by UID", func(t *testing.T) {
ds := idx.Lookup("ds-uid-2")
require.NotNil(t, ds)
assert.Equal(t, "Loki DS", ds.Name)
})
t.Run("lookup unknown returns nil", func(t *testing.T) {
ds := idx.Lookup("unknown")
assert.Nil(t, ds)
})
t.Run("get default", func(t *testing.T) {
ds := idx.GetDefault()
require.NotNil(t, ds)
assert.Equal(t, "ds-uid-1", ds.UID)
})
t.Run("lookup by UID directly", func(t *testing.T) {
ds := idx.LookupByUID("ds-uid-1")
require.NotNil(t, ds)
assert.Equal(t, "Prometheus DS", ds.Name)
})
t.Run("lookup by name directly", func(t *testing.T) {
ds := idx.LookupByName("Loki DS")
require.NotNil(t, ds)
assert.Equal(t, "ds-uid-2", ds.UID)
})
}
func TestDatasourceIndex_EmptyIndex(t *testing.T) {
idx := NewDatasourceIndex(nil)
assert.Nil(t, idx.GetDefault())
assert.Nil(t, idx.Lookup("anything"))
assert.Nil(t, idx.LookupByUID("anything"))
assert.Nil(t, idx.LookupByName("anything"))
}
// TestCachedProvider_TTLExpiration verifies that cache expires after TTL
func TestCachedProvider_TTLExpiration(t *testing.T) {
datasources := []DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
}
underlying := newTestProvider(datasources)
// Use a very short TTL for testing
shortTTL := 50 * time.Millisecond
cached := newCachedProvider(underlying.get, defaultCacheSize, shortTTL, log.New("test"))
ctx := request.WithNamespace(context.Background(), "default")
// First call - should call underlying provider
idx1 := cached.Get(ctx)
require.NotNil(t, idx1)
assert.Equal(t, int64(1), underlying.getCallCount(), "first call should invoke underlying provider")
// Second call immediately - should use cache
idx2 := cached.Get(ctx)
require.NotNil(t, idx2)
assert.Equal(t, int64(1), underlying.getCallCount(), "second call should use cache")
// Wait for TTL to expire
time.Sleep(shortTTL + 20*time.Millisecond)
// Third call after TTL - should call underlying provider again
idx3 := cached.Get(ctx)
require.NotNil(t, idx3)
assert.Equal(t, int64(2), underlying.getCallCount(),
"after TTL expiration, underlying provider should be called again")
}
// TestCachedProvider_ParallelNamespacesFetch verifies that different namespaces can fetch in parallel
func TestCachedProvider_ParallelNamespacesFetch(t *testing.T) {
// Create a blocking provider that tracks concurrent executions
provider := &blockingProvider{
blockDuration: 100 * time.Millisecond,
datasources: []DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
},
}
cached := newCachedProvider(provider.get, defaultCacheSize, time.Minute, log.New("test"))
numNamespaces := 5
var wg sync.WaitGroup
// Launch fetches for different namespaces simultaneously
startTime := time.Now()
for i := 0; i < numNamespaces; i++ {
wg.Add(1)
namespace := fmt.Sprintf("org-%d", i+1)
go func(ns string) {
defer wg.Done()
ctx := request.WithNamespace(context.Background(), ns)
idx := cached.Get(ctx)
// assert, not require: FailNow must only be called from the test goroutine
assert.NotNil(t, idx)
}(namespace)
}
wg.Wait()
elapsed := time.Since(startTime)
// Verify that all namespaces were called
assert.Equal(t, int64(numNamespaces), provider.callCount.Load())
// Verify max concurrent executions shows parallelism. With a 100ms block all
// namespaces should overlap, but assert only that some overlap occurred to
// avoid scheduler-dependent flakiness on loaded CI machines.
maxConcurrent := provider.maxConcurrent.Load()
assert.Greater(t, maxConcurrent, int64(1))
// If all namespaces had to wait sequentially, it would take numNamespaces * blockDuration
// With parallelism, it should be much faster (close to just blockDuration)
sequentialTime := time.Duration(numNamespaces) * provider.blockDuration
assert.Less(t, elapsed, sequentialTime)
}
// TestCachedProvider_SameNamespaceSerialFetch verifies that the same namespace doesn't fetch concurrently
func TestCachedProvider_SameNamespaceSerialFetch(t *testing.T) {
// Create a blocking provider that tracks concurrent executions
provider := &blockingProvider{
blockDuration: 100 * time.Millisecond,
datasources: []DataSourceInfo{
{UID: "ds1", Type: "prometheus", Name: "Prometheus", Default: true},
},
}
cached := newCachedProvider(provider.get, defaultCacheSize, time.Minute, log.New("test"))
numGoroutines := 10
var wg sync.WaitGroup
// Launch multiple fetches for the SAME namespace simultaneously
ctx := request.WithNamespace(context.Background(), "default")
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func() {
defer wg.Done()
idx := cached.Get(ctx)
// assert, not require: FailNow must only be called from the test goroutine
assert.NotNil(t, idx)
}()
}
wg.Wait()
// Max concurrent should be 1 since all goroutines are for the same namespace
maxConcurrent := provider.maxConcurrent.Load()
assert.Equal(t, int64(1), maxConcurrent)
}
// blockingProvider is a test provider that simulates slow fetch operations
// and tracks concurrent executions
type blockingProvider struct {
blockDuration time.Duration
datasources []DataSourceInfo
callCount atomic.Int64
currentActive atomic.Int64
maxConcurrent atomic.Int64
}
func (p *blockingProvider) get(_ context.Context) any {
p.callCount.Add(1)
// Track concurrent executions
current := p.currentActive.Add(1)
// Update max concurrent if this is a new peak
for {
maxVal := p.maxConcurrent.Load()
if current <= maxVal {
break
}
if p.maxConcurrent.CompareAndSwap(maxVal, current) {
break
}
}
// Simulate slow operation
time.Sleep(p.blockDuration)
p.currentActive.Add(-1)
return p.datasources
}

View File

@@ -2,9 +2,8 @@ package schemaversion
import (
"context"
"sync"
"time"
"github.com/grafana/grafana/pkg/infra/log"
)
// Shared utility functions for datasource migrations across different schema versions.
@@ -12,41 +11,65 @@ import (
// string names/UIDs to structured reference objects with uid, type, and apiVersion.
// cachedIndexProvider wraps a DataSourceIndexProvider with time-based caching.
// This prevents multiple DB queries and index builds during operations that may call
// provider.Index() multiple times (e.g., dashboard conversions with many datasource lookups).
// The cache expires after 10 seconds, allowing it to be used as a long-lived singleton
// while still refreshing periodically.
//
// Thread-safe: Uses sync.RWMutex to guarantee safe concurrent access.
type cachedIndexProvider struct {
*cachedProvider[*DatasourceIndex]
provider DataSourceIndexProvider
mu sync.RWMutex
index *DatasourceIndex
cachedAt time.Time
cacheTTL time.Duration
}
// Index returns the cached index if it's still valid (< TTL old), otherwise rebuilds it.
// Index returns the cached index if it's still valid (< 10s old), otherwise rebuilds it.
// Uses RWMutex for efficient concurrent reads when cache is valid.
func (p *cachedIndexProvider) Index(ctx context.Context) *DatasourceIndex {
return p.Get(ctx)
// Fast path: check if cache is still valid using read lock
p.mu.RLock()
if p.index != nil && time.Since(p.cachedAt) < p.cacheTTL {
idx := p.index
p.mu.RUnlock()
return idx
}
p.mu.RUnlock()
// Slow path: cache expired or not yet built, acquire write lock
p.mu.Lock()
defer p.mu.Unlock()
// Double-check: another goroutine might have refreshed the cache
// while we were waiting for the write lock
if p.index != nil && time.Since(p.cachedAt) < p.cacheTTL {
return p.index
}
// Rebuild the cache
p.index = p.provider.Index(ctx)
p.cachedAt = time.Now()
return p.index
}
// cachedLibraryElementProvider wraps a LibraryElementIndexProvider with time-based caching.
type cachedLibraryElementProvider struct {
*cachedProvider[[]LibraryElementInfo]
}
func (p *cachedLibraryElementProvider) GetLibraryElementInfo(ctx context.Context) []LibraryElementInfo {
return p.Get(ctx)
}
// WrapIndexProviderWithCache wraps a DataSourceIndexProvider to cache indexes with a configurable TTL.
func WrapIndexProviderWithCache(provider DataSourceIndexProvider, cacheTTL time.Duration) DataSourceIndexProvider {
if provider == nil || cacheTTL <= 0 {
return provider
// WrapIndexProviderWithCache wraps a provider to cache the index with a 10-second TTL.
// Useful for conversions or migrations that may call provider.Index() multiple times.
// The cache expires after 10 seconds, making it suitable for use as a long-lived singleton
// at the top level of dependency injection while still refreshing periodically.
//
// Example usage in dashboard conversion:
//
// cachedDsIndexProvider := schemaversion.WrapIndexProviderWithCache(dsIndexProvider)
// // Now all calls to cachedDsIndexProvider.Index(ctx) return the same cached index
// // for up to 10 seconds before refreshing
func WrapIndexProviderWithCache(provider DataSourceIndexProvider) DataSourceIndexProvider {
if provider == nil {
return nil
}
return &cachedIndexProvider{
newCachedProvider[*DatasourceIndex](provider.Index, defaultCacheSize, cacheTTL, log.New("schemaversion.dsindexprovider")),
}
}
// WrapLibraryElementProviderWithCache wraps a LibraryElementIndexProvider to cache library elements with a configurable TTL.
func WrapLibraryElementProviderWithCache(provider LibraryElementIndexProvider, cacheTTL time.Duration) LibraryElementIndexProvider {
if provider == nil || cacheTTL <= 0 {
return provider
}
return &cachedLibraryElementProvider{
newCachedProvider[[]LibraryElementInfo](provider.GetLibraryElementInfo, defaultCacheSize, cacheTTL, log.New("schemaversion.leindexprovider")),
provider: provider,
cacheTTL: 10 * time.Second,
}
}
@@ -193,3 +216,60 @@ func MigrateDatasourceNameToRef(nameOrRef interface{}, options map[string]bool,
return nil
}
// cachedLibraryElementProvider wraps a LibraryElementIndexProvider with time-based caching.
// This prevents multiple DB queries during operations that may call GetLibraryElementInfo()
// multiple times (e.g., dashboard conversions with many library panel lookups).
// The cache expires after 10 seconds, allowing it to be used as a long-lived singleton
// while still refreshing periodically.
//
// Thread-safe: Uses sync.RWMutex to guarantee safe concurrent access.
type cachedLibraryElementProvider struct {
provider LibraryElementIndexProvider
mu sync.RWMutex
elements []LibraryElementInfo
cachedAt time.Time
cacheTTL time.Duration
}
// GetLibraryElementInfo returns the cached library elements if they're still valid (< 10s old), otherwise rebuilds the cache.
// Uses RWMutex for efficient concurrent reads when cache is valid.
func (p *cachedLibraryElementProvider) GetLibraryElementInfo(ctx context.Context) []LibraryElementInfo {
// Fast path: check if cache is still valid using read lock
p.mu.RLock()
if p.elements != nil && time.Since(p.cachedAt) < p.cacheTTL {
elements := p.elements
p.mu.RUnlock()
return elements
}
p.mu.RUnlock()
// Slow path: cache expired or not yet built, acquire write lock
p.mu.Lock()
defer p.mu.Unlock()
// Double-check: another goroutine might have refreshed the cache
// while we were waiting for the write lock
if p.elements != nil && time.Since(p.cachedAt) < p.cacheTTL {
return p.elements
}
// Rebuild the cache
p.elements = p.provider.GetLibraryElementInfo(ctx)
p.cachedAt = time.Now()
return p.elements
}
// WrapLibraryElementProviderWithCache wraps a provider to cache library elements with a 10-second TTL.
// Useful for conversions or migrations that may call GetLibraryElementInfo() multiple times.
// The cache expires after 10 seconds, making it suitable for use as a long-lived singleton
// at the top level of dependency injection while still refreshing periodically.
func WrapLibraryElementProviderWithCache(provider LibraryElementIndexProvider) LibraryElementIndexProvider {
if provider == nil {
return nil
}
return &cachedLibraryElementProvider{
provider: provider,
cacheTTL: 10 * time.Second,
}
}

View File

@@ -34,7 +34,7 @@ manifest: {
v0alpha1: {
kinds: [examplev0alpha1]
// This is explicitly set to false to keep the example app disabled by default.
// This is explicitly set to false to keep the example app disabled by default.
// It can be enabled via conf overrides, or by setting this value to true and regenerating.
served: false
}
@@ -48,14 +48,14 @@ v1alpha1: {
// served indicates whether this particular version is served by the API server.
// served should be set to false before a version is removed from the manifest entirely.
// served defaults to true if not present.
// This is explicitly set to false to keep the example app disabled by default.
// This is explicitly set to false to keep the example app disabled by default.
// It can be enabled via conf overrides, or by setting this value to true and regenerating.
served: false
// routes contains resource routes for the version, which are split into 'namespaced' and 'cluster' scoped routes.
// This allows you to add additional non-storage- and non-kind- based handlers for your app.
// These should only be used if the behavior cannot be accomplished by reconciliation on storage events or subresource routes on a kind.
routes: {
// namespaced contains namespace-scoped resource routes for the version,
// namespaced contains namespace-scoped resource routes for the version,
// which are exposed as HTTP handlers on '<version>/namespaces/<namespace>/<route>'.
namespaced: {
"/something": {
@@ -72,7 +72,7 @@ v1alpha1: {
}
}
}
// cluster contains cluster-scoped resource routes for the version,
// cluster contains cluster-scoped resource routes for the version,
// which are exposed as HTTP handlers on '<version>/<route>'.
cluster: {
"/other": {
@@ -113,4 +113,4 @@ v1alpha1: {
enabled: true
}
}
}
}

View File

@@ -499,8 +499,8 @@ github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+m
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/ebitengine/purego v0.8.2 h1:jPPGWs2sZ1UgOSgD2bClL0MJIqu58nOmIcBuXr62z1I=
github.com/ebitengine/purego v0.8.2/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/ebitengine/purego v0.8.4 h1:CF7LEKg5FFOsASUj0+QwaXf8Ht6TlFxg09+S9wz0omw=
github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/edsrzf/mmap-go v0.0.0-20170320065105-0bce6a688712/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/edsrzf/mmap-go v1.2.0 h1:hXLYlkbaPzt1SaQk+anYwKSRNhufIDCchSPkUD6dD84=
github.com/edsrzf/mmap-go v1.2.0/go.mod h1:19H/e8pUPLicwkyNgOykDXkJ9F0MHE+Z52B8EIth78Q=
@@ -853,8 +853,6 @@ github.com/grafana/grafana-plugin-sdk-go v0.284.0 h1:1bK7eWsnPBLUWDcWJWe218Ik5ad
github.com/grafana/grafana-plugin-sdk-go v0.284.0/go.mod h1:lHPniaSxq3SL5MxDIPy04TYB1jnTp/ivkYO+xn5Rz3E=
github.com/grafana/grafana/apps/example v0.0.0-20251027162426-edef69fdc82b h1:6Bo65etvjQ4tStkaA5+N3A3ENbO4UAWj53TxF6g2Hdk=
github.com/grafana/grafana/apps/example v0.0.0-20251027162426-edef69fdc82b/go.mod h1:6+wASOCN8LWt6FJ8dc0oODUBIEY5XHaE6ABi8g0mR+k=
github.com/grafana/grafana/apps/quotas v0.0.0-20251209183543-1013d74f13f2 h1:rDPMdshj3QMvpXn+wK4T8awF9n2sd8i4YRiGqX2xTvg=
github.com/grafana/grafana/apps/quotas v0.0.0-20251209183543-1013d74f13f2/go.mod h1:M7bV60iRB61y0ISPG1HX/oNLZtlh0ZF22rUYwNkAKjo=
github.com/grafana/grafana/pkg/promlib v0.0.8 h1:VUWsqttdf0wMI4j9OX9oNrykguQpZcruudDAFpJJVw0=
github.com/grafana/grafana/pkg/promlib v0.0.8/go.mod h1:U1ezG/MGaEPoThqsr3lymMPN5yIPdVTJnDZ+wcXT+ao=
github.com/grafana/grafana/pkg/semconv v0.0.0-20250804150913-990f1c69ecc2 h1:A65jWgLk4Re28gIuZcpC0aTh71JZ0ey89hKGE9h543s=
@@ -1418,8 +1416,8 @@ github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah
github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
github.com/shadowspore/fossil-delta v0.0.0-20241213113458-1d797d70cbe3 h1:/4/IJi5iyTdh6mqOUaASW148HQpujYiHl0Wl78dSOSc=
github.com/shadowspore/fossil-delta v0.0.0-20241213113458-1d797d70cbe3/go.mod h1:aJIMhRsunltJR926EB2MUg8qHemFQDreSB33pyto2Ps=
github.com/shirou/gopsutil/v4 v4.25.3 h1:SeA68lsu8gLggyMbmCn8cmp97V1TI9ld9sVzAUcKcKE=
github.com/shirou/gopsutil/v4 v4.25.3/go.mod h1:xbuxyoZj+UsgnZrENu3lQivsngRR5BdjbJwf2fv4szA=
github.com/shirou/gopsutil/v4 v4.25.6 h1:kLysI2JsKorfaFPcYmcJqbzROzsBWEOAtw6A7dIfqXs=
github.com/shirou/gopsutil/v4 v4.25.6/go.mod h1:PfybzyydfZcN+JMMjkF6Zb8Mq1A/VcogFFg7hj50W9c=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=

View File

@@ -23,12 +23,6 @@ type CoreRole struct {
Spec CoreRoleSpec `json:"spec" yaml:"spec"`
}
func NewCoreRole() *CoreRole {
return &CoreRole{
Spec: *NewCoreRoleSpec(),
}
}
func (o *CoreRole) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaCoreRole = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewCoreRole(), &CoreRoleList{}, resource.WithKind("CoreRole"),
schemaCoreRole = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &CoreRole{}, &CoreRoleList{}, resource.WithKind("CoreRole"),
resource.WithPlural("coreroles"), resource.WithScope(resource.NamespacedScope))
kindCoreRole = resource.Kind{
Schema: schemaCoreRole,

View File

@@ -23,12 +23,6 @@ type ExternalGroupMapping struct {
Spec ExternalGroupMappingSpec `json:"spec" yaml:"spec"`
}
func NewExternalGroupMapping() *ExternalGroupMapping {
return &ExternalGroupMapping{
Spec: *NewExternalGroupMappingSpec(),
}
}
func (o *ExternalGroupMapping) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaExternalGroupMapping = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewExternalGroupMapping(), &ExternalGroupMappingList{}, resource.WithKind("ExternalGroupMapping"),
schemaExternalGroupMapping = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &ExternalGroupMapping{}, &ExternalGroupMappingList{}, resource.WithKind("ExternalGroupMapping"),
resource.WithPlural("externalgroupmappings"), resource.WithScope(resource.NamespacedScope))
kindExternalGroupMapping = resource.Kind{
Schema: schemaExternalGroupMapping,

View File

@@ -23,12 +23,6 @@ type GlobalRole struct {
Spec GlobalRoleSpec `json:"spec" yaml:"spec"`
}
func NewGlobalRole() *GlobalRole {
return &GlobalRole{
Spec: *NewGlobalRoleSpec(),
}
}
func (o *GlobalRole) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaGlobalRole = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewGlobalRole(), &GlobalRoleList{}, resource.WithKind("GlobalRole"),
schemaGlobalRole = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &GlobalRole{}, &GlobalRoleList{}, resource.WithKind("GlobalRole"),
resource.WithPlural("globalroles"), resource.WithScope(resource.NamespacedScope))
kindGlobalRole = resource.Kind{
Schema: schemaGlobalRole,

View File

@@ -23,12 +23,6 @@ type GlobalRoleBinding struct {
Spec GlobalRoleBindingSpec `json:"spec" yaml:"spec"`
}
func NewGlobalRoleBinding() *GlobalRoleBinding {
return &GlobalRoleBinding{
Spec: *NewGlobalRoleBindingSpec(),
}
}
func (o *GlobalRoleBinding) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaGlobalRoleBinding = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewGlobalRoleBinding(), &GlobalRoleBindingList{}, resource.WithKind("GlobalRoleBinding"),
schemaGlobalRoleBinding = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &GlobalRoleBinding{}, &GlobalRoleBindingList{}, resource.WithKind("GlobalRoleBinding"),
resource.WithPlural("globalrolebindings"), resource.WithScope(resource.NamespacedScope))
kindGlobalRoleBinding = resource.Kind{
Schema: schemaGlobalRoleBinding,

View File

@@ -23,12 +23,6 @@ type ResourcePermission struct {
Spec ResourcePermissionSpec `json:"spec" yaml:"spec"`
}
func NewResourcePermission() *ResourcePermission {
return &ResourcePermission{
Spec: *NewResourcePermissionSpec(),
}
}
func (o *ResourcePermission) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaResourcePermission = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewResourcePermission(), &ResourcePermissionList{}, resource.WithKind("ResourcePermission"),
schemaResourcePermission = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &ResourcePermission{}, &ResourcePermissionList{}, resource.WithKind("ResourcePermission"),
resource.WithPlural("resourcepermissions"), resource.WithScope(resource.NamespacedScope))
kindResourcePermission = resource.Kind{
Schema: schemaResourcePermission,

View File

@@ -23,12 +23,6 @@ type Role struct {
Spec RoleSpec `json:"spec" yaml:"spec"`
}
func NewRole() *Role {
return &Role{
Spec: *NewRoleSpec(),
}
}
func (o *Role) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaRole = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewRole(), &RoleList{}, resource.WithKind("Role"),
schemaRole = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &Role{}, &RoleList{}, resource.WithKind("Role"),
resource.WithPlural("roles"), resource.WithScope(resource.NamespacedScope))
kindRole = resource.Kind{
Schema: schemaRole,

View File

@@ -23,12 +23,6 @@ type RoleBinding struct {
Spec RoleBindingSpec `json:"spec" yaml:"spec"`
}
func NewRoleBinding() *RoleBinding {
return &RoleBinding{
Spec: *NewRoleBindingSpec(),
}
}
func (o *RoleBinding) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaRoleBinding = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewRoleBinding(), &RoleBindingList{}, resource.WithKind("RoleBinding"),
schemaRoleBinding = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &RoleBinding{}, &RoleBindingList{}, resource.WithKind("RoleBinding"),
resource.WithPlural("rolebindings"), resource.WithScope(resource.NamespacedScope))
kindRoleBinding = resource.Kind{
Schema: schemaRoleBinding,

View File

@@ -23,12 +23,6 @@ type ServiceAccount struct {
Spec ServiceAccountSpec `json:"spec" yaml:"spec"`
}
func NewServiceAccount() *ServiceAccount {
return &ServiceAccount{
Spec: *NewServiceAccountSpec(),
}
}
func (o *ServiceAccount) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaServiceAccount = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewServiceAccount(), &ServiceAccountList{}, resource.WithKind("ServiceAccount"),
schemaServiceAccount = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &ServiceAccount{}, &ServiceAccountList{}, resource.WithKind("ServiceAccount"),
resource.WithPlural("serviceaccounts"), resource.WithScope(resource.NamespacedScope))
kindServiceAccount = resource.Kind{
Schema: schemaServiceAccount,

View File

@@ -23,12 +23,6 @@ type Team struct {
Spec TeamSpec `json:"spec" yaml:"spec"`
}
func NewTeam() *Team {
return &Team{
Spec: *NewTeamSpec(),
}
}
func (o *Team) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaTeam = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewTeam(), &TeamList{}, resource.WithKind("Team"),
schemaTeam = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &Team{}, &TeamList{}, resource.WithKind("Team"),
resource.WithPlural("teams"), resource.WithScope(resource.NamespacedScope))
kindTeam = resource.Kind{
Schema: schemaTeam,

View File

@@ -23,12 +23,6 @@ type TeamBinding struct {
Spec TeamBindingSpec `json:"spec" yaml:"spec"`
}
func NewTeamBinding() *TeamBinding {
return &TeamBinding{
Spec: *NewTeamBindingSpec(),
}
}
func (o *TeamBinding) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaTeamBinding = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewTeamBinding(), &TeamBindingList{}, resource.WithKind("TeamBinding"),
schemaTeamBinding = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &TeamBinding{}, &TeamBindingList{}, resource.WithKind("TeamBinding"),
resource.WithPlural("teambindings"), resource.WithScope(resource.NamespacedScope))
kindTeamBinding = resource.Kind{
Schema: schemaTeamBinding,

View File

@@ -23,12 +23,6 @@ type User struct {
Spec UserSpec `json:"spec" yaml:"spec"`
}
func NewUser() *User {
return &User{
Spec: *NewUserSpec(),
}
}
func (o *User) GetSpec() any {
return o.Spec
}

View File

@@ -10,7 +10,7 @@ import (
// schema is unexported to prevent accidental overwrites
var (
schemaUser = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", NewUser(), &UserList{}, resource.WithKind("User"),
schemaUser = resource.NewSimpleSchema("iam.grafana.app", "v0alpha1", &User{}, &UserList{}, resource.WithKind("User"),
resource.WithPlural("users"), resource.WithScope(resource.NamespacedScope))
kindUser = resource.Kind{
Schema: schemaUser,

View File

@@ -1,3 +1,8 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
// Code generated by grafana-app-sdk. DO NOT EDIT.
package v0alpha1
import (

View File

@@ -109,13 +109,6 @@ var appManifestData = app.ManifestData{
"items": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: spec.MustCreateRef("#/components/schemas/getGroupsExternalGroupMapping"),
}},
},
},
},
"kind": {
@@ -207,13 +200,6 @@ var appManifestData = app.ManifestData{
"hits": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: spec.MustCreateRef("#/components/schemas/getSearchTeamsTeamHit"),
}},
},
},
},
"kind": {

View File

@@ -1,10 +1,5 @@
include ../sdk.mk
.PHONY: generate # Run Grafana App SDK code generation
.PHONY: generate
generate: install-app-sdk update-app-sdk
@$(APP_SDK_BIN) generate \
--source=./kinds/ \
--gogenpath=./pkg/apis \
--grouping=group \
--genoperatorstate=false \
--defencoding=none
@$(APP_SDK_BIN) generate -g ./pkg/apis --grouping=group --postprocess --defencoding=none --useoldmanifestkinds

View File

@@ -1,18 +1,39 @@
package investigations
investigationV0alpha1: {
// This is our Investigation definition, which contains metadata about the kind, and the kind's schema
investigation: {
kind: "Investigation"
group: "investigations.grafana.app"
apiResource: {
groupOverride: "investigations.grafana.app"
}
pluralName: "Investigations"
schema: {
spec: {
title: string
createdByProfile: #Person
hasCustomName: bool
isFavorite: bool
overviewNote: string
overviewNoteUpdatedAt: string
collectables: [...#Collectable] // +listType=atomic
viewMode: #ViewMode
current: "v0alpha1"
versions: {
"v0alpha1": {
codegen: {
frontend: true
backend: true
options: {
generateObjectMeta: true
generateClient: true
k8sLike: true
package: "github.com/grafana/grafana/apps/investigations"
}
}
schema: {
// spec is the schema of our resource
spec: {
title: string
createdByProfile: #Person
hasCustomName: bool
isFavorite: bool
overviewNote: string
overviewNoteUpdatedAt: string
collectables: [...#Collectable] // +listType=atomic
viewMode: #ViewMode
}
}
}
}
}

View File

@@ -1,18 +1,37 @@
package investigations
investigationIndexV0alpha1:{
investigationIndex: {
kind: "InvestigationIndex"
group: "investigations.grafana.app"
apiResource: {
groupOverride: "investigations.grafana.app"
}
pluralName: "InvestigationIndexes"
schema: {
spec: {
// Title of the index, e.g. 'Favorites' or 'My Investigations'
title: string
current: "v0alpha1"
versions: {
"v0alpha1": {
codegen: {
frontend: true
backend: true
options: {
generateObjectMeta: true
generateClient: true
k8sLike: true
package: "github.com/grafana/grafana/apps/investigations"
}
}
schema: {
spec: {
// Title of the index, e.g. 'Favorites' or 'My Investigations'
title: string
// The Person who owns this investigation index
owner: #Person
// The Person who owns this investigation index
owner: #Person
// Array of investigation summaries
investigationSummaries: [...#InvestigationSummary] // +listType=atomic
// Array of investigation summaries
investigationSummaries: [...#InvestigationSummary] // +listType=atomic
}
}
}
}
}

View File

@@ -3,16 +3,8 @@ package investigations
manifest: {
appName: "investigations"
groupOverride: "investigations.grafana.app"
versions: {
"v0alpha1": {
codegen: {
ts: {enabled: false}
go: {enabled: true}
}
kinds: [
investigationV0alpha1,
investigationIndexV0alpha1,
]
}
}
}
kinds: [
investigation,
investigationIndex,
]
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/grafana/grafana-app-sdk/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type InvestigationClient struct {
@@ -75,6 +76,24 @@ func (c *InvestigationClient) Patch(ctx context.Context, identifier resource.Ide
return c.client.Patch(ctx, identifier, req, opts)
}
func (c *InvestigationClient) UpdateStatus(ctx context.Context, identifier resource.Identifier, newStatus InvestigationStatus, opts resource.UpdateOptions) (*Investigation, error) {
return c.client.Update(ctx, &Investigation{
TypeMeta: metav1.TypeMeta{
Kind: InvestigationKind().Kind(),
APIVersion: GroupVersion.Identifier(),
},
ObjectMeta: metav1.ObjectMeta{
ResourceVersion: opts.ResourceVersion,
Namespace: identifier.Namespace,
Name: identifier.Name,
},
Status: newStatus,
}, resource.UpdateOptions{
Subresource: "status",
ResourceVersion: opts.ResourceVersion,
})
}
func (c *InvestigationClient) Delete(ctx context.Context, identifier resource.Identifier, opts resource.DeleteOptions) error {
return c.client.Delete(ctx, identifier, opts)
}

View File

@@ -21,6 +21,8 @@ type Investigation struct {
// Spec is the spec of the Investigation
Spec InvestigationSpec `json:"spec" yaml:"spec"`
Status InvestigationStatus `json:"status" yaml:"status"`
}
func (o *Investigation) GetSpec() any {
@@ -37,11 +39,15 @@ func (o *Investigation) SetSpec(spec any) error {
}
func (o *Investigation) GetSubresources() map[string]any {
return map[string]any{}
return map[string]any{
"status": o.Status,
}
}
func (o *Investigation) GetSubresource(name string) (any, bool) {
switch name {
case "status":
return o.Status, true
default:
return nil, false
}
@@ -49,6 +55,13 @@ func (o *Investigation) GetSubresource(name string) (any, bool) {
func (o *Investigation) SetSubresource(name string, value any) error {
switch name {
case "status":
cast, ok := value.(InvestigationStatus)
if !ok {
return fmt.Errorf("cannot set status type %#v, not of type InvestigationStatus", value)
}
o.Status = cast
return nil
default:
return fmt.Errorf("subresource '%s' does not exist", name)
}
@@ -220,6 +233,7 @@ func (o *Investigation) DeepCopyInto(dst *Investigation) {
dst.TypeMeta.Kind = o.TypeMeta.Kind
o.ObjectMeta.DeepCopyInto(&dst.ObjectMeta)
o.Spec.DeepCopyInto(&dst.Spec)
o.Status.DeepCopyInto(&dst.Status)
}
// Interface compliance compile-time check
@@ -291,3 +305,15 @@ func (s *InvestigationSpec) DeepCopy() *InvestigationSpec {
func (s *InvestigationSpec) DeepCopyInto(dst *InvestigationSpec) {
resource.CopyObjectInto(dst, s)
}
// DeepCopy creates a full deep copy of InvestigationStatus
func (s *InvestigationStatus) DeepCopy() *InvestigationStatus {
cpy := &InvestigationStatus{}
s.DeepCopyInto(cpy)
return cpy
}
// DeepCopyInto deep copies InvestigationStatus into another InvestigationStatus object
func (s *InvestigationStatus) DeepCopyInto(dst *InvestigationStatus) {
resource.CopyObjectInto(dst, s)
}

View File

@@ -84,6 +84,7 @@ func NewInvestigationViewMode() *InvestigationViewMode {
return &InvestigationViewMode{}
}
// spec is the schema of our resource
// +k8s:openapi-gen=true
type InvestigationSpec struct {
Title string `json:"title"`

View File

@@ -1,44 +1,44 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v1alpha1
package v0alpha1
// +k8s:openapi-gen=true
type StatusOperatorState struct {
type InvestigationstatusOperatorState struct {
// lastEvaluation is the ResourceVersion last evaluated
LastEvaluation string `json:"lastEvaluation"`
// state describes the state of the lastEvaluation.
// It is limited to three possible states for machine evaluation.
State StatusOperatorStateState `json:"state"`
State InvestigationStatusOperatorStateState `json:"state"`
// descriptiveState is an optional more descriptive state field which has no requirements on format
DescriptiveState *string `json:"descriptiveState,omitempty"`
// details contains any extra information that is operator-specific
Details map[string]interface{} `json:"details,omitempty"`
}
// NewStatusOperatorState creates a new StatusOperatorState object.
func NewStatusOperatorState() *StatusOperatorState {
return &StatusOperatorState{}
// NewInvestigationstatusOperatorState creates a new InvestigationstatusOperatorState object.
func NewInvestigationstatusOperatorState() *InvestigationstatusOperatorState {
return &InvestigationstatusOperatorState{}
}
// +k8s:openapi-gen=true
type Status struct {
type InvestigationStatus struct {
// operatorStates is a map of operator ID to operator state evaluations.
// Any operator which consumes this kind SHOULD add its state evaluation information to this field.
OperatorStates map[string]StatusOperatorState `json:"operatorStates,omitempty"`
OperatorStates map[string]InvestigationstatusOperatorState `json:"operatorStates,omitempty"`
// additionalFields is reserved for future use
AdditionalFields map[string]interface{} `json:"additionalFields,omitempty"`
}
// NewStatus creates a new Status object.
func NewStatus() *Status {
return &Status{}
// NewInvestigationStatus creates a new InvestigationStatus object.
func NewInvestigationStatus() *InvestigationStatus {
return &InvestigationStatus{}
}
// +k8s:openapi-gen=true
type StatusOperatorStateState string
type InvestigationStatusOperatorStateState string
const (
StatusOperatorStateStateSuccess StatusOperatorStateState = "success"
StatusOperatorStateStateInProgress StatusOperatorStateState = "in_progress"
StatusOperatorStateStateFailed StatusOperatorStateState = "failed"
InvestigationStatusOperatorStateStateSuccess InvestigationStatusOperatorStateState = "success"
InvestigationStatusOperatorStateStateInProgress InvestigationStatusOperatorStateState = "in_progress"
InvestigationStatusOperatorStateStateFailed InvestigationStatusOperatorStateState = "failed"
)

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/grafana/grafana-app-sdk/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type InvestigationIndexClient struct {
@@ -75,6 +76,24 @@ func (c *InvestigationIndexClient) Patch(ctx context.Context, identifier resourc
return c.client.Patch(ctx, identifier, req, opts)
}
func (c *InvestigationIndexClient) UpdateStatus(ctx context.Context, identifier resource.Identifier, newStatus InvestigationIndexStatus, opts resource.UpdateOptions) (*InvestigationIndex, error) {
return c.client.Update(ctx, &InvestigationIndex{
TypeMeta: metav1.TypeMeta{
Kind: InvestigationIndexKind().Kind(),
APIVersion: GroupVersion.Identifier(),
},
ObjectMeta: metav1.ObjectMeta{
ResourceVersion: opts.ResourceVersion,
Namespace: identifier.Namespace,
Name: identifier.Name,
},
Status: newStatus,
}, resource.UpdateOptions{
Subresource: "status",
ResourceVersion: opts.ResourceVersion,
})
}
func (c *InvestigationIndexClient) Delete(ctx context.Context, identifier resource.Identifier, opts resource.DeleteOptions) error {
return c.client.Delete(ctx, identifier, opts)
}

View File

@@ -21,6 +21,8 @@ type InvestigationIndex struct {
// Spec is the spec of the InvestigationIndex
Spec InvestigationIndexSpec `json:"spec" yaml:"spec"`
Status InvestigationIndexStatus `json:"status" yaml:"status"`
}
func (o *InvestigationIndex) GetSpec() any {
@@ -37,11 +39,15 @@ func (o *InvestigationIndex) SetSpec(spec any) error {
}
func (o *InvestigationIndex) GetSubresources() map[string]any {
return map[string]any{}
return map[string]any{
"status": o.Status,
}
}
func (o *InvestigationIndex) GetSubresource(name string) (any, bool) {
switch name {
case "status":
return o.Status, true
default:
return nil, false
}
@@ -49,6 +55,13 @@ func (o *InvestigationIndex) GetSubresource(name string) (any, bool) {
func (o *InvestigationIndex) SetSubresource(name string, value any) error {
switch name {
case "status":
cast, ok := value.(InvestigationIndexStatus)
if !ok {
return fmt.Errorf("cannot set status type %#v, not of type InvestigationIndexStatus", value)
}
o.Status = cast
return nil
default:
return fmt.Errorf("subresource '%s' does not exist", name)
}
@@ -220,6 +233,7 @@ func (o *InvestigationIndex) DeepCopyInto(dst *InvestigationIndex) {
dst.TypeMeta.Kind = o.TypeMeta.Kind
o.ObjectMeta.DeepCopyInto(&dst.ObjectMeta)
o.Spec.DeepCopyInto(&dst.Spec)
o.Status.DeepCopyInto(&dst.Status)
}
// Interface compliance compile-time check
@@ -291,3 +305,15 @@ func (s *InvestigationIndexSpec) DeepCopy() *InvestigationIndexSpec {
func (s *InvestigationIndexSpec) DeepCopyInto(dst *InvestigationIndexSpec) {
resource.CopyObjectInto(dst, s)
}
// DeepCopy creates a full deep copy of InvestigationIndexStatus
func (s *InvestigationIndexStatus) DeepCopy() *InvestigationIndexStatus {
cpy := &InvestigationIndexStatus{}
s.DeepCopyInto(cpy)
return cpy
}
// DeepCopyInto deep copies InvestigationIndexStatus into another InvestigationIndexStatus object
func (s *InvestigationIndexStatus) DeepCopyInto(dst *InvestigationIndexStatus) {
resource.CopyObjectInto(dst, s)
}

View File

@@ -20,10 +20,10 @@ import (
)
var (
rawSchemaInvestigationv0alpha1 = []byte(`{"Collectable":{"additionalProperties":false,"description":"Collectable represents an item collected during investigation","properties":{"createdAt":{"type":"string"},"datasource":{"$ref":"#/components/schemas/DatasourceRef"},"fieldConfig":{"type":"string"},"id":{"type":"string"},"logoPath":{"type":"string"},"note":{"type":"string"},"noteUpdatedAt":{"type":"string"},"origin":{"type":"string"},"queries":{"description":"+listType=atomic","items":{"type":"string"},"type":"array"},"timeRange":{"$ref":"#/components/schemas/TimeRange"},"title":{"type":"string"},"type":{"type":"string"},"url":{"type":"string"}},"required":["id","createdAt","title","origin","type","queries","timeRange","datasource","url","note","noteUpdatedAt","fieldConfig"],"type":"object"},"DatasourceRef":{"additionalProperties":false,"description":"DatasourceRef is a reference to a datasource","properties":{"uid":{"type":"string"}},"required":["uid"],"type":"object"},"Investigation":{"properties":{"spec":{"$ref":"#/components/schemas/spec"}},"required":["spec"]},"Person":{"additionalProperties":false,"description":"Person represents a user profile with basic information","properties":{"gravatarUrl":{"description":"URL to user's Gravatar image","type":"string"},"name":{"description":"Display name of the user","type":"string"},"uid":{"description":"Unique identifier for the user","type":"string"}},"required":["uid","name","gravatarUrl"],"type":"object"},"TimeRange":{"additionalProperties":false,"description":"TimeRange represents a time range with both absolute and relative 
values","properties":{"from":{"type":"string"},"raw":{"additionalProperties":false,"properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"],"type":"object"},"to":{"type":"string"}},"required":["from","to","raw"],"type":"object"},"ViewMode":{"additionalProperties":false,"properties":{"mode":{"enum":["compact","full"],"type":"string"},"showComments":{"type":"boolean"},"showTooltips":{"type":"boolean"}},"required":["mode","showComments","showTooltips"],"type":"object"},"spec":{"additionalProperties":false,"properties":{"collectables":{"description":"+listType=atomic","items":{"$ref":"#/components/schemas/Collectable"},"type":"array"},"createdByProfile":{"$ref":"#/components/schemas/Person"},"hasCustomName":{"type":"boolean"},"isFavorite":{"type":"boolean"},"overviewNote":{"type":"string"},"overviewNoteUpdatedAt":{"type":"string"},"title":{"type":"string"},"viewMode":{"$ref":"#/components/schemas/ViewMode"}},"required":["title","createdByProfile","hasCustomName","isFavorite","overviewNote","overviewNoteUpdatedAt","collectables","viewMode"],"type":"object"}}`)
rawSchemaInvestigationv0alpha1 = []byte(`{"Collectable":{"additionalProperties":false,"description":"Collectable represents an item collected during investigation","properties":{"createdAt":{"type":"string"},"datasource":{"$ref":"#/components/schemas/DatasourceRef"},"fieldConfig":{"type":"string"},"id":{"type":"string"},"logoPath":{"type":"string"},"note":{"type":"string"},"noteUpdatedAt":{"type":"string"},"origin":{"type":"string"},"queries":{"description":"+listType=atomic","items":{"type":"string"},"type":"array"},"timeRange":{"$ref":"#/components/schemas/TimeRange"},"title":{"type":"string"},"type":{"type":"string"},"url":{"type":"string"}},"required":["id","createdAt","title","origin","type","queries","timeRange","datasource","url","note","noteUpdatedAt","fieldConfig"],"type":"object"},"DatasourceRef":{"additionalProperties":false,"description":"DatasourceRef is a reference to a datasource","properties":{"uid":{"type":"string"}},"required":["uid"],"type":"object"},"Investigation":{"properties":{"spec":{"$ref":"#/components/schemas/spec"},"status":{"$ref":"#/components/schemas/status"}},"required":["spec"]},"OperatorState":{"additionalProperties":false,"properties":{"descriptiveState":{"description":"descriptiveState is an optional more descriptive state field which has no requirements on format","type":"string"},"details":{"additionalProperties":{"additionalProperties":{},"type":"object"},"description":"details contains any extra information that is operator-specific","type":"object"},"lastEvaluation":{"description":"lastEvaluation is the ResourceVersion last evaluated","type":"string"},"state":{"description":"state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.","enum":["success","in_progress","failed"],"type":"string"}},"required":["lastEvaluation","state"],"type":"object"},"Person":{"additionalProperties":false,"description":"Person represents a user profile with basic 
information","properties":{"gravatarUrl":{"description":"URL to user's Gravatar image","type":"string"},"name":{"description":"Display name of the user","type":"string"},"uid":{"description":"Unique identifier for the user","type":"string"}},"required":["uid","name","gravatarUrl"],"type":"object"},"TimeRange":{"additionalProperties":false,"description":"TimeRange represents a time range with both absolute and relative values","properties":{"from":{"type":"string"},"raw":{"additionalProperties":false,"properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"],"type":"object"},"to":{"type":"string"}},"required":["from","to","raw"],"type":"object"},"ViewMode":{"additionalProperties":false,"properties":{"mode":{"enum":["compact","full"],"type":"string"},"showComments":{"type":"boolean"},"showTooltips":{"type":"boolean"}},"required":["mode","showComments","showTooltips"],"type":"object"},"spec":{"additionalProperties":false,"description":"spec is the schema of our resource","properties":{"collectables":{"description":"+listType=atomic","items":{"$ref":"#/components/schemas/Collectable"},"type":"array"},"createdByProfile":{"$ref":"#/components/schemas/Person"},"hasCustomName":{"type":"boolean"},"isFavorite":{"type":"boolean"},"overviewNote":{"type":"string"},"overviewNoteUpdatedAt":{"type":"string"},"title":{"type":"string"},"viewMode":{"$ref":"#/components/schemas/ViewMode"}},"required":["title","createdByProfile","hasCustomName","isFavorite","overviewNote","overviewNoteUpdatedAt","collectables","viewMode"],"type":"object"},"status":{"additionalProperties":false,"properties":{"additionalFields":{"additionalProperties":{"additionalProperties":{},"type":"object"},"description":"additionalFields is reserved for future use","type":"object"},"operatorStates":{"additionalProperties":{"$ref":"#/components/schemas/OperatorState"},"description":"operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this 
kind SHOULD add its state evaluation information to this field.","type":"object"}},"type":"object"}}`)
versionSchemaInvestigationv0alpha1 app.VersionSchema
_ = json.Unmarshal(rawSchemaInvestigationv0alpha1, &versionSchemaInvestigationv0alpha1)
rawSchemaInvestigationIndexv0alpha1 = []byte(`{"CollectableSummary":{"additionalProperties":false,"properties":{"id":{"type":"string"},"logoPath":{"type":"string"},"origin":{"type":"string"},"title":{"type":"string"}},"required":["id","title","logoPath","origin"],"type":"object"},"InvestigationIndex":{"properties":{"spec":{"$ref":"#/components/schemas/spec"}},"required":["spec"]},"InvestigationSummary":{"additionalProperties":false,"description":"Type definition for investigation summaries","properties":{"collectableSummaries":{"description":"+listType=atomic","items":{"$ref":"#/components/schemas/CollectableSummary"},"type":"array"},"createdByProfile":{"$ref":"#/components/schemas/Person"},"hasCustomName":{"type":"boolean"},"isFavorite":{"type":"boolean"},"overviewNote":{"type":"string"},"overviewNoteUpdatedAt":{"type":"string"},"title":{"type":"string"},"viewMode":{"$ref":"#/components/schemas/ViewMode"}},"required":["title","createdByProfile","hasCustomName","isFavorite","overviewNote","overviewNoteUpdatedAt","viewMode","collectableSummaries"],"type":"object"},"Person":{"additionalProperties":false,"description":"Person represents a user profile with basic information","properties":{"gravatarUrl":{"description":"URL to user's Gravatar image","type":"string"},"name":{"description":"Display name of the user","type":"string"},"uid":{"description":"Unique identifier for the user","type":"string"}},"required":["uid","name","gravatarUrl"],"type":"object"},"ViewMode":{"additionalProperties":false,"properties":{"mode":{"enum":["compact","full"],"type":"string"},"showComments":{"type":"boolean"},"showTooltips":{"type":"boolean"}},"required":["mode","showComments","showTooltips"],"type":"object"},"spec":{"additionalProperties":false,"properties":{"investigationSummaries":{"description":"Array of investigation 
summaries\n+listType=atomic","items":{"$ref":"#/components/schemas/InvestigationSummary"},"type":"array"},"owner":{"$ref":"#/components/schemas/Person","description":"The Person who owns this investigation index"},"title":{"description":"Title of the index, e.g. 'Favorites' or 'My Investigations'","type":"string"}},"required":["title","owner","investigationSummaries"],"type":"object"}}`)
rawSchemaInvestigationIndexv0alpha1 = []byte(`{"CollectableSummary":{"additionalProperties":false,"properties":{"id":{"type":"string"},"logoPath":{"type":"string"},"origin":{"type":"string"},"title":{"type":"string"}},"required":["id","title","logoPath","origin"],"type":"object"},"InvestigationIndex":{"properties":{"spec":{"$ref":"#/components/schemas/spec"},"status":{"$ref":"#/components/schemas/status"}},"required":["spec"]},"InvestigationSummary":{"additionalProperties":false,"description":"Type definition for investigation summaries","properties":{"collectableSummaries":{"description":"+listType=atomic","items":{"$ref":"#/components/schemas/CollectableSummary"},"type":"array"},"createdByProfile":{"$ref":"#/components/schemas/Person"},"hasCustomName":{"type":"boolean"},"isFavorite":{"type":"boolean"},"overviewNote":{"type":"string"},"overviewNoteUpdatedAt":{"type":"string"},"title":{"type":"string"},"viewMode":{"$ref":"#/components/schemas/ViewMode"}},"required":["title","createdByProfile","hasCustomName","isFavorite","overviewNote","overviewNoteUpdatedAt","viewMode","collectableSummaries"],"type":"object"},"OperatorState":{"additionalProperties":false,"properties":{"descriptiveState":{"description":"descriptiveState is an optional more descriptive state field which has no requirements on format","type":"string"},"details":{"additionalProperties":{"additionalProperties":{},"type":"object"},"description":"details contains any extra information that is operator-specific","type":"object"},"lastEvaluation":{"description":"lastEvaluation is the ResourceVersion last evaluated","type":"string"},"state":{"description":"state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.","enum":["success","in_progress","failed"],"type":"string"}},"required":["lastEvaluation","state"],"type":"object"},"Person":{"additionalProperties":false,"description":"Person represents a user profile with basic 
information","properties":{"gravatarUrl":{"description":"URL to user's Gravatar image","type":"string"},"name":{"description":"Display name of the user","type":"string"},"uid":{"description":"Unique identifier for the user","type":"string"}},"required":["uid","name","gravatarUrl"],"type":"object"},"ViewMode":{"additionalProperties":false,"properties":{"mode":{"enum":["compact","full"],"type":"string"},"showComments":{"type":"boolean"},"showTooltips":{"type":"boolean"}},"required":["mode","showComments","showTooltips"],"type":"object"},"spec":{"additionalProperties":false,"properties":{"investigationSummaries":{"description":"Array of investigation summaries\n+listType=atomic","items":{"$ref":"#/components/schemas/InvestigationSummary"},"type":"array"},"owner":{"$ref":"#/components/schemas/Person","description":"The Person who owns this investigation index"},"title":{"description":"Title of the index, e.g. 'Favorites' or 'My Investigations'","type":"string"}},"required":["title","owner","investigationSummaries"],"type":"object"},"status":{"additionalProperties":false,"properties":{"additionalFields":{"additionalProperties":{"additionalProperties":{},"type":"object"},"description":"additionalFields is reserved for future use","type":"object"},"operatorStates":{"additionalProperties":{"$ref":"#/components/schemas/OperatorState"},"description":"operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this kind SHOULD add its state evaluation information to this field.","type":"object"}},"type":"object"}}`)
versionSchemaInvestigationIndexv0alpha1 app.VersionSchema
_ = json.Unmarshal(rawSchemaInvestigationIndexv0alpha1, &versionSchemaInvestigationIndexv0alpha1)
)

View File

@@ -1,319 +0,0 @@
{
"apiVersion": "apps.grafana.com/v1alpha2",
"kind": "AppManifest",
"metadata": {
"name": "logsdrilldown"
},
"spec": {
"appName": "logsdrilldown",
"group": "logsdrilldown.grafana.app",
"versions": [
{
"name": "v1alpha1",
"served": true,
"kinds": [
{
"kind": "LogsDrilldown",
"plural": "LogsDrilldowns",
"scope": "Namespaced",
"schemas": {
"LogsDrilldown": {
"properties": {
"spec": {
"$ref": "#/components/schemas/spec"
},
"status": {
"$ref": "#/components/schemas/status"
}
},
"required": ["spec"]
},
"OperatorState": {
"additionalProperties": false,
"properties": {
"descriptiveState": {
"description": "descriptiveState is an optional more descriptive state field which has no requirements on format",
"type": "string"
},
"details": {
"additionalProperties": {
"additionalProperties": {},
"type": "object"
},
"description": "details contains any extra information that is operator-specific",
"type": "object"
},
"lastEvaluation": {
"description": "lastEvaluation is the ResourceVersion last evaluated",
"type": "string"
},
"state": {
"description": "state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.",
"enum": ["success", "in_progress", "failed"],
"type": "string"
}
},
"required": ["lastEvaluation", "state"],
"type": "object"
},
"spec": {
"additionalProperties": false,
"properties": {
"defaultFields": {
"items": {
"type": "string"
},
"type": "array"
},
"interceptDismissed": {
"type": "boolean"
},
"prettifyJSON": {
"type": "boolean"
},
"wrapLogMessage": {
"type": "boolean"
}
},
"required": ["defaultFields", "prettifyJSON", "wrapLogMessage", "interceptDismissed"],
"type": "object"
},
"status": {
"additionalProperties": false,
"properties": {
"additionalFields": {
"additionalProperties": {
"additionalProperties": {},
"type": "object"
},
"description": "additionalFields is reserved for future use",
"type": "object"
},
"operatorStates": {
"additionalProperties": {
"$ref": "#/components/schemas/OperatorState"
},
"description": "operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this kind SHOULD add its state evaluation information to this field.",
"type": "object"
}
},
"type": "object"
}
},
"conversion": false
},
{
"kind": "LogsDrilldownDefaults",
"plural": "LogsDrilldownDefaults",
"scope": "Namespaced",
"schemas": {
"LogsDrilldownDefaults": {
"properties": {
"spec": {
"$ref": "#/components/schemas/spec"
},
"status": {
"$ref": "#/components/schemas/status"
}
},
"required": ["spec"]
},
"OperatorState": {
"additionalProperties": false,
"properties": {
"descriptiveState": {
"description": "descriptiveState is an optional more descriptive state field which has no requirements on format",
"type": "string"
},
"details": {
"additionalProperties": {
"additionalProperties": {},
"type": "object"
},
"description": "details contains any extra information that is operator-specific",
"type": "object"
},
"lastEvaluation": {
"description": "lastEvaluation is the ResourceVersion last evaluated",
"type": "string"
},
"state": {
"description": "state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.",
"enum": ["success", "in_progress", "failed"],
"type": "string"
}
},
"required": ["lastEvaluation", "state"],
"type": "object"
},
"spec": {
"additionalProperties": false,
"properties": {
"defaultFields": {
"items": {
"type": "string"
},
"type": "array"
},
"interceptDismissed": {
"type": "boolean"
},
"prettifyJSON": {
"type": "boolean"
},
"wrapLogMessage": {
"type": "boolean"
}
},
"required": ["defaultFields", "prettifyJSON", "wrapLogMessage", "interceptDismissed"],
"type": "object"
},
"status": {
"additionalProperties": false,
"properties": {
"additionalFields": {
"additionalProperties": {
"additionalProperties": {},
"type": "object"
},
"description": "additionalFields is reserved for future use",
"type": "object"
},
"operatorStates": {
"additionalProperties": {
"$ref": "#/components/schemas/OperatorState"
},
"description": "operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this kind SHOULD add its state evaluation information to this field.",
"type": "object"
}
},
"type": "object"
}
},
"conversion": false
},
{
"kind": "LogsDrilldownDefaultColumns",
"plural": "LogsDrilldownDefaultColumns",
"scope": "Namespaced",
"schemas": {
"LogsDefaultColumnsLabel": {
"additionalProperties": false,
"properties": {
"key": {
"type": "string"
},
"value": {
"type": "string"
}
},
"required": ["key", "value"],
"type": "object"
},
"LogsDefaultColumnsLabels": {
"items": {
"$ref": "#/components/schemas/LogsDefaultColumnsLabel"
},
"type": "array"
},
"LogsDefaultColumnsRecord": {
"additionalProperties": false,
"properties": {
"columns": {
"items": {
"type": "string"
},
"type": "array"
},
"labels": {
"$ref": "#/components/schemas/LogsDefaultColumnsLabels"
}
},
"required": ["columns", "labels"],
"type": "object"
},
"LogsDefaultColumnsRecords": {
"items": {
"$ref": "#/components/schemas/LogsDefaultColumnsRecord"
},
"type": "array"
},
"LogsDrilldownDefaultColumns": {
"properties": {
"spec": {
"$ref": "#/components/schemas/spec"
},
"status": {
"$ref": "#/components/schemas/status"
}
},
"required": ["spec"]
},
"OperatorState": {
"additionalProperties": false,
"properties": {
"descriptiveState": {
"description": "descriptiveState is an optional more descriptive state field which has no requirements on format",
"type": "string"
},
"details": {
"additionalProperties": {
"additionalProperties": {},
"type": "object"
},
"description": "details contains any extra information that is operator-specific",
"type": "object"
},
"lastEvaluation": {
"description": "lastEvaluation is the ResourceVersion last evaluated",
"type": "string"
},
"state": {
"description": "state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.",
"enum": ["success", "in_progress", "failed"],
"type": "string"
}
},
"required": ["lastEvaluation", "state"],
"type": "object"
},
"spec": {
"additionalProperties": false,
"properties": {
"records": {
"$ref": "#/components/schemas/LogsDefaultColumnsRecords"
}
},
"required": ["records"],
"type": "object"
},
"status": {
"additionalProperties": false,
"properties": {
"additionalFields": {
"additionalProperties": {
"additionalProperties": {},
"type": "object"
},
"description": "additionalFields is reserved for future use",
"type": "object"
},
"operatorStates": {
"additionalProperties": {
"$ref": "#/components/schemas/OperatorState"
},
"description": "operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this kind SHOULD add its state evaluation information to this field.",
"type": "object"
}
},
"type": "object"
}
},
"conversion": false
}
]
}
],
"preferredVersion": "v1alpha1"
}
}

View File

@@ -1,92 +0,0 @@
{
"kind": "CustomResourceDefinition",
"apiVersion": "apiextensions.k8s.io/v1",
"metadata": {
"name": "logsdrilldowns.logsdrilldown.grafana.app"
},
"spec": {
"group": "logsdrilldown.grafana.app",
"versions": [
{
"name": "v1alpha1",
"served": true,
"storage": true,
"schema": {
"openAPIV3Schema": {
"properties": {
"spec": {
"properties": {
"defaultFields": {
"items": {
"type": "string"
},
"type": "array"
},
"interceptDismissed": {
"type": "boolean"
},
"prettifyJSON": {
"type": "boolean"
},
"wrapLogMessage": {
"type": "boolean"
}
},
"required": ["defaultFields", "prettifyJSON", "wrapLogMessage", "interceptDismissed"],
"type": "object"
},
"status": {
"properties": {
"additionalFields": {
"description": "additionalFields is reserved for future use",
"type": "object",
"x-kubernetes-preserve-unknown-fields": true
},
"operatorStates": {
"additionalProperties": {
"properties": {
"descriptiveState": {
"description": "descriptiveState is an optional more descriptive state field which has no requirements on format",
"type": "string"
},
"details": {
"description": "details contains any extra information that is operator-specific",
"type": "object",
"x-kubernetes-preserve-unknown-fields": true
},
"lastEvaluation": {
"description": "lastEvaluation is the ResourceVersion last evaluated",
"type": "string"
},
"state": {
"description": "state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.",
"enum": ["success", "in_progress", "failed"],
"type": "string"
}
},
"required": ["lastEvaluation", "state"],
"type": "object"
},
"description": "operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this kind SHOULD add its state evaluation information to this field.",
"type": "object"
}
},
"type": "object"
}
},
"required": ["spec"],
"type": "object"
}
},
"subresources": {
"status": {}
}
}
],
"names": {
"kind": "LogsDrilldown",
"plural": "logsdrilldowns"
},
"scope": "Namespaced"
}
}

View File

@@ -1,107 +0,0 @@
{
"kind": "CustomResourceDefinition",
"apiVersion": "apiextensions.k8s.io/v1",
"metadata": {
"name": "logsdrilldowndefaultcolumns.logsdrilldown.grafana.app"
},
"spec": {
"group": "logsdrilldown.grafana.app",
"versions": [
{
"name": "v1alpha1",
"served": true,
"storage": true,
"schema": {
"openAPIV3Schema": {
"properties": {
"spec": {
"properties": {
"records": {
"items": {
"properties": {
"columns": {
"items": {
"type": "string"
},
"type": "array"
},
"labels": {
"items": {
"properties": {
"key": {
"type": "string"
},
"value": {
"type": "string"
}
},
"required": ["key", "value"],
"type": "object"
},
"type": "array"
}
},
"required": ["columns", "labels"],
"type": "object"
},
"type": "array"
}
},
"required": ["records"],
"type": "object"
},
"status": {
"properties": {
"additionalFields": {
"description": "additionalFields is reserved for future use",
"type": "object",
"x-kubernetes-preserve-unknown-fields": true
},
"operatorStates": {
"additionalProperties": {
"properties": {
"descriptiveState": {
"description": "descriptiveState is an optional more descriptive state field which has no requirements on format",
"type": "string"
},
"details": {
"description": "details contains any extra information that is operator-specific",
"type": "object",
"x-kubernetes-preserve-unknown-fields": true
},
"lastEvaluation": {
"description": "lastEvaluation is the ResourceVersion last evaluated",
"type": "string"
},
"state": {
"description": "state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.",
"enum": ["success", "in_progress", "failed"],
"type": "string"
}
},
"required": ["lastEvaluation", "state"],
"type": "object"
},
"description": "operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this kind SHOULD add its state evaluation information to this field.",
"type": "object"
}
},
"type": "object"
}
},
"required": ["spec"],
"type": "object"
}
},
"subresources": {
"status": {}
}
}
],
"names": {
"kind": "LogsDrilldownDefaultColumns",
"plural": "logsdrilldowndefaultcolumns"
},
"scope": "Namespaced"
}
}

View File

@@ -1,92 +0,0 @@
{
"kind": "CustomResourceDefinition",
"apiVersion": "apiextensions.k8s.io/v1",
"metadata": {
"name": "logsdrilldowndefaults.logsdrilldown.grafana.app"
},
"spec": {
"group": "logsdrilldown.grafana.app",
"versions": [
{
"name": "v1alpha1",
"served": true,
"storage": true,
"schema": {
"openAPIV3Schema": {
"properties": {
"spec": {
"properties": {
"defaultFields": {
"items": {
"type": "string"
},
"type": "array"
},
"interceptDismissed": {
"type": "boolean"
},
"prettifyJSON": {
"type": "boolean"
},
"wrapLogMessage": {
"type": "boolean"
}
},
"required": ["defaultFields", "prettifyJSON", "wrapLogMessage", "interceptDismissed"],
"type": "object"
},
"status": {
"properties": {
"additionalFields": {
"description": "additionalFields is reserved for future use",
"type": "object",
"x-kubernetes-preserve-unknown-fields": true
},
"operatorStates": {
"additionalProperties": {
"properties": {
"descriptiveState": {
"description": "descriptiveState is an optional more descriptive state field which has no requirements on format",
"type": "string"
},
"details": {
"description": "details contains any extra information that is operator-specific",
"type": "object",
"x-kubernetes-preserve-unknown-fields": true
},
"lastEvaluation": {
"description": "lastEvaluation is the ResourceVersion last evaluated",
"type": "string"
},
"state": {
"description": "state describes the state of the lastEvaluation.\nIt is limited to three possible states for machine evaluation.",
"enum": ["success", "in_progress", "failed"],
"type": "string"
}
},
"required": ["lastEvaluation", "state"],
"type": "object"
},
"description": "operatorStates is a map of operator ID to operator state evaluations.\nAny operator which consumes this kind SHOULD add its state evaluation information to this field.",
"type": "object"
}
},
"type": "object"
}
},
"required": ["spec"],
"type": "object"
}
},
"subresources": {
"status": {}
}
}
],
"names": {
"kind": "LogsDrilldownDefaults",
"plural": "logsdrilldowndefaults"
},
"scope": "Namespaced"
}
}

View File

@@ -1,9 +1,5 @@
package kinds
import (
"github.com/grafana/grafana/apps/logsdrilldown/kinds/v0alpha1"
)
LogsDrilldownSpecv1alpha1: {
defaultFields: [...string] | *[]
prettifyJSON: bool
@@ -25,12 +21,3 @@ logsdrilldownDefaultsv1alpha1: {
spec: LogsDrilldownSpecv1alpha1
}
}
// Default columns API
logsdrilldownDefaultColumnsv0alpha1: {
kind: "LogsDrilldownDefaultColumns"
pluralName: "LogsDrilldownDefaultColumns"
schema: {
spec: v0alpha1.LogsDefaultColumns
}
}

View File

@@ -35,12 +35,12 @@ manifest: {
// It includes kinds which the v1alpha1 API serves, and (future) custom routes served globally from the v1alpha1 version.
v1alpha1: {
// kinds is the list of kinds served by this version
kinds: [logsdrilldownv1alpha1, logsdrilldownDefaultsv1alpha1, logsdrilldownDefaultColumnsv0alpha1]
kinds: [logsdrilldownv1alpha1, logsdrilldownDefaultsv1alpha1]
// [OPTIONAL]
// served indicates whether this particular version is served by the API server.
// served should be set to false before a version is removed from the manifest entirely.
// served defaults to true if not present.
served: true
served: true
// [OPTIONAL]
// Codegen is a trait that tells the grafana-app-sdk, or other code generation tooling, how to process this kind.
// If not present, default values within the codegen trait are used.
@@ -64,4 +64,4 @@ v1alpha1: {
enabled: true
}
}
}
}

View File

@@ -1,19 +0,0 @@
package v0alpha1
#LogsDefaultColumnsLabel: {
key: string
value: string
}
#LogsDefaultColumnsLabels: [...#LogsDefaultColumnsLabel]
#LogsDefaultColumnsRecord: {
columns: [...string]
labels: #LogsDefaultColumnsLabels
}
#LogsDefaultColumnsRecords: [...#LogsDefaultColumnsRecord]
LogsDefaultColumns: {
records: #LogsDefaultColumnsRecords
}

Some files were not shown because too many files have changed in this diff.