grafana/apps/provisioning/pkg/repository/git/repository.go
Roberto Jiménez Sánchez 7e45a300b9 Provisioning: Remove migration from legacy storage (#112505)
* Deprecate Legacy Storage Migration in Backend

* Change the messaging around legacy storage

* Disable cards to connect

* Commit import changes

* Block repository creation if resources are in legacy storage

* Update error message

* Prettify

* chore: uncomment unified migration

* chore: adapt and fix tests

* Remove legacy storage migration from frontend

* Refactor provisioning job options by removing legacy storage and history fields

- Removed the `History` field from `MigrateJobOptions` and related references in the codebase.
- Eliminated the `LegacyStorage` field from `RepositoryViewList` and its associated comments.
- Updated tests and generated OpenAPI schema to reflect these changes.
- Simplified the `MigrationWorker` by removing dependencies on legacy storage checks.

* Refactor OpenAPI schema and tests to remove deprecated fields

- Removed the `history` field from `MigrateJobOptions` and updated the OpenAPI schema accordingly.
- Eliminated the `legacyStorage` field from `RepositoryViewList` and its associated comments in the schema.
- Updated integration tests to reflect the removal of these fields.

* Fix typescript errors

* Refactor provisioning code to remove legacy storage dependencies

- Eliminated references to `dualwrite.Service` and related legacy storage checks across multiple files.
- Updated `APIBuilder`, `RepositoryController`, and `SyncWorker` to streamline resource handling without legacy storage considerations.
- Adjusted tests to reflect the removal of legacy storage mocks and dependencies, ensuring cleaner and more maintainable code.

* Fix unit tests

* Remove more references to legacy

* Enhance provisioning wizard with migration options

- Added a checkbox for migrating existing resources in the BootstrapStep component.
- Updated the form context to track the new migration option.
- Adjusted the SynchronizeStep and useCreateSyncJob hook to incorporate the migration logic.
- Enhanced localization with new descriptions and labels for migration features.

* Remove unused variable and dualwrite reference in provisioning code

- Eliminated an unused variable declaration in `provisioning_manifest.go`.
- Removed the `nil` reference for dualwrite in `repo_operator.go`, aligning with the standalone operator's assumption of unified storage.

* Update go.mod and go.sum to include new dependencies

- Added `github.com/grafana/grafana-app-sdk` version `0.48.5` and several indirect dependencies including `github.com/getkin/kin-openapi`, `github.com/hashicorp/errwrap`, and others.
- Updated `go.sum` to reflect the new dependencies and their respective versions.

* Refactor provisioning components for improved readability

- Simplified the import statement in HomePage.tsx by removing unnecessary line breaks.
- Consolidated props in the SynchronizeStep component for cleaner code.
- Enhanced the layout of the ProvisioningWizard component by streamlining the rendering of the SynchronizeStep.

* Deprecate MigrationWorker and clean up related comments

- Removed the deprecated MigrationWorker implementation and its associated comments from the provisioning code.
- This change reflects the ongoing effort to eliminate legacy components and improve code maintainability.

* Fix linting issues

* Add explicit comment

* Update useResourceStats hook in BootstrapStep component to accept selected target

- Modified the BootstrapStep component to pass the selected target to the useResourceStats hook.
- Updated related tests to reflect the change in expected arguments for the useResourceStats hook.

* fix(provisioning): Update migrate tests to match export-then-sync behavior for all repository types

Updates test expectations for folder-type repositories to match the
implementation changes where both folder and instance repository types
now run export followed by sync. Only the namespace cleaner is skipped
for folder-type repositories.

Changes:
- Update "should run export and sync for folder-type repositories" test to include export mocks
- Update "should fail when sync job fails for folder-type repositories" test to include export mocks
- Rename test to clarify that both export and sync run for folder types
- Add proper mock expectations for SetMessage, StrictMaxErrors, Process, and ResetResults

All migrate package tests now pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Update provisioning wizard text and improve resource counting display

- Enhanced descriptions for migrating existing resources to clarify that unmanaged resources will also be included.
- Refactored BootstrapStepResourceCounting component to simplify the rendering logic and ensure both external storage and unmanaged resources are displayed correctly.
- Updated alert messages in SynchronizeStep to reflect accurate information regarding resource management during migration.
- Adjusted localization strings for consistency with the new descriptions.

* Update provisioning wizard alert messages for clarity and accuracy

- Revised alert points to indicate that resources can still be modified during migration, with a note on potential export issues.
- Clarified that resources will be marked as managed post-provisioning and that dashboards remain accessible throughout the process.

* Fix issue with triggering the wrong type of job

* Fix export failure when folder already exists in repository

When exporting resources to a repository, if a folder already exists,
the Read() method would fail with "path component is empty" error.

This occurred because:
1. Folders are identified by trailing slash (e.g., "Legacy Folder/")
2. The Read() method passes this path directly to GetTreeByPath()
3. GetTreeByPath() splits the path by "/" creating empty components
4. This causes the "path component is empty" error

The fix strips the trailing slash before calling GetTreeByPath() to
avoid empty path components, while still using the trailing slash
convention to identify directories.

The Create() method already handles this correctly by appending
".keep" to directory paths, which is why the first export succeeded
but subsequent exports failed.
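
The trailing-slash handling described above can be sketched as follows. `normalizeDirPath` is a hypothetical helper name used only for illustration; in the actual source the fix is inline in `gitRepository.Read()`, which trims the slash before calling `GetTreeByPath()`:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeDirPath mirrors the described fix: directories are identified by a
// trailing slash (e.g. "Legacy Folder/"), but the slash must be stripped before
// the git tree lookup, because splitting the raw path on "/" would yield an
// empty final component and trigger the "path component is empty" error.
// (Hypothetical helper; the real logic is inline in gitRepository.Read.)
func normalizeDirPath(path string) (isDir bool, lookupPath string) {
	if strings.HasSuffix(path, "/") {
		return true, strings.TrimSuffix(path, "/")
	}
	return false, path
}

func main() {
	isDir, p := normalizeDirPath("Legacy Folder/")
	fmt.Println(isDir, p) // true Legacy Folder
}
```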

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Fix folder tree not updated when folder already exists in repository

When exporting resources and a folder already exists in the repository,
the folder was not being added to the FolderManager's tree. This caused
subsequent dashboard exports to fail with "folder NOT found in tree".

The fix adds the folder to fm.tree even when it already exists in the
repository, ensuring all folders are available for resource lookups.
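
The shape of this fix can be sketched with a minimal stand-in for the folder tree. The types and names here (`folderTree`, `ensureFolder`) are hypothetical; the real `FolderManager` lives elsewhere in the provisioning packages:

```go
package main

import "fmt"

// folderTree is a minimal stand-in for the FolderManager's lookup structure:
// folder identifier -> repository path. (Hypothetical type for illustration.)
type folderTree map[string]string

// ensureFolder records the folder in the tree regardless of whether it already
// existed in the repository, so later dashboard exports can resolve it.
// Before the fix, an early return on existsInRepo skipped this insertion,
// causing "folder NOT found in tree" on subsequent exports.
func ensureFolder(tree folderTree, name, path string, existsInRepo bool) {
	tree[name] = path
}

func main() {
	tree := folderTree{}
	ensureFolder(tree, "legacy-folder", "Legacy Folder/", true)
	fmt.Println(tree["legacy-folder"]) // Legacy Folder/
}
```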

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Revert "Merge remote-tracking branch 'origin/uncomment-unified-migration-code' into cleanup/deprecate-legacy-storage-migration-in-provisioning"

This reverts commit 6440fae342, reversing
changes made to ec39fb04f2.

* fix: handle empty folder titles in path construction

- Skip folders with empty titles in dirPath to avoid empty path components
- Skip folders with empty paths before checking if they exist in repository
- Fix unit tests to properly check useResourceStats hook calls with type annotations

* Update workspace

* Fix BootstrapStep tests after reverting unified migration merge

Updated test expectations to match the current component behavior where
resource counts are displayed for both instance and folder sync options.

- Changed 'Empty' count expectation from 3 to 4 (2 cards × 2 counts each)
- Changed '7 resources' test to use findAllByText instead of findByText
  since the count appears in multiple cards

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Remove bubbletea deps

* Fix workspace

* provisioning: update error message to reference enableMigration config

Update the error message when provisioning cannot be used due to
incompatible data format to instruct users to enable data migration
for folders and dashboards using the enableMigration configuration
introduced in PR #114857.

Also update the test helper to include EnableMigration: true for both
dashboards and folders to match the new configuration pattern.

* provisioning: add comment explaining Mode5 and EnableMigration requirement

Add a comment in the integration test helper explaining that Provisioning
requires Mode5 (unified storage) and EnableMigration (data migration) as
it expects resources to be fully migrated to unified storage.

* Remove migrate resources checkbox from folder type provisioning wizard

- Remove checkbox UI for migrating existing resources in folder type
- Remove migrateExistingResources from migration logic
- Simplify migration to only use requiresMigration flag
- Remove unused translation keys
- Update i18n strings

* Fix linting

* Remove unnecessary React Fragment wrapper in BootstrapStep

* Address comments

---------

Co-authored-by: Rafael Paulovic <rafael.paulovic@grafana.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-17 17:22:17 +01:00


package git

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"log/slog"
	"net/http"
	"net/url"
	"strings"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/validation/field"

	"github.com/grafana/grafana-app-sdk/logging"
	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
	"github.com/grafana/grafana/apps/provisioning/pkg/safepath"
	common "github.com/grafana/grafana/pkg/apimachinery/apis/common/v0alpha1"
	"github.com/grafana/nanogit"
	"github.com/grafana/nanogit/log"
	"github.com/grafana/nanogit/options"
	"github.com/grafana/nanogit/protocol"
	"github.com/grafana/nanogit/protocol/hash"
	"github.com/grafana/nanogit/retry"
)

type RepositoryConfig struct {
	URL       string
	Branch    string
	TokenUser string
	Token     common.RawSecureValue
	Path      string
}

// Make sure all public functions of this struct call the (*gitRepository).logger function, to ensure the Git repo details are included.
type gitRepository struct {
	config    *provisioning.Repository
	gitConfig RepositoryConfig
	client    nanogit.Client
}
func NewRepository(
	ctx context.Context,
	config *provisioning.Repository,
	gitConfig RepositoryConfig,
) (GitRepository, error) {
	var opts []options.Option
	if !gitConfig.Token.IsZero() {
		tokenUser := gitConfig.TokenUser
		if tokenUser == "" {
			tokenUser = "git"
		}
		opts = append(opts, options.WithBasicAuth(tokenUser, string(gitConfig.Token)))
	}

	client, err := nanogit.NewHTTPClient(gitConfig.URL, opts...)
	if err != nil {
		return nil, fmt.Errorf("create nanogit client: %w", err)
	}

	return &gitRepository{
		config:    config,
		gitConfig: gitConfig,
		client:    client,
	}, nil
}

func (r *gitRepository) URL() string {
	return r.gitConfig.URL
}

func (r *gitRepository) Branch() string {
	return r.gitConfig.Branch
}

func (r *gitRepository) Config() *provisioning.Repository {
	return r.config
}
// Validate implements provisioning.Repository.
func (r *gitRepository) Validate() (list field.ErrorList) {
	cfg := r.gitConfig
	t := string(r.config.Spec.Type)
	if cfg.URL == "" {
		list = append(list, field.Required(field.NewPath("spec", t, "url"), "a git url is required"))
	} else if !isValidGitURL(cfg.URL) {
		list = append(list, field.Invalid(field.NewPath("spec", t, "url"), cfg.URL, "invalid git URL format"))
	}

	if cfg.Branch == "" {
		list = append(list, field.Required(field.NewPath("spec", t, "branch"), "a git branch is required"))
	} else if !IsValidGitBranchName(cfg.Branch) {
		list = append(list, field.Invalid(field.NewPath("spec", t, "branch"), cfg.Branch, "invalid branch name"))
	}

	// Readonly repositories may not need a token (if public)
	if len(r.config.Spec.Workflows) > 0 {
		if cfg.Token == "" && r.config.Secure.Token.IsZero() {
			list = append(list, field.Required(field.NewPath("secure", "token"), "a git access token is required"))
		}
	}

	if err := safepath.IsSafe(cfg.Path); err != nil {
		list = append(list, field.Invalid(field.NewPath("spec", t, "path"), cfg.Path, err.Error()))
	}
	if safepath.IsAbs(cfg.Path) {
		list = append(list, field.Invalid(field.NewPath("spec", t, "path"), cfg.Path, "path must be relative"))
	}

	return list
}

func isValidGitURL(gitURL string) bool {
	parsed, err := url.Parse(gitURL)
	if err != nil {
		return false
	}

	// Must be HTTPS
	if parsed.Scheme != "https" {
		return false
	}

	// Must have a host
	if parsed.Host == "" {
		return false
	}

	// Must have a non-trivial path
	if parsed.Path == "" || parsed.Path == "/" {
		return false
	}

	return true
}
// Test implements provisioning.Repository.
func (r *gitRepository) Test(ctx context.Context) (*provisioning.TestResults, error) {
	ctx, _ = r.withGitContext(ctx, "")
	t := string(r.config.Spec.Type)

	if ok, err := r.client.IsAuthorized(ctx); err != nil || !ok {
		detail := "not authorized"
		if err != nil {
			detail = fmt.Sprintf("failed check if authorized: %v", err)
		}
		return &provisioning.TestResults{
			Code:    http.StatusBadRequest,
			Success: false,
			Errors: []provisioning.ErrorDetails{{
				Type:   metav1.CauseTypeFieldValueInvalid,
				Field:  field.NewPath("secure", "token").String(),
				Detail: detail,
			}},
		}, nil
	}

	if ok, err := r.client.RepoExists(ctx); err != nil || !ok {
		detail := "repository not found"
		if err != nil {
			detail = fmt.Sprintf("failed check if repository exists: %v", err)
		}
		return &provisioning.TestResults{
			Code:    http.StatusBadRequest,
			Success: false,
			Errors: []provisioning.ErrorDetails{{
				Type:   metav1.CauseTypeFieldValueInvalid,
				Field:  field.NewPath("spec", t, "url").String(),
				Detail: detail,
			}},
		}, nil
	}

	// Test basic connectivity by getting the branch reference
	_, err := r.client.GetRef(ctx, fmt.Sprintf("refs/heads/%s", r.gitConfig.Branch))
	if err != nil {
		detail := "branch not found"
		if errors.Is(err, nanogit.ErrObjectNotFound) {
			return &provisioning.TestResults{
				Code:    http.StatusBadRequest,
				Success: false,
				Errors: []provisioning.ErrorDetails{{
					Type:   metav1.CauseTypeFieldValueInvalid,
					Field:  field.NewPath("spec", t, "branch").String(),
					Detail: detail,
				}},
			}, nil
		}

		detail = fmt.Sprintf("failed to check if branch exists: %v", err)
		return &provisioning.TestResults{
			Code:    http.StatusBadRequest,
			Success: false,
			Errors: []provisioning.ErrorDetails{{
				Type:   metav1.CauseTypeFieldValueInvalid,
				Field:  field.NewPath("spec", t, "branch").String(),
				Detail: detail,
			}},
		}, nil
	}

	return &provisioning.TestResults{
		Code:    http.StatusOK,
		Success: true,
	}, nil
}
// Read implements provisioning.Repository.
func (r *gitRepository) Read(ctx context.Context, filePath, ref string) (*repository.FileInfo, error) {
	ctx, _ = r.withGitContext(ctx, ref)
	finalPath := safepath.Join(r.gitConfig.Path, filePath)

	// Resolve ref to commit hash
	refHash, err := r.resolveRefToHash(ctx, ref)
	if err != nil {
		return nil, err
	}

	// Get the root tree hash.
	// TODO: Fix GetTree in nanogit, as it does not work with a commit hash.
	commit, err := r.client.GetCommit(ctx, refHash)
	if err != nil {
		return nil, fmt.Errorf("get commit: %w", err)
	}

	// Check if the path represents a directory
	if safepath.IsDir(filePath) {
		// Strip trailing slash for git tree lookup to avoid empty path components
		finalPath = strings.TrimSuffix(finalPath, "/")
		tree, err := r.client.GetTreeByPath(ctx, commit.Tree, finalPath)
		if err != nil {
			if errors.Is(err, nanogit.ErrObjectNotFound) {
				return nil, repository.ErrFileNotFound
			}
			return nil, fmt.Errorf("get tree by path: %w", err)
		}

		return &repository.FileInfo{
			Path: filePath,
			Ref:  refHash.String(),
			Hash: tree.Hash.String(),
		}, nil
	}

	blob, err := r.client.GetBlobByPath(ctx, commit.Tree, finalPath)
	if err != nil {
		if errors.Is(err, nanogit.ErrObjectNotFound) {
			return nil, repository.ErrFileNotFound
		}
		return nil, fmt.Errorf("read blob: %w", err)
	}

	return &repository.FileInfo{
		Path: filePath,
		Ref:  ref,
		Data: blob.Content,
		Hash: blob.Hash.String(),
	}, nil
}

func (r *gitRepository) ReadTree(ctx context.Context, ref string) ([]repository.FileTreeEntry, error) {
	ctx, _ = r.withGitContext(ctx, ref)

	// Resolve ref to commit hash
	refHash, err := r.resolveRefToHash(ctx, ref)
	if err != nil {
		return nil, err
	}

	// Get flat tree using nanogit's GetFlatTree
	tree, err := r.client.GetFlatTree(ctx, refHash)
	if err != nil {
		if errors.Is(err, nanogit.ErrObjectNotFound) {
			return nil, repository.ErrRefNotFound
		}
		return nil, fmt.Errorf("get flat tree: %w", err)
	}

	entries := make([]repository.FileTreeEntry, 0, len(tree.Entries))
	for _, entry := range tree.Entries {
		isBlob := entry.Type == protocol.ObjectTypeBlob

		// Apply path prefix filtering
		relativePath, err := safepath.RelativeTo(entry.Path, r.gitConfig.Path)
		if err != nil {
			// File is outside the configured path, skip it
			continue
		}

		filePath := relativePath
		if !isBlob && !safepath.IsDir(filePath) {
			filePath = filePath + "/"
		}

		converted := repository.FileTreeEntry{
			Path: filePath,
			// TODO: Remove size from repository.FileTreeEntry. We don't need it per se.
			Size: 0, // FlatTreeEntry doesn't have size, set to 0
			Hash: entry.Hash.String(),
			Blob: isBlob,
		}
		entries = append(entries, converted)
	}

	return entries, nil
}
func (r *gitRepository) Create(ctx context.Context, path, ref string, data []byte, comment string) error {
	if ref == "" {
		ref = r.gitConfig.Branch
	}
	ctx, _ = r.withGitContext(ctx, ref)

	branchRef, err := r.ensureBranchExists(ctx, ref)
	if err != nil {
		return err
	}

	writer, err := r.client.NewStagedWriter(ctx, branchRef)
	if err != nil {
		return fmt.Errorf("create staged writer: %w", err)
	}

	if err := r.create(ctx, path, data, writer); err != nil {
		return err
	}

	return r.commitAndPush(ctx, writer, comment)
}

func (r *gitRepository) create(ctx context.Context, path string, data []byte, writer nanogit.StagedWriter) error {
	finalPath := safepath.Join(r.gitConfig.Path, path)

	// Create a .keep file if the target is a directory
	if safepath.IsDir(finalPath) {
		if data != nil {
			return apierrors.NewBadRequest("data cannot be provided for a directory")
		}
		finalPath = safepath.Join(finalPath, ".keep")
		data = []byte{}
	}

	if _, err := writer.CreateBlob(ctx, finalPath, data); err != nil {
		if errors.Is(err, nanogit.ErrObjectAlreadyExists) {
			return repository.ErrFileAlreadyExists
		}
		return fmt.Errorf("create blob: %w", err)
	}

	return nil
}

func (r *gitRepository) Update(ctx context.Context, path, ref string, data []byte, comment string) error {
	if ref == "" {
		ref = r.gitConfig.Branch
	}
	ctx, _ = r.withGitContext(ctx, ref)

	// Check if trying to update a directory
	if safepath.IsDir(path) {
		return apierrors.NewBadRequest("cannot update a directory")
	}

	branchRef, err := r.ensureBranchExists(ctx, ref)
	if err != nil {
		return err
	}

	// Create a staged writer
	writer, err := r.client.NewStagedWriter(ctx, branchRef)
	if err != nil {
		return fmt.Errorf("create staged writer: %w", err)
	}

	if err := r.update(ctx, path, data, writer); err != nil {
		return err
	}

	return r.commitAndPush(ctx, writer, comment)
}

func (r *gitRepository) update(ctx context.Context, path string, data []byte, writer nanogit.StagedWriter) error {
	// Check if trying to update a directory
	if safepath.IsDir(path) {
		return apierrors.NewBadRequest("cannot update a directory")
	}

	finalPath := safepath.Join(r.gitConfig.Path, path)
	if _, err := writer.UpdateBlob(ctx, finalPath, data); err != nil {
		if errors.Is(err, nanogit.ErrObjectNotFound) {
			return repository.ErrFileNotFound
		}
		return fmt.Errorf("update blob: %w", err)
	}

	return nil
}
func (r *gitRepository) Write(ctx context.Context, path string, ref string, data []byte, message string) error {
	if ref == "" {
		ref = r.gitConfig.Branch
	}
	ctx, _ = r.withGitContext(ctx, ref)

	info, err := r.Read(ctx, path, ref)
	if err != nil && !errors.Is(err, repository.ErrFileNotFound) {
		return fmt.Errorf("check if file exists before writing: %w", err)
	}
	if err == nil {
		// If the value already exists and is the same, we don't need to do anything
		if bytes.Equal(info.Data, data) {
			return nil
		}
		return r.Update(ctx, path, ref, data, message)
	}
	return r.Create(ctx, path, ref, data, message)
}

func (r *gitRepository) Delete(ctx context.Context, path, ref, comment string) error {
	if ref == "" {
		ref = r.gitConfig.Branch
	}
	ctx, _ = r.withGitContext(ctx, ref)

	branchRef, err := r.ensureBranchExists(ctx, ref)
	if err != nil {
		return err
	}

	// Create a staged writer
	writer, err := r.client.NewStagedWriter(ctx, branchRef)
	if err != nil {
		return fmt.Errorf("create staged writer: %w", err)
	}

	if err := r.delete(ctx, path, writer); err != nil {
		return err
	}

	return r.commitAndPush(ctx, writer, comment)
}
func (r *gitRepository) Move(ctx context.Context, oldPath, newPath, ref, comment string) error {
	if ref == "" {
		ref = r.gitConfig.Branch
	}
	ctx, _ = r.withGitContext(ctx, ref)

	branchRef, err := r.ensureBranchExists(ctx, ref)
	if err != nil {
		return err
	}

	// Create a staged writer
	writer, err := r.client.NewStagedWriter(ctx, branchRef)
	if err != nil {
		return fmt.Errorf("create staged writer: %w", err)
	}

	if err := r.move(ctx, oldPath, newPath, writer); err != nil {
		return err
	}

	return r.commitAndPush(ctx, writer, comment)
}

func (r *gitRepository) delete(ctx context.Context, path string, writer nanogit.StagedWriter) error {
	finalPath := safepath.Join(r.gitConfig.Path, path)

	// Check if it's a directory - use DeleteTree for directories, DeleteBlob for files
	if safepath.IsDir(path) {
		trimmed := strings.TrimSuffix(finalPath, "/")
		if _, err := writer.DeleteTree(ctx, trimmed); err != nil {
			if errors.Is(err, nanogit.ErrObjectNotFound) {
				return repository.ErrFileNotFound
			}
			return fmt.Errorf("delete tree: %w", err)
		}
	} else {
		if _, err := writer.DeleteBlob(ctx, finalPath); err != nil {
			if errors.Is(err, nanogit.ErrObjectNotFound) {
				return repository.ErrFileNotFound
			}
			return fmt.Errorf("delete blob: %w", err)
		}
	}

	return nil
}

func (r *gitRepository) move(ctx context.Context, oldPath, newPath string, writer nanogit.StagedWriter) error {
	oldFinalPath := safepath.Join(r.gitConfig.Path, oldPath)
	newFinalPath := safepath.Join(r.gitConfig.Path, newPath)

	// Check if moving directories
	if safepath.IsDir(oldPath) && safepath.IsDir(newPath) {
		// For directories, trim trailing slashes and use MoveTree
		oldTrimmed := strings.TrimSuffix(oldFinalPath, "/")
		newTrimmed := strings.TrimSuffix(newFinalPath, "/")
		if _, err := writer.MoveTree(ctx, oldTrimmed, newTrimmed); err != nil {
			if errors.Is(err, nanogit.ErrObjectNotFound) {
				return repository.ErrFileNotFound
			}
			if errors.Is(err, nanogit.ErrObjectAlreadyExists) {
				return repository.ErrFileAlreadyExists
			}
			return fmt.Errorf("move tree: %w", err)
		}
	} else if !safepath.IsDir(oldPath) && !safepath.IsDir(newPath) {
		// For files, use MoveBlob operation
		if _, err := writer.MoveBlob(ctx, oldFinalPath, newFinalPath); err != nil {
			if errors.Is(err, nanogit.ErrObjectNotFound) {
				return repository.ErrFileNotFound
			}
			if errors.Is(err, nanogit.ErrObjectAlreadyExists) {
				return repository.ErrFileAlreadyExists
			}
			return fmt.Errorf("move blob: %w", err)
		}
	} else {
		// Mismatched types (file to directory or vice versa)
		return apierrors.NewBadRequest("cannot move between file and directory types")
	}

	return nil
}
func (r *gitRepository) History(_ context.Context, _ string, _ string) ([]provisioning.HistoryItem, error) {
	return nil, &apierrors.StatusError{ErrStatus: metav1.Status{
		Status:  metav1.StatusFailure,
		Code:    http.StatusNotImplemented,
		Reason:  metav1.StatusReasonMethodNotAllowed,
		Message: "history is not supported for pure git repositories",
	}}
}

func (r *gitRepository) ListRefs(ctx context.Context) ([]provisioning.RefItem, error) {
	ctx, _ = r.withGitContext(ctx, "")
	refs, err := r.client.ListRefs(ctx)
	if err != nil {
		return nil, fmt.Errorf("list refs: %w", err)
	}

	refItems := make([]provisioning.RefItem, 0, len(refs))
	for _, ref := range refs {
		// Only branches
		if !strings.HasPrefix(ref.Name, "refs/heads/") {
			continue
		}
		refItems = append(refItems, provisioning.RefItem{
			Name: strings.TrimPrefix(ref.Name, "refs/heads/"),
			Hash: ref.Hash.String(),
		})
	}

	return refItems, nil
}

func (r *gitRepository) LatestRef(ctx context.Context) (string, error) {
	ctx, _ = r.withGitContext(ctx, "")
	branchRef, err := r.client.GetRef(ctx, fmt.Sprintf("refs/heads/%s", r.gitConfig.Branch))
	if err != nil {
		return "", fmt.Errorf("get branch ref: %w", err)
	}
	return branchRef.Hash.String(), nil
}
func (r *gitRepository) CompareFiles(ctx context.Context, base, ref string) ([]repository.VersionedFileChange, error) {
	if base == "" && ref == "" {
		return nil, fmt.Errorf("base and ref cannot be empty")
	}
	if ref == "" {
		return nil, fmt.Errorf("ref cannot be empty")
	}
	ctx, logger := r.withGitContext(ctx, ref)

	// Resolve base ref to hash
	var baseHash hash.Hash
	if base != "" {
		var err error
		baseHash, err = r.resolveRefToHash(ctx, base)
		if err != nil {
			return nil, fmt.Errorf("resolve base ref: %w", err)
		}
	}

	// Resolve ref to hash
	refHash, err := r.resolveRefToHash(ctx, ref)
	if err != nil {
		return nil, fmt.Errorf("resolve ref: %w", err)
	}

	// Compare commits using nanogit
	files, err := r.client.CompareCommits(ctx, baseHash, refHash)
	if err != nil {
		return nil, fmt.Errorf("compare commits: %w", err)
	}

	changes := make([]repository.VersionedFileChange, 0)
	for _, f := range files {
		switch f.Status {
		case protocol.FileStatusAdded:
			currentPath, err := safepath.RelativeTo(f.Path, r.gitConfig.Path)
			if err != nil {
				// do nothing as it's outside of the configured path
				continue
			}
			changes = append(changes, repository.VersionedFileChange{
				Path:   currentPath,
				Ref:    ref,
				Action: repository.FileActionCreated,
			})
		case protocol.FileStatusModified:
			currentPath, err := safepath.RelativeTo(f.Path, r.gitConfig.Path)
			if err != nil {
				// do nothing as it's outside of the configured path
				continue
			}
			changes = append(changes, repository.VersionedFileChange{
				Path:   currentPath,
				Ref:    ref,
				Action: repository.FileActionUpdated,
			})
		case protocol.FileStatusDeleted:
			currentPath, err := safepath.RelativeTo(f.Path, r.gitConfig.Path)
			if err != nil {
				// do nothing as it's outside of the configured path
				continue
			}
			changes = append(changes, repository.VersionedFileChange{
				Ref:          ref,
				PreviousRef:  base,
				Path:         currentPath,
				PreviousPath: currentPath,
				Action:       repository.FileActionDeleted,
			})
		case protocol.FileStatusTypeChanged:
			// Handle type changes as modifications
			currentPath, err := safepath.RelativeTo(f.Path, r.gitConfig.Path)
			if err != nil {
				// do nothing as it's outside of the configured path
				continue
			}
			changes = append(changes, repository.VersionedFileChange{
				Path:   currentPath,
				Ref:    ref,
				Action: repository.FileActionUpdated,
			})
		default:
			logger.Error("ignore unhandled file", "file", f.Path, "status", string(f.Status))
		}
	}

	return changes, nil
}

func (r *gitRepository) Stage(ctx context.Context, opts repository.StageOptions) (repository.StagedRepository, error) {
	ctx = ensureRetryContext(ctx)
	ctx, _ = r.withGitContext(ctx, "")
	return NewStagedGitRepository(ctx, r, opts)
}
// resolveRefToHash resolves a ref (branch name or commit hash) to a commit hash
func (r *gitRepository) resolveRefToHash(ctx context.Context, ref string) (hash.Hash, error) {
	ctx, _ = r.withGitContext(ctx, ref)

	// Use default branch if ref is empty
	if ref == "" {
		ref = r.gitConfig.Branch
	}

	// Try to parse ref as a hash first
	refHash, err := hash.FromHex(ref)
	if err == nil && refHash != hash.Zero {
		// Valid hash, return it
		return refHash, nil
	}

	// Not a valid hash, try to resolve as a branch reference
	ref = fmt.Sprintf("refs/heads/%s", ref)
	branchRef, err := r.client.GetRef(ctx, ref)
	if err != nil {
		if errors.Is(err, nanogit.ErrObjectNotFound) {
			return hash.Zero, fmt.Errorf("ref not found: %s: %w", ref, repository.ErrRefNotFound)
		}
		return hash.Zero, fmt.Errorf("get ref %s: %w", ref, err)
	}

	return branchRef.Hash, nil
}

// ensureBranchExists checks if a branch exists and creates it if it doesn't,
// returning the branch reference to avoid duplicate GetRef calls
func (r *gitRepository) ensureBranchExists(ctx context.Context, branchName string) (nanogit.Ref, error) {
	ctx, _ = r.withGitContext(ctx, branchName)
	if !IsValidGitBranchName(branchName) {
		return nanogit.Ref{}, &apierrors.StatusError{
			ErrStatus: metav1.Status{
				Code:    http.StatusBadRequest,
				Message: "invalid branch name",
			},
		}
	}

	// Check if branch exists by trying to get the branch reference
	branchRef, err := r.client.GetRef(ctx, fmt.Sprintf("refs/heads/%s", branchName))
	if err == nil {
		// Branch exists, return it
		logging.FromContext(ctx).Info("branch already exists", "branch", branchName)
		return branchRef, nil
	}

	// If the error is not "ref not found", return it
	if !errors.Is(err, nanogit.ErrObjectNotFound) {
		return nanogit.Ref{}, fmt.Errorf("check branch exists: %w", err)
	}

	// Branch doesn't exist, create it based on the configured branch
	srcBranch := r.gitConfig.Branch
	srcRef, err := r.client.GetRef(ctx, fmt.Sprintf("refs/heads/%s", srcBranch))
	if err != nil {
		return nanogit.Ref{}, fmt.Errorf("get source branch ref: %w", err)
	}

	// Create the new branch reference
	newRef := nanogit.Ref{
		Name: fmt.Sprintf("refs/heads/%s", branchName),
		Hash: srcRef.Hash,
	}
	if err := r.client.CreateRef(ctx, newRef); err != nil {
		return nanogit.Ref{}, fmt.Errorf("create branch: %w", err)
	}

	return newRef, nil
}
// createSignature creates author and committer signatures using the context signature if available,
// falling back to the default Grafana signature
func (r *gitRepository) createSignature(ctx context.Context) (nanogit.Author, nanogit.Committer) {
	author := nanogit.Author{
		Name:  "Grafana",
		Email: "noreply@grafana.com",
		Time:  time.Now(),
	}

	// Use signature from context if available
	if sig := repository.GetAuthorSignature(ctx); sig != nil {
		if sig.Name != "" {
			author.Name = sig.Name
		}
		if sig.Email != "" {
			author.Email = sig.Email
		}
		if !sig.When.IsZero() {
			author.Time = sig.When
		}
	}

	if author.Time.IsZero() {
		author.Time = time.Now()
	}

	// Author and committer are always the same (for now)
	return author, nanogit.Committer(author)
}

func (r *gitRepository) commit(ctx context.Context, writer nanogit.StagedWriter, comment string) error {
	author, committer := r.createSignature(ctx)

	if _, err := writer.Commit(ctx, comment, author, committer); err != nil {
		if errors.Is(err, nanogit.ErrNothingToCommit) {
			return repository.ErrNothingToCommit
		}
		return fmt.Errorf("commit changes: %w", err)
	}

	return nil
}

func (r *gitRepository) commitAndPush(ctx context.Context, writer nanogit.StagedWriter, comment string) error {
	if err := r.commit(ctx, writer, comment); err != nil {
		return err
	}

	if err := writer.Push(ctx); err != nil {
		return fmt.Errorf("push changes: %w", err)
	}

	return nil
}
// defaultGitRetrier returns a default retrier configuration for Git operations.
//
// Retry attempts will happen when:
// - Network errors occur: connection timeouts, temporary network failures, or connection errors
// - HTTP 5xx server errors: for GET and DELETE operations (idempotent)
// - HTTP 429 Too Many Requests: for all operations (rate limiting is temporary)
//
// The retry behavior:
// - Total attempts: 8 (1 initial attempt + 7 retries)
// - Initial delay: 100ms before the first retry
// - Exponential backoff: delay doubles after each failed attempt (100ms → 200ms → 400ms → 800ms → 1.6s → 3.2s → 5s)
// - Maximum delay: capped at 5 seconds
// - Jitter: enabled to prevent thundering herd problems
// - Total retry window: approximately 10 seconds from first attempt to last retry
//
// All attempts will fail when:
// - The Git server is completely unavailable or unreachable
// - Network connectivity issues persist beyond the retry window (~10 seconds)
// - The server returns transient errors consistently for the entire retry duration
// - Context cancellation occurs before retries complete
//
// Non-transient errors (e.g., 4xx client errors except 429, authentication failures) are not retried and are returned immediately.
func defaultGitRetrier() *retry.ExponentialBackoffRetrier {
	return retry.NewExponentialBackoffRetrier().
		WithMaxAttempts(8). // 1 initial + 7 retries = 8 total attempts (~10s total retry window)
		WithInitialDelay(100 * time.Millisecond).
		WithMaxDelay(5 * time.Second).
		WithMultiplier(2.0).
		WithJitter()
}

// ensureRetryContext ensures that retry logic is configured in the context.
// This function should be called at the beginning of all methods that make client calls
// to guarantee retry logic is always present, regardless of context state.
func ensureRetryContext(ctx context.Context) context.Context {
	// Only add a retrier if one doesn't already exist in the context
	if retry.FromContext(ctx).MaxAttempts() <= 1 {
		ctx = retry.ToContext(ctx, defaultGitRetrier())
	}
	return ctx
}

// withGitContext sets up the context with logging, git repository metadata, and retry logic.
// This function should be called at the beginning of all public methods to ensure:
// - Proper logging context with git repository details
// - Retry logic is configured for all Git operations
// - Context is properly prepared for nanogit client calls
func (r *gitRepository) withGitContext(ctx context.Context, ref string) (context.Context, logging.Logger) {
	// Ensure retry logic is configured first, before any early returns
	ctx = ensureRetryContext(ctx)

	logger := logging.FromContext(ctx)
	type containsGit int
	var containsGitKey containsGit
	if ctx.Value(containsGitKey) != nil {
		return ctx, logging.FromContext(ctx)
	}

	if ref == "" {
		ref = r.gitConfig.Branch
	}

	logger = logger.With(slog.Group("git_repository", "url", r.gitConfig.URL, "ref", ref, "nanogit", true))
	ctx = logging.Context(ctx, logger)
	// We want to ensure we don't add multiple git_repository keys. With doesn't deduplicate the keys...
	ctx = context.WithValue(ctx, containsGitKey, true)
	ctx = log.ToContext(ctx, logger)
	return ctx, logger
}