* feat(auth): add ExtraAudience option to RoundTripper

  Add an ExtraAudience option to RoundTripper so operators can include additional audiences (e.g., the provisioning group) when connecting to the multitenant aggregator. This ensures tokens carry both the target API server's audience and the provisioning group audience, which is required to pass the enforceManagerProperties check.

  - Add ExtraAudience RoundTripperOption
  - Improve documentation and comments
  - Add comprehensive test coverage

* fix(operators): add ExtraAudience for dashboards/folders API servers

  Operators connecting to the dashboards and folders API servers need to include the provisioning group audience in addition to the target API server's audience to pass the enforceManagerProperties check.

* provisioning: fix settings/stats authorization for AccessPolicy identities

  The settings and stats endpoints returned 403 for users accessing via ST->MT because the AccessPolicy identity was routed to the access checker, which does not know about these resources. This fix handles the 'settings' and 'stats' resources before the access-checker path, routing them to role-based authorization that allows:

  - settings: Viewer role (read-only, needed by the frontend)
  - stats: Admin role (can leak information)

* fix: update BootstrapStep component to remove legacy storage handling and adjust resource counting logic

  - Removed the legacy storage flag from the useResourceStats hook in BootstrapStep.
  - Updated BootstrapStepResourceCounting to simplify the rendering logic and removed the target prop.
  - Adjusted tests to reflect the changes in resource counting and rendering behavior.

* Revert "fix: update BootstrapStep component to remove legacy storage handling and adjust resource counting logic"

  This reverts commit 148802cbb5.
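The ExtraAudience option described above fits the usual Go functional-options pattern. The sketch below is illustrative only: the option name `ExtraAudience` and the call shape `NewRoundTripper(client, rt, group, ExtraAudience(...))` match the usage in this PR, but the internal names (`roundTripperConfig`, `audiencesFor`) are hypothetical stand-ins, not the real implementation in `apps/provisioning/pkg/auth`.

```go
package main

import "fmt"

// RoundTripperOption configures optional behaviour of the auth RoundTripper.
type RoundTripperOption func(*roundTripperConfig)

// roundTripperConfig is a hypothetical internal config holding the audiences
// requested during token exchange.
type roundTripperConfig struct {
	audiences []string
}

// ExtraAudience appends an additional audience (e.g. the provisioning group)
// to the audiences requested during token exchange.
func ExtraAudience(aud string) RoundTripperOption {
	return func(c *roundTripperConfig) {
		c.audiences = append(c.audiences, aud)
	}
}

// audiencesFor resolves the final audience list: the target API server's
// group plus any extras supplied via options.
func audiencesFor(group string, opts ...RoundTripperOption) []string {
	c := &roundTripperConfig{audiences: []string{group}}
	for _, opt := range opts {
		opt(c)
	}
	return c.audiences
}

func main() {
	// A token minted for the dashboards API server also carries the
	// provisioning group audience.
	auds := audiencesFor("dashboard.grafana.app", ExtraAudience("provisioning.grafana.app"))
	fmt.Println(auds) // [dashboard.grafana.app provisioning.grafana.app]
}
```

The variadic option keeps the existing two-audience call sites source-compatible while letting operators opt in to extra audiences.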
* provisioning: allow any authenticated user for settings/stats endpoints

  These are read-only endpoints needed by the frontend:

  - settings: returns the available repository types and configuration for the wizard
  - stats: returns resource counts

  Authentication is verified before authorization is reached, so any user who hits these endpoints is already authenticated. Requiring specific org roles failed for AccessPolicy tokens, which don't carry traditional roles.

* provisioning: remove redundant admin role check from listFolderFiles

  The admin role check in listFolderFiles was redundant (route-level auth already handles access) and broken for AccessPolicy identities, which have no org roles. File access is controlled by the AccessClient, as documented in the route-level authorization comment.

* provisioning: add isAdminOrAccessPolicy helper for auth checks

  Consolidates authorization logic for provisioning endpoints:

  - Adds an isAdminOrAccessPolicy() helper that allows admin users OR AccessPolicy identities
  - AccessPolicy identities (ST->MT flow) are trusted internal callers without org roles
  - Regular users must have the admin role (matching the frontend navtree restriction)

  Used in: authorizeSettings, authorizeStats, authorizeJobs, listFolderFiles

* provisioning: consolidate auth helpers into allowForAdminsOrAccessPolicy

  Simplifies authorization by:

  - Adding an isAccessPolicy() helper for the AccessPolicy identity check
  - Adding allowForAdminsOrAccessPolicy(), which returns a Decision directly
  - Consolidating stats/settings/jobs into a single switch case
  - Using a consistent pattern in files.go

* provisioning: require admin for files subresource at route level

  Aligns route-level authorization with the handler-level check in listFolderFiles. Both now require the admin role OR an AccessPolicy identity for consistency.
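The allowForAdminsOrAccessPolicy behaviour described above can be sketched as follows. The `Identity` struct and string-typed roles here are simplified stand-ins for this sketch; the real code works with authlib identity types and returns a k8s `authorizer.Decision`.

```go
package main

import "fmt"

// Decision mirrors the k8s authorizer decision values (a stand-in for
// k8s.io/apiserver/pkg/authorization/authorizer in this sketch).
type Decision int

const (
	DecisionDeny Decision = iota
	DecisionAllow
)

// Identity is a simplified view of the requester: its type and org role.
type Identity struct {
	Type string // e.g. "access-policy" or "user"
	Role string // e.g. "Admin", "Editor", "Viewer"
}

// isAccessPolicy reports whether the identity is an AccessPolicy (ST->MT)
// caller, which carries no traditional org role.
func isAccessPolicy(id Identity) bool {
	return id.Type == "access-policy"
}

// allowForAdminsOrAccessPolicy allows trusted internal AccessPolicy callers
// and regular users holding the admin role; everyone else is denied.
func allowForAdminsOrAccessPolicy(id Identity) Decision {
	if isAccessPolicy(id) || id.Role == "Admin" {
		return DecisionAllow
	}
	return DecisionDeny
}

func main() {
	fmt.Println(allowForAdminsOrAccessPolicy(Identity{Type: "access-policy"}) == DecisionAllow)      // true
	fmt.Println(allowForAdminsOrAccessPolicy(Identity{Type: "user", Role: "Viewer"}) == DecisionAllow) // false
}
```

Checking the identity type before the role is what makes the helper safe for AccessPolicy tokens, which would otherwise fail every role comparison.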
* provisioning: restructure authorization with role-based helpers

  Reorganizes the authorization code for clarity.

  Role-based helpers (all support AccessPolicy for the ST->MT flow):

  - allowForAdminsOrAccessPolicy: admin role required
  - allowForEditorsOrAccessPolicy: editor role required
  - allowForViewersOrAccessPolicy: viewer role required

  Repository subresources by role:

  - Admin: repository CRUD, test, files
  - Editor: jobs, resources, sync, history
  - Viewer: refs, status (GET only)

  Connection subresources by role:

  - Admin: connection CRUD
  - Viewer: status (GET only)

* provisioning: move refs to admin-only

  The refs subresource now requires the admin role (or AccessPolicy). Updated the documentation comments to reflect the current permissions.

* provisioning: add fine-grained permissions for connections

  Adds connection permissions following the same pattern as repositories:

  - provisioning.connections:create
  - provisioning.connections:read
  - provisioning.connections:write
  - provisioning.connections:delete

  Roles:

  - fixed:provisioning.connections:reader (granted to Admin)
  - fixed:provisioning.connections:writer (granted to Admin)

* provisioning: remove non-existent sync subresource from auth

  The sync subresource doesn't exist; syncing is done via the jobs endpoint. Removed the dead code from the authorization switch case.

* provisioning: use access checker for fine-grained permissions

  Refactors authorization to use b.access.Check() with verb-based checks.

  Repository subresources:

  - CRUD: uses the actual verb (get/create/update/delete)
  - test: uses 'update' (write permission)
  - files/refs/resources/history/status: use 'get' (read permission)
  - jobs: uses the actual verb for the jobs resource

  Connection subresources:

  - CRUD: uses the actual verb
  - status: uses 'get' (read permission)

  The access checker maps verbs to actions defined in accesscontrol.go. Falls back to the admin role for backwards compatibility.
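The role grouping for repository subresources at this point in the history can be sketched as a single lookup. Note that later commits in this PR revise the grouping (refs moves to admin-only and then to editor); `minRoleFor` is a hypothetical helper name used only for illustration.

```go
package main

import "fmt"

// minRoleFor returns the minimum org role required for a repository
// subresource, following the grouping at this point in the PR history
// (AccessPolicy identities bypass the role requirement in the real code).
func minRoleFor(subresource string) string {
	switch subresource {
	case "", "test", "files": // repository CRUD plus admin-only subresources
		return "Admin"
	case "jobs", "resources", "sync", "history":
		return "Editor"
	case "refs", "status":
		return "Viewer"
	default:
		return "Admin" // unknown subresources stay admin-only
	}
}

func main() {
	fmt.Println(minRoleFor("refs"))  // Viewer
	fmt.Println(minRoleFor("jobs"))  // Editor
	fmt.Println(minRoleFor("files")) // Admin
}
```

Defaulting unknown subresources to Admin is the conservative choice: a newly added subresource is locked down until it is explicitly classified.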
  Also removes the redundant admin check from listFolderFiles, since authorization is now properly handled at the route level.

* provisioning: use verb constants instead of string literals

  Uses apiutils.VerbGet and apiutils.VerbUpdate instead of "get" and "update".

* provisioning: use access checker for jobs and historicjobs resources

  - Jobs resource: uses the actual verb (create/read/write/delete)
  - HistoricJobs resource: read-only (historicjobs:read)

* provisioning: allow viewers to access settings endpoint

  Settings is read-only and needed by multiple UI pages (not just admin pages). Stats remains admin-only.

* provisioning: consolidate role-based resource authorization

  Extracts isRoleBasedResource() and authorizeRoleBasedResource() helpers to avoid duplicating the settings/stats resource checks in multiple places.

* provisioning: use resource name constants instead of hardcoded strings

  Replaces 'repositories', 'connections', 'jobs', and 'historicjobs' with their corresponding ResourceInfo.GetName() constants.

* provisioning: delegate file authorization to connector

  - Route level: allow any authenticated user for the files subresource
  - Connector: check repositories:read only for directory listing
  - Individual file CRUD: handled by DualReadWriter based on the actual resource

* provisioning: enhance authorization for files and jobs resources

  Updated file authorization to fall back to the admin role for listing files. Introduced a checkAccessForJobs function to manage job permissions, allowing editors to create and manage jobs while keeping admin-only access for historic jobs. Improved the error messaging for permission denials.
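The jobs policy introduced above (editors manage jobs, historic jobs stay admin-only) can be sketched as follows. `allowedForJobs` and the string-typed roles are illustrative stand-ins, not the real checkAccessForJobs signature.

```go
package main

import "fmt"

// allowedForJobs sketches the checkAccessForJobs policy: editors (and
// admins) can create and manage jobs, while historic jobs are admin-only.
// Resource and role names here are simplified stand-ins.
func allowedForJobs(resource, role string) bool {
	switch resource {
	case "jobs":
		return role == "Editor" || role == "Admin"
	case "historicjobs":
		return role == "Admin"
	default:
		return false
	}
}

func main() {
	fmt.Println(allowedForJobs("jobs", "Editor"))         // true
	fmt.Println(allowedForJobs("historicjobs", "Editor")) // false
}
```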
* provisioning: refactor authorization with fine-grained permissions

  Authorization changes:

  - Use the access checker with a role-based fallback for backwards compatibility
  - Repositories/Connections: admin role fallback
  - Jobs: editor role fallback (editors can manage jobs)
  - HistoricJobs: admin role fallback (read-only)
  - Settings: viewer role (needed by multiple UI pages)
  - Stats: admin role

  Files subresource:

  - Route level allows any authenticated user
  - Directory listing checks repositories:read in the connector
  - Individual file CRUD is delegated to DualReadWriter

  Refactored checkAccessWithFallback to accept a fallback role parameter.

* provisioning: refactor access checker integration for improved authorization

  Updated the authorization logic to use the new access checker across the various resources, including files and jobs. This simplifies the permission checks by removing redundant identity retrieval and improves error handling. The access checker now supports role-based fallbacks for the admin and editor roles, keeping backward compatibility while streamlining authorization for repository and connection subresources.

* provisioning: remove legacy access checker tests and refactor access checker implementation

  Deleted the access_checker_test.go file to focus on the updated access checker implementation. Refactored the access checker for clarity and maintainability, ensuring it supports the role-based fallback behavior. Updated the access checker integration in the API builder to use the new fallback role configuration.
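The fallback-role behaviour described above can be sketched like this. The real checkAccessWithFallback works against the access checker and identity services; the `CheckResult` struct, string-typed roles, and rank ordering here are simplifying assumptions for illustration only.

```go
package main

import (
	"errors"
	"fmt"
)

// CheckResult is a stand-in for what the fine-grained access checker returns.
type CheckResult struct {
	Allowed bool
	Err     error
}

// checkAccessWithFallback consults the fine-grained access checker first and,
// when the check denies or errors, falls back to a role comparison so that
// deployments without the new permissions keep working. Role ordering is
// simplified here: Admin > Editor > Viewer.
func checkAccessWithFallback(check CheckResult, userRole, fallbackRole string) bool {
	if check.Err == nil && check.Allowed {
		return true
	}
	rank := map[string]int{"Viewer": 1, "Editor": 2, "Admin": 3}
	return rank[userRole] >= rank[fallbackRole]
}

func main() {
	// Fine-grained permission granted: allowed regardless of role.
	fmt.Println(checkAccessWithFallback(CheckResult{Allowed: true}, "Viewer", "Admin")) // true
	// Denied by the checker, but the editor fallback admits editors (e.g. jobs).
	fmt.Println(checkAccessWithFallback(CheckResult{Allowed: false}, "Editor", "Editor")) // true
	// A checker error also falls through to the role fallback.
	fmt.Println(checkAccessWithFallback(CheckResult{Err: errors.New("unavailable")}, "Viewer", "Editor")) // false
}
```

Passing the fallback role as a parameter is what lets one helper serve the admin fallback (repositories, connections, historic jobs), the editor fallback (jobs), and the viewer fallback (settings).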
* refactor: split AccessChecker into TokenAccessChecker and SessionAccessChecker

  - Renamed NewMultiTenantAccessChecker -> NewTokenAccessChecker (uses AuthInfoFrom)
  - Renamed NewSingleTenantAccessChecker -> NewSessionAccessChecker (uses GetRequester)
  - Split into separate files with their own tests
  - Added a mockery-generated mock for the AccessChecker interface
  - Names now reflect the identity source rather than the deployment mode

* fix: correct error message case and use accessWithAdmin for filesConnector

  - Fixed the error message to use lowercase 'admin role is required'
  - Fixed filesConnector to use accessWithAdmin for proper role fallback
  - Formatted code

* refactor: reduce cyclomatic complexity in filesConnector.Connect

  Split the Connect handler into smaller focused functions:

  - handleRequest: main request processing
  - createDualReadWriter: set up dependencies
  - parseRequestOptions: extract request options
  - handleDirectoryListing: GET directory requests
  - handleMethodRequest: route to method handlers
  - handleGet/handlePost/handlePut/handleDelete: method-specific logic
  - handleMove: move operation logic

* security: remove blind TypeAccessPolicy bypass from access checkers

  Removed the code that bypassed authorization for TypeAccessPolicy identities. All identities now go through proper permission verification via the inner access checker, which validates permissions from ServiceIdentityClaims. This addresses the security concern that TypeAccessPolicy was trusted blindly, without verifying whether the identity came from the wire or in-process.

* feat: allow editors to access repository refs subresource

  Change refs authorization from the admin to the editor fallback so editors can view repository branches when pushing changes to dashboards/folders.
  - Split refs from the other read-only subresources (resources, history, status)
  - refs now uses accessWithEditor instead of accessWithAdmin
  - Updated the documentation comment to reflect the authorization levels
  - Added the integration test TestIntegrationProvisioning_RefsPermissions, verifying editor access and viewer denial

* tests: add authorization tests for missing provisioning API endpoints

  Add comprehensive authorization tests for:

  - Repository subresources (test, resources, history, status)
  - Connection status subresource
  - HistoricJobs resource
  - Settings and Stats resources

  All authorization paths are now covered by integration tests.

* test: fix RefsPermissions test to use GitHub repository

  Use the github-readonly.json.tmpl template instead of a local folder, since the refs endpoint requires a versioned repository that supports git operations.

* chore: format test files

* fix: make settings/stats authorization work in MT mode

  Update authorizeRoleBasedResource to check authlib.AuthInfoFrom(ctx) for the AccessPolicy identity type in addition to identity.GetRequester(ctx). This ensures AccessPolicy identities are recognized in MT mode, where identity.GetRequester may not set the identity type correctly.

* fix: remove unused authorization helper functions

  Remove allowForAdminsOrAccessPolicy and allowForViewersOrAccessPolicy, as they are no longer used after the refactoring to authorizeRoleBasedResource.

* Fix AccessPolicy identity detection in ST authorizer

  - Add a check for AccessPolicy identities via GetAuthID() in authorizeRoleBasedResource
  - An extended JWT may set the identity type to TypeUser while the AuthID is 'access-policy:...'
  - Forward the user ID token in the X-Grafana-Id header in RoundTripper for aggregator forwarding

* Revert "Fix AccessPolicy identity detection in ST authorizer"

  This reverts commit 0f4885e503.
* Add fine-grained permissions for settings and stats endpoints

  - Add the provisioning.settings:read action (granted to the Viewer role)
  - Add the provisioning.stats:read action (granted to the Admin role)
  - Add accessWithViewer to APIBuilder for the Viewer role fallback
  - Use the access checker for settings/stats authorization
  - Remove the role-based authorization functions (isRoleBasedResource, authorizeRoleBasedResource)

  This makes settings and stats consistent with the other provisioning resources and works properly in both ST and MT modes via the access checker.

* Remove AUTHORIZATION_COVERAGE.md

* Add provisioning resources to RBAC mapper

  - Add connections, settings, and stats to the provisioning.grafana.app mappings
  - Required for the authz service to translate K8s verbs to legacy actions
  - Fixes 403 errors for settings/stats in MT mode

* refactor: merge access checkers with original fallthrough behavior

  Merge tokenAccessChecker and sessionAccessChecker into a unified access checker that implements the original fallthrough behavior:

  1. First try to get the identity from the access token (authlib.AuthInfoFrom)
  2. If a token exists AND (it is TypeAccessPolicy OR useExclusivelyAccessCheckerForAuthz), use the access checker with the token identity
  3. If there is no token or the conditions are not met, fall back to the session identity (identity.GetRequester) with an optional role-based fallback

  This fixes the issue where the settings/stats/connections endpoints were failing in MT mode: the tokenAccessChecker returned an error when there was no auth info in the context instead of falling through to session-based authorization.

  The unified checker now properly handles:

  - MT mode: tries the token first, falls back to the session if there is no token
  - ST mode: only uses the token for AccessPolicy identities, otherwise the session
  - Role fallback: applies when configured and the access checker denies

* Revert "refactor: merge access checkers with original fallthrough behavior"

  This reverts commit 96451f948b.
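The RBAC mapper change above translates K8s verbs on provisioning resources into legacy action strings such as provisioning.settings:read. A sketch of that translation, under the assumption that the action naming follows the `provisioning.<resource>:<operation>` convention seen in this PR (`actionFor` is a hypothetical helper, not the real mapper in the authz service):

```go
package main

import "fmt"

// actionFor translates a K8s verb on a provisioning resource into a legacy
// RBAC action string. The read/write grouping of verbs is an assumption for
// this sketch; the real mapping lives in the authz service.
func actionFor(resource, verb string) (string, bool) {
	var op string
	switch verb {
	case "get", "list", "watch":
		op = "read"
	case "create":
		op = "create"
	case "update", "patch":
		op = "write"
	case "delete":
		op = "delete"
	default:
		return "", false
	}
	return "provisioning." + resource + ":" + op, true
}

func main() {
	a, _ := actionFor("settings", "list")
	fmt.Println(a) // provisioning.settings:read
}
```

Without an entry in this mapping, the authz service has no action to check, which is why the missing connections/settings/stats mappings surfaced as 403s in MT mode.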
* Grant settings view role to all

* fix: use actual request verb for settings/stats authorization

  Use a.GetVerb() instead of the hardcoded VerbGet for settings and stats authorization. When listing resources (hitting the collection endpoint) the verb is 'list', not 'get', and this mismatch could cause issues with the RBAC service.

* debug: add logging to access checkers for authorization debugging

  Add klog debug logs (V4 level) to the token and session access checkers to help diagnose why settings/stats authorization is failing while connections works.

* debug: improve access checker logging with the grafana-app-sdk logger

  - Use grafana-app-sdk logging.FromContext instead of klog
  - Add error wrapping in resource.group format for better context
  - Log more details, including the folder, group, and allowed status
  - Log error.Error() for better error message visibility

* chore: use generic log messages in access checkers

* Revert "Grant settings view role to all"

  This reverts commit 3f5758cf36.

* fix: use request verb for historicjobs authorization

  The original role-based check allowed any verb for admins. To preserve this behavior with the access checker, pass the actual verb from the request instead of hardcoding VerbGet.

---------

Co-authored-by: Charandas Batra <charandas.batra@grafana.com>
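The verb fixes in the list above come down to forwarding the request's actual verb into the access check. A minimal sketch, with `Attributes` as a two-method stand-in for the k8s authorizer interface and `checkRequestFor` as a hypothetical helper:

```go
package main

import "fmt"

// Attributes is a minimal stand-in for the k8s.io/apiserver
// authorizer.Attributes interface that the provisioning authorizer receives.
type Attributes interface {
	GetVerb() string
	GetResource() string
}

type attrs struct{ verb, resource string }

func (a attrs) GetVerb() string     { return a.verb }
func (a attrs) GetResource() string { return a.resource }

// checkRequestFor builds the (resource, verb) pair passed to the access
// checker. Using a.GetVerb() rather than a hardcoded "get" matters because a
// collection request arrives with verb "list", which the RBAC service maps
// separately from "get".
func checkRequestFor(a Attributes) (resource, verb string) {
	return a.GetResource(), a.GetVerb()
}

func main() {
	res, verb := checkRequestFor(attrs{verb: "list", resource: "settings"})
	fmt.Println(res, verb) // settings list
}
```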
package provisioning

import (
	"context"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"

	"github.com/grafana/authlib/authn"
	"github.com/prometheus/client_golang/prometheus"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/transport"
	"k8s.io/client-go/util/flowcontrol"

	"github.com/grafana/grafana/pkg/infra/tracing"
	"github.com/grafana/grafana/pkg/services/apiserver"
	"github.com/grafana/grafana/pkg/setting"
	"github.com/grafana/grafana/pkg/storage/unified"
	"github.com/grafana/grafana/pkg/storage/unified/resource"

	provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
	authrt "github.com/grafana/grafana/apps/provisioning/pkg/auth"
	client "github.com/grafana/grafana/apps/provisioning/pkg/generated/clientset/versioned"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository/git"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository/github"
	"github.com/grafana/grafana/apps/provisioning/pkg/repository/local"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
	"github.com/grafana/grafana/pkg/registry/apis/provisioning/webhooks"
	secretdecrypt "github.com/grafana/grafana/pkg/registry/apis/secret/decrypt"
)
// provisioningControllerConfig contains the configuration shared by the jobs and repo controllers
type provisioningControllerConfig struct {
	provisioningClient  *client.Clientset
	resyncInterval      time.Duration
	repoFactory         repository.Factory
	unified             resources.ResourceStore
	clients             resources.ClientFactory
	tokenExchangeClient *authn.TokenExchangeClient
	tlsConfig           rest.TLSClientConfig
}
// expects:
// [grpc_client_authentication]
// token =
// token_exchange_url =
// [secrets_manager]
// grpc_server_address =
// grpc_server_tls_server_name =
// grpc_server_use_tls =
// grpc_server_tls_ca_file =
// grpc_server_tls_skip_verify =
// [unified_storage]
// grpc_address =
// grpc_index_address =
// allow_insecure =
// audiences =
// [operator]
// provisioning_server_url =
// provisioning_server_public_url =
// dashboards_server_url =
// folders_server_url =
// tls_insecure =
// tls_cert_file =
// tls_key_file =
// tls_ca_file =
// resync_interval =
// home_path =
// local_permitted_prefixes =
// [provisioning]
// repository_types =
func setupFromConfig(cfg *setting.Cfg, registry prometheus.Registerer) (controllerCfg *provisioningControllerConfig, err error) {
	if cfg == nil {
		return nil, fmt.Errorf("no configuration available")
	}
	// TODO: we should set up tracing properly
	// https://github.com/grafana/git-ui-sync-project/issues/507
	tracer := tracing.NewNoopTracerService()

	gRPCAuth := cfg.SectionWithEnvOverrides("grpc_client_authentication")
	token := gRPCAuth.Key("token").String()
	if token == "" {
		return nil, fmt.Errorf("token is required in [grpc_client_authentication] section")
	}
	tokenExchangeURL := gRPCAuth.Key("token_exchange_url").String()
	if tokenExchangeURL == "" {
		return nil, fmt.Errorf("token_exchange_url is required in [grpc_client_authentication] section")
	}

	operatorSec := cfg.SectionWithEnvOverrides("operator")
	provisioningServerURL := operatorSec.Key("provisioning_server_url").String()
	if provisioningServerURL == "" {
		return nil, fmt.Errorf("provisioning_server_url is required in [operator] section")
	}

	tlsInsecure := operatorSec.Key("tls_insecure").MustBool(false)
	tlsCertFile := operatorSec.Key("tls_cert_file").String()
	tlsKeyFile := operatorSec.Key("tls_key_file").String()
	tlsCAFile := operatorSec.Key("tls_ca_file").String()

	tokenExchangeClient, err := authn.NewTokenExchangeClient(authn.TokenExchangeConfig{
		TokenExchangeURL: tokenExchangeURL,
		Token:            token,
	})
	if err != nil {
		return nil, fmt.Errorf("failed to create token exchange client: %w", err)
	}

	tlsConfig, err := buildTLSConfig(tlsInsecure, tlsCertFile, tlsKeyFile, tlsCAFile)
	if err != nil {
		return nil, fmt.Errorf("failed to build TLS configuration: %w", err)
	}

	config := &rest.Config{
		APIPath: "/apis",
		Host:    provisioningServerURL,
		WrapTransport: transport.WrapperFunc(func(rt http.RoundTripper) http.RoundTripper {
			return authrt.NewRoundTripper(tokenExchangeClient, rt, provisioning.GROUP)
		}),
		TLSClientConfig: tlsConfig,
		RateLimiter:     flowcontrol.NewFakeAlwaysRateLimiter(),
	}

	provisioningClient, err := client.NewForConfig(config)
	if err != nil {
		return nil, fmt.Errorf("failed to create provisioning client: %w", err)
	}

	decrypter, err := setupDecrypter(cfg, tracer, tokenExchangeClient)
	if err != nil {
		return nil, fmt.Errorf("failed to setup decrypter: %w", err)
	}

	repoFactory, err := setupRepoFactory(cfg, decrypter, provisioningClient, registry)
	if err != nil {
		return nil, fmt.Errorf("failed to setup repository factory: %w", err)
	}

	// HACK: This logic connects directly to unified storage. We are doing this for now as there is no global
	// search endpoint. But controllers, in general, should not connect directly to unified storage and instead
	// go through the API server. Once there is a global search endpoint, we will switch to that here as well.
	resourceClientCfg := resource.RemoteResourceClientConfig{
		Token:            token,
		TokenExchangeURL: tokenExchangeURL,
		Namespace:        gRPCAuth.Key("token_namespace").String(),
	}
	unified, err := setupUnifiedStorageClient(cfg, tracer, resourceClientCfg)
	if err != nil {
		return nil, fmt.Errorf("failed to setup unified storage: %w", err)
	}

	dashboardsServerURL := operatorSec.Key("dashboards_server_url").String()
	if dashboardsServerURL == "" {
		return nil, fmt.Errorf("dashboards_server_url is required in [operator] section")
	}
	foldersServerURL := operatorSec.Key("folders_server_url").String()
	if foldersServerURL == "" {
		return nil, fmt.Errorf("folders_server_url is required in [operator] section")
	}

	apiServerURLs := map[string]string{
		resources.DashboardResource.Group: dashboardsServerURL,
		resources.FolderResource.Group:    foldersServerURL,
		provisioning.GROUP:                provisioningServerURL,
	}
	configProviders := make(map[string]apiserver.RestConfigProvider)

	tlsConfigForTransport, err := rest.TLSConfigFor(&rest.Config{TLSClientConfig: tlsConfig})
	if err != nil {
		return nil, fmt.Errorf("failed to convert TLS config for transport: %w", err)
	}

	for group, url := range apiServerURLs {
		config := &rest.Config{
			APIPath: "/apis",
			Host:    url,
			WrapTransport: transport.WrapperFunc(func(rt http.RoundTripper) http.RoundTripper {
				// Include the provisioning group as an extra audience so tokens pass
				// the enforceManagerProperties check on the aggregator.
				return authrt.NewRoundTripper(tokenExchangeClient, rt, group, authrt.ExtraAudience(provisioning.GROUP))
			}),
			Transport: &http.Transport{
				MaxConnsPerHost:     100,
				MaxIdleConns:        100,
				MaxIdleConnsPerHost: 100,
				TLSClientConfig:     tlsConfigForTransport,
			},
			RateLimiter: flowcontrol.NewFakeAlwaysRateLimiter(),
		}
		configProviders[group] = NewDirectConfigProvider(config)
	}

	clients := resources.NewClientFactoryForMultipleAPIServers(configProviders)

	return &provisioningControllerConfig{
		provisioningClient:  provisioningClient,
		repoFactory:         repoFactory,
		unified:             unified,
		clients:             clients,
		resyncInterval:      operatorSec.Key("resync_interval").MustDuration(60 * time.Second),
		tokenExchangeClient: tokenExchangeClient,
		tlsConfig:           tlsConfig,
	}, nil
}
func buildTLSConfig(insecure bool, certFile, keyFile, caFile string) (rest.TLSClientConfig, error) {
	tlsConfig := rest.TLSClientConfig{
		Insecure: insecure,
	}

	if certFile != "" && keyFile != "" {
		tlsConfig.CertFile = certFile
		tlsConfig.KeyFile = keyFile
	}

	if caFile != "" {
		// caFile is set in the operator.ini file
		// nolint:gosec
		caCert, err := os.ReadFile(caFile)
		if err != nil {
			return tlsConfig, fmt.Errorf("failed to read CA certificate file: %w", err)
		}

		// Validate that the file contains at least one parseable PEM certificate
		// before handing it to the rest config.
		caCertPool := x509.NewCertPool()
		if !caCertPool.AppendCertsFromPEM(caCert) {
			return tlsConfig, fmt.Errorf("failed to parse CA certificate")
		}

		tlsConfig.CAData = caCert
	}

	return tlsConfig, nil
}
func setupRepoFactory(
	cfg *setting.Cfg,
	decrypter repository.Decrypter,
	provisioningClient *client.Clientset,
	registry prometheus.Registerer,
) (repository.Factory, error) {
	operatorSec := cfg.SectionWithEnvOverrides("operator")
	provisioningSec := cfg.SectionWithEnvOverrides("provisioning")
	repoTypes := provisioningSec.Key("repository_types").Strings("|")
	if len(repoTypes) == 0 {
		repoTypes = []string{"github"}
	}

	// TODO: This depends on the different flavor of Grafana
	// https://github.com/grafana/git-ui-sync-project/issues/495
	extras := make([]repository.Extra, 0)
	alreadyRegistered := make(map[provisioning.RepositoryType]struct{})

	for _, t := range repoTypes {
		if _, ok := alreadyRegistered[provisioning.RepositoryType(t)]; ok {
			continue
		}
		alreadyRegistered[provisioning.RepositoryType(t)] = struct{}{}

		switch provisioning.RepositoryType(t) {
		case provisioning.GitRepositoryType:
			extras = append(extras, git.Extra(decrypter))
		case provisioning.GitHubRepositoryType:
			var webhook *webhooks.WebhookExtraBuilder
			provisioningAppURL := operatorSec.Key("provisioning_server_public_url").String()
			if provisioningAppURL != "" {
				webhook = webhooks.ProvideWebhooks(provisioningAppURL, registry)
			}

			extras = append(extras, github.Extra(
				decrypter,
				github.ProvideFactory(),
				webhook,
			))
		case provisioning.LocalRepositoryType:
			homePath := operatorSec.Key("home_path").String()
			if homePath == "" {
				return nil, fmt.Errorf("home_path is required in [operator] section for local repository type")
			}

			permittedPrefixes := operatorSec.Key("local_permitted_prefixes").Strings("|")
			if len(permittedPrefixes) == 0 {
				return nil, fmt.Errorf("local_permitted_prefixes is required in [operator] section for local repository type")
			}

			extras = append(extras, local.Extra(
				homePath,
				permittedPrefixes,
			))
		default:
			return nil, fmt.Errorf("unsupported repository type: %s", t)
		}
	}

	repoFactory, err := repository.ProvideFactory(alreadyRegistered, extras)
	if err != nil {
		return nil, fmt.Errorf("create repository factory: %w", err)
	}

	return repoFactory, nil
}
func setupDecrypter(cfg *setting.Cfg, tracer tracing.Tracer, tokenExchangeClient *authn.TokenExchangeClient) (decrypter repository.Decrypter, err error) {
	secretsSec := cfg.SectionWithEnvOverrides("secrets_manager")
	if secretsSec == nil {
		return nil, fmt.Errorf("no [secrets_manager] section found in config")
	}

	address := secretsSec.Key("grpc_server_address").String()
	if address == "" {
		return nil, fmt.Errorf("grpc_server_address is required in [secrets_manager] section")
	}

	secretsTls := secretdecrypt.TLSConfig{
		UseTLS:             secretsSec.Key("grpc_server_use_tls").MustBool(true),
		CAFile:             secretsSec.Key("grpc_server_tls_ca_file").String(),
		ServerName:         secretsSec.Key("grpc_server_tls_server_name").String(),
		InsecureSkipVerify: secretsSec.Key("grpc_server_tls_skip_verify").MustBool(false),
	}

	decryptSvc, err := secretdecrypt.NewGRPCDecryptClientWithTLS(
		tokenExchangeClient,
		tracer,
		address,
		secretsTls,
		secretsSec.Key("grpc_client_load_balancing").MustBool(false),
	)
	if err != nil {
		return nil, fmt.Errorf("create decrypt service: %w", err)
	}

	return repository.ProvideDecrypter(decryptSvc), nil
}
// HACK: This logic directly connects to unified storage. We are doing this for now as there is no global
// search endpoint. But controllers, in general, should not connect directly to unified storage and instead
// go through the API server. Once there is a global search endpoint, we will switch to that here as well.
func setupUnifiedStorageClient(cfg *setting.Cfg, tracer tracing.Tracer, resourceClientCfg resource.RemoteResourceClientConfig) (resources.ResourceStore, error) {
	unifiedStorageSec := cfg.SectionWithEnvOverrides("unified_storage")

	// Connect to the storage server
	address := unifiedStorageSec.Key("grpc_address").String()
	if address == "" {
		return nil, fmt.Errorf("grpc_address is required in [unified_storage] section")
	}
	// FIXME: These metrics are not going to show up in /metrics
	registry := prometheus.NewPedanticRegistry()
	conn, err := unified.GrpcConn(address, registry)
	if err != nil {
		return nil, fmt.Errorf("create unified storage gRPC connection: %w", err)
	}

	// Connect to the index server (falls back to the storage connection when no
	// separate index address is configured)
	indexConn := conn
	indexAddress := unifiedStorageSec.Key("grpc_index_address").String()
	if indexAddress != "" {
		// FIXME: These metrics are not going to show up in /metrics. We will also need to prefix these metrics
		// with something else so they don't collide with the storage api metrics.
		registry2 := prometheus.NewPedanticRegistry()
		indexConn, err = unified.GrpcConn(indexAddress, registry2)
		if err != nil {
			return nil, fmt.Errorf("create unified storage index gRPC connection: %w", err)
		}
	}

	// Create the client
	resourceClientCfg.AllowInsecure = unifiedStorageSec.Key("allow_insecure").MustBool(false)
	resourceClientCfg.Audiences = unifiedStorageSec.Key("audiences").Strings("|")

	client, err := resource.NewRemoteResourceClient(tracer, conn, indexConn, resourceClientCfg)
	if err != nil {
		return nil, fmt.Errorf("create unified storage client: %w", err)
	}

	return client, nil
}
// directConfigProvider is a simple RestConfigProvider that always returns the same rest.Config.
// It implements apiserver.RestConfigProvider.
type directConfigProvider struct {
	cfg *rest.Config
}

func NewDirectConfigProvider(cfg *rest.Config) apiserver.RestConfigProvider {
	return &directConfigProvider{cfg: cfg}
}

func (r *directConfigProvider) GetRestConfig(ctx context.Context) (*rest.Config, error) {
	return r.cfg, nil
}