* gRPC Server: Instrument requests made to the server.
Expose metrics from the gRPC server to monitor failed responses
and response latency. Uses code from the already-vendored weaveworks/common.
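A minimal sketch of this kind of server-side instrumentation, using the
grpc-ecosystem Prometheus interceptors rather than the exact weaveworks/common
helpers referenced above:
    package main

    import (
        "net"

        grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
        "google.golang.org/grpc"
    )

    func main() {
        // Record latency histograms in addition to the default request counters.
        grpc_prometheus.EnableHandlingTimeHistogram()

        srv := grpc.NewServer(
            grpc.UnaryInterceptor(grpc_prometheus.UnaryServerInterceptor),
            grpc.StreamInterceptor(grpc_prometheus.StreamServerInterceptor),
        )

        // Initialize per-method metrics (grpc_server_handled_total etc.)
        // for every service registered on srv.
        grpc_prometheus.Register(srv)

        lis, err := net.Listen("tcp", ":9095")
        if err != nil {
            panic(err)
        }
        _ = srv.Serve(lis)
    }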
* Review comments.
This commit changes extractEvalString to sort NumberCaptureValues
in ascending order of Var before building the output string. This
means that users will see EvaluationString in a consistent order,
and it also makes it possible to assert its output in tests.
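A rough sketch of the sort, with the struct and output format assumed from
the commit text rather than copied from Grafana's alerting code:
    package eval

    import (
        "fmt"
        "sort"
        "strings"
    )

    // NumberCaptureValue is assumed from the commit text; the real
    // struct in Grafana may carry more fields.
    type NumberCaptureValue struct {
        Var   string
        Value float64
    }

    // extractEvalString renders captures in ascending Var order so the
    // resulting EvaluationString is deterministic and easy to assert on.
    func extractEvalString(captures []NumberCaptureValue) string {
        sort.Slice(captures, func(i, j int) bool {
            return captures[i].Var < captures[j].Var
        })
        parts := make([]string, 0, len(captures))
        for _, c := range captures {
            parts = append(parts, fmt.Sprintf("[ var='%s' value=%v ]", c.Var, c.Value))
        }
        return strings.Join(parts, " ")
    }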
* chore: wrap HTTP server in a dskit module
Much of the logic here comes from the POC branch, so:
- credit for this work goes to everyone else
- mistakes are my own
This is needed to support microservice deployment modes.
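For illustration, wrapping an http.Server as a dskit service might look
roughly like this (the real module wiring in Grafana is more involved):
    package server

    import (
        "context"
        "net/http"

        "github.com/grafana/dskit/services"
    )

    // newHTTPService runs srv until the service context is cancelled,
    // then shuts it down gracefully.
    func newHTTPService(srv *http.Server) services.Service {
        running := func(ctx context.Context) error {
            errCh := make(chan error, 1)
            go func() { errCh <- srv.ListenAndServe() }()
            select {
            case <-ctx.Done():
                return srv.Shutdown(context.Background())
            case err := <-errCh:
                return err
            }
        }
        return services.NewBasicService(nil, running, nil)
    }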
* added an arbitrarily chosen 30-second timeout
* enforce role sync unless skip org role sync is enabled
* move errors to errors file and set codes
* fix docs and defaults
* remove legacy parameter
* support fall-through to the token API in generic OAuth
* fix error handling for generic_oauth
* Update pkg/login/social/generic_oauth.go
Co-authored-by: Gabriel MABILLE <gamab@users.noreply.github.com>
* Update pkg/login/social/gitlab_oauth_test.go
Co-authored-by: Gabriel MABILLE <gamab@users.noreply.github.com>
* Update pkg/login/social/gitlab_oauth_test.go
Co-authored-by: Gabriel MABILLE <gamab@users.noreply.github.com>
---------
Co-authored-by: Gabriel MABILLE <gamab@users.noreply.github.com>
* Alerting: No longer silence paused alerts during legacy migration
Now that we migrate paused legacy alerts to paused UA alert rules, we no longer need to silence them.
* add grafana-apiserver
* remove watchset & move provisioning and HTTP server to background services
* remove scheme
* otel fixes (#70874)
* remove module ProvideRegistry test
* use certgenerator from apiserver package
* Keep collector/pdata from going to v1.0.0-rc8 (as Tempo 1.5.1 would have it)
* Refactor Tempo datasource backend to support multiple queryData types.
Added a traceId query type that is set when performing the request but doesn't map to a tab.
* WIP data is reaching the frontend
* WIP
* Use channels and goroutines
* Some fixes
* Simplify backend code.
Return traces, metrics, state and error in a dataframe.
Shared state type between FE and BE.
Use getStream() instead of getQueryData()
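Roughly, the streaming loop described above could look like this, using the
plugin SDK's data package; streamResult and the field layout are assumptions,
not the actual Tempo datasource types:
    package tempo

    import (
        "context"

        "github.com/grafana/grafana-plugin-sdk-go/data"
    )

    // streamResult is a stand-in for whatever each streamed update carries.
    type streamResult struct {
        TraceJSON string
        State     string
        Err       error
    }

    // runStream drains results produced by a reader goroutine and packs
    // the trace, state and any error into one frame per update.
    func runStream(ctx context.Context, results <-chan streamResult, send func(*data.Frame) error) error {
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            case r, ok := <-results:
                if !ok {
                    return nil // stream finished cleanly
                }
                frame := data.NewFrame("stream",
                    data.NewField("trace", nil, []string{r.TraceJSON}),
                    data.NewField("state", nil, []string{r.State}),
                )
                if r.Err != nil {
                    frame.AppendNotices(data.Notice{Severity: data.NoticeSeverityError, Text: r.Err.Error()})
                }
                if err := send(frame); err != nil {
                    return err
                }
            }
        }
    }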
* Handle errors in frontend
* Update Tempo and use same URL for RPC and HTTP
* Cleanup backend code
* Merge main
* Create grpc client only with host and authenticate
* Cleanup
* Add streaming to TraceQL Search tab
* Fix merge conflicts
* Added tests for processStream
* make gen-cue
* goimports
* lint
* Cleanup go.mod
* Comments
* Addressing PR comments
* Fix streaming for TraceQL search tab
* Added streaming kill switch as the disableTraceQLStreaming feature toggle
* Small comment
* Fix conflicts
* Correctly capture and send all errors as a DF to the client
* Fix infinite error loop
* Fix merge conflicts
* Fix test
* Update deprecated import
* Fix feature toggles gen
* Fix merge conflicts
* First changes
* WIP docs
* Align current tests
* Add test for UseRefreshToken
* Update docs
* Fix
* Remove unnecessary AuthCodeURL from generic_oauth
* Change GitHub to disable use_refresh_token by default
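For example, an instance that still wants refresh tokens for GitHub would now
have to opt in explicitly (key name per Grafana's OAuth settings; check the
docs for your version):
    [auth.github]
    use_refresh_token = true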
* introduce a new ML node type and implement an outlier command that uses the ML plugin as a source of data.
* add an mlExpressions feature flag that guards the feature
This PR adds /api/gnet to the list of ignored paths in the gzip middleware.
Without this, when gzip is enabled (`server.enable_gzip = true`), responses
from the gnet proxy are compressed twice: once by grafana.com and once by
Grafana itself. With this change we only do one round of compression for these
endpoints.
To test this out, try a request like this with `server.enable_gzip = true`
(after setting `GCOM_TOKEN` to a valid grafana.com token; you may need to
change the 'bsull' slug, too):
    curl -v --user admin:admin \
      -H "X-Api-Key: $GCOM_TOKEN" \
      -H 'Accept-Encoding: gzip' \
      localhost:3000/api/gnet/instances/bsull/provisioned-plugins/grafana-ml-app | gzip -d
Note that there are two Content-Encoding: gzip headers before this PR, and
the output is still compressed even after the `gzip -d`. After this PR things
look as expected.
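For illustration, the path-exclusion idea in middleware form; the type and
function names here are hypothetical, not Grafana's actual gzip middleware:
    package middleware

    import (
        "compress/gzip"
        "io"
        "net/http"
        "strings"
    )

    // gzipResponseWriter redirects the response body through the gzip writer.
    type gzipResponseWriter struct {
        http.ResponseWriter
        zw io.Writer
    }

    func (g gzipResponseWriter) Write(b []byte) (int, error) { return g.zw.Write(b) }

    // Gzipper compresses responses except for ignored path prefixes such as
    // /api/gnet, whose upstream (grafana.com) already compresses them.
    func Gzipper(next http.Handler) http.Handler {
        ignored := []string{"/api/gnet"}
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            for _, p := range ignored {
                if strings.HasPrefix(r.URL.Path, p) {
                    next.ServeHTTP(w, r) // already compressed upstream; pass through
                    return
                }
            }
            w.Header().Set("Content-Encoding", "gzip")
            zw := gzip.NewWriter(w)
            defer zw.Close()
            next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, zw: zw}, r)
        })
    }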
* add data source uid to QueryError
* add error and datasource uid to tracing
* split queryDataResponseToResults into two functions: one to extract the frame from the result, and another to convert it
* propagate logger with context to convertDataFramesToResults
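A sketch of the resulting shape, with names and signatures assumed from the
commit text rather than copied from Grafana:
    package eval

    import (
        "context"
        "fmt"

        "github.com/grafana/grafana-plugin-sdk-go/backend"
        "github.com/grafana/grafana-plugin-sdk-go/data"
    )

    // extractFrames pulls the frames for one refID out of the query response,
    // surfacing any datasource error alongside the refID.
    func extractFrames(resp *backend.QueryDataResponse, refID string) (data.Frames, error) {
        dr, ok := resp.Responses[refID]
        if !ok {
            return nil, fmt.Errorf("no response for refID %q", refID)
        }
        if dr.Error != nil {
            return nil, fmt.Errorf("datasource error for refID %q: %w", refID, dr.Error)
        }
        return dr.Frames, nil
    }

    // convertDataFramesToResults converts the extracted frames; the logger is
    // carried in ctx so per-request fields (like the datasource uid) stay
    // attached without threading a logger argument through every call.
    func convertDataFramesToResults(ctx context.Context, frames data.Frames) ([]float64, error) {
        _ = ctx // conversion elided; read values out of each frame here
        return nil, nil
    }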