1 change: 1 addition & 0 deletions .github/prompts/review-docs.prompt.md
@@ -15,6 +15,7 @@ Review the documentation for clarity, completeness, and accuracy.
- Known product names should be capitalized consistently throughout the documentation.
- Changes in docs should be reflected in the glossary, if the changes are related to terms/concepts that are unique to
COS or charmed observability (don't redefine all/general terms).
- DO NOT use prompt marks (e.g. $ or #) in code samples.
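  For example, a sample should read (with `juju status` standing in for any command):

  ```
  juju status
  ```

  rather than `$ juju status`, so readers can copy the line verbatim and `#` is not mistaken for a root prompt or a comment.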

## Context

4 changes: 2 additions & 2 deletions docs/explanation/telemetry/logging-architecture.md
@@ -89,7 +89,7 @@ relations:
This results in an auto-rendered Promtail config file with three scrape jobs, one for each "filename":

```bash
-$ juju ssh --container postgresql pgsql/0 cat /etc/promtail/promtail_config.yaml
+juju ssh --container postgresql pgsql/0 cat /etc/promtail/promtail_config.yaml
```

```yaml
@@ -163,7 +163,7 @@ relations:
This results in an auto-generated `/etc/otelcol/config.d/otelcol_0.yaml` config file with juju topology labels and the default scrape jobs for `/var/log/**/*log` and `journalctl`:

```bash
-$ juju ssh otelcol/0 cat /etc/otelcol/config.d/otelcol_0.yaml
+juju ssh otelcol/0 cat /etc/otelcol/config.d/otelcol_0.yaml
```

```yaml
6 changes: 3 additions & 3 deletions docs/explanation/telemetry/telemetry-labels.md
@@ -22,9 +22,9 @@ By convention, applications expose labeled metrics under a [`/metrics` endpoint]
For example, you can run the prometheus application and curl its `:9090/metrics` endpoint to obtain the metrics exposed by the process.

```bash
-$ sudo snap install prometheus
+sudo snap install prometheus

-$ curl localhost:9090/metrics
+curl localhost:9090/metrics

# -- snip --

@@ -69,7 +69,7 @@ scrape_configs:
Labels that are specified under a `static_configs` entry are automatically attached to all metrics scraped from the targets:

```bash
-$ curl -s --data-urlencode 'match[]={__name__="prometheus_http_requests_total"}' localhost:9090/api/v1/series | jq '.data'
+curl -s --data-urlencode 'match[]={__name__="prometheus_http_requests_total"}' localhost:9090/api/v1/series | jq '.data'
[
{
"__name__": "prometheus_http_requests_total",
2 changes: 1 addition & 1 deletion docs/how-to/configure-and-tune/disable-charmed-rules.md
@@ -24,7 +24,7 @@ using a boolean configuration option, called `forward_alert_rules`:
For example, to disable forwarding of all alert rules from OpenTelemetry Collector:

```
-$ juju config opentelemetry-collector forward_alert_rules=false
+juju config opentelemetry-collector forward_alert_rules=false
```

## Silence charmed rules using alertmanager configuration
@@ -52,7 +52,7 @@ graph LR
We can specify the `drop` action via a config option for the [scrape-config charm](https://charmhub.io/prometheus-scrape-config-k8s):

```shell
-$ juju config sc metric_relabel_configs="$(cat <<EOF
+juju config sc metric_relabel_configs="$(cat <<EOF
- source_labels: ["__name__"]
regex: "scrape_samples_.+"
action: "drop"
44 changes: 22 additions & 22 deletions docs/how-to/integrate/add-tracing-to-cos-lite.md
@@ -15,7 +15,7 @@ In the same Juju model as you have COS Lite deployed, deploy the ``tempo-coordin
using the following command:

```bash
-$ juju deploy tempo-coordinator-k8s tempo \
+juju deploy tempo-coordinator-k8s tempo \
--channel edge \
--trust
```
@@ -28,7 +28,7 @@ with the worker nodes directly.
## Deploy the Tempo Worker

```bash
-$ juju deploy tempo-worker-k8s tempo-worker \
+juju deploy tempo-worker-k8s tempo-worker \
--channel edge \
--trust
```
@@ -47,7 +47,7 @@ If you don't have an s3 bucket ready at hand, follow [this guide](https://discou
Once you're done deploying ``minio`` and ``s3``, you can run:

```bash
-$ juju integrate tempo s3
+juju integrate tempo s3
```

And wait for the `tempo` application to go to `active/idle`.
@@ -56,7 +56,7 @@ And wait for the `tempo` application to go to `active/idle`.
## Integrate coordinator and workers

```bash
-$ juju integrate tempo tempo-worker
+juju integrate tempo tempo-worker
```

At this point your `juju status` should look like this:
@@ -73,29 +73,29 @@ Coordinator is reporting 'degraded' because not all roles are assigned in the re
You can enable self-monitoring for ``tempo`` by integrating it with the other COS Lite components.

```bash
-$ juju integrate loki:logging tempo:logging
-$ juju integrate s3:s3-credentials tempo:s3
-$ juju integrate tempo:grafana-dashboard grafana:grafana-dashboard
-$ juju integrate tempo:grafana-source grafana:grafana-source
-$ juju integrate tempo:metrics-endpoint prometheus:metrics-endpoint
-$ juju integrate tempo:tempo-cluster tempo-worker:tempo-cluster
-$ juju integrate traefik:traefik-route tempo:ingress
+juju integrate loki:logging tempo:logging
+juju integrate s3:s3-credentials tempo:s3
+juju integrate tempo:grafana-dashboard grafana:grafana-dashboard
+juju integrate tempo:grafana-source grafana:grafana-source
+juju integrate tempo:metrics-endpoint prometheus:metrics-endpoint
+juju integrate tempo:tempo-cluster tempo-worker:tempo-cluster
+juju integrate traefik:traefik-route tempo:ingress
```

Similarly, you can enable tracing in COS Lite by integrating the COS Lite charms that support
it to `tempo` over the `tracing` relation:

```bash
-$ juju integrate tempo:tracing alertmanager:tracing
-$ juju integrate tempo:tracing catalogue:tracing
-$ juju integrate tempo:tracing traefik:charm-tracing
-$ juju integrate tempo:tracing traefik:workload-tracing
-$ juju integrate tempo:tracing loki:charm-tracing
-$ juju integrate tempo:tracing loki:workload-tracing
-$ juju integrate tempo:tracing grafana:charm-tracing
-$ juju integrate tempo:tracing grafana:workload-tracing
-$ juju integrate tempo:tracing prometheus:charm-tracing
-$ juju integrate tempo:tracing prometheus:workload-tracing
+juju integrate tempo:tracing alertmanager:tracing
+juju integrate tempo:tracing catalogue:tracing
+juju integrate tempo:tracing traefik:charm-tracing
+juju integrate tempo:tracing traefik:workload-tracing
+juju integrate tempo:tracing loki:charm-tracing
+juju integrate tempo:tracing loki:workload-tracing
+juju integrate tempo:tracing grafana:charm-tracing
+juju integrate tempo:tracing grafana:workload-tracing
+juju integrate tempo:tracing prometheus:charm-tracing
+juju integrate tempo:tracing prometheus:workload-tracing
```

```{note}
@@ -107,7 +107,7 @@ You can also achieve the same by running ``jhack imatrix fill``.
If you have a charm offering a `certificates` endpoint such as [`self-signed-certificates`](https://charmhub.io/self-signed-certificates), you can integrate it with `tempo`:

```bash
-$ juju integrate tempo:certificates ca:certificates
+juju integrate tempo:certificates ca:certificates
```

to enable traces to be sent to `tempo` over HTTPS (or gRPC).
15 changes: 7 additions & 8 deletions docs/how-to/integrate/configure-scrape-jobs.md
@@ -22,15 +22,15 @@ Deploying the [Prometheus Scrape Config charm](https://charmhub.io/prometheus-sc
deploy it from the edge channel:

```bash
-$ juju deploy prometheus-scrape-config-k8s --channel latest/edge
+juju deploy prometheus-scrape-config-k8s --channel latest/edge
```

Then relate it to the application you want to scrape, in this case
[the Zinc charm](https://charmhub.io/zinc-k8s), as well as to Prometheus itself:

```bash
-$ juju relate prometheus-scrape-config-k8s zinc-k8s
-$ juju relate prometheus-scrape-config-k8s prometheus-k8s
+juju relate prometheus-scrape-config-k8s zinc-k8s
+juju relate prometheus-scrape-config-k8s prometheus-k8s
```

With that done, you'll now be able to tweak the configuration of the scrape job.
@@ -40,7 +40,7 @@ With that done, you'll now be able to tweak the configuration of the scrape job.
For a list of configuration options and explanations about what they do as well as what their defaults are, run the `juju config` command without any configuration option:

```
-$ juju config prometheus-scrape-config-k8s
+juju config prometheus-scrape-config-k8s
```

## Changing the Configuration
@@ -49,7 +49,7 @@ Let's go ahead and have a look at how our scrape job currently looks:

```bash

-$ juju show-unit prometheus-k8s/0
+juju show-unit prometheus-k8s/0

...

@@ -85,14 +85,13 @@ $ juju show-unit prometheus-k8s/0
Then, we will set the `scrape_interval` in the prometheus-scrape-config-k8s charm:

```bash
-$ juju config prometheus-scrape-config-k8s scrape_interval=2m
+juju config prometheus-scrape-config-k8s scrape_interval=2m
```

Let's have a look again:

```bash

-$ juju show-unit prometheus-k8s/0
+juju show-unit prometheus-k8s/0

...

18 changes: 9 additions & 9 deletions docs/how-to/integrate/deploy-s3-integrator-and-minio.md
@@ -18,12 +18,12 @@ single-node configuration, it is suitable for providing an S3 storage backend fo
This is [a small Python script](https://raw.githubusercontent.com/canonical/tempo-coordinator-k8s-operator/main/scripts/deploy_minio.py) that deploys `minio`, `s3-integrator`, configures them and provisions a bucket for you to use.

```bash
-$ juju switch cos # select the model where you have COS-lite deployed
-$ sudo snap install astral-uv --classic # this is how we recommend to run the script, but you're free to do it your way
-$ curl https://raw.githubusercontent.com/canonical/tempo-coordinator-k8s-operator/main/scripts/deploy_minio.py -o deploy_minio.py
+juju switch cos # select the model where you have COS-lite deployed
+sudo snap install astral-uv --classic # this is how we recommend to run the script, but you're free to do it your way
+curl https://raw.githubusercontent.com/canonical/tempo-coordinator-k8s-operator/main/scripts/deploy_minio.py -o deploy_minio.py

# review the script prior to executing it, then:
-$ MINIO_BUCKET="tempo" uv run --with minio deploy_minio.py
+MINIO_BUCKET="tempo" uv run --with minio deploy_minio.py
```

Once the command exits zero, your storage is ready and you can integrate with the `s3` app.
@@ -39,7 +39,7 @@ The `secret-key` must be at least 8 characters long. If not, Minio will crash.
```

```bash
-$ juju deploy minio \
+juju deploy minio \
--channel edge \
--trust \
--config access-key=<access-key> \
@@ -51,7 +51,7 @@ And wait for it to go to `active/idle`.
### 2. Deploy the S3 Integrator

```bash
-$ juju deploy s3-integrator s3 \
+juju deploy s3-integrator s3 \
--channel edge \
--trust
```
@@ -60,7 +60,7 @@ Wait for the `s3` app to go to `blocked/idle`.
The `s3` app will go into `blocked` status until you run the `sync-s3-credentials` action to give it access to `minio`.

```bash
-$ juju run s3/leader sync-s3-credentials \
+juju run s3/leader sync-s3-credentials \
access-key=<access-key> \
secret-key=<secret-key>
```
@@ -83,7 +83,7 @@ From there you should be able to create a bucket with a few clicks. See [this gu
Alternatively, you can use the Minio Python SDK.

```bash
-$ pip install minio
+pip install minio
```

Then execute this script:
@@ -111,7 +111,7 @@ if not found:
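The script itself is elided in this view; at its core it creates the bucket only if it does not already exist. A minimal sketch of that logic (the helper name is hypothetical, written against the Minio SDK's real `bucket_exists`/`make_bucket` methods):

```python
def ensure_bucket(client, name):
    """Create bucket `name` if missing; return True if this call created it.

    `client` is duck-typed against minio.Minio, which exposes
    bucket_exists(name) and make_bucket(name).
    """
    if not client.bucket_exists(name):
        client.make_bucket(name)
        return True
    return False
```

With a real client you would construct `minio.Minio(endpoint, access_key=..., secret_key=..., secure=False)` and call `ensure_bucket(client, "tempo")`.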
Now give the `s3` app access to the bucket.

```
-$ juju config s3 \
+juju config s3 \
endpoint=minio-0.minio-endpoints.<Juju model name>.svc.cluster.local:9000 \
bucket=<bucket name>
```
2 changes: 1 addition & 1 deletion docs/how-to/integrate/exposing-a-metrics-endpoint.md
@@ -19,7 +19,7 @@ amount of code, enable your charm to get scraped by a charm like [prometheus-k8s
Fetch the `prometheus_scrape` library using the `charmcraft` command:

```bash
-$ charmcraft fetch-lib charms.prometheus_k8s.v0.prometheus_scrape
+charmcraft fetch-lib charms.prometheus_k8s.v0.prometheus_scrape
```

## Import the Library
@@ -27,7 +27,7 @@ We recommend to host Opentelemetry Collector as close as possible to the workloa
We recommend installing Opentelemetry Collector via a handy snap we maintain:

```bash
-$ sudo snap install opentelemetry-collector
+sudo snap install opentelemetry-collector
```

```{note}
@@ -53,7 +53,7 @@ In other words, Traefik's own URL needs to be stable.
In the Juju model where COS Lite is deployed, run the command below to find out the URL to the proxied endpoint.

```
-$ juju run traefik/0 show-proxied-endpoints
+juju run traefik/0 show-proxied-endpoints
```

Assuming you have [configured the Traefik charm](https://github.com/canonical/traefik-k8s-operator#configurations) to use an external host name, for example `"traefik.url"`, you will see something like:
@@ -75,7 +75,7 @@ At this point you will need to follow [the documentation on how to configure Ope
Once you've written your finished configuration to `/etc/otelcol/config.d/otelcol_0.yaml`, you'll be able to restart the snap using the following command:

```bash
-$ sudo snap restart opentelemetry-collector
+sudo snap restart opentelemetry-collector
```

And with that, you are done! Good job, you got this!