Prometheus Python client gauge example with asyncio

Lately, I’ve been hearing a lot about Prometheus and its powerful alerting, querying, and dimensional-data capabilities for time series. I’ve had some experience with InfluxDB so far, and even though both backends handle time series data, there are significant differences, including the architecture/model they use to communicate with clients: InfluxDB uses a push model, whereas Prometheus uses a pull model. As always in engineering, there are trade-offs to consider for your use case. This post, though, is simply my first attempt at using the Prometheus client Python library with asyncio to monitor a random gauge metric in my code; I figured it might be useful for beginners. So, let’s get started!

Setting up the development environment

To keep things simple, I’ll run Prometheus in a Docker container and run the code directly on my local machine, so the container will use host network mode. You can find the docker-compose.yml file here. The prometheus.yml file is configured with these parameters:

```yaml
global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets: []
      scheme: http
      timeout: 10s
scrape_configs:
  - job_name: prometheus
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
          - localhost:9090
  - job_name: python_app
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /
    scheme: http
    static_configs:
      - targets:
          - localhost:8000
```

In short, most of these parameters are defaults; the exception is the added job_name: python_app entry, which points at the Python client app that will expose its metrics on the HTTP endpoint /. The prometheus.yml file will be mounted into the Prometheus container’s configuration directory.
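For reference, a minimal docker-compose.yml along these lines should work. This is a sketch rather than the exact file from the repo: the image tag matches the container shown later, and /etc/prometheus/prometheus.yml is the default config path in the official image.

```yaml
version: '2'
services:
  prometheus:
    image: prom/prometheus:v2.0.0
    # Host network mode, so the container can scrape the
    # Python app running directly on the local machine.
    network_mode: host
    volumes:
      # Mount the scrape config into the container's config directory.
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
```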

Installing the requirements

Before spinning up the container, I’ll install the official prometheus_client Python library on the machine that will run the Python application I’ll describe shortly:

```shell
pip install prometheus_client
```

Compose up

To bring this environment up, assuming you’ve got this folder from the forwardingflows blog repo on GitHub:

```shell
docker-compose up -d
```

Python client app with asyncio

This Python client is quite simple. In a nutshell, it imports prometheus_client, starts the HTTP server on port 8000, and instantiates the asyncio event loop with two tasks. Each task computes a specific rate, which is the time series I’m interested in monitoring, named compute_gauge_rate. The rate for each task has an initial value, and every second the gauge value is adjusted by a random delta between delta_min and delta_max. The gauge is also set with the label task_name to identify each task, so over time this gauge will vary per task. In this example there are two tasks, x and y, which start at 50 and 25, respectively. This is the code of the client if you want to check the details:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import asyncio
import logging
import random

import prometheus_client as prom

format = "%(asctime)s - %(levelname)s [%(name)s] %(threadName)s %(message)s"
logging.basicConfig(level=logging.INFO, format=format)

g1 = prom.Gauge('compute_gauge_rate', 'Random gauge', labelnames=['task_name'])


async def compute_rate(name, rate, delta_min=-100, delta_max=100):
    """Increases or decreases a rate based on a random delta value
    which varies from "delta_min" to "delta_max".

    :name: task_id
    :rate: initial rate value
    :delta_min: lowest delta variation
    :delta_max: highest delta variation
    """
    while True:
        logging.info("name: {} value {}".format(name, rate))
        g1.labels(task_name=name).set(rate)
        rate += random.randint(delta_min, delta_max)
        await asyncio.sleep(1)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    # Start up the server to expose metrics.
    prom.start_http_server(8000)
    t0_value = 50
    tasks = [loop.create_task(compute_rate('x', rate=t0_value)),
             loop.create_task(compute_rate('y', rate=t0_value / 2))]
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        loop.close()
```
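As an aside, set() isn’t the only way to drive a gauge: the prometheus_client Gauge also exposes inc() and dec(). A hypothetical variation of the loop body (not the code from this post) could apply the delta to the gauge directly instead of tracking the rate locally:

```python
# Hypothetical variation: apply the random delta to the gauge itself
# with inc() instead of recomputing the rate locally and calling set().
delta = random.randint(delta_min, delta_max)
g1.labels(task_name=name).inc(delta)  # Gauge.inc() accepts negative amounts too
```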

Running the Python client

Assuming that you’ve already started the containers with docker-compose:

```shell
❯ docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS         PORTS   NAMES
08ceb63fbe6b   prom/prometheus:v2.0.0   "/bin/prometheus --c…"   3 seconds ago   Up 2 seconds           prometheusgauge_prometheus_1
```

Let’s execute this client:

```shell
❯ python client.py
2018-01-03 23:39:53,470 - INFO [root] MainThread name: x value 50
2018-01-03 23:39:53,470 - INFO [root] MainThread name: y value 25.0
2018-01-03 23:39:54,471 - INFO [root] MainThread name: x value 138
2018-01-03 23:39:54,471 - INFO [root] MainThread name: y value 35.0
2018-01-03 23:39:55,473 - INFO [root] MainThread name: x value 161
2018-01-03 23:39:55,473 - INFO [root] MainThread name: y value 38.0
2018-01-03 23:39:56,475 - INFO [root] MainThread name: x value 172
2018-01-03 23:39:56,476 - INFO [root] MainThread name: y value -47.0
2018-01-03 23:39:57,477 - INFO [root] MainThread name: x value 143
2018-01-03 23:39:57,478 - INFO [root] MainThread name: y value -76.0
2018-01-03 23:39:58,479 - INFO [root] MainThread name: x value 106
2018-01-03 23:39:58,480 - INFO [root] MainThread name: y value -167.0
2018-01-03 23:39:59,481 - INFO [root] MainThread name: x value 197
2018-01-03 23:39:59,482 - INFO [root] MainThread name: y value -104.0
2018-01-03 23:40:00,484 - INFO [root] MainThread name: x value 133
2018-01-03 23:40:00,484 - INFO [root] MainThread name: y value -19.0
2018-01-03 23:40:01,486 - INFO [root] MainThread name: x value 80
2018-01-03 23:40:01,486 - INFO [root] MainThread name: y value -114.0
2018-01-03 23:40:02,488 - INFO [root] MainThread name: x value 103
2018-01-03 23:40:02,488 - INFO [root] MainThread name: y value -168.0
2018-01-03 23:40:03,489 - INFO [root] MainThread name: x value 4
2018-01-03 23:40:03,490 - INFO [root] MainThread name: y value -265.0
...
<truncated output>
```

At this point the server is exposing the metrics on port 8000 at the endpoint /; you can check with curl:

```shell
❯ curl -s http://localhost:8000 | grep compute_gauge_rate
# HELP compute_gauge_rate Random gauge
# TYPE compute_gauge_rate gauge
compute_gauge_rate{task_name="y"} 35.0
compute_gauge_rate{task_name="x"} 138.0
❯ curl -s http://localhost:8000 | grep compute_gauge_rate
# HELP compute_gauge_rate Random gauge
# TYPE compute_gauge_rate gauge
compute_gauge_rate{task_name="y"} 38.0
compute_gauge_rate{task_name="x"} 161.0
❯ curl -s http://localhost:8000 | grep compute_gauge_rate
# HELP compute_gauge_rate Random gauge
# TYPE compute_gauge_rate gauge
compute_gauge_rate{task_name="y"} -47.0
compute_gauge_rate{task_name="x"} 172.0
```

As you can see, both series of this metric are represented, one with the label task_name="x" and the other with task_name="y".

Prometheus Web Interface

Now that Prometheus is collecting data, you can visit http://localhost:9090 to check the time series and execute some queries. For example, if you type compute_gauge_rate into the expression field, you’ll see the two series with the labels task_name="x" and task_name="y" plotted with their respective values on the Y-axis, as depicted in Figure 1.

Figure 1
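You can also query the same data outside the web UI through Prometheus’s HTTP API. A quick sketch with curl (the /api/v1/query endpoint and the label-matcher syntax are standard Prometheus; the exact JSON response will of course differ on your machine):

```shell
# Current value of both series of the metric
❯ curl -s 'http://localhost:9090/api/v1/query?query=compute_gauge_rate'

# Only the series for task x; -G with --data-urlencode keeps the
# curly braces in the label matcher safely URL-encoded
❯ curl -s -G http://localhost:9090/api/v1/query \
    --data-urlencode 'query=compute_gauge_rate{task_name="x"}'
```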

Conclusion

Prometheus is super flexible and has some powerful features; in this article I didn’t even scratch the surface, but it was quite easy to get started. I can’t wait to start testing some more advanced configurations and instrumenting some Python applications with Prometheus.