Lately, I’ve heard a lot about Prometheus use cases and its powerful alerting, querying, and dimensional data capabilities for time series data. I’ve had some experience with InfluxDB so far; even though both backends can work with time series data, there are significant differences, including the architecture/model they use to communicate with clients. For example, InfluxDB uses a push model, as opposed to the pull model of Prometheus. As always in engineering, there are trade-offs to consider for your use case. This post, though, is simply my first attempt at using the Prometheus client Python library with asyncio to monitor a random gauge metric in my code; I figured this might be useful for beginners. So, let’s get started!
To keep things simple, I’ll run Prometheus in a Docker container, while the code runs directly on my local machine, so I’ll use host network mode too. You can find the docker-compose.yml file here. Also, the prometheus.yml file is configured with these parameters:
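A minimal prometheus.yml along these lines (the scrape interval and target ports are assumptions on my part; adjust them to your setup):

```yaml
# Sketch of the prometheus.yml described above; intervals and
# target ports are assumptions, adjust to taste.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  # Prometheus scraping itself (a common default)
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']

  # The Python client app exposing metrics on port 8000
  - job_name: python_app
    static_configs:
      - targets: ['localhost:8000']
```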
In short, most of these parameters are defaults, except for the added entry - job_name: python_app, which points at the Python client app that will expose the metrics on its HTTP endpoint. The prometheus.yml file will be mounted into the Prometheus container’s configuration directory.
Before spinning up the container, I’ll install the official prometheus_client Python library on the machine that runs the Python application I’ll describe shortly:
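The library is published on PyPI, so a plain pip install is enough:

```shell
pip install prometheus-client
```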
To compose up this environment, assuming that you’ve got this folder from the forwardingflows blog repo on GitHub:
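From the folder containing the docker-compose.yml, bringing the environment up is the usual:

```shell
docker-compose up -d
```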
This Python client is quite simple: in a nutshell, it imports prometheus_client, starts the HTTP server on port 8000, and instantiates the asyncio event loop with two tasks. Each task computes a specific rate, which is the time series I’m interested in monitoring, named compute_gauge_rate. For each task, this rate has an initial value, and every second the gauge value is incremented or decremented by delta_max. The gauge is also set with the label name task_name to identify each particular task. As a result, over time this gauge will vary. In this example there are two tasks; task y, for instance, starts with the value 25. This is the code of the client if you want to check more details:
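The original source isn’t reproduced here, but based on the description above, a sketch of such a client could look like this (the starting value of the first task, the random distribution for delta_max, and any names beyond compute_gauge_rate and task_name are my assumptions):

```python
import asyncio
import random

from prometheus_client import Gauge, start_http_server

# The time series we want to monitor, with a task_name label to tell
# the tasks apart.
GAUGE = Gauge("compute_gauge_rate", "A randomly varying rate", ["task_name"])


async def compute_gauge_rate(task_name, initial_value, delta_max=5.0,
                             interval=1.0, iterations=None):
    """Every `interval` seconds, move the gauge up or down by up to
    `delta_max` (iterations=None means run forever)."""
    value = initial_value
    done = 0
    while iterations is None or done < iterations:
        value += random.uniform(-delta_max, delta_max)
        GAUGE.labels(task_name=task_name).set(value)
        done += 1
        await asyncio.sleep(interval)


async def main():
    # Expose the metrics endpoint on http://localhost:8000/
    start_http_server(8000)
    # Two tasks with different starting values; y starts at 25.
    await asyncio.gather(
        compute_gauge_rate("x", 0),
        compute_gauge_rate("y", 25),
    )

# To run the client: asyncio.run(main())
```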
Assuming that you’ve already started the containers with docker-compose, let’s execute this client:
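Assuming the client code is saved as client.py (the filename is my choice):

```shell
python client.py
```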
At this point the server is exposing the metrics on port 8000 at the / endpoint, which you can check with curl:
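Something along these lines (the exact sample values will differ on every request):

```shell
curl -s http://localhost:8000/
```

Among the default process metrics that prometheus_client exposes, you should see lines for the compute_gauge_rate time series with their task_name labels.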
As you can see, both time series for this metric carry a task_name key-value pair in their labels.
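If you want to see the exposition format without a running server, prometheus_client can render it directly; a small self-contained illustration (the throwaway registry and the value 25 here are just for show):

```python
from prometheus_client import CollectorRegistry, Gauge, generate_latest

# A separate registry so this demo doesn't touch the default one.
registry = CollectorRegistry()
gauge = Gauge("compute_gauge_rate", "demo gauge", ["task_name"],
              registry=registry)
gauge.labels(task_name="y").set(25)

# generate_latest() returns the same bytes the metrics endpoint serves.
print(generate_latest(registry).decode())
```

The output contains the HELP and TYPE comments followed by a sample line like compute_gauge_rate{task_name="y"} 25.0, which is exactly the format Prometheus scrapes.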
Now, since Prometheus is collecting data, you can visit http://localhost:9090 to check the time series and execute some queries. For example, if you type compute_gauge_rate, the name of the time series, you’d see two metrics distinguished by their task_name label (one of them task_name=y), with their respective values on the Y-axis, as depicted in Figure 1.
Prometheus is super flexible and has some powerful features; in this article I didn’t even scratch the surface, but it was quite easy to get started. I can’t wait to test some more advanced configurations and instrument more Python applications with Prometheus.