Technology
03 March 2025

Monitoring Golang Services With Prometheus: Pull And Push Models

Prometheus supports both pull and push models for collecting metrics from services.

Monitoring applications effectively is more important than ever, especially for developers and system administrators striving for optimal performance. One powerful tool for this purpose is Prometheus, well-regarded for its ability to gather and process numeric data (metrics) from applications. Prometheus helps answer fundamental questions such as "Is my service performing efficiently?" and "What are the performance bottlenecks?" This article explores how Prometheus uses both the pull and push models to monitor Golang services efficiently.

The first approach to collecting metrics is known as the pull model. Here, Prometheus actively pulls metrics from your application through HTTP requests. This method is commonly used for long-running applications and web services. Setting it up begins with installing the required Prometheus client libraries:

go get github.com/prometheus/client_golang/prometheus
go get github.com/prometheus/client_golang/prometheus/promhttp

Once the libraries are installed, users define their metrics. A simple example involves tracking the total count of HTTP requests. This could be done as follows:

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var httpRequestsTotal = promauto.NewCounter(prometheus.CounterOpts{
    Name: "http_requests_total",
    Help: "Total number of HTTP requests",
})
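
The counter only produces useful data once something increments it. As a minimal sketch (the helloHandler endpoint below is illustrative, not part of the original example), a request handler can bump the counter on every call; registering it in main, for instance with http.HandleFunc("/hello", helloHandler), wires it into the server shown next:

import (
    "fmt"
    "net/http"
)

// helloHandler is a hypothetical endpoint handler; it increments the counter
// defined above on every request before writing a response.
func helloHandler(w http.ResponseWriter, r *http.Request) {
    httpRequestsTotal.Inc()
    fmt.Fprintln(w, "hello")
}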

Next, the application exposes its metrics over HTTP at the /metrics endpoint. Because promauto registers the counter with the default registry, the standard promhttp handler picks it up automatically. Here's how to set it up:

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    // Expose every metric registered with the default registry at /metrics.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8080", nil))
}

To complete the setup, Prometheus must be configured to collect metrics from the service using prometheus.yml:

scrape_configs:
  - job_name: "example_service"
    static_configs:
      - targets: ["localhost:8080"]

Prometheus will now automatically request http://localhost:8080/metrics at every scrape interval to collect fresh data.

On the other hand, the push model has the service send its metrics to the Pushgateway, which holds them and exposes them for Prometheus to scrape. This model is useful mainly for short-lived batch jobs, or for tasks that Prometheus cannot reach directly (for example, due to unstable network access). A minimal setup looks like this:

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/push"
)

func main() {
    // Use a dedicated registry so only this job's metrics are pushed.
    registry := prometheus.NewRegistry()
    jobCounter := prometheus.NewCounter(prometheus.CounterOpts{
        Name: "job_execution_count",
        Help: "Number of executed jobs",
    })
    registry.MustRegister(jobCounter)

    jobCounter.Inc()

    // Push the registry's metrics to the Pushgateway (default port 9091).
    err := push.New("http://localhost:9091", "my_service_or_job").
        Gatherer(registry).
        Grouping("instance", "worker_1").
        Push()
    if err != nil {
        panic(err)
    }
}
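
Note that Push replaces every metric previously pushed for this job and grouping (it issues an HTTP PUT), while the client also offers Add, which issues a POST and only overwrites metrics with the same names, leaving the rest of the group untouched. As a sketch, reusing the registry, job name, and grouping labels from the example above:

// Add only overwrites metrics with matching names in this group;
// Push would replace the entire group.
if err := push.New("http://localhost:9091", "my_service_or_job").
    Gatherer(registry).
    Grouping("instance", "worker_1").
    Add(); err != nil {
    panic(err)
}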

Finally, configure Prometheus to scrape the data from the Pushgateway:

scrape_configs:
  - job_name: "pushgateway"
    static_configs:
      - targets: ["localhost:9091"]

But which model should you choose? The pull model is usually recommended for web services, APIs, and long-running applications: configuration is simple, there are fewer moving parts, and Prometheus handles stale data automatically, since metrics for a target that stops responding disappear and the target is marked as down. Its main limitation is very short-lived tasks, which may finish before they are ever scraped.

Conversely, the push model offers flexibility for short-lived processes such as Lambda functions or batch jobs, letting them record metrics before they exit. Nevertheless, it introduces some complications: if the service crashes, its old metrics remain stored in the Pushgateway, and Prometheus cannot tell whether the service is still alive. Outdated metrics have to be removed explicitly, for example via the client's Pusher Delete method or an HTTP DELETE against the Pushgateway's API, since the Pushgateway never expires metrics on its own.
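
As a minimal cleanup sketch, assuming the same Pushgateway address, job name, and grouping labels used in the push example above, the Delete call removes every metric stored for that group:

import (
    "github.com/prometheus/client_golang/prometheus/push"
)

func main() {
    // Delete all metrics of this job/grouping combination from the Pushgateway.
    err := push.New("http://localhost:9091", "my_service_or_job").
        Grouping("instance", "worker_1").
        Delete()
    if err != nil {
        panic(err)
    }
}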

Introducing the Pushgateway also means adding one more component between the services and Prometheus, which must itself be deployed, monitored, and maintained. And because every push goes through this single central point, many services pushing metrics frequently can overload it, whereas with the pull model Prometheus scrapes each service directly and the load stays distributed.

Overall, Prometheus remains a powerful and reliable tool for service monitoring. For the majority of applications, the pull model is typically the best fit—it is straightforward, effective, and provides up-to-date data without added complexity. Yet, when dealing with ephemeral tasks, the push model via Pushgateway may be prudent for collecting metrics before task completion. Choosing the right approach ensures excellent observability and maintainability within monitoring systems.

Happy monitoring!