Prometheus for Go on GKE

This took me a while to figure out, so I thought I would write it down in the hopes it saves someone else time later.

I have a Golang app that runs on Kubernetes (in my case GKE) that I want to add some basic monitoring to. I was hoping for a solution that roughly meets these requirements:

  • No vendor-specific code in my application
  • Easy support for exposing custom metrics
  • Since I’m running on GKE, it would be nice for the metrics to show up in the StackDriver console

Searching and reading examples led me down two routes: OpenTelemetry/OpenCensus and Prometheus. I chose Prometheus in the end, and explain why below.

OpenTelemetry/OpenCensus

An exporter sends traces and metrics to any backend that is capable of consuming them. The exporter itself can change without requiring a change in your client code. This is what makes OpenCensus truly vendor agnostic. Collect traces and metrics once, export simultaneously to various backends!

I want to be able to export to StackDriver when running on GKE/GCP, without coupling my code directly to those APIs. However, the examples showing how to do this fall short of that goal, requiring you to import Stackdriver libraries directly in order to export there:

Import stackdriver client libraries directly to export metrics there.

Prometheus

In this world, the vendor-specific API handling is wrapped up in the configuration for the Prometheus scraper, rather than living directly in your application code. This approach sounded better to me, but it took a while to figure out how to best install Prometheus, configure it to scrape my Pods, and then export to StackDriver.

Here’s roughly what I ended up with:

  • Create a new namespace for Prometheus to run in
  • Customize the Helm Chart to include the StackDriver exporter as a Sidecar
  • Install Prometheus there from the Helm Chart
  • Use Workload Identity to grant the Prometheus service account permissions to upload metrics to StackDriver
  • Configure my Pods with the required annotations so Prometheus can figure out how to scrape them.
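The Workload Identity step can be sketched with gcloud. This is an illustrative sequence, not the exact commands from my setup: the Google service account name prometheus-sd is hypothetical, and I'm assuming the chart's default Kubernetes service account prometheus-server in a prometheus namespace.

```shell
# Google service account for uploading metrics (name "prometheus-sd" is hypothetical).
gcloud iam service-accounts create prometheus-sd --project <project id>

# Let it write metrics to StackDriver.
gcloud projects add-iam-policy-binding <project id> \
  --member "serviceAccount:prometheus-sd@<project id>.iam.gserviceaccount.com" \
  --role roles/monitoring.metricWriter

# Allow the Kubernetes service account to impersonate it via Workload Identity.
gcloud iam service-accounts add-iam-policy-binding \
  prometheus-sd@<project id>.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:<project id>.svc.id.goog[prometheus/prometheus-server]"

# Point the Kubernetes service account at the Google one.
kubectl annotate serviceaccount prometheus-server -n prometheus \
  iam.gke.io/gcp-service-account=prometheus-sd@<project id>.iam.gserviceaccount.com
```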

The most interesting part here is probably the Prometheus Helm Chart customization. This is what ended up working for me:
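A minimal sketch of that values.yaml, assuming the chart's server.sidecarContainers hook and Google's stackdriver-prometheus-sidecar image (the exact value names and shapes can differ between chart versions, so check the chart's values reference):

```yaml
server:
  sidecarContainers:
    stackdriver-sidecar:
      image: gcr.io/stackdriver-prometheus-sidecar/stackdriver-prometheus-sidecar:<version>
      args:
        - --stackdriver.project-id=<project id>
        - --stackdriver.kubernetes.location=<region>
        - --stackdriver.kubernetes.cluster-name=<cluster name>
        - --prometheus.wal-directory=/data/wal
      volumeMounts:
        # The chart's server data volume, where Prometheus writes its WAL.
        - name: storage-volume
          mountPath: /data
```

The sidecar tails Prometheus's write-ahead log from the shared data volume and forwards the samples to StackDriver, which is what keeps the vendor-specific bits out of the application.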

Replace the <project id>, <region>, and <cluster name> placeholders appropriately, then place this in a file named values.yaml, and use that to customize the Helm chart during installation with the -f flag:

$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm install prometheus prometheus-community/prometheus -n prometheus -f values.yaml

Then, annotate your pods with the standard Prometheus values:

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: /metrics
  prometheus.io/port: "2112"

You should start to see custom metrics exported into StackDriver! They’ll end up with a name like external/prometheus/<metric name>.

I used the Prometheus Go client to expose my metrics. It’s pretty simple: you create a variable for your metric type, then add data to it. This example is really all I needed.

Create the metric:

var (
    cpuTemp = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "cpu_temperature_celsius",
        Help: "Current temperature of the CPU.",
    })
)

Use it:

cpuTemp.Set(65.3)

Good luck!

Software Engineer at Google
