SERVICE MONITOR k8s

Creating a Service Monitor in Kubernetes

Expose metrics in k8s via Service Monitor

Ayush P Gupta

--

Prometheus is an excellent monitoring tool that integrates very well with Kubernetes.
Many Helm charts, such as Nginx and RabbitMQ, provide built-in support for exposing metrics for Prometheus monitoring.
These metrics are generally exposed on an endpoint, say /metrics, which Prometheus pulls down and processes at a configured interval.
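
For instance, a small excerpt from such an endpoint (illustrative only; actual metric names depend on the exporter you use) looks like this:

# HELP process_cpu_user_seconds_total Total user CPU time spent in seconds.
# TYPE process_cpu_user_seconds_total counter
process_cpu_user_seconds_total 1.72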

But the question arises: how does Prometheus know where to scrape metrics from?
The answer is: via a Service Monitor.

Note: This article assumes the reader has basic knowledge of Prometheus, Grafana, Kubernetes, and Helm charts.

What is a Service Monitor?

A Service Monitor is a CRD (Custom Resource Definition) provided by the Prometheus Operator. It describes how a set of services should be monitored, in other words, how we wish to collect metrics from different services.
The Prometheus Operator reads these Service Monitors and generates the corresponding Prometheus scrape configuration internally.
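
If the Prometheus Operator (or kube-prometheus-stack) is installed, the ServiceMonitor CRD should already exist in your cluster. You can confirm it with:

kubectl get crd servicemonitors.monitoring.coreos.com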

Requirements

  • Every service we wish to monitor must have the following annotations in its manifest configuration:

annotations:
  prometheus.io/port: "metrics"
  prometheus.io/scrape: "true"

The first annotation specifies the port (here we used the port's name, "metrics", instead of the actual port number).
The second is a flag indicating whether to scrape metrics from this service or not.

  • A Service Monitor then defines which of these services' metrics to scrape, and how.

Example

Let's say we have a simple k8s Deployment and Service with the following manifest:

#########################################################
# Deployment
#########################################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-deployment
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      namespace: production
      labels:
        app: my-service
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: my-service-pod
          image: myacr.azurecr.io/my_service:latest
          envFrom:
            - secretRef:
                name: prod-secrets
          ports:
            - containerPort: 4050
          resources:
            requests:
              cpu: 250m
              memory: 250Mi
            limits:
              cpu: 500m
              memory: 500Mi
---
#########################################################
# Service
#########################################################
apiVersion: v1
kind: Service
metadata:
  name: my-service-service
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: my-service
  ports:
    - port: 4050
      protocol: TCP
      targetPort: 4050
      name: "metrics"
---

Note: the config may look daunting, but it just has a few extra optional parameters.

The above is a simple K8s Deployment plus a Service of type ClusterIP (not exposed externally, since we have an Nginx gateway in front).

Assuming we have already set up prom-client in our NodeJs image, we have an endpoint /metrics where all Prometheus metrics are exposed.

Now, we just need to connect this service to Prometheus.
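
Before wiring things up, it is worth sanity-checking that the endpoint actually responds. A quick way to do that, using the example names and port above:

kubectl -n production port-forward svc/my-service-service 4050:4050
curl http://localhost:4050/metrics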

STEP 1:

Add the required annotations to the Service metadata:

annotations:
  prometheus.io/port: "metrics"
  prometheus.io/scrape: "true"

and add a label to identify the service:

labels:
  app.kubernetes.io/part-of: dms

Note: pick labels according to the recommended label standard defined in the K8s docs (see Further Reading).
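
For reference, the recommended label set from the K8s docs looks like this (values shown are only illustrative; in this article we use just app.kubernetes.io/part-of):

labels:
  app.kubernetes.io/name: my-service
  app.kubernetes.io/instance: my-service-prod
  app.kubernetes.io/version: "1.0.0"
  app.kubernetes.io/component: api
  app.kubernetes.io/part-of: dms
  app.kubernetes.io/managed-by: kubectl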

Our final Service manifest will look like this:

Note: no changes to the Deployment manifest are needed.

#########################################################
# Service
#########################################################
apiVersion: v1
kind: Service
metadata:
  name: my-service-service
  namespace: production
  annotations:
    prometheus.io/port: "metrics"
    prometheus.io/scrape: "true"
  labels:
    app.kubernetes.io/part-of: dms
spec:
  type: ClusterIP
  selector:
    app: my-service
  ports:
    - port: 4050
      protocol: TCP
      targetPort: 4050
      name: "metrics"
---

Deploy this configuration, for example:
kubectl apply -f my-service.yml
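
To double-check that the annotations and labels made it onto the Service, you can inspect it:

kubectl -n production describe service my-service-service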

STEP 2:

After deploying the new Service manifest, create a ServiceMonitor as follows:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: dms-service-monitor
  namespace: monitoring
spec:
  endpoints:
    - interval: 15s
      port: metrics
      scrapeTimeout: 14s
  namespaceSelector:
    matchNames:
      - production
  selector:
    matchLabels:
      app.kubernetes.io/part-of: dms

Note:

  1. The selector: config must match the labels defined on the Service above.
  2. The ServiceMonitor's namespace can be different from the Service's. Use namespaceSelector: to list the specific namespaces to monitor.
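
Apply the ServiceMonitor like any other manifest (the file name here is only an example), and verify that it was created:

kubectl apply -f dms-service-monitor.yml
kubectl -n monitoring get servicemonitors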

With this, our setup is done.

STEP 3:

Check if Prometheus is collecting your metrics.
Head over to your Prometheus server's web UI (using port-forward if it is not exposed). On the top menu select Status > Targets.
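
For example, with a Prometheus Operator installation the server is usually reachable through the prometheus-operated service (the service name and namespace may differ in your setup):

kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090

Then open http://localhost:9090 in your browser.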

You should see that Prometheus has discovered the new target (listed under the dms-service-monitor job) and is scraping it successfully.

You can also see your scraped data on the Graph page using a query.
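
For example, assuming prom-client's default metrics are enabled and the namespace label is attached by the Operator's relabelling, queries like these should return series for the new target:

up{namespace="production"}
process_cpu_user_seconds_total{namespace="production"}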

Troubleshoot

If you deployed Prometheus using the kube-prometheus-stack Helm chart, you might need to upgrade the chart with the following chart values:

values.yml

prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false

For example:
helm upgrade prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring --values values.yml

This removes the pre-configured selector constraints from the Prometheus deployment, so Prometheus will also pick up ServiceMonitors created outside the Helm release.

Also, there is an excellent answer on StackOverflow (linked in Further Reading) that describes the whole monitoring flow and how to troubleshoot it.

Conclusion

So we have seen how easily we can connect Prometheus to our NodeJs metrics on a k8s cluster.
You don't need a separate ServiceMonitor every time you create a new service, as long as the new service satisfies the selector constraints defined in an existing ServiceMonitor.

Voila! Both you and I learned something new today. Congrats
👏 👏 👏

Further Reading:

  1. https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
  2. https://sysdig.com/blog/kubernetes-monitoring-prometheus/
  3. https://stackoverflow.com/questions/52991038/how-to-create-a-servicemonitor-for-prometheus-operator
