Category: PDI

  • Monitor Kubernetes cluster with Prometheus and Grafana

Suppose you have a Kubernetes cluster up and running. How do you monitor it? The obvious choice is Prometheus (to gather the cluster's metrics) and Grafana (to visualise those metrics in a dashboard).

    Prerequisites

• a working Kubernetes cluster with kubectl configured
• the kubectl and helm commands installed locally
• RBAC authorisation already set up
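
A quick sanity check before starting (a minimal sketch; these commands only confirm cluster connectivity, tool versions, and RBAC permissions):

$ kubectl cluster-info
$ helm version --short
$ kubectl auth can-i create clusterrole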

    Setting up Prometheus and Grafana

    Keep monitoring resources together by creating a namespace called ‘monitoring’.

    • create a file called monitoring/namespaces.yml with the following content:
    kind: Namespace
    apiVersion: v1
    metadata:
      name: monitoring
• apply it and check that the namespace exists
$ kubectl apply -f monitoring/namespaces.yml
$ kubectl get namespaces
monitoring        Active   2d3h
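Equivalently, the namespace can be created imperatively instead of from a manifest:
$ kubectl create namespace monitoring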
    • deploy Prometheus
    $ helm install prometheus stable/prometheus --namespace monitoring
Verify that the pods are running:
    $ kubectl get pods -n monitoring
    NAME                                             READY   STATUS    RESTARTS   AGE
    prometheus-1590756801-node-exporter-46l74        1/1     Running   0          13h
    prometheus-1590756801-node-exporter-gl658        1/1     Running   0          13h
    prometheus-1590756801-node-exporter-gt64l        0/1     Pending   0          13h
    prometheus-alertmanager-769fbdd4f5-hddq5         2/2     Running   0          13h
    prometheus-kube-state-metrics-5ccb885bdc-fsrb2   1/1     Running   0          13h
    prometheus-node-exporter-66ff7                   1/1     Running   0          13h
    prometheus-node-exporter-6j57z                   0/1     Pending   0          13h
    prometheus-node-exporter-kq94l                   0/1     Pending   0          13h
    prometheus-pushgateway-75b7cf8896-dttcd          1/1     Running   0          13h
    prometheus-server-5c5c8b58b9-nfkld               2/2     Running   0          13h
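
Optionally, check Prometheus itself before wiring up Grafana. This sketch assumes the chart's default prometheus-server service (the same service the Grafana datasource below points at), which listens on service port 80:

$ kubectl --namespace monitoring port-forward svc/prometheus-server 9090:80

Then open http://localhost:9090 to see the Prometheus UI.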

    Install Grafana

Grafana needs to be configured to read metrics from a data source. Data sources are supplied through a YAML config file when Grafana is provisioned; here the datasource URL points at the in-cluster DNS name of the prometheus-server service deployed above.

    • create a file called monitoring/grafana/config.yml with the content:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: prometheus-grafana-datasource
      namespace: monitoring
      labels:
        grafana_datasource: '1'
    data:
      datasource.yaml: |-
        apiVersion: 1
        datasources:
        - name: Prometheus
          type: prometheus
          access: proxy
          orgId: 1
          url: http://prometheus-server.monitoring.svc.cluster.local
    • apply & test the config
    $ kubectl apply -f monitoring/grafana/config.yml
    configmap/prometheus-grafana-datasource unchanged
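The grafana_datasource: '1' label is what the Grafana sidecar (enabled in the next step) uses to discover this ConfigMap, so confirm the label is present:
$ kubectl get configmap prometheus-grafana-datasource -n monitoring --show-labels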
• override the Grafana sidecar settings so it discovers data sources and dashboards by label. Create a file called monitoring/grafana/values.yml with the content:
    sidecar:
      datasources:
        enabled: true
        label: grafana_datasource
      dashboards:
        enabled: true
        label: grafana_dashboard
    • deploy Grafana with local values.yml
    $ helm upgrade --install grafana stable/grafana -f monitoring/grafana/values.yml --namespace monitoring
    Check that it is running:
    $ kubectl get pods -n monitoring
    NAME                                             READY   STATUS    RESTARTS   AGE
    grafana-7744dccd98-8td8x                         2/2     Running   5          4h25m
    
# get the Grafana admin password
$ kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
# port-forward to access the dashboard
$ kubectl --namespace monitoring port-forward grafana-7744dccd98-8td8x 3000
    Forwarding from 127.0.0.1:3000 -> 3000
    Forwarding from [::1]:3000 -> 3000

Go to http://localhost:3000 and log in as admin with the password retrieved above.
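
Dashboards can be provisioned the same way: the values.yml above also enabled the dashboards sidecar, so any ConfigMap in the namespace labelled grafana_dashboard and carrying dashboard JSON is loaded automatically. A minimal sketch, assuming a file monitoring/grafana/dashboard.yml (the file name and the bare-bones dashboard JSON are hypothetical examples):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: '1'
data:
  example-dashboard.json: |-
    {
      "title": "Example Dashboard",
      "panels": [],
      "schemaVersion": 16
    }

$ kubectl apply -f monitoring/grafana/dashboard.yml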