For a platform engineering team, providing self-service and single-pane-of-glass solutions is the ultimate goal. This article shows how to build an integrated solution for collecting logs and traces with the Kuma service mesh, Grafana, OTEL, and Tempo on top of a Kubernetes cluster.
General Overview

Setup
Everything is deployed on GKE for the sake of simplicity.
0. Install GKE Cluster
gcloud container clusters create sample-cluster \
  --release-channel stable \
  --zone europe-west4-a \
  --node-locations europe-west4-a
Don't forget to switch your Kubernetes context to the new cluster before moving on.
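On GKE this is a single command (assuming the cluster name and zone used above):
gcloud container clusters get-credentials sample-cluster --zone europe-west4-a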
Tracing Setup
1. Add the Helm repo
helm repo add grafana https://grafana.github.io/helm-charts
2. Set up the tracing namespace
kubectl apply -f tracing.yaml
tracing.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: tracing
  annotations:
    kuma.io/sidecar-injection: "false"
3. Install Loki
helm install loki grafana/loki-stack -f loki.yaml
loki.yaml
---
fluent-bit:
  enabled: false
promtail:
  enabled: true
prometheus:
  enabled: true
  alertmanager:
    persistentVolume:
      enabled: false
  server:
    persistentVolume:
      enabled: false
4. Install Grafana
helm install grafana grafana/grafana -n tracing --version 6.13.5 -f grafana.yaml
grafana.yaml
---
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Tempo
        type: tempo
        access: proxy
        orgId: 1
        uid: tempo
        url: http://tempo.tracing.svc:3100
        isDefault: true
        editable: true
      - name: Loki
        type: loki
        access: proxy
        orgId: 1
        uid: loki
        url: http://loki.kuma-logging.svc:3100
        isDefault: false
        editable: true
        jsonData:
          derivedFields:
            - datasourceName: Tempo
              matcherRegex: 'TraceId: (\w+)'
              name: TraceID
              url: '${__value.raw}'
              datasourceUid: tempo
env:
  JAEGER_AGENT_PORT: 6831
adminUser: admin
adminPassword: password
service:
  type: LoadBalancer
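Grafana is exposed through a LoadBalancer service with the admin/password credentials set above. The external IP can be looked up with:
kubectl get svc grafana -n tracing -o jsonpath='{.status.loadBalancer.ingress[0].ip}'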
5. Install Tempo
helm install tempo grafana/tempo --version 0.7.4 -n tracing -f tempo.yaml
tempo.yaml
tempo:
  extraArgs:
    "distributor.log-received-traces": true
  receivers:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
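To sanity-check the deployment, Tempo exposes a readiness endpoint on its HTTP port:
kubectl port-forward svc/tempo -n tracing 3100:3100 &
curl http://localhost:3100/ready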
6. Install OpenTelemetry Collector
kubectl apply -f otel.yaml
otel.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: tracing
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
    - name: otlp # Default endpoint for OpenTelemetry receiver.
      port: 55680
      protocol: TCP
      targetPort: 55680
    - name: jaeger-grpc # Default endpoint for Jaeger gRPC receiver.
      port: 14250
    - name: jaeger-thrift-http # Default endpoint for Jaeger HTTP receiver.
      port: 14268
    - name: zipkin # Default endpoint for Zipkin receiver.
      port: 9411
    - name: metrics # Default endpoint for querying metrics.
      port: 8888
    - name: prometheus # Prometheus exporter.
      port: 8889
  selector:
    component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: tracing
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
        - command:
            - "/otelcontribcol"
            - "--config=/conf/otel-collector-config.yaml"
            # Memory ballast size should be max 1/3 to 1/2 of memory.
            - "--mem-ballast-size-mib=683"
            - "--log-level=DEBUG"
          image: otel/opentelemetry-collector-contrib:0.29.0
          name: otel-collector
          ports:
            - containerPort: 55679 # Default endpoint for ZPages.
            - containerPort: 55680 # Default endpoint for OpenTelemetry receiver.
            - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver.
            - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
            - containerPort: 9411 # Default endpoint for Zipkin receiver.
            - containerPort: 8888 # Default endpoint for querying metrics.
            - containerPort: 8889 # Prometheus exporter.
          volumeMounts:
            - name: otel-collector-config-vol
              mountPath: /conf
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: 13133 # Health Check extension default port.
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: 13133 # Health Check extension default port.
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: tracing
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      zipkin:
        endpoint: 0.0.0.0:9411
    exporters:
      logging:
        loglevel: debug
      otlp:
        endpoint: tempo.tracing.svc.cluster.local:55680
        insecure: true
    service:
      pipelines:
        traces:
          receivers: [zipkin]
          exporters: [otlp]
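Before wiring the mesh in, the collector's Zipkin receiver can be smoke-tested by posting a hand-crafted span in Zipkin v2 JSON format (all span values below are made up for the test):
kubectl port-forward svc/otel-collector -n tracing 9411:9411 &
curl -X POST http://localhost:9411/api/v2/spans \
  -H 'Content-Type: application/json' \
  -d '[{"id":"352bff9a74ca9ad2","traceId":"5af7183fb1d4cf5f","name":"smoke-test","timestamp":1620000000000000,"duration":1000,"localEndpoint":{"serviceName":"curl-test"}}]'
The span should then be queryable by its trace ID in Grafana's Tempo datasource, and Tempo itself logs it thanks to distributor.log-received-traces.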
7. Install Kuma
curl -L https://kuma.io/installer.sh | sh -
./kumactl install control-plane | kubectl apply -f -
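Verify the control plane is up before moving on:
kubectl get pods -n kuma-system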
8. Enable logging in Kuma
kumactl install logging | kubectl apply -f -
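This deploys Loki into the kuma-logging namespace, which is exactly where the Grafana Loki datasource configured above (loki.kuma-logging.svc:3100) expects it. Verify with:
kubectl get pods -n kuma-logging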
9. Configure Kuma logging and tracing backends
kubectl apply -f kuma-collector.yaml
kuma-collector.yaml
---
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  logging:
    defaultBackend: loki
    backends:
      - name: loki
        type: file
        conf:
          path: /dev/stdout
  tracing:
    defaultBackend: zipkin-collector
    backends:
      - name: zipkin-collector
        type: zipkin
        sampling: 100.0
        conf:
          url: http://otel-collector.tracing:9411/api/v2/spans
      - name: jaeger-collector
        type: zipkin
        sampling: 100.0
        conf:
          url: http://jaeger-collector.kuma-tracing:9411/api/v2/spans
---
apiVersion: kuma.io/v1alpha1
kind: TrafficTrace
mesh: default
metadata:
  name: trace-all-traffic
spec:
  selectors:
    - match:
        kuma.io/service: '*'
  conf:
    backend: zipkin-collector
---
apiVersion: kuma.io/v1alpha1
kind: TrafficLog
metadata:
  name: all-traffic
mesh: default
spec:
  # This TrafficLog policy applies to all traffic in the Mesh.
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: '*'
  # When the `backend` field is omitted, logs are forwarded to the `defaultBackend` of the Mesh.
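Since Kuma policies are plain Kubernetes resources in this setup, the applied configuration can be verified with kubectl:
kubectl get meshes,traffictraces,trafficlogs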
Sample App Install
Install Kong
kubectl apply -f https://bit.ly/k4k8s
kubectl annotate ns kong kuma.io/sidecar-injection=enabled
kubectl delete pod --all -n kong
The existing Kong pods are deleted so that they come back up with Kuma sidecars injected.
Install the Bookinfo sample app
kubectl apply -f yamls/7-ns.yaml
kubectl apply -f bookinfo.yaml
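The contents of yamls/7-ns.yaml are not shown; presumably it creates the bookinfo1 namespace with sidecar injection enabled, along the lines of:
---
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo1
  annotations:
    kuma.io/sidecar-injection: enabled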
bookinfo.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: details
  namespace: bookinfo1
  labels:
    app: details
    service: details
  annotations:
    9080.service.kuma.io/protocol: http
    ingress.kubernetes.io/service-upstream: "true"
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  namespace: bookinfo1
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  namespace: bookinfo1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
        - name: details
          image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9080
          securityContext:
            runAsUser: 1000
---
apiVersion: v1
kind: Service
metadata:
  name: ratings
  namespace: bookinfo1
  labels:
    app: ratings
    service: ratings
  annotations:
    9080.service.kuma.io/protocol: http
    ingress.kubernetes.io/service-upstream: "true"
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  namespace: bookinfo1
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  namespace: bookinfo1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
        - name: ratings
          image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9080
          securityContext:
            runAsUser: 1000
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  namespace: bookinfo1
  labels:
    app: reviews
    service: reviews
  annotations:
    9080.service.kuma.io/protocol: http
    ingress.kubernetes.io/service-upstream: "true"
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  namespace: bookinfo1
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  namespace: bookinfo1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
        - name: reviews
          image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
          imagePullPolicy: IfNotPresent
          env:
            - name: LOG_DIR
              value: "/tmp/logs"
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          securityContext:
            runAsUser: 1000
      volumes:
        - name: wlp-output
          emptyDir: {}
        - name: tmp
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  namespace: bookinfo1
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
        - name: reviews
          image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
          imagePullPolicy: IfNotPresent
          env:
            - name: LOG_DIR
              value: "/tmp/logs"
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          securityContext:
            runAsUser: 1000
      volumes:
        - name: wlp-output
          emptyDir: {}
        - name: tmp
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  namespace: bookinfo1
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
        - name: reviews
          image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
          imagePullPolicy: IfNotPresent
          env:
            - name: LOG_DIR
              value: "/tmp/logs"
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: wlp-output
              mountPath: /opt/ibm/wlp/output
          securityContext:
            runAsUser: 1000
      volumes:
        - name: wlp-output
          emptyDir: {}
        - name: tmp
          emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  namespace: bookinfo1
  labels:
    app: productpage
    service: productpage
  annotations:
    9080.service.kuma.io/protocol: http
    ingress.kubernetes.io/service-upstream: "true"
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  namespace: bookinfo1
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  namespace: bookinfo1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
        - name: productpage
          image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9080
          volumeMounts:
            - name: tmp
              mountPath: /tmp
          securityContext:
            runAsUser: 1000
      volumes:
        - name: tmp
          emptyDir: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: bookinfo1
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/strip-path: 'true'
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: productpage
                port:
                  number: 9080
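Each Bookinfo pod should come up with two ready containers, the application plus the injected kuma-sidecar:
kubectl get pods -n bookinfo1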
Accessing the Solution
kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681
kubectl port-forward svc/grafana -n tracing 8081:80
To reach Bookinfo, find the Kong ingress service and open its public IP, as sketched below.
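A sketch, assuming the standard kong-proxy LoadBalancer service created by the manifest above; the loop then generates a handful of traced requests:
KONG_IP=$(kubectl get svc kong-proxy -n kong -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
for i in $(seq 1 20); do curl -s "http://$KONG_IP/productpage" > /dev/null; done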
Searching in Loki
{namespace="bookinfo1"} |= "TraceId"
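The derived field configured in grafana.yaml turns every match of TraceId: (\w+) into a clickable link to the corresponding Tempo trace. The ID can also be pulled out directly in LogQL (a sketch using Loki's regexp parser; the traceid label name is arbitrary):
{namespace="bookinfo1"} |= "TraceId" | regexp `TraceId: (?P<traceid>\w+)`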
Screens
Trace correlation
