Traefik Proxy in Kubernetes


Traefik ('træfik') Proxy

First things first: this is probably the first time in any of my blog posts that I can use a letter from my native alphabet, namely "æ". I tried to look up whether I should stick to Træfik or not, and ended up on a post by Traefik themselves: it is not the letter "Æ" itself that is the reason behind its use, but the phonetic pronunciation of Traefik = 'træfik'. Nice that the letter "æ" has some use internationally though 😄 A fun and nice post. For the rest of this post I will stick to Traefik instead of Træfik, as Træfik is just the logo and how Traefik is pronounced; the product is called Traefik (and to be kind to the non-native "æ" speakers out there).

From the official Traefik homepage:


Traefik is an open-source Edge Router that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and finds out which components are responsible for handling them.

What sets Traefik apart, besides its many features, is that it automatically discovers the right configuration for your services. The magic happens when Traefik inspects your infrastructure, where it finds relevant information and discovers which service serves which request.

Traefik is natively compliant with every major cluster technology, such as Kubernetes, Docker, Docker Swarm, AWS, Mesos, Marathon, and the list goes on; and can handle many at the same time. (It even works for legacy software running on bare metal.)

Why Traefik...

I needed an advanced reverse proxy for my lab that could cover all kinds of backends, from Kubernetes services to services running as regular workloads such as virtual machines. I wanted it to be highly available, and to solve one of my challenges when exposing services on the Internet with one public IP and multiple services using the same port. After some quick research I ended up with Traefik. I am not sure exactly why I landed on Traefik, it could have been Nginx or HAProxy just to mention some of the bigger ones out there, or was it the "Æ"? Traefik offers both paid enterprise editions and free open source alternatives. I did not want to spend time on a product that includes only some basic features in its free open source edition and forces an upgrade to an enterprise solution as soon as I want a more advanced feature. After some reading, Traefik seemed to have all the features I wanted in their open source product Traefik Proxy.

I decided to write this post as I wanted to document all the configurations I have done so far with Traefik. Searching different forums, blog pages etc., some say it is very easy to manage Traefik. I can't say I found it very easy to begin with, but as with everything new, one needs to learn how to master it. The official Traefik documentation is very good at describing and explaining all the possibilities with Traefik, but several times I was missing some "real life" example configs. With the help of the great community out there I managed to solve the challenges I had and make them work with Traefik. So thanks to all the blog pages, forums with people asking questions, and people willing to answer and explain. This is much appreciated as always.

So let's begin this post with some high-level explanations of the terminology used in Traefik, then the installation, and how I have configured Traefik to serve as a reverse proxy for some of my services.

Important terminology used in Traefik Proxy



EntryPoints

EntryPoints are the network entry points into Traefik. They define the port which will receive the packets, and whether to listen for TCP or UDP.

In other words, an entrypoint is exposed either externally (a NodePort or LoadBalancer service) or internally (a ClusterIP service); the destination endpoints for these entrypoints are the Traefik pods, which listen for any requests coming their way and do something useful with the traffic if configured.


See more here
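To make this concrete, here is a minimal entrypoints sketch in Traefik's static configuration syntax (illustrative only, not taken from my lab config):

```yaml
# Two TCP entrypoints, one for HTTP and one for HTTPS
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
```

In Kubernetes the Helm chart generates the equivalent --entrypoints.* CLI arguments instead, as we will see later when describing the Traefik pods.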



Routers

A router is in charge of connecting incoming requests to the services that can handle them. In the process, routers may use pieces of middleware to update the request, or act before forwarding the request to the service.

So this is the actual component that knows which service to forward the requests to, based on, for example, the host header.


See more here



Middlewares

Attached to the routers, pieces of middleware are a means of tweaking the requests before they are sent to your service (or before the answer from the services are sent to the clients).

An example is the redirectScheme middleware, which redirects all HTTP requests to HTTPS. For a full list of options, have a look here
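As a minimal sketch of such a middleware (the resource name below is hypothetical), a redirectScheme Middleware could look like this:

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect-to-https   # hypothetical name
  namespace: traefik
spec:
  redirectScheme:
    scheme: https
    permanent: true
```

A router (for example an IngressRoute on the web entrypoint) can then reference this middleware to send all HTTP traffic to HTTPS.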



Services

The Services are responsible for configuring how to reach the actual services that will eventually handle the incoming requests.

Services here can be of service type LoadBalancer, ClusterIP, ExternalName, etc.




Providers

Configuration discovery in Traefik is achieved through Providers.

The providers are infrastructure components, whether orchestrators, container engines, cloud providers, or key-value stores. The idea is that Traefik queries the provider APIs in order to find relevant information about routing, and when Traefik detects a change, it dynamically updates the routes.

More info on providers can be found here

My lab

Before getting into the actual installation and configuration of Traefik, a quick context. My lab in this post:

  • A physical server running Proxmox
  • A physical switch with VLAN and routing support
  • A virtual pfSense firewall
  • Kubernetes version 1.28.2
  • 3x Control Plane nodes (Ubuntu)
  • 3x Worker nodes (Ubuntu)
  • A management Ubuntu VM (also running on Proxmox) with all tools needed like Helm and kubectl
  • Cert-Manager configured and installed with LetsEncrypt provider
  • Cilium has been configured with BGP, and LB IPAM pools have been defined and provide external IP addresses to service type LoadBalancer requests in the Kubernetes cluster


Deploying Traefik

Traefik can be deployed in Kubernetes using Helm. First I need to add the Traefik Helm repo:

helm repo add traefik https://traefik.github.io/charts
helm repo update

Now it would be as simple as just installing Traefik using helm install traefik traefik/traefik -n traefik, but I have done some adjustments to the values. So before I install Traefik I have adjusted the chart values to use the config below. See comments inline. Note that I have removed all the comments from the default values.yaml and only added my own. The values yaml can be fetched by issuing this command: helm show values traefik/traefik > traefik-values.yaml

image:
  registry: docker.io
  repository: traefik
  tag: ""
  pullPolicy: IfNotPresent

commonLabels: {}

deployment:
  enabled: true
  kind: Deployment
  replicas: 3 ### Adjusted to three for high availability
  terminationGracePeriodSeconds: 60
  minReadySeconds: 0
  annotations: {}
  labels: {}
  podAnnotations: {}
  podLabels: {}
  additionalContainers: []
  additionalVolumes: []
  initContainers:
  # The "volume-permissions" init container is required if you run into permission issues.
  # Related issue:
    - name: volume-permissions
      image: busybox:latest
      command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
      securityContext:
        runAsNonRoot: true
        runAsGroup: 65532
        runAsUser: 65532
      volumeMounts:
        - name: data
          mountPath: /data
  shareProcessNamespace: false
  dnsConfig: {}
  imagePullSecrets: []
  lifecycle: {}

podDisruptionBudget:
  enabled: false

ingressClass:
  enabled: true
  isDefaultClass: false # I have set this to false as I also have Cilium IngressController

experimental:
  plugins: {}
  kubernetesGateway:
    enabled: false

ingressRoute:
  dashboard:
    enabled: false # I will enable this later
    annotations: {}
    labels: {}
    matchRule: PathPrefix(`/dashboard`) || PathPrefix(`/api`)
    entryPoints: ["traefik"]
    middlewares: []
    tls: {}
  healthcheck:
    enabled: false
    annotations: {}
    labels: {}
    matchRule: PathPrefix(`/ping`)
    entryPoints: ["traefik"]
    middlewares: []
    tls: {}

updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1

readinessProbe:
  failureThreshold: 1
  initialDelaySeconds: 2
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 2
livenessProbe:
  failureThreshold: 3
  initialDelaySeconds: 2
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 2

providers:
  kubernetesCRD:
    enabled: true # set to true
    allowCrossNamespace: true # set to true
    allowExternalNameServices: true # set to true also
    allowEmptyServices: false
    namespaces: []
  kubernetesIngress:
    enabled: true # set to true
    allowExternalNameServices: true # set to true
    allowEmptyServices: false
    namespaces: []
    publishedService:
      enabled: false
  file:
    enabled: false
    watch: true
    content: ""

volumes: []
additionalVolumeMounts: []

logs:
  general:
    level: ERROR
  access:
    enabled: false
    filters: {}
    fields:
      general:
        defaultmode: keep
        names: {}
      headers:
        defaultmode: drop
        names: {}

metrics:
  prometheus:
    entryPoint: metrics
    addEntryPointsLabels: true # set to true
    addRoutersLabels: true # set to true
    addServicesLabels: true # set to true
    buckets: "0.1,0.3,1.2,5.0,10.0" # adjusted according to the official docs

tracing: {}

globalArguments:
- "--global.checknewversion"
- "--global.sendanonymoususage"

additionalArguments: []

env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
envFrom: []

ports: # these are the entrypoints
  traefik:
    port: 9000
    expose: false
    exposedPort: 9000
    protocol: TCP
  web:
    port: 8000
    expose: true
    exposedPort: 80
    protocol: TCP
  websecure:
    port: 8443
    expose: true
    exposedPort: 443
    protocol: TCP
    http3:
      enabled: false
    tls:
      enabled: true
      options: ""
      certResolver: ""
      domains: []
    middlewares: []
  metrics:
    port: 9100
    expose: true
    exposedPort: 9100
    protocol: TCP

tlsOptions: {}

tlsStore: {}

service:
  enabled: false # I will create this later, set to false, all values below will be ignored
  single: true
  type: LoadBalancer
  annotations: {}
  annotationsTCP: {}
  annotationsUDP: {}
  labels:
    env: prod
  spec:
    loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalIPs: []

autoscaling:
  enabled: false #This is interesting, need to test

persistence:
  enabled: true
  resourcePolicy: "keep" # I have added this to keep the PVC even after uninstall
  name: data
  accessMode: ReadWriteOnce
  size: 128Mi
  path: /data
  annotations: {}

certResolvers: {}

hostNetwork: false

rbac:
  enabled: true
  namespaced: false

podSecurityPolicy:
  enabled: false

serviceAccount:
  name: ""
serviceAccountAnnotations: {}

resources: {}

nodeSelector: {}
tolerations: []
topologySpreadConstraints: []

priorityClassName: ""

securityContext:
  capabilities:
    drop: [ALL]
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
podSecurityContext:
  fsGroupChangePolicy: "OnRootMismatch"
  runAsGroup: 65532
  runAsNonRoot: true
  runAsUser: 65532

extraObjects: []

Now I can install Traefik using the following command:

helm install traefik traefik/traefik -f traefik-values.yaml -n traefik
# or, when upgrading with changed values later (or right from the start):
helm upgrade -i traefik traefik/traefik -f traefik-values.yaml -n traefik

After a successful installation we should see this message:

Release "traefik" has been upgraded. Happy Helming!
NAME: traefik
LAST DEPLOYED: Wed Dec 27 20:36:23 2023
NAMESPACE: traefik
STATUS: deployed

Traefik Proxy v2.10.6 has been deployed successfully on traefik namespace !

🚨 When enabling persistence for certificates, permissions on acme.json can be
lost when Traefik restarts. You can ensure correct permissions with an
initContainer. See the documentation for more info. 🚨

Now I should also have a bunch of CRDs, plus an additional IngressClass (in addition to any existing ones, as in my case).

andreasm@linuxmgmt01:~/prod-cluster-1/traefik$ k get crd
NAME                                         CREATED AT
<the Traefik CRDs from both the traefik.io and the legacy traefik.containo.us API groups, all created 2023-12-24>

A note on the list of CRDs above. The former Traefik APIs used the traefik.containo.us API group, but from Traefik v2.x they now use the traefik.io API group; the former APIs are kept for backward compatibility.

Below I can see the new Traefik Ingress controller.

andreasm@linuxmgmt01:~/prod-cluster-1/traefik$ k get ingressclasses
NAME      CONTROLLER                      PARAMETERS   AGE
cilium    cilium.io/ingress-controller    <none>       10d
traefik   traefik.io/ingress-controller   <none>       59s

Deployment info:

andreasm@linuxmgmt01:~/prod-cluster-1/traefik$ k get all -n traefik
NAME                           READY   STATUS    RESTARTS   AGE
pod/traefik-59657c9c59-75cxg   1/1     Running   0          27h
pod/traefik-59657c9c59-p2kdv   1/1     Running   0          27h
pod/traefik-59657c9c59-tqcrm   1/1     Running   0          27h

NAME                             TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
# No services...

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traefik   3/3     3            3           2d11h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/traefik-59657c9c59   3         3         3       27h

Now it is all about configuring Traefik to receive the requests, and configuring routes, middlewares and services. I will start by getting the Traefik dashboard up.


As I have disabled all the services in the Helm values yaml, none are created; therefore I need to create these entrypoint services before anything can reach Traefik.

A quick explanation of why I wanted to create these myself. One can have multiple entrypoints into Traefik, even in the same Kubernetes cluster. Assume I want to use different IP addresses and subnets for certain services (some may call them VIPs) for IP separation, easier physical firewall rule creation, etc. Then I need to create the services that expose the entrypoints I want to use. The Helm chart enables four entrypoints by default: web on port 8000 (HTTP), websecure on port 8443 (HTTPS), traefik on port 9000 and metrics on port 9100, all TCP. But these are only configured on the Traefik pods themselves; there is no service to expose them either internally in the cluster or externally. So I need to create these external or internal services to expose the entrypoints.

Describe the pod to see the ports and labels:

andreasm@linuxmgmt01:~/prod-cluster-1/traefik$ k describe pod -n traefik traefik-59657c9c59-75cxg
Name:             traefik-59657c9c59-75cxg
Namespace:        traefik
Priority:         0
Service Account:  traefik
Node:             k8s-prod-node-01/
Start Time:       Tue, 26 Dec 2023 17:03:23 +0000
Containers:
  traefik:
    Container ID:  containerd://b6059c9c6cdf45469403fb153ee8ddd263a870d3e5917a79e0181f543775a302
    Image:
    Image ID:
    Ports:         9100/TCP, 9000/TCP, 8000/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      --global.checknewversion
      --global.sendanonymoususage
      --entrypoints.metrics.address=:9100/tcp
      --entrypoints.traefik.address=:9000/tcp
      --entrypoints.web.address=:8000/tcp
      --entrypoints.websecure.address=:8443/tcp
      --api.dashboard=true
      --ping=true
      --metrics.prometheus=true
      --metrics.prometheus.entrypoint=metrics
      --metrics.prometheus.addRoutersLabels=true
      --metrics.prometheus.addEntryPointsLabels=true
      --metrics.prometheus.addServicesLabels=true
      --metrics.prometheus.buckets=0.1,0.3,1.2,5.0,10.0
      --providers.kubernetescrd
      --providers.kubernetescrd.allowCrossNamespace=true
      --providers.kubernetescrd.allowExternalNameServices=true
      --providers.kubernetesingress
      --providers.kubernetesingress.allowExternalNameServices=true
      --entrypoints.websecure.http.tls=true

The first service I define and apply will primarily be used for management, i.e. interacting with Traefik's internal services. It uses the correct label selector to select the Traefik pods and refers to the two entrypoints web and websecure. This is how the first service is defined:

apiVersion: v1
kind: Service
metadata:
  annotations:
    io.cilium/lb-ipam-ips: ""
  name: traefik-mgmt
  labels:
    env: prod
  namespace: traefik
spec:
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/name: traefik
  type: LoadBalancer

This creates a service of type LoadBalancer; the IP address is fixed using the annotation, my configured Cilium LB-IPAM pool provides the IP address for the service, and the BGP control plane takes care of advertising it for me.

Let's apply the above yaml and check the service:

andreasm@linuxmgmt01:~/prod-cluster-1/traefik$ k get svc -n traefik
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
traefik-mgmt             LoadBalancer   80:30343/TCP,443:30564/TCP   49s

This means I can now start registering relevant DNS records against this external IP, and Traefik will receive requests coming to this address/service.

But as I would like to separate services by type/function using different IP addresses, I have created another service using the same entrypoints but with a different external IP.

apiVersion: v1
kind: Service
metadata:
  annotations:
    io.cilium/lb-ipam-ips: ""
  name: traefik-exposed-pool-1
  labels:
    env: traefik-pool-1
  namespace: traefik
spec:
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/name: traefik
  type: LoadBalancer

I can go ahead and register DNS records against this IP address also and they will be forwarded to Traefik to handle.

The beauty of this is that I can create as many services as I want, using different external IP addresses, and even specify different Traefik entrypoints. In my physical firewall I can then more easily create rules allowing or denying which sources are allowed to reach these IP addresses, separating apps from apps and services from services. Like in the next chapter, when I expose the Traefik dashboard.


Traefik Dashboard

Traefik comes with a nice dashboard which gives a quick overview of enabled services, status and detailed information:


As I have not enabled the dashboard, I need to define these objects myself to make it accessible. I also want it accessible from outside my Kubernetes cluster, protected by basic authentication.

I will prepare three yaml files. The first one will be the secret for the authentication part, the second the middleware config to enable basic authentication and the third and final the actual IngressRoute.

For the secret I used the following command to generate a base64 encoded string containing both username and password:

andreasm@linuxmgmt01:~/temp$ htpasswd -nb admin 'password' | openssl base64

Then I created the 01-secret.yaml and pasted in the base64 output from above:

apiVersion: v1
kind: Secret
metadata:
  name: traefik-dashboard-auth
  namespace: traefik
data:
  users: YWRtaW4JGFwcEkdmlBejdSYnEkZW5uN1oualB1Lm1LOUo0dVhqVDB3LgoK
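To sanity-check such a value before pasting it into the Secret, it can be round-tripped through base64; the credentials string below is a made-up example, not my real hash:

```shell
# Encode a sample "user:hash" string and decode it again to verify nothing is mangled
creds='admin:$apr1$exampleexample'   # hypothetical htpasswd output
enc=$(printf '%s' "$creds" | base64)
printf '%s' "$enc" | base64 -d
```

If the decoded output matches the original string, the Secret will contain exactly what basicAuth expects.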

The second yaml, the 02-middleware.yaml, to enable basic authentication:

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: traefik-dashboard-basicauth
  namespace: traefik
spec:
  basicAuth:
    secret: traefik-dashboard-auth

Then the last yaml, the dashboard IngressRoute:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(``)
      kind: Rule
      middlewares:
        - name: traefik-dashboard-basicauth
          namespace: traefik
      services:
#        - name: traefik-mgmt
        - name: api@internal
          kind: TraefikService
  tls:
    secretName: my-doamin-net-tls-prod

Notice that I refer to a TLS secret? More on this a tad later.

Let's look at the three objects created:

andreasm@linuxmgmt01:~/prod-cluster-1/traefik/traefik-dashboard$ k get secrets -n traefik
NAME                             TYPE                 DATA   AGE
traefik-dashboard-auth           Opaque               1      39s
andreasm@linuxmgmt01:~/prod-cluster-1/traefik/traefik-dashboard$ k get middleware -n traefik
NAME                          AGE
traefik-dashboard-basicauth   55s
andreasm@linuxmgmt01:~/prod-cluster-1/traefik/traefik-dashboard$ k get ingressroute -n traefik
NAME                         AGE
traefik-dashboard            1m

I have created a DNS record pointing to the external IP of the traefik-mgmt service and made sure the Host definition in the IngressRoute matches this DNS record.

Now the dashboard is available and prompting for username and password.


See the official docs for more info.


Instead of configuring Traefik to generate the certificates I need for my HTTPS services, I have already configured Cert-Manager to create them; you can read how I have done that here. I mostly use wildcard certificates and don't see the need to request certificates all the time.

Then I use reflector to share/sync the certificates across namespaces. Read more on reflector here and here.

Monitoring with Prometheus and Grafana

Another nice feature is Traefik's built-in Prometheus metrics, which can then be used as a data source in Grafana. So here is how I configured Prometheus and Grafana.

I followed these two blog posts, here and here, and used them in combination to configure Traefik with Prometheus.


I will start by getting Prometheus up and running, then Grafana.

In my Traefik values.yaml I made these changes before I ran helm upgrade on the Traefik installation:

metrics:
  ## -- Prometheus is enabled by default.
  ## -- It can be disabled by setting "prometheus: null"
  prometheus:
    # -- Entry point used to expose metrics.
    entryPoint: metrics
    ## Enable metrics on entry points. Default=true
    addEntryPointsLabels: true
    ## Enable metrics on routers. Default=false
    addRoutersLabels: true
    ## Enable metrics on services. Default=true
    addServicesLabels: true
    ## Buckets for latency metrics. Default="0.1,0.3,1.2,5.0"
    buckets: "0.1,0.3,1.2,5.0,10.0"

First I registered a DNS record against the external IP of the service below, as I consider this also a service that belongs in the management category. Now I have two DNS records pointing to the same IP (the traefik-ui above included).

NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)
traefik-mgmt              LoadBalancer   80:30343/TCP,443:30564/TCP

I will prepare six different yaml files, all explained below. The first yaml is the traefik-metrics service:

apiVersion: v1
kind: Service
metadata:
  name: traefik-metrics
  namespace: traefik
spec:
  ports:
    - name: metrics
      protocol: TCP
      port: 9100
      targetPort: metrics
  selector:
    app.kubernetes.io/instance: traefik-traefik
    app.kubernetes.io/name: traefik
  type: ClusterIP

As I followed the two blogs above, there are a couple of approaches to make this work. One approach is to expose the metrics using a ClusterIP service by applying the yaml above; the Prometheus target then refers to this svc (which requires Prometheus to be running in the same cluster). The other approach is to configure Prometheus to scrape the Traefik pods directly.

One can also use this ClusterIP service later with an IngressRoute to expose it outside the Kubernetes cluster, as an easy way to check whether metrics are coming in, or to access the metrics externally. If scraping the pods, this service is not needed as Prometheus will scrape the Traefik pods directly.
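As a sketch of that optional IngressRoute (the resource name is hypothetical and the hostname is left empty):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-metrics   # hypothetical name
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(``)   # fill in a DNS record pointing at a Traefik external IP
      services:
        - kind: Service
          name: traefik-metrics
          port: 9100
```

With this applied, hitting that hostname in a browser should return the raw Prometheus metrics page.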

Then I need to create a Prometheus ConfigMap telling Prometheus what and how to scrape. Below are two ways Prometheus can scrape the metrics. The first yaml scrapes the pods directly using kubernetes_sd_configs and filters on the annotations on the Traefik pods.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: prometheus
data:
  prometheus.yml: |
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    scrape_configs:
    - job_name: 'traefik'
      kubernetes_sd_configs:
        - role: pod
          selectors:
            - role: pod
              label: ""
      relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
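The __address__ relabel rule above rewrites `host:port;annotation-port` into `host:annotation-port` (Prometheus joins the source labels with `;` before matching). A quick standalone way to convince yourself it does what it should, using sed with an equivalent POSIX regex (`[0-9]` in place of RE2's `\d`, and a sample address of my own invention):

```shell
# "__address__;prometheus.io/port" joined -> rewritten to host:annotation-port
echo '10.0.1.5:8443;9100' | sed -E 's/([^:]+)(:[0-9]+)?;([0-9]+)$/\1:\3/'
# -> 10.0.1.5:9100
```

So the pod's container port (8443 here) is replaced by the port declared in the prometheus.io/port annotation.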

This approach scrapes the metrics from the relevant Traefik pods using annotations. But it also means I need to give the Prometheus pod access to scrape pods outside its own namespace. So I will go ahead and create a service account, cluster role and cluster role binding for that:

andreasm@linuxmgmt01:~/prod-cluster-1/traefik/traefik-dashboard$ kubectl -n prometheus create serviceaccount prometheus
serviceaccount/prometheus created
andreasm@linuxmgmt01:~/prod-cluster-1/traefik/traefik-dashboard$ k create clusterrole prometheus --verb=get,list,watch --resource=pods,services,endpoints
clusterrole.rbac.authorization.k8s.io/prometheus created
andreasm@linuxmgmt01:~/prod-cluster-1/traefik/traefik-dashboard$ kubectl create clusterrolebinding prometheus --clusterrole=prometheus --serviceaccount=prometheus:prometheus
clusterrolebinding.rbac.authorization.k8s.io/prometheus created

The second approach is to point to the ClusterIP metrics service (defined above) and let Prometheus scrape this service instead. This approach does not need the service account.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: prometheus
data:
  prometheus.yml: |
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    scrape_configs:
    - job_name: 'traefik'
      static_configs:
      - targets: ['traefik-metrics.traefik.svc.cluster.local:9100']

Then I created the third yaml file, which creates the PersistentVolumeClaim for my Prometheus instance:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-storage-persistence
  namespace: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The fourth file is the actual Prometheus deployment, referring to the objects created in the previous yamls:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus # Remember to add the serviceaccount for scrape access
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        ports:
        - containerPort: 9090
          name: default
        volumeMounts:
        - name: prometheus-storage
          mountPath: /prometheus
        - name: config-volume
          mountPath: /etc/prometheus
      volumes:
        - name: prometheus-storage
          persistentVolumeClaim:
            claimName: prometheus-storage-persistence
        - name: config-volume
          configMap:
            name: prometheus-config

The fifth yaml file is the Prometheus service where I expose Prometheus internally in the cluster:

 1kind: Service
 2apiVersion: v1
 4  name: prometheus
 5  namespace: prometheus
 7  selector:
 8    app: prometheus
 9  type: ClusterIP
10  ports:
11  - protocol: TCP
12    port: 9090
13    targetPort: 9090

The last yaml is the IngressRoute, in case I want to access Prometheus from outside my Kubernetes cluster. Strictly optional if Grafana is deployed in the same cluster, as it can then just use the previously created ClusterIP service. But nice to have if I need to troubleshoot etc.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: prometheus
  namespace: prometheus
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(``)
      services:
        - kind: Service
          name: prometheus
          port: 9090

Here the DNS record I created earlier comes into play. After applying all the above yamls, Prometheus should be up and running and I can use the IngressRoute to access the Prometheus dashboard from my laptop.
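Once the target is up, a couple of queries in the Prometheus UI can confirm that Traefik data is flowing; these use metric names from Traefik's Prometheus exporter (the entrypoint labels come from my entrypoint config described earlier):

```promql
# Requests per second per entrypoint over the last 5 minutes
rate(traefik_entrypoint_requests_total[5m])

# Currently open connections per entrypoint
traefik_entrypoint_open_connections
```

If these return series for web and websecure, the scrape config is working.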

Screenshot below is when scraping the pods directly:


Screenshot below is scraping the metrics-service:



Now I more or less just need to install Grafana and add the Prometheus ClusterIP service as a data source. Installing Grafana is easily done using Helm. Below are the steps I did to install Grafana:

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
## grabbing the default values
helm show values grafana/grafana > grafana-values.yaml

Below are the changes I have made in the values.yaml I am using to install Grafana:

## Configure grafana datasources
## ref:
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus-traefik
      type: prometheus
      url: http://prometheus.prometheus.svc.cluster.local:9090
      access: proxy
      editable: true
      orgId: 1
      version: 1
      isDefault: true
ingress:
  enabled: false
persistence:
#  type: pvc
  enabled: true
#  resourcePolicy: "keep"
  # storageClassName: default
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  annotations:
    helm.sh/resource-policy: "keep"
  finalizers:
    - kubernetes.io/pvc-protection
  # selectorLabels: {}
  ## Sub-directory of the PV to mount. Can be templated.
  # subPath: ""
  ## Name of an existing PVC. Can be templated.
  # existingClaim:
  ## Extra labels to apply to a PVC.
  extraPvcLabels: {}
## Expose the grafana service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref:
service:
  enabled: true
  type: ClusterIP
  port: 80
  targetPort: 3000
  # targetPort: 4181 To be used with a proxy extraContainer
  ## Service annotations. Can be templated.
  annotations: {}
  labels: {}
  portName: service
  # Adds the appProtocol field to the service. This allows to work with istio protocol selection. Ex: "http" or "tcp"
  appProtocol: ""
# Administrator credentials when not using an existing secret (see below)
adminUser: admin
adminPassword: 'password'

This will deploy Grafana with a PVC that is not deleted if the Helm installation of Grafana is uninstalled, and it will create a ClusterIP service exposing the Grafana UI internally in the cluster. So I need to create an IngressRoute to expose it outside the cluster using Traefik.

Below is the IngressRoute for this:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana-ingressroute
  namespace: grafana
spec:
  entryPoints:
    - web
  routes:
  - kind: Rule
    match: Host(``)
    services:
    - kind: Service
      name: grafana
      passHostHeader: true
      namespace: grafana
      port: 80

Again, the DNS name in the Host match is already registered, using the same external IP as the Prometheus one.

Now Grafana should be up and running.


The dashboard depicted above is the Traefik Official Standalone Dashboard, which can be imported from here or by using the following ID: 17346.
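Instead of importing the dashboard by ID in the UI, the same dashboard can be provisioned through the Grafana chart values using its `dashboardProviders` and `dashboards` keys. This is a sketch only; the provider name and the revision number are assumptions:

```yaml
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: default            # assumed provider name
        orgId: 1
        folder: ""
        type: file
        disableDeletion: false
        options:
          path: /var/lib/grafana/dashboards/default
dashboards:
  default:                       # must match the provider name above
    traefik-official:
      gnetId: 17346              # the dashboard ID mentioned above
      revision: 9                # assumed revision, pick the latest
      datasource: Prometheus-traefik
```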

That's it for monitoring with Prometheus and Grafana. Now onto a simple web application.

Test application Yelb

I wanted to expose my test application Yelb, deployed twice, but using two different DNS records. I also wanted these services to be exposed using a completely different subnet, to create the IP separation I have mentioned a couple of times. I have already deployed the Yelb application twice in my cluster, in their own respective namespaces:

andreasm@linuxmgmt01:~/prod-cluster-1/traefik/grafana$ k get pods -n yelb
NAME                            READY   STATUS    RESTARTS   AGE
redis-server-84f4bf49b5-fq26l   1/1     Running   0          13d
yelb-appserver-6dc7cd98-s6kt7   1/1     Running   0          13d
yelb-db-84d6f6fc6c-m7xvd        1/1     Running   0          13d
yelb-ui-6fbbcc4c87-qjdzg        1/1     Running   0          2d20h

andreasm@linuxmgmt01:~/prod-cluster-1/traefik/grafana$ k get pods -n yelb-2
NAME                            READY   STATUS    RESTARTS   AGE
redis-server-84f4bf49b5-4sx7f   1/1     Running   0          2d16h
yelb-appserver-6dc7cd98-tqkkh   1/1     Running   0          2d16h
yelb-db-84d6f6fc6c-t4td2        1/1     Running   0          2d16h
yelb-ui-2-84cc897d6d-64r9x      1/1     Running   0          2d16h

I want to expose the yelb-ui in both namespaces on their own DNS records using IngressRoutes. I also want to use a completely different external IP address than what I have been using so far under the management category. So this time I will be using this external IP:

apiVersion: v1
kind: Service
metadata:
  annotations:
    io.cilium/lb-ipam-ips: ""
  name: traefik-exposed-pool-1
  labels:
    env: traefik-pool-1
  namespace: traefik
spec:
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    port: 443
    protocol: TCP
    targetPort: websecure
  selector:
    app.kubernetes.io/name: traefik
  type: LoadBalancer

So I will need to register two DNS records against the IP above, one for each Yelb instance.
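For the `io.cilium/lb-ipam-ips` annotation on the Service above to be honored, a matching Cilium LB-IPAM pool has to exist. This is a sketch only: the CIDR is a placeholder, and the `blocks` field name assumes Cilium 1.14 or newer (older releases used `cidrs`):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: traefik-pool-1           # hypothetical pool name
spec:
  blocks:
    - cidr: 192.0.2.0/24         # placeholder subnet, not from the original post
  serviceSelector:
    matchLabels:
      env: traefik-pool-1        # matches the label on the Service above
```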

Then I can expose the Yelb UI services from both the namespaces yelb and yelb-2 with the following IngressRoutes:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: yelb-ingressroute-1
  namespace: yelb
spec:
  entryPoints:
    - web
  routes:
  - kind: Rule
    match: Host(``)
    services:
    - kind: Service
      name: yelb-ui-1
      passHostHeader: true
      namespace: yelb
      port: 80
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: yelb-ingressroute-2
  namespace: yelb-2
spec:
  entryPoints:
    - web
  routes:
  - kind: Rule
    match: Host(``)
    services:
    - kind: Service
      name: yelb-ui-2
      passHostHeader: true
      namespace: yelb-2
      port: 80

The two IngressRoutes applied:

andreasm@linuxmgmt01:~/prod-cluster-1/cilium/test-apps/yelb$ k get ingressroute -n yelb
NAME                  AGE
yelb-ingressroute-1   2d20h
andreasm@linuxmgmt01:~/prod-cluster-1/cilium/test-apps/yelb$ k get ingressroute -n yelb-2
NAME                  AGE
yelb-ingressroute-2   2d16h

Now I can access both of them using their own DNS records:





Traefik in front of my Home Assistant server

Another requirement I had was to expose my Home Assistant server using Traefik, including MQTT. This is how I configured Traefik to handle this.

Home Assistant port 8123

As Home Assistant is running outside my Kubernetes cluster, I needed to create an ExternalName service in my Kubernetes cluster for Traefik to use when forwarding requests to my "external" Home Assistant server.

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: external-homeassistant
  name: external-homeassistant
  namespace: traefik
spec:
  type: ExternalName
  ports:
    - name: homeassistant
      port: 8123
      targetPort: 8123
      protocol: TCP
  externalName:
  selector:
    app.kubernetes.io/instance: traefik-traefik
    app.kubernetes.io/name: traefik

The IP is the IP of my Home Assistant server, and the port is the one it is listening on. I decided to place the service in the same namespace as Traefik, as Home Assistant does not reside in any namespace in my Kubernetes cluster.

For this to work I needed to make sure my Traefik installation had this value enabled in my values.yaml config before running the Helm upgrade of the Traefik installation:

providers:
  kubernetesCRD:
    # -- Allows to reference ExternalName services in IngressRoute
    allowExternalNameServices: true
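Judging by the Traefik pod arguments shown later in this post, the full provider section also enables cross-namespace references and the same ExternalName flag for the Ingress provider. A sketch of the corresponding values, reconstructed from those arguments:

```yaml
providers:
  kubernetesCRD:
    # corresponds to --providers.kubernetescrd.allowCrossNamespace=true
    allowCrossNamespace: true
    # corresponds to --providers.kubernetescrd.allowExternalNameServices=true
    allowExternalNameServices: true
  kubernetesIngress:
    # corresponds to --providers.kubernetesingress.allowExternalNameServices=true
    allowExternalNameServices: true
```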

Here is the service after it has been applied:

andreasm@linuxmgmt01:~/prod-cluster-1/traefik/hass$ k get svc -n traefik
NAME                     TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
external-homeassistant   ExternalName   <none>                     8123/TCP   42h

Now I needed to create a middleware to redirect all http requests to https:

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: hass-redirectscheme
  namespace: traefik
spec:
  redirectScheme:
    scheme: https
    permanent: true

And finally the IngressRoute, which routes the requests to the Home Assistant ExternalName service and does TLS termination using my wildcard certificate:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: homeassistant-ingressroute
  namespace: traefik
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(``)
      kind: Rule
      middlewares:
        - name: hass-redirectscheme
          namespace: traefik
      services:
        - name: external-homeassistant
          kind: Service
          port: 8123
  tls:
    secretName: net-tls-prod

That's it, now I can access my Home Assistant through Traefik with TLS termination. And I don't have to worry about certificate expiration, as the certificate will be automatically renewed by Cert-Manager.


The DNS record is pointing to the IP I have decided to use for this purpose. Same concept as earlier.
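One detail on the Home Assistant side worth mentioning (this comes from Home Assistant's documented reverse-proxy requirements, not from the setup above): Home Assistant rejects proxied requests unless the proxy's network is trusted in configuration.yaml. The CIDR below is a placeholder for whatever network Traefik's requests arrive from:

```yaml
# Home Assistant configuration.yaml (not a Kubernetes manifest)
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 10.0.0.0/8   # placeholder: the network the Traefik pods egress from
```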

Home Assistant MQTT 1883

I am also running MQTT in Home Assistant to support a bunch of devices, even remote ones (not in the same house). So I wanted to use Traefik for that as well. This is how I configured Traefik to handle it:

I needed to create a new entrypoint in Traefik on port 1883, called mqtt. So I edited the Traefik values.yaml and updated it accordingly, then ran a Helm upgrade on the Traefik installation. Below is the config I added:

ports:
  mqtt:
    port: 1883
    protocol: TCP
    expose: true
    exposedPort: 1883

Now my Traefik pods also include the port 1883:

Containers:
  traefik:
    Container ID:  containerd://edf07e67ade4b005e7a7f8ac8a0991b2793c9320cabc35b6a5ea3c6271d63e6d
    Image:
    Image ID:
    Ports:         9100/TCP, 1883/TCP, 9000/TCP, 8000/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      --global.checknewversion
      --global.sendanonymoususage
      --entrypoints.metrics.address=:9100/tcp
      --entrypoints.mqtt.address=:1883/tcp
      --entrypoints.traefik.address=:9000/tcp
      --entrypoints.web.address=:8000/tcp
      --entrypoints.websecure.address=:8443/tcp
      --api.dashboard=true
      --ping=true
      --metrics.prometheus=true
      --metrics.prometheus.entrypoint=metrics
      --metrics.prometheus.addRoutersLabels=true
      --metrics.prometheus.addEntryPointsLabels=true
      --metrics.prometheus.addServicesLabels=true
      --metrics.prometheus.buckets=0.1,0.3,1.2,5.0,10.0
      --providers.kubernetescrd
      --providers.kubernetescrd.allowCrossNamespace=true
      --providers.kubernetescrd.allowExternalNameServices=true
      --providers.kubernetesingress
      --providers.kubernetesingress.allowExternalNameServices=true
      --entrypoints.websecure.http.tls=true

This service is not exposed to the internet, so I decided to create a third Service using another subnet for internal services, that is, services within my network that are not exposed to the internet.

I then created a DNS record for the mqtt service on this IP address. Below is the service I am using for mqtt:

apiVersion: v1
kind: Service
metadata:
  annotations:
    io.cilium/lb-ipam-ips: ""
  name: traefik-internal-pool-2
  labels:
    env: traefik-pool-2
  namespace: traefik
spec:
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: web
  - name: websecure
    port: 443
    protocol: TCP
    targetPort: websecure
  - name: mqtt
    port: 1883
    protocol: TCP
    targetPort: mqtt
  selector:
    app.kubernetes.io/name: traefik
  type: LoadBalancer

This Service includes the entrypoints web 80, websecure 443 and the newly created entrypoint mqtt 1883, so I can reuse it for other internal purposes as well.

Now I can go ahead and create another ExternalName service:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: mqtt-homeassistant
  name: mqtt-homeassistant
  namespace: traefik
spec:
  type: ExternalName
  ports:
    - name: mqtt-homeassistant
      port: 1883
      targetPort: 1883
      protocol: TCP
  externalName:
  selector:
    app.kubernetes.io/instance: traefik-traefik
    app.kubernetes.io/name: traefik

This is also pointing to the IP of my Home Assistant server but using port 1883 instead.

The last step is to create a TCP IngressRoute like this:

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: homeassistant-mqtt-ingressroute
  namespace: traefik
spec:
  entryPoints:
    - mqtt
  routes:
    - match: ClientIP(``)
      services:
        - name: mqtt-homeassistant
          port: 1883

I can now go ahead and repoint all my mqtt clients to the DNS record I have created, using the external IP above.
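A note on the matcher: `ClientIP` restricts routing to specific source addresses. For a plain (non-TLS) TCP entrypoint like this, the usual catch-all alternative in Traefik is `HostSNI(`*`)`. A sketch against the same ExternalName service (the route name is hypothetical):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: homeassistant-mqtt-catchall   # hypothetical name
  namespace: traefik
spec:
  entryPoints:
    - mqtt
  routes:
    - match: HostSNI(`*`)   # catch-all matcher for non-TLS TCP routers
      services:
        - name: mqtt-homeassistant
          port: 1883
```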

Traefik and Harbor Registry

The last use case I had for Traefik this round is my Harbor registry. I will quickly show how I did that here.

I deploy Harbor using Helm; below are the steps to add the repo and the values.yaml I am using:

helm repo add harbor
helm repo update

Here is my Harbor Helm values.yaml file:

expose:
  type: clusterIP
  tls:
    enabled: false
    certSource: secret
    secret:
      secretName: "net-tls-prod"
    auto:
      commonName:
  clusterIP:
    name: harbor
    ports:
      httpPort: 80
      httpsPort: 443
externalURL: ""
harborAdminPassword: "password"
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  # (this does not apply for PVCs that are created for internal database
  # and redis components, i.e. they are never deleted automatically)
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used (the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "nfs-client"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 50Gi
      annotations: {}
    database:
      existingClaim: ""
      storageClass: "nfs-client"
      subPath: "postgres-storage"
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
  tls:
    existingSecret: net-tls-prod
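The install command itself is not shown here; a sketch, where the release name and the values filename are my assumptions:

```shell
# Sketch: install Harbor with the values file above
# (release name, --create-namespace and values filename are assumptions)
helm install harbor harbor/harbor \
  --namespace harbor \
  --create-namespace \
  -f harbor-values.yaml
```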

Then I install Harbor using Helm, and it should end up like this, with only ClusterIP services:

andreasm@linuxmgmt01:~/prod-cluster-1/traefik/harbor$ k get svc -n harbor
NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
harbor              ClusterIP                <none>        80/TCP              29h
harbor-core         ClusterIP                <none>        80/TCP              29h
harbor-database     ClusterIP                <none>        5432/TCP            29h
harbor-jobservice   ClusterIP                <none>        80/TCP              29h
harbor-portal       ClusterIP                <none>        80/TCP              29h
harbor-redis        ClusterIP                <none>        6379/TCP            29h
harbor-registry     ClusterIP                <none>        5000/TCP,8080/TCP   29h
harbor-trivy        ClusterIP                <none>        8080/TCP            29h

I want to expose my Harbor registry to the Internet, so I will be using the Service with the corresponding external IP I am using for Internet-facing services. This is also the same external IP as I am using for my Home Automation exposure. This means I can expose several services to the Internet using the same port, like 443, with no need to create custom ports. Traefik will happily handle the requests coming to the respective DNS records as long as I have configured it to listen 😄

Now I just need to create a middleware to redirect all http to https, and the IngressRoute itself.

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: harbor-redirectscheme
  namespace: harbor
spec:
  redirectScheme:
    scheme: https
    permanent: true

Then the IngressRoute:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: harbor-ingressroute
  namespace: harbor
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(``)
      kind: Rule
      middlewares:
        - name: harbor-redirectscheme
          namespace: harbor
      services:
        - name: harbor-portal
          kind: Service
          port: 80
    - match: Host(``) && PathPrefix(`/api/`, `/c/`, `/chartrepo/`, `/service/`, `/v2/`)
      kind: Rule
      middlewares:
        - name: harbor-redirectscheme
          namespace: harbor
      services:
        - name: harbor
          kind: Service
          port: 80
  tls:
    secretName: net-tls-prod

Now, let me see if I can reach Harbor:


And can I login via Docker?

andreasm@linuxmgmt01:~/prod-cluster-1/traefik/harbor$ docker login
Username: andreasm
Password:
WARNING! Your password will be stored unencrypted in /home/andreasm/.docker/config.json.
Configure a credential helper to remove this warning. See

Login Succeeded


I found Traefik in combination with Cilium a very pleasant experience: the ease of creating IP pools in Cilium and advertising the host routes over BGP, and how I could configure Traefik to use different external IP entrypoints to cover needs like IP separation. The built-in Traefik dashboard, and building Grafana dashboards on top of the Prometheus metrics, were very nice as well. I feel very confident that Traefik is one of my go-to reverse proxies going forward. By deploying Traefik on my Kubernetes cluster I also achieved high availability and scalability. When I started out with Traefik I found it a bit "difficult", as I mentioned in the beginning of this post, but after playing around with it for a while and getting the terminology under my skin, I find Traefik quite easy to manage and operate. Traefik also has a good community, which helped me get the help I needed when I was stuck.

This post is not meant to be an exhaustive list of Traefik's capabilities; it is just scratching the surface of what Traefik can do, so I will most likely create a follow-up post when I dive into deeper and more advanced topics with Traefik.