Installing TMC local on vSphere 8 with Tanzu using Keycloak as OIDC provider

Overview

TMC local or TMC-SM

TMC, Tanzu Mission Control, has always been a SaaS offering, but it has now also been released as an installable product you can deploy in your own environment. Throughout this post I will mostly refer to it as TMC SM or TMC local; SM stands for Self-Managed. For all official documentation and updated content, including the installation process, head over here.

Pre-requirements

There are always some pre-requirements that need to be in place. Why always pre-requirements? Well, there is no point building cars if there are no roads for them to drive on, is there? That's enough humour for today. Instead of listing the requirements in detail here, head over to the official page here and get familiar with them. For this post I have already deployed a Kubernetes cluster in my vSphere with Tanzu environment that meets the requirements; more on that later. I will cover the certificate requirement by deploying Cert-Manager and configuring a ClusterIssuer. I will not cover the image registry, as I already have a registry up and running and will be using that. Nor will I cover the loadbalancer/Ingress installation, as I am assuming the following is already in place:

  • A working vSphere 8 Environment
  • A working Tanzu with vSphere Supervisor deployment
  • A working NSX-ALB configuration to support both L4 and L7 services (meaning AKO is installed on the cluster for TMC-SM)
  • A working image registry with a valid signed certificate; I will be using Harbor Registry

I will be using NSX ALB in combination with Contour, which is installed as part of TMC-SM. I will cover the specifics of configuring NSX-ALB, more specifically AKO, to support Keycloak via Ingress. Then I will cover the installation and configuration of Keycloak to meet the OIDC requirement, and show how I handle my DNS zone for the TMC installation. As a final note, remember that the certificates I am going to use need to be trusted by the components that will be consuming them, and DNS is important. Let's go through it step by step.

The steps will be done in the following order:

install-steps

And, according to the official documentation:

Note

Deploying TMC Self-Managed 1.0 on a Tanzu Kubernetes Grid (TKG) 2.0 workload cluster running in vSphere with Tanzu on vSphere version 8.x is for tech preview only. Initiate deployments only in pre-production environments or production environments where support for the integration is not required. vSphere 8u1 or later is required in order to test the tech preview integration.

I will use vSphere 8 U1 in this post, which is by no means meant as a guideline for a production-ready setup of TMC-SM.

The TKG cluster - where TMC will be deployed

I have used the configuration below to deploy my TKG cluster, with the VM class guaranteed-large; it will also work with 4 CPUs and 8GB RAM on the nodes. Oh, and by the way: this installation is done on a vSphere with Tanzu multi-zone setup:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tmc-sm-cluster #My own name on the cluster
  namespace: ns-wdc-prod #My vSphere Namespace
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["20.10.0.0/16"] #Edited by me
    pods:
      cidrBlocks: ["20.20.0.0/16"] #Edited by me
    serviceDomain: "cluster.local"
  topology:
    class: tanzukubernetescluster
    version: v1.24.9+vmware.1-tkg.4 #My latest available TKR version
    controlPlane:
      replicas: 1 # only one controlplane (saving resources and time)
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu
    workers:
      #multiple node pools are used
      machineDeployments:
        - class: node-pool
          name: node-pool-1
          replicas: 1 #only 1 worker here
          metadata:
            annotations:
              run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu
          #failure domain the machines will be created in
          #maps to a vSphere Zone; name must match exactly
          failureDomain: wdc-zone-1 #named after my vSphere zone
        - class: node-pool
          name: node-pool-2
          replicas: 2 #2 workers here
          metadata:
            annotations:
              run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu
          #failure domain the machines will be created in
          #maps to a vSphere Zone; name must match exactly
          failureDomain: wdc-zone-2 #named after my vSphere zone
        - class: node-pool
          name: node-pool-3
          replicas: 1 #only 1 worker here
          metadata:
            annotations:
              run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu
          #failure domain the machines will be created in
          #maps to a vSphere Zone; name must match exactly
          failureDomain: wdc-zone-3 #named after my vSphere zone
    variables:
      - name: vmClass
        value: guaranteed-large
      - name: storageClass
        value: all-vsans #my zonal storageclass
      - name: defaultStorageClass
        value: all-vsans
      - name: controlPlaneVolumes
        value:
          - name: etcd
            capacity:
              storage: 10Gi
            mountPath: /var/lib/etcd
            storageClass: all-vsans
      - name: nodePoolVolumes
        value:
          - name: containerd
            capacity:
              storage: 50Gi
            mountPath: /var/lib/containerd
            storageClass: all-vsans
          - name: kubelet
            capacity:
              storage: 50Gi
            mountPath: /var/lib/kubelet
            storageClass: all-vsans

As soon as the cluster is deployed and ready I will log into it, change my context using kubectl vsphere login ...., and apply my clusterrole policy:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:privileged
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs:     ['use']
  resourceNames:
  - vmware-system-privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: all:psp:privileged
roleRef:
  kind: ClusterRole
  name: psp:privileged
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io

ClusterIssuer

To support dynamically creating/issuing certificates I will deploy Cert-Manager. The approach I am using is to install the provided Cert-Manager package available in Tanzu.

Tanzu Cert-Manager Package

I will have to add the package repository from which I can download and install Cert-Manager, and a namespace for the packages themselves. Before I can proceed I need the Tanzu CLI; the official approach can be found here. Download the Tanzu CLI from here.

Extract it:

tar -zxvf tanzu-cli-bundle-linux-amd64.tar.gz

Enter the cli folder and copy or move the binary to a folder in your path:

andreasm@linuxvm01:~/tanzu-cli/cli/core/v0.29.0$ cp tanzu-core-linux_amd64 /usr/local/bin/tanzu

Run tanzu init and tanzu plugin sync:

tanzu init
tanzu plugin sync

When that is done, go ahead and create the namespace:

kubectl create ns tanzu-package-repo-global

Then add the repository:

tanzu package repository add tanzu-standard --url projects.registry.vmware.com/tkg/packages/standard/repo:v2.2.0 -n tanzu-package-repo-global

Then install the Cert-Manager package:

tanzu package install cert-manager --package cert-manager.tanzu.vmware.com --version 1.7.2+vmware.3-tkg.3 -n tanzu-package-repo-global

CA Issuer

Now it is time to configure Cert-Manager with a CA certificate so it can act as a CA ClusterIssuer. Let's start by creating a CA certificate.

Create the certificate, without a passphrase:

andreasm@linuxvm01:~/tmc-sm$ openssl req -nodes -x509 -sha256 -days 1825 -newkey rsa:2048 -keyout rootCA.key -out rootCA.crt
Generating a RSA private key
..........................................................................+++++
.+++++
writing new private key to 'rootCA.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:punxsutawney
Locality Name (eg, city) []:Groundhog
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Day
Organizational Unit Name (eg, section) []:SameDay
Common Name (e.g. server FQDN or YOUR name) []:tmc.pretty-awesome-domain.net
Email Address []:

This should give me two files:

1407 Jul 12 14:21 rootCA.crt
1704 Jul 12 14:19 rootCA.key
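If you prefer to skip the interactive prompts, the same CA certificate can be generated in one shot by supplying the DN fields with -subj; a sketch reusing the same values as above:

```shell
# non-interactive variant of the interactive command above; -subj supplies the DN fields
openssl req -nodes -x509 -sha256 -days 1825 -newkey rsa:2048 \
  -keyout rootCA.key -out rootCA.crt \
  -subj "/C=US/ST=punxsutawney/L=Groundhog/O=Day/OU=SameDay/CN=tmc.pretty-awesome-domain.net"

# double-check the subject and validity that ended up in the certificate
openssl x509 -in rootCA.crt -noout -subject -dates
```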

Then I will go ahead and create a secret for Cert-Manager using the two files above in Base64 format:

andreasm@linuxvm01:~/tmc-sm$ cat rootCA.crt | base64 -w0
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ0ekNDQXN1Z0F3SUJBZ0lVSFgyak5rbysvdnNlcjc0dGpxS2R3U1ZMQlhVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZQXhDekFKQmdOVkJBWVRBbFZUTVJVd0V3WURWUVFJREF4d2RXNTRjM1YwWVhkdVpYa3hFakFRQmdOVgpCQWNNQ1VkeWIzVnVaR2h2WnpFTU1Bb0dBMVVFQ2d3RFJHRjVNUkF3RGdZRFZRUUxEQWRUWVcxbFJHRjVNU1l3CkpBWURWUVFEREIxMGJX
andreasm@linuxvm01:~/tmc-sm$ cat rootCA.key | base64 -w0
LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRREFSR2RCSWwreUVUbUsKOGI0N2l4NUNJTDlXNVh2dkZFY0Q3KzZMbkxxQ3ZVTWdyNWxhNGFjUU8vZUsxUFdIV0YvWk9UN0ZyWUY0QVpmYgpFbzB5ejFxL3pGT3AzQS9sMVNqN3lUeHY5WmxYRU9DbWI4dGdQVm9Ld3drUHFiQ0RtNVZ5Ri9HaGUvMDFsbXl6CnEyMlpGM0M4

Put the above content into the secret.yaml file below:

apiVersion: v1
kind: Secret
metadata:
  name: ca-key-pair
  namespace: cert-manager
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQvekNDQ....
  tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQUR...

Then apply it:

andreasm@linuxvm01:~/tmc-sm$ k apply -f secret.yaml
secret/ca-key-pair configured

Now create the ClusterIssuer yaml definition:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: ca-key-pair

This points to the secret created in the previous step. And apply it:

andreasm@linuxvm01:~/tmc-sm$ k apply -f secret-key-pair.yaml
clusterissuer.cert-manager.io/ca-issuer configured

Now check the status of the clusterissuer. It can take a couple of seconds. If it does not go to a Ready state, check the logs of the cert-manager pod.

andreasm@linuxvm01:~/tmc-sm$ k get clusterissuers.cert-manager.io
NAME        READY   AGE
ca-issuer   True    20s

Now we have a ClusterIssuer we can use to issue certificates signed by our own CA.
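To illustrate how the ClusterIssuer gets consumed, here is a minimal, hypothetical Certificate resource (the names demo-cert and demo-cert-tls are my own examples) that would ask ca-issuer to mint a key pair into a secret:

```yaml
# hypothetical example - requesting a certificate from the ca-issuer ClusterIssuer
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-cert
  namespace: default
spec:
  secretName: demo-cert-tls   # cert-manager stores the signed cert and key here
  commonName: demo.tmc.pretty-awesome-domain.net
  dnsNames:
    - demo.tmc.pretty-awesome-domain.net
  issuerRef:
    kind: ClusterIssuer
    name: ca-issuer
```

In practice we will mostly let the Ingress annotations drive this, so cert-manager creates resources like this for us behind the scenes.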

DNS-Zone

In my environment I am using dnsmasq as the backend DNS server for all my clients, servers etc. to handle DNS records and zones. So in my dnsmasq config I will need to create a "forward" zone for my specific tmc.pretty-awesome-domain.net, which forwards all requests to the DNS service I have configured in Avi. Here is the dnsmasq.conf entry:

server=/.tmc.pretty-awesome-domain.net/10.101.211.9

The IP 10.101.211.9 is my NSX ALB DNS VS. Now in my NSX ALB DNS service I need to create an entry that points tmc.pretty-awesome-domain.net to the Contour IP. Later in this post we need to define a values yaml file, in which we can specify the IP the Contour service should get. This IP is used by the NSX ALB DNS to forward all the wildcard requests for tmc.pretty-awesome-domain.net. To configure that in NSX ALB:

Edit the DNS VS, add a static DNS record, and point it to the IP of the Contour service (not there yet, but it will come when we start deploying TMC-SM). Also remember to check Enable wild-card match:

static-dns-record-wildcard

added

So what is going on here? I have configured my NSX ALB DNS service to be responsible for a domain called pretty-awesome-domain.net by adding this domain to the DNS Profile template my NSX ALB Cloud is configured with. Each time a Kubernetes service requests a DNS record in this domain, NSX ALB will create the entry with the correct FQDN/IP mapping. I have also created a static entry for the subdomain tmc.pretty-awesome-domain.net in the NSX ALB provider, which will forward all wildcard requests to the Contour service, which holds these actual records:

  • <my-tmc-dns-zone>
  • alertmanager.<my-tmc-dns-zone>
  • auth.<my-tmc-dns-zone>
  • blob.<my-tmc-dns-zone>
  • console.s3.<my-tmc-dns-zone>
  • gts-rest.<my-tmc-dns-zone>
  • gts.<my-tmc-dns-zone>
  • landing.<my-tmc-dns-zone>
  • pinniped-supervisor.<my-tmc-dns-zone>
  • prometheus.<my-tmc-dns-zone>
  • s3.<my-tmc-dns-zone>
  • tmc-local.s3.<my-tmc-dns-zone>

So I don't have to manually create these DNS records; they will just happily be handed over to the Contour Ingress records. This is what my DNS lookups look like:

dns-lookups

Keycloak - OIDC/ID provider - using AKO as Ingress controller

One of the requirements for TMC local is an OIDC provider. My colleague Alex gave me the tip to try Keycloak, as it also works as a standalone provider without any backend LDAP service. This section is therefore divided into two sub-sections: one covers the actual installation of Keycloak using Helm, and the other covers the Keycloak authentication settings that are required for TMC local.

Keycloak installation

I am using Helm to install Keycloak in my cluster. That means we need Helm installed, plus the Helm repository that contains the Keycloak charts. I will be using the Bitnami repo for this purpose. So first, add the Bitnami repo:

andreasm@linuxvm01:~$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

Then do a helm search repo to see that it has been added (look for a long list of bitnami/xxxx):

andreasm@linuxvm01:~$ helm search repo
NAME                                        	CHART VERSION	APP VERSION  	DESCRIPTION
bitnami/airflow                             	14.3.1       	2.6.3        	Apache Airflow is a tool to express and execute...
bitnami/apache                              	9.6.4        	2.4.57       	Apache HTTP Server is an open-source HTTP serve...
bitnami/apisix                              	2.0.3        	3.3.0        	Apache APISIX is high-performance, real-time AP...
bitnami/appsmith                            	0.3.9        	1.9.25       	Appsmith is an open source platform for buildin...
bitnami/argo-cd                             	4.7.14       	2.7.7        	Argo CD is a continuous delivery tool for Kuber...
bitnami/argo-workflows                      	5.3.6        	3.4.8        	Argo Workflows is meant to orchestrate Kubernet...
bitnami/aspnet-core                         	4.3.3        	7.0.9        	ASP.NET Core is an open-source framework for we...
bitnami/cassandra                           	10.4.3       	4.1.2        	Apache Cassandra is an open source distributed ...
bitnami/cert-manager                        	0.11.5       	1.12.2       	cert-manager is a Kubernetes add-on to automate...
bitnami/clickhouse                          	3.5.4        	23.6.2       	ClickHouse is an open-source column-oriented OL...
bitnami/common                              	2.6.0        	2.6.0        	A Library Helm Chart for grouping common logic ...
bitnami/concourse                           	2.2.3        	7.9.1        	Concourse is an automation system written in Go...
bitnami/consul                              	10.12.4      	1.16.0       	HashiCorp Consul is a tool for discovering and ...
bitnami/contour                             	12.1.1       	1.25.0       	Contour is an open source Kubernetes ingress co...
bitnami/contour-operator                    	4.2.1        	1.24.0       	DEPRECATED The Contour Operator extends the Kub...
bitnami/dataplatform-bp2                    	12.0.5       	1.0.1        	DEPRECATED This Helm chart can be used for the ...
bitnami/discourse                           	10.3.4       	3.0.4        	Discourse is an open source discussion platform...
bitnami/dokuwiki                            	14.1.4       	20230404.1.0 	DokuWiki is a standards-compliant wiki optimize...
bitnami/drupal                              	14.1.5       	10.0.9       	Drupal is one of the most versatile open source...
bitnami/ejbca                               	7.1.3        	7.11.0       	EJBCA is an enterprise class PKI Certificate Au...
bitnami/elasticsearch                       	19.10.3      	8.8.2        	Elasticsearch is a distributed search and analy...
bitnami/etcd                                	9.0.4        	3.5.9        	etcd is a distributed key-value store designed ...
bitnami/external-dns                        	6.20.4       	0.13.4       	ExternalDNS is a Kubernetes addon that configur...
bitnami/flink                               	0.3.3        	1.17.1       	Apache Flink is a framework and distributed pro...
bitnami/fluent-bit                          	0.4.6        	2.1.6        	Fluent Bit is a Fast and Lightweight Log Proces...
bitnami/fluentd                             	5.8.5        	1.16.1       	Fluentd collects events from various data sourc...
bitnami/flux                                	0.3.5        	0.36.1       	Flux is a tool for keeping Kubernetes clusters ...
bitnami/geode                               	1.1.8        	1.15.1       	DEPRECATED Apache Geode is a data management pl...
bitnami/ghost                               	19.3.23      	5.54.0       	Ghost is an open source publishing platform des...
bitnami/gitea                               	0.3.5        	1.19.4       	Gitea is a lightweight code hosting solution. W...
bitnami/grafana                             	9.0.1        	10.0.1       	Grafana is an open source metric analytics and ...
bitnami/grafana-loki                        	2.10.0       	2.8.2        	Grafana Loki is a horizontally scalable, highly...
bitnami/grafana-mimir                       	0.5.4        	2.9.0        	Grafana Mimir is an open source, horizontally s...
bitnami/grafana-operator                    	3.0.2        	5.1.0        	Grafana Operator is a Kubernetes operator that ...
bitnami/grafana-tempo                       	2.3.4        	2.1.1        	Grafana Tempo is a distributed tracing system t...
bitnami/haproxy                             	0.8.4        	2.8.1        	HAProxy is a TCP proxy and a HTTP reverse proxy...
bitnami/haproxy-intel                       	0.2.11       	2.7.1        	DEPRECATED HAProxy for Intel is a high-performa...
bitnami/harbor                              	16.7.0       	2.8.2        	Harbor is an open source trusted cloud-native r...
bitnami/influxdb                            	5.7.1        	2.7.1        	InfluxDB(TM) is an open source time-series data...
bitnami/jaeger                              	1.2.6        	1.47.0       	Jaeger is a distributed tracing system. It is u...
bitnami/jasperreports                       	15.1.3       	8.2.0        	JasperReports Server is a stand-alone and embed...
bitnami/jenkins                             	12.2.4       	2.401.2      	Jenkins is an open source Continuous Integratio...
bitnami/joomla                              	14.1.5       	4.3.3        	Joomla! is an award winning open source CMS pla...
bitnami/jupyterhub                          	4.1.6        	4.0.1        	JupyterHub brings the power of notebooks to gro...
bitnami/kafka                               	23.0.2       	3.5.0        	Apache Kafka is a distributed streaming platfor...
bitnami/keycloak                            	15.1.6       	21.1.2       	Keycloak is a high performance Java-based ident...

And in the list above we can see the bitnami/keycloak chart. So far so good. Now grab the default Keycloak chart values file:

helm show values bitnami/keycloak > keycloak-values.yaml

This should give you a file called keycloak-values.yaml. We need to make some basic changes in here. The values file below shows snippets from the full values file, with comments on what I have changed:

## Keycloak authentication parameters
## ref: https://github.com/bitnami/containers/tree/main/bitnami/keycloak#admin-credentials
##
auth:
  ## @param auth.adminUser Keycloak administrator user
  ##
  adminUser: admin # I have changed the user to admin
  ## @param auth.adminPassword Keycloak administrator password for the new user
  ##
  adminPassword: "PASSWORD" # I have entered my password here
  ## @param auth.existingSecret Existing secret containing Keycloak admin password
  ##
  existingSecret: ""
  ## @param auth.passwordSecretKey Key where the Keycloak admin password is being stored inside the existing secret.
  ##
  passwordSecretKey: ""
...
## @param production Run Keycloak in production mode. TLS configuration is required except when using proxy=edge.
##
production: false
## @param proxy reverse Proxy mode edge, reencrypt, passthrough or none
## ref: https://www.keycloak.org/server/reverseproxy
##
proxy: edge # I am using AKO to terminate the SSL cert at the Service Engine side. So set this to edge
## @param httpRelativePath Set the path relative to '/' for serving resources. Useful if you are migrating from older version which were using '/auth/'
## ref: https://www.keycloak.org/migration/migrating-to-quarkus#_default_context_path_changed
##
...
postgresql:
  enabled: true
  auth:
    postgresPassword: "PASSWORD" # I have added my own password here
    username: bn_keycloak
    password: "PASSWORD" # I have added my own password here
    database: bitnami_keycloak
    existingSecret: ""
  architecture: standalone

In short, the changes I have made are: adjusting the adminUser and its password, changing the proxy setting to edge, and setting the PostgreSQL passwords, as I don't want to use the auto-generated passwords.

Then I can deploy Keycloak with this values yaml file:

andreasm@linuxvm01:~/tmc-sm/keycloak$ k create ns keycloak
andreasm@linuxvm01:~/tmc-sm/keycloak$ helm upgrade -i -n keycloak keycloak bitnami/keycloak -f keycloak-values.yaml
Release "keycloak" has been upgraded. Happy Helming!
NAME: keycloak
LAST DEPLOYED: Wed Jul 12 21:34:32 2023
NAMESPACE: keycloak
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
CHART NAME: keycloak
CHART VERSION: 15.1.6
APP VERSION: 21.1.2

** Please be patient while the chart is being deployed **

Keycloak can be accessed through the following DNS name from within your cluster:

    keycloak.keycloak.svc.cluster.local (port 80)

To access Keycloak from outside the cluster execute the following commands:

1. Get the Keycloak URL by running these commands:

    export HTTP_SERVICE_PORT=$(kubectl get --namespace keycloak -o jsonpath="{.spec.ports[?(@.name=='http')].port}" services keycloak)
    kubectl port-forward --namespace keycloak svc/keycloak ${HTTP_SERVICE_PORT}:${HTTP_SERVICE_PORT} &

    echo "http://127.0.0.1:${HTTP_SERVICE_PORT}/"

2. Access Keycloak using the obtained URL.
3. Access the Administration Console using the following credentials:

  echo Username: admin
  echo Password: $(kubectl get secret --namespace keycloak keycloak -o jsonpath="{.data.admin-password}" | base64 -d)

I am using the helm command upgrade -i, which means the chart will be installed if it is not already, and if it is installed the existing installation will be upgraded with the content of the values yaml file.

Keeping the values.yaml as close to default as possible means it will not create any serviceType loadBalancer or Ingress. That is something I would like to handle myself after the actual Keycloak deployment is up and running. More on that later.
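As an aside, the Bitnami chart can also create the Ingress for you; a sketch of the relevant values would look something like the snippet below (the exact parameter names are my assumption and should be checked against your chart version), but as stated, I prefer to manage the Ingress myself:

```yaml
# hypothetical values snippet - letting the chart create the Ingress instead
ingress:
  enabled: true
  hostname: keycloak.tmc.pretty-awesome-domain.net
  ingressClassName: avi-lb
  tls: true
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
```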

Any pods running?

andreasm@linuxvm01:~/tmc-sm/keycloak$ k get pods -n keycloak
NAME                    READY   STATUS    RESTARTS   AGE
keycloak-0              0/1     Running   0          14s
keycloak-postgresql-0   1/1     Running   0          11h

Almost. Give it a couple more seconds and it should be ready.

andreasm@linuxvm01:~/tmc-sm/keycloak$ k get pods -n keycloak
NAME                    READY   STATUS    RESTARTS   AGE
keycloak-0              1/1     Running   0          2m43s
keycloak-postgresql-0   1/1     Running   0          11h

Keycloak is running. Now I need to expose it with a serviceType loadBalancer or an Ingress. I have opted to use Ingress, as I feel it is much easier to manage the certificates in NSX-ALB and also let the NSX-ALB SEs handle the TLS termination, instead of doing it in the pod itself. So now I need to configure the Ingress for the ClusterIP service that is automatically created by the Helm chart above. Let's check the service:

andreasm@linuxvm01:~/tmc-sm/keycloak$ k get svc -n keycloak
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
keycloak                 ClusterIP   20.10.61.222   <none>        80/TCP     31h
keycloak-headless        ClusterIP   None           <none>        80/TCP     31h
keycloak-postgresql      ClusterIP   20.10.8.129    <none>        5432/TCP   31h
keycloak-postgresql-hl   ClusterIP   None           <none>        5432/TCP   31h

The one I am interested in is the keycloak ClusterIP service. The next step is to configure the Ingress for this service. I will post the yaml I am using for this Ingress and explain a bit more below. This step assumes Avi is installed and configured, and that AKO has been deployed and is ready to provision Ingress requests. For details on how to install AKO in TKG read here and here.

Just a quick comment before we go through the Ingress: what I want to achieve is an Ingress that handles the client requests and TLS termination at the "loadbalancer" side. Traffic from the "loadbalancer" (the Avi SEs) to the Keycloak pod is plain HTTP, no SSL. I trust my infra between the SEs and the Keycloak pods.

The Ingress for Keycloak:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: keycloak
  annotations:
    cert-manager.io/cluster-issuer: ca-issuer
    cert-manager.io/common-name: keycloak.tmc.pretty-awesome-domain.net
#    ako.vmware.com/enable-tls: "true"

spec:
  ingressClassName: avi-lb
  rules:
    - host: keycloak.tmc.pretty-awesome-domain.net
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: keycloak
              port:
                number: 80
  tls:
  - hosts:
      - keycloak.tmc.pretty-awesome-domain.net
    secretName: keycloak-ingress-secret

In the above yaml I am creating the Ingress to expose my Keycloak instance externally. I am also kindly asking my ca-issuer to issue a fresh new certificate for this Ingress to use. This is done by adding the annotation cert-manager.io/cluster-issuer: ca-issuer, which would be sufficient in other scenarios, but I also needed to add this section:

  tls:
  - hosts:
      - keycloak.tmc.pretty-awesome-domain.net
    secretName: keycloak-ingress-secret

Now I just need to apply it:

andreasm@linuxvm01:~/tmc-sm/keycloak$ k apply -f keycloak-ingress.yaml
ingress.networking.k8s.io/keycloak created

Now, what has been created on the NSX-ALB side?

keycloak-ingress

There is my Ingress for Keycloak. Let's check the certificate it is using:

ssl-certificate

It is using my freshly created certificate. I will go ahead and open the Keycloak UI in my browser:

cert-not-trusted

certificate

What's this? The certificate is the correct one... Remember that I am using Cert-Manager to issue certificates signed by my own CA? I need to trust the root of that CA on my client to make this certificate trusted. How this is done depends on your client's operating system, so I will not go through it here. But I have now added the rootCA.crt created earlier (the same rootCA.crt I generated for my ClusterIssuer) as a trusted root certificate on my client. Let me try again now.
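Independently of any OS trust store, openssl can confirm that a certificate validates against the rootCA. As a hypothetical check, if you save the certificate presented by the Ingress (e.g. exported from the browser) to a file called keycloak.crt:

```shell
# verify a saved certificate against our own CA; prints "keycloak.crt: OK" on success
# (keycloak.crt is an assumed filename for the cert exported from the browser)
openssl verify -CAfile rootCA.crt keycloak.crt
```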

much-better

Now it is looking much better πŸ˜„

Let's try to log in:

administration-console

Using the username and password provided in the values yaml file.

hmm1

hmm2

Something seems to be wrong here... my login is just "looping" somehow. Let's check the Keycloak pod logs:

andreasm@linuxvm01:~/tmc-sm/keycloak$ k logs -n keycloak keycloak-0
keycloak 21:34:34.96
keycloak 21:34:34.97 Welcome to the Bitnami keycloak container
keycloak 21:34:34.97 Subscribe to project updates by watching https://github.com/bitnami/containers
keycloak 21:34:34.97 Submit issues and feature requests at https://github.com/bitnami/containers/issues
keycloak 21:34:34.97
keycloak 21:34:34.97 INFO  ==> ** Starting keycloak setup **
keycloak 21:34:34.98 INFO  ==> Validating settings in KEYCLOAK_* env vars...
keycloak 21:34:35.00 INFO  ==> Trying to connect to PostgreSQL server keycloak-postgresql...
keycloak 21:34:35.01 INFO  ==> Found PostgreSQL server listening at keycloak-postgresql:5432
keycloak 21:34:35.02 INFO  ==> Configuring database settings
keycloak 21:34:35.05 INFO  ==> Enabling statistics
keycloak 21:34:35.06 INFO  ==> Configuring http settings
keycloak 21:34:35.08 INFO  ==> Configuring hostname settings
keycloak 21:34:35.09 INFO  ==> Configuring cache count
keycloak 21:34:35.10 INFO  ==> Configuring log level
keycloak 21:34:35.11 INFO  ==> Configuring proxy
keycloak 21:34:35.12 INFO  ==> ** keycloak setup finished! **

keycloak 21:34:35.14 INFO  ==> ** Starting keycloak **
Appending additional Java properties to JAVA_OPTS: -Djgroups.dns.query=keycloak-headless.keycloak.svc.cluster.local
Updating the configuration and installing your custom providers, if any. Please wait.
2023-07-12 21:34:38,622 WARN  [org.keycloak.services] (build-6) KC-SERVICES0047: metrics (org.jboss.aerogear.keycloak.metrics.MetricsEndpointFactory) is implementing the internal SPI realm-restapi-extension. This SPI is internal and may change without notice
2023-07-12 21:34:39,163 WARN  [org.keycloak.services] (build-6) KC-SERVICES0047: metrics-listener (org.jboss.aerogear.keycloak.metrics.MetricsEventListenerFactory) is implementing the internal SPI eventsListener. This SPI is internal and may change without notice
2023-07-12 21:34:51,024 INFO  [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 14046ms
2023-07-12 21:34:52,578 INFO  [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: <request>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: true
2023-07-12 21:34:54,013 WARN  [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-07-12 21:34:54,756 INFO  [org.infinispan.SERVER] (keycloak-cache-init) ISPN005054: Native IOUring transport not available, using NIO instead: io.netty.incubator.channel.uring.IOUring
2023-07-12 21:34:54,961 WARN  [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2023-07-12 21:34:54,987 WARN  [io.quarkus.vertx.http.runtime.VertxHttpRecorder] (main) The X-Forwarded-* and Forwarded headers will be considered when determining the proxy address. This configuration can cause a security issue as clients can forge requests and send a forwarded header that is not overwritten by the proxy. Please consider use one of these headers just to forward the proxy address in requests.
2023-07-12 21:34:54,990 WARN  [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-07-12 21:34:55,005 INFO  [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-07-12 21:34:55,450 INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2023-07-12 21:34:55,455 INFO  [org.jgroups.JChannel] (keycloak-cache-init) local_addr: 148671ea-e4a4-4b1f-9ead-78c598924c94, name: keycloak-0-45065
2023-07-12 21:34:55,466 INFO  [org.jgroups.protocols.FD_SOCK2] (keycloak-cache-init) server listening on *.57800
2023-07-12 21:34:57,471 INFO  [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) keycloak-0-45065: no members discovered after 2002 ms: creating cluster as coordinator
2023-07-12 21:34:57,480 INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [keycloak-0-45065|0] (1) [keycloak-0-45065]
2023-07-12 21:34:57,486 INFO  [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `keycloak-0-45065`, physical addresses are `[20.20.2.68:7800]`
2023-07-12 21:34:57,953 INFO  [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: keycloak-0-45065, Site name: null
2023-07-12 21:34:57,962 INFO  [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-07-12 21:34:59,149 INFO  [io.quarkus] (main) Keycloak 21.1.2 on JVM (powered by Quarkus 2.13.8.Final) started in 7.949s. Listening on: http://0.0.0.0:8080
2023-07-12 21:34:59,150 INFO  [io.quarkus] (main) Profile dev activated.
2023-07-12 21:34:59,150 INFO  [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, logging-gelf, micrometer, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, vertx]
442023-07-12 21:34:59,160 ERROR [org.keycloak.services] (main) KC-SERVICES0010: Failed to add user 'admin' to realm 'master': user with username exists
452023-07-12 21:34:59,161 WARN  [org.keycloak.quarkus.runtime.KeycloakMain] (main) Running the server in development mode. DO NOT use this configuration in production.
462023-07-12 22:04:22,511 WARN  [org.keycloak.events] (executor-thread-4) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
472023-07-12 22:04:27,809 WARN  [org.keycloak.events] (executor-thread-6) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
482023-07-12 22:04:33,287 WARN  [org.keycloak.events] (executor-thread-3) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
492023-07-12 22:04:44,105 WARN  [org.keycloak.events] (executor-thread-7) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
502023-07-12 22:04:55,303 WARN  [org.keycloak.events] (executor-thread-5) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
512023-07-12 22:05:00,707 WARN  [org.keycloak.events] (executor-thread-6) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
522023-07-12 22:05:06,861 WARN  [org.keycloak.events] (executor-thread-4) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
532023-07-12 22:05:12,484 WARN  [org.keycloak.events] (executor-thread-4) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
542023-07-12 22:05:18,351 WARN  [org.keycloak.events] (executor-thread-6) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
552023-07-12 22:05:28,509 WARN  [org.keycloak.events] (executor-thread-4) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
562023-07-12 22:05:37,438 WARN  [org.keycloak.events] (executor-thread-7) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
572023-07-12 22:05:42,742 WARN  [org.keycloak.events] (executor-thread-5) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
582023-07-12 22:05:47,750 WARN  [org.keycloak.events] (executor-thread-5) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
592023-07-12 22:05:53,019 WARN  [org.keycloak.events] (executor-thread-3) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
602023-07-12 22:05:58,020 WARN  [org.keycloak.events] (executor-thread-3) type=REFRESH_TOKEN_ERROR, realmId=6944b0b7-3592-4ef3-ad40-4b1a7b64543d, clientId=security-admin-console, userId=null, ipAddress=172.18.6.141, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret

Hmm, error=invalid_token... type=REFRESH_TOKEN_ERROR... Well, after some investigating, some Sherlock Holmes-ing, I managed to figure out what caused this. I needed to deselect a setting in the Avi Application Profile that was selected by default for this Ingress. So first I need to create a new Application Profile with mostly default settings, but with HTTP-only Cookies unselected. Head over to the NSX-ALB GUI and create a new application profile:

application-profile-create

Click Create and select HTTP under Type:

http-type

Then scroll down under Security and make these selections:

settings-security

Give it a name at the top and click save at the bottom right corner:

name-save

Now we need to tell our Ingress to use this Application Profile. To do that I need to use an AKO CRD called HostRule. So I will go ahead and create a yaml manifest using the HostRule CRD like this:

apiVersion: ako.vmware.com/v1alpha1
kind: HostRule
metadata:
  name: keycloak-host-rule
  namespace: keycloak
spec:
  virtualhost:
    fqdn: keycloak.tmc.pretty-awesome-domain.net # mandatory
    fqdnType: Exact
    enableVirtualHost: true
    tls: # optional
      sslKeyCertificate:
        name: keycloak-ingress-secret
        type: secret
      termination: edge
    applicationProfile: keycloak-http

The TLS section is optional, but I have decided to keep it in regardless. The important piece is the applicationProfile field, where I enter the name of my newly created application profile from above. Save it and apply:

andreasm@linuxvm01:~/tmc-sm/keycloak$ k apply -f keycloak-hostrule.yaml
hostrule.ako.vmware.com/keycloak-host-rule created
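Besides checking the Avi UI, the HostRule can also be verified directly from the cluster. A quick sketch: AKO writes an Accepted/Rejected status on the object once it has processed the rule.

```shell
# list the HostRule and check its status column
kubectl get hostrule -n keycloak keycloak-host-rule

# if it shows Rejected, the status object usually contains the reason
kubectl get hostrule -n keycloak keycloak-host-rule -o jsonpath='{.status}'
```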

Now, has my application profile changed in my Keycloak Ingress?

keycloak-http

It has. So far so good. Will I be able to log in to Keycloak now?

keycloak-admin-ui

So it seems. Wow, cool. Now let's head over to the section where I configure Keycloak to support TMC local authentication.

Keycloak authentication settings for TMC local

One of the recommendations from Keycloak is to create a new realm. So when logged in, head over to the drop-down menu in the top left corner:

new-realm

Click Create Realm:

new-realm

realm-name

Give it a name and click Create. Then select the newly created realm in the top left corner's drop-down menu:

select-realm

tmc-sm-realm
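With the realm created, a quick sanity check is to query the realm's OIDC discovery endpoint, which Pinniped/TMC will rely on later. A hedged sketch, assuming my hostname and the realm name tmc-sm (and that jq is installed):

```shell
# the issuer value returned here is the exact issuerURL that goes into the TMC values file later
curl -sk https://keycloak.tmc.pretty-awesome-domain.net/realms/tmc-sm/.well-known/openid-configuration | jq .issuer
```

If this returns a valid JSON document with the issuer, token and authorization endpoints, the realm is reachable through the Ingress.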

The first thing I will create is a new Client. Click on Clients in the left menu and click on Create client:

new-client

Fill in the below information, according to your environment:

client-tmc-1

client-tmc-2

client-tmc-3

Click save at the bottom:

tmc-client-4

Later on we will need the Client ID and Client Secret; these can be found here:

client-tmc-secret

Next, head over to the Client scopes section on the left side and click Create client scope:

client-scope

Make the following selections as shown below:

groups-scopes

Click save.

Find the newly created Client scope called groups and click on its name. From there, click on the Mappers tab, click the blue Add mapper button and select From predefined mappers. In the list, select the groups mapper and add it.

mappers-group

mapper-user-realm-role

Head back to the Clients menu again and select your tmc-sm application. Click on the Client scopes tab, click Add client scope and select the groups scope. It will be the only one available in the list to select from. After it has been added, it should show up in the list below.

client-scope-add

Next, head over to the left menu and click Realm roles. In there, click Create role:

create role

Give it the name tmc:admin and save. Nothing more needs to be done with this role.

tmc:admin-role

Now head over to Users in the left menu and click Add user:

add-user

add-user

Here it is important to add an email address and select Email verified. Otherwise we will get an error status when trying to log in to TMC later. Click Create.

After the user has been created, select the Credentials tab and click Set password:

set-password

set-password-2

Set Temporary to OFF.

The final steps are to create a group, add my user to this group, and add the role mapping tmc:admin to the group:

group-1

group-2

group-3

Now Keycloak has been configured to work with TMC. The next step is to prepare the packages for TMC local.
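Before moving on, the client and user can be smoke-tested directly against the realm's token endpoint. This is a hedged sketch: it uses the Resource Owner Password flow, which only works if Direct access grants is enabled on the client, and the placeholders stand in for the Client ID, Client Secret and user credentials created above:

```shell
# request a token directly from Keycloak to verify client + user + group scope
curl -sk https://keycloak.tmc.pretty-awesome-domain.net/realms/tmc-sm/protocol/openid-connect/token \
  -d grant_type=password \
  -d client_id=<CLIENT_ID> \
  -d client_secret=<CLIENT_SECRET> \
  -d username=<USERNAME> \
  -d password=<PASSWORD> \
  -d scope='openid email groups' | jq .
```

A successful response contains an access_token and id_token; decoding the id_token payload should show the groups claim populated by the mapper configured earlier.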

Installing TMC local

The actual installation of TMC local involves a couple of steps. First the packages, the source files for the TMC application, need to be downloaded and uploaded to a registry. Then we need a defined values file and the CLI tools tanzu and tmc-sm.

Download and upload the TMC packages

To begin the actual installation of TMC local, we need to download the needed packages from my.vmware.com here:

download-source

Move the downloaded tmc-self-managed-1.0.0.tar file to your jumphost, where you also have access to a registry. Create a folder called sourcefiles, extract tmc-self-managed-1.0.0.tar into it with the commands below, and enter the directory where the files have been extracted. Inside this folder there is a CLI called tmc-sm that you will use to upload the images to your registry.

# create dir
andreasm@linuxvm01:~/tmc-sm$ mkdir sourcefiles
# extract the downloaded tmc tar file from my.vmware.com into the new dir
andreasm@linuxvm01:~/tmc-sm$ tar -xf tmc-self-managed-1.0.0.tar -C ./sourcefiles
# cd into the folder sourcefiles
andreasm@linuxvm01:~/tmc-sm$ cd sourcefiles
# upload the images to your registry
andreasm@linuxvm01:~/tmc-sm/sourcefiles$ ./tmc-sm push-images harbor --project registry.some-domain.net/project --username <USERNAME> --password <PASSWORD>
# if the password contains special characters, wrap it in single quotes: 'passw@rd'

Have a cup of coffee and wait for the images to be uploaded to the registry.
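To verify the push completed, the uploaded package-repository bundle can be inspected directly in the registry, for example with Carvel's imgpkg (assuming it is installed and authenticated against the registry):

```shell
# list the tags available for the uploaded package repository bundle;
# the 1.0.0 tag should be present after the push finishes
imgpkg tag list -i registry.some-domain.net/project/package-repository
```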

Add package repository using the tanzu cli

# create a new namespace for the tmc-local installation
andreasm@linuxvm01:~/tmc-sm/sourcefiles$ k create ns tmc-local
namespace/tmc-local created
# add the package repo for tmc-local
andreasm@linuxvm01:~/tmc-sm/sourcefiles$ tanzu package repository add tanzu-mission-control-packages --url "registry.some-domain.net/project/package-repository:1.0.0" --namespace tmc-local
Waiting for package repository to be added

7:22:48AM: Waiting for package repository reconciliation for 'tanzu-mission-control-packages'
7:22:48AM: Fetch started (5s ago)
7:22:53AM: Fetching
	    | apiVersion: vendir.k14s.io/v1alpha1
	    | directories:
	    | - contents:
	    |   - imgpkgBundle:
	    |       image: registry.some-domain.net/project/package-repository@sha256:3e19259be2der8d05a342d23dsd3f902c34ffvac4b3c4e61830e27cf0245159e
	    |       tag: 1.0.0
	    |     path: .
	    |   path: "0"
	    | kind: LockConfig
	    |
7:22:53AM: Fetch succeeded
7:22:54AM: Template succeeded
7:22:54AM: Deploy started (2s ago)
7:22:56AM: Deploying
	    | Target cluster 'https://20.10.0.1:443'
	    | Changes
	    | Namespace  Name                                                      Kind             Age  Op      Op st.  Wait to  Rs  Ri
	    | tmc-local  contour.bitnami.com                                       PackageMetadata  -    create  ???     -        -   -
	    | ^          contour.bitnami.com.12.1.0                                Package          -    create  ???     -        -   -
	    | ^          kafka-topic-controller.tmc.tanzu.vmware.com               PackageMetadata  -    create  ???     -        -   -
	    | ^          kafka-topic-controller.tmc.tanzu.vmware.com.0.0.21        Package          -    create  ???     -        -   -
	    | ^          kafka.bitnami.com                                         PackageMetadata  -    create  ???     -        -   -
	    | ^          kafka.bitnami.com.22.1.3                                  Package          -    create  ???     -        -   -
	    | ^          minio.bitnami.com                                         PackageMetadata  -    create  ???     -        -   -
	    | ^          minio.bitnami.com.12.6.4                                  Package          -    create  ???     -        -   -
	    | ^          monitoring.tmc.tanzu.vmware.com                           PackageMetadata  -    create  ???     -        -   -
	    | ^          monitoring.tmc.tanzu.vmware.com.0.0.13                    Package          -    create  ???     -        -   -
	    | ^          pinniped.bitnami.com                                      PackageMetadata  -    create  ???     -        -   -
	    | ^          pinniped.bitnami.com.1.2.1                                Package          -    create  ???     -        -   -
	    | ^          postgres-endpoint-controller.tmc.tanzu.vmware.com         PackageMetadata  -    create  ???     -        -   -
	    | ^          postgres-endpoint-controller.tmc.tanzu.vmware.com.0.1.43  Package          -    create  ???     -        -   -
	    | ^          s3-access-operator.tmc.tanzu.vmware.com                   PackageMetadata  -    create  ???     -        -   -
	    | ^          s3-access-operator.tmc.tanzu.vmware.com.0.1.22            Package          -    create  ???     -        -   -
	    | ^          tmc-local-postgres.tmc.tanzu.vmware.com                   PackageMetadata  -    create  ???     -        -   -
	    | ^          tmc-local-postgres.tmc.tanzu.vmware.com.0.0.46            Package          -    create  ???     -        -   -
	    | ^          tmc-local-stack-secrets.tmc.tanzu.vmware.com              PackageMetadata  -    create  ???     -        -   -
	    | ^          tmc-local-stack-secrets.tmc.tanzu.vmware.com.0.0.17161    Package          -    create  ???     -        -   -
	    | ^          tmc-local-stack.tmc.tanzu.vmware.com                      PackageMetadata  -    create  ???     -        -   -
	    | ^          tmc-local-stack.tmc.tanzu.vmware.com.0.0.17161            Package          -    create  ???     -        -   -
	    | ^          tmc-local-support.tmc.tanzu.vmware.com                    PackageMetadata  -    create  ???     -        -   -
	    | ^          tmc-local-support.tmc.tanzu.vmware.com.0.0.17161          Package          -    create  ???     -        -   -
	    | ^          tmc.tanzu.vmware.com                                      PackageMetadata  -    create  ???     -        -   -
	    | ^          tmc.tanzu.vmware.com.1.0.0                                Package          -    create  ???     -        -   -
	    | Op:      26 create, 0 delete, 0 update, 0 noop, 0 exists
	    | Wait to: 0 reconcile, 0 delete, 26 noop
	    | 7:22:55AM: ---- applying 26 changes [0/26 done] ----
	    | 7:22:55AM: create packagemetadata/postgres-endpoint-controller.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/postgres-endpoint-controller.tmc.tanzu.vmware.com.0.1.43 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/s3-access-operator.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/s3-access-operator.tmc.tanzu.vmware.com.0.1.22 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/tmc-local-postgres.tmc.tanzu.vmware.com.0.0.46 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/tmc-local-postgres.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/tmc-local-stack-secrets.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/tmc-local-stack-secrets.tmc.tanzu.vmware.com.0.0.17161 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/tmc-local-stack.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/tmc-local-stack.tmc.tanzu.vmware.com.0.0.17161 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/tmc-local-support.tmc.tanzu.vmware.com.0.0.17161 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/tmc-local-support.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/tmc.tanzu.vmware.com.1.0.0 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/contour.bitnami.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/kafka-topic-controller.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/contour.bitnami.com.12.1.0 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/monitoring.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/minio.bitnami.com.12.6.4 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/kafka.bitnami.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/kafka-topic-controller.tmc.tanzu.vmware.com.0.0.21 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/minio.bitnami.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create packagemetadata/pinniped.bitnami.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/monitoring.tmc.tanzu.vmware.com.0.0.13 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:55AM: create package/pinniped.bitnami.com.1.2.1 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: create package/kafka.bitnami.com.22.1.3 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ---- waiting on 26 changes [0/26 done] ----
	    | 7:22:56AM: ok: noop package/kafka.bitnami.com.22.1.3 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/tmc-local-support.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/kafka.bitnami.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/contour.bitnami.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/kafka-topic-controller.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/contour.bitnami.com.12.1.0 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/monitoring.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/minio.bitnami.com.12.6.4 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/tmc-local-postgres.tmc.tanzu.vmware.com.0.0.46 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/postgres-endpoint-controller.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/postgres-endpoint-controller.tmc.tanzu.vmware.com.0.1.43 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/s3-access-operator.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/s3-access-operator.tmc.tanzu.vmware.com.0.1.22 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/pinniped.bitnami.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/kafka-topic-controller.tmc.tanzu.vmware.com.0.0.21 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/minio.bitnami.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/tmc-local-stack-secrets.tmc.tanzu.vmware.com.0.0.17161 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/tmc-local-postgres.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/tmc-local-stack-secrets.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/tmc-local-stack.tmc.tanzu.vmware.com.0.0.17161 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop packagemetadata/tmc-local-stack.tmc.tanzu.vmware.com (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/tmc-local-support.tmc.tanzu.vmware.com.0.0.17161 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/monitoring.tmc.tanzu.vmware.com.0.0.13 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/pinniped.bitnami.com.1.2.1 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ok: noop package/tmc.tanzu.vmware.com.1.0.0 (data.packaging.carvel.dev/v1alpha1) namespace: tmc-local
	    | 7:22:56AM: ---- applying complete [26/26 done] ----
	    | 7:22:56AM: ---- waiting complete [26/26 done] ----
	    | Succeeded
7:22:56AM: Deploy succeeded

Check the status of the package repository added:

andreasm@linuxvm01:~/tmc-sm$ k get packagerepositories.packaging.carvel.dev -n tmc-local
NAME                             AGE   DESCRIPTION
tanzu-mission-control-packages   31s   Reconcile succeeded
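Once the repository reports Reconcile succeeded, the packages it carries can be listed with the tanzu CLI. This is also a handy way to confirm the exact version string used in the install command later:

```shell
# list all packages now available in the tmc-local namespace
tanzu package available list -n tmc-local

# show the available versions for the main TMC package
tanzu package available list tmc.tanzu.vmware.com -n tmc-local
```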

Install the TMC-SM package

Before one can execute the package installation, there is a values yaml file that needs to be created and edited according to your environment. So I will start with that. Create a file called something like tmc-values.yaml and open it with your favourite editor. Below is the content I am using, reflecting the settings in my environment:

harborProject: registry.some-domain.net/project # I am using Harbor registry, pointing it to my url/project
dnsZone: tmc.pretty-awesome-domain.net # my tmc DNS zone
clusterIssuer: ca-issuer # the clusterissuer created earlier
postgres:
  userPassword: password # my own password
  maxConnections: 300
minio:
  username: root
  password: password # my own password
contourEnvoy:
  serviceType: LoadBalancer
#  serviceAnnotations: # needed only when specifying load balancer controller specific config like preferred IP
#    ako.vmware.com/load-balancer-ip: "10.12.2.17"
  # when using an auto-assigned IP instead of a preferred IP, use the following key instead of the serviceAnnotations above
  loadBalancerClass: ako.vmware.com/avi-lb # I am using this class as I want NSX ALB to provide the L4 IP for the Contour Ingress being deployed.
oidc:
  issuerType: pinniped
  issuerURL: https://keycloak.tmc.pretty-awesome-domain.net/realms/tmc-sm # url for my keycloak instance and realm tmc-sm
  clientID: tmc-sm-application # ID of the client created in keycloak earlier
  clientSecret: bcwefg3rgrg444ffHH44HHtTTQTnYN # the secret for the client
trustedCAs:
  local-ca.pem: | # this is rootCA.crt, created under ClusterIssuer using openssl
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----

When the values yaml file has been edited, it's time to kick off the installation of TMC-SM.
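A couple of quick pre-flight checks can save a failed install attempt: verify that the ClusterIssuer referenced in the values file exists and is ready, and that the file parses as valid yaml. A hedged sketch (the yaml check assumes python3 with PyYAML is available on the jumphost):

```shell
# the clusterIssuer referenced in the values file must exist and show READY=True
kubectl get clusterissuer ca-issuer

# quick yaml syntax check of the values file
python3 -c 'import yaml; yaml.safe_load(open("tmc-values.yaml"))' && echo "values file OK"
```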

Execute the following command:

andreasm@linuxvm01:~/tmc-sm$ tanzu package install tanzu-mission-control -p tmc.tanzu.vmware.com --version "1.0.0" --values-file tmc-values.yaml --namespace tmc-local

Then you will get a long list of output:

7:38:02AM: Creating service account 'tanzu-mission-control-tmc-local-sa'
7:38:02AM: Creating cluster admin role 'tanzu-mission-control-tmc-local-cluster-role'
7:38:02AM: Creating cluster role binding 'tanzu-mission-control-tmc-local-cluster-rolebinding'
7:38:02AM: Creating secret 'tanzu-mission-control-tmc-local-values'
7:38:02AM: Creating overlay secrets
7:38:02AM: Creating package install resource
7:38:02AM: Waiting for PackageInstall reconciliation for 'tanzu-mission-control'
7:38:03AM: Fetch started (4s ago)
7:38:07AM: Fetching
	    | apiVersion: vendir.k14s.io/v1alpha1
	    | directories:
	    | - contents:
	    |   - imgpkgBundle:
	    |       image: registry.some-domain.net/project/package-repository@sha256:30ca40e2d5bb63ab5b3ace796c87b5358e85b8fe129d4d145d1bac5633a81cca
	    |     path: .
	    |   path: "0"
	    | kind: LockConfig
	    |
7:38:07AM: Fetch succeeded
7:38:07AM: Template succeeded
7:38:07AM: Deploy started (2s ago)
7:38:09AM: Deploying
	    | Target cluster 'https://20.10.0.1:443' (nodes: tmc-sm-cluster-node-pool-3-ctgxg-5f76bd48d8-hzh7h, 4+)
	    | Changes
	    | Namespace  Name                                       Kind                Age  Op      Op st.  Wait to    Rs  Ri
	    | (cluster)  tmc-install-cluster-admin-role             ClusterRole         -    create  -       reconcile  -   -
	    | ^          tmc-install-cluster-admin-role-binding     ClusterRoleBinding  -    create  -       reconcile  -   -
	    | tmc-local  contour                                    PackageInstall      -    create  -       reconcile  -   -
	    | ^          contour-values-ver-1                       Secret              -    create  -       reconcile  -   -
	    | ^          kafka                                      PackageInstall      -    create  -       reconcile  -   -
	    | ^          kafka-topic-controller                     PackageInstall      -    create  -       reconcile  -   -
	    | ^          kafka-topic-controller-values-ver-1        Secret              -    create  -       reconcile  -   -
	    | ^          kafka-values-ver-1                         Secret              -    create  -       reconcile  -   -
	    | ^          minio                                      PackageInstall      -    create  -       reconcile  -   -
	    | ^          minio-values-ver-1                         Secret              -    create  -       reconcile  -   -
	    | ^          monitoring-values-ver-1                    Secret              -    create  -       reconcile  -   -
	    | ^          pinniped                                   PackageInstall      -    create  -       reconcile  -   -
	    | ^          pinniped-values-ver-1                      Secret              -    create  -       reconcile  -   -
	    | ^          postgres                                   PackageInstall      -    create  -       reconcile  -   -
	    | ^          postgres-endpoint-controller               PackageInstall      -    create  -       reconcile  -   -
	    | ^          postgres-endpoint-controller-values-ver-1  Secret              -    create  -       reconcile  -   -
	    | ^          postgres-values-ver-1                      Secret              -    create  -       reconcile  -   -
	    | ^          s3-access-operator                         PackageInstall      -    create  -       reconcile  -   -
	    | ^          s3-access-operator-values-ver-1            Secret              -    create  -       reconcile  -   -
	    | ^          tmc-install-sa                             ServiceAccount      -    create  -       reconcile  -   -
	    | ^          tmc-local-monitoring                       PackageInstall      -    create  -       reconcile  -   -
	    | ^          tmc-local-stack                            PackageInstall      -    create  -       reconcile  -   -
	    | ^          tmc-local-stack-secrets                    PackageInstall      -    create  -       reconcile  -   -
	    | ^          tmc-local-stack-values-ver-1               Secret              -    create  -       reconcile  -   -
	    | ^          tmc-local-support                          PackageInstall      -    create  -       reconcile  -   -
	    | ^          tmc-local-support-values-ver-1             Secret              -    create  -       reconcile  -   -
	    | Op:      26 create, 0 delete, 0 update, 0 noop, 0 exists
	    | Wait to: 26 reconcile, 0 delete, 0 noop
	    | 7:38:07AM: ---- applying 13 changes [0/26 done] ----
	    | 7:38:08AM: create secret/pinniped-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/minio-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create serviceaccount/tmc-install-sa (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/kafka-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/contour-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/kafka-topic-controller-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/s3-access-operator-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/monitoring-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/postgres-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/postgres-endpoint-controller-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/tmc-local-support-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create secret/tmc-local-stack-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: create clusterrole/tmc-install-cluster-admin-role (rbac.authorization.k8s.io/v1) cluster
	    | 7:38:08AM: ---- waiting on 13 changes [0/26 done] ----
	    | 7:38:08AM: ok: reconcile serviceaccount/tmc-install-sa (v1) namespace: tmc-local
	    | 7:38:08AM: ok: reconcile secret/pinniped-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: ok: reconcile clusterrole/tmc-install-cluster-admin-role (rbac.authorization.k8s.io/v1) cluster
	    | 7:38:08AM: ok: reconcile secret/contour-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: ok: reconcile secret/kafka-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: ok: reconcile secret/minio-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: ok: reconcile secret/kafka-topic-controller-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: ok: reconcile secret/s3-access-operator-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: ok: reconcile secret/monitoring-values-ver-1 (v1) namespace: tmc-local
	    | 7:38:08AM: ok: reconcile secret/postgres-values-ver-1 (v1) namespace: tmc-local
 79	    | 7:38:08AM: ok: reconcile secret/tmc-local-support-values-ver-1 (v1) namespace: tmc-local
 80	    | 7:38:08AM: ok: reconcile secret/tmc-local-stack-values-ver-1 (v1) namespace: tmc-local
 81	    | 7:38:08AM: ok: reconcile secret/postgres-endpoint-controller-values-ver-1 (v1) namespace: tmc-local
 82	    | 7:38:08AM: ---- applying 1 changes [13/26 done] ----
 83	    | 7:38:08AM: create clusterrolebinding/tmc-install-cluster-admin-role-binding (rbac.authorization.k8s.io/v1) cluster
 84	    | 7:38:08AM: ---- waiting on 1 changes [13/26 done] ----
 85	    | 7:38:08AM: ok: reconcile clusterrolebinding/tmc-install-cluster-admin-role-binding (rbac.authorization.k8s.io/v1) cluster
 86	    | 7:38:08AM: ---- applying 2 changes [14/26 done] ----
 87	    | 7:38:08AM: create packageinstall/contour (packaging.carvel.dev/v1alpha1) namespace: tmc-local
 88	    | 7:38:08AM: create packageinstall/tmc-local-stack-secrets (packaging.carvel.dev/v1alpha1) namespace: tmc-local
 89	    | 7:38:08AM: ---- waiting on 2 changes [14/26 done] ----
 90	    | 7:38:08AM: ongoing: reconcile packageinstall/tmc-local-stack-secrets (packaging.carvel.dev/v1alpha1) namespace: tmc-local
 91	    | 7:38:08AM:  ^ Waiting for generation 1 to be observed
 92	    | 7:38:08AM: ongoing: reconcile packageinstall/contour (packaging.carvel.dev/v1alpha1) namespace: tmc-local
 93	    | 7:38:08AM:  ^ Waiting for generation 1 to be observed
 94	    | 7:38:09AM: ongoing: reconcile packageinstall/contour (packaging.carvel.dev/v1alpha1) namespace: tmc-local
 95	    | 7:38:09AM:  ^ Reconciling
 96	    | 7:38:09AM: ongoing: reconcile packageinstall/tmc-local-stack-secrets (packaging.carvel.dev/v1alpha1) namespace: tmc-local
 97	    | 7:38:09AM:  ^ Reconciling
 98	    | 7:38:14AM: ok: reconcile packageinstall/tmc-local-stack-secrets (packaging.carvel.dev/v1alpha1) namespace: tmc-local
 99	    | 7:38:14AM: ---- waiting on 1 changes [15/26 done] ----
100	    | 7:38:43AM: ok: reconcile packageinstall/contour (packaging.carvel.dev/v1alpha1) namespace: tmc-local
101	    | 7:38:43AM: ---- applying 2 changes [16/26 done] ----
102	    | 7:38:43AM: create packageinstall/tmc-local-support (packaging.carvel.dev/v1alpha1) namespace: tmc-local
103	    | 7:38:43AM: create packageinstall/pinniped (packaging.carvel.dev/v1alpha1) namespace: tmc-local
104	    | 7:38:43AM: ---- waiting on 2 changes [16/26 done] ----
105	    | 7:38:43AM: ongoing: reconcile packageinstall/pinniped (packaging.carvel.dev/v1alpha1) namespace: tmc-local
106	    | 7:38:43AM:  ^ Waiting for generation 1 to be observed
107	    | 7:38:43AM: ongoing: reconcile packageinstall/tmc-local-support (packaging.carvel.dev/v1alpha1) namespace: tmc-local
108	    | 7:38:43AM:  ^ Waiting for generation 1 to be observed
109	    | 7:38:44AM: ongoing: reconcile packageinstall/pinniped (packaging.carvel.dev/v1alpha1) namespace: tmc-local
110	    | 7:38:44AM:  ^ Reconciling
111	    | 7:38:44AM: ongoing: reconcile packageinstall/tmc-local-support (packaging.carvel.dev/v1alpha1) namespace: tmc-local
112	    | 7:38:44AM:  ^ Reconciling
113	    | 7:38:51AM: ok: reconcile packageinstall/tmc-local-support (packaging.carvel.dev/v1alpha1) namespace: tmc-local
114	    | 7:38:51AM: ---- applying 4 changes [18/26 done] ----
115	    | 7:38:51AM: create packageinstall/kafka-topic-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
116	    | 7:38:51AM: create packageinstall/kafka (packaging.carvel.dev/v1alpha1) namespace: tmc-local
117	    | 7:38:51AM: create packageinstall/postgres (packaging.carvel.dev/v1alpha1) namespace: tmc-local
118	    | 7:38:51AM: create packageinstall/minio (packaging.carvel.dev/v1alpha1) namespace: tmc-local
119	    | 7:38:51AM: ---- waiting on 5 changes [17/26 done] ----
120	    | 7:38:51AM: ongoing: reconcile packageinstall/postgres (packaging.carvel.dev/v1alpha1) namespace: tmc-local
121	    | 7:38:51AM:  ^ Waiting for generation 1 to be observed
122	    | 7:38:51AM: ongoing: reconcile packageinstall/minio (packaging.carvel.dev/v1alpha1) namespace: tmc-local
123	    | 7:38:51AM:  ^ Waiting for generation 1 to be observed
124	    | 7:38:51AM: ongoing: reconcile packageinstall/kafka (packaging.carvel.dev/v1alpha1) namespace: tmc-local
125	    | 7:38:51AM:  ^ Waiting for generation 1 to be observed
126	    | 7:38:51AM: ongoing: reconcile packageinstall/kafka-topic-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
127	    | 7:38:51AM:  ^ Waiting for generation 1 to be observed
128	    | 7:38:52AM: ongoing: reconcile packageinstall/postgres (packaging.carvel.dev/v1alpha1) namespace: tmc-local
129	    | 7:38:52AM:  ^ Reconciling
130	    | 7:38:52AM: ongoing: reconcile packageinstall/minio (packaging.carvel.dev/v1alpha1) namespace: tmc-local
131	    | 7:38:52AM:  ^ Reconciling
132	    | 7:38:52AM: ongoing: reconcile packageinstall/kafka-topic-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
133	    | 7:38:52AM:  ^ Reconciling
134	    | 7:38:52AM: ongoing: reconcile packageinstall/kafka (packaging.carvel.dev/v1alpha1) namespace: tmc-local
135	    | 7:38:52AM:  ^ Reconciling

You can monitor the progress using this command:

 1andreasm@linuxvm01:~$ k get pods -n tmc-local -w
 2NAME                                            READY   STATUS              RESTARTS      AGE
 3contour-contour-67b48bff88-fqvwk                1/1     Running             0             107s
 4contour-contour-certgen-kt6hk                   0/1     Completed           0             108s
 5contour-envoy-9r4nm                             2/2     Running             0             107s
 6contour-envoy-gzkdf                             2/2     Running             0             107s
 7contour-envoy-hr8lj                             2/2     Running             0             108s
 8contour-envoy-m95qh                             2/2     Running             0             107s
 9kafka-0                                         0/1     ContainerCreating   0             66s
10kafka-exporter-6b4c74b596-k4crf                 0/1     CrashLoopBackOff    3 (18s ago)   66s
11kafka-topic-controller-7bc498856b-sj5jw         1/1     Running             0             66s
12minio-7dbcffd86-w4rv9                           1/1     Running             0             54s
13minio-provisioning-tsb6q                        0/1     Completed           0             54s
14pinniped-supervisor-55c575555-shzjh             1/1     Running             0             74s
15postgres-endpoint-controller-5c784cd44d-gfg55   1/1     Running             0             23s
16postgres-postgresql-0                           2/2     Running             0             57s
17s3-access-operator-68b6485c9b-jdbww             0/1     ContainerCreating   0             15s
18s3-access-operator-68b6485c9b-jdbww             1/1     Running             0             16s
19kafka-0                                         0/1     Running             0             72s

There will be stages where several of the pods enter CrashLoopBackOff, Error, and similar states. Just give it time. If the package reconciliation fails, that is the time to do some troubleshooting, and the culprit is most likely DNS, certificates, or the OIDC configuration. Check the progress of the package reconciliation:

andreasm@linuxvm01:~$ k get pkgi -n tmc-local
NAME                           PACKAGE NAME                                        PACKAGE VERSION   DESCRIPTION           AGE
contour                        contour.bitnami.com                                 12.1.0            Reconcile succeeded   7m20s
kafka                          kafka.bitnami.com                                   22.1.3            Reconcile succeeded   6m37s
kafka-topic-controller         kafka-topic-controller.tmc.tanzu.vmware.com         0.0.21            Reconcile succeeded   6m37s
minio                          minio.bitnami.com                                   12.6.4            Reconcile succeeded   6m37s
pinniped                       pinniped.bitnami.com                                1.2.1             Reconcile succeeded   6m45s
postgres                       tmc-local-postgres.tmc.tanzu.vmware.com             0.0.46            Reconcile succeeded   6m37s
postgres-endpoint-controller   postgres-endpoint-controller.tmc.tanzu.vmware.com   0.1.43            Reconcile succeeded   5m58s
s3-access-operator             s3-access-operator.tmc.tanzu.vmware.com             0.1.22            Reconcile succeeded   5m46s
tanzu-mission-control          tmc.tanzu.vmware.com                                1.0.0             Reconciling           7m26s
tmc-local-stack                tmc-local-stack.tmc.tanzu.vmware.com                0.0.17161         Reconciling           5m5s
tmc-local-stack-secrets        tmc-local-stack-secrets.tmc.tanzu.vmware.com        0.0.17161         Reconcile succeeded   7m20s
tmc-local-support              tmc-local-support.tmc.tanzu.vmware.com              0.0.17161         Reconcile succeeded   6m45s
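To quickly spot which packages are still reconciling or have failed, the listing can be filtered. This is a generic shell sketch of my own, not a TMC tool; the embedded sample stands in for live output so it runs offline, but in a cluster you would pipe `kubectl get pkgi -n tmc-local --no-headers` into the function instead:

```shell
# Print only the packages that have not yet reached "Reconcile succeeded".
not_succeeded() {
  grep -v 'Reconcile succeeded'
}

# Sample data taken from the listing above; replace with:
#   kubectl get pkgi -n tmc-local --no-headers | not_succeeded
not_succeeded <<'EOF'
contour                 contour.bitnami.com                    12.1.0      Reconcile succeeded   7m20s
tanzu-mission-control   tmc.tanzu.vmware.com                   1.0.0       Reconciling           7m26s
tmc-local-stack         tmc-local-stack.tmc.tanzu.vmware.com   0.0.17161   Reconciling           5m5s
EOF
```

With the sample above, only the two Reconciling packages are printed.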

In the meantime, also check that the required DNS records, such as tmc.pretty-awesome-domain.net and pinniped-supervisor.tmc.pretty-awesome-domain.net, can be resolved:

andreasm@linuxvm01:~$ ping pinniped-supervisor.tmc.pretty-awesome-domain.net

If I get this error:

ping: pinniped-supervisor.tmc.pretty-awesome-domain.net: Temporary failure in name resolution

I need to troubleshoot my DNS zone.

If I get this:

andreasm@linuxvm01:~$ ping tmc.pretty-awesome-domain.net
PING tmc.pretty-awesome-domain.net (10.101.210.12) 56(84) bytes of data.
64 bytes from 10.101.210.12 (10.101.210.12): icmp_seq=13 ttl=61 time=7.31 ms
64 bytes from 10.101.210.12 (10.101.210.12): icmp_seq=14 ttl=61 time=6.47 ms
andreasm@linuxvm01:~$ ping pinniped-supervisor.tmc.pretty-awesome-domain.net
PING pinniped-supervisor.tmc.pretty-awesome-domain.net (10.101.210.12) 56(84) bytes of data.
64 bytes from 10.101.210.12 (10.101.210.12): icmp_seq=1 ttl=61 time=3.81 ms
64 bytes from 10.101.210.12 (10.101.210.12): icmp_seq=2 ttl=61 time=9.28 ms

I am good πŸ˜„
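Instead of pinging the records one by one, the lookups can be scripted. A minimal sketch, assuming a Linux host with `getent` available (the hostnames are the example domain from above; substitute your own):

```shell
# Report OK/FAIL for each required DNS record passed as an argument.
check_dns() {
  for host in "$@"; do
    if getent hosts "$host" > /dev/null 2>&1; then
      echo "OK:   $host"
    else
      echo "FAIL: $host"
    fi
  done
}

check_dns tmc.pretty-awesome-domain.net pinniped-supervisor.tmc.pretty-awesome-domain.net
```

Any FAIL line means the DNS zone needs attention before the installer can finish.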

After waiting a while, the package installation process finishes, either 100% successfully or with errors. In my environment it failed at step 25/26, on the tmc-local-monitoring package. This turned out to be Alertmanager. I have a section below that explains how this can be solved.

Here is the pod that is failing:

andreasm@linuxvm01:~$ k get pods -n tmc-local
NAME                                                 READY   STATUS             RESTARTS      AGE
account-manager-server-84b4758ccd-5zx7n              1/1     Running            0             14m
account-manager-server-84b4758ccd-zfqlj              1/1     Running            0             14m
agent-gateway-server-bf4f6c67-mvq2m                  1/1     Running            1 (14m ago)   14m
agent-gateway-server-bf4f6c67-zlj9d                  1/1     Running            1 (14m ago)   14m
alertmanager-tmc-local-monitoring-tmc-local-0        1/2     CrashLoopBackOff   7 (46s ago)   12m
api-gateway-server-679b8478f9-57ss5                  1/1     Running            1 (14m ago)   14m
api-gateway-server-679b8478f9-t6j9s                  1/1     Running            1 (14m ago)   14m
audit-service-consumer-7bbdd4f55f-bjc5x              1/1     Running            0             14m

But it's not bad considering all the services and pods being deployed by TMC: one failed out of many:

andreasm@linuxvm01:~$ k get pods -n tmc-local
NAME                                                 READY   STATUS             RESTARTS      AGE
account-manager-server-84b4758ccd-5zx7n              1/1     Running            0             14m
account-manager-server-84b4758ccd-zfqlj              1/1     Running            0             14m
agent-gateway-server-bf4f6c67-mvq2m                  1/1     Running            1 (14m ago)   14m
agent-gateway-server-bf4f6c67-zlj9d                  1/1     Running            1 (14m ago)   14m
alertmanager-tmc-local-monitoring-tmc-local-0        1/2     CrashLoopBackOff   7 (46s ago)   12m
api-gateway-server-679b8478f9-57ss5                  1/1     Running            1 (14m ago)   14m
api-gateway-server-679b8478f9-t6j9s                  1/1     Running            1 (14m ago)   14m
audit-service-consumer-7bbdd4f55f-bjc5x              1/1     Running            0             14m
audit-service-consumer-7bbdd4f55f-h6h8c              1/1     Running            0             14m
audit-service-server-898c98dc5-97s8l                 1/1     Running            0             14m
audit-service-server-898c98dc5-qvc9k                 1/1     Running            0             14m
auth-manager-server-79d7567986-7699w                 1/1     Running            0             14m
auth-manager-server-79d7567986-bbrg8                 1/1     Running            0             14m
auth-manager-server-79d7567986-tbdww                 1/1     Running            0             14m
authentication-server-695fd77f46-8p67m               1/1     Running            0             14m
authentication-server-695fd77f46-ttd4l               1/1     Running            0             14m
cluster-agent-service-server-599cf966f4-4ndkl        1/1     Running            0             14m
cluster-agent-service-server-599cf966f4-h4g9l        1/1     Running            0             14m
cluster-config-server-7c5f5f8dc6-99prt               1/1     Running            1 (13m ago)   14m
cluster-config-server-7c5f5f8dc6-z4rvg               1/1     Running            0             14m
cluster-object-service-server-7bc8f7c45c-fw97r       1/1     Running            0             14m
cluster-object-service-server-7bc8f7c45c-k8bwc       1/1     Running            0             14m
cluster-reaper-server-5f94f8dd6b-k2pxd               1/1     Running            0             14m
cluster-secret-server-9fc44564f-g5lv5                1/1     Running            1 (14m ago)   14m
cluster-secret-server-9fc44564f-vnbck                1/1     Running            0             14m
cluster-service-server-6f7c657d7-ls9t7               1/1     Running            0             14m
cluster-service-server-6f7c657d7-xvz7z               1/1     Running            0             14m
cluster-sync-egest-f96d9b6bb-947c2                   1/1     Running            0             14m
cluster-sync-egest-f96d9b6bb-q22sg                   1/1     Running            0             14m
cluster-sync-ingest-798c88467d-c2pgj                 1/1     Running            0             14m
cluster-sync-ingest-798c88467d-pc2z7                 1/1     Running            0             14m
contour-contour-certgen-gdnns                        0/1     Completed          0             17m
contour-contour-ffddc764f-k25pb                      1/1     Running            0             17m
contour-envoy-4ptk4                                  2/2     Running            0             17m
contour-envoy-66v8r                                  2/2     Running            0             17m
contour-envoy-6shc8                                  2/2     Running            0             17m
contour-envoy-br4nk                                  2/2     Running            0             17m
dataprotection-server-58c6c9bd8d-dplbs               1/1     Running            0             14m
dataprotection-server-58c6c9bd8d-hp2nz               1/1     Running            0             14m
events-service-consumer-76bd756879-49bpb             1/1     Running            0             14m
events-service-consumer-76bd756879-jnlkw             1/1     Running            0             14m
events-service-server-694648bcc8-rjg27               1/1     Running            0             14m
events-service-server-694648bcc8-trtm2               1/1     Running            0             14m
fanout-service-server-7c6d9559b7-g7mvg               1/1     Running            0             14m
fanout-service-server-7c6d9559b7-nhcjc               1/1     Running            0             14m
feature-flag-service-server-855756576c-zltgh         1/1     Running            0             14m
inspection-server-695b778b48-29s8q                   2/2     Running            0             14m
inspection-server-695b778b48-7hzf4                   2/2     Running            0             14m
intent-server-566dd98b76-dhcrx                       1/1     Running            0             14m
intent-server-566dd98b76-pjdpb                       1/1     Running            0             14m
kafka-0                                              1/1     Running            0             16m
kafka-exporter-745d578567-5vhgq                      1/1     Running            4 (15m ago)   16m
kafka-topic-controller-5cf4d8c559-lxpcb              1/1     Running            0             15m
landing-service-server-7ddd9774f-szx8v               1/1     Running            0             14m
minio-764b688f5f-p7lrx                               1/1     Running            0             16m
minio-provisioning-5vsqs                             0/1     Completed          1             16m
onboarding-service-server-5ff888758f-bnzp5           1/1     Running            0             14m
onboarding-service-server-5ff888758f-fq9dg           1/1     Running            0             14m
package-deployment-server-79dd4b896d-9rv8z           1/1     Running            0             14m
package-deployment-server-79dd4b896d-txq2x           1/1     Running            0             14m
pinniped-supervisor-677578c495-jqbq4                 1/1     Running            0             16m
policy-engine-server-6bcbddf747-jks25                1/1     Running            0             14m
policy-engine-server-6bcbddf747-vhxlm                1/1     Running            0             14m
policy-insights-server-6878c9c8f-64ggn               1/1     Running            0             14m
policy-sync-service-server-7699f47d65-scl5f          1/1     Running            0             14m
policy-view-service-server-86bb698454-bvclh          1/1     Running            0             14m
policy-view-service-server-86bb698454-zpkg9          1/1     Running            0             14m
postgres-endpoint-controller-9d4fc9489-kgdf4         1/1     Running            0             15m
postgres-postgresql-0                                2/2     Running            0             16m
prometheus-server-tmc-local-monitoring-tmc-local-0   2/2     Running            0             12m
provisioner-service-server-84c4f9dc8f-khv2b          1/1     Running            0             14m
provisioner-service-server-84c4f9dc8f-xl6gr          1/1     Running            0             14m
resource-manager-server-8567f7cbbc-pl2fz             1/1     Running            0             14m
resource-manager-server-8567f7cbbc-pqkxp             1/1     Running            0             14m
s3-access-operator-7f4d77647b-xnnb2                  1/1     Running            0             15m
schema-service-schema-server-85cb7c7796-prjq7        1/1     Running            0             14m
telemetry-event-service-consumer-7d6f8cc4b7-ffjcd    1/1     Running            0             14m
telemetry-event-service-consumer-7d6f8cc4b7-thf44    1/1     Running            0             14m
tenancy-service-server-57898676cd-9lpjl              1/1     Running            0             14m
ui-server-6994bc9cd6-gtm6r                           1/1     Running            0             14m
ui-server-6994bc9cd6-xzxbv                           1/1     Running            0             14m
wcm-server-5c95c8d587-7sc9l                          1/1     Running            1 (13m ago)   14m
wcm-server-5c95c8d587-r2kbf                          1/1     Running            1 (12m ago)   14m
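With this many pods, a small filter that prints only the unhealthy ones saves a lot of scrolling. A sketch of my own that works on standard `kubectl get pods` output, deliberately skipping Completed job pods:

```shell
# Print pods that are not fully up: STATUS other than Running, or READY x/y with x != y.
# Completed pods (finished jobs) are skipped on purpose.
unhealthy_pods() {
  awk 'NR > 1 && $3 != "Completed" {
    split($2, ready, "/")
    if ($3 != "Running" || ready[1] != ready[2]) print $1, $2, $3
  }'
}

# Only query the cluster if kubectl is actually available.
if command -v kubectl > /dev/null; then
  kubectl get pods -n tmc-local | unhealthy_pods
fi
```

Run against the listing above, this would print only the alertmanager pod.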

Troubleshooting the Alertmanager pod

If your package installation stops at 25/26, the alertmanager pod will be sitting in a CrashLoopBackOff state.


If you check the logs of the alertmanager container, it throws this error:

k logs -n tmc-local alertmanager-tmc-local-monitoring-tmc-local-0 -c alertmanager
ts=2023-07-13T14:16:30.239Z caller=main.go:231 level=info msg="Starting Alertmanager" version="(version=0.24.0, branch=HEAD, revision=f484b17fa3c583ed1b2c8bbcec20ba1db2aa5f11)"
ts=2023-07-13T14:16:30.239Z caller=main.go:232 level=info build_context="(go=go1.17.8, user=root@265f14f5c6fc, date=20220325-09:31:33)"
ts=2023-07-13T14:16:30.240Z caller=cluster.go:178 level=warn component=cluster err="couldn't deduce an advertise address: no private IP found, explicit advertise addr not provided"
ts=2023-07-13T14:16:30.241Z caller=main.go:263 level=error msg="unable to initialize gossip mesh" err="create memberlist: Failed to get final advertise address: No private IP address found, and explicit IP not provided"

Alertmanager cannot deduce an address to advertise for its gossip mesh, so it needs one set explicitly. After some searching around, the workaround is to add the values below to the StatefulSet (see the "added from here"/"To here" comments in the snippet):

    spec:
      containers:
      - args:
        - --volume-dir=/etc/alertmanager
        - --webhook-url=http://127.0.0.1:9093/-/reload
        image: registry.domain.net/project/package-repository@sha256:9125ebac75af1eb247de0982ce6d56bc7049a1f384f97c77a7af28de010f20a7
        imagePullPolicy: IfNotPresent
        name: configmap-reloader
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/alertmanager/config
          name: config-volume
          readOnly: true
      - args:
        - --config.file=/etc/alertmanager/config/alertmanager.yaml
        - --cluster.advertise-address=$(POD_IP):9093 # added from here
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP # To here

But setting this directly on the StatefulSet will just be overwritten by the package reconciliation.

So we need to apply this config using a ytt overlay. Create a new YAML file, call it something like alertmanager-overlay.yaml. Below is my ytt config to achieve this:

apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-overlay-secret
  namespace: tmc-local
stringData:
  patch.yaml: |
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind":"StatefulSet", "metadata": {"name": "alertmanager-tmc-local-monitoring-tmc-local"}})
    ---
    spec:
      template:
        spec:
          containers: #@overlay/replace
          - args:
            - --volume-dir=/etc/alertmanager
            - --webhook-url=http://127.0.0.1:9093/-/reload
            image: registry.domain.net/project/package-repository@sha256:9125ebac75af1eb247de0982ce6d56bc7049a1f384f97c77a7af28de010f20a7
            imagePullPolicy: IfNotPresent
            name: configmap-reloader
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /etc/alertmanager/config
              name: config-volume
              readOnly: true
          - args:
            - --config.file=/etc/alertmanager/config/alertmanager.yaml
            - --cluster.advertise-address=$(POD_IP):9093
            env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            image: registry.domain.net/project/package-repository@sha256:74d46d5614791496104479bbf81c041515c5f8c17d9e9fcf1b33fa36e677156f
            imagePullPolicy: IfNotPresent
            name: alertmanager
            ports:
            - containerPort: 9093
              name: alertmanager
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /#/status
                port: 9093
                scheme: HTTP
              initialDelaySeconds: 30
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 30
            resources:
              limits:
                cpu: 300m
                memory: 100Mi
              requests:
                cpu: 100m
                memory: 70Mi
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /etc/alertmanager/config
              name: config-volume
              readOnly: true
            - mountPath: /data
              name: data

---
apiVersion: v1
kind: Secret
metadata:
  name: tmc-overlay-override
  namespace: tmc-local
stringData:
  patch-alertmanager.yaml: |
    #@ load("@ytt:overlay", "overlay")
    #@overlay/match by=overlay.subset({"kind":"PackageInstall", "metadata": {"name": "tmc-local-monitoring"}})
    ---
    metadata:
      annotations:
        #@overlay/match missing_ok=True
        ext.packaging.carvel.dev/ytt-paths-from-secret-name.0: alertmanager-overlay-secret

This was the only way I managed to get the config applied correctly. It can probably be done differently, but it works.

Apply the above yaml:

andreasm@linuxvm01:~/tmc-sm/errors$ k apply -f alertmanager-overlay.yaml
secret/alertmanager-overlay-secret configured
secret/tmc-overlay-override configured

Then I need to annotate the package:

kubectl annotate packageinstalls tanzu-mission-control -n tmc-local ext.packaging.carvel.dev/ytt-paths-from-secret-name.0=tmc-overlay-override

Then pause and unpause the reconciliation (if it is already in a reconciliation state, it's not always necessary), but to kick it off immediately, run the commands below:

andreasm@linuxvm01:~/tmc-sm/errors$ kubectl patch -n tmc-local --type merge pkgi tmc-local-monitoring --patch '{"spec": {"paused": true}}'
andreasm@linuxvm01:~/tmc-sm/errors$ kubectl patch -n tmc-local --type merge pkgi tmc-local-monitoring --patch '{"spec": {"paused": false}}'
packageinstall.packaging.carvel.dev/tmc-local-monitoring patched

One can also kick off the reconcile by pointing at the tanzu-mission-control package:

andreasm@linuxvm01:~/tmc-sm/errors$ kubectl patch -n tmc-local --type merge pkgi tanzu-mission-control --patch '{"spec": {"paused": true}}'
packageinstall.packaging.carvel.dev/tanzu-mission-control patched
andreasm@linuxvm01:~/tmc-sm/errors$ kubectl patch -n tmc-local --type merge pkgi tanzu-mission-control --patch '{"spec": {"paused": false}}'
packageinstall.packaging.carvel.dev/tanzu-mission-control patched
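Since this pause/unpause dance comes up more than once, it can be wrapped in a tiny helper of my own (just a convenience function around the two patch commands, not part of any CLI):

```shell
# Force an immediate re-reconcile of a PackageInstall by pausing and unpausing it.
# Usage: kick_pkgi <package-name> [namespace]   (namespace defaults to tmc-local)
kick_pkgi() {
  pkgi="$1"
  ns="${2:-tmc-local}"
  kubectl patch -n "$ns" --type merge pkgi "$pkgi" --patch '{"spec": {"paused": true}}'
  kubectl patch -n "$ns" --type merge pkgi "$pkgi" --patch '{"spec": {"paused": false}}'
}
```

For example, `kick_pkgi tmc-local-monitoring` replaces the two commands above.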

The end result should give us this in our alertmanager StatefulSet:

andreasm@linuxvm01:~/tmc-sm/errors$ k get statefulsets.apps -n tmc-local alertmanager-tmc-local-monitoring-tmc-local -oyaml
   #snippet
      - args:
        - --config.file=/etc/alertmanager/config/alertmanager.yaml
        - --cluster.advertise-address=$(POD_IP):9093
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
    #snippet

And the alertmanager pod should start:

andreasm@linuxvm01:~/tmc-sm/errors$ k get pod -n tmc-local alertmanager-tmc-local-monitoring-tmc-local-0
NAME                                            READY   STATUS    RESTARTS   AGE
alertmanager-tmc-local-monitoring-tmc-local-0   2/2     Running   0          10m

If it's still in CrashLoopBackOff, just delete the pod and it should enter a running state. If not, describe the alertmanager StatefulSet and look for any additional errors; maybe there is a typo in the ytt overlay YAML.

One can also do this operation while the installation is waiting on the tmc-local-monitoring package reconciliation, so the package installation completes successfully after all.

Install the TMC-SM package - continued

What about the Services, HTTPProxies and Ingresses that were created?

Get the Services:

andreasm@linuxvm01:~$ k get svc -n tmc-local
NAME                                               TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
account-manager-grpc                               ClusterIP      20.10.134.215   <none>          443/TCP                      18m
account-manager-service                            ClusterIP      20.10.6.142     <none>          443/TCP,7777/TCP             18m
agent-gateway-service                              ClusterIP      20.10.111.64    <none>          443/TCP,8443/TCP,7777/TCP    18m
alertmanager-tmc-local-monitoring-tmc-local        ClusterIP      20.10.113.103   <none>          9093/TCP                     15m
api-gateway-service                                ClusterIP      20.10.241.28    <none>          443/TCP,8443/TCP,7777/TCP    18m
audit-service-consumer                             ClusterIP      20.10.183.29    <none>          7777/TCP                     18m
audit-service-grpc                                 ClusterIP      20.10.94.221    <none>          443/TCP                      18m
audit-service-rest                                 ClusterIP      20.10.118.27    <none>          443/TCP                      18m
audit-service-service                              ClusterIP      20.10.193.140   <none>          443/TCP,8443/TCP,7777/TCP    18m
auth-manager-server                                ClusterIP      20.10.86.230    <none>          443/TCP                      18m
auth-manager-service                               ClusterIP      20.10.136.164   <none>          443/TCP,7777/TCP             18m
authentication-grpc                                ClusterIP      20.10.32.80     <none>          443/TCP                      18m
authentication-service                             ClusterIP      20.10.69.22     <none>          443/TCP,7777/TCP             18m
cluster-agent-service-grpc                         ClusterIP      20.10.55.122    <none>          443/TCP                      18m
cluster-agent-service-installer                    ClusterIP      20.10.185.105   <none>          80/TCP                       18m
cluster-agent-service-service                      ClusterIP      20.10.129.243   <none>          443/TCP,80/TCP,7777/TCP      18m
cluster-config-service                             ClusterIP      20.10.237.148   <none>          443/TCP,7777/TCP             18m
cluster-object-service-grpc                        ClusterIP      20.10.221.128   <none>          443/TCP                      18m
cluster-object-service-service                     ClusterIP      20.10.238.0     <none>          443/TCP,8443/TCP,7777/TCP    18m
cluster-reaper-grpc                                ClusterIP      20.10.224.97    <none>          443/TCP                      18m
cluster-reaper-service                             ClusterIP      20.10.65.179    <none>          443/TCP,7777/TCP             18m
cluster-secret-service                             ClusterIP      20.10.17.122    <none>          443/TCP,7777/TCP             18m
cluster-service-grpc                               ClusterIP      20.10.152.204   <none>          443/TCP                      18m
cluster-service-rest                               ClusterIP      20.10.141.159   <none>          443/TCP                      18m
cluster-service-service                            ClusterIP      20.10.40.169    <none>          443/TCP,8443/TCP,7777/TCP    18m
cluster-sync-egest                                 ClusterIP      20.10.47.77     <none>          443/TCP,7777/TCP             18m
cluster-sync-egest-grpc                            ClusterIP      20.10.219.9     <none>          443/TCP                      18m
cluster-sync-ingest                                ClusterIP      20.10.223.205   <none>          443/TCP,7777/TCP             18m
cluster-sync-ingest-grpc                           ClusterIP      20.10.196.7     <none>          443/TCP                      18m
contour                                            ClusterIP      20.10.5.59      <none>          8001/TCP                     21m
contour-envoy                                      LoadBalancer   20.10.72.121    10.101.210.12   80:31964/TCP,443:31350/TCP   21m
contour-envoy-metrics                              ClusterIP      None            <none>          8002/TCP                     21m
dataprotection-grpc                                ClusterIP      20.10.47.233    <none>          443/TCP                      18m
dataprotection-service                             ClusterIP      20.10.73.15     <none>          443/TCP,8443/TCP,7777/TCP    18m
events-service-consumer                            ClusterIP      20.10.38.207    <none>          7777/TCP                     18m
events-service-grpc                                ClusterIP      20.10.65.181    <none>          443/TCP                      18m
events-service-service                             ClusterIP      20.10.34.169    <none>          443/TCP,7777/TCP             18m
fanout-service-grpc                                ClusterIP      20.10.77.108    <none>          443/TCP                      18m
fanout-service-service                             ClusterIP      20.10.141.34    <none>          443/TCP,7777/TCP             18m
feature-flag-service-grpc                          ClusterIP      20.10.171.161   <none>          443/TCP                      18m
feature-flag-service-service                       ClusterIP      20.10.112.195   <none>          443/TCP,7777/TCP             18m
inspection-grpc                                    ClusterIP      20.10.20.119    <none>          443/TCP                      18m
inspection-service                                 ClusterIP      20.10.85.86     <none>          443/TCP,7777/TCP             18m
intent-grpc                                        ClusterIP      20.10.213.53    <none>          443/TCP                      18m
intent-service                                     ClusterIP      20.10.19.196    <none>          443/TCP,7777/TCP             18m
kafka                                              ClusterIP      20.10.135.162   <none>          9092/TCP                     20m
kafka-headless                                     ClusterIP      None            <none>          9092/TCP,9094/TCP,9093/TCP   20m
kafka-metrics                                      ClusterIP      20.10.175.161   <none>          9308/TCP                     20m
landing-service-metrics                            ClusterIP      None            <none>          7777/TCP                     18m
landing-service-rest                               ClusterIP      20.10.37.157    <none>          443/TCP                      18m
landing-service-server                             ClusterIP      20.10.28.110    <none>          443/TCP                      18m
minio                                              ClusterIP      20.10.234.32    <none>          9000/TCP,9001/TCP            20m
onboarding-service-metrics                         ClusterIP      None            <none>          7777/TCP                     18m
onboarding-service-rest                            ClusterIP      20.10.66.85     <none>          443/TCP                      18m
package-deployment-service                         ClusterIP      20.10.40.90     <none>          443/TCP,7777/TCP             18m
pinniped-supervisor                                ClusterIP      20.10.138.177   <none>          443/TCP                      20m
pinniped-supervisor-api                            ClusterIP      20.10.218.242   <none>          443/TCP                      20m
policy-engine-grpc                                 ClusterIP      20.10.114.38    <none>          443/TCP                      18m
policy-engine-service                              ClusterIP      20.10.85.191    <none>          443/TCP,7777/TCP             18m
policy-insights-grpc                               ClusterIP      20.10.95.196    <none>          443/TCP                      18m
policy-insights-service                            ClusterIP      20.10.119.38    <none>          443/TCP,7777/TCP             18m
policy-sync-service-service                        ClusterIP      20.10.32.72     <none>          7777/TCP                     18m
policy-view-service-grpc                           ClusterIP      20.10.4.163     <none>          443/TCP                      18m
policy-view-service-service                        ClusterIP      20.10.41.172    <none>          443/TCP,7777/TCP             18m
postgres-endpoint-controller                       ClusterIP      20.10.3.234     <none>          9876/TCP                     18m
postgres-postgresql                                ClusterIP      20.10.10.197    <none>          5432/TCP                     20m
postgres-postgresql-hl                             ClusterIP      None            <none>          5432/TCP                     20m
postgres-postgresql-metrics                        ClusterIP      20.10.79.247    <none>          9187/TCP                     20m
prometheus-server-tmc-local-monitoring-tmc-local   ClusterIP      20.10.152.45    <none>          9090/TCP                     15m
provisioner-service-grpc                           ClusterIP      20.10.138.198   <none>          443/TCP                      18m
provisioner-service-service                        ClusterIP      20.10.96.47     <none>          443/TCP,7777/TCP             18m
resource-manager-grpc                              ClusterIP      20.10.143.168   <none>          443/TCP                      18m
resource-manager-service                           ClusterIP      20.10.238.70    <none>          443/TCP,8443/TCP,7777/TCP    18m
s3-access-operator                                 ClusterIP      20.10.172.230   <none>          443/TCP,8080/TCP             19m
schema-service-grpc                                ClusterIP      20.10.237.93    <none>          443/TCP                      18m
schema-service-service                             ClusterIP      20.10.99.167    <none>          443/TCP,7777/TCP             18m
telemetry-event-service-consumer                   ClusterIP      20.10.196.48    <none>          7777/TCP                     18m
tenancy-service-metrics-headless                   ClusterIP      None            <none>          7777/TCP                     18m
tenancy-service-tenancy-service                    ClusterIP      20.10.80.23     <none>          443/TCP                      18m
tenancy-service-tenancy-service-rest               ClusterIP      20.10.200.153   <none>          443/TCP                      18m
ui-server                                          ClusterIP      20.10.233.160   <none>          8443/TCP,7777/TCP            18m
wcm-grpc                                           ClusterIP      20.10.19.188    <none>          443/TCP                      18m
wcm-service                                        ClusterIP      20.10.175.206   <none>          443/TCP,8443/TCP,7777/TCP    18m

Get the Ingresses:

andreasm@linuxvm01:~$ k get ingress -n tmc-local
NAME                                                       CLASS       HOSTS                                        ADDRESS         PORTS     AGE
alertmanager-tmc-local-monitoring-tmc-local-ingress        tmc-local   alertmanager.tmc.pretty-awesome-domain.net   10.101.210.12   80        16m
landing-service-ingress-global                             tmc-local   landing.tmc.pretty-awesome-domain.net        10.101.210.12   80, 443   19m
minio                                                      tmc-local   console.s3.tmc.pretty-awesome-domain.net     10.101.210.12   80        20m
minio-api                                                  tmc-local   s3.tmc.pretty-awesome-domain.net             10.101.210.12   80, 443   20m
prometheus-server-tmc-local-monitoring-tmc-local-ingress   tmc-local   prometheus.tmc.pretty-awesome-domain.net     10.101.210.12   80        16m

Ah, there are my DNS records 😄

Get the HTTPProxies:

andreasm@linuxvm01:~$ k get httpproxies -n tmc-local
NAME                              FQDN                                                TLS SECRET   STATUS   STATUS DESCRIPTION
auth-manager-server               auth.tmc.pretty-awesome-domain.net                  server-tls   valid    Valid HTTPProxy
minio-api-proxy                   s3.tmc.pretty-awesome-domain.net                    minio-tls    valid    Valid HTTPProxy
minio-bucket-proxy                tmc-local.s3.tmc.pretty-awesome-domain.net          minio-tls    valid    Valid HTTPProxy
minio-console-proxy               console.s3.tmc.pretty-awesome-domain.net            minio-tls    valid    Valid HTTPProxy
pinniped-supervisor               pinniped-supervisor.tmc.pretty-awesome-domain.net                valid    Valid HTTPProxy
stack-http-proxy                  tmc.pretty-awesome-domain.net                       stack-tls    valid    Valid HTTPProxy
tenancy-service-http-proxy        gts.tmc.pretty-awesome-domain.net                                valid    Valid HTTPProxy
tenancy-service-http-proxy-rest   gts-rest.tmc.pretty-awesome-domain.net                           valid    Valid HTTPProxy

Ah, more DNS records.
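All the hostnames that need DNS records can be scraped straight from these objects instead of copying them by hand. A minimal sketch of the idea, with the Envoy LoadBalancer IP and FQDNs hardcoded from the output above for illustration (against a live cluster you could collect the FQDNs with something like `kubectl get httpproxies -n tmc-local -o jsonpath='{range .items[*]}{.spec.virtualhost.fqdn}{"\n"}{end}'`):

```shell
# Emit BIND-style A records pointing every TMC hostname at the
# contour-envoy LoadBalancer IP (10.101.210.12 in my environment).
envoy_ip=10.101.210.12
for fqdn in tmc.pretty-awesome-domain.net \
            auth.tmc.pretty-awesome-domain.net \
            pinniped-supervisor.tmc.pretty-awesome-domain.net \
            s3.tmc.pretty-awesome-domain.net \
            console.s3.tmc.pretty-awesome-domain.net \
            gts.tmc.pretty-awesome-domain.net \
            gts-rest.tmc.pretty-awesome-domain.net; do
  printf '%s. IN A %s\n' "$fqdn" "$envoy_ip"
done
```

In my lab I simply delegate the whole tmc.pretty-awesome-domain.net zone, but generating explicit records like this works too.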

Unfortunately, my tmc-sm deployment gave me this error in the end, which can be solved afterwards or during the install process by following the section on Alertmanager above:

    | 8:32:35AM: ---- waiting on 1 changes [25/26 done] ----
    | 8:32:36AM: ongoing: reconcile packageinstall/tmc-local-monitoring (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 8:32:36AM:  ^ Reconciling
8:33:30AM: Deploy failed
    | kapp: Error: Timed out waiting after 15m0s for resources: [packageinstall/tmc-local-monitoring (packaging.carvel.dev/v1alpha1) namespace: tmc-local]
    | Deploying: Error (see .status.usefulErrorMessage for details)
8:33:30AM: Error tailing app: Reconciling app: Deploy failed

8:33:30AM: packageinstall/tanzu-mission-control (packaging.carvel.dev/v1alpha1) namespace: tmc-local: ReconcileFailed
Error: packageinstall/tanzu-mission-control (packaging.carvel.dev/v1alpha1) namespace: tmc-local: Reconciling: kapp: Error: Timed out waiting after 15m0s for resources: [packageinstall/tmc-local-monitoring (packaging.carvel.dev/v1alpha1) namespace: tmc-local]. Reconcile failed: Error (see .status.usefulErrorMessage for details)
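As the message hints, the actual root cause of a timeout like this lives in the PackageInstall's `.status.usefulErrorMessage` field. On the live cluster you would read it with kubectl jsonpath; the snippet below is just a sketch showing the same extraction against a saved status fragment (the file and its contents are made up for illustration):

```shell
# Live cluster equivalent:
#   kubectl -n tmc-local get packageinstall tmc-local-monitoring \
#     -o jsonpath='{.status.usefulErrorMessage}'
# Illustration: pull the field out of a saved status snippet instead.
cat <<'EOF' > /tmp/pkgi-status.yaml
status:
  friendlyDescription: 'Reconcile failed: Error (see .status.usefulErrorMessage for details)'
  usefulErrorMessage: 'kapp: Error: Timed out waiting after 15m0s for resources'
EOF
sed -n "s/^ *usefulErrorMessage: '\(.*\)'$/\1/p" /tmp/pkgi-status.yaml
```

That field usually points straight at the failing pod or resource, in my case the Alertmanager pod.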

Except for the Alertmanager pod, which can be fixed, it is kind of a success. Also remember this note from the official documentation:

Note

Deploying TMC Self-Managed 1.0 on a Tanzu Kubernetes Grid (TKG) 2.0 workload cluster running in vSphere with Tanzu on vSphere version 8.x is for tech preview only. Initiate deployments only in pre-production environments or production environments where support for the integration is not required. vSphere 8u1 or later is required in order to test the tech preview integration.

Now it's time to log in to the TMC-SM UI.

Uninstall tmc-sm packages

To uninstall after a failed deployment (or for any other reason), issue this command:

andreasm@linuxvm01:~/tmc-sm$ tanzu package installed delete tanzu-mission-control -n tmc-local
Delete package install 'tanzu-mission-control' from namespace 'tmc-local'

Continue? [yN]: y


7:55:19AM: Deleting package install 'tanzu-mission-control' from namespace 'tmc-local'
7:55:19AM: Waiting for deletion of package install 'tanzu-mission-control' from namespace 'tmc-local'
7:55:19AM: Waiting for generation 2 to be observed
7:55:19AM: Delete started (2s ago)
7:55:21AM: Deleting
    | Target cluster 'https://20.10.0.1:443' (nodes: tmc-sm-cluster-node-pool-3-ctgxg-5f76bd48d8-hzh7h, 4+)
    | Changes
    | Namespace  Name                                       Kind                Age  Op      Op st.  Wait to  Rs       Ri
    | (cluster)  tmc-install-cluster-admin-role             ClusterRole         17m  delete  -       delete   ok       -
    | ^          tmc-install-cluster-admin-role-binding     ClusterRoleBinding  17m  delete  -       delete   ok       -
    | tmc-local  contour                                    PackageInstall      17m  delete  -       delete   ok       -
    | ^          contour-values-ver-1                       Secret              17m  delete  -       delete   ok       -
    | ^          kafka                                      PackageInstall      16m  delete  -       delete   ok       -
    | ^          kafka-topic-controller                     PackageInstall      16m  delete  -       delete   ok       -
    | ^          kafka-topic-controller-values-ver-1        Secret              17m  delete  -       delete   ok       -
    | ^          kafka-values-ver-1                         Secret              17m  delete  -       delete   ok       -
    | ^          minio                                      PackageInstall      16m  delete  -       delete   ok       -
    | ^          minio-values-ver-1                         Secret              17m  delete  -       delete   ok       -
    | ^          monitoring-values-ver-1                    Secret              17m  delete  -       delete   ok       -
    | ^          pinniped                                   PackageInstall      16m  delete  -       delete   ok       -
    | ^          pinniped-values-ver-1                      Secret              17m  delete  -       delete   ok       -
    | ^          postgres                                   PackageInstall      16m  delete  -       delete   ok       -
    | ^          postgres-endpoint-controller               PackageInstall      15m  delete  -       delete   ok       -
    | ^          postgres-endpoint-controller-values-ver-1  Secret              17m  delete  -       delete   ok       -
    | ^          postgres-values-ver-1                      Secret              17m  delete  -       delete   ok       -
    | ^          s3-access-operator                         PackageInstall      15m  delete  -       delete   ok       -
    | ^          s3-access-operator-values-ver-1            Secret              17m  delete  -       delete   ok       -
    | ^          tmc-install-sa                             ServiceAccount      17m  delete  -       delete   ok       -
    | ^          tmc-local-monitoring                       PackageInstall      4m   delete  -       delete   ongoing  Reconciling
    | ^          tmc-local-stack                            PackageInstall      14m  delete  -       delete   fail     Reconcile failed:  (message: Error
    |                                                                                                                  (see .status.usefulErrorMessage for
    |                                                                                                                  details))
    | ^          tmc-local-stack-secrets                    PackageInstall      17m  delete  -       delete   ok       -
    | ^          tmc-local-stack-values-ver-1               Secret              17m  delete  -       delete   ok       -
    | ^          tmc-local-support                          PackageInstall      16m  delete  -       delete   ok       -
    | ^          tmc-local-support-values-ver-1             Secret              17m  delete  -       delete   ok       -
    | Op:      0 create, 26 delete, 0 update, 0 noop, 0 exists
    | Wait to: 0 reconcile, 26 delete, 0 noop
    | 7:55:19AM: ---- applying 23 changes [0/26 done] ----
    | 7:55:19AM: delete secret/monitoring-values-ver-1 (v1) namespace: tmc-local
    | 7:55:19AM: delete secret/s3-access-operator-values-ver-1 (v1) namespace: tmc-local
    | 7:55:19AM: delete secret/contour-values-ver-1 (v1) namespace: tmc-local
    | 7:55:19AM: delete packageinstall/tmc-local-stack (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:19AM: delete secret/kafka-values-ver-1 (v1) namespace: tmc-local
    | 7:55:19AM: delete packageinstall/tmc-local-stack-secrets (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:19AM: delete secret/kafka-topic-controller-values-ver-1 (v1) namespace: tmc-local
    | 7:55:19AM: delete packageinstall/pinniped (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/tmc-local-support (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete secret/tmc-local-support-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/kafka-topic-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete secret/postgres-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: delete secret/postgres-endpoint-controller-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: delete secret/minio-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/kafka (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete secret/tmc-local-stack-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/postgres-endpoint-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/postgres (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/tmc-local-monitoring (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete secret/pinniped-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/minio (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/contour (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: delete packageinstall/s3-access-operator (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM: ---- waiting on 23 changes [0/26 done] ----
    | 7:55:20AM: ok: delete secret/monitoring-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ok: delete secret/s3-access-operator-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ok: delete secret/contour-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ongoing: delete packageinstall/tmc-local-stack (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ongoing: delete packageinstall/s3-access-operator (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ok: delete secret/kafka-topic-controller-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ok: delete secret/kafka-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ongoing: delete packageinstall/pinniped (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ongoing: delete packageinstall/tmc-local-stack-secrets (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ok: delete secret/tmc-local-support-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ongoing: delete packageinstall/tmc-local-support (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ok: delete secret/postgres-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ongoing: delete packageinstall/kafka-topic-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ok: delete secret/postgres-endpoint-controller-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ok: delete secret/minio-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ongoing: delete packageinstall/postgres-endpoint-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ongoing: delete packageinstall/kafka (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ok: delete secret/tmc-local-stack-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ok: delete secret/pinniped-values-ver-1 (v1) namespace: tmc-local
    | 7:55:20AM: ongoing: delete packageinstall/tmc-local-monitoring (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ongoing: delete packageinstall/contour (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ongoing: delete packageinstall/minio (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ongoing: delete packageinstall/postgres (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:55:20AM: ---- waiting on 12 changes [11/26 done] ----
    | 7:55:27AM: ok: delete packageinstall/kafka-topic-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:27AM: ---- waiting on 11 changes [12/26 done] ----
    | 7:55:28AM: ok: delete packageinstall/tmc-local-support (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:28AM: ---- waiting on 10 changes [13/26 done] ----
    | 7:55:59AM: ok: delete packageinstall/postgres (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:55:59AM: ---- waiting on 9 changes [14/26 done] ----
    | 7:56:03AM: ok: delete packageinstall/minio (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:03AM: ---- waiting on 8 changes [15/26 done] ----
    | 7:56:20AM: ongoing: delete packageinstall/s3-access-operator (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:56:20AM: ongoing: delete packageinstall/contour (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:56:20AM: ongoing: delete packageinstall/tmc-local-stack (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:56:20AM: ongoing: delete packageinstall/pinniped (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:56:20AM: ongoing: delete packageinstall/postgres-endpoint-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:56:20AM: ongoing: delete packageinstall/tmc-local-monitoring (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:56:20AM: ongoing: delete packageinstall/tmc-local-stack-secrets (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:56:20AM: ongoing: delete packageinstall/kafka (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:20AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:56:37AM: ok: delete packageinstall/pinniped (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:37AM: ---- waiting on 7 changes [16/26 done] ----
    | 7:56:38AM: ok: delete packageinstall/tmc-local-stack-secrets (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:38AM: ---- waiting on 6 changes [17/26 done] ----
    | 7:56:40AM: ok: delete packageinstall/tmc-local-monitoring (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:40AM: ---- waiting on 5 changes [18/26 done] ----
    | 7:56:43AM: ok: delete packageinstall/s3-access-operator (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:43AM: ---- waiting on 4 changes [19/26 done] ----
    | 7:56:48AM: ok: delete packageinstall/kafka (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:48AM: ---- waiting on 3 changes [20/26 done] ----
    | 7:56:54AM: ok: delete packageinstall/contour (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:56:54AM: ---- waiting on 2 changes [21/26 done] ----
    | 7:57:21AM: ongoing: delete packageinstall/postgres-endpoint-controller (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:57:21AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:57:21AM: ongoing: delete packageinstall/tmc-local-stack (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:57:21AM:  ^ Waiting on finalizers: finalizers.packageinstall.packaging.carvel.dev/delete
    | 7:57:40AM: ok: delete packageinstall/tmc-local-stack (packaging.carvel.dev/v1alpha1) namespace: tmc-local
    | 7:57:40AM: ---- waiting on 1 changes [22/26 done] ----
7:57:43AM: App 'tanzu-mission-control' in namespace 'tmc-local' deleted
7:57:44AM: packageinstall/tanzu-mission-control (packaging.carvel.dev/v1alpha1) namespace: tmc-local: DeletionSucceeded
7:57:44AM: Deleting 'Secret': tanzu-mission-control-tmc-local-values
7:57:44AM: Deleting 'ServiceAccount': tanzu-mission-control-tmc-local-sa
7:57:44AM: Deleting 'ClusterRole': tanzu-mission-control-tmc-local-cluster-role
7:57:44AM: Deleting 'ClusterRoleBinding': tanzu-mission-control-tmc-local-cluster-rolebinding

There may be reasons you need to remove the tmc-local namespace as well, since it contains a lot of ConfigMaps, Secrets and PVCs. So if you want to completely and easily remove everything TMC-SM related, delete the namespace. From the official documentation:

To remove Tanzu Mission Control Self-Managed and its artifacts from your cluster, use the Tanzu CLI.

  1. Back up any data that you do not want to lose.

  2. Run the following commands:

    tanzu package installed delete tanzu-mission-control --namespace tmc-local
    tanzu package repository delete tanzu-mission-control-packages --namespace tmc-local
    
  3. If necessary, delete residual resources.

    The above commands clean up most of the resources that were created by the tanzu-mission-control Tanzu package. However, there are some resources that you have to remove manually. The resources include:

      • persistent volumes
      • internal TLS certificates
      • configmaps

    Alternatively, you can delete the tmc-local namespace. When you delete the tmc-local namespace, the persistent volume claims associated with the namespace are deleted. Make sure you have already backed up any data that you don’t want to lose.
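Since deleting the tmc-local namespace also deletes the PVCs (and the data on them), a small guard like the following can help avoid fat-fingering it. This is purely my own illustration, not anything from the TMC tooling; the backup-marker file convention is made up, and the sketch only prints the delete command rather than running it:

```shell
# Hypothetical guard: only hand back the namespace-delete command once a
# backup marker file exists, as a reminder to back up data first.
delete_tmc_local_ns() {
  marker=${1:-/tmp/tmc-local-backup.done}
  if [ ! -f "$marker" ]; then
    echo "refusing: back up first, then touch $marker" >&2
    return 1
  fi
  # Printed rather than executed in this sketch; run it yourself when ready.
  echo "kubectl delete namespace tmc-local"
}
```

Calling `delete_tmc_local_ns` before the marker exists returns a non-zero status; after `touch /tmp/tmc-local-backup.done` it prints the delete command.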

First time login TMC-SM

If everything above went according to plan (not entirely for me, just a minor issue), I should now be able to log in to the TMC-SM console.

From a web browser on my regular client, I enter https://tmc.pretty-awesome-domain.net

[image: first-login]

[image: keycloak-idp]

[image: logged-in]

And 🥁 I am logged in to TMC-SM.

I will end this post here and create a second post on working with TMC-SM. Thanks for reading.

Credits where credit's due

Once again it is necessary to give credit where it is due. This time it goes to my manager Antonio and my colleague Jose, who helped out with the initial configs, and to my colleague Alex, who helped with the Keycloak authentication-related settings.