Proxmox with OpenTofu, Kubespray and Kubernetes


I need a Kubernetes cluster, again

I spend a lot of my spare time playing around with my lab exploring different topics, and much of that time is spent with Kubernetes. So many times have I deployed a new Kubernetes cluster, then after some time decommissioned it again. The way I have typically done it is by cloning an Ubuntu template I have created, manually adjusting all the clones as needed, then manually installing Kubernetes. This takes a lot of time overall, and it also stops me from doing certain tasks because sometimes I just think: nah, not again. Maybe another day, and I do something else instead. And when a human being is set to do a repetitive task, it is destined to fail at some stage: a forgotten setting, why is this one VM not working like the other VMs, and so on. Now the time has come to automate these tasks with the following goals in mind:

  • Reduce deployment time to a minimum
  • Eliminate human errors
  • Consistent outcome every time
  • Make it even more fun to deploy a Kubernetes cluster

I have been running Home Assistant for many years, and there I have a bunch of automations handling all kinds of things in my home which just make my everyday life a bit happier. Most of these automations simply work in the background doing their stuff, as a good automation should. Automating my lab deployments is something I have thought about getting into several times, and now I decided to get this show started. So, as usual, a lot of tinkering and trial and error until I managed to get something working the way I wanted. There is probably room for improvement in several areas, and that is something I will most likely spend my time on "post" this blog post. Speaking of blog posts, after spending some time on this I had to write one about it. My goal later on is a fully automated lab, from VM creation to the Kubernetes runtime and applications. And when they are decommissioned I can easily spin up everything again with persistent data etc. Let's see.

For now, this post will cover what I have done and configured so far to be able to automatically deploy the VMs for my Kubernetes clusters then the provisioning of Kubernetes itself.

My lab

My lab consists of two fairly spec'ed servers with a bunch of CPU cores, a lot of RAM, and a decent amount of SSDs. Network-wise they are using 10Gb Ethernet. In terms of power usage they are kind of friendly to my electricity bill with only my "required" VMs running on them, but that can change quickly if I throw a lot of stuff at them to chew on.

The total power usage above includes my switch, UPS and some other small devices. So it's not that bad considering how much I can get out of it.

But with automation I can easily spin up some resources to be consumed for a certain period and delete them again when not needed. My required VMs, the ones that are always on, are things like Bind DNS servers, pfSense, TrueNAS, Frigate, a DCS server, a couple of Linux "mgmt" VMs, Home Assistant, and a couple of Kubernetes clusters hosting my Unifi controller, Traefik Proxy, Grafana etc.

For the virtualization layer on my two servers I am using Proxmox; one of the reasons is that it supports PCI passthrough of my Coral TPU. I have been running Proxmox for many years and I find it to be a very decent alternative. It does what I want it to do, and Proxmox has a great community!

Proxmox has been configured with these two servers as a cluster. I have not configured any VMs in HA, but with a Proxmox cluster I can easily migrate VMs between the hosts, even without shared storage, and get a single web UI to manage both servers. To be "quorate" I have a cute little RPi3, with its beefy 32 GB SD card, 2 GB RAM and 1 Gb Ethernet, acting as a QDevice.


On that note, one does not necessarily need to have them in a cluster to do VM migration. I recently moved a bunch of VMs over to these two new servers and it was easy peasy using this command:

# executed on the source Proxmox node
qm remote-migrate <src vm_id> <dst vm_id> 'apitoken=PVEAPIToken=root@pam!migrate=<token>,host=<dst-host-ip>,fingerprint=<thumbprint>' --target-bridge vmbr0 --target-storage 'raid10-node01' --online

I found this with the great help of the Proxmox community here.

Both Proxmox servers have their own local ZFS storage. On both of them I have created a dedicated ZFS pool with an identical name, which I use for ZFS replication of some of the more "critical" VMs; as a bonus it also reduces the migration time drastically for these VMs when I move them between my servers. The "cluster" network (vmbr1) is a directly connected 2x 10Gb Ethernet link (not through my physical switch). The other 2x 10Gb interfaces are connected to my switch for all my other network needs, like the VM network (vmbr0) and Proxmox management.

Throughout this post I will be using a dedicated Linux VM for all my commands and interactions. So every time I install something, it is on this Linux VM.

But I am digressing, enough of the intro, get on with the automation part you were supposed to write about. Got it.

In this post I will use the following products:

Provision VMs in Proxmox using OpenTofu

Terraform is something I have come across many times in my professional work, but I never had the chance to actually use it myself beyond knowing about it and how the solutions I work with can integrate with Terraform. I noticed that even the Proxmox community had several posts around using Terraform, so I decided to just go with that. I already knew that HashiCorp had announced the change of the Terraform license from MPL to BSL, and that OpenTofu is an open-source fork of Terraform. So instead of basing my automations on Terraform, I will be using OpenTofu. Read more about OpenTofu here. For now OpenTofu is Terraform "compatible" and uses the Terraform registries, so providers etc. for Terraform work with OpenTofu. You will notice further down that all my configs use terraform constructs.

In OpenTofu/Terraform there are several concepts one needs to know about. I will use some of them in this post, like providers, resources, provisioners and variables. It is smart to read a bit more here about how and why to use them.
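To make those concepts a bit more concrete, here is a minimal, hypothetical sketch tying them together (it uses the well-known hashicorp/local provider purely for illustration; the names and paths are made up and not part of my setup):

```hcl
variable "greeting" {              # variable: an input value, optionally with a default
  type    = string
  default = "hello"
}

resource "local_file" "example" {  # resource: an object OpenTofu should create,
  filename = "/tmp/example.txt"    # implemented by a provider (here: hashicorp/local)
  content  = var.greeting

  provisioner "local-exec" {       # provisioner: a command run after the resource is created
    command = "cat ${self.filename}"
  }
}
```

The provider supplies the resource types, the resource declares the desired object, the variable feeds values in, and the provisioner runs an action once the resource exists.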

To get started with OpenTofu there are some preparations to be done. Let's start with these.

Install OpenTofu

To get started with OpenTofu I deployed it on my Linux machine using Snap, but several other alternatives are available. See more on the official OpenTofu docs page.

sudo snap install --classic opentofu

Now OpenTofu is installed and I can start using it. I also installed the bash autocompletion like this:

tofu -install-autocomplete # restart shell session...

I decided to create a dedicated folder for my "projects" to live in, so I have created a folder in my home folder called "proxmox" where I have different subfolders depending on the tasks or resources I will use OpenTofu for.

├── k8s-cluster-02
├── proxmox-images

OpenTofu Proxmox provider

To be able to use OpenTofu with Proxmox I need a provider that can use the Proxmox API. I did some quick research on the different options out there and landed on this provider: bpg/proxmox. It seems very active and is regularly updated (according to the git repo here).

An OpenTofu/Terraform provider is defined like this; the example below is configured to install the bpg/proxmox provider I need to interact with Proxmox.

terraform {
  required_providers {
    proxmox = {
      source = "bpg/proxmox"
      version = "0.43.2"
    }
  }
}

provider "proxmox" {
  endpoint  = var.proxmox_api_endpoint
  api_token = var.proxmox_api_token
  insecure  = true
  ssh {
    agent    = true
    username = "root"
  }
}

I will save this content in its own file.

First, a short explanation of the two sections above. The terraform section instructs OpenTofu/Terraform which provider should be downloaded and enabled. The version field pins a specific version to use: not the latest, but exactly this version. Using this field makes sure that your automation will not break if a provider update introduces some API changes.
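As a side note, the version field does not have to be an exact pin. OpenTofu/Terraform also accept constraint operators, for example the pessimistic operator ~> which allows only newer patch releases. A sketch of what that could look like for this provider:

```hcl
terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      # exact pin (what I use above):
      # version = "0.43.2"
      # or allow any newer patch release within 0.43.x:
      version = "~> 0.43.2"
    }
  }
}
```

An exact pin gives maximum reproducibility; a pessimistic constraint trades a little of that for automatic bug-fix updates.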

The provider section configures how the proxmox provider should interact with Proxmox. Instead of using a regular username and password I have opted for an API token. For the endpoint and api_token keys I am using variable values that are defined in separate variable and credentials files (more on those below). There are also certain tasks in my automation that require SSH interaction with Proxmox, so I have enabled that as well by configuring the ssh block.

For more information on the bpg/proxmox provider, head over here.

Prepare Proxmox with an API token for the OpenTofu bpg/proxmox provider

To be able to use the above configured provider with Proxmox I need to prepare Proxmox to use an API token. I followed the bpg/proxmox provider documentation here.

On my "leader" Proxmox node:

# Create the user
sudo pveum user add terraform@pve
# Create a role for the user above
sudo pveum role add Terraform -privs "Datastore.Allocate Datastore.AllocateSpace Datastore.AllocateTemplate Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify SDN.Use VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt User.Modify"
# Assign the terraform user to the above role
sudo pveum aclmod / -user terraform@pve -role Terraform
# Create the token
sudo pveum user token add terraform@pve provider --privsep=0
┌──────────────┬──────────────────────────────────────┐
│ key          │ value                                │
├──────────────┼──────────────────────────────────────┤
│ full-tokenid │ terraform@pve!provider               │
├──────────────┼──────────────────────────────────────┤
│ info         │ {"privsep":"0"}                      │
├──────────────┼──────────────────────────────────────┤
│ value        │ <token>                              │
└──────────────┴──────────────────────────────────────┘
# make a backup of the token

Now I have created the API user to be used with my bpg/proxmox provider. However, that is not sufficient, as I also need an SSH keypair on my Linux jumphost for passwordless SSH authentication, and its public key needs to be copied to my Proxmox nodes. This is done like this:

andreasm@linuxmgmt01:~$ ssh-copy-id -i root@ # -i pointing to the pub key I want to use and specify the root user

Now I can log into my Proxmox node using SSH without a password from my Linux jumphost. But when used in combination with OpenTofu that is still not sufficient: I need to load the key into the SSH agent. If I don't do that, the automations that require SSH access will fail with this error message:

Error: failed to open SSH client: unable to authenticate user "root" over SSH to "". Please verify that ssh-agent is correctly loaded with an authorized key via 'ssh-add -L' (NOTE: configurations in ~/.ssh/config are not considered by golang's ssh implementation). The exact error from ssh.Dial: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain

Ah, I can just configure the .ssh/config with this:

    AddKeysToAgent yes
    IdentityFile ~/.ssh/id_rsa

Nope, can't do. Look at the error message again:

(NOTE: configurations in ~/.ssh/config are not considered by golang's ssh implementation)

Ah, I can just do this then:

andreasm@linuxmgmt01:~$ eval `ssh-agent -s`
andreasm@linuxmgmt01:~$ ssh-add id_rsa

Yes, but that is not persistent between sessions, so you need to do this every time you log on to your Linux jumphost. I couldn't figure out how to do this the "correct" way following the expected Golang approach, so instead I went with this:

# I added this in my .bashrc file
if [ -z "$SSH_AUTH_SOCK" ] ; then
  eval `ssh-agent -s`
  ssh-add ~/.ssh/id_rsa
fi

OpenTofu variables and credentials

Instead of exposing usernames/tokens and other information directly in my provider/resource *.tf files, I can refer to them by declaring the variables in one file and the credentials in a separate tfvars file. The variables file defines the variables to use, and the tfvars file maps a value to each variable. I am using this content in my variables file:

variable "proxmox_api_endpoint" {
  type = string
  description = "Proxmox cluster API endpoint"
}

variable "proxmox_api_token" {
  type = string
  description = "Proxmox API token bpg proxmox provider with ID and token"
}

Then in my tfvars file I have this content:

proxmox_api_endpoint = ""
proxmox_api_token = "terraform@pve!provider=<token-from-earlier>"

To read more about these files head over here. One can also let OpenTofu generate prompts: if I have created variables that are referred to in any of my resources but not mapped to a value (in credentials.tfvars etc.), it will ask me to enter a value when doing a plan or apply.
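A variable declaration without a default and without a mapped value is exactly what triggers such a prompt. If I wanted, I could also mark the token variable as sensitive so its value is redacted in plan/apply output; a sketch of what that could look like (not what I use above):

```hcl
variable "proxmox_api_token" {
  type        = string
  description = "Proxmox API token"
  sensitive   = true   # redacts the value in plan/apply output

  # With no "default" here and no value supplied via a .tfvars file,
  # "tofu plan" will prompt interactively:
  #   var.proxmox_api_token
  #     Enter a value:
}
```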

Prepare an Ubuntu cloud image - OpenTofu resource

In my automations with OpenTofu I will use a cloud image instead of a VM template in Proxmox.

I have been using Ubuntu as my preferred Linux distribution for many years and don't see any reason to change that now. Ubuntu/Canonical does provide cloud images, and they can be found here.

But I will not download them using my browser, instead I will be using OpenTofu with the bpg/proxmox provider to download and upload them to my Proxmox nodes.

In my proxmox folder I have a dedicated folder for this project called proxmox-images

├── proxmox-images

Inside that folder I will start by creating the provider, variables and credentials files with the content described above.

0 directories, 3 files

But these files don't do anything other than provide the relevant information to connect to and interact with Proxmox. What I need is an OpenTofu resource with a task to be done. So I will prepare a new file with the following content:

resource "proxmox_virtual_environment_file" "ubuntu_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "proxmox-02"

  source_file {
    # you may download this image locally on your workstation and then use the local path instead of the remote URL
    path      = ""

    # you may also use the SHA256 checksum of the image to verify its integrity
    checksum = "1d82c1db56e7e55e75344f1ff875b51706efe171ff55299542c29abba3a20823"
  }
}

What this file does is instruct OpenTofu where to grab my Ubuntu cloud image from, then upload it to my Proxmox node 2 and a specific datastore on that node. In other words, I have defined a resource describing this task. More info on this here.

Within this same .tf file I can define several resources to download several cloud images. I could have given the file a more generic name and just defined all the images I wanted to download/upload to Proxmox in it. I have decided to keep this task as a separate project/folder.
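For illustration, a second image resource in the same file could look like the sketch below; the Debian URL and resource name are just examples, not something I have actually deployed:

```hcl
# Hypothetical second resource: a Debian 12 cloud image for the same node
resource "proxmox_virtual_environment_file" "debian_cloud_image" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "proxmox-02"

  source_file {
    # example URL, verify against the official Debian cloud image index
    path = "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
  }
}
```

Each resource block is tracked independently in the state, so adding more images later only creates the new ones.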

When I save this file I will have 4 files in my proxmox-images folder:

0 directories, 4 files

That's all the files I need to start my first OpenTofu task/automation.

Let's do a quick recap. The provider file instructs OpenTofu to download and enable the bpg/proxmox provider in this folder, and configures the provider to interact with my Proxmox nodes.

Then the variables file and the tfvars file provide the keys and values used by the provider configuration, and the resource file contains the task I want OpenTofu to perform.

So now I am ready to execute the first OpenTofu command: tofu init. This command initializes OpenTofu in the respective folder proxmox-images, and downloads and enables the provider I have configured. More info here.

So let's try it:

andreasm@linuxmgmt01:~/terraform/proxmox/proxmox-images$ tofu init

Initializing the backend...

Initializing provider plugins...
- Finding bpg/proxmox versions matching "0.43.2"...
- Installing bpg/proxmox v0.43.2...
- Installed bpg/proxmox v0.43.2 (signed, key ID DAA1958557A27403)

Providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:

OpenTofu has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that OpenTofu can guarantee to make the same selections by default when
you run "tofu init" in the future.

OpenTofu has been successfully initialized!

You may now begin working with OpenTofu. Try running "tofu plan" to see
any changes that are required for your infrastructure. All OpenTofu commands
should now work.

If you ever set or change modules or backend configuration for OpenTofu,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Cool, now I have initialized my proxmox-images folder. Let's continue with creating a plan.

andreasm@linuxmgmt01:~/terraform/proxmox/proxmox-images$ tofu plan -out plan

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # proxmox_virtual_environment_file.ubuntu_cloud_image will be created
  + resource "proxmox_virtual_environment_file" "ubuntu_cloud_image" {
      + content_type           = "iso"
      + datastore_id           = "local"
      + file_modification_date = (known after apply)
      + file_name              = (known after apply)
      + file_size              = (known after apply)
      + file_tag               = (known after apply)
      + id                     = (known after apply)
      + node_name              = "proxmox-02"
      + overwrite              = true
      + timeout_upload         = 1800

      + source_file {
          + changed  = false
          + checksum = "1d82c1db56e7e55e75344f1ff875b51706efe171ff55299542c29abba3a20823"
          + insecure = false
          + path     = ""
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Saved the plan to: plan

To perform exactly these actions, run the following command to apply:
    tofu apply "plan"

I am using -out plan just to keep a record of the plan; it could be called whatever I wanted. Now OpenTofu tells me what its intentions are, and if I am happy with the plan, I just need to apply it.

So let's apply it:

andreasm@linuxmgmt01:~/terraform/proxmox/proxmox-images$ tofu apply plan
proxmox_virtual_environment_file.ubuntu_cloud_image: Creating...
proxmox_virtual_environment_file.ubuntu_cloud_image: Still creating... [10s elapsed]
proxmox_virtual_environment_file.ubuntu_cloud_image: Creation complete after 20s [id=local:iso/jammy-server-cloudimg-amd64.img]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

And after 20 seconds I have my Ubuntu cloud image uploaded to my Proxmox node (jammy-server-cloudimg-amd64.img):


That's great. Now I have my iso image to use for my VM deployments.

Next up is deploying a bunch of VMs to be used for a Kubernetes cluster.

Deploy VMs using OpenTofu and install Kubernetes using Ansible and Kubespray

In this section I will automate everything from deploying the VMs to installing Kubernetes on them. This task will involve OpenTofu, Ansible and Kubespray. It will deploy 6 virtual machines with two different resource configurations: 3 VMs intended to be used as Kubernetes control plane nodes, and 3 VMs intended to be Kubernetes worker nodes. That is the task for OpenTofu; as soon as OpenTofu has done its part, it will trigger an Ansible playbook to initiate Kubespray to install Kubernetes on all my nodes. This should end up as a fully automated task, from me just executing tofu apply plan to a ready Kubernetes cluster.

I will start by creating another subfolder in my proxmox folder called k8s-cluster-02. In this folder I will reuse the following files:

0 directories, 3 files

These files I will just copy from my proxmox-images folder. I will also need to define and add a couple of other files. When I am ready with this task I will end up with these files:

0 directories, 7 files

I will go through all the files one by one. But there are still some preparations to be done to be able to complete this whole task.

Prepare OpenTofu to deploy my VMs

In addition to the three common files I have copied from the previous project, I will need to create two proxmox_virtual_environment_vm resource definitions (one for each VM type). The content will be saved in a new .tf file and looks like this:

resource "proxmox_virtual_environment_vm" "k8s-cp-vms-cl02" {
  count       = 3
  name        = "k8s-cp-vm-${count.index + 1}-cl-02"
  description = "Managed by Terraform"
  tags        = ["terraform", "ubuntu", "k8s-cp"]

  node_name = "proxmox-02"
  vm_id     = "100${count.index + 1}"

  cpu {
    cores = 2
    type = "host"
  }

  memory {
    dedicated = 2048
  }

  agent {
    # read 'Qemu guest agent' section, change to true only when ready
    enabled = true
  }

  startup {
    order      = "3"
    up_delay   = "60"
    down_delay = "60"
  }

  disk {
    datastore_id = "raid-10-node02"
    file_id      = "local:iso/jammy-server-cloudimg-amd64.img"
    interface    = "virtio0"
    iothread     = true
    discard      = "on"
    size         = 40
    file_format  = "raw"
  }

  initialization {
    dns {
      servers = ["", ""]
      domain = ""
    }
    ip_config {
      ipv4 {
        address = "${count.index + 1}/24"
        gateway = ""
      }
    }
    datastore_id = "raid-10-node02"

    user_data_file_id = proxmox_virtual_environment_file.ubuntu_cloud_init.id # the cloud-init snippet defined below
  }

  network_device {
    bridge = "vmbr0"
    vlan_id = "216"
  }

  operating_system {
    type = "l26"
  }

  keyboard_layout = "no"

  lifecycle {
    ignore_changes = [
      network_device,
    ]
  }
}

resource "proxmox_virtual_environment_vm" "k8s-worker-vms-cl02" {
  count       = 3
  name        = "k8s-node-vm-${count.index + 1}-cl-02"
  description = "Managed by Terraform"
  tags        = ["terraform", "ubuntu", "k8s-node"]

  node_name = "proxmox-02"
  vm_id     = "100${count.index + 5}"

  cpu {
    cores = 4
    type = "host"
  }

  memory {
    dedicated = 4096
  }

  agent {
    # read 'Qemu guest agent' section, change to true only when ready
    enabled = true
  }

  startup {
    order      = "3"
    up_delay   = "60"
    down_delay = "60"
  }

  disk {
    datastore_id = "raid-10-node02"
    file_id      = "local:iso/jammy-server-cloudimg-amd64.img"
    interface    = "virtio0"
    iothread     = true
    discard      = "on"
    size         = 60
    file_format  = "raw"
  }

  initialization {
    dns {
      servers = ["", ""]
      domain = ""
    }
    ip_config {
      ipv4 {
        address = "${count.index + 5}/24"
        gateway = ""
      }
    }
    datastore_id = "raid-10-node02"

    user_data_file_id = proxmox_virtual_environment_file.ubuntu_cloud_init.id # the cloud-init snippet defined below
  }

  network_device {
    bridge = "vmbr0"
    vlan_id = "216"
  }

  operating_system {
    type = "l26"
  }

  keyboard_layout = "no"

  lifecycle {
    ignore_changes = [
      network_device,
    ]
  }
}

In this file I have defined two proxmox_virtual_environment_vm resources, called k8s-cp-vms-cl02 and k8s-worker-vms-cl02 respectively. I am using count.index in the fields where I need an automatically increasing number, like the IP address, vm_id and name. The resources also refer to my Ubuntu cloud image as the installation source, and a user_data_file_id (shown next) to do a simple cloud-init on the VMs.
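As a quick, optional sanity check (not part of my files), a splat expression can expand all counted instances of a resource, for example to print the generated control plane node names after an apply:

```hcl
output "cp_node_names" {
  # with count = 3 and name = "k8s-cp-vm-${count.index + 1}-cl-02" this
  # expands to ["k8s-cp-vm-1-cl-02", "k8s-cp-vm-2-cl-02", "k8s-cp-vm-3-cl-02"]
  value = proxmox_virtual_environment_vm.k8s-cp-vms-cl02[*].name
}
```

The same indexing syntax (resource[0].name, resource[1].name, ...) is what the inventory-generation resource further down relies on.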

Then I need to configure a proxmox_virtual_environment_file resource called ubuntu_cloud_init to trigger a cloud-init task to configure some basic initial config on my VMs.

This is the content of this file:

resource "proxmox_virtual_environment_file" "ubuntu_cloud_init" {
  content_type = "snippets"
  datastore_id = "local"
  node_name    = "proxmox-02"

  source_raw {
    data = <<EOF
#cloud-config
chpasswd:
  list: |
    ubuntu:ubuntu    
  expire: false
packages:
  - qemu-guest-agent
timezone: Europe/Oslo
users:
  - default
  - name: ubuntu
    groups: sudo
    shell: /bin/bash
    ssh-authorized-keys:
      - ${trimspace("ssh-rsa <sha356>")}
    sudo: ALL=(ALL) NOPASSWD:ALL
power_state:
    delay: now
    mode: reboot
    message: Rebooting after cloud-init completion
    condition: true
EOF

    file_name = ""
  }
}

This will be used by all my OpenTofu-deployed VMs in this task. It will install the qemu-guest-agent, configure a timezone, and copy my Linux jumphost public key so I can log in to the VMs using SSH without a password. Then there is a quick reboot after all tasks complete to get the qemu-agent to report correctly to Proxmox. This will be uploaded to my Proxmox server as a snippet. For snippets to work, one needs to enable this under Datacenter -> Storage:


Now I could go ahead and perform tofu init in this folder, but I am still not ready. I need Kubernetes to be deployed as well, remember. For that I will use Kubespray.

Install and configure Kubespray

If you have not heard about Kubespray before, head over here for more info. I have been following the guides from Kubespray to get it working, and it is very well documented.

Kubespray is a really powerful way to deploy Kubernetes with a lot of options and customizations. I just want to highlight Kubespray in this section as it does a really great job of automating Kubernetes deployment.

A quote from the Kubespray pages:


Kubespray vs Kops

Kubespray runs on bare metal and most clouds, using Ansible as its substrate for provisioning and orchestration. Kops performs the provisioning and orchestration itself, and as such is less flexible in deployment platforms. For people with familiarity with Ansible, existing Ansible deployments or the desire to run a Kubernetes cluster across multiple platforms, Kubespray is a good choice. Kops, however, is more tightly integrated with the unique features of the clouds it supports so it could be a better choice if you know that you will only be using one platform for the foreseeable future.

Kubespray vs Kubeadm

Kubeadm provides domain Knowledge of Kubernetes clusters' life cycle management, including self-hosted layouts, dynamic discovery services and so on. Had it belonged to the new operators world, it may have been named a "Kubernetes cluster operator". Kubespray however, does generic configuration management tasks from the "OS operators" ansible world, plus some initial K8s clustering (with networking plugins included) and control plane bootstrapping.

Kubespray has started using kubeadm internally for cluster creation since v2.3 in order to consume life cycle management domain knowledge from it and offload generic OS configuration things from it, which hopefully benefits both sides.

On my Linux jumphost I have done the following to prepare for Kubespray:

# clone the Kubespray repo
andreasm@linuxmgmt01:~/terraform/proxmox$ git clone
# kubespray folder created under my proxmox folder
andreasm@linuxmgmt01:~/terraform/proxmox$ ls
k8s-cluster-02  kubespray  proxmox-images
# Python dependencies - these I am not sure are needed but I installed them anyway.
sudo apt install software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa # used this repo to get a newer Python 3 than the default repo
sudo apt update
sudo apt install python3.12 python3-pip python3-virtualenv

Don't try to install Ansible system-wide, it will not work. Follow the Kubespray documentation and install Ansible in a Python virtual environment.

andreasm@linuxmgmt01:~/terraform/proxmox$ VENVDIR=kubespray-venv
andreasm@linuxmgmt01:~/terraform/proxmox$ KUBESPRAYDIR=kubespray
andreasm@linuxmgmt01:~/terraform/proxmox$ python3 -m venv $VENVDIR
andreasm@linuxmgmt01:~/terraform/proxmox$ source $VENVDIR/bin/activate
(kubespray-venv) andreasm@linuxmgmt01:~/terraform/proxmox$ cd $KUBESPRAYDIR
(kubespray-venv) andreasm@linuxmgmt01:~/terraform/proxmox/kubespray$ pip install -U -r requirements.txt

To exit out of the Python virtual environment, type deactivate.

Now I have these subfolders under my proxmox folder:

andreasm@linuxmgmt01:~/terraform/proxmox$ tree -L 1

├── k8s-cluster-02
├── kubespray
├── kubespray-venv
├── proxmox-images

4 directories, 0 files

The next thing I need to do is copy a sample folder inside the kubespray folder to a folder that reflects the Kubernetes cluster name I am planning to deploy.

andreasm@linuxmgmt01:~/terraform/proxmox/kubespray$ cp -rfp inventory/sample inventory/k8s-cluster-02

This sample folder contains a couple of files and directories. The file I am interested in right now is the inventory.ini file. I need to populate this file with the nodes, including the control plane nodes, that will form my Kubernetes cluster. This is how it looks by default:

# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=  # ip= etcd_member_name=etcd1
# node2 ansible_host=  # ip= etcd_member_name=etcd2
# node3 ansible_host=  # ip= etcd_member_name=etcd3
# node4 ansible_host=  # ip= etcd_member_name=etcd4
# node5 ansible_host=  # ip= etcd_member_name=etcd5
# node6 ansible_host=  # ip= etcd_member_name=etcd6

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
# node1
# node2
# node3

[etcd]
# node1
# node2
# node3

[kube_node]
# node2
# node3
# node4
# node5
# node6

This is just a sample, but it is nice to know how it should be defined. I will create an OpenTofu task to create this inventory file for me with the corresponding VMs I am deploying in the same task/project. This will autopopulate the inventory.ini file with all the necessary information, so I can just go ahead and delete the inventory.ini file that was copied from the sample folder to my new k8s-cluster-02 folder. The other folders and files contain several variables/settings I can adjust to my liking, like different CNIs, Kubernetes version etc. I will not cover these here; head over to the official Kubespray docs for more info on that. These are the files/folders:

andreasm@linuxmgmt01:~/terraform/proxmox/kubespray/inventory/k8s-cluster-02/group_vars$ tree -L 1

├── all
├── etcd.yml
└── k8s_cluster

2 directories, 1 file

I will deploy my Kubernetes cluster "stock" so these files are left untouched for now, except the inventory.ini file.

Now Kubespray is ready to execute Ansible to configure my Kubernetes cluster as soon as my OpenTofu-provisioned VMs have been deployed. The last step is to configure a new .tf file to create this inventory.ini file and kick off the Ansible command to fire up Kubespray.

Configure OpenTofu to kick off Kubespray

The last .tf file in my fully automated Kubernetes installation contains this:

# Generate inventory file
resource "local_file" "ansible_inventory" {
  filename = "/home/andreasm/terraform/proxmox/kubespray/inventory/k8s-cluster-02/inventory.ini"
  content = <<-EOF
  [all]
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0].name} ansible_host=${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0].ipv4_addresses[1][0]}
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1].name} ansible_host=${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1].ipv4_addresses[1][0]}
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2].name} ansible_host=${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2].ipv4_addresses[1][0]}
  ${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0].name} ansible_host=${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0].ipv4_addresses[1][0]}
  ${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1].name} ansible_host=${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1].ipv4_addresses[1][0]}
  ${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2].name} ansible_host=${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2].ipv4_addresses[1][0]}

  [kube_control_plane]
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0].name}
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1].name}
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2].name}

  [etcd]
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0].name}
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1].name}
  ${proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2].name}

  [kube_node]
  ${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0].name}
  ${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1].name}
  ${proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2].name}

  [k8s_cluster:children]
  kube_node
  kube_control_plane
  EOF
}

resource "null_resource" "ansible_command" {
  provisioner "local-exec" {
    command     = "./ > k8s-cluster-02/ansible_output.log 2>&1"
    interpreter = ["/bin/bash", "-c"]
    working_dir = "/home/andreasm/terraform/proxmox"
  }
  depends_on = [proxmox_virtual_environment_vm.k8s-cp-vms-cl02, proxmox_virtual_environment_vm.k8s-worker-vms-cl02, local_file.ansible_inventory]
}
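
As an aside, the six near-identical host lines above could also be generated with HCL's `%{ for }` template directives, which keeps the inventory in sync if I later change the VM count. A hedged sketch of just the [all] section, assuming the same two resources (not something this setup uses today):

```hcl
  content = <<-EOF
  [all]
  %{ for vm in concat(proxmox_virtual_environment_vm.k8s-cp-vms-cl02, proxmox_virtual_environment_vm.k8s-worker-vms-cl02) ~}
  ${vm.name} ansible_host=${vm.ipv4_addresses[1][0]}
  %{ endfor ~}
  EOF
```

The same pattern would apply to the group sections below it.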

This will create the inventory.ini in the /proxmox/kubespray/inventory/k8s-cluster-02/ folder for Kubespray to use. Then it will fire a command referring to a bash script (more on that further down) that triggers the Ansible command:

1ansible-playbook -i inventory/k8s-cluster-02/inventory.ini --become --become-user=root cluster.yml -u ubuntu

For this to work I had to create a bash script that activates the Kubespray Python virtual environment:

#!/bin/bash

# Set working directory
WORKING_DIR="/home/andreasm/terraform/proxmox"
cd "$WORKING_DIR" || exit

# Set virtual environment variables
VENVDIR="$WORKING_DIR/kubespray-venv"
KUBESPRAYDIR="$WORKING_DIR/kubespray"

# Activate virtual environment
source "$VENVDIR/bin/activate" || exit

# Change to Kubespray directory
cd "$KUBESPRAYDIR" || exit

# Run Ansible playbook
ansible-playbook -i inventory/k8s-cluster-02/inventory.ini --become --become-user=root cluster.yml -u ubuntu

I will create and save this file in my proxmox folder.
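One easy thing to trip on here: the local-exec provisioner invokes the script with a ./ prefix, so the file must carry the executable bit before tofu apply runs. A quick sketch (run-kubespray.sh is a made-up placeholder, since the real filename is elided above):

```shell
# Stand-in for the real wrapper script (its actual name is elided in this post)
printf '#!/bin/bash\necho "kubespray wrapper"\n' > run-kubespray.sh

# Without the executable bit, local-exec fails with "Permission denied"
chmod +x run-kubespray.sh
test -x run-kubespray.sh && echo "script is executable"
```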

This is now the content of my proxmox folder:

andreasm@linuxmgmt01:~/terraform/proxmox$ tree -L 1
.
├── k8s-cluster-02
├── kubespray
├── kubespray-venv
└── proxmox-images

4 directories, 1 file

And this is the content in the k8s-cluster-02 folder where I have my OpenTofu tasks defined:

andreasm@linuxmgmt01:~/terraform/proxmox/k8s-cluster-02$ tree -L 1
0 directories, 6 files

It's time to put it all to the test

A fully automated provisioning of Kubernetes on Proxmox using OpenTofu, Ansible and Kubespray

To get this show started, it is the same procedure as with my cloud image task: run tofu init, create a plan, check the output, and finally approve it. So let's see how this goes.

From within my ./proxmox/k8s-cluster-02 folder:

andreasm@linuxmgmt01:~/terraform/proxmox/k8s-cluster-02$ tofu init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/null...
- Finding bpg/proxmox versions matching "0.43.2"...
- Finding latest version of hashicorp/local...
- Installing hashicorp/null v3.2.2...
- Installed hashicorp/null v3.2.2 (signed, key ID 0C0AF313E5FD9F80)
- Installing bpg/proxmox v0.43.2...
- Installed bpg/proxmox v0.43.2 (signed, key ID DAA1958557A27403)
- Installing hashicorp/local v2.4.1...
- Installed hashicorp/local v2.4.1 (signed, key ID 0C0AF313E5FD9F80)

Providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:

OpenTofu has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that OpenTofu can guarantee to make the same selections by default when
you run "tofu init" in the future.

OpenTofu has been successfully initialized!

You may now begin working with OpenTofu. Try running "tofu plan" to see
any changes that are required for your infrastructure. All OpenTofu commands
should now work.

If you ever set or change modules or backend configuration for OpenTofu,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Then create the plan:

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # local_file.ansible_inventory will be created
  + resource "local_file" "ansible_inventory" {
      + content              = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "/home/andreasm/terraform/proxmox/kubespray/inventory/k8s-cluster-02/inventory.ini"
      + id                   = (known after apply)
    }

  # null_resource.ansible_command will be created
  + resource "null_resource" "ansible_command" {
      + id = (known after apply)
    }

  # proxmox_virtual_environment_file.ubuntu_cloud_init will be created
  + resource "proxmox_virtual_environment_file" "ubuntu_cloud_init" {
      + content_type           = "snippets"
      + datastore_id           = "local"
      + file_modification_date = (known after apply)
      + file_name              = (known after apply)
      + file_size              = (known after apply)
      + file_tag               = (known after apply)
      + id                     = (known after apply)
      + node_name              = "proxmox-02"
      + overwrite              = true
      + timeout_upload         = 1800
    }

Plan: 9 to add, 0 to change, 0 to destroy.

Saved the plan to: plan

To perform exactly these actions, run the following command to apply:
    tofu apply "plan"

It looks good to me, let's apply it:

andreasm@linuxmgmt01:~/terraform/proxmox/k8s-cluster-02$ tofu apply plan
proxmox_virtual_environment_file.ubuntu_cloud_init: Creating...
proxmox_virtual_environment_file.ubuntu_cloud_init: Creation complete after 1s [id=local:snippets/]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2]: Creating...
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1]: Creating...
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0]: Creating...
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Creating...
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0]: Creating...
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Creating...
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2]: Still creating... [10s elapsed]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1]: Still creating... [10s elapsed]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0]: Still creating... [10s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still creating... [10s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0]: Still creating... [10s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still creating... [10s elapsed]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2]: Still creating... [1m30s elapsed]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1]: Still creating... [1m30s elapsed]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0]: Still creating... [1m30s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still creating... [1m30s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0]: Still creating... [1m30s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still creating... [1m30s elapsed]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0]: Creation complete after 1m32s [id=1005]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Creation complete after 1m32s [id=1002]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1]: Creation complete after 1m33s [id=1006]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2]: Creation complete after 1m33s [id=1007]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Creation complete after 1m34s [id=1003]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0]: Creation complete after 1m37s [id=1001]

1m37s to create my 6 VMs, ready for the Kubernetes installation.

And in Proxmox I have 6 new VMs with the correct names, vm_ids, tags and all:


Let me check whether the inventory.ini file has been created correctly:

andreasm@linuxmgmt01:~/terraform/proxmox/kubespray/inventory/k8s-cluster-02$ cat inventory.ini
[all]
k8s-cp-vm-1-cl-02 ansible_host=
k8s-cp-vm-2-cl-02 ansible_host=
k8s-cp-vm-3-cl-02 ansible_host=
k8s-node-vm-1-cl-02 ansible_host=
k8s-node-vm-2-cl-02 ansible_host=
k8s-node-vm-3-cl-02 ansible_host=
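
The snapshot above shows empty ansible_host values, which can happen if the file is inspected before the final version has been written. A small sanity check I could run before letting Kubespray loose (a sketch of my own, assuming the inventory layout generated above):

```shell
# Stand-in inventory mimicking the incomplete snapshot above
cat > inventory.ini <<'EOF'
[all]
k8s-cp-vm-1-cl-02 ansible_host=
k8s-node-vm-1-cl-02 ansible_host=10.0.0.5
EOF

# Print every host whose ansible_host= value is empty
awk '/ansible_host=$/ { print "missing IP for: " $1 }' inventory.ini
# → missing IP for: k8s-cp-vm-1-cl-02
```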

Now for the last task, the ansible_command:

local_file.ansible_inventory: Creating...
local_file.ansible_inventory: Creation complete after 0s [id=1d19a8be76746178f26336defc9ce96c6e82a791]
null_resource.ansible_command: Creating...
null_resource.ansible_command: Provisioning with 'local-exec'...
null_resource.ansible_command (local-exec): Executing: ["/bin/bash" "-c" "./ > k8s-cluster-02/ansible_output.log 2>&1"]
null_resource.ansible_command: Still creating... [10s elapsed]
null_resource.ansible_command: Still creating... [20s elapsed]
null_resource.ansible_command: Still creating... [30s elapsed]
null_resource.ansible_command: Still creating... [40s elapsed]
null_resource.ansible_command: Still creating... [50s elapsed]
null_resource.ansible_command: Still creating... [16m50s elapsed]
null_resource.ansible_command: Still creating... [17m0s elapsed]
null_resource.ansible_command: Creation complete after 17m4s [id=5215143035945962122]

Apply complete! Resources: 9 added, 0 changed, 0 destroyed.

That took 17m4s to deploy a fully working Kubernetes cluster, with no intervention from me at all.

From the ansible_output.log:

PLAY RECAP *********************************************************************
k8s-cp-vm-1-cl-02          : ok=691  changed=139  unreachable=0    failed=0    skipped=1080 rescued=0    ignored=6
k8s-cp-vm-2-cl-02          : ok=646  changed=131  unreachable=0    failed=0    skipped=1050 rescued=0    ignored=3
k8s-cp-vm-3-cl-02          : ok=648  changed=132  unreachable=0    failed=0    skipped=1048 rescued=0    ignored=3
k8s-node-vm-1-cl-02        : ok=553  changed=93   unreachable=0    failed=0    skipped=840  rescued=0    ignored=1
k8s-node-vm-2-cl-02        : ok=509  changed=90   unreachable=0    failed=0    skipped=739  rescued=0    ignored=1
k8s-node-vm-3-cl-02        : ok=509  changed=90   unreachable=0    failed=0    skipped=739  rescued=0    ignored=1
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Monday 15 January 2024  21:59:02 +0000 (0:00:00.288)       0:17:01.376 ********

download : Download_file | Download item ------------------------------- 71.29s
download : Download_file | Download item ------------------------------- 36.04s
kubernetes/kubeadm : Join to cluster ----------------------------------- 19.06s
container-engine/containerd : Download_file | Download item ------------ 16.73s
download : Download_container | Download image if required ------------- 15.65s
container-engine/runc : Download_file | Download item ------------------ 15.53s
container-engine/nerdctl : Download_file | Download item --------------- 14.75s
container-engine/crictl : Download_file | Download item ---------------- 14.66s
download : Download_container | Download image if required ------------- 13.75s
container-engine/crictl : Extract_file | Unpacking archive ------------- 13.35s
kubernetes/control-plane : Kubeadm | Initialize first master ----------- 10.16s
container-engine/nerdctl : Extract_file | Unpacking archive ------------- 9.99s
kubernetes/preinstall : Install packages requirements ------------------- 9.93s
kubernetes/control-plane : Joining control plane node to the cluster. --- 9.21s
container-engine/runc : Download_file | Validate mirrors ---------------- 9.20s
container-engine/crictl : Download_file | Validate mirrors -------------- 9.15s
container-engine/nerdctl : Download_file | Validate mirrors ------------- 9.12s
container-engine/containerd : Download_file | Validate mirrors ---------- 9.06s
container-engine/containerd : Containerd | Unpack containerd archive ---- 8.99s
download : Download_container | Download image if required -------------- 8.19s

I guess I am gonna spin up a couple of Kubernetes clusters going forward 😄

Should something fail, I have configured the ansible_command to pipe its output to a file called ansible_output.log that I can check; it captures the whole Kubespray operation. A simple test is also to check whether Ansible can reach the VMs (after they have been deployed, of course). This command needs to be run from within the Python virtual environment again:

# activate the environment
andreasm@linuxmgmt01:~/terraform/proxmox$ source $VENVDIR/bin/activate
# ping all nodes
(kubespray-venv) andreasm@linuxmgmt01:~/terraform/proxmox$ ansible -i inventory/k8s-cluster-02/inventory.ini -m ping all -u ubuntu

Now let me log into one of the Kubernetes control plane nodes and check whether there is a Kubernetes cluster ready or not:

root@k8s-cp-vm-1-cl-02:/home/ubuntu# kubectl get nodes
NAME                  STATUS   ROLES           AGE     VERSION
k8s-cp-vm-1-cl-02     Ready    control-plane   6m13s   v1.28.5
k8s-cp-vm-2-cl-02     Ready    control-plane   6m1s    v1.28.5
k8s-cp-vm-3-cl-02     Ready    control-plane   5m57s   v1.28.5
k8s-node-vm-1-cl-02   Ready    <none>          5m24s   v1.28.5
k8s-node-vm-2-cl-02   Ready    <none>          5m23s   v1.28.5
k8s-node-vm-3-cl-02   Ready    <none>          5m19s   v1.28.5
root@k8s-cp-vm-1-cl-02:/home/ubuntu# kubectl get pods -A
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-648dffd99-lr82q     1/1     Running   0          4m25s
kube-system   calico-node-b25gf                           1/1     Running   0          4m52s
kube-system   calico-node-h7lpr                           1/1     Running   0          4m52s
kube-system   calico-node-jdkb9                           1/1     Running   0          4m52s
kube-system   calico-node-vsgqq                           1/1     Running   0          4m52s
kube-system   calico-node-w6vrc                           1/1     Running   0          4m52s
kube-system   calico-node-x95mh                           1/1     Running   0          4m52s
kube-system   coredns-77f7cc69db-29t6b                    1/1     Running   0          4m13s
kube-system   coredns-77f7cc69db-2ph9d                    1/1     Running   0          4m10s
kube-system   dns-autoscaler-8576bb9f5b-8bzqp             1/1     Running   0          4m11s
kube-system   kube-apiserver-k8s-cp-vm-1-cl-02            1/1     Running   1          6m14s
kube-system   kube-apiserver-k8s-cp-vm-2-cl-02            1/1     Running   1          5m55s
kube-system   kube-apiserver-k8s-cp-vm-3-cl-02            1/1     Running   1          6m1s
kube-system   kube-controller-manager-k8s-cp-vm-1-cl-02   1/1     Running   2          6m16s
kube-system   kube-controller-manager-k8s-cp-vm-2-cl-02   1/1     Running   2          5m56s
kube-system   kube-controller-manager-k8s-cp-vm-3-cl-02   1/1     Running   2          6m1s
kube-system   kube-proxy-6jt89                            1/1     Running   0          5m22s
kube-system   kube-proxy-9w5f2                            1/1     Running   0          5m22s
kube-system   kube-proxy-k7l9g                            1/1     Running   0          5m22s
kube-system   kube-proxy-p7wqt                            1/1     Running   0          5m22s
kube-system   kube-proxy-qfmg5                            1/1     Running   0          5m22s
kube-system   kube-proxy-v6tcn                            1/1     Running   0          5m22s
kube-system   kube-scheduler-k8s-cp-vm-1-cl-02            1/1     Running   1          6m14s
kube-system   kube-scheduler-k8s-cp-vm-2-cl-02            1/1     Running   1          5m56s
kube-system   kube-scheduler-k8s-cp-vm-3-cl-02            1/1     Running   1          6m
kube-system   nginx-proxy-k8s-node-vm-1-cl-02             1/1     Running   0          5m25s
kube-system   nginx-proxy-k8s-node-vm-2-cl-02             1/1     Running   0          5m20s
kube-system   nginx-proxy-k8s-node-vm-3-cl-02             1/1     Running   0          5m20s
kube-system   nodelocaldns-4z72v                          1/1     Running   0          4m9s
kube-system   nodelocaldns-8wv4j                          1/1     Running   0          4m9s
kube-system   nodelocaldns-kl6fw                          1/1     Running   0          4m9s
kube-system   nodelocaldns-pqxpj                          1/1     Running   0          4m9s
kube-system   nodelocaldns-qmrq8                          1/1     Running   0          4m9s
kube-system   nodelocaldns-vqnbd                          1/1     Running   0          4m9s

Nice nice nice. Now I can go ahead and grab the kubeconfig, configure my load balancer to load balance the Kubernetes API, and start using my newly provisioned, disposable cluster.
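
Grabbing the kubeconfig can be as simple as copying it off the first control plane node and pointing it at the load balancer. A hedged sketch of the rewrite step (the node IP, the VIP 10.10.10.10 and the output filename are made-up examples; /etc/kubernetes/admin.conf is the standard kubeadm location):

```shell
# Pretend we already copied the admin kubeconfig off the first control plane
# node, e.g. ssh ubuntu@k8s-cp-vm-1-cl-02 'sudo cat /etc/kubernetes/admin.conf'
# > k8s-cluster-02.kubeconfig; a stub stands in for it here.
cat > k8s-cluster-02.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://192.168.100.101:6443
  name: cluster.local
EOF

# Point the server entry at a load balancer VIP instead of a single node
sed -i 's|server: https://.*:6443|server: https://10.10.10.10:6443|' k8s-cluster-02.kubeconfig
grep 'server:' k8s-cluster-02.kubeconfig  # now points at the VIP
```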

Cleaning up

When I am done with my Kubernetes cluster, it is just a matter of running tofu destroy and everything is neatly cleaned up. I have not configured any persistent storage yet, so if I have deployed apps on this cluster and then decide to delete it, any data I want to keep will be deleted along with it. It is wise to look into how to keep certain data around even after the cluster is deleted.

There are two ways I can clean up. If I want to keep the nodes running but reset the Kubernetes installation, I can execute the following command:

# from the Kubespray environment
(kubespray-venv) andreasm@linuxmgmt01:~/terraform/proxmox/kubespray$ ansible-playbook -i inventory/k8s-cluster-02/inventory.ini --become --become-user=root reset.yml -u ubuntu

Or a full wipe, including the deployed VMs:

# Inside the k8s-cluster-02 OpenTofu project folder
andreasm@linuxmgmt01:~/terraform/proxmox/k8s-cluster-02$ tofu destroy
Do you really want to destroy all resources?
  OpenTofu will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

null_resource.ansible_command: Destroying... [id=5215143035945962122]
null_resource.ansible_command: Destruction complete after 0s
local_file.ansible_inventory: Destroying... [id=1d19a8be76746178f26336defc9ce96c6e82a791]
local_file.ansible_inventory: Destruction complete after 0s
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Destroying... [id=1002]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Destroying... [id=1003]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0]: Destroying... [id=1005]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2]: Destroying... [id=1007]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1]: Destroying... [id=1006]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0]: Destroying... [id=1001]
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[0]: Destruction complete after 7s
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[2]: Destruction complete after 7s
proxmox_virtual_environment_vm.k8s-worker-vms-cl02[1]: Destruction complete after 9s
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 10s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 10s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0]: Still destroying... [id=1001, 10s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[0]: Destruction complete after 12s
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 20s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 20s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 30s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 30s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 40s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 40s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 50s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 50s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 1m0s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 1m0s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 1m10s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 1m10s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 1m20s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 1m20s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Still destroying... [id=1002, 1m30s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Still destroying... [id=1003, 1m30s elapsed]
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[1]: Destruction complete after 1m37s
proxmox_virtual_environment_vm.k8s-cp-vms-cl02[2]: Destruction complete after 1m37s
proxmox_virtual_environment_file.ubuntu_cloud_init: Destroying... [id=local:snippets/]
proxmox_virtual_environment_file.ubuntu_cloud_init: Destruction complete after 0s

Destroy complete! Resources: 9 destroyed.

Now all the VMs, and everything else that was deployed with OpenTofu, are gone again. It takes anywhere from a couple of seconds to a few minutes, depending on whether I have configured the provider to do a shutdown or a stop of the VMs on tofu destroy. I am currently using shutdown.


There is so much more that can be adjusted, improved and explored in general, but this post is just how I have done it for now, solving a task I have long been waiting to find time for. I may do a follow-up post with some improvements once I have used it for a while and gained more experience, including improving the OpenTofu configs and the Kubespray setup.