Optimized cloud-init templates on Proxmox
There are already quite a few resources out there demonstrating how to create a cloud-init enabled VM template in Proxmox. Here are the ones I mainly used to discover the topic, and which I suggest you go through because what follows depends on them:
- Proxmox FAQ, wiki and mostly identical official documentation on Cloud-Init support
- Perfect Proxmox Template with Cloud Image and Cloud Init (YouTube, Techno Tim 2022-03)
What those and many similar resources give are step-by-step instructions, split into as many commands as needed to facilitate understanding. What I haven’t seen so far, though, is an all-in-one, optimized command to do the same thing, so here’s my contribution to the field:
qm create 1000 \
--name debian12-cloud \
--description "Debian 12 cloud-init template" \
--template 1 \
--ostype l26 \
--machine q35 \
--cpu host \
--cores 2 \
--memory 4096 \
--balloon 512 \
--scsihw virtio-scsi-single \
--scsi0 local-lvm:0,import-from=/path/to/debian-12-generic-amd64.qcow2,discard=on,iothread=1,ssd=1 \
--net0 virtio,bridge=vmbr0 \
--tablet 0 \
--rng0 source=/dev/urandom \
--boot order=scsi0 \
--vga serial0 --serial0 socket \
--ide2 local-lvm:cloudinit \
--ciuser myuser \
--cipassword changeme \
--sshkey /path/to/your-public.key \
--ciupgrade 0 \
--ipconfig0 ip=dhcp
The same thing as a one-liner for the latest Ubuntu:
qm create 2000 --name ubuntu-server-2404-cloud --description "Ubuntu Server 24.04 cloud-init template" --template 1 --ostype l26 --machine q35 --cpu host --cores 2 --memory 4096 --balloon 512 --scsihw virtio-scsi-single --scsi0 local-lvm:0,import-from=/path/to/ubuntu-24.04-server-cloudimg-amd64.img,discard=on,iothread=1,ssd=1 --net0 virtio,bridge=vmbr0 --rng0 source=/dev/urandom --tablet 0 --boot order=scsi0 --vga serial0 --serial0 socket --ide2 local-lvm:cloudinit --ciuser myuser --cipassword changeme --sshkey /path/to/your-public.key --ciupgrade 0 --ipconfig0 ip=dhcp
Note that you cannot copy-paste those blindly: you have to adjust a few parameters to your local environment (especially the VMID, the disk image path and the SSH key path).
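In case you haven’t downloaded a cloud image yet, here is how I’d fetch the two used in this post (URLs correct at the time of writing; double-check them on the distros’ download pages):
wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2
wget https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img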
What follows is a description of the relevant options, excluding self-evident ones (name, description, cores, memory…), as well as some possible variations you might want.
Generic options
- qm create 1000: the Proxmox CLI command to create a VM. Replace 1000 with the VMID of your choice (must be ≥ 100)
- --template 1: directly convert the created VM to a template (if you prefer converting as a separate step, see below)
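If you’d rather fine-tune the VM configuration before freezing it, you can leave --template 1 out of the create command and do the conversion afterwards; qm has a dedicated subcommand for that:
qm template 1000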
Performance-related options
- --ostype l26: hint to optimize for a Linux 2.x-6.x based system
- --machine q35: use a modern machine type
- --cpu host: pass through the host CPU type to make all its features available in the VM
- --balloon 512: when set to a lower value than memory, enables dynamic memory allocation
- --scsihw virtio-scsi-single: the most performant SCSI controller, especially when combined with iothread=1 (see next point)
- --scsi0 local-lvm:0,import-from=/path/to/debian-12-generic-amd64.qcow2,iothread=1,discard=on,ssd=1:
  - import (i.e. copy) the referenced cloud image as the VM disk
  - replace /path/to/ with the full path to where you downloaded the cloud image (which you should have already done by now if you have followed the resources linked above 😉)
  - configure it with performance (IO Thread), thin-provisioning (Discard) and SSD-optimized settings
  - remove discard=on and/or ssd=1 if not applicable to your storage
- --tablet 0: one of the lesser-known performance tips but one of the most important! Disables the USB tablet device, which is only needed when connecting via the integrated console to guests with a GUI (e.g. Ubuntu Desktop). Reported to have a big performance impact.
- --rng0 source=/dev/urandom (optional): provides a virtual hardware random number generator to get entropy from the host system (can speed things up during the first boot; a quick way to verify it follows below)
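As a quick sanity check (my own habit, nothing Proxmox requires), you can confirm from inside a booted guest that the virtio RNG is actually detected:
# inside the guest; typically lists something like virtio_rng.0
cat /sys/devices/virtual/misc/hw_random/rng_available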
Up to here were performance-related options applicable to all VM templates, not only cloud-init ones. Here come the cloud-init-specific bits:
Cloud-init related options
- --boot order=scsi0: apparently speeds up booting for cloud-init enabled images
- --vga serial0 --serial0 socket: creates the serial connection expected by most cloud images in their “native” cloud environments; also useful to monitor and troubleshoot the boot process via the Proxmox console
  - verified to work with Debian 12 and Ubuntu 24.04 server cloud images; remove if causing issues with the image you’re using
- --ide2 local-lvm:cloudinit: creates the required cloud-init “CD-ROM” drive
- --ciuser myuser (optional): provides a custom username for the user account provisioned by cloud-init; without it the account name will depend on the distribution’s default (debian for Debian, ubuntu for Ubuntu… check your cloud image docs about this)
- --cipassword changeme (optional): generally neither needed nor recommended, but useful for quickly making sure everything is all right the first few times over; afterwards use an SSH key instead
- --sshkey /path/to/your-public.key (required if not setting a password): the authorized SSH public key that will be placed in the user account created by cloud-init
- --ciupgrade 0 (optional): disables the automatic package upgrade during first boot; useful to speed things up during testing. Afterwards remove it/set it to 1 (the default) if you want “always fresh” clones (which is probably a smart choice)
- --ipconfig0 ip=dhcp: cloud-init in Proxmox doesn’t have a network configuration by default, so use this to let DHCP handle it, or use something like --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1 for a static config. This can also be done later for each VM individually (see the example below); just don’t leave it empty, otherwise your VMs won’t have any network by default.
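As an example of the per-VM approach, switching an already-cloned VM (VMID 150, the one used later in this post) to a static address would look like:
qm set 150 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1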
If you already have custom cloud-init snippets, specify them via --cicustom "user=<volume>,network=<volume>,meta=<volume>", e.g. --cicustom "user=local:snippets/user-config.yaml".
If you do, make sure you have the equivalents of the Proxmox cloud-init options above set in your custom config, because using a custom user snippet overrides the complete user config set in the GUI or config! Yeah I know, it sucks and it’s not documented, boo Proxmox.
Fortunately, as mentioned in the docs, the GUI config can be dumped to serve as a base for custom configs:
qm cloudinit dump 1000 user
qm cloudinit dump 1000 network
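A convenient pattern (assuming the default local storage with the snippets content type enabled, i.e. files living under /var/lib/vz/snippets) is to redirect those dumps straight into snippet files and edit from there:
qm cloudinit dump 1000 user > /var/lib/vz/snippets/user-config.yaml
qm cloudinit dump 1000 network > /var/lib/vz/snippets/network-config.yaml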
Note
Unlike Proxmox’s implementation, when using --cicustom and in the absence of a network configuration, the image’s cloud-init process will generate one itself, issuing a DHCP request on a “first” network interface. So if DHCP is what you want, you don’t have to supply a "network=..." snippet besides the "user=..." one.
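If you do want a static address together with --cicustom, the network snippet is just a regular cloud-init network configuration, referenced via --cicustom "network=local:snippets/network-config.yaml". A minimal version 2 sketch (interface name, addresses and DNS server are placeholders to adapt) could look like:
version: 2
ethernets:
  eth0:
    addresses: [10.0.10.123/24]
    gateway4: 10.0.10.1
    nameservers:
      addresses: [10.0.10.1]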
Post-creation steps
The only thing that cannot be done in the same step (due to using import-from) is resizing the disk image. I personally prefer doing it on the cloned VMs rather than on the template itself, to reduce cloning time and adjust the size to each VM’s needs, but there is also a case to be made for doing it on the template directly (see the sketch below).
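One way to go the template route while keeping the single-command creation is to grow the downloaded image itself with qemu-img before importing it (a common pattern, though pick a size that suits you):
qemu-img resize /path/to/debian-12-generic-amd64.qcow2 15G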
So in my case I first clone the template to a new VM:
qm clone 1000 150 --full --name "debian12-cloud"
Note
Besides being generally recommended for VMs you will keep around, it seems we can only use a full clone when using --scsihw virtio-scsi-single, as without the --full option I get:
Linked clone feature is not supported for drive 'scsi0'
YMMV.
Then expand its disk size:
qm resize 150 scsi0 15G
And then we’re ready to fire up the VM!
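Concretely, that boils down to something like this (the IP is whatever your DHCP server handed out, and myuser matches the --ciuser from above):
qm start 150
# give cloud-init a moment to finish its first boot, then:
ssh myuser@<vm-ip>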
After checking everything works, you may want to stop and destroy this test VM:
qm stop 150
qm destroy 150 --purge --destroy-unreferenced-disks 1
Now you can do the final adjustments to your template (e.g. remove --cipassword, --ciupgrade 0, etc.) and you are ready to rock the cloud-init lifestyle in Proxmox! ☁️🤘🕺
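There’s no need to recreate the template for those adjustments; qm set can edit it in place, e.g.:
qm set 1000 --delete cipassword
qm set 1000 --ciupgrade 1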
UEFI variant
Generally I try to use as modern a stack as is reasonable, because software written in the last few years is more likely to have been tested with it than with a more legacy one.
But I have realized that UEFI is much less commonplace in virtualized environments than on bare metal, making it less tested and, I’d say, slightly less supported overall (case in point: it’s still not the default in QEMU/Proxmox).
However it is easy enough to use it in our templates by adding the following options:
--bios ovmf --efidisk0 local-lvm:0,efitype=4m,size=4M,pre-enrolled-keys=0
The only thing to note is that pre-enrolled-keys=0 disables Secure Boot, which trips up all the distros that don’t want to play the Microsoft game (Arch Linux being a notable one for me). Leave the parameter out or switch its value to 1 for a Secure Boot-enabled template (confirmed working with Ubuntu, for example)!
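These options can also be bolted onto an existing template after the fact; a sketch (note the :1 allocation syntax that qm set expects when creating a new EFI disk):
qm set 1000 --bios ovmf --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0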
The QEMU Guest Agent conundrum
By default, no cloud images I know of come with qemu-guest-agent preinstalled, but it’s pretty useful on Proxmox.
To install it in your cloud images, you basically have two options:
- Install and use libguestfs’ virt-customize on the cloud images themselves, as illustrated in this random blog post I found (a sketch follows after this list)
- Let cloud-init do it during the first boot of each cloned VM using a custom cloud-init snippet: see this SuperUser answer for an example. The required lines to add to your user-config.yaml are:

  #cloud-config
  ...
  package_update: true
  packages:
    - qemu-guest-agent
  runcmd:
    - systemctl enable --now qemu-guest-agent

  Remember that using a custom user snippet overrides the complete user config set in the GUI or config, so those lines must be added to your complete user snippet!
  In this case add --cicustom "user=local:snippets/user-config.yaml" and --agent 1,fstrim_cloned_disks=1 when creating the template (see the docs for details).
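For the first option, the virt-customize invocation boils down to something like this (assuming libguestfs-tools is installed on the machine doing the customizing, and adjusting the image path):
virt-customize -a /path/to/debian-12-generic-amd64.qcow2 --install qemu-guest-agent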
Tips & tricks
- Don’t use Debian’s genericcloud image: its kernel is optimized for Azure & AWS environments and, in my tests, didn’t work with Proxmox. I had started with this one, being fooled by the wording on the download page (“genericcloud: Similar to generic. Should run in any virtualised environment. Is smaller than generic by excluding drivers for physical hardware”), and spent quite a bit of time troubleshooting the VM booting but cloud-init not kicking in, until I eventually tried the generic image, where everything worked perfectly. The Debian wiki actually sets the record straight: “The generic image uses Debian’s standard Linux kernel packages, while the genericcloud image uses the cloud kernel build. The cloud kernel disables a large number of device drivers and primarily targets the Amazon EC2 and Microsoft Azure VM device models. It may be usable in other environments, but for maximum compatibility we recommend using the generic images.” While troubleshooting I’ve seen plenty of other reports of people having issues making the genericcloud image work with Proxmox, while it worked for some others… generic is the reliable, consistent option. ’nough said!
- You can get Proxmox to display .qcow2 images alongside regular .iso files in its GUI by simply suffixing/replacing their extension with .img (like Ubuntu does). It’s a regex issue ¯\_(ツ)_/¯ (see the example below)
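For example, after downloading an image you could simply rename it before uploading it to your ISO storage:
mv debian-12-generic-amd64.qcow2 debian-12-generic-amd64.img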