This is a follow-up to my tale of a system image.

For fun and convenience, I now wanted to have a bootable image of my old laptop.

So at first I thought of making a Live ISO out of it, but didn’t know the name of the process. I tried a few searches like “convert a configured system into ISO” or “create live ISO from running system”, but what I found looked strangely difficult and didn’t seem to really address my need.

Then in the course of writing the aforementioned post, I was reminded of an Archwiki article I had seen some time ago: Moving an existing install into (or out of) a virtual machine. This was exactly what I wanted, as I didn’t need a Live ISO so much as a VM that I could boot on my main machine whenever I wanted to check something, or bask nostalgically in the glow of my old wallpaper.

Looking into it, I stumbled upon presentation slides by Clonezilla and learned that this process had a name: “Physical-to-Virtual”, or P2V for short (here is another article by IBM from 2009 about this topic).

The ultimate OS conservation project: virtualizing to immortality

Indeed, with a VM image of your old install you can easily keep it around forever: with the machine’s specifications written down in an .xml file or a QEMU command-line invocation, you can spin one up at will.

I’ll use this opportunity to simplify my system stack, namely to rid it of both LUKS and LVM and consolidate all the existing logical volumes into one single filesystem.

So how do we go about turning our physical system into a functional VM? Here is a rough outline of the process:

  1. Preparing the VM image
  2. Transferring the data
  3. Making the necessary adjustments
  4. Profit

Note

There are different ways to approach steps 1 & 2 depending on what you are working with, e.g. your target hypervisor, what kind of backup/file transfer program is available to you and whether or not you need to modify the image.

If you need to make any modifications to the partitions/filesystems, forget about Clonezilla/partclone/FSArchiver savepart mode and use a file-level transfer method.

Since this is my case, I’ll describe this approach; but if, say, your physical and virtual machines are connected over the network and no modifications are needed, then Clonezilla remote device cloning could work great.

(Heck, if no modifications are needed you could even just use a loopback file mounted on /home/partimag in the live environment on the physical machine and have Clonezilla make a local clone!)

Here I will target QEMU for the hypervisor as that’s what I use, but only minor adjustments should be required for other hypervisors.

All right, let’s delve into it!

Step 1: Preparing the VM image

Here we will create and prepare the image file which will be used as the VM drive.

In this case I will directly create and work with the QEMU qcow2 format, but I will also illustrate how to do it all with a raw image file, which can later be converted to the desired format if needed.

Creating the file

First we create an image of an appropriate size1.
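
To get an idea of the size required, checking the used space on the source system’s filesystems gives a good lower bound (a quick sketch using GNU df; adjust the paths to your own mount points):

df -h --total / /home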

qcow2 image:

qemu-img create -f qcow2 -o cluster_size=2M,preallocation=falloc p2v.qcow2 15G
The -o cluster_size=2M option is used to increase performance.

Then we make the image available to the system:

sudo modprobe nbd max_part=1
sudo qemu-nbd -c /dev/nbd0 p2v.qcow2

Raw image:

On file systems supporting it (ext4, XFS, Btrfs, FAT – but not exFAT), fallocate is the best choice:

fallocate -l 20G p2v.img

For filesystems without fallocate but with sparse file support, use truncate:

truncate -s 20G p2v.img

If even sparse files are too much to ask, then it’s time to get the disk destroyer out (warning: might take a while):

dd if=/dev/zero of=p2v.img bs=1M count=20480

If you really don’t want to know/choose and have QEMU installed, let it handle the details for you:

qemu-img create p2v.img 20G

Partitioning

Now to partition our image. It is easier to use the same partition table type as your physical machine (MBR or GPT), unless of course you want to convert it.

Warning

All of the following commands must be run with elevated privileges. Be careful with what you execute. Replace /dev/nbd0 with p2v.img (or whatever you named it) if using a raw disk image.

MBR

Here is the simplest way I found to create a single partition with the MBR scheme, aligned on the 1 MiB boundary (with the default partition type for GNU/Linux).

printf ',' | sfdisk /dev/nbd0

Crazy right? If that (rightly) scares you, I’ve got an equivalent, more spelled-out parted one-liner:

parted --script --align optimal /dev/nbd0 mklabel msdos mkpart primary 1 100%

GPT

And here are their GPT equivalents (with sgdisk thrown into the mix):

printf ',' | sfdisk --label gpt /dev/nbd0
sgdisk --largest-new 0 /dev/nbd0
parted --script --align optimal /dev/nbd0 mklabel gpt mkpart $name 1 100%
See the $name in that parted command? That’s because parted requires (yes, requires) that a partition be given a name in GPT mode, even though the specification does not require partition names to be set… So do as you please to christen the first realm of this new digital kingdom!

Note however that if you are planning to use UEFI, a second partition, an EFI System Partition (ESP), is required as well. Here are three more one-liners to create both partitions in one go, with a 512 MiB ESP and the rest going to the second partition (source for the sfdisk example):

printf ',512M,U\n,,' | sfdisk --label gpt /dev/nbd0
sgdisk -n 1:0:+512M -t 1:ef00 -n 2:0:0 /dev/nbd0
parted --script --align optimal /dev/nbd0 mklabel gpt mkpart EFI 0% 513MiB mkpart System 513MiB 100% set 1 esp

Formatting

Before mounting, we need to format our newly created partition(s) with a filesystem. It’s certainly a safe choice to go with the same one as the source system, albeit not strictly required. Ext4 example:

sudo mkfs.ext4 /dev/nbd0p1

If using a raw image file, we first need to use losetup to make the partition(s) available to the system:

sudo losetup --partscan --find --show --nooverlap p2v.img
--> /dev/loop0

From here on replace /dev/nbd0 with /dev/loop0 (or whatever the command returned).

In case of UEFI, the ESP needs to be formatted as FAT (FAT32 recommended):

sudo mkfs.fat -F32 /dev/nbd0p1
sudo mkfs.ext4 /dev/nbd0p2

Mounting

That’s it, now we can finally mount our image. For BIOS (single partition):

sudo mount /dev/nbd0p1 /mnt

For UEFI (two partitions), replace $ESP with the desired mount point:

sudo mount /dev/nbd0p2 /mnt
sudo mkdir -p /mnt/$ESP
sudo mount /dev/nbd0p1 !$

That’s it for the preparations; we are now ready to transfer the source system’s content!

Step 2: Transferring the data

This part depends on how you made your full system backup on the source machine. For completeness’ sake and to create the counterpart to the previous article, I’ll show the restoration process for the various tools presented in the linked section.

partclone

A word of warning concerning partclone: it’s not good at restoring to a device smaller than the source, even if the actual content would fit. Despite offering a “Don’t check device size and free space” (-C, --nocheck) option, my experience matched the web reports saying that restoration fails with a target seek ERROR:Invalid argument. The Clonezilla FAQ even warns against this use case.

Still, here is a quick example to illustrate restoring from a zstd-compressed image:

umount -R /mnt
zstdcat /path/to/partclone.img.zst | partclone.restore -o /dev/nbd0pY

Replace the Y at the end with the appropriate partition number for your use case.

Tip

partclone offers a --restore_raw_file option that aims at “creating special raw file for loop device”. This image wouldn’t have a partition table though so it’s not useful for our use case, but it’s still good to know it’s there 😉️

FSArchiver

I have a soft spot for FSArchiver due to its versatility and very complete feature set. The only real downside I found is that its exclusion rule patterns are less than clear.

Being a file-aware tool instead of a block-aware one like partclone, it has absolutely no issue restoring to a destination smaller than the source (it is even one of the suggested workarounds in the aforementioned Clonezilla FAQ entry!).

If you made a backup using its savepart mode, see the official docs for nice examples illustrating the different restoration cases.
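
For reference, restoring such a partition archive into our freshly prepared partition would look something like this (a hedged sketch, with an illustrative archive path and the first filesystem of the archive selected via id=0):

fsarchiver restfs /path/to/backup.fsa id=0,dest=/dev/nbd0p1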

On the other hand, if like me you used its savedir mode so as to have more flexibility, restoring is as simple as:

fsarchiver --jobs=$(nproc) restdir /path/to/backup.fsa /mnt/

May St IGNUcius bless FSArchiver and its author/team! 😇️

SquashFS

Another very straightforward option:

unsquashfs -force -dest /mnt/ backup.squashfs

Bonus points go to SquashFS compared with FSArchiver and tar because we can very simply access the archived content without having to restore/extract it: a simple mount backup.squashfs /path/to/mount/point suffices to make it browsable (with very good performance) 👍️

Rsync & tar

rsync is just about reversing the source and destination in its invocation, and tar about switching -c, --create for -x, --extract and pointing -C, --directory at the destination:

rsync -aHAXUUSh --info=progress2 --partial /path/to/backup/dir/ /mnt/
tar -x --zstd -p --acls --xattrs --atime-preserve=system -f /path/to/backup.tar.zst -C /mnt/

Alright, now that we have transferred the content of our physical system into the VM image, let’s move to the final adjustments.

Step 3: Adjusting the system config

Now it is time to chroot into our restored environment and make the needed adjustments. This step is very much dependent on your source system configuration and which modifications you wish to implement (if any).

arch-chroot /mnt
systemd-nspawn can replace arch-chroot if not using an Arch-based distro.
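
If you go the systemd-nspawn route, something along these lines should do (a sketch; nspawn takes care of the /dev, /proc and /sys plumbing for you):

sudo systemd-nspawn -D /mnt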

In my case, it amounted to removing/commenting out references to the former LVM/LUKS devices in /etc/{fstab,crypttab} and configuring a single mount:

sed -i 's/^\([^#].*\)/# \1/g' /etc/crypttab
sed -i 's/^\([^#].*\)/# \1/g' /etc/fstab
echo "/dev/vda1  /  ext4  noatime  0  1" >> /etc/fstab
The two sed invocations comment out “active” lines while leaving blank lines alone.

For GPT replace the last line with:

printf "/dev/vda2  /  ext4  noatime  0  1\n/dev/vda1  /  vfat  defaults  0  2\n" >> /etc/fstab

Network-wise, I had nothing to do since NetworkManager was already handling automatic network configuration with DHCP.

I also removed the now unneeded lvm2, encrypt, keyboard and keymap mkinitcpio hooks as well as the i915 module from /etc/mkinitcpio.conf and regenerated the initramfs.
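
For illustration, the trimmed-down /etc/mkinitcpio.conf could end up looking something like the excerpt below (the exact hook list depends on your source system, so treat it as a plausible example rather than a prescription), after which mkinitcpio -P regenerates the initramfs for every installed preset:

# /etc/mkinitcpio.conf (illustrative excerpt after trimming)
MODULES=()
HOOKS=(base udev autodetect modconf block filesystems fsck)

mkinitcpio -P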

Finally, I removed the cryptdevice= kernel command line parameter from /etc/default/grub, installed GRUB in the image file boot sector and regenerated its config.

grub-install --target=i386-pc /dev/nbd0
sed -i '/cryptdev/s/.*/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/' /etc/default/grub
grub-mkconfig -o /boot/grub/grub.cfg
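
Side note: if your source system boots via UEFI instead of BIOS, the grub-install invocation would look more like the following sketch, with /$ESP being the ESP mount point from earlier and --removable installing GRUB to the fallback path so that no NVRAM entry is needed:

grub-install --target=x86_64-efi --efi-directory=/$ESP --removable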

Don’t forget to replace references to nbd0 (or loop0) in /boot/grub/grub.cfg with whatever disk naming you will use (in my case /dev/vda). Note that the partition suffix changes too (/dev/nbd0p1 becomes /dev/vda1), hence the two substitutions:

sed -i -e 's/nbd0p/vda/g' -e 's/nbd0/vda/g' /boot/grub/grub.cfg

Then exit the chroot, unmount the image file and remove the nbd device and module:

exit
sudo umount -R /mnt/
sudo qemu-nbd -d /dev/nbd0
sudo modprobe -r nbd

Or, if using a loop device:

sudo losetup -d /dev/loop0

(Optionally, convert the image to a format suitable for your target hypervisor.)
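
For instance, a raw image can be turned into qcow2 (or into VMDK/VDI for other hypervisors) with qemu-img (illustrative file names):

qemu-img convert -p -f raw -O qcow2 p2v.img p2v.qcow2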

And voilà, you are now ready to try out your brand new virtualized system! Hopefully it will boot on the first try; otherwise, just make the image available again as before, mount it and chroot into it to troubleshoot the issue.
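
As a starting point, a minimal QEMU invocation could look like the following sketch (memory and CPU count are placeholders to adapt; for a UEFI guest you would also point QEMU at OVMF firmware):

qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4G \
    -drive file=p2v.qcow2,if=virtio \
    -nic user,model=virtio-net-pci
The if=virtio part is what makes the disk show up as /dev/vda inside the guest, matching the fstab and GRUB adjustments above.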

Enjoy your nascent hall of past digital abodes! 😉️

Edit (2024-07-09): I’ve just stumbled upon virt-p2v from libguestfs, which is a “GUI interface to convert a physical machine to run as virtual machine on KVM”. It is a companion front-end to virt-v2v, and “comes as an ISO, CD or PXE image that can be booted on physical machines to virtualize those machines”. Looks like a great alternative to try as well!

  1. At a minimum it should be able to accommodate all the data you’re going to transfer from the physical system, plus a little bit of spare space for system maintenance. ↩︎