Proxmox
Proxmox Virtual Environment is open-source hyper-converged infrastructure software: a hosted hypervisor that can run guest operating systems, including Linux and Windows, on x86-64 hardware.
My Proxmox VE cluster has three nodes, all Dell OptiPlexes. This allows for advanced functionality like clustering, replication, and integration with Proxmox Backup Server.
Tips & Tricks
Shrink Virtual Disk on ZFS Volume
Use GParted in the VM to shrink the partitions first (I'd recommend going slightly smaller than the final target for now, e.g. 90 GB if you want 100 GB, to leave a safety margin).
In the PVE console run zfs set volsize=XXXG rpool/data/vm-XXX-disk-X, replacing the X placeholders with the relevant values, then run qm rescan (a worked example follows these steps).
In GParted, ignore the partition table warning (the backup GPT header is no longer at the end of the now-smaller disk), open a terminal, run sudo gdisk /dev/sdX, and enter the following keystrokes in order:
x (enter the expert menu)
e (relocate the backup GPT data structures to the new end of the disk)
p (print the partition table to verify it looks sane)
w (write the changes and exit)
Rescan in GParted and expand the previously shrunk partition to use the remaining disk space.
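As a concrete sketch of the zvol step (hypothetical values: VM ID 100, disk 0, new size 100 GB on the default rpool/data dataset):

# Shrink the zvol backing the virtual disk -- anything beyond 100G is
# destroyed, so the guest partitions must already fit within the new size
zfs set volsize=100G rpool/data/vm-100-disk-0
# Have Proxmox pick up the new size in the VM configuration
qm rescan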
https://t.du9l.com/2023/12/shrinking-the-root-disk-of-a-proxmox-ve-virtual-machine/
Revert clustered node to solo host
systemctl stop pve-cluster    # stop the cluster filesystem service
systemctl stop corosync       # stop cluster communication
pmxcfs -l                     # restart the cluster filesystem in local mode
rm /etc/pve/corosync.conf     # remove the cluster configuration
rm /etc/corosync/*
killall pmxcfs                # stop the local-mode instance
systemctl start pve-cluster   # start the service normally, now standalone
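As a quick sanity check after the restart (my own suggestion, not from the thread below), confirm the service came back up and that /etc/pve is writable again:

systemctl status pve-cluster              # should report active (running)
touch /etc/pve/test && rm /etc/pve/test   # only succeeds once pmxcfs is healthy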
https://www.reddit.com/r/Proxmox/comments/avk2gx/help_cluster_not_ready_no_quorum_500/
Converting Proxmox Legacy ZFS install to UEFI ZFS
After much trial and error, mixing and matching many guides, I struggled but managed to successfully convert my legacy (BIOS) ZFS install using GRUB to a UEFI ZFS install using systemd-boot. After much googling I found that proxmox-boot-tool does not install the EFI files unless you are booted in EFI mode, which means you have to boot the Proxmox ISO in UEFI mode to get the UEFI files installed. Here are the steps I took.
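Before starting, you can confirm which mode the system is currently booted in; checking for /sys/firmware/efi is standard Linux, not Proxmox-specific:

# /sys/firmware/efi only exists when the kernel was booted via UEFI
ls /sys/firmware/efi >/dev/null 2>&1 && echo "UEFI boot" || echo "legacy BIOS boot"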
- Ensure your Proxmox is relatively up to date before proceeding
- Ensure systemd-boot is installed; trust me, this is a pain in the ass to fix if you forget it:
apt-get install systemd-boot
- Boot using a Proxmox VE version 6.4 or newer ISO in UEFI MODE (CSM Disabled)
- Select Install Proxmox VE (Console Debug)
- Exit the first debug shell by typing Ctrl + D or exit. The second debug shell contains all the necessary binaries for the following steps.
- Import the root pool (usually named rpool) with an alternative mountpoint of /mnt:
zpool import -f -R /mnt rpool
- Bind-mount all virtual filesystems needed for running proxmox-boot-tool:
mount -o rbind /proc /mnt/proc
mount -o rbind /sys /mnt/sys
mount -o rbind /dev /mnt/dev
mount -o rbind /run /mnt/run
- Change root into /mnt:
chroot /mnt /bin/bash
- Find the spare FAT32 partitions you can use; in this example they are /dev/sda2, /dev/sdb2, and /dev/sdc2 (see Finding_potential_ESPs in the references).
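The wiki screenshot referenced by the original guide isn't reproduced here; one way (my assumption, any partition lister works) to spot the candidates:

lsblk -o NAME,SIZE,FSTYPE   # the small vfat partitions (typically 512M) on each pool disk are the candidate ESPs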
- Replace ID with your partition identifier (e.g. sda2, sdb2, sdc2) and run the following command. If the partition already contains a filesystem you can append --force, but be careful. Repeat this step for each ESP partition you will create (three in this example):
proxmox-boot-tool format /dev/ID
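With the example partitions named above, that would be:

proxmox-boot-tool format /dev/sda2
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool format /dev/sdc2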
- Run the following command, again replacing ID with your partition identifier(s). Repeat this step for each ESP partition (three in this example):
proxmox-boot-tool init /dev/ID
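Again with the example partitions, followed by a status check to confirm they were registered:

proxmox-boot-tool init /dev/sda2
proxmox-boot-tool init /dev/sdb2
proxmox-boot-tool init /dev/sdc2
proxmox-boot-tool status   # lists the ESPs recorded in /etc/kernel/proxmox-boot-uuids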
You can verify the resulting EFI boot entries with
efibootmgr -v
- Exit the chroot shell (Ctrl + D or exit) and reset the system (for example by pressing Ctrl + Alt + Del).
- You should now be able to boot into your Proxmox installation with CSM disabled via the Linux Boot Manager option in your boot list.
References:
- Repairing_a_System_Stuck_in_the_GRUB_Rescue_Shell (Proxmox VE wiki)
- Finding_potential_ESPs (Proxmox VE wiki)