I'm installing Proxmox on an HP ProLiant DL360p Gen8 server, which cannot boot from NVMe. The NVMe drives are each installed on single-slot M.2 adapters in the rear PCIe slots.
- Install Proxmox as "zfs raid1" (their installer uses wrong terminology) on two NVMe drives. Consider turning compression off. This Proxmox installation will be the actual instance that gets used, but is not currently bootable on this system.
- Re-run the installer and install Proxmox on a flash drive as a single-disk ZFS pool. I used a 16GB stick; I'm not sure whether 8GB would suffice.
- Reboot from the flash drive. This will load GRUB, but throw an error that there are now two pools called `rpool` and one of them was last imported on another system. You'll be dropped into a BusyBox shell.
- Import the NVMe-based pool by ID. Run `zpool import` to list available pools, find the pool with the two NVMe drives, and run `zpool import -f 123456` (substituting that pool's numeric ID).
- `exit` the BusyBox shell, and the system will now boot the Proxmox instance installed in step 1.
- Grab a Proxmox console, either by logging in locally (not ideal) or via the web UI. Run `lsblk` and you should see something like this:
```
# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    1  14.9G  0 disk
├─sda1        8:1    1  1007K  0 part
├─sda2        8:2    1   512M  0 part
└─sda3        8:3    1  13.5G  0 part
sr0          11:0    1  1024M  0 rom
nvme0n1     259:0    0 232.9G  0 disk
├─nvme0n1p1 259:2    0  1007K  0 part
├─nvme0n1p2 259:3    0   512M  0 part
└─nvme0n1p3 259:4    0 232.4G  0 part
nvme1n1     259:1    0 232.9G  0 disk
├─nvme1n1p1 259:5    0  1007K  0 part
├─nvme1n1p2 259:6    0   512M  0 part
└─nvme1n1p3 259:7    0 232.4G  0 part
```
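Back in the BusyBox step, picking the numeric ID out of the `zpool import` listing by eye is error-prone, especially with two pools both named `rpool` on screen. A minimal sketch of pulling the `id:` field out mechanically — the canned listing and the 19-digit ID below are made up for illustration, and you still have to match each ID to the pool whose vdevs are the NVMe drives:

```shell
# Canned `zpool import` listing, for illustration only. In the initramfs
# you would pipe the real output instead:
#   zpool import | awk '$1 == "id:" { print $2 }'
listing='   pool: rpool
     id: 1234567890123456789
  state: ONLINE'

# Each exported pool is described with an "id:" line; grab its value.
pool_id=$(printf '%s\n' "$listing" | awk '$1 == "id:" { print $2 }')
echo "$pool_id"
```

With both `rpool`s exported, the `awk` prints one ID per pool, so read the surrounding `config:` sections to see which one sits on the NVMe drives before feeding an ID to `zpool import -f`.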
- Import-and-rename the zpool on the USB stick, then destroy it: `zpool import rpool deadpool; zpool destroy deadpool`. (Since the NVMe-based `rpool` is already imported, this import of `rpool` is unambiguous, though importing by ID is probably smart, such as `zpool import 654321 deadpool`.)
- Reboot and pray.
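Before running the destroy, it's worth double-checking that `deadpool` really is the pool on the USB stick. A hedged sketch of that guard — the `pool_status` text is canned here, and `sda3` is the USB partition from the `lsblk` listing above; on the live system you would substitute real `zpool status deadpool` output:

```shell
# Canned `zpool status deadpool` output, for illustration. On the live
# system use: pool_status=$(zpool status deadpool)
pool_status='  pool: deadpool
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        deadpool    ONLINE       0     0     0
          sda3      ONLINE       0     0     0'

# Only proceed if the pool's data vdev is the USB stick's partition.
if printf '%s\n' "$pool_status" | grep -q 'sda3'; then
    result="safe to destroy"   # here you would run: zpool destroy deadpool
else
    result="refusing: deadpool is not on the USB stick"
fi
echo "$result"
```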
- Cool, it worked! Check `zpool status`. It probably alerts "Mismatch between pool hostid and system hostid on imported pool", since we're booting the second installation's bootloader with the first installation's zpool. Do the workaround: `zpool set multihost=on rpool; zpool set multihost=off rpool`. If you find yourself back at a BusyBox shell on the next reboot, unable to automatically import the pool, force-import it, then export and reimport it:

```
(initramfs) # zpool import -f rpool
(initramfs) # zpool export rpool
(initramfs) # zpool import rpool
(initramfs) # exit
```
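The warning exists because the hostid stamped into the pool by the first installer differs from the one the running system reports. The system side can at least be inspected directly; `hostid` is standard coreutils, and `zgenhostid` writing `/etc/hostid` is the stock ZFS-on-Linux mechanism (rebuilding the initramfs afterwards is an assumption about where your initrd reads the value from):

```shell
# Print the hostid the running system reports (8 hex digits).
hostid

# If /etc/hostid is missing or stale, zfsutils' zgenhostid can (re)write it:
#   zgenhostid "$(hostid)"
# then rebuild the initramfs so early boot sees the same value:
#   update-initramfs -u
```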
- Confirm `zpool status` is clean. Run a `zpool scrub rpool` if you want.
```
# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:01:52 with 0 errors on Wed Feb  2 13:30:50 2022
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            nvme-eui.0025385791b291a9-part3  ONLINE       0     0     0
            nvme-eui.0025385a9151127d-part3  ONLINE       0     0     0

errors: No known data errors
```
- Use your NVMe-based Proxmox! I'm planning to migrate from the USB-based boot disk to an internal SD card (`dd if=/dev/disk/by-id/usbstick of=/dev/disk/by-id/sdcard bs=1M`) and then lock the SD card so the bootloader can't be overwritten.
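The `dd` clone can be verified byte-for-byte before locking the card. A small demonstration of the clone-and-verify pattern on regular files — the same `dd`/`cmp` invocations apply to the real `/dev/disk/by-id/*` devices, and `conv=fsync` is an addition to make sure writes actually reach the device before you pull it:

```shell
# Stand-in for the USB stick; on real hardware this is the
# /dev/disk/by-id/ path of the boot device.
dd if=/dev/urandom of=src.img bs=1M count=4 2>/dev/null

# Clone, forcing a final fsync so all writes reach the target.
dd if=src.img of=dst.img bs=1M conv=fsync 2>/dev/null

# Verify the copy is identical; cmp is silent and exits 0 on a match.
cmp src.img dst.img && echo "clone verified"
```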