Source (with modifications): https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html
Assumptions:
- Unencrypted
- Non-EFI (legacy BIOS) boot.
- Copying a working image (with zfs-dkms installed) to the new disk.
- Two disks in a machine.
export DISK="/dev/disk/by-id/ata-VBOX_HARDDISK_VB1e5aa92e-0a8f5ded"
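Before wiping anything, confirm that the by-id path really points at the intended device. A minimal sketch (`resolve_disk` is a name introduced here, not from the guide; the `ata-VBOX...` id above is this machine's example):

```shell
# Sanity check: confirm the by-id symlink resolves to the block
# device you expect before running any destructive command.
resolve_disk() {
    # readlink -f follows the symlink chain to the underlying node,
    # e.g. /dev/disk/by-id/ata-... -> /dev/sda
    readlink -f "$1"
}

# On the target machine:
#   resolve_disk "$DISK"
#   lsblk -no NAME,SIZE,MODEL "$(resolve_disk "$DISK")"
```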
Partition goal
- P1 Reserved for emergency boot (1GB)
- P2 ZFS bpool (/boot) (1GB)
- P3 ZFS rpool (/) (rest of disk)
Clean the disk with wipefs:
wipefs -a $DISK
If using an SSD, do a full-disk discard (TRIM/UNMAP):
blkdiscard -f $DISK
Create the partition table (the mklabel/mkpart commands below run at the interactive parted prompt):
parted -a optimal $DISK
mklabel msdos
mkpart primary 2048s 1GiB
mkpart primary 1GiB 2GiB
mkpart primary 2GiB -1
Create the GRUB-compatible boot pool:
zpool create \
-f \
-o ashift=12 \
-o autotrim=on -d \
-o cachefile=/etc/zfs/zpool.cache \
-o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@livelist=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o feature@zpool_checkpoint=enabled \
-O devices=off \
-O acltype=posixacl -O xattr=sa \
-O compression=lz4 \
-O normalization=formD \
-O relatime=on \
-O canmount=off -O mountpoint=none -R /mnt \
bpool ${DISK}-part2
Create the root pool:
zpool create \
-f \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl -O xattr=sa -O dnodesize=auto \
-O compression=lz4 \
-O normalization=formD \
-O relatime=on \
-O canmount=off -O mountpoint=/ -R /mnt \
rpool ${DISK}-part3
Create the datasets:
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=on -o mountpoint=/ rpool/ROOT/debian
zfs mount rpool/ROOT/debian
# This needs to happen AFTER rpool/ROOT/debian is mounted.
zfs create -o canmount=off -o mountpoint=none bpool/BOOT
zfs create -o mountpoint=/boot bpool/BOOT/debian
zfs create rpool/home
zfs create -o mountpoint=/root rpool/home/root
chmod 700 /mnt/root
zfs create -o canmount=off rpool/var
zfs create -o canmount=off rpool/var/lib
zfs create rpool/var/log
zfs create rpool/var/spool
zfs create rpool/var/lib/docker
Verify that nothing is mounted incorrectly. The command below should show each mountpoint above with a different major/minor device number. Compare only the explicit ZFS datasets (e.g., /mnt/var/lib/docker rather than /mnt/var or /mnt/var/lib, since rpool/var and rpool/var/lib are canmount=off and are plain directories on the root dataset).
find /mnt -maxdepth 1 -type d | xargs stat --format '%n %Hd:%Ld'
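The check above can be scripted; a rough sketch (`check_mounts` is a name introduced here) that flags any listed mountpoint still sitting on the same device as the target root, which would mean that dataset did not actually mount:

```shell
# For each expected ZFS mountpoint, compare its device number against
# the target root's; a match means that dataset is NOT actually mounted.
check_mounts() {
    root="$1"; shift
    rootdev=$(stat --format '%Hd:%Ld' "$root")
    for m in "$@"; do
        dev=$(stat --format '%Hd:%Ld' "$root$m")
        [ "$dev" = "$rootdev" ] && echo "NOT MOUNTED: $root$m"
    done
    return 0
}

# Usage against the layout above:
#   check_mounts /mnt /boot /home /root /var/log /var/spool /var/lib/docker
```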
Copy the source disk to the destination (run from the root of the source filesystem):
rsync -avPAX --exclude /dev --exclude /media --exclude /mnt --exclude /proc --exclude /sys . /mnt
Bind and chroot to the new copy:
mkdir -p /mnt/dev /mnt/proc /mnt/sys
mount --make-private --rbind /dev /mnt/dev
mount --make-private --rbind /proc /mnt/proc
mount --make-private --rbind /sys /mnt/sys
chroot /mnt /usr/bin/env DISK=$DISK bash --login
Install any missing ZFS packages:
apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
apt install --yes zfs-initramfs
echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
Install grub (legacy BIOS):
apt install --yes grub-pc
Enable importing of the boot pool:
cat <<EOF >/etc/systemd/system/zfs-import-bpool.service
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -o cachefile=none bpool
# Work-around to preserve zpool cache:
ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
[Install]
WantedBy=zfs-import.target
EOF
Then enable it:
systemctl enable zfs-import-bpool.service
Note from the original document:
Note: For some disk configurations (NVMe?), this service may fail with an error indicating that the bpool cannot be found. If this happens, add -d ${DISK}-part2 to the zpool import command (the original document says part3, but bpool is on partition 2 in this layout).
Configure and install GRUB:
In /etc/default/grub, set:
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
Then
grub-probe /boot (result should be zfs)
update-initramfs -c -k all
update-grub
grub-install $DISK
Fix filesystem mount ordering (yes, this is needed: zfs-mount-generator reads these cache files to create systemd mount units in the right order):
mkdir /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/bpool
touch /etc/zfs/zfs-list.cache/rpool
zed -F &
Verify that zed updated the cache by making sure these files are not empty:
cat /etc/zfs/zfs-list.cache/bpool
cat /etc/zfs/zfs-list.cache/rpool
If either is empty, force a cache update and check again:
zfs set canmount=on bpool/BOOT/debian
zfs set canmount=noauto rpool/ROOT/debian
If they are still empty, stop zed (as below), start zed (as above) and try again.
Once the files have data, stop zed:
fg
Press Ctrl-C.
Fix the paths to eliminate /mnt:
sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
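What that sed does, shown on sample mountpoint values (wrapping it in a function here purely for illustration):

```shell
# zed recorded mountpoints while the pools were imported with -R /mnt;
# strip that prefix so zfs-mount-generator emits correct units at boot.
fix_cache_paths() {
    sed -E "s|/mnt/?|/|"
}

printf '%s\n' '/mnt' '/mnt/boot' '/mnt/var/log' | fix_cache_paths
# → /
# → /boot
# → /var/log
```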
Create swap (source: https://askubuntu.com/questions/228149/zfs-partition-as-swap)
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
mkswap -f /dev/zvol/rpool/swap
echo /dev/zvol/rpool/swap none swap defaults 0 0 >> /etc/fstab
swapon -av
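To confirm the zvol swap came up, /proc/swaps can be checked. A hedged sketch (`swap_active` is a name introduced here; note the kernel lists the underlying /dev/zdN node, not the /dev/zvol symlink):

```shell
# Return success if the given device appears in the swaps table.
# The second argument (table path) exists only so the logic is testable;
# it defaults to the real /proc/swaps.
swap_active() {
    awk -v dev="$1" 'NR > 1 && $1 == dev { found=1 } END { exit !found }' \
        "${2:-/proc/swaps}"
}

# Usage: swap_active "$(readlink -f /dev/zvol/rpool/swap)" && echo active
```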
Stop the annoying zpool status compatibility warning:
zpool get all bpool | grep feature@ |
awk '($3 ~ "active|enabled") { print $2 }' |
cut -d@ -f2 | sort | tee /etc/zfs/grub2.compatibility
zpool set compatibility=/etc/zfs/grub2.compatibility bpool
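The text processing in that pipeline can be sanity-checked against canned `zpool get` output (the sample lines below are illustrative, not a full feature list; `extract_features` is a name introduced here):

```shell
# Pull feature names whose VALUE column is active or enabled.
# zpool get column layout: NAME PROPERTY VALUE SOURCE
extract_features() {
    grep 'feature@' |
    awk '($3 ~ "active|enabled") { print $2 }' |
    cut -d@ -f2 | sort
}

printf '%s\n' \
  'bpool  feature@lz4_compress  active    local' \
  'bpool  feature@hole_birth    enabled   local' \
  'bpool  feature@encryption    disabled  local' | extract_features
# → hole_birth
# → lz4_compress
```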