Basically I have a mini VPS with mfsbsd running, with real disk passthrough and console access, just like a KVM, so I can install as usual. I can even test my installation directly by booting from it in the same way! Then when it works I just boot the server normally (i.e. directly into FreeBSD), and if I ever b0rk something up I boot the Linux rescue image and run mfsbsd again!
Qemu provides an emulated NIC to the VM. So if the physical NIC in the host
needs a different driver, the NIC name in the VM will differ from what it
will be when running FreeBSD on the hardware.
The Qemu NIC will appear as em0.
In my case, however, the physical NIC in the machine uses a different driver
and appears as igb0 when running FreeBSD on the hardware.
The Hetzner Debian-based rescue system will give you a minimal description of the NIC
in the machine when you ssh into it. Make a note of that. If it's Intel, you can
put an entry for igb0 in addition to em0 in your /etc/rc.conf;
when you then boot and ssh into the machine you will see which one was used,
and you can update your /etc/rc.conf accordingly.
If the NIC is not Intel, you will have to find out which Linux commands in the
Hetzner Debian-based rescue system show more details about your NIC,
then figure out which FreeBSD NIC driver is correct for that one,
and edit your /etc/rc.conf accordingly.
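As a rough aid, here is a sketch of mapping the Linux driver name (as reported by `ethtool -i <iface>` in the rescue system) to the FreeBSD interface name you'd expect. The helper function and its table are my own illustration, covering only a few common drivers, and are by no means exhaustive:

```shell
# Hypothetical helper: given the Linux kernel driver name reported by
# `ethtool -i <iface>`, print the FreeBSD interface name you would expect.
linux_to_freebsd_nic() {
  case "$1" in
    e1000|e1000e) echo "em0"     ;;  # Intel PRO/1000 family -> em(4)
    igb)          echo "igb0"    ;;  # Intel I210/I350 etc.  -> igb(4)
    ixgbe)        echo "ix0"     ;;  # Intel 10GbE           -> ix(4)
    r8169)        echo "re0"     ;;  # Realtek               -> re(4)
    *)            echo "unknown" ;;
  esac
}

linux_to_freebsd_nic igb
```

For anything not in such a table, searching the FreeBSD hardware notes for the PCI ID from `lspci -nn` is the reliable route.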
For reference, here is what the complete /etc/rc.conf from one of my Hetzner
servers looks like currently:
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="de2"
# Used when booting in Qemu
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"
ifconfig_extif_ipv6="inet6 2a01:4f9:5a:16cb:876a:bce7:b3c8:118a prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"
local_unbound_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
wireguard_enable="YES"
wireguard_interfaces="wg0"
jail_enable="YES"
Moment of truth
Reboot the host machine. If all goes well, you'll be able to ssh into it and find
a running FreeBSD system :D
For many (most?) purposes, the standard install described above is sufficient.
It's straightforward, and easy to fix if/when something breaks.
The standard install described above, however, does not encrypt any part of the system,
not even the home directories of users. And while you can add individual encrypted datasets
to your ZFS pool even with a standard install, you will not be able to turn on encryption
for any of the ZFS datasets that the installer created.
Wouldn't it be nice if we could reduce the amount of data that is kept unencrypted at rest at least a bit?
One of the motivations of the custom install described here is to do exactly that.
Defining our goals
For my server there are some specific things I am interested in achieving:
Keep as much of the system as possible encrypted at rest. With data encrypted at rest, and the keys to decrypt
that data kept separate, we can recycle the hard drives in the future without needing to overwrite
the disks first. This is desirable for multiple reasons:
Big disks take a long time to fully overwrite, especially when you do one pass of writing zeros
followed by one or more passes of random data to cover the disks completely.
Hardware failures can leave us unable to overwrite the data fully, or even partially,
meaning that safe disposal would hinge on being able to physically destroy the drives thoroughly enough.
The base system should be possible to throw away and set up again quickly and easily.
Corollary: None of the system directory trees should be included in backups.
Not even /usr/home as a whole. We'll get back to this.
Anything that is important should live in jails, with their own ZFS datasets.
This way, we can back up as well as restore or rollback to past versions of those "things"
mostly independently of the host system itself.
Initial install
We will start off with a standard install, because I have not been able to get
EFI boot working when I partition manually and copy the boot loader files myself.
This will form the basis for our "outer" base system.
We will use this one to boot the server into a state where
we can ssh into it to unlock our remaining datasets,
from which we can then reboot into our "inner" base system.
We have 15 disks total, which we plan to add as three vdevs in a single pool.
Initially we will set things up with one vdev having 5 of the disks.
Run
bsdinstall
Performing the install:
For hostname I choose stage4, because the normal boot process itself has three stages, and this will be our fourth stage of booting, sort of.
At the partitioning step we do guided root on ZFS, and we select:
Pool type/disks to consist of a single vdev raidz3 with 5 disks (we'll manually add the other vdevs later)
Partition scheme "GPT (UEFI)"
At the user creation step, after you've created a password for root, create a user that has "boot" as part of its name, to distinguish it from the kind of user you normally make on your servers. For example, I usually name my user "erikn", but here I name it "erikboot". When asked whether to add the user to any additional groups, make sure to add it to the wheel group.
Keep ssh selected as a service to run.
For all other steps make whatever choices you'd normally make according to your preference.
Finish initial steps
Export the zpool and then power off the VM.
zpool export zroot
poweroff
Check that it works so far
Now boot the VM again but without the mfsbsd media.
zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 81.8T 2.42G 81.8T - - 0% 0% 1.00x ONLINE -
zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 986M 32.5T 153K /zroot
zroot/ROOT 984M 32.5T 153K none
zroot/ROOT/default 983M 32.5T 983M /
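A quick sanity check on those numbers (my own back-of-the-envelope sketch, using integer math in tenths of a TB): a 5-disk raidz3 vdev spends 3 of its 5 disks on parity, so usable space is roughly 2/5 of raw capacity, which is why AVAIL is about 32.5T while SIZE is 81.8T:

```shell
# raidz3 keeps 3 parity disks per vdev; with 5 disks per vdev,
# roughly 2/5 of raw capacity is usable (before metadata overhead).
raw=818                   # 81.8 T from `zpool list`, in tenths of a TB
usable=$((raw * 2 / 5))   # 327 -> ~32.7 T, close to the 32.5 T AVAIL shown
echo "expected usable: ~$((usable / 10)).$((usable % 10)) T"
```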
Reservation
Create a dataset that will reserve 20% of the capacity of the pool,
as per recommendation from Michael W Lucas in the book FreeBSD Mastery: Advanced ZFS.
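The command for creating the reservation dataset itself isn't shown above; here is a sketch of what it could look like, assuming 20% of the pool's usable capacity (the 6.50T figure in the later zfs list output is 20% of 32.5T). The use of refreservation and the dataset name zroot/reservation are my assumptions, the latter matching the later output:

```shell
# 20% of 32.5 T usable is 6.5 T (integer math, in hundredths of a TB):
avail=3250            # 32.5 T, in hundredths
res=$((avail / 5))    # 650 -> 6.50 T
echo "reservation: $((res / 100)).$((res % 100)) T"

# The actual dataset (never mounted, it just holds space in reserve):
# doas zfs create -o mountpoint=none -o refreservation=6.5T zroot/reservation
```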
Then create the encrypted dataset that will hold our inner system:
doas zfs create -o mountpoint=none -o encryption=on -o keyformat=passphrase zroot/IROOT
Enter new passphrase:
Re-enter new passphrase:
doas zfs create -o mountpoint=none zroot/IROOT/default
zfs list -o name,used,avail,refer,mountpoint,encryption,keyformat
NAME USED AVAIL REFER MOUNTPOINT ENCRYPTION KEYFORMAT
zroot 6.50T 26.0T 153K /zroot off none
zroot/IROOT 586K 26.0T 293K none aes-256-gcm passphrase
zroot/IROOT/default 293K 26.0T 293K none aes-256-gcm passphrase
zroot/ROOT 984M 26.0T 153K none off none
zroot/ROOT/default 983M 26.0T 983M / off none
zroot/reservation 6.50T 32.5T 153K none off none
doas zfs set -u mountpoint=/mnt zroot/IROOT/default
doas zfs mount zroot/IROOT/default
mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
/dev/gpt/efiboot0 on /boot/efi (msdosfs, local)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/IROOT/default on /mnt (zfs, local, noatime, nfsv4acls)
Install "inner"
doas bsdinstall
Choose hostname as inner.
On the partitioning step, choose "Shell" ("Open a shell and partition by hand"). We've already done the partitioning and mounted the target so proceed to exit.
exit
The installer will now extract the system.
After it finishes, exit the installer and have a look at the extracted files.
ls -al /mnt
drwxr-xr-x 19 root wheel 24 May 1 03:11 .
drwxr-xr-x 20 root wheel 25 May 1 02:45 ..
-rw-r--r-- 2 root wheel 1011 Nov 10 09:11 .cshrc
-rw-r--r-- 2 root wheel 495 Nov 10 09:11 .profile
-r--r--r-- 1 root wheel 6109 Nov 10 09:49 COPYRIGHT
drwxr-xr-x 2 root wheel 49 Nov 10 09:11 bin
drwxr-xr-x 14 root wheel 70 May 1 03:11 boot
dr-xr-xr-x 2 root wheel 3 May 1 03:10 dev
-rw------- 1 root wheel 4096 May 1 03:11 entropy
drwxr-xr-x 30 root wheel 107 May 1 03:11 etc
drwxr-xr-x 3 root wheel 3 May 1 03:11 home
drwxr-xr-x 4 root wheel 78 Nov 10 09:17 lib
drwxr-xr-x 3 root wheel 5 Nov 10 09:11 libexec
drwxr-xr-x 2 root wheel 2 Nov 10 08:48 media
drwxr-xr-x 2 root wheel 2 Nov 10 08:48 mnt
drwxr-xr-x 2 root wheel 2 Nov 10 08:48 net
dr-xr-xr-x 2 root wheel 2 Nov 10 08:48 proc
drwxr-xr-x 2 root wheel 150 Nov 10 09:15 rescue
drwxr-x--- 2 root wheel 7 Nov 10 09:49 root
drwxr-xr-x 2 root wheel 150 Nov 10 09:44 sbin
lrwxr-xr-x 1 root wheel 11 Nov 10 08:48 sys -> usr/src/sys
drwxrwxrwt 2 root wheel 2 Nov 10 08:48 tmp
drwxr-xr-x 14 root wheel 14 Nov 10 10:02 usr
drwxr-xr-x 24 root wheel 24 Nov 10 08:48 var
Give the inner system the same hostid as the outer one, so that zpool import will not think the pool has been used by a different system.
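A sketch of doing that, assuming the inner tree is still mounted at /mnt; on FreeBSD the host ID lives in /etc/hostid, and the real commands are the two commented ones (the rest is a self-contained demonstration of the same copy-and-verify step using temp files):

```shell
# On the real system:
#   doas cp /etc/hostid /mnt/etc/hostid
#   cmp /etc/hostid /mnt/etc/hostid && echo "hostids match"

# Self-contained demonstration, with temp files standing in for the two paths:
outer_hostid=$(mktemp); inner_hostid=$(mktemp)
printf '%s\n' 'f2c5a6b0-0000-0000-0000-000000000000' > "$outer_hostid"  # example ID
cp "$outer_hostid" "$inner_hostid"
cmp -s "$outer_hostid" "$inner_hostid" && echo "hostids match"
```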
The outer system's current /etc/rc.conf, for comparison:
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="stage4"
# Used when booting in Qemu
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"
ifconfig_extif_ipv6="inet6 2f00::ba22 prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"
local_unbound_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
doas vim /mnt/etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="inner"
# Used when booting in Qemu
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"
ifconfig_extif_ipv6="inet6 2f00::1279:9d43 prefixlen 80"
ipv6_defaultrouter="fe80::1%extif"
local_unbound_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
wireguard_enable="YES"
wireguard_interfaces="wg0"
jail_enable="YES"
Power off the VM, and then power it on and ssh into it again.
Then, unset the mountpoint for the inner system
doas zfs set mountpoint=none zroot/IROOT/default
Decrypt it
doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':
And attempt to reboot into it
doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r
If you're watching on VNC you'll see that it says
Trying to mount root from zfs:zroot/IROOT/default []...
and after a little bit of time you should see that it gives the login prompt with the hostname of the inner system
FreeBSD/amd64 (inner) (ttyv0)
login:
If you now try to ssh into it, you'll be met with an expected warning that the host identification has changed.
Accept the new key (and keep in mind that the outer system's key will likewise trigger this warning later, until we do something about that).
Disallow password login over ssh by setting KbdInteractiveAuthentication to no in /etc/ssh/sshd_config on the inner system.
KbdInteractiveAuthentication no
Moment of truth
Reboot the host machine, and you should be able to ssh into the outer system.
https://hackmd.gfuzz.de/s/Qsk14kc3i