

@umpirsky
Last active September 13, 2024 22:26
Install Ubuntu on RAID 0 and UEFI/GPT system
# http://askubuntu.com/questions/505446/how-to-install-ubuntu-14-04-with-raid-1-using-desktop-installer
# http://askubuntu.com/questions/660023/how-to-install-ubuntu-14-04-64-bit-with-a-dual-boot-raid-1-partition-on-an-uefi
sudo -s
apt-get -y install mdadm
apt-get -y install grub-efi-amd64
sgdisk -z /dev/sda # zap any existing GPT/MBR data on both disks
sgdisk -z /dev/sdb
sgdisk -n 1:0:+100M -t 1:ef00 -c 1:"EFI System" /dev/sda # ef00 = EFI System partition
sgdisk -n 2:0:+8G -t 2:fd00 -c 2:"Linux RAID" /dev/sda # fd00 = Linux RAID; this 8G partition becomes the swap array
sgdisk -n 3:0:0 -t 3:fd00 -c 3:"Linux RAID" /dev/sda # rest of the disk becomes the root array
sgdisk /dev/sda -R /dev/sdb -G # replicate the partition table to /dev/sdb and randomize its GUIDs
mkfs.fat -F 32 /dev/sda1
mkdir /tmp/sda1
mount /dev/sda1 /tmp/sda1
mkdir /tmp/sda1/EFI
umount /dev/sda1
mdadm --create /dev/md0 --level=0 --raid-disks=2 /dev/sd[ab]2 # RAID 0 array for swap
mdadm --create /dev/md1 --level=0 --raid-disks=2 /dev/sd[ab]3 # RAID 0 array for /
sgdisk -z /dev/md0
sgdisk -z /dev/md1
sgdisk -N 1 -t 1:8200 -c 1:"Linux swap" /dev/md0
sgdisk -N 1 -t 1:8300 -c 1:"Linux filesystem" /dev/md1
ubiquity -b # run the installer without a bootloader; choose "Something else", set md0p1 to swap and md1p1 to ext4 mounted at /, finish with "Continue testing"
mount /dev/md1p1 /mnt
mount -o bind /dev /mnt/dev
mount -o bind /dev/pts /mnt/dev/pts
mount -o bind /sys /mnt/sys
mount -o bind /proc /mnt/proc
cat /etc/resolv.conf >> /mnt/etc/resolv.conf
chroot /mnt
nano /etc/grub.d/10_linux
# change quick_boot and quiet_boot to 0
apt-get install -y grub-efi-amd64
apt-get install -y mdadm
nano /etc/mdadm/mdadm.conf
# remove metadata and name
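# for example (the UUIDs below are illustrative; yours will differ), an ARRAY line such as
#   ARRAY /dev/md0 metadata=1.2 UUID=52e40799:093a47f3:c346f169:1b2ba10d name=ubuntu:0
# is trimmed down to
#   ARRAY /dev/md0 UUID=52e40799:093a47f3:c346f169:1b2ba10d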
update-grub
mount /dev/sda1 /boot/efi
grub-install --boot-directory=/boot --bootloader-id=Ubuntu --target=x86_64-efi --efi-directory=/boot/efi --recheck
update-grub
umount /dev/sda1
dd if=/dev/sda1 of=/dev/sdb1 # clone the EFI partition to the second disk
efibootmgr -c -g -d /dev/sdb -p 1 -L "Ubuntu #2" -l '\EFI\Ubuntu\grubx64.efi'
exit # from chroot
exit # from sudo -s
reboot
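
# A quick post-reboot sanity check (a sketch, not part of the original steps) to
# confirm both arrays assembled and that swap and / landed where expected:
cat /proc/mdstat                                # md0 and md1 should show as active raid0
lsblk -o NAME,TYPE,MOUNTPOINT /dev/sda /dev/sdb
swapon --show                                   # swap should be on md0p1
findmnt /                                       # / should be on /dev/md1p1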
@tombatron

Worked like a champ!

I did have to change sda and sdb to nvme0n1 and nvme1n1. But other than that, the install went off without a hitch.
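
For reference, a rough sketch of how the first few commands change with NVMe names (assuming the drives show up as nvme0n1 and nvme1n1; partition references gain a "p"):

sgdisk -z /dev/nvme0n1
sgdisk -z /dev/nvme1n1
sgdisk -n 1:0:+100M -t 1:ef00 -c 1:"EFI System" /dev/nvme0n1
sgdisk /dev/nvme0n1 -R /dev/nvme1n1 -G
mkfs.fat -F 32 /dev/nvme0n1p1
mdadm --create /dev/md0 --level=0 --raid-disks=2 /dev/nvme[01]n1p2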

@mecworks

BTW, you shouldn't swap onto a RAID partition. That adds a lot of overhead that slows down the RAID, and you don't need redundancy on swap. The best way to use two or more disks for swap in this situation is to set both partitions to type swap, then in /etc/fstab give them the same priority. Linux will automatically stripe swap partitions set to the same priority, giving you a roughly 2x performance boost in your swap R/W access.
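
A minimal /etc/fstab sketch of that setup (placeholder UUIDs; one plain swap partition per disk, with equal pri= values so the kernel stripes across them):

# swap partition on the first disk (placeholder UUID)
UUID=aaaaaaaa-0000-0000-0000-000000000001  none  swap  sw,pri=10  0  0
# swap partition on the second disk (placeholder UUID)
UUID=aaaaaaaa-0000-0000-0000-000000000002  none  swap  sw,pri=10  0  0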

@fabienengels

> BTW, you shouldn't swap onto a RAID partition. That adds a lot of overhead that slows down the RAID, and you don't need redundancy on swap. The best way to use two or more disks for swap in this situation is to set both partitions to type swap, then in /etc/fstab give them the same priority. Linux will automatically stripe swap partitions set to the same priority, giving you a roughly 2x performance boost in your swap R/W access.

But if one of the disks crashes, the swap will be corrupted and may lead to a system crash.

@fabienengels

Why partition /dev/md* instead of formatting them directly?

@ssybesma

Would this be correct for a RAID 0 using THREE NVMe drives? I was not able to upload a WordPad file, sorry. I had to do a screenshot.

See attached PNG as it includes details in colored highlights where I replaced or added information. Unfortunately, not editable.


@ssybesma

In my quest to get this done on three NVMe drives I discovered and corrected two mistakes and am about to correct a third.

1st mistake:
On the first two mdadm command lines I failed to change --raid-disks from 2 to 3, since I'm using three NVMe drives, not two.

2nd mistake:
The command to replicate the partition table and randomly assign GUIDs has to be run as two separate command lines for three drives. You can only replicate to one drive per command, or the mdadm command will fail later on and claim there are not enough disks to combine into the array.
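
In practice that means one replication command per target drive, i.e. the two lines used in the full command set below:

sgdisk /dev/nvme0n1 -R /dev/nvme1n1 -G
sgdisk /dev/nvme0n1 -R /dev/nvme2n1 -G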

3rd mistake:
When I got into Ubiquity I thought I should try 'Something else' but couldn't figure it out, so I ended up going the other route, which broke the 2nd through 5th mounting steps afterwards -- so I have to start from total scratch a 2nd time.

Fortunately, while visiting this forum again just now, I saw this from a previous poster:
When running the installer (ubiquity -b) and you get to the partitioning, choose "Something Else"
- Set the partition ending in md0 to "SWAP"
- Set the partition ending in md1 to "/" and format it
So now I think I'll be able to get past all the mount commands after Ubuntu is installed. I keep getting further with each attempt.
After I get this working, I'm going to re-write this with some added detail to help newbies like myself who have no idea what they are doing.

This is one of the toughest and most ambitious things I've tried to do with Linux so far but I'm determined to get there and see if this works.

@ssybesma

ssybesma commented Sep 18, 2020

NOTE: These instructions below will produce the quirks in the three NVMe devices' FIRST partitions and in the Dell BIOS F12 boot menu that I detail in later posts, which I'm still trying to figure out how to resolve so that this ends up being a completely perfected setup for THREE devices. SO, if you know what you're doing, these instructions will need just a tiny bit of modification to apply those fixes.

I finally got this working. It's pretty kludgy about 2/3 of the way down; sorry for the formatting. Please comment on what you think can be improved or cleaned up. I will next post my lsblk and gparted screenshots of all 5 devices so people can explain what may possibly be missing (less likely), what is unneeded (more likely), or what can be changed to make this even better. THIS IS NOT A PROPERLY FORMATTED SCRIPT. Use it by copying and pasting the commands until a working script is developed.

MOD FOR THREE 2TB NVME DRIVES IN RAID-0 (STRIPED) ARRAY -- WORK IN PROGRESS -- COMMENTS INVITED -- MINOR REFINEMENTS NEEDED BUT THIS IS WORKING -- I'M ON IT AT THIS POSTING -- CREATED WITH UBUNTU 20.04.1 ON DELL PRECISION 3630 TOWER USING INTEL CORE I7-8700 3.2GHZ 6 CORE PROCESSOR AND 128GB OF RAM.

sudo -s
apt-get -y install mdadm
apt-get -y install grub-efi-amd64
sgdisk -z /dev/nvme0n1
sgdisk -z /dev/nvme1n1
sgdisk -z /dev/nvme2n1
sgdisk -n 1:0:+100M -t 1:ef00 -c 1:"EFI System" /dev/nvme0n1
sgdisk -n 2:0:+8G -t 2:fd00 -c 2:"Linux RAID" /dev/nvme0n1
sgdisk -n 3:0:0 -t 3:fd00 -c 3:"Linux RAID" /dev/nvme0n1
sgdisk /dev/nvme0n1 -R /dev/nvme1n1 -G ### REPLICATES TO 2ND NVME AND RANDOMIZES GUID -- APPARENTLY CAN'T BE COMBINED W/ LINE BELOW; TWO VARIANTS FAILED TO COMBINE; NOT IMPORTANT?
sgdisk /dev/nvme0n1 -R /dev/nvme2n1 -G ### REPLICATES TO 3RD NVME AND RANDOMIZES GUID -- APPARENTLY CAN'T BE COMBINED W/ LINE ABOVE; TWO VARIANTS FAILED TO COMBINE; NOT IMPORTANT?
mkfs.fat -F 32 /dev/nvme0n1p1
mkdir /tmp/nvme0n1p1
mount /dev/nvme0n1p1 /tmp/nvme0n1p1
mkdir /tmp/nvme0n1p1/EFI
umount /dev/nvme0n1p1
mdadm --create /dev/md0 --level=0 --raid-disks=3 /dev/nvme[012]n1p2 ### ERROR RE: MORE DISKS ATTEMPTED (3) THAN ACTUALLY FOUND (2) IF ABOVE SGDISK LINES COMBINED
mdadm --create /dev/md1 --level=0 --raid-disks=3 /dev/nvme[012]n1p3 ### ERROR RE: MORE DISKS ATTEMPTED (3) THAN ACTUALLY FOUND (2) IF ABOVE SGDISK LINES COMBINED
sgdisk -z /dev/md0
sgdisk -z /dev/md1
sgdisk -N 1 -t 1:8200 -c 1:"Linux swap" /dev/md0
sgdisk -N 1 -t 1:8300 -c 1:"Linux filesystem" /dev/md1
ubiquity -b ### USE "SOMETHING ELSE"; SEE NOTES BELOW; MD1P1 WILL MOUNT ON NEXT LINE
mount /dev/md1p1 /mnt
mount -B /dev /mnt/dev
mount -B /dev/pts /mnt/dev/pts
mount -B /sys /mnt/sys
mount -B /proc /mnt/proc
cat /etc/resolv.conf >> /mnt/etc/resolv.conf
chroot /mnt
nano /etc/grub.d/10_linux ### change quick_boot and quiet_boot to 0
apt-get install -y grub-efi-amd64
apt-get install -y mdadm
nano /etc/mdadm/mdadm.conf ### NOT FOUND at first; found under /etc but EMPTY; FIX: "apt install mdadm", then "mdadm --examine", then this nano command (NOW it works); makes the next step possible
### NOW remove metadata and name from each ARRAY line, e.g. remove:
###   metadata=1.2 UUID=52e40799:093a47f3:c346f169:1b2ba10d name=ubuntu:0
###   metadata=1.2 UUID=3ffd4026:eb5bb2e4:c453f46c:5d76bef7 name=ubuntu:1
update-grub ### REPORTS ARRAY HAS NO IDENTITY INFO BUT NO ACTUAL ERROR...INSTRUCTION SAID TO REMOVE METADATA & NAME...SEEMS O.K.
mount /dev/nvme0n1p1 /boot/efi
grub-install --boot-directory=/boot --bootloader-id=Ubuntu --target=x86_64-efi --efi-directory=/boot/efi --recheck ### REPORTS ARRAY HAS NO IDENTITY INFO...SEEMS O.K.
update-grub ### REPORTS ARRAY HAS NO IDENTITY INFO...SEEMS O.K.
umount /dev/nvme0n1p1
dd if=/dev/nvme0n1p1 of=/dev/nvme1n1p1 ### dd only honors the last of= when two are given on one line, so clone the EFI partition to each drive separately; COMPLETED W/ NO ERROR
dd if=/dev/nvme0n1p1 of=/dev/nvme2n1p1 ### (NOT CERTAIN 3RD NVME MUST BE INCLUDED TO MAKE THIS WORK)
efibootmgr -c -g -d /dev/nvme1n1 -p 1 -L "Ubuntu #2" -l '\EFI\Ubuntu\grubx64.efi' ### SHOWS 2ND BOOT OPTION IN F12 BOOT MENU; SEEMS UNNECESSARY; THIS BOOT OPTION (ASSUME IT'S THIS ONE) WORKS, BUT IS THIS EVEN NEEDED?
efibootmgr -c -g -d /dev/nvme2n1 -p 1 -L "Ubuntu #2" -l '\EFI\Ubuntu\grubx64.efi' ### ERROR: efibootmgr: ** Warning ** : Boot0004 has same label Ubuntu #2 ### SHOWS 3RD BOOT OPTION IN F12 BOOT MENU; NON-WORKING DUPLICATE
exit # from chroot
exit # from sudo -s
reboot

NOTES:

1) When running ubiquity -b and you get to the partitioning phase choose "Something else"

2) Set Partition md0p1 to type "swap"

3) Set Partition md1p1 to type ext4 with mount point "/" and format

4) Install Ubuntu and end with 'continue testing' instead of 'reboot'

5) Resume commands with mount of md1p1 above

@ssybesma

ssybesma commented Sep 18, 2020

This is the lsblk and blkid command output after running the commands above to create the working bootable RAID-0 array. One device I'm missing is my Blu-Ray drive (it would be 'sr0') and I have to figure out how to add that, but USB and SD cards do show up when plugged in (none are right now). If you see something that isn't quite right, please help me figure out how to correct it.

root@PRE3630:/home/penguin# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55M 1 loop /snap/core18/1880
loop1 7:1 0 255.6M 1 loop /snap/gnome-3-34-1804/36
loop2 7:2 0 49.8M 1 loop /snap/snap-store/467
loop3 7:3 0 29.9M 1 loop /snap/snapd/8542
loop4 7:4 0 62.1M 1 loop /snap/gtk-common-themes/1506
loop5 7:5 0 55.3M 1 loop /snap/core18/1885
loop6 7:6 0 30.3M 1 loop /snap/snapd/9279
nvme0n1 259:0 0 1.9T 0 disk
├─nvme0n1p1 259:2 0 100M 0 part
├─nvme0n1p2 259:3 0 8G 0 part
│ └─md0 9:0 0 24G 0 raid0
│ └─md0p1 259:13 0 24G 0 part [SWAP]
└─nvme0n1p3 259:4 0 1.9T 0 part
└─md1 9:1 0 5.6T 0 raid0
└─md1p1 259:12 0 5.6T 0 part /
nvme1n1 259:1 0 1.9T 0 disk
├─nvme1n1p1 259:5 0 100M 0 part
├─nvme1n1p2 259:6 0 8G 0 part
│ └─md0 9:0 0 24G 0 raid0
│ └─md0p1 259:13 0 24G 0 part [SWAP]
└─nvme1n1p3 259:7 0 1.9T 0 part
└─md1 9:1 0 5.6T 0 raid0
└─md1p1 259:12 0 5.6T 0 part /
nvme2n1 259:8 0 1.9T 0 disk
├─nvme2n1p1 259:9 0 100M 0 part /boot/efi
├─nvme2n1p2 259:10 0 8G 0 part
│ └─md0 9:0 0 24G 0 raid0
│ └─md0p1 259:13 0 24G 0 part [SWAP]
└─nvme2n1p3 259:11 0 1.9T 0 part
└─md1 9:1 0 5.6T 0 raid0
└─md1p1 259:12 0 5.6T 0 part /
root@PRE3630:/home/penguin# blkid
/dev/md1p1: UUID="e14479a8-75d1-4bca-b707-4b599990e365" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="d5ed3ae8-c00c-48c1-9c75-7029a0914ffe"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/nvme0n1p1: UUID="8C21-5515" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="c12df1ee-4294-4f49-a649-c055e55b6a8a"
/dev/nvme0n1p2: UUID="52e40799-093a-47f3-c346-f1691b2ba10d" UUID_SUB="2a455fef-dffe-3947-123b-0dd242af2c02" LABEL="ubuntu:0" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="3fb711db-059f-4162-9fd2-9b89993de48d"
/dev/nvme0n1p3: UUID="3ffd4026-eb5b-b2e4-c453-f46c5d76bef7" UUID_SUB="218dc5a5-5fa1-b3de-ca0a-e6e5b6cb500e" LABEL="ubuntu:1" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="a538b30c-b906-45c2-8b56-1dbf2ee7e2bb"
/dev/nvme1n1p2: UUID="52e40799-093a-47f3-c346-f1691b2ba10d" UUID_SUB="a91f2e03-9471-d49c-0bcb-52918225f362" LABEL="ubuntu:0" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="b5601a7b-4268-4df3-8a26-01e572e421d5"
/dev/nvme1n1p3: UUID="3ffd4026-eb5b-b2e4-c453-f46c5d76bef7" UUID_SUB="7513ff40-2744-f8a6-ad9d-a776cc824d97" LABEL="ubuntu:1" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="aa3f19fb-2bcd-4ece-869c-4cb2cdfdcddb"
/dev/nvme2n1p1: UUID="8C21-5515" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="4dbbe0fc-060e-4ad9-8c1e-e839e972250a"
/dev/nvme2n1p2: UUID="52e40799-093a-47f3-c346-f1691b2ba10d" UUID_SUB="8daf494d-704a-36dd-0ca2-054c6e13c2a4" LABEL="ubuntu:0" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="6ea1fea8-9093-49bd-93a3-542d0c83ccb7"
/dev/nvme2n1p3: UUID="3ffd4026-eb5b-b2e4-c453-f46c5d76bef7" UUID_SUB="6959c4f7-4b28-3b6a-2b4b-cace7fb05c00" LABEL="ubuntu:1" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="dac99920-70f6-40f0-9242-ecadc04a3773"
/dev/md0p1: UUID="73aaa0fb-d487-49e3-9f8a-f17e71f47e5d" TYPE="swap" PARTLABEL="Linux swap" PARTUUID="0a2298fe-2604-4bfb-ba78-8b9354cf0a2c"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/nvme1n1p1: PARTLABEL="EFI System" PARTUUID="1e38f265-19e3-46cc-96c3-c67f2989b254"
root@PRE3630:/home/penguin#

@ssybesma

ssybesma commented Sep 18, 2020

These are gparted screenshots for all 5 devices that result from running the commands above to create the working bootable RAID-0 array: md0, md1, nvme0n1, nvme1n1 and nvme2n1 (comments on improvements invited). The most obvious inconsistency is the first partition on each of the 3 NVMe devices. I am not certain whether all three have to serve the same function, but the first and third are the same, and that seems odd enough to make me think the third is unnecessary. Given that the second one isn't formatted or mounted and the bootable RAID-0 still works, shouldn't the third just match the second, or should all three be identical? Are the second and third just extra unneeded space? Cosmetically it would look a lot better if they were all identical, perhaps a little smaller. Any help is greatly appreciated; it also helps anyone trying to build a more effective and aesthetic version of this working, bootable Linux software (mdadm) RAID-0 that I'm posting from now.

[gparted screenshots of md0, md1, nvme0n1, nvme1n1 and nvme2n1 attached]

@ssybesma

ssybesma commented Sep 18, 2020

Learn how to build this bootable Linux software RAID-0 array...save yourself $600 on a HighPoint RAID card and buy three SSDs instead!!!
IF YOU BUILD THIS, don't get too serious about putting a lot of stuff on it right away until you have a backup solution in place. Use some of that $600 you didn't spend on the HighPoint RAID card and buy a spinning 8TB backup drive that is offline most of the time and not spinning so you don't wear it out. Unlike with normal HDD partitions, no amount of money paid to any company, or for even the most expensive software can recover your data...simply because it's all mixed up between multiple drives. No known recovery software can deal with RAID-0 when it goes south if data on even one of the drives is unrecoverable...your chance of that multiplies in a RAID-0 array for however many drives are part of it...if you intend to use more than two drives and are concerned about this you can opt for RAID-5 which I'm not interested in pursuing...a backup drive should be sufficient if you're somewhat regular about using it.

@ssybesma

ssybesma commented Sep 18, 2020

By the way, I owe a debt of gratitude to Saša Stamenković of Niš, Serbia for this page, without which I wouldn't be victorious today.

@umpirsky
Author

Thanks for sharing @ssybesma, I'm glad it worked.

@ssybesma

ssybesma commented Sep 18, 2020

You're very welcome. I'm just looking for a way to clean this up, as my partitions are not exactly as good as I think they should be. I also have a Dell BIOS boot menu (F12 menu) that includes "Ubuntu", "Ubuntu 2" and "Ubuntu 2" (again). One of the "Ubuntu 2" entries doesn't even work, and the other seems unnecessary since it does exactly the same thing as "Ubuntu"; I'm not sure why it's needed or why it says "Ubuntu 2" at all. So I'm trying to figure out how to remove those extra entries and then get the first partitions on the three NVMe drives to be identical (if that will still work), or at least modify them so they are closer to the same in terms of filesystem, mount point and lock/unlock status. I have to study the commands that produced those inconsistencies so I can modify this into a cleaner setup.

@ssybesma

ssybesma commented Sep 18, 2020

Here is the picture of my Dell BIOS F12 boot menu. I want to purge both "Ubuntu 2" entries from it but don't know how yet, short of starting from scratch, which I'd rather not do if I can learn how to solve it without the nuclear option:

[photo of the Dell BIOS F12 boot menu attached]

@ssybesma

I should look before I ask questions...I think I solved it here:

https://askubuntu.com/questions/921046/how-to-remove-ubuntu-from-boot-menu-after-deleting-ubuntu-partition-in-windows
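
For anyone else hitting this, the fix is presumably along these lines (a sketch; the 0004 entry number is just an example, so list your own entries first):

efibootmgr -v         # list the current boot entries and their Boot#### numbers
efibootmgr -b 0004 -B # delete the unwanted entry, here Boot0004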


Rebooting and will upload resulting screenshot of boot menu to prove this worked.

@ssybesma

ssybesma commented Sep 18, 2020

It worked...with a side effect. I can't figure out why Boot0013* (UEFI: ADATA SX8100NP) automatically got added to the list; it was not there before, and I don't know whether it's needed or how to suppress/remove it if it's not. That must be why that extra line might have been there for Ubuntu 2, but that's a kludgy solution IMHO:

[photo and screenshot attached]

My effort to remove that seems successful...

[screenshot attached]

...wonder if I'll be able to reboot and get back in or if it will magically appear again...stay tuned!

@ssybesma

ssybesma commented Sep 18, 2020

I rebooted, but the extra entry in the Dell BIOS F12 boot menu magically appeared again exactly as it was before, and the efibootmgr output is exactly the same (it contains Boot0013* as before), so no change. For kicks I decided to try booting from that 2nd entry, and it works (like one of the "Ubuntu 2" entries used to work before). SO, I guess the original instructions simply masked "UEFI: ADATA SX8100NP" (or whatever it would be called on that machine) by renaming it 'Ubuntu 2'. Now I just have to figure out why that 2nd entry is forced to show up. My GUESS is that it's due to the first partitions (named "EFI System") on two of the NVMe devices both being mounted and having a filesystem, which probably causes the 2nd (locked) one on the 3rd NVMe device to produce the Boot0013* entry. The question for the day is whether the 3rd one should ever have been mounted and given a filesystem. I suspect for BOOT SPEED reasons it SHOULD, and that means the 2nd device's 1st partition should ideally be identical to the 1st and 3rd devices' as well, and then we would have to suppress two entries in the Dell BIOS F12 boot menu instead of just one. I must say this boots amazingly FAST after I got rid of that duplicate Ubuntu 2 (which actually caused it to stall): it boots in only a few seconds. I've never seen anything like that before!!!

To summarize, the ideal situation, and the way to correct ALL the oddball issues, is to figure out how to make all three 1st partitions identical "EFI System" partitions (with the original instructions being for only two devices, I THINK that was the original intent) and then how to suppress the Dell BIOS F12 boot menu entries for the 2nd and 3rd NVMe devices. Doing that would achieve perfection with nothing else left to do, and would help my machine boot that tiny bit faster with all three devices contributing to the boot speed.

@ssybesma

ssybesma commented Sep 19, 2020

I think the problems I'm running into, the inconsistent 1st partitions on the NVMe devices and the odd efibootmgr/Dell BIOS F12 boot menu entries, are confined to this section, which somehow has to be straightened out to make using this with 3 NVMe devices not only work right but look right. I haven't figured it out yet and was struggling with it yesterday, when I had to wipe and start over twice. The last two lines seem to have the most to do with this issue.

mount /dev/nvme0n1p1 /boot/efi
grub-install --boot-directory=/boot --bootloader-id=Ubuntu --target=x86_64-efi --efi-directory=/boot/efi --recheck <--IS ENTIRE LINE OPTIMAL?
update-grub
umount /dev/nvme0n1p1

dd if=/dev/nvme0n1p1 of=/dev/nvme1n1p1 <--CLONING 1ST TO 2ND NVME...PART OF ORIGINAL SCRIPT
dd if=/dev/nvme0n1p1 of=/dev/nvme2n1p1 <--CLONING 1ST TO 3RD NVME SEEMS REQUIRED IF MAX BOOT SPEED DESIRED

efibootmgr -c -g -d /dev/nvme1n1 -p 1 -L "Ubuntu #2" -l '\EFI\Ubuntu\grubx64.efi' <--THIS SEEMS TO NEED ADJUSTMENT

The 'dd' line seems like it's begging for the 3rd NVMe to be addressed by cloning, since that's what it does for the 2nd NVMe.
The most suspect line of all is the last one, for efibootmgr. I think that line has to be adjusted (and not added to; I've done nothing but make the problem worse by adding another line or two). At one point I had SIX separate boot entries for Ubuntu, and one of them didn't even work. Is there any reason the efibootmgr line has to be there, or any reason it cannot just mention the 1st NVMe rather than the 2nd one?
So, starting from scratch again.

@ssybesma

ssybesma commented Sep 20, 2020

Since I tinkered with this, things got worse and worse to the point where I can't even get it to boot anymore, and I cannot figure out what broke it. I emailed Rod at Rod's Books with a $25 donation to see if he can help with the grub and efibootmgr issues I'm having. Linux is horrendously touchy if you do one thing slightly out of order or make what seems like one minor change.

Update:

FIXED...the problem? NVRAM in my BIOS was locking it up and preventing boot. I cleared it out by switching between Legacy and UEFI: I went to Legacy, rebooted, and then went back to UEFI. If you ever get stuck on the 2nd line of the boot sequence, "Loading initial ramdisk ...", try it and see if that gets you past the obstacle. I'm back in, better than ever now.

@retserj-jrester

Hey, I am very impressed by how simple this installation can be. Good job, mate.

But I wonder, do you have such a simple script for a legacy BIOS machine? EFI doesn't work on my PC, sadly.

Thank you for your response!

@ssybesma

ssybesma commented Oct 11, 2020 via email

@retserj-jrester

Thank you for this amazingly fast response.

For my part, setting up the working RAID partition is no problem. My software RAID0 over two HDDs works fine.
My only problem is a working bootloader for legacy BIOS that is able to work with the RAID as such.
I am pretty sure it will work somehow, but like you, I am clueless atm.

@retserj-jrester

Guys!
I did it. Ubuntu Desktop 20.04 up and running on software RAID 0.
Everything works and boots correctly.

@ffrogurt

If anyone is trying this with NVMe, keep in mind that sda1 becomes nvme0n1p1; the partition needs a "p" before the number.

sdb2 would be nvme1n1p2, and so on. If you get "is in use/busy" during the mkfs FAT formatting, it might be due to a previous RAID being active on the disk; check the mdadm commands to stop and remove a specific RAID, then try again.
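
A sketch of that cleanup, assuming the stale array shows up as /dev/md127 and its members were old RAID partitions on the NVMe drives:

mdadm --stop /dev/md127
mdadm --zero-superblock /dev/nvme0n1p2 /dev/nvme1n1p2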

@swatwithmk3

Hey,

I really appreciate the work and wanted to thank you for the code. I also wanted to warn people that I had an issue with the array being locked after the OS installed; it was listed as "Encrypted" despite me not encrypting it. In this case I rebooted into the live USB again, reassembled the array, and proceeded from there. I also wanted to clarify for inexperienced users like myself that at line 29 you need to mount the partition where the OS was installed to "/mnt", which in my case was "/dev/md1p2" and not "/dev/md1p1". I've also mentioned these issues in my fork of this repo, which also modifies the commands for a 4-disk RAID 0 array.
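
A sketch of that recovery from the live USB (device names follow my layout and may differ on yours):

sudo mdadm --assemble --scan   # reassemble the existing arrays
sudo mount /dev/md1p2 /mnt     # mount the partition holding the installed OS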

@ssybesma

ssybesma commented Jan 25, 2022 via email

@swatwithmk3

swatwithmk3 commented Jan 25, 2022

Hi ssybesma,

Given that the EFI partition is copied to the second disk in the original script, I believe the intent is for all disks to be visible so that no matter which one you choose, you boot into the RAID array; that is why in my 4-disk script the EFI partition is copied to the other three disks. At line 55, the command is supposed to give whatever name you specify to the disk, and this name will appear in the BIOS boot menu. Upon further inspection of my BIOS, it seems that one of the disks was named incorrectly, and I am not sure whether this is an issue with the motherboard or with the OS. If you want each disk to be identified with a unique name in the BIOS, then use the command at line 55 once for each disk except the first, which in your case would be two times since you have 3 disks. Make sure the correct /dev/nvme device is given at the end of the command, and change the name from "Ubuntu #2" to whatever you want; that should then appear in your BIOS. You can also try skipping the command altogether, which should leave the names in the BIOS as set by the manufacturer.
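
For example, two entries with distinct labels (a sketch based on line 55 of the original script, assuming three NVMe disks):

efibootmgr -c -g -d /dev/nvme1n1 -p 1 -L "Ubuntu #2" -l '\EFI\Ubuntu\grubx64.efi'
efibootmgr -c -g -d /dev/nvme2n1 -p 1 -L "Ubuntu #3" -l '\EFI\Ubuntu\grubx64.efi'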

PS: I have just tested it to be sure, and choosing any of my array disks at bootup still loads Ubuntu normally. The fact that choosing the wrong one did not boot Ubuntu for you means that you either skipped copying the EFI partition to your other 2 SSDs or that the copy of the partition was not successful. I hope I managed to clear things up and help :)

If you're still having trouble, then a video recording of the installation you're doing might help with discovering the source; the next best thing would be a doc with every command you used for the array creation and OS installation.

@pdxwebdev

Confirmed this works for Ubuntu 22.04

@sparks79

sparks79 commented Nov 6, 2022

I've been looking for how to install RAID on Linux for a few years now (with the OS as part of the bootable array).
I tested Ubuntu around version 18 or thereabouts, and with the Server version it's possible to do it; you can install the desktop later to get a more user-friendly system.
I've also used Fedora 36, and it is by far the easiest version to set up RAID with (so far, from my observations).
It's a pity one can't swap Ubuntu into Fedora to get their easy RAID setup.
The way that (umpirsky) has done it looks interesting.
And I'm sure he's put a lot of time into it (congrats).
But it's a very long procedure (59 steps, to be precise).
And am I right in guessing that each of the 59 has to be entered line by line?
That's a long process.
I would be happy to use Fedora because of its easy RAID setup.
But I'm not very conversant with Linux, and I just find Fedora too hard to use.
I have been an avid Windows user for the last 27 years.
And at the age of 74 I really need a Linux system in RAID0 that is similar to Windows.
Or at least a bit easier to use.
Has anyone got any suggestions?

@JorgeBasaure

I would like you to explain those commands in detail. This is because I need the following:

  • Install the most recent version of Ubuntu on the 4 x 240GB SSDs, in RAID 0, WITHOUT SWAP, since it makes no sense to put swap on the SSDs (my motherboard has Intel Rapid Storage to set up the RAID 0 from the BIOS (MSI Z97 MPOWER), and Ubuntu does not trigger the warning that it cannot be installed with IRST)
  • Leave the 4 x 2TB HDDs configured in RAID 0, for storage
