Installing Linux Mint 19 with UEFI boot, RAID, encryption and LVM

This post was written by eli on November 14, 2018
Posted Under: Linux

Introduction

These are my notes as I attempted to install Linux Mint 19 (Tara) on a machine with software RAID, full disk encryption (boot partitions excluded) and LVM. The thing is that the year is 2018, and while the old MBR boot method is still available, it’s not a good idea for a system that’s supposed to last. So UEFI it is. And that caused some issues.

For the RAID / encryption part, I had to set up the disks manually, which I’m completely fine with, as I merely repeated something I had already done several years ago. I then assumed the installer would get the hint.

But it wasn’t that simple at all. I believe I ran the installer some 20 times until I got it right. This reminded me of a Windows installation: It’s simple as long as the installation is mainstream. Otherwise, you’re cooked.

And if this post seems a bit long, it’s because I spent two whole days shaving this yak.

Rule #1

This is a bit of a forward reference, but important enough to break the order: Whenever manipulating anything related to boot loading, be sure that the machine is already booted in UEFI mode. In particular, when booting from a Live USB stick, the computer might have chosen MBR (legacy) mode, and then the installation will be a mess.

The easiest way to check is with

# efibootmgr
EFI variables are not supported on this system.

If the error message above shows up, it’s bad. Reboot the system, and pick the UEFI boot alternative from the BIOS’ boot menu. If that doesn’t help, look in the kernel log for a reason UEFI isn’t activated. It might be a driver issue (even though that’s not the likely case).
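
A quick way to look for UEFI traces in the kernel log:

# dmesg | grep -i efi

On a proper UEFI boot, this should turn up lines about the EFI firmware and its runtime services.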

When UEFI is active, efibootmgr will give something like this:

# efibootmgr
BootCurrent: 0003
Timeout: 1 seconds
BootOrder: 0000,0003,0004,0002
Boot0000* ubuntu
Boot0002* Hard Drive
Boot0003* UEFI: SanDisk Cruzer Switch 1.27
Boot0004* UEFI: SanDisk Cruzer Switch 1.27, Partition 2

Alternatively, check for the existence of /sys/firmware/efi/. If the “efi” directory is present, it’s most likely fine.
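
Or as a shell one-liner:

# [ -d /sys/firmware/efi ] && echo UEFI || echo legacy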

GPT in brief

The GUID Partition Table (GPT) is the replacement for the (good old?) MBR-based one. It supports much larger disks, the old cylinder-head-sector terminology is gone forever, and it allows for many more partitions than you’ll ever need. In particular since we’ve got LVM. And instead of those plain numbers, each partition is now assigned a long GUID identifier, so there’s more mumbo-jumbo to print out.
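
To get a look at those GUIDs on a live system (assuming the disk is /dev/sda):

# lsblk -o NAME,SIZE,PARTTYPE,PARTUUID /dev/sda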

GPT is often related to UEFI boot, but I’m not sure there’s any necessary connection. It’s nevertheless a good choice unless you’re a fan of dinosaurs.

UEFI in brief

UEFI / EFI is the boot process which replaces the not-so-good old MBR boot. The old MBR method involved reading a snippet of machine code from the MBR sector and executing it. That little piece of code would then load another chunk of code into memory from some sectors on the disk, and so on. All in all, a tiny bootloader loaded a small bootloader, which loaded GRUB or LILO, and that eventually loaded Linux.

Confused with the MBR thingy? That’s because the MBR sector contains the partition information as well as the first stage boot loader. Homework: Can you do MBR boot on GPT? Can you do UEFI on an MBR partition?

Aside from the complicated boot process, this also required keeping track of those hidden sectors, so they won’t be overwritten by files. After all, the boot loader had to sit somewhere, and that was usually on sectors belonging to the main filesystem.

So it was messy.

EFI (and later UEFI) is a simple concept. Let the BIOS read the bootloader from a dedicated EFI partition in FAT format: When the computer is powered up, the BIOS scans this partition (or partitions) for boot binary candidates (files with .efi extension, containing the bootloader’s executable, in specific parts of the hierarchy), and lists them on its boot menu. Note that it may (and probably will) add good old MBR boot possibilities, if such exist, to the menu, even though they have nothing to do with UEFI.

And then the BIOS selects one boot option, possibly after asking the user. In our case, it’s preferably the one belonging to GRUB, which turns out to be one of /EFI/BOOT/BOOTX64.EFI, /EFI/ubuntu/fwupx64.efi and /EFI/ubuntu/grubx64.efi (don’t ask me why GRUB generates three of them).
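
Assuming the EFI partition is mounted at /boot/efi (as it will be on the installed system, more on that below), these candidates can be listed with:

# find /boot/efi/EFI -iname '*.efi'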

A lengthy guide to UEFI can be found here.

UEFI summarized

  • The entire boot process is based upon plain files only. No “active boot partition”, no hidden sectors. Easy to back up, restore, or even revert to a previous setting by replacing the file contents of two partitions.
  • … but there’s now a need for a special EFI boot partition in FAT format.
  • The BIOS doesn’t just list devices to boot from, but possibly several boot options from each device.

Two partitions just to boot?

In the good old days, GRUB hid somewhere on the disk, and the kernel / initramfs image could be on the root partition. So one could run Linux on a single partition (swap excluded, if any).

But the EFI partition is of FAT format (preferably FAT32), and then we have a little GRUB convention thing: The kernel and the initramfs image are placed in /boot. The EFI partition is on /boot/efi. So in theory, it’s possible to load the kernel and initramfs from the EFI partition, but the files won’t be where they usually are, and now have fun playing with GRUB’s configuration.

Now, even though it seems possible to have GRUB open both RAID and an encrypted filesystem, I’m not into this level of trickery. Hence /boot can’t be placed on the RAID’s filesystem, as it won’t be visible before the kernel has booted. So /boot has to be in a partition of its own. Actually, this is what is usually done with any software RAID / full disk encryption setting.

This is barely an issue in a RAID setting, because if one disk has a partition for booting purposes, it makes sense to allocate the same non-RAID partition on the others. So put the EFI partition on one disk, and /boot on another.

Remember to back up the files in these two partitions. If something goes wrong, just restore the files from the backup tarball. Just don’t forget, when recovering, that the EFI partition is FAT.
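
A minimal sketch of such a backup (the tarball names are arbitrary, and this assumes /boot and /boot/efi are mounted as usual):

# tar -C /boot/efi -czf efi-partition-backup.tar.gz .
# tar -C /boot --exclude=./efi -czf boot-partition-backup.tar.gz .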

Finally: Does a partition need to be assigned EFI type to be detected as such? Probably not, but it’s a good idea to set it so.

Installing: The Wrong Way

What I did initially was to boot from the Live USB stick, set up the RAID and encrypted /dev/md0, and happily click the “Install Ubuntu” icon. Then I went for a “something else” installation, picked the relevant LVM partitions, and kicked it off.

The installation failed with a popup saying “The ‘grub-efi-amd64-signed’ package failed to install into /target/”, and then warned me that without the GRUB package, the installed system won’t boot (which was sadly correct, but only partly: I was thrown into a GRUB shell). Looking into /var/log/syslog, it said on behalf of grub-install: “Cannot find EFI directory.”

This was the case regardless of whether I selected /dev/sda or /dev/sda1 as the device to write the bootloader to.

Different attempts to generate an EFI partition and then run the installer failed as well.

Installing: The Right Way

Boot the system from a Live USB stick, and verify that you follow Rule #1 above. That is, check that efibootmgr returns something other than an error.

Then set up RAID + LUKS + LVM as described in this old post of mine. 8 years later, nothing has changed (except for the format of /etc/crypttab, actually). Only Mint wasn’t as smooth about installing on top of this setup.
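
Just to convey the flavor of that setup, here’s a rough sketch. The RAID level, device names, volume group name and sizes are illustrative assumptions (the linked post has the real walkthrough); “luks-disk” is the /dev/mapper name that will go into /etc/crypttab further below.

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
# cryptsetup luksFormat /dev/md0
# cryptsetup luksOpen /dev/md0 luks-disk
# pvcreate /dev/mapper/luks-disk
# vgcreate vg0 /dev/mapper/luks-disk
# lvcreate -L 16G -n swap vg0
# lvcreate -l 100%FREE -n root vg0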

The EFI partition should be FAT32, and selected as “use as EFI partition” in the installer’s partition editor. Set the partition type of /dev/sda1 (only) to EFI (“EFI System”, number 1 in fdisk’s GPT type list) and format it as FAT32. Ubiquity didn’t do this for me, for some reason. So manually:

# mkfs.fat -v -F 32 /dev/sda1
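
As for the partition type, sgdisk (from the gdisk package) is one way to set it; EF00 is its code for an EFI System partition:

# sgdisk --typecode=1:EF00 /dev/sda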

/dev/sdb1 will be used for /boot. /dev/sdc1 remains unused, most likely a place to keep the backups of the two boot-related partitions.

So now to the installation itself.

Inspired by this guide, the trick is to skip the installation of the bootloader, and then do it manually. So kick off the RAID with mdadm, open the encrypted partition, and verify that the LVM device files are in place in /dev/mapper. When opening the encrypted disk, assign the /dev/mapper name that you want to stay with; otherwise, you’ll have to reboot later to fix this.
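
In the live session, this boils down to something like the following (again assuming “luks-disk” is the name to stay with):

# apt-get install mdadm
# mdadm --assemble --scan
# cryptsetup luksOpen /dev/md0 luks-disk
# vgchange -ay
# ls /dev/mapper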

Then use the -b flag in the invocation of ubiquity to run a full installation, just without the bootloader.

# ubiquity -b

Go for a “something else” installation type, select to mount / on the dedicated encrypted LVM partition, and /boot on /dev/sdb1 (or any other non-RAID, non-encrypted partition). Make sure /dev/sda1 is detected as an EFI partition, and that it’s intended for EFI boot.

Once it finishes (takes 50 minutes or so, all in all), an “Installation Complete” popup will suggest “Continue Testing” or “Restart Now”. So pick “Continue Testing”. There’s no bootloader yet.

The new operating system will still be mounted as /target. So bind-mount some necessities, and chroot into the new installation:

# for i in /dev /dev/pts /sys /proc /run ; do mount --bind $i /target/$i ; done
# chroot /target

All that follows below is within the new root.

First, mount /boot and /boot/efi with

# mount -a

This should work, as /etc/fstab should have been set up properly during the installation.

Then, (re)install RAID support:

# apt-get install mdadm

It may seem peculiar to install mdadm again, as it was necessary to run exactly the same apt-get command before assembling the RAID in order to get this far. However, mdadm isn’t installed on the new system, and without it, there will be no RAID support in the to-be initramfs. Without that, the RAID won’t be assembled on boot, and hence boot will fail.
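
Installing the package should also generate /etc/mdadm/mdadm.conf. It’s worth a glance to verify that the array is listed there; if it isn’t, the classic remedy is:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf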

Set up /etc/crypttab, so it refers to the encrypted partition. Otherwise, there will be no attempt to open it during boot. Find the UUID with

# cryptsetup luksUUID /dev/md0
201b318f-3ffd-47fc-9e00-0356747e3a73

and then /etc/crypttab should say something like

luks-disk UUID=201b318f-3ffd-47fc-9e00-0356747e3a73 none luks

Note that “luks-disk” is just an arbitrary name, which will appear in /dev/mapper. This name should match the one currently found in /dev/mapper, or the inclusion of the crypttab’s info in the new initramfs is likely to fail (with a warning from cryptsetup).
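
One way to check which name is currently in use (it should list “luks-disk”, or whatever name was chosen when opening the encrypted disk):

# dmsetup ls --target crypt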

Next, edit /etc/default/grub, making changes as desired (I set GRUB_TIMEOUT_STYLE to “menu”, to always get a GRUB menu, and also removed “quiet splash” from the kernel command line). There is no need for anything related to the use of RAID or encryption.
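
For reference, the relevant lines in /etc/default/grub would end up something like this (the timeout value is my arbitrary pick):

GRUB_TIMEOUT_STYLE=menu
GRUB_TIMEOUT=10
GRUB_CMDLINE_LINUX_DEFAULT=""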

Install the GRUB EFI package:

# apt-get install grub-efi-amd64

It might be a good idea to make sure that the initramfs is in sync:

# update-initramfs -u
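
To verify that RAID and encryption support indeed made it into the image (the kernel version below is the one from this installation; adjust as needed):

# lsinitramfs /boot/initrd.img-4.15.0-20-generic | grep -E 'mdadm|cryptsetup'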

Then install GRUB:

# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-20-generic
Found initrd image: /boot/initrd.img-4.15.0-20-generic
grub-probe: error: cannot find a GRUB drive for /dev/sdd1.  Check your device.map.
Adding boot menu entry for EFI firmware configuration
done
# grub-install
Installing for x86_64-efi platform.
Installation finished. No error reported.

It seems like the apt-get command also triggered the initramfs update and the GRUB installation. However, I ran these commands nevertheless.

Don’t worry about the grub-probe error on /dev/sdd1: It’s the USB stick. Indeed, it doesn’t belong.
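
Also, before rebooting, it’s probably wise to verify that grub-install registered a boot entry; an “ubuntu” line, as in the listing at the top of this post, should appear in the output of:

# efibootmgr -v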

That’s it. Cross fingers, and reboot. You should be prompted for the passphrase.

Epilogue: How does the GRUB executable know where to go next?

Recall that GRUB is packaged as a chunk of code in an .efi file, which is loaded from a dedicated partition. The kernel and initramfs images are elsewhere. How does it know where to look for them?

So I don’t know exactly how, but it’s clearly fused into GRUB’s bootloader binary:

# strings -n 8 /boot/efi/EFI/ubuntu/grubx64.efi | tail -2
search.fs_uuid f573c12a-c7e4-41e4-99ef-5fda4a595873 root hd1,gpt1
set prefix=($root)'/grub'

and it so happens that hd1,gpt1 is exactly /dev/sdb1, where the /boot partition is kept, and that the UUID matches the one given as “UUID=” for that partition by the “blkid” utility.
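
That is, the UUID above should appear in the output of:

# blkid /dev/sdb1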

So moving /boot most likely requires reinstalling GRUB. Which isn’t a great surprise.

Conclusion

It’s a bit unfortunate that in 2018, Linux Mint’s Ubiquity didn’t manage to land on its feet, and even worse, didn’t warn the user that it was about to fail colossally. It could even have suggested not to install the bootloader…?

And maybe that’s the way it is: If you want a professional Linux system, better be professional yourself…

Reader Comments

May I ask : Why Mint ?

#1 
Written By Rami Rosen on November 25th, 2018 @ 20:58

Because of the Cinnamon desktop, and a year of good experience on my media center computer.

#2 
Written By eli on November 25th, 2018 @ 21:01

Mint now sucks, its gotten worse and worse over time, partly due to these type of issues with FDE getting harder, and because of SystemD. I used it for 6 years but now I’ve since moved to Debian, the mother ship.

#3 
Written By john on June 19th, 2019 @ 07:46

It’s a matter of taste, I guess. More than half a year later I’m really happy with Mint, and systemd is a godsend. And I don’t think there’s a way to avoid the latter, surely not with Debian.

#4 
Written By eli on June 19th, 2019 @ 08:35

so at the sudo for i in /dev /dev/pts /sys /proc /run ; do mount –bind $i /target/$i ; done

i get an error
bash: syntax error near unexpected token `do’

as you can imagine, this about the 12th time i am trying to run this process and rage is starting to get out of control.

#5 
Written By derp on July 20th, 2019 @ 23:06

mount: /target//dev: mount point does not exist.
mount: /target//dev/pts: mount point does not exist.
mount: /target//sys: mount point does not exist.
mount: /target//proc: mount point does not exist.
mount: /target//run: mount point does not exist.
root@mint:/#

#6 
Written By Anonymous on July 20th, 2019 @ 23:09
