[pve-devel] [PATCH docs 1/2] Add documentation on bootloaders (systemd-boot)
Thomas Lamprecht
t.lamprecht at proxmox.com
Fri Jul 5 19:21:52 CEST 2019
On 7/5/19 6:31 PM, Stoiko Ivanov wrote:
> With the recently added support for booting ZFS on root on EFI systems via
> `systemd-boot` the documentation needs adapting (mostly related to editing
> the kernel commandline).
>
> This patch adds a short section on Bootloaders to the sysadmin chapter
> describing both `grub` and PVE's use of `systemd-boot`
>
while:
> I would be grateful for feedback if the phrasing makes sense to people who did
> not occupy themselves with the intricacies of bootloaders recently
is not fully true for me, still some comments inline ;)
> Signed-off-by: Stoiko Ivanov <s.ivanov at proxmox.com>
> ---
> sysadmin.adoc | 2 +
> system-booting.adoc | 144 ++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 146 insertions(+)
> create mode 100644 system-booting.adoc
>
> diff --git a/sysadmin.adoc b/sysadmin.adoc
> index 21537f1..e045610 100644
> --- a/sysadmin.adoc
> +++ b/sysadmin.adoc
> @@ -74,6 +74,8 @@ include::local-zfs.adoc[]
>
> include::certificate-management.adoc[]
>
> +include::system-booting.adoc[]
> +
> endif::wiki[]
>
>
> diff --git a/system-booting.adoc b/system-booting.adoc
> new file mode 100644
> index 0000000..389a0e9
> --- /dev/null
> +++ b/system-booting.adoc
> @@ -0,0 +1,144 @@
> +[[system_booting]]
> +Bootloaders
> +-----------
> +ifdef::wiki[]
> +:pve-toplevel:
> +endif::wiki[]
> +
> +Depending on the disk setup chosen in the installer, {pve} uses one of two
> +bootloaders for bootstrapping the system.
> +
> +For EFI systems installed with ZFS as the root filesystem, `systemd-boot` is
> +used. All other deployments use the standard `grub` bootloader (this usually
> +also applies to systems which are installed on top of Debian).
> +
> +[[installer_partitioning_scheme]]
> +Partitioning scheme used by the installer
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The {pve} installer creates 3 partitions on disks:
> +
> +* a 1M BIOS Boot Partition (gdisk type EF02)
> +
> +* a 512M EFI System Partition (ESP, gdisk type EF00)
the phrasing feels a bit like it's meant for really all disks, but it's:
* non-zfs: selected disk
* zfs: depends on mode, at least on one device and for each raid1 group and
on every raidz dev, IIRC?
> +
> +* a third partition spanning the remaining space used for the chosen storage
> + type
not always the remaining, $hdsize - which, yes, defaults to the remaining ;)
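could also hint at how to check the resulting layout on an installed system,
e.g. (disk name is just a placeholder):
----
# print the GPT, shows the BIOS boot partition, the ESP and the data partition
sgdisk -p /dev/sda
----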
> +
> +`grub` in BIOS mode (`--target i386-pc`) is installed onto the BIOS Boot
> +Partition of all bootable disks for supporting older systems.
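maybe spell out the effective command here too, roughly (untested, disk name is
just a placeholder):
----
# legacy BIOS variant of grub gets installed onto each bootable disk
grub-install --target i386-pc /dev/sdX
----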
> +
> +
> +Grub
> +~~~~
> +
> +`grub` has been the de facto standard for booting Linux systems for many years
> +and is quite well documented
> +footnote:[Grub Manual https://www.gnu.org/software/grub/manual/grub/grub.html].
> +
> +The kernel and initrd images are taken from `/boot`, and its configuration file
> +`/boot/grub/grub.cfg` gets updated by the kernel installation process.
> +
> +Configuration
> +^^^^^^^^^^^^^
> +Changes to the `grub` configuration are done via the defaults file
> +`/etc/default/grub` or config snippets in `/etc/default/grub.d`.
> +To regenerate the `/boot/grub/grub.cfg` after a change to the configuration
> +run `update-grub`.
maybe put the above in a
----
pre-formatted code section
----
to highlight it a bit more?
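e.g. something like:
----
# edit /etc/default/grub or a snippet below /etc/default/grub.d/, then run:
update-grub
----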
> +
> +Systemd-boot
> +~~~~~~~~~~~~
> +
> +`systemd-boot` is a lightweight EFI bootloader, which reads the kernel and
> +initrd images directly from the EFI System Partition (ESP) where it is
> +installed. The main advantage of directly loading the
early linebreak above
> +kernel from the ESP is that it does not need to reimplement the drivers for
> +accessing the storage. With ZFS as the root filesystem, this means that you can
> +use all optional features on your root pool, instead of being limited to the
> +subset supported by the ZFS implementation in `grub`, or having to create a
> +separate small boot-pool
> +footnote:[Booting ZFS on root with grub https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS].
> +
> +In setups with redundancy (RAID1, RAID10, RAIDZ*) all bootable disks (those
> +being part of the first `vdev`) are partitioned with an ESP, ensuring the
> +system boots even if the first boot device fails. The ESPs are kept in sync by
> +a kernel postinstall hook script `/etc/kernel/postinst.d/zz-pve-efiboot`. The
> +script copies certain kernel versions and the initrd images to `EFI/proxmox/`
> +on the root of each ESP and creates the appropriate config files in
> +`loader/entries/proxmox-*.conf`.
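a tiny sketch of the resulting layout on a single ESP could make that easier to
picture, roughly (the exact .conf name is just my guess from the proxmox-*.conf
pattern):
----
EFI/proxmox/5.0.15-1-pve/vmlinuz-5.0.15-1-pve
EFI/proxmox/5.0.15-1-pve/initrd.img-5.0.15-1-pve
loader/entries/proxmox-5.0.15-1-pve.conf
----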
> +
> +The following kernel versions are configured by default:
> +
> +* the currently booted kernel
s/booted/running/ - while it's true and technically OK, we (and pveversion) always
talk about the running kernel.
> +* the version being installed
.. being newly installed on package updates
> +* the two latest kernels
> +* the latest version of each kernel series (e.g. 4.15, 5.0).
> +
> +The ESPs are not kept mounted during regular operation, in contrast to `grub`,
> +which keeps an ESP mounted on `/boot/efi`. This helps prevent filesystem
> +corruption of the `vfat`-formatted ESPs in case of a system crash, and removes
> +the need to manually adapt `/etc/fstab` in case the primary boot device fails.
> +
> +[[systemd_boot_config]]
> +Configuration
> +^^^^^^^^^^^^^
> +
> +`systemd-boot` itself is configured via the file `loader/loader.conf` in the
> +root directory of an ESP. See the `loader.conf(5)` manpage for details.
maybe reintroduce ESP in full here, "EFI System Partition"
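and maybe a minimal loader.conf sketch as well, e.g. (values are only
illustrative, not necessarily what our tooling writes):
----
default proxmox-*
timeout 3
----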
> +
> +Each bootloader entry is placed in a file of its own in the directory
> +`loader/entries/`.
> +
> +An example entry.conf looks like this (`/` refers to the root of the ESP):
> +
> +----
> +title Proxmox
> +version 5.0.15-1-pve
> +options root=ZFS=rpool/ROOT/pve-1 boot=zfs
> +linux /EFI/proxmox/5.0.15-1-pve/vmlinuz-5.0.15-1-pve
> +initrd /EFI/proxmox/5.0.15-1-pve/initrd.img-5.0.15-1-pve
> +----
> +
> +.Manually keeping a kernel bootable
> +
> +Should you wish to add a certain kernel and initrd image to the list of
> +bootable kernels, you need to:
> +
> +* create a directory on the ESP (e.g. `/EFI/personalkernel`)
> +* copy the kernel and initrd image to that directory
> +* create an entry for this kernel in `/loader/entries/*.conf`
maybe a quick example? maybe with the above bullet points as comments inside?
(just as an idea)
----
# assumes the ESP is mounted on /boot/efi
mkdir /boot/efi/EFI/best-kernel
cp /boot/efi/EFI/proxmox/5.0.15-1-pve/initrd.img-5.0.15-1-pve \
   /boot/efi/EFI/proxmox/5.0.15-1-pve/vmlinuz-5.0.15-1-pve /boot/efi/EFI/best-kernel
echo "title Best Kernel
version 5.0.15-1-pve
...
" > /boot/efi/loader/entries/best-kernel.conf
----
> +
> +NOTE: do not use `/EFI/proxmox` as the directory, since all entries there can be
> +removed by `/etc/kernel/postinst.d/zz-pve-efiboot`.
> +
> +[[systemd-boot-refresh]]
> +.Updating the configuration on all ESPs
> +
> +If you added a new ESP or made any changes to the available kernels, you can
> +sync the kernel and initrd images and their config to all ESPs by running the
> +kernel hook script `/etc/kernel/postinst.d/zz-pve-efiboot`.
Why/when would one need to add a new ESP? That could be a question some users have.
> +This is equivalent to running `update-grub` on systems booted with
> +`grub`.
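maybe also show the plain invocation as a one-liner here, i.e.:
----
/etc/kernel/postinst.d/zz-pve-efiboot
----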
> +
> +
> +[[edit_kernel_cmdline]]
> +Editing the kernel commandline
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Depending on the bootloader used, you can modify the kernel commandline in the
> +following places:
> +
> +.Grub
> +
> +The kernel commandline needs to be placed in the variable
> +`GRUB_CMDLINE_LINUX_DEFAULT` in the file `/etc/default/grub`. Running
> +`update-grub` appends its content to all `linux` entries in
> +`/boot/grub/grub.cfg`.
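a short example could help here as well, e.g. (the added option is only
illustrative):
----
# in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then regenerate the config:
update-grub
----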
> +
> +.Systemd-boot
> +
> +The kernel commandline needs to be placed as one line in `/etc/kernel/cmdline`.
> +Running `/etc/kernel/postinst.d/zz-pve-efiboot` sets it as the `options` line for
> +all config files in `loader/entries/proxmox-*.conf`.
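same here, maybe with a small example, e.g. (the extra option is only
illustrative):
----
# /etc/kernel/cmdline is a single line, for example:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
# then sync it to all configured ESPs:
/etc/kernel/postinst.d/zz-pve-efiboot
----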
> +
> +
>