Consider dropping call to update-initramfs in config/chroot_local-hooks/99-initramfs-compress
If I read this correctly:
08:18:36 Configuring compression of the initramfs
08:18:36 update-initramfs: Generating /boot/initrd.img-4.19.0-2-amd64
08:18:41 cryptsetup: WARNING: could not determine root device from /etc/fstab
08:18:45 live-boot: core filesystems devices utils memdisk udev wget blockdev dns.
08:20:33 Setting correct file permissions
08:20:33 Removing *.pyc
08:20:33 Setting mtime on large files whose content generally do not change
08:20:33 Checking for .orig files
08:20:34 Checking UIDs and GIDs stability
08:20:34 Truncating log files
08:20:34 Post processing filesystem to make it reproducible
08:20:34 P: Begin executing hooks...
08:20:34 P: Begin executing hacks...
08:20:34 update-initramfs: Generating /boot/initrd.img-4.19.0-2-amd64
08:20:39 cryptsetup: WARNING: could not determine root device from /etc/fstab
08:20:44 live-boot: core filesystems devices utils memdisk udev wget blockdev dns.
08:22:38 P: Begin ensuring chroot contents are reproducible...
… we force an initramfs update ourselves, to apply our compression settings, which takes ~2 minutes. But then
live-build does the same thing again, which takes another ~2 minutes.
So it looks like we could remove the
update-initramfs -u call from our own hook and save 2 minutes of build time, i.e. ~3%, which is pretty good for a one-line change :)
Bonus points if, to avoid future regressions in case we ever upgrade to a version of
live-build that no longer updates the initramfs, we add a sanity check (probably in
binary_local-hooks/) that verifies that the size of the resulting initramfs is within expected bounds.
Drop useless manual initramfs update (refs: #16452)
live-build will do that itself later on.
This saves ~2 minutes (~3%) on the total build time.
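To illustrate the change, here is a minimal sketch of what a hook like config/chroot_local-hooks/99-initramfs-compress can shrink to after this commit: it only configures the compressor and leaves regenerating the initramfs to live-build. The function name and exact file contents are assumptions for illustration, not the actual Tails hook.

```shell
#!/bin/sh
# Hypothetical sketch of the trimmed-down compression hook.
set -e

# configure_compression: switch COMPRESS to xz in an initramfs-tools
# configuration file (e.g. /etc/initramfs-tools/initramfs.conf inside
# the chroot), so the next initramfs rebuild uses xz.
configure_compression() {
    conf="$1"
    sed -i 's/^COMPRESS=.*$/COMPRESS=xz/' "$conf"
}

# Note the absence of "update-initramfs -u" here: live-build regenerates
# the initramfs later during its own hooks, so running it in this hook
# too was what wasted ~2 minutes per build.
```

The point of the sketch is what is *not* there: the manual update-initramfs call.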
Sanity check the size of the initramfs (refs: #16452)
This will help us detect if any of the following happens:
- config/chroot_local-hooks/99-initramfs-compress is broken
- live-build does not generate the initramfs after
config/chroot_local-hooks/99-initramfs-compress has applied
our preferred configuration
- Some unrelated change makes the initramfs substantially larger,
as an unintended side-effect.
As of Tails 3.12.1, our initramfs is 30M, so a 35M
limit should give us just enough safety margin.
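Such a check might look something like this hedged sketch. Only the 35M upper bound comes from the commit message above; the hook location, function name, and messages are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of a config/binary_local-hooks/ sanity check on the
# size of the initramfs; only the 35M bound is from the commit message.
set -e

MAX_BYTES=$((35 * 1024 * 1024))   # 35M upper bound

# check_initramfs_size: fail loudly if the initramfs grew past the bound,
# e.g. because live-build stopped regenerating it with our xz settings.
check_initramfs_size() {
    initrd="$1"
    size=$(stat -c %s "$initrd")
    if [ "$size" -gt "$MAX_BYTES" ]; then
        echo "E: initramfs $initrd is $size bytes, expected at most $MAX_BYTES" >&2
        return 1
    fi
    echo "P: initramfs size OK ($size bytes)"
}
```

Failing the build when the bound is exceeded turns a silent compression regression into an immediately visible error.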
#6 Updated by intrigeri about 1 month ago
- Assignee changed from intrigeri to lamby
- Target version changed from Tails_3.14 to Tails_3.13
- % Done changed from 10 to 50
- QA Check set to Ready for QA
Builds fine and I've checked that the initramfs hasn't grown (which confirms that
live-build regenerates it). I observed a 3-4% decrease in total build time on my local Jenkins ⇒ as I suspected, that's a pretty cheap improvement :) I did not bother benchmarking on our shared Jenkins because I had already measured above that the removed call took ~2 minutes.
@lamby, would you like to review this, ideally by the end of the week so this makes it into 3.13? The changes are tiny (30 insertions — most of it being boilerplate — and 7 deletions).
#8 Updated by lamby about 1 month ago
- (I did not build this myself as my provisioning is currently broken for some reason.)
- Downloaded the build log and ISO from https://nightly.tails.boum.org/build_Tails_ISO_bugfix-16452-remove-useless-extra-initramfs-update/lastSuccessful/archive/build-artifacts/
- Noted the new `P: checking the size of the initramfs` message in the build log, as well as the absence of the now-removed manual initramfs compression step.
- Booted the ISO; it boots fine (see attached screenshot).
- Mounted the ISO on a loopback device and observed that `live/initrd.img` is 31,543,032 bytes, which LGTM and is identical in size to an ISO built as part of #16559 (31,543,032 bytes), so we are still applying xz compression.
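For reference, the "still xz-compressed" part of a review like this can be checked directly, since an xz stream starts with a fixed 6-byte magic. The helper below is an illustration of that idea, not part of Tails; the mount path and file names in the comment are just examples.

```shell
#!/bin/sh
# Hypothetical helper to verify that a file (e.g. live/initrd.img from a
# loop-mounted ISO) is xz-compressed, by checking the xz stream magic.

# is_xz: return 0 iff the file starts with the xz magic FD 37 7A 58 5A 00.
is_xz() {
    magic=$(head -c 6 "$1" | od -An -tx1 | tr -d ' \n')
    [ "$magic" = "fd377a585a00" ]
}

# Example usage, after `mount -o loop,ro tails.iso /mnt/iso` (needs root):
#   stat -c %s /mnt/iso/live/initrd.img      # size check, cf. 31,543,032 bytes
#   is_xz /mnt/iso/live/initrd.img && echo "still xz-compressed"
```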