---
title: Alpine Linux base installation
slug: alpine-linux-base-install
date: 2024-08-30
draft: false
authors:
  - luc
tags:
  - Alpine Linux
  - Linux
categories:
  - Base installation
---

This blog entry demonstrates how to install Alpine Linux for a server application. Alpine Linux will run on a RAID-configured, encrypted ZFS filesystem with automatic decryption using the TPM. Alpine Linux makes a good base for a server because of its simplicity, light weight and security. Check out the Alpine Linux wiki for additional resources and information.

## Provisioning

Flash the Alpine Linux extended ISO to the installation medium, and make sure that the secureboot keys are reset and that TPM is enabled in the BIOS of the host.

After booting the Alpine Linux extended ISO, partition the disks. Internet access is required for this step: zfs, sgdisk and various other necessary packages are not included on the extended ISO, so they need to be obtained from the Alpine package repository.

To set this up, the `setup-interfaces` and `setup-apkrepos` scripts present on the Alpine Linux ISO will be used.

```sh
# setup-interfaces -ar
# setup-apkrepos -c1
```

To use Wi-Fi, simply run `setup-interfaces -r` and select `wlan0` or similar.

A few packages will have to be installed first.

```sh
# apk add zfs lsblk sgdisk wipefs dosfstools acpid mdadm zlevis
```

The zlevis package is, as of this moment, not yet in the Alpine package repository. Obtain the binary through another method and install its dependencies tpm2-tools and jose.
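One possible approach is sketched below, assuming zlevis ships as a standalone script; `<zlevis-source-url>` is a placeholder for wherever you obtain it:

```sh
# apk add tpm2-tools jose
# wget <zlevis-source-url>/zlevis -O /usr/local/bin/zlevis
# chmod +x /usr/local/bin/zlevis
```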

Then load the ZFS kernel module:

```sh
# modprobe zfs
```

Define the disks you want to use for this install:

```sh
# export disks="/dev/disk/by-id/<id-disk-1> ... /dev/disk/by-id/<id-disk-n>"
```

where `<id-disk-n>`, for n an integer, is the ID of the n-th disk.

According to the openzfs-FAQ, using /dev/disk/by-id/ is the best practice for small pools. For larger pools, the best practice changes to using Serial Attached SCSI (SAS); see vdev_id for proper configuration.
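To find the IDs to fill into $disks, list the by-id symlinks; the grep merely hides the partition entries:

```sh
# ls -l /dev/disk/by-id/ | grep -v part
```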

Wipe the existing disk partitions:

```sh
# for disk in $disks; do
> zpool labelclear -f $disk
> wipefs -a $disk
> sgdisk --zap-all $disk
> done
```

Create on each disk an EFI system partition (ESP) and a Linux filesystem partition:

```sh
# for disk in $disks; do
> sgdisk -n 1:1m:+512m -t 1:ef00 $disk
> sgdisk -n 2:0:-10m -t 2:8300 $disk
> done
```

Reload the device nodes:

```sh
# mdev -s
```
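To confirm that the expected partitions now exist, `lsblk` (installed earlier) gives a quick overview:

```sh
# lsblk -o NAME,SIZE,TYPE
```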

Define the EFI partitions:

```sh
# export efiparts=""
# for disk in $disks; do
> efipart=${disk}-part1
> efiparts="$efiparts $efipart"
> done
```

Create a mdraid array on the EFI partitions:

```sh
# modprobe raid1
# mdadm --create --level 1 --metadata 1.0 --raid-devices <n> /dev/md/esp $efiparts
# mdadm --assemble --scan
```

Format the array with a FAT32 filesystem:

```sh
# mkfs.fat -F 32 /dev/md/esp
```
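The state of the mdraid array can be checked at any time through /proc/mdstat:

```sh
# cat /proc/mdstat
```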

## ZFS pool creation

Define the pool partitions:

```sh
# export poolparts=""
# for disk in $disks; do
> poolpart=${disk}-part2
> poolparts="$poolparts $poolpart"
> done
```

The ZFS system pool is going to be encrypted. First generate an encryption key and save it temporarily to the file /tmp/rpool.key with:

```sh
# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1 > /tmp/rpool.key && cat /tmp/rpool.key
```

While zlevis takes care of automatic decryption, this key is still required whenever changes are made to the BIOS or secureboot, so make sure to save it somewhere safe.

Create the system pool:

```sh
# zpool create -f \
        -o ashift=12 \
        -O compression=lz4 \
        -O acltype=posix \
        -O xattr=sa \
        -O dnodesize=auto \
        -O encryption=on \
        -O keyformat=passphrase \
        -O keylocation=prompt \
        -m none \
        rpool raidz1 $poolparts
```

When prompted for a passphrase, enter the key generated above. Additionally, the spare option can be used to indicate spare disks. If more redundancy is preferred, then raidz2 and raidz3 are possible alternatives to raidz1. If a single disk is used, the raidz option can be left out. For further information see zpool-create.
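As an illustration, a hypothetical double-parity layout with one spare disk (same options as above, `<id-spare-disk>` being a placeholder) could look like:

```sh
# zpool create -f \
        -o ashift=12 \
        -O compression=lz4 \
        -O acltype=posix \
        -O xattr=sa \
        -O dnodesize=auto \
        -O encryption=on \
        -O keyformat=passphrase \
        -O keylocation=prompt \
        -m none \
        rpool raidz2 $poolparts \
        spare /dev/disk/by-id/<id-spare-disk>
```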

Then create the system datasets:

```sh
# zfs create -o mountpoint=none rpool/root
# zfs create -o mountpoint=legacy -o quota=24g rpool/root/alpine
# zfs create -o mountpoint=legacy -o quota=16g rpool/root/alpine/var
# zfs create -o mountpoint=/home -o atime=off -o setuid=off -o devices=off -o quota=<home-quota> rpool/home
```

The value of `<home-quota>` depends on the total size of the pool; generally, try to keep some empty space in the pool.
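`zpool list` shows the usable size to base the quota on, and the quota can always be adjusted later with `zfs set`; the 64g below is purely illustrative:

```sh
# zpool list rpool
# zfs set quota=64g rpool/home
```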

Write the encryption key to TPM with zlevis:

```sh
# zlevis encrypt rpool '{}' < /tmp/rpool.key
```

This uses the default configuration settings for zlevis encrypt; a different configuration is possible by adjusting the '{}' argument accordingly.

To check if it worked, perform `zlevis decrypt rpool`.
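A quick round-trip check, assuming zlevis prints the unsealed key to stdout, is to compare it against the key file:

```sh
# [ "$(zlevis decrypt rpool)" = "$(cat /tmp/rpool.key)" ] && echo OK
```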

Finally, export the zpool:

```sh
# zpool export rpool
```

## Installation

To install the Alpine Linux distribution on the system, the datasets of the system pool and the ESP array have to be mounted on the live system first.

First import and decrypt the system pool:

```sh
# zpool import -N -R /mnt rpool
# zfs load-key -L file:///tmp/rpool.key rpool
```

Then mount the datasets and the ESP on /mnt:

```sh
# mount -t zfs rpool/root/alpine /mnt
# mkdir /mnt/var
# mount -t zfs rpool/root/alpine/var /mnt/var
# mkdir /mnt/efi
# mount -t vfat /dev/md/esp /mnt/efi
```

Now we may install Alpine Linux with the setup-disk script:

```sh
# export BOOTLOADER=none
# setup-disk -m sys /mnt
```

To have a functional chroot into the system, bind the system process directories:

```sh
# for dir in dev proc sys run; do
> mount --rbind --make-rslave /$dir /mnt/$dir
> done
# chroot /mnt
```

The other setup scripts can be used to configure key aspects of the system. Besides that, a few necessary services have to be activated:

```sh
# setup-hostname <hostname>
# setup-keymap us
# setup-timezone -i <area>/<subarea>
# setup-ntp openntpd
# setup-sshd -c dropbear
# rc-update add acpid default
# rc-update add seedrng boot
# passwd root #(1)!
```

1. The root password does not really matter because it is going to be locked after a user has been created.
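For reference, that later user creation and root lock might look like the following, with `<user>` as a placeholder:

```sh
# adduser <user>
# passwd -l root
```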

Set the hwclock to use UTC and disable writing the time back to the hardware clock in /etc/conf.d/hwclock; running an NTP daemon makes that redundant.

```
clock="UTC"
clock_hctosys="NO"
clock_systohc="NO"
```

Configure the ESP raid array to mount:

```sh
# modprobe raid1
# echo raid1 >> /etc/modules-load.d/raid1.conf
# mdadm --detail --scan >> /etc/mdadm.conf
# rc-update add mdadm boot
# rc-update add mdadm-raid boot
```

Configure ZFS to mount:

```sh
# rc-update add zfs-import sysinit
# rc-update add zfs-mount sysinit
# rc-update add zfs-load-key sysinit
```

If a faster boot time is preferred, zfs-import and zfs-load-key can be omitted in certain cases, for instance when the root pool is the only pool, since the initramfs already imports and unlocks it.

Edit /etc/fstab to set the correct mounts:

```
rpool/root/alpine       /           zfs     rw,noatime,xattr,posixacl,casesensitive                 0 1
rpool/root/alpine/var   /var        zfs     rw,noatime,nodev,nosuid,xattr,posixacl,casesensitive    0 2
/dev/md/esp             /efi        vfat    defaults,nodev,nosuid,noexec,umask=0077                 0 2
tmpfs                   /tmp        tmpfs   rw,nodev,nosuid,noexec,mode=1777                        0 0
proc                    /proc       proc    nodev,nosuid,noexec,hidepid=2                           0 0
```

Install the following packages to make mkinitfs compatible with secureboot and TPM decryption:

```sh
# apk add secureboot-hook sbctl tpm2-tools zlevis
```

Configure mkinitfs in /etc/mkinitfs/mkinitfs.conf to disable the trigger and to add the zlevis feature:

```
features="... zlevis"
disable_trigger="yes"
```

The mkinitfs package that supports zlevis is, as of this moment, not yet in the Alpine package repository; for the relevant steps, see the zlevis mkinitfs-implementation.

The most important step is the creation of a UKI using secureboot-hook, which also automatically signs it. Configure the kernel hooks in /etc/kernel-hooks.d/secureboot.conf to set the kernel cmdline options and the secureboot signing keys:

```
cmdline="rw root=ZFS=rpool/root/alpine rootflags=noatime quiet splash"

signing_cert="/var/lib/sbctl/keys/db/db.pem"
signing_key="/var/lib/sbctl/keys/db/db.key"

output_dir="/efi/efi/linux"
output_name="alpine-linux-{flavor}.efi"
```

Use sbctl to create secureboot keys and sign them:

```sh
# sbctl create-keys
# sbctl enroll-keys
```

While enrolling the keys, it might be necessary to add the --microsoft flag if you are unable to use custom keys.
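That variant, which additionally enrolls Microsoft's vendor certificates, looks like:

```sh
# sbctl enroll-keys --microsoft
```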

Set the cache-file of the ZFS pool:

```sh
# zpool set cachefile=/etc/zfs/zpool.cache rpool
```

Now to see if everything went successfully, run:

```sh
# apk fix kernel-hooks
```

and it should give no warnings if done properly.

To install gummiboot as a simple bootloader:

```sh
# apk add gummiboot
# gummiboot install
```

One may verify the signed files by running `sbctl verify`.

Configure gummiboot in /efi/loader/loader.conf to specify the timeout and the default OS:

```
default alpine-linux-lts.efi
timeout 2
editor no
```

Now exit the chroot and you should be able to reboot into a working Alpine system.

```sh
# exit
# umount -lf /mnt
# zpool export rpool
# reboot
```