---
title: Alpine Linux base installation
slug: alpine-linux-base-install
date: 2024-08-30
draft: false
authors:
- luc
tags:
- Alpine Linux
- Linux
categories:
- Base installation
---
This blog entry will demonstrate how to install [Alpine Linux](https://www.alpinelinux.org/) for a server application. Alpine Linux will run on a RAID-configured, encrypted ZFS filesystem with automatic decryption through TPM. Alpine Linux makes a good base for a server because of its simplicity, light weight and security. Check out the [Alpine Linux wiki](https://wiki.alpinelinux.org/wiki/Main_Page) for additional resources and information.
<!-- more -->
## Provisioning
Flash the Alpine Linux extended ISO and make sure the secureboot keys are reset and TPM is enabled in the BIOS of the host.
After booting the Alpine Linux extended ISO, partition the disks. Internet access is required for this step, since `zfs`, `sgdisk` and various other necessary packages are not included on the extended ISO and have to be obtained from the Alpine package repository.
To set this up, the `setup-interfaces` and `setup-apkrepos` scripts present on the Alpine Linux ISO will be used.
``` shell-session
sh# setup-interfaces -ar
sh# setup-apkrepos -c1
```
> To use Wi-Fi simply run `setup-interfaces -r` and select `wlan0` or similar.
A few packages will have to be installed first.
``` shell-session
sh# apk add zfs lsblk sgdisk wipefs dosfstools acpid mdadm zlevis
```
> The `zlevis` package is, as of writing, not yet in the Alpine package repository. Obtain the binary by other means, place it in the binary path and install its dependencies `tpm2-tools` and `jose`; a possible sketch follows below.
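One way this could look, assuming the `zlevis` binary has been built elsewhere and copied into the live environment (the path `./zlevis` is illustrative):
``` shell-session
sh# apk add tpm2-tools jose
sh# install -m 755 ./zlevis /usr/bin/zlevis
```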
Then load the ZFS kernel module:
``` shell-session
sh# modprobe zfs
```
Define the disks you want to use for this install:
``` shell-session
sh# export disks="/dev/disk/by-id/<id-disk-1> ... /dev/disk/by-id/<id-disk-n>"
```
where `<id-disk-n>` is the `id` of disk `n`.
> According to the [openzfs-FAQ](https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html), using `/dev/disk/by-id/` is the best practice for small pools. For larger pools, the best practice changes to using Serial Attached SCSI (SAS); see [vdev_id](https://openzfs.github.io/openzfs-docs/man/master/5/vdev_id.conf.5.html) for proper configuration.
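As an illustration, a pool of two disks could be defined as follows (the disk ids are hypothetical):
``` shell-session
sh# export disks="/dev/disk/by-id/ata-EXAMPLE_SSD_A1 /dev/disk/by-id/ata-EXAMPLE_SSD_B2"
```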
Wipe the existing disk partitions:
``` shell-session
sh# for disk in $disks; do
> zpool labelclear -f $disk
> wipefs -a $disk
> sgdisk --zap-all $disk
> done
```
Create an `EFI system` partition (ESP) and a `Linux filesystem` partition on each disk:
``` shell-session
sh# for disk in $disks; do
> sgdisk -n 1:1m:+512m -t 1:ef00 $disk
> sgdisk -n 2:0:-10m -t 2:8300 $disk
> done
```
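To verify the resulting layout, the partition table of each disk can be printed:
``` shell-session
sh# for disk in $disks; do
> sgdisk -p $disk
> done
```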
Reload the device nodes:
``` shell-session
sh# mdev -s
```
Define the EFI partitions:
``` shell-session
sh# export efiparts=""
sh# for disk in $disks; do
> efipart=${disk}-part1
> efiparts="$efiparts $efipart"
> done
```
Create a `mdraid` array on the EFI partitions:
``` shell-session
sh# modprobe raid1
sh# mdadm --create --level 1 --metadata 1.0 --raid-devices <n> /dev/md/esp $efiparts
sh# mdadm --assemble --scan
```
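To confirm that the array exists and is syncing, inspect the kernel RAID status:
``` shell-session
sh# cat /proc/mdstat
```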
Format the array with a FAT32 filesystem:
``` shell-session
sh# mkfs.fat -F 32 /dev/md/esp
```
## ZFS pool creation
Define the pool partitions:
``` shell-session
sh# export poolparts=""
sh# for disk in $disks; do
> poolpart=${disk}-part2
> poolparts="$poolparts $poolpart"
> done
```
The ZFS system pool is going to be encrypted. First generate an encryption key and save it temporarily to the file `/tmp/rpool.key` with:
``` shell-session
sh# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1 > /tmp/rpool.key && cat /tmp/rpool.key
```
> While `zlevis` is used for automatic decryption, this key is required when changes are made to the BIOS or secureboot, so make sure to save it.
Create the system pool:
``` shell-session
sh# zpool create -f \
-o ashift=12 \
-O compression=lz4 \
-O acltype=posix \
-O xattr=sa \
-O dnodesize=auto \
-O encryption=on \
-O keyformat=passphrase \
-O keylocation=prompt \
-m none \
rpool raidz1 $poolparts
```
> Additionally, the `spare` option can be used to indicate spare disks. If more redundancy is preferred, `raidz2` and `raidz3` are possible [alternatives](https://openzfs.github.io/openzfs-docs/man/master/7/zpoolconcepts.7.html) to `raidz1`. If a single disk is used, the `raidz` option can be left out. For further information see [zpool-create](https://openzfs.github.io/openzfs-docs/man/master/8/zpool-create.8.html).
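Before creating datasets, it is worth verifying that the pool is online and healthy:
``` shell-session
sh# zpool status rpool
```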
Then create the system datasets:
``` shell-session
sh# zfs create -o mountpoint=none rpool/root
sh# zfs create -o mountpoint=legacy -o quota=24g rpool/root/alpine
sh# zfs create -o mountpoint=legacy -o quota=16g rpool/root/alpine/var
sh# zfs create -o mountpoint=/home -o atime=off -o setuid=off -o devices=off -o quota=<home-quota> rpool/home
```
> Setting the `<home-quota>` depends on the total size of the pool, generally try to reserve some empty space in the pool.
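The resulting dataset layout and quotas can be reviewed with:
``` shell-session
sh# zfs list -r -o name,quota,mountpoint rpool
```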
Write the encryption key to TPM with `zlevis`:
``` shell-session
sh# zlevis encrypt rpool '{}' < /tmp/rpool.key
```
> We are using the default configuration settings for `zlevis encrypt`, but a different configuration is possible by setting `'{}'` accordingly. To check that it worked, run `zlevis decrypt rpool`.
Finally, export the zpool:
``` shell-session
sh# zpool export rpool
```
## Installation
To install the Alpine Linux distribution on the system, the datasets of the system pool and the EFI partitions have to be mounted to the main system.
First import and decrypt the system pool:
``` shell-session
sh# zpool import -N -R /mnt rpool
sh# zfs load-key -L file:///tmp/rpool.key rpool
```
Then mount the datasets and the ESP on `/mnt`:
``` shell-session
sh# mount -t zfs rpool/root/alpine /mnt
sh# mkdir /mnt/var
sh# mount -t zfs rpool/root/alpine/var /mnt/var
sh# mkdir /mnt/efi
sh# mount -t vfat /dev/md/esp /mnt/efi
```
Now we may install Alpine Linux with the `setup-disk` script:
``` shell-session
sh# export BOOTLOADER=none
sh# setup-disk -m sys /mnt
```
To have a functional chroot into the system, bind-mount the system process directories:
``` shell-session
sh# for dir in dev proc sys run; do
> mount --rbind --make-rslave /$dir /mnt/$dir
> done
sh# chroot /mnt
```
The other setup scripts can be used to configure key aspects of the system. Besides that, a few necessary services have to be activated.
``` shell-session
sh# setup-hostname <hostname>
sh# setup-keymap us
sh# setup-timezone -i <area>/<subarea>
sh# setup-ntp openntpd
sh# setup-sshd -c dropbear
sh# rc-update add acpid default
sh# rc-update add seedrng boot
sh# passwd root #(1)!
```
1. The root password does not really matter because it is going to be locked after a user has been created.
Set the `hwclock` to use `UTC` and disable writing the time back to the hardware clock; running an NTP daemon makes that redundant.
``` shell title="/etc/conf.d/hwclock"
clock="UTC"
clock_hctosys="NO"
clock_systohc="NO"
```
Configure the ESP RAID array to be assembled at boot:
``` shell-session
sh# modprobe raid1
sh# echo raid1 >> /etc/modules-load.d/raid1.conf
sh# mdadm --detail --scan >> /etc/mdadm.conf
sh# rc-update add mdadm boot
sh# rc-update add mdadm-raid boot
```
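The line appended by `mdadm --detail --scan` should look roughly like this (the name and UUID are illustrative):
``` shell title="/etc/mdadm.conf"
ARRAY /dev/md/esp metadata=1.0 name=<hostname>:esp UUID=aaaabbbb:ccccdddd:eeeeffff:00001111
```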
Configure ZFS to mount:
``` shell-session
sh# rc-update add zfs-import sysinit
sh# rc-update add zfs-mount sysinit
sh# rc-update add zfs-load-key sysinit
```
> If a faster boot time is preferred, `zfs-import` and `zfs-load-key` can be omitted in certain cases.
Edit the fstab to set the correct mounts:
``` shell title="/etc/fstab"
rpool/root/alpine / zfs rw,noatime,xattr,posixacl,casesensitive 0 1
rpool/root/alpine/var /var zfs rw,noatime,nodev,nosuid,xattr,posixacl,casesensitive 0 2
/dev/md/esp /efi vfat defaults,nodev,nosuid,noexec,umask=0077 0 2
tmpfs /tmp tmpfs rw,nodev,nosuid,noexec,mode=1777 0 0
proc /proc proc nodev,nosuid,noexec,hidepid=2 0 0
```
Install the following packages to make `mkinitfs` compatible with secureboot and TPM decryption:
``` shell-session
sh# apk add secureboot-hook sbctl tpm2-tools zlevis
```
Configure `mkinitfs` to disable the trigger and to add the `zlevis` hook:
``` shell title="/etc/mkinitfs/mkinitfs.conf"
features="... zlevis"
disable_trigger="yes"
```
> The `mkinitfs` package that supports `zlevis` is as of this moment not yet in the Alpine package repository; for the relevant steps see the [zlevis mkinitfs-implementation](https://docs.ampel.dev/zlevis).
The most important step is the creation of a unified kernel image (UKI) using `secureboot-hook`, which also signs it automatically. Configure the kernel hooks to set the kernel cmdline options and secureboot:
``` shell title="/etc/kernel-hooks.d/secureboot.conf"
cmdline="rw root=ZFS=rpool/root/alpine rootflags=noatime quiet splash"
signing_cert="/var/lib/sbctl/keys/db/db.pem"
signing_key="/var/lib/sbctl/keys/db/db.key"
output_dir="/efi/efi/linux"
output_name="alpine-linux-{flavor}.efi"
```
Use `sbctl` to create the secureboot keys and enroll them:
``` shell-session
sh# sbctl create-keys
sh# sbctl enroll-keys
```
> Whilst enrolling the keys it might be necessary to add the `--microsoft` flag if you are unable to use custom keys.
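Afterwards, the enrollment state can be checked with:
``` shell-session
sh# sbctl status
```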
Set the cache-file of the ZFS pool:
``` shell-session
sh# zpool set cachefile=/etc/zfs/zpool.cache rpool
```
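To confirm the property was set:
``` shell-session
sh# zpool get cachefile rpool
```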
Now, to see if everything went well, run:
``` shell-session
sh# apk fix kernel-hooks
```
If everything was done properly, it should report no warnings.
To install `gummiboot` as a simple bootloader:
``` shell-session
sh# apk add gummiboot
sh# gummiboot install
```
> One may verify the signed files by running `sbctl verify`.
Configure `gummiboot` by specifying the timeout and the default OS:
``` shell title="/efi/loader/loader.conf"
default alpine-linux-lts.efi
timeout 2
editor no
```
Now exit the chroot and you should be able to reboot into a working Alpine system.
``` shell-session
sh# exit
sh# umount -lf /mnt
sh# zpool export rpool
sh# reboot
```