---
title: An Alpine Linux base installation
slug: alpine-linux-base-install
date: 2024-08-12
draft: false
---
This blog entry demonstrates how to install x86_64 Alpine Linux for a server application. Alpine Linux will run on a RAID-configured, encrypted ZFS filesystem with automatic decryption using TPM. Alpine Linux makes a good base for a server because of its simplicity, light weight and security. Check out the Alpine Linux wiki for additional resources and information.
## Provisioning
Flash the Alpine Linux extended ISO and make sure the secureboot keys are reset and TPM is enabled in the BIOS of the host.
After booting the Alpine Linux extended ISO, partition the disks. For this step an internet connection is required, since `zfs`, `sgdisk` and various other necessary packages are not included on the extended ISO and therefore need to be obtained from the Alpine package repository. To set this up, the `setup-interfaces` and `setup-apkrepos` scripts present on the Alpine Linux ISO will be used.
``` shell-session
sh# setup-interfaces -ar #(1)!
sh# setup-apkrepos -c1
```

1.  To use Wi-Fi, simply run `setup-interfaces -r` and select `wlan0` or similar.
A few packages have to be installed first:

``` shell-session
sh# apk add zfs lsblk sgdisk wipefs dosfstools mdadm zlevis
```

and load the ZFS kernel module:

``` shell-session
sh# modprobe zfs
```
Define the disks you want to use for this install:

``` shell-session
sh# export disks="/dev/disk/by-id/<id-disk-1> ... /dev/disk/by-id/<id-disk-n>"
```

with `<id-disk-n>`, for `n` an integer, the id of the disk.
According to the openzfs-FAQ, using `/dev/disk/by-id/` is the best practice for small pools. For larger pools, the best practice changes to using Serial Attached SCSI (SAS); see vdev_id for proper configuration.
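To find these ids, the disks present in the live environment can simply be listed:

``` shell-session
sh# ls -l /dev/disk/by-id/
```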
Wipe the existing disk partitions:

``` shell-session
sh# for disk in $disks; do
> zpool labelclear -f $disk
> wipefs -a $disk
> sgdisk --zap-all $disk
> done
```
Create an EFI system partition (ESP) and a Linux filesystem partition on each disk:

``` shell-session
sh# for disk in $disks; do
> sgdisk -n 1:1M:+512M -t 1:ef00 $disk
> sgdisk -n 2:0:-10M -t 2:8300 $disk
> done
```
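To inspect the resulting partition table on one of the disks (an optional sanity check, using the same placeholder id as above):

``` shell-session
sh# sgdisk -p /dev/disk/by-id/<id-disk-1>
```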
Reload the device nodes:

``` shell-session
sh# mdev -s
```
Define the EFI partitions:

``` shell-session
sh# export efiparts=""
sh# for disk in $disks; do
> efipart=${disk}-part1
> efiparts="$efiparts $efipart"
> done
```
Create an mdraid array on the EFI partitions:

``` shell-session
sh# modprobe raid1
sh# mdadm --create --level 1 --metadata 1.0 --raid-devices <n> /dev/md/esp $efiparts
sh# mdadm --assemble --scan
```
Format the array with a FAT32 filesystem:

``` shell-session
sh# mkfs.fat -F 32 /dev/md/esp
```
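To confirm that the array was assembled correctly, its status can be checked with:

``` shell-session
sh# cat /proc/mdstat
```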
## ZFS pool creation
Define the pool partitions:

``` shell-session
sh# export poolparts=""
sh# for disk in $disks; do
> poolpart=${disk}-part2
> poolparts="$poolparts $poolpart"
> done
```
The ZFS system pool is going to be encrypted. First generate an encryption key and save it temporarily to the file `/tmp/rpool.key` with:

``` shell-session
sh# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 20 | head -n 1 > /tmp/rpool.key && cat /tmp/rpool.key
```
While `zlevis` is used for automatic decryption, this key is required whenever changes are made to the BIOS or secureboot, so make sure to save it.
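For example, the key can be copied to another machine before rebooting (a sketch; this assumes `openssh-client` is installed in the live environment and `<user>@<backup-host>` is a reachable destination of your choosing):

``` shell-session
sh# apk add openssh-client
sh# scp /tmp/rpool.key <user>@<backup-host>:rpool.key
```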
Create the system pool:

``` shell-session
sh# zpool create -f \
    -o ashift=12 \
    -O compression=lz4 \
    -O acltype=posix \
    -O xattr=sa \
    -O dnodesize=auto \
    -O encryption=on \
    -O keyformat=passphrase \
    -O keylocation=prompt \
    -m none \
    rpool raidz1 $poolparts
```
Additionally, the `spare` option can be used to indicate spare disks. If more redundancy is preferred, `raidz2` and `raidz3` are possible alternatives to `raidz1`. If a single disk is used, the `raidz` option can be left out. For further information see zpool-create.
Then create the system datasets:

``` shell-session
sh# zfs create -o mountpoint=none rpool/root
sh# zfs create -o mountpoint=legacy -o quota=24g rpool/root/alpine
sh# zfs create -o mountpoint=legacy -o quota=16g rpool/root/alpine/var
sh# zfs create -o mountpoint=/home -o atime=off -o setuid=off -o devices=off -o quota=<home-quota> rpool/home
```
The right `<home-quota>` depends on the total size of the pool; generally, try to reserve some empty space in the pool.
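To see how much space is available when choosing the quota, the pool and its datasets can be inspected:

``` shell-session
sh# zpool list rpool
sh# zfs list -o name,used,avail -r rpool
```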
Write the encryption key to TPM with `zlevis`:

``` shell-session
sh# zlevis encrypt rpool '{"pcr_ids":"0,5,7"}' < /tmp/rpool.key #(1)!
```

1.  See zlevis functionality for the role of each `pcr_id` and the other options that can be set.
To check that it worked, run `zlevis decrypt rpool`.
Finally, export the zpool:

``` shell-session
sh# zpool export rpool
```
## Installation
To install the Alpine Linux distribution on the system, the datasets of the system pool and the EFI partitions have to be mounted to the live (ISO) environment.
First import and decrypt the system pool:

``` shell-session
sh# zpool import -N -R /mnt rpool
sh# zfs load-key -L file:///tmp/rpool.key rpool
```
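To verify that the pool imported and the key loaded (an optional check; `keystatus` should read `available`):

``` shell-session
sh# zpool status rpool
sh# zfs get keystatus rpool
```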
Then mount the datasets and the ESP on `/mnt`:

``` shell-session
sh# mount -t zfs rpool/root/alpine /mnt
sh# mkdir /mnt/var
sh# mount -t zfs rpool/root/alpine/var /mnt/var
sh# mkdir /mnt/efi
sh# mount -t vfat /dev/md/esp /mnt/efi
```
Now we may install Alpine Linux with the `setup-disk` script:

``` shell-session
sh# export BOOTLOADER=none
sh# setup-disk -m sys /mnt
```
To have a functional chroot into the system, bind the system process directories:

``` shell-session
sh# for dir in dev proc sys run; do
> mount --rbind --make-rslave /$dir /mnt/$dir
> done
sh# chroot /mnt
```
The other setup scripts can be used to configure key aspects of the system. Besides that, a few necessary services have to be activated:
``` shell-session
sh# setup-hostname <hostname>
sh# setup-keymap us
sh# setup-timezone -i <area>/<subarea>
sh# setup-ntp openntpd
sh# setup-sshd -c dropbear
sh# rc-update add acpid default
sh# rc-update add seedrng boot
sh# passwd root #(1)!
```

1.  The root password does not really matter, because it is going to be locked after a user has been created.
Set the hwclock to use UTC and disable writing the time back to the hardware clock; the running NTP daemon makes that unnecessary. Edit `/etc/conf.d/hwclock`:

```
clock="UTC"
clock_hctosys="NO"
clock_systohc="NO"
```
Configure the ESP raid array to mount:

``` shell-session
sh# modprobe raid1
sh# echo raid1 >> /etc/modules-load.d/raid1.conf
sh# mdadm --detail --scan >> /etc/mdadm.conf
sh# rc-update add mdadm boot
sh# rc-update add mdadm-raid boot
```
Configure ZFS to mount:

``` shell-session
sh# rc-update add zfs-mount sysinit
sh# rc-update add zfs-import sysinit
sh# rc-update add zfs-load-key sysinit
```
If a faster boot time is preferred, `zfs-import` and `zfs-load-key` can be omitted in certain cases.
Edit the fstab (`/etc/fstab`) to set the correct mounts:

```
rpool/root/alpine / zfs rw,noatime,xattr,posixacl,casesensitive 0 1
rpool/root/alpine/var /var zfs rw,noatime,nodev,nosuid,xattr,posixacl,casesensitive 0 2
/dev/md/esp /efi vfat defaults,nodev,nosuid,noexec,umask=0077 0 2
tmpfs /tmp tmpfs rw,nodev,nosuid,noexec,mode=1777 0 0
proc /proc proc nodev,nosuid,noexec,hidepid=2 0 0
```
Install the following packages to make `mkinitfs` compatible with secureboot and TPM decryption:

``` shell-session
sh# apk add secureboot-hook sbctl zlevis zlevis-mkinitfs #(1)!
```

1.  The `zlevis-mkinitfs` package is, as of this moment, not yet in the Alpine package repository; for the relevant steps see the zlevis mkinitfs-implementation.
Configure mkinitfs (`/etc/mkinitfs/mkinitfs.conf`) to disable the trigger and to add the `zlevis` module:

```
features="... zlevis"
disable_trigger="yes"
```
The most important step is the creation of a UKI using the `secureboot-hook` of `mkinitfs`, which also automatically signs it. Configure the kernel-hooks (`/etc/kernel-hooks.d/secureboot.conf`) to set the kernel cmdline options and secureboot:

```
cmdline="rw root=ZFS=rpool/root/alpine rootflags=noatime quiet splash"
signing_cert="/var/lib/sbctl/keys/db/db.pem"
signing_key="/var/lib/sbctl/keys/db/db.key"
output_dir="/efi/EFI/Linux"
output_name="alpine-linux-{flavor}.efi"
```
Use `sbctl` to create and enroll the secureboot keys:

``` shell-session
sh# sbctl create-keys
sh# sbctl enroll-keys #(1)!
```

1.  While enrolling the keys, it might be necessary to add the `--microsoft` flag if you are unable to use custom keys.
Set the cache-file of the ZFS pool:

``` shell-session
sh# zpool set cachefile=/etc/zfs/zpool.cache rpool
```
Now, to see if everything went successfully, run:

``` shell-session
sh# apk fix kernel-hooks
```

It should give no warnings if done properly.
To install `systemd-boot` as a user-friendly bootloader:

``` shell-session
sh# apk add systemd-boot
sh# bootctl install
```

One may verify the signed files by running `sbctl verify`.
Configure `systemd-boot` (`/efi/loader/loader.conf`) to specify the timeout and the default OS:

```
default alpine-linux-lts.efi
timeout 2
editor no
```
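As an optional sanity check, the boot entries can be listed; this assumes the EFI variables are accessible inside the chroot through the bind-mounted `/sys`:

``` shell-session
sh# bootctl list
```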
Now exit the chroot and you should be able to reboot into a working Alpine system.

``` shell-session
sh# exit
sh# umount -lf /mnt
sh# zpool export rpool
sh# reboot
```
## Post installation

### Repositories
To set the correct repositories, configure `/etc/apk/repositories`:

```
https://dl-cdn.alpinelinux.org/alpine/latest-stable/main
https://dl-cdn.alpinelinux.org/alpine/latest-stable/community
```
This will use the latest stable repository of Alpine (for example `v3.19`). To use a different version of Alpine, simply change `latest-stable` to whatever version you want. Do note that you cannot (easily) downgrade your system's version.
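After changing the repositories, refresh the package index and upgrade the system:

``` shell-session
sh# apk update
sh# apk upgrade
```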
There is also the `edge` repository, which contains the latest packages, but it is not recommended due to the instability it imposes on the system.
If a package is not yet in a stable release, one may additionally configure:

```
@<repository> https://dl-cdn.alpinelinux.org/alpine/edge/<repository>
```

for the relevant `<repository>` and perform:

``` shell-session
sh# apk add <package>@<repository>
```

for the relevant `<package>`.
### Firmware and drivers

Install the CPU microcode for either AMD or Intel:
=== "AMD"
``` shell-session
sh# apk add amd-ucode
```
=== "Intel"
``` shell-session
sh# apk add intel-ucode
```
To make sure it is included during boot, regenerate the UKI with:

``` shell-session
sh# apk fix kernel-hooks
```
### Swap
To configure swap, install `zram-init`:

``` shell-session
sh# apk add zram-init
```
Configure zram-init (`/etc/conf.d/zram-init`) to create a swap device of one fourth the RAM size:

```
load_on_start="yes"
unload_on_stop="yes"
num_devices="1"
type0="swap"
size0=`LC_ALL=C free -m | awk '/^Mem:/{print int($2/4)}'`
maxs0=1
algo0=zstd
labl0=zram_swap
```
and add `zram-init` to the default runlevel:

``` shell-session
sh# rc-update add zram-init default
```
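To activate it immediately and check that the swap device appears:

``` shell-session
sh# rc-service zram-init start
sh# free -m
```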
### Users
To run applications securely, in an environment with fewer privileges, a regular user is necessary. Before creating the user, install `doas`, to be able to "do as" root when it is required:

``` shell-session
sh# apk add doas
```
and configure doas (`/etc/doas.d/doas.conf`) by editing:

```
permit persist :wheel as root
```
A user can be added in Alpine Linux with the `setup-user` script. Here we can specify the name, groups and more:

``` shell-session
sh# setup-user -g wheel <username>
sh# passwd <username>
```
You may have to change the shell of the user in `/etc/passwd` from `/sbin/nologin` to a shell from `/etc/shells`. Alpine Linux comes with `/bin/ash` by default:

```
<username>:x:1234:1234:<Full Name>:/home/<username>:/bin/<shell>
```
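A quick way to confirm that doas works for the new user before locking root (it should print `root` after asking for the user's password):

``` shell-session
sh# su - <username> -c 'doas whoami'
```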
If you have checked that doas works with the user, you can lock the root account, because keeping it open imposes security risks. This can be done with:

``` shell-session
sh# passwd -l root
```

and by changing its login shell to:

```
root:x:0:0:root:/root:/sbin/nologin
```
## Concluding remarks

This is essentially it: you now have a fully operational Alpine base system running, configured for server use. The next steps are improving the security of the system and configuring the container management software.