---
title: Rootless container management with Podman and runit
slug: rootless-container-management-with-podman-and-runit
date: 2024-08-30
draft: true
authors:
  - luc
tags:
  - Alpine Linux
  - Gentoo Linux
categories:
  - Container management
---

Containers and pods (collections of containers sharing the same namespaces) enable easy and secure management of hosted applications. Rootless containers and pods can be deployed on a server with [Podman](https://podman.io/) as the rootless container engine and [runit](http://smarden.org/runit/) as the user service manager. The service manager will be set up to automatically start and update the containers and pods at boot, and to periodically back up the volumes and databases of the pods.

## User services with runsvdir

Using `runsvdir` requires `runit` to be installed on the system:

=== "Alpine Linux"

    ``` shell-session
    sh# apk add runit
    ```

=== "Gentoo Linux"

    ``` shell-session
    sh# emerge -a runit
    ```

Now create an `openrc` service script that will manage `runsvdir`:

``` shell title="/etc/init.d/runsvdir-user"
#!/sbin/openrc-run

user="${RC_SVCNAME##*.}"
svdir="/home/${user}/.local/service"

pidfile="/run/runsvdir-user.${user}.pid"
command="/usr/bin/runsvdir"
command_args="$svdir"
command_user="$user"
command_background=true

depend() {
    after network-online
}
```

Make the script executable, create a symlink for the user `{user}` and add the service to the default runlevel:

``` shell-session
sh# chmod +x /etc/init.d/runsvdir-user
sh# ln -s /etc/init.d/runsvdir-user /etc/init.d/runsvdir-user.{user}
sh# rc-update add runsvdir-user.{user} default
```

> This process can of course be repeated for any number of users.

## Container management with Podman

Install `podman` with:

=== "Alpine Linux"

    ``` shell-session
    sh# apk add podman
    ```

=== "Gentoo Linux"

    ``` shell-session
    sh# emerge -a podman
    ```

Rootless `podman` requires `cgroups` to run, therefore add it to the default runlevel:

``` shell-session
sh# rc-update add cgroups default
```

Set up the namespace configuration for the user by loading the `tun` module and assigning subordinate UID/GID ranges:

``` shell-session
sh# modprobe tun
sh# echo tun >> /etc/modules-load.d/tun.conf
sh# for i in subuid subgid; do
> echo {user}:100000:65536 >> /etc/$i
> done
```

Run the following container to verify that everything works:

``` shell-session
sh$ podman run --rm hello-world
```

### Management of containers

To run a single container create:

``` shell title="~/.config/sv/{container-name}/run"
#!/bin/sh

command="/usr/bin/podman"
command_args="run --replace --rm --name={container-name} --network=pasta"

# Fill in the environment variables, port mappings, volume mounts and image.
env=""
ports=""
mounts=""
image=""

exec 2>&1
exec $command $command_args $env $ports $mounts $image
```

Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/{container-name}/run
sh$ ln -s ~/.config/sv/{container-name} ~/.local/service
```

### Management of pods

To check if a pod is running, create:

``` shell title="~/.local/bin/checkpod"
#!/bin/sh
# conf is sourced relative to the service directory, which is the
# working directory when this script is exec'd from the run script.
. ./conf

exec 2>&1

state=0
while [ $state -eq 0 ]
do
    sleep 10
    $command pod inspect ${name}-pod | grep -q '"State": "Running"' || state=1
done
```

and make it executable with:

``` shell-session
sh$ chmod +x ~/.local/bin/checkpod
```

To run a pod configured with `~/.config/pods/{pod-name}-pod.yml` (see [alpine-server](https://git.lucbijl.nl/luc/alpine-server) for examples), the `runit` entry is set up with a `conf`, `run` and `finish` structure. Therefore create:

``` shell title="~/.config/sv/{pod-name}/conf"
name=""
home=""

pod_location="${home}/.config/pods"
bin_location="${home}/.local/bin"

command="/usr/bin/podman"
command_args="--replace --network=pasta"
```

which contains all the configuration specific to the pod.
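For illustration, a filled-in `conf` for a hypothetical pod called `nextcloud` belonging to a user `luc` might look as follows (both names are assumptions, adapt them to your own pod and user):

``` shell title="~/.config/sv/nextcloud/conf"
# Hypothetical example values; adapt name and home to your own setup.
name="nextcloud"
home="/home/luc"

pod_location="${home}/.config/pods"
bin_location="${home}/.local/bin"

command="/usr/bin/podman"
command_args="--replace --network=pasta"
```

With these values, `run` plays `~/.config/pods/nextcloud-pod.yml` and `checkpod` polls the pod named `nextcloud-pod`.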
Now create:

``` shell title="~/.config/sv/{pod-name}/run"
#!/bin/sh
. ./conf

exec 2>&1

$command kube play $command_args ${pod_location}/${name}-pod.yml
exec ${bin_location}/checkpod
```

and create:

``` shell title="~/.config/sv/{pod-name}/finish"
#!/bin/sh
. ./conf

exec 2>&1

exec $command kube down ${pod_location}/${name}-pod.yml
```

which stay the same for any pod. Make both `run` and `finish` executable:

``` shell-session
sh$ chmod +x ~/.config/sv/{pod-name}/run
sh$ chmod +x ~/.config/sv/{pod-name}/finish
```

Finally, link the pod to the service directory:

``` shell-session
sh$ ln -s ~/.config/sv/{pod-name} ~/.local/service
```

### Backup of volumes and databases

To back up the container volumes and the PostgreSQL databases, create:

``` shell title="~/.local/bin/dump"
#!/bin/sh

command="/usr/bin/podman"

# Dumps databases
postgres_databases=""
for database in $postgres_databases
do
    $command exec ${database}-pod-postgres sh -c "pg_dumpall -U postgres | gzip > /dump/${database}.sql.gz"
done

# Exports volumes
volumes=""
for volume in $volumes
do
    $command volume export $volume --output ${HOME}/.volumes/${volume}.tar
done
```

Make it executable:

``` shell-session
sh$ chmod +x ~/.local/bin/dump
```

Automate it with `snooze`:

=== "Alpine Linux"

    ``` shell-session
    sh# apk add snooze
    ```

=== "Gentoo Linux"

    ``` shell-session
    sh# emerge -a snooze
    ```

and create the corresponding `runit` entry:

``` shell title="~/.config/sv/dump/run"
#!/bin/sh

exec 2>&1
exec snooze -H'*' ${HOME}/.local/bin/dump
```

which executes `dump` every hour. Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/dump/run
sh$ ln -s ~/.config/sv/dump ~/.local/service
```

Then `restic` can be used to back up the `.dump` and `.volumes` folders to another server if necessary.

By creating:

``` shell title="~/.local/bin/load"
#!/bin/sh

command="/usr/bin/podman"

# Loads dumped databases
postgres_databases=""
for database in $postgres_databases
do
    $command exec ${database}-pod-postgres sh -c "gunzip -c /dump/${database}.sql.gz | psql -U postgres"
done

# Imports volumes
volumes=""
for volume in $volumes
do
    $command volume import $volume ${HOME}/.volumes/${volume}.tar
done
```

the volumes and PostgreSQL databases can be reloaded. Do not forget to make it executable:

``` shell-session
sh$ chmod +x ~/.local/bin/load
```

## Proxying with Caddy

While it would be more optimal to run a reverse proxy in a container and link the other network namespaces to it, this is unfortunately not possible with `pasta` user network namespaces. Therefore the reverse proxy has to run in front of the containers, directly on the system. Caddy is a simple and modern web server that supports automatic HTTPS and can act as a reverse proxy.

Install `caddy` and `libcap` (a necessary dependency) with:

=== "Alpine Linux"

    ``` shell-session
    sh# apk add caddy libcap
    ```

=== "Gentoo Linux"

    ``` shell-session
    sh# emerge -a caddy libcap
    ```

Give `caddy` the privilege to bind to privileged ports: (1)
{ .annotate }

1.  Such that we are able to run `caddy` rootless.

``` shell-session
sh# setcap cap_net_bind_service=+ep /usr/sbin/caddy
```

Create the `caddyfile` (1) according to your needs. (2)
{ .annotate }

1.  The configuration file of `caddy`.
2.  See [alpine-server](https://git.lucbijl.nl/luc/alpine-server) for examples.
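As a rough illustration, a minimal `caddyfile` that proxies a domain to a hypothetical container published on local port 8080 could look like the following (the domain and port are assumptions and must be adapted to your own containers):

``` caddyfile title="~/.config/caddy/caddyfile"
# Hypothetical example: replace the domain and port with your own values.
example.org {
    reverse_proxy localhost:8080
}
```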
Then convert it with the following command to make it persistent:

``` shell-session
sh$ caddy adapt -c ~/.config/caddy/caddyfile -p > ~/.config/caddy/caddy.json
```

Create the corresponding `runit` entry for `caddy`:

``` shell title="~/.config/sv/caddy/run"
#!/bin/sh

command="/usr/sbin/caddy"
command_args="run --config ${HOME}/.config/caddy/caddy.json"

# Only start caddy if it is not already running.
if ! ps | grep -v grep | grep -q "$command"; then
    exec 2>&1
    exec $command $command_args
fi
```

Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/caddy/run
sh$ ln -s ~/.config/sv/caddy ~/.local/service
```
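With all services linked into `~/.local/service`, runit's `sv` can be pointed at the individual service directories to check or control them. A minimal sketch, assuming the `caddy` service and a pod service created above:

``` shell-session
sh$ sv status ~/.local/service/caddy
sh$ sv restart ~/.local/service/{pod-name}
```

Passing the full service directory path avoids having to set `SVDIR`, which `sv` would otherwise use to locate services by name.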