---
title: Rootless container management with Podman and runit
slug: rootless-container-management-with-podman-and-runit
date: 2024-08-30
draft: true
---
Containers and pods (collections of containers sharing the same namespaces) enable easy and secure management of hosted applications. Rootless containers and pods can be deployed on a server with Podman as the rootless container engine and runit as the user service manager. The service manager will be set up to automatically start and update the containers and pods at boot, and to periodically back up the volumes and databases of the pods.
## User services with runsvdir
Using `runsvdir` requires `runit` to be installed on the system:
=== "Alpine Linux"
``` shell-session
sh# apk add runit
```
=== "Gentoo Linux"
``` shell-session
sh# emerge -a runit
```
Now create an `openrc` entry `/etc/init.d/runsvdir-user` that will manage `runsvdir`:

``` sh
#!/sbin/openrc-run

user="${RC_SVCNAME##*.}"
svdir="/home/${user}/.local/service"
pidfile="/run/runsvdir-user.${user}.pid"
command="/usr/bin/runsvdir"
command_args="$svdir"
command_user="$user"
command_background=true

depend()
{
	after network-online
}
```
Make the entry executable, symlink it for user `<username>`, and add the service to the default runlevel:

``` shell-session
sh# chmod +x /etc/init.d/runsvdir-user
sh# ln -s /etc/init.d/runsvdir-user /etc/init.d/runsvdir-user.<username>
sh# rc-update add runsvdir-user.<username> default
```
This process can of course be repeated for any number of users.
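Note that the service directory `runsvdir` scans has to exist before the service starts. As a quick sketch (the path matches the `svdir` used above), each user can create it and later inspect their services with `sv`:

``` shell-session
sh$ mkdir -p ~/.local/service
sh$ sv status ~/.local/service/*
```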
## Container management with Podman
Install `podman` with:
=== "Alpine Linux"
``` shell-session
sh# apk add podman
```
=== "Gentoo Linux"
``` shell-session
sh# emerge -a podman
```
Rootless `podman` requires `cgroups` to run, so add it to the default runlevel:

``` shell-session
sh# rc-update add cgroups default
```
Set up the network namespace configuration for the user:

``` shell-session
sh# modprobe tun
sh# echo tun >> /etc/modules-load.d/tun.conf
sh# for i in subuid subgid; do
>     echo <username>:100000:65536 >> /etc/$i
> done
```
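To confirm that the sub-ID ranges are picked up, the mapping can be inspected from inside the user namespace; a quick check (the exact ranges shown depend on the values above):

``` shell-session
sh$ podman unshare cat /proc/self/uid_map
```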
Run the following container to verify that everything works:

``` shell-session
sh$ podman run --rm hello-world
```
### Management of containers
To run a single container, create `~/.config/sv/<container-name>/run`:

``` sh
#!/bin/sh

command="/usr/bin/podman"
command_args="run --replace --rm --name=<container-name> --network=pasta"
env="<container-envs>"
ports="<container-ports>"
mounts="<container-mounts>"
image="<container-image>"

exec 2>&1
exec $command $command_args $env $ports $mounts $image
```
Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/<container-name>/run
sh$ ln -s <home>/.config/sv/<container-name> <home>/.local/service
```
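As an illustration only (the image, port and environment below are assumptions, not part of this setup), a filled-in `run` script could look like this:

``` sh
#!/bin/sh

# Hypothetical example: serve traefik/whoami on host port 8080
command="/usr/bin/podman"
command_args="run --replace --rm --name=whoami --network=pasta"
env="-e TZ=UTC"
ports="-p 8080:80"
mounts=""
image="docker.io/traefik/whoami"

exec 2>&1
exec $command $command_args $env $ports $mounts $image
```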
### Management of pods
Because `podman kube play` returns after starting a pod, `runit` needs a foreground process to supervise. To keep the service alive while the pod runs, and to exit as soon as it stops, create `~/.local/bin/checkpod`:

``` sh
#!/bin/sh

# conf is sourced from the service directory runsv runs in
. ./conf

exec 2>&1

# Poll the pod state; exit once it is no longer running
state=0
while [ $state = 0 ]
do
	sleep 10
	$command pod inspect ${name}-pod | grep -q '"State": "Running"' || state=1
done
```
and make it executable with:

``` shell-session
sh$ chmod +x ~/.local/bin/checkpod
```
To run a pod configured with `~/.config/pods/<pod-name>/<pod-name>-pod.yml` (see alpine-server for examples), we set up the `runit` entry with a `conf`, `run` and `finish` structure. Therefore create `~/.config/sv/<pod-name>/conf`:

``` sh
name="<pod-name>"
home="<home>"
pod_location="${home}/.config/pods/<pod-name>"
bin_location="${home}/.local/bin"
command="/usr/bin/podman"
command_args="--replace --network=pasta"
```
`conf` will contain all the relevant configuration specific to the pod. Now create `~/.config/sv/<pod-name>/run`:

``` sh
#!/bin/sh

. ./conf

exec 2>&1

$command kube play $command_args ${pod_location}/${name}-pod.yml
exec ${bin_location}/checkpod
```
and create `~/.config/sv/<pod-name>/finish`:

``` sh
#!/bin/sh

. ./conf

exec 2>&1
exec $command kube down ${pod_location}/${name}-pod.yml
```

Both `run` and `finish` will stay the same for any pod; only `conf` changes.
Make both `run` and `finish` executable:

``` shell-session
sh$ chmod +x ~/.config/sv/<pod-name>/run
sh$ chmod +x ~/.config/sv/<pod-name>/finish
```
Finally, link the pod to the service directory:

``` shell-session
sh$ ln -s <home>/.config/sv/<pod-name> <home>/.local/service
```
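For reference, a minimal sketch of what `<pod-name>-pod.yml` might contain (the names, image and ports are assumptions; any Kubernetes YAML accepted by `podman kube play` works):

``` yaml
# Hypothetical example pod: a single web container
apiVersion: v1
kind: Pod
metadata:
  name: whoami-pod
spec:
  containers:
    - name: web
      image: docker.io/traefik/whoami
      ports:
        - containerPort: 80
          hostPort: 8080
```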
## Backup of volumes and databases
To back up the volumes of containers and the PostgreSQL databases, create `~/.local/bin/dump`:

``` sh
#!/bin/sh

command="/usr/bin/podman"

# Dump databases; no exec here, otherwise the loop would stop after
# the first database, and no TTY since snooze runs this without one
postgres_databases="<list-of-postgres-databases>"
for database in $postgres_databases
do
	$command exec ${database}-pod-postgres sh -c "pg_dumpall -U postgres | gzip > /dump/${database}.sql.gz"
done

# Export volumes
volumes="<list-of-volumes>"
for volume in $volumes
do
	$command volume export $volume --output <home>/.volumes/${volume}.tar
done
```
Make it executable:

``` shell-session
sh$ chmod +x ~/.local/bin/dump
```
Automate it with `snooze`:
=== "Alpine Linux"
``` shell-session
sh# apk add snooze
```
=== "Gentoo Linux"
``` shell-session
sh# emerge -a snooze
```
and create the corresponding `runit` entry `~/.config/sv/dump/run`:

``` sh
#!/bin/sh

exec 2>&1
# Quote the * so the shell does not glob-expand it
exec snooze -H'*' <home>/.local/bin/dump
```

which executes `dump` every hour.
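If in doubt about the schedule, `snooze` can print the next wake-up time instead of sleeping; a quick check:

``` shell-session
sh$ snooze -n -H'*'
```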
Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/dump/run
sh$ ln -s <home>/.config/sv/dump <home>/.local/service
```
Then `restic` can be used to back up the `.dump` and `.volumes` folders to another server if necessary.
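A minimal sketch of such a backup (the repository location is an assumption; any restic repository works):

``` shell-session
sh$ restic -r sftp:backup@<backup-server>:/srv/restic init
sh$ restic -r sftp:backup@<backup-server>:/srv/restic backup ~/.dump ~/.volumes
```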
By creating `~/.local/bin/load`:

``` sh
#!/bin/sh

command="/usr/bin/podman"

# Load the dumped databases; again no exec, otherwise only the
# first database would be restored
postgres_databases="<list-of-postgres-databases>"
for database in $postgres_databases
do
	$command exec -it ${database}-pod-postgres sh -c "gunzip -c /dump/${database}.sql.gz | psql -U postgres"
done

# Import volumes
volumes="<list-of-volumes>"
for volume in $volumes
do
	$command volume import $volume <home>/.volumes/${volume}.tar
done
```

the volumes and PostgreSQL databases can be reloaded.
Do not forget to make it executable:

``` shell-session
sh$ chmod +x ~/.local/bin/load
```
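When restoring onto a fresh server, the dumps first have to come back from the restic repository before `load` is run; a sketch, using the repository location assumed above:

``` shell-session
sh$ restic -r sftp:backup@<backup-server>:/srv/restic restore latest --target ~
sh$ ~/.local/bin/load
```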
## Proxying with Caddy
While it would be preferable to run the reverse proxy in a container and link the other network namespaces to it, this is unfortunately not possible with `pasta` user network namespaces. The reverse proxy therefore has to run in front of the containers, directly on the host.

Caddy is a simple and modern web server that supports automatic HTTPS and can act as a reverse proxy. Install `caddy` and `libcap` (a necessary dependency) with:
=== "Alpine Linux"
``` shell-session
sh# apk add caddy libcap
```
=== "Gentoo Linux"
``` shell-session
sh# emerge -a caddy libcap
```
Give `caddy` the capability to bind to privileged ports: (1)
{ .annotate }

1.  Such that we are able to run `caddy` rootless.

``` shell-session
sh# setcap cap_net_bind_service=+ep /usr/sbin/caddy
```
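Whether the capability took effect can be verified with `getcap`, which ships with `libcap`:

``` shell-session
sh$ getcap /usr/sbin/caddy
```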
Create the `caddyfile` (1) according to your needs (2). Then convert it with the following to make it persistent:
{ .annotate }

1.  The configuration file of `caddy`.
2.  See alpine-server for examples.

``` shell-session
sh$ caddy adapt -c ~/.config/caddy/caddyfile -p > ~/.config/caddy/caddy.json
```
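For illustration, a minimal `caddyfile` that proxies one site to a pod published on a local port (the domain and port are assumptions) could look like:

``` caddyfile
<domain> {
	reverse_proxy localhost:8080
}
```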
Create the corresponding `runit` entry `~/.config/sv/caddy/run` for `caddy`:

``` sh
#!/bin/sh

command="/usr/sbin/caddy"
command_args="run"

# Only start caddy if it is not already running; the [c] in the
# pattern keeps grep from matching its own process entry
ps | grep "[c]addy run" > /dev/null
if [ $? != 0 ]; then
	exec 2>&1
	exec $command $command_args
fi
```
Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/caddy/run
sh$ ln -s <home>/.config/sv/caddy <home>/.local/service
```
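Once the service is running, a quick check that the proxy answers (the domain is a placeholder, matching the `caddyfile` above):

``` shell-session
sh$ curl -I https://<domain>
```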