---
title: Rootless container management with Podman and runit
slug: rootless-container-management-with-podman-and-runit
date: 2024-08-30
draft: true
authors:
  - luc
tags:
  - Alpine Linux
  - Gentoo Linux
categories:
  - Container management
---

Containers and pods (collections of containers sharing the same namespaces) enable easy and secure management of hosted applications. Rootless containers and pods can be deployed on a server with [Podman](https://podman.io/) as the rootless container engine and [runit](http://smarden.org/runit/) as the user service manager. The service manager will be set up to automatically start and update the containers and pods at boot, and to periodically back up the volumes and databases of the pods.

<!-- more -->

## User services with runsvdir

Using `runsvdir` requires `runit` to be installed on the system:

=== "Alpine Linux"

    ``` shell-session
    sh# apk add runit
    ```

=== "Gentoo Linux"

    ``` shell-session
    sh# emerge -a runit
    ```

Now create an `openrc` entry that will manage `runsvdir`:

``` shell title="/etc/init.d/runsvdir-user"
#!/sbin/openrc-run

# The target user is taken from the service name suffix (runsvdir-user.<username>).
user="${RC_SVCNAME##*.}"
svdir="/home/${user}/.local/service"
pidfile="/run/runsvdir-user.${user}.pid"

command="/usr/bin/runsvdir"
command_args="$svdir"
command_user="$user"
command_background=true

depend()
{
    after network-online
}
```

Make the entry executable, create a symlink for user `<username>` and add the service to the default runlevel:

``` shell-session
sh# chmod +x /etc/init.d/runsvdir-user
sh# ln -s /etc/init.d/runsvdir-user /etc/init.d/runsvdir-user.<username>
sh# rc-update add runsvdir-user.<username> default
```

> This process can of course be repeated for any number of users.

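The supervisor can also be started right away, without waiting for the next boot:

``` shell-session
sh# rc-service runsvdir-user.<username> start
```
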
## Container management with Podman

Install `podman` with:

=== "Alpine Linux"

    ``` shell-session
    sh# apk add podman
    ```

=== "Gentoo Linux"

    ``` shell-session
    sh# emerge -a podman
    ```

Rootless `podman` requires `cgroups` to run; therefore, add it to the default runlevel:

``` shell-session
sh# rc-update add cgroups default
```

Set up the network and user namespace configuration for the user:

``` shell-session
sh# modprobe tun
sh# echo tun >> /etc/modules-load.d/tun.conf
sh# for i in subuid subgid; do
> echo <username>:100000:65536 >> /etc/$i
> done
```

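To confirm that the new ID mappings are picked up (a re-login, or `podman system migrate`, may be needed for an already active user), inspect the mapping inside the user namespace:

``` shell-session
sh$ podman unshare cat /proc/self/uid_map
```
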
Run the following container to verify that everything works:

``` shell-session
sh$ podman run --rm hello-world
```

### Management of containers

To run a single container, create:

``` shell title="~/.config/sv/<container-name>/run"
#!/bin/sh

command="/usr/bin/podman"
command_args="run --replace --rm --name=<container-name> --network=pasta"
env="<container-envs>"
ports="<container-ports>"
mounts="<container-mounts>"
image="<container-image>"

# Redirect stderr to stdout and let runsv supervise podman directly.
exec 2>&1
exec $command $command_args $env $ports $mounts $image
```

Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/<container-name>/run
sh$ ln -s <home>/.config/sv/<container-name> <home>/.local/service
```

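Once linked, `runsvdir` picks the service up within a few seconds and its state can be checked with `sv`:

``` shell-session
sh$ sv status ~/.local/service/<container-name>
```
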
### Management of pods

To check whether a pod is running, create:

``` shell title="~/.local/bin/checkpod"
#!/bin/sh

# conf lives in the pod's service directory, which checkpod inherits as its working directory from run.
. ./conf

exec 2>&1

state=0

# Poll the pod state and exit once the pod is no longer running, so that runsv restarts the service.
while [ $state = 0 ]
do
    sleep 10
    $command pod inspect ${name}-pod | grep -q '"State": "Running"' || state=1
done
```

and make it executable with:

``` shell-session
sh$ chmod +x ~/.local/bin/checkpod
```

To run a pod configured with `~/.config/pods/<pod-name>/<pod-name>-pod.yml` (see [alpine-server](https://git.lucbijl.nl/luc/alpine-server) for examples), set up the `runit` entry with a `conf`, `run` and `finish` structure. First create:

``` shell title="~/.config/sv/<pod-name>/conf"
name="<pod-name>"
home="<home>"
pod_location="${home}/.config/pods/<pod-name>"
bin_location="${home}/.local/bin"
command="/usr/bin/podman"
command_args="--replace --network=pasta"
```

which contains all the configuration specific to the pod. Now create:

``` shell title="~/.config/sv/<pod-name>/run"
#!/bin/sh

. ./conf

exec 2>&1
# Start the pod, then hand control to checkpod so runsv keeps supervising it.
$command kube play $command_args ${pod_location}/${name}-pod.yml
exec ${bin_location}/checkpod
```

and create:

``` shell title="~/.config/sv/<pod-name>/finish"
#!/bin/sh

. ./conf

exec 2>&1
exec $command kube down ${pod_location}/${name}-pod.yml
```

Both `run` and `finish` stay the same for any pod; only `conf` has to be adapted.

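For reference, the pod manifest consumed by `podman kube play` is a standard Kubernetes pod definition. A minimal, hypothetical `<pod-name>-pod.yml` with an application container and a PostgreSQL container could look like this (image, port, password and paths are placeholders):

``` yaml title="~/.config/pods/<pod-name>/<pod-name>-pod.yml"
apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>-pod
spec:
  containers:
    - name: app
      image: <container-image>
      ports:
        - containerPort: 8080
          hostPort: 8080
    - name: postgres
      image: docker.io/library/postgres:16
      env:
        - name: POSTGRES_PASSWORD
          value: <postgres-password>
      volumeMounts:
        - name: dump
          mountPath: /dump
  volumes:
    - name: dump
      hostPath:
        path: <home>/.dump/<pod-name>
        type: DirectoryOrCreate
```

Podman names the resulting containers `<pod-name>-pod-<container-name>`, which the backup script below relies on, and the `/dump` mount is where the database dumps will land.
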
Make both `run` and `finish` executable:

``` shell-session
sh$ chmod +x ~/.config/sv/<pod-name>/run
sh$ chmod +x ~/.config/sv/<pod-name>/finish
```

Finally, link the pod to the service directory:

``` shell-session
sh$ ln -s <home>/.config/sv/<pod-name> <home>/.local/service
```

### Backup of volumes and databases

To back up the volumes of containers and the PostgreSQL databases, create:

``` shell title="~/.local/bin/dump"
#!/bin/sh

command="/usr/bin/podman"

# Dumps databases

postgres_databases="<list-of-postgres-databases>"

for database in $postgres_databases
do
    $command exec ${database}-pod-postgres sh -c "pg_dumpall -U postgres | gzip > /dump/${database}.sql.gz"
done

# Exports volumes

volumes="<list-of-volumes>"

for volume in $volumes
do
    $command volume export $volume --output <home>/.volumes/${volume}.tar
done
```

Make it executable:

``` shell-session
sh$ chmod +x ~/.local/bin/dump
```

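The script can also be run manually to take an immediate backup:

``` shell-session
sh$ ~/.local/bin/dump
```
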
Automate it with `snooze`:

=== "Alpine Linux"

    ``` shell-session
    sh# apk add snooze
    ```

=== "Gentoo Linux"

    ``` shell-session
    sh# emerge -a snooze
    ```

and create the corresponding `runit` entry:

``` shell title="~/.config/sv/dump/run"
#!/bin/sh

exec 2>&1
exec snooze -H'*' <home>/.local/bin/dump
```

which executes `dump` every hour.

Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/dump/run
sh$ ln -s <home>/.config/sv/dump <home>/.local/service
```

Then `restic` can be used to back up the `.dump` and `.volumes` folders to another server if necessary.

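A minimal sketch with an SFTP repository (user, host and repository path are placeholders):

``` shell-session
sh$ restic -r sftp:<user>@<backup-host>:<repository-path> init
sh$ restic -r sftp:<user>@<backup-host>:<repository-path> backup ~/.dump ~/.volumes
```
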
By creating:

``` shell title="~/.local/bin/load"
#!/bin/sh

command="/usr/bin/podman"

# Loads dumped databases

postgres_databases="<list-of-postgres-databases>"

for database in $postgres_databases
do
    $command exec ${database}-pod-postgres sh -c "gunzip -c /dump/${database}.sql.gz | psql -U postgres"
done

# Imports volumes

volumes="<list-of-volumes>"

for volume in $volumes
do
    $command volume import $volume <home>/.volumes/${volume}.tar
done
```

the volumes and PostgreSQL databases can be restored.

Do not forget to make it executable:

``` shell-session
sh$ chmod +x ~/.local/bin/load
```

## Proxying with Caddy

While it would be preferable to run the reverse proxy in a container and link the other network namespaces to it, this is unfortunately not possible with `pasta` user network namespaces. Therefore, the reverse proxy has to run in front of the containers, directly on the system.

Caddy is a simple and modern web server that supports automatic HTTPS and can act as a reverse proxy. Install `caddy` and `libcap` (a necessary dependency) with:

=== "Alpine Linux"
|
||||||
|
|
||||||
|
``` shell-session
|
||||||
|
sh# apk add caddy libcap
|
||||||
|
```
|
||||||
|
|
||||||
|
=== "Gentoo Linux"
|
||||||
|
|
||||||
|
``` shell-session
|
||||||
|
sh# emerge -a caddy libcap
|
||||||
|
```
|
||||||
|
|
||||||
|
Give `caddy` the capability to bind to privileged ports (below 1024): (1)
{ .annotate }

1. So that `caddy` can be run rootless.

``` shell-session
sh# setcap cap_net_bind_service=+ep /usr/sbin/caddy
```

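The result can be verified with `getcap`, which should list `cap_net_bind_service` for the binary:

``` shell-session
sh# getcap /usr/sbin/caddy
```
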
Create the `caddyfile` (1) according to your needs (2), then convert it to Caddy's native JSON configuration to make it persistent:
{ .annotate }

1. The configuration file of `caddy`.
2. See [alpine-server](https://git.lucbijl.nl/luc/alpine-server) for examples.

``` shell-session
sh$ caddy adapt -c ~/.config/caddy/caddyfile -p > ~/.config/caddy/caddy.json
```

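For reference, a minimal `caddyfile` that proxies one of the containers or pods above could look like this (the domain and upstream port are placeholders):

``` caddyfile title="~/.config/caddy/caddyfile"
<domain> {
    reverse_proxy localhost:<container-port>
}
```
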
Create the corresponding `runit` entry for `caddy`:

``` shell title="~/.config/sv/caddy/run"
#!/bin/sh

command="/usr/sbin/caddy"
command_args="run --config <home>/.config/caddy/caddy.json"

# Only start caddy if it is not already running.
if ! ps | grep "[c]addy run" > /dev/null
then
    exec 2>&1
    exec $command $command_args
fi
```

Make it executable and link it to the service directory:

``` shell-session
sh$ chmod +x ~/.config/sv/caddy/run
sh$ ln -s <home>/.config/sv/caddy <home>/.local/service
```