Tiny multi process init for containers written in Rust 🦀
For example, use it when you want to run nginx and php-fpm in a single
container. It's just a single statically linked binary.
This is very similar to concurrently but also acts as an init by
implementing zombie process reaping and signal forwarding. You could think
of it as a combination of tini (the default init in Docker) and
concurrently.
Wait. Containers should run only a single process, right? Read this.
The exit code of multip will be the one used by the first dead child.
Grab a pre-built binary from the releases page.
The binary is statically linked with musl libc, so it runs even in bare-bones distros such as Alpine Linux.
multip "web: nginx" "php: php-fpm"
The web: and php: parts are prefixes for each process's output. The rest is
passed to /bin/sh with exec, e.g. /bin/sh -c "exec nginx".
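To see what that expansion does, the command part can be run by hand. A minimal sketch, with nginx swapped for echo so it runs anywhere:

```shell
#!/bin/sh
# Roughly what multip does for the argument "web: nginx": the part after
# the prefix is handed to /bin/sh -c with exec. echo stands in for nginx
# here so the sketch is runnable without nginx installed.
/bin/sh -c "exec echo started web process"
```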
There are none, but you can delegate to wrapper scripts.
Create start.sh with:
#!/bin/sh
set -eu
export API_ENDPOINT=http://api.example/graphql
exec node /app/server.js
and call multip "server: /app/start.sh" "other: /path/to/some/executable".
Remember to call the actual command with exec so it replaces the wrapper
script process instead of starting a new subprocess.
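The difference exec makes can be seen by comparing process IDs. A small illustration (not from the multip docs):

```shell
#!/bin/sh
# $$ is this shell's PID. exec replaces the shell with the new command,
# which therefore keeps the same PID -- no extra subprocess is left
# behind, so signals from multip reach the real process directly.
echo "wrapper pid: $$"
exec /bin/sh -c 'echo "command pid: $$"'
```

Both lines print the same PID, because exec never forks.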
If you start multip as root, you can drop the root privileges with setpriv,
for example:
#!/bin/sh
set -eu
exec setpriv \
    --reuid www-data \
    --regid www-data \
    --clear-groups \
    node /app/server.js
To restart a crashed process, wrap it in a loop:
#!/bin/sh
set -eu
while true; do
    ret=0
    node /app/server.js || ret=$?
    echo "Server died with $ret. Restarting soon..."
    sleep 1
done
Note that here we cannot use exec
because we need to keep the script alive
for restarts.
multip brings all processes down even when a child exits with a success
status code (zero). You can keep the others running with sleep infinity.
#!/bin/sh
set -eu
ret=0
node /app/server.js || ret=$?
if [ "$ret" = "0" ]; then
    exec sleep infinity
fi
exit $ret
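The pattern can be checked in isolation by swapping the server for a command with a known exit status; false (which exits 1) stands in here:

```shell
#!/bin/sh
set -eu
ret=0
false || ret=$?              # stand-in for: node /app/server.js
if [ "$ret" = "0" ]; then
    exec sleep infinity      # success: park the script instead of exiting
fi
echo "would exit with $ret"  # the real script does: exit $ret
```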
Single process inits: tini, the default init shipped with Docker
Multi process inits
Plain multi process runners: concurrently
In reality most of your containers are multi-process containers anyway if they happen to use worker processes or spawn processes for one-off tasks. So there's nothing technically wrong with them.
It is usually good design to create single purpose containers, but it's not always the best approach. For example, serving PHP apps with php-fpm is difficult to do with single process containers because php-fpm does not speak HTTP but FastCGI, so you need some web server to translate FastCGI to HTTP. You can run nginx in a separate container which proxies to the php-fpm container, but because php-fpm cannot serve static files you must deploy your code to both containers, which can be a hassle to manage.
With multip it is possible to create a container which runs both but acts
like it has only one process, with minimal overhead. This way the fact that
it uses FastCGI is internal to the container, and to its users it's like any
other container speaking HTTP.
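As a sketch, such a container's entrypoint could be a one-liner like this (the nginx and php-fpm flags are assumptions for illustration, not taken from the multip docs):

```shell
#!/bin/sh
# Hypothetical entrypoint for an nginx + php-fpm container. Both
# processes stay in the foreground; multip reaps zombies and forwards
# signals to both.
exec multip "web: nginx -g 'daemon off;'" "php: php-fpm --nodaemonize"
```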