Running runit-supervised services inside a vserver

This page describes a setup where you have a runit installation on the host and use it to directly supervise services running in vserver guests.

Motivation and goals

This is what I wanted to achieve:

  • Partition a physical server with many responsibilities into vservers that can easily be upgraded individually without breaking any unrelated stuff.
  • Each service or small set of services should run in its own vserver.
  • Services should be supervised (started, stopped and managed) by runit.
  • Service logs should accumulate on the host, not the guests.
    • svlogd rotates them nicely and can invoke my multilogcheck script to alert me of unusual events as a postprocessor, but
    • I don't want to install multilogcheck inside every vserver because it's messy.
    • I don't like syslog. There are many problems with it, but I won't go into that here.
      • Therefore, services mostly log to stdout and thus to svlogd.
      • For services that absolutely must use syslog, I provide socklog.
      • Again, I don't want a separate socklog instance in every vserver.
      • Alas, socklog is unable to listen on more than one Unix domain socket.
        • Luckily, socat can be used to relay syslog messages from the vservers to the master socklog on the host. (syslog-ng would also have done the job nicely.)
        • It's also possible to bind mount the /dev/log socket into every guest; however, I'm afraid this breaks if you restart socklog after a guest is up, since socklog unlinks and recreates the socket. Workaround: make /dev/log a symlink to, say, /var/run/syslog/socket everywhere, and bind mount the host's /var/run/syslog directory into the guests; that way the socket stays available even after socklog recreates it. (A sketch of this workaround follows the list.)
  • It should be straightforward and next to transparent to manage the services running in vservers.
    • With runit, it's easy to delegate management rights of a service to users (chown and chmod some files in the pertinent supervise directory). This should continue to work.
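
A minimal sketch of the /dev/log symlink workaround mentioned above, for a hypothetical guest named guest (the socklog invocation and the guest fstab syntax are assumptions; adjust to your layout):

# Host: have socklog listen on the shared socket instead of /dev/log
# (socklog run script would end in: exec socklog unix /var/run/syslog/socket)
mkdir -p /var/run/syslog
# Host and guests: make /dev/log a symlink to the shared socket
ln -sf /var/run/syslog/socket /dev/log
ln -sf /var/run/syslog/socket /etc/vservers/guest/vdir/dev/log
# Guest fstab: bind mount the host's directory so the guest sees new sockets
echo '/var/run/syslog /var/run/syslog none bind 0 0' >>/etc/vservers/guest/fstab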

Prerequisites

  1. A fairly recent vserver kernel with support for persistent contexts (I used 2.6.19.2-vs2.2.0-rc8.7).
  2. A recent version of util-vserver that supports persistent contexts correctly (I used 0.30.213-rc1).
  3. Daniel Hozac's signal-relay program.

The big picture

Let's see how it all fits together.

We'd like to achieve something like this (output of vps axfu):

USER       PID CONTEXT             %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1     0 MAIN           0.0  0.0    104    16 ?        Ss   Jan29   0:06 runit
root      4188     0 MAIN           0.0  0.0    132    32 ?        Ss   Jan29   0:10 runsvdir -P /var/service log: .......................................................
root      4250     0 MAIN           0.0  0.0    108    28 ?        Ss   Jan29   0:00  \_ runsv vserver-logrelay-squid
log       4256     0 MAIN           0.0  0.0    128    44 ?        S    Jan29   0:00  |   \_ svlogd -t /var/log/sv/vserver-logrelay-squid
logrelay  3106     0 MAIN           0.0  0.1  25428  1944 ?        S    00:44   0:00  |   \_ socat -d -d -d -D -ls -u UNIX-LISTEN:/etc/vservers/squid/vdir/dev/log,unlink-early,nonblock,mode=666,setuid=logrelay UNIX-CONNECT:/dev/log,type=2,nonblock,forever
root     19272     0 MAIN           0.0  0.0    108    28 ?        Ss   Feb01   0:00  \_ runsv squid
log      19273     0 MAIN           0.0  0.0    128    44 ?        S    Feb01   0:00  |   \_ svlogd -t /var/log/sv/squid
root     10324     0 MAIN           0.0  0.0   5728   364 ?        S    Feb01   0:00  |   \_ signal-relay vserver squid exec squid -N -D -sYC
proxy    10334     2 squid          0.0  0.5  24968  8036 ?        Sl   Feb01   0:01  |       \_ squid -N -D -sYC
root     24219     0 MAIN           0.0  0.0    108    32 ?        Ss   02:02   0:00  \_ runsv nmbd
root     24220     0 MAIN           0.0  0.0   5732   376 ?        S    02:02   0:00  |   \_ signal-relay vserver samba exec /usr/sbin/nmbd -F
root     24230     3 samba          0.0  0.1  30080  2360 ?        Ss   02:02   0:00  |       \_ /usr/sbin/nmbd -F
root     24251     3 samba          0.0  0.0  32172  1496 ?        S    02:02   0:00  |           \_ /usr/sbin/nmbd -F
root     24253     0 MAIN           0.0  0.0    108    28 ?        Ss   02:02   0:00  \_ runsv smbd
root     24254     0 MAIN           0.0  0.0   5732   376 ?        S    02:02   0:00  |   \_ signal-relay vserver samba exec /usr/sbin/smbd -F
root     24268     3 samba          0.0  0.2  41888  3404 ?        Ss   02:02   0:00  |       \_ /usr/sbin/smbd -F
root     24289     3 samba          0.0  0.0  41888  1232 ?        S    02:02   0:00  |           \_ /usr/sbin/smbd -F
root     29625     0 MAIN           0.0  0.0    104    28 ?        Ss   02:22   0:00  \_ runsv cron-squid
root     29626     0 MAIN           0.0  0.0   5732   376 ?        S    02:22   0:00      \_ signal-relay vserver squid exec cron -f
root     29637     2 squid          0.0  0.0  11492  1060 ?        S    02:22   0:00          \_ cron -f

For service supervision to work, we must be able to send signals to our services. Specifically, runsv must be able to send signals to its children. Alas, it's not prepared to send signals across context boundaries, which is where signal-relay comes in.

signal-relay

signal-relay is a small program, not unlike runit's chpst, that does the following (a rough sketch follows the list):

  • it forks a child;
    • inside the child, it execs the program specified on its command line;
  • in the parent, it installs a handler for every signal; each handler relays that signal to the child, even if the child is running in a different context;
  • if the child exits, signal-relay exits.
  • (It can also put the child into its own process group; use the -P switch. Sending signals to process groups in a different context doesn't work yet, though.)
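
Here is a rough shell sketch of this behaviour. It is not Daniel Hozac's actual implementation; it assumes util-vserver's vkill is available to deliver signals across context boundaries:

#!/bin/sh
# Run "$@" as a child and forward trapped signals to it.
"$@" &
CHILD=$!
for sig in HUP INT QUIT USR1 USR2 TERM CONT; do
        # vkill (from util-vserver) can signal processes in other contexts
        trap "vkill -s $sig $CHILD" "$sig"
done
# Wait for the child to exit. If a trap interrupts the wait (status > 128),
# simply wait again; the real signal-relay handles corner cases more carefully.
STATUS=129
while [ "$STATUS" -gt 128 ]; do
        wait "$CHILD"
        STATUS=$?
done
exit "$STATUS"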

Setting up the vservers

When we start a service for runit, we want the command that starts the service to stay in the foreground until the moment the service dies. vserver exec looks just right, but there is a catch: it only works for vservers that have been "started". vserver start, however, doesn't fit very well into the runit way of doing things. You can set it up as a service (this is discussed in util-vserver:InitStyles), but it seems superfluous to leave some processes around just to keep a vserver "started" so that we can run services in it.

What we need is a way of saying "start vserver <guest> if it isn't already started, then exec the service program inside it". Normally, a context with no processes running inside it is destroyed by the kernel; thus, just setting /etc/vservers/guest/apps/init/cmd.start to /bin/true isn't going to work. We would need something that stays around for a while, such as a script that runs sleep 1m &. This would make vserver start happy, so in our service run script we could do something like:

vserver guest status || vserver guest start
exec signal-relay vserver guest exec /path/to/service-program

A more elegant solution is to make the guest context persistent. This way it sticks around even if there are no processes left in it.

cd /etc/vservers/guest
echo persistent >>flags
echo persistent >>nflags
echo /bin/true >apps/init/cmd.start

(nflags is the analogous flags file for the network context.)

Now, vserver guest start should be able to "start" our vserver (set up all of its state), and exit without leaving stray processes around.
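
A quick check, assuming a guest named guest (with the persistent flag set, the context survives even though cmd.start leaves no process behind):

vserver guest start
vserver guest status   # should report the guest as running, with nothing inside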

Running services

Say you have a vserver called squid, and you want to run your squid proxy inside it to keep it insulated from the other processes on the system, and vice versa.

In our current setup, the following run script will do the trick (it borrows a bit from Debian's initscript and uses the Debian defaults file):

#!/bin/sh
exec 2>&1

# Set a default for the maximum number of file descriptors
SQUID_MAXFD=4096

# Figure out the name of the service we're running as
SVNAME=$(basename "$(pwd)")

# Read the configfile of this service; if it sets VSERVERNAME, we'll assume we have to run inside the specified vserver
CONFIG=/etc/default/"$SVNAME"
[ -r "$CONFIG" ] && . "$CONFIG"

# Source Debian defaults
[ -n "$VSERVERNAME" ] && VROOT="/etc/vservers/$VSERVERNAME/vdir" || VROOT=/
[ -r "$VROOT/etc/default/squid" ] && . "$VROOT/etc/default/squid"

[ "$SQUID_MAXFD" -gt 4096 ] && SQUID_MAXFD=4096

if test -f /proc/sys/fs/file-max; then
        global_file_max=$(cat /proc/sys/fs/file-max)
        minimal_file_max=$(($SQUID_MAXFD + 4096))
        [ "$global_file_max" -lt "$minimal_file_max" ] && echo "$minimal_file_max" >/proc/sys/fs/file-max
fi

if [ ! "$VSERVERNAME" = "" ]; then
        VSERVERARGS="signal-relay vserver $VSERVERNAME exec"
        vserver "$VSERVERNAME" status || vserver "$VSERVERNAME" start # start the vserver if necessary
fi

exec chpst -o"$SQUID_MAXFD" $VSERVERARGS squid -N -D -sYC

This script is generic in the sense that it works with or without a vserver setup.

You can run an svlogd for this service like you normally would; or you could have all your svlogds run in a different vserver.
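
For example, a minimal log/run script matching the process listing above (the log user and the /var/log/sv/squid directory are taken from that listing):

#!/bin/sh
# Collect the service's stdout and rotate it with svlogd
exec chpst -u log svlogd -t /var/log/sv/squid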

A run script for cron that automatically figures out if it should run in a vserver:

#!/bin/sh
exec 2>&1
DEPENDENCIES=""
RLIMIT="chpst -t 172800"
SVNAME=$(basename "$(pwd)")
CONFIG=/etc/default/"$SVNAME"

# Assume the part after the last dash is the name of the vserver
# (use POSIX parameter expansion; ${SVNAME/*-/} is a bashism and the script runs under /bin/sh)
if [ ! "${SVNAME##*-}" = "$SVNAME" ]; then
        VSERVERNAME="${SVNAME##*-}"
fi

# By default, we depend on the socklog service if it exists
[ -e /service/socklog ] && DEPENDENCIES=socklog

# And also on the logrelay service of the pertinent vserver
[ -e /service/vserver-logrelay-"$VSERVERNAME" ] && DEPENDENCIES="$DEPENDENCIES vserver-logrelay-$VSERVERNAME"

# Can override VSERVERNAME, RLIMIT and DEPENDENCIES here if necessary
[ -r "$CONFIG" ] && . "$CONFIG"

if [ ! "$VSERVERNAME" = "" ]; then
        VSERVERARGS="signal-relay vserver $VSERVERNAME exec"
        vserver "$VSERVERNAME" status || vserver "$VSERVERNAME" start
fi

if [ -n "$DEPENDENCIES" ]; then
        sv start $DEPENDENCIES || exit 1
fi

exec $RLIMIT $VSERVERARGS cron -f

Now, if you place this run script in a service directory called cron-squid, it will run a cron daemon in the squid guest. This takes care of rotating the squid logs under /var/log/squid, for example; although this could also be done on the host with a few kludges.
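
Should the name-based guess be wrong, the defaults file can override it; a hypothetical /etc/default/cron-squid might contain:

# Override the values the run script auto-detects
VSERVERNAME=squid
DEPENDENCIES="socklog vserver-logrelay-squid"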

The vserver-logrelay-template/run script looks like this:

#!/bin/sh
exec 2>&1
SVNAME=$(basename "$(pwd)")
RUNASUSER=logrelay
MODE=666
VSERVERNAME="${SVNAME#vserver-logrelay-}"
[ -r "/etc/default/$SVNAME" ] && . "/etc/default/$SVNAME"

exec socat -d -d -d -D -ls -u \
        UNIX-LISTEN:/etc/vservers/$VSERVERNAME/vdir/dev/log,unlink-early,nonblock,mode=$MODE,setuid=$RUNASUSER \
        UNIX-CONNECT:/dev/log,type=2,nonblock,forever

This passes syslog messages from a vserver to the host's syslog by acting as a syslog server on the guest's /dev/log socket. Thus, you don't need to run a syslogd inside the vserver, and you don't need to migrate from socklog to syslog-ng on the host.

Symlink this run script into a service directory called vserver-logrelay-squid.
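
For instance, assuming the template lives under /etc/sv (the paths are illustrative; the /var/service scan directory matches the process listing above):

mkdir /etc/sv/vserver-logrelay-squid
ln -s ../vserver-logrelay-template/run /etc/sv/vserver-logrelay-squid/run
ln -s /etc/sv/vserver-logrelay-squid /var/service/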

Unfortunately, socat has a largish memory footprint; something more lightweight could, perhaps, be used.

Comments and additions welcome.

Disadvantages

The most significant problem with this approach (as opposed to using initstyle plain and a separate runit instance in each vserver) is that it's no longer straightforward to manage the services running in vservers from inside the vservers; package management scripts can't stop them for upgrades, for example (unless you modify the initscripts in quite horrible ways).

Initial setup is also slightly more complicated.

These have to be weighed against the advantages of:

  • having fewer superfluous processes (like a separate runit and runsvdir in each vserver);
  • being able to manage the services easily from the host;
  • avoiding the need to run sshd inside each guest in order to be able to easily delegate service management privileges to others.

Nowadays I tend to think the initstyle plain method is better after all.

--Guy- 12:18, 28 September 2008 (CET)
