== Getting High with Lenny ==

The aim here is to set up some highly available services on Debian Lenny (at this moment, October 1st, still due to be released).

There has been a lot of buzz about virtualisation and High Availability for a while now, and while Vserver is very well capable of this job, the documented examples are a little lacking compared to some other virtualisation techniques, so I thought I'd do my share.

I prefer to use Vserver for the "virtualisation" because of its configurability, the shared memory and CPU resources and, basically, the raw speed.
DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly.
In my experience it takes a few seconds to have several Vservers fail over to another machine with this setup.

The main aim here is to give a single working example without going too much into the details of every option; the scenario is relatively simple, but different variations can be made.

For this set up we will have:

* 2 machines
* both machines have 1 single large DRBD partition
* primary/secondary: there is always 1 machine active and 1 on standby
* 1 LVM partition per Vserver on top of the DRBD partition, for quota support from within the guest and for LVM snapshots
* the Vservers' /etc/vservers and /var/lib/vservers directories will be placed on the DRBD partition

In case the main machine that runs the Vservers goes down, the synchronized second machine should take over and automatically start the Vservers.

Basically this is an on-line RAID solution that can keep your services running in case of hardware failure; it is NOT a back-up replacement.

The cost of this setup is that you always have 1 idle machine on standby. This cost can be justified by the fact that Linux-VServer enables you to make full use of the 1 machine that is running; you could also consider running this on slightly less expensive (reliable) hardware.

Also note that I will be using R1 style configuration for Heartbeat. R1 style can be considered deprecated when using Heartbeat2, but I could not get my head around the R2 XML configuration, so if you want R2 you might want to have a look at [[Fail-over]].

The partitioning looks as follows:

<pre>
c0d0p1   Boot   Primary   Linux ext3             10001.95
c0d0p5          Logical   Linux swap / Solaris    1003.49
c0d0p6          Logical   Linux                 280325.77
</pre>

<blockquote>
'''machine1''' will use the following names:
* hostname = node1
* IP number = 192.168.1.100
* is primary for r0 on disk c0d0p6
* physical volume on r0 is /dev/drbd0
* volume group on /dev/drbd0 is called drbdvg0
</blockquote>

<blockquote>
'''machine2''' will use the following names:
* hostname = node2
* IP number = 192.168.1.200
* is secondary for r0 on disk c0d0p6

The Volume Group and the Physical Volume will be identical on node2 once it becomes the primary for r0.
</blockquote>

== Loadbalance-Failover the network cards ==

Maybe not very specific to Vserver, Heartbeat or DRBD, but bonding your network cards for load balancing and failover is always useful. Some more in-depth details by Carla Schroder can be found here:
[http://www.enterprisenetworkingplanet.com/nethub/article.php/3696561]
I did not do it for the DRBD crossover cable between the nodes, although this is actually highly recommended.
We need both mii-tool and ethtool.

<code>
apt-get install ethtool ifenslave-2.6
</code>

<code>
nano /etc/modprobe.d/arch/i386
</code>

To load the modules with the correct options at boot time:

<pre>
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
</pre>

And set the interfaces eth0 and eth1 as slaves to bond0; eth2 is also set up here for the crossover cable carrying the DRBD connection to the failover machine.

<code>
nano /etc/network/interfaces
</code>
<pre>
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto bond0
iface bond0 inet static
        address 123.123.123.100
        netmask 255.255.255.0
        network 123.123.123.0
        broadcast 123.123.123.255
        gateway 123.123.123.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 123.123.123.45
        dns-search example.com
        up /sbin/ifenslave bond0 eth0 eth1
        down ifenslave -d bond0 eth0 eth1

auto eth2
iface eth2 inet static
        address 192.168.1.100
        netmask 255.255.255.0
</pre>

Configured this way, the system needs to be rebooted before the changes take effect; otherwise you would have to load the drivers and ifdown eth0 and eth1 first before ifup bond0, but I'm planning to install a new kernel anyway in the next step.
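
After the reboot you can quickly sanity-check the bond; the bonding driver exposes its state through /proc, and mii-tool shows the link state of the slaves:

<pre>
cat /proc/net/bonding/bond0   # bonding mode, MII status and the enslaved interfaces
mii-tool eth0 eth1            # link state of the individual slaves
</pre>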

== Install the Vserver packages ==

<code>
apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools
</code>

As usual a reboot is needed to boot this kernel.

<blockquote>
With Etch I found that the Vserver kernel often ended up as second in the grub list; not so in Lenny, but to be safe check the kernel stanza in /boot/grub/menu.lst, especially when doing this from a remote location.
</blockquote>

== Install DRBD8, LVM2 and Heartbeat ==

<code>
apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2
</code>

<blockquote>
Not sure about this, but DRBD always needed to be compiled against the running kernel; is this still the case with the kernel-specific module packages? I did not check, but it would be good to know in case of a kernel upgrade.
</blockquote>

== Build DRBD8 ==

Although pre-built module packages for DRBD8 are available in the repository, the drbd8-module-source package lets you easily build the module from source against the running kernel.

To do this we just issue this command:

<code>
m-a a-i drbd8
</code>
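
m-a is short for module-assistant. In case it is not installed yet, a sketch of the usual preparation steps (standard module-assistant usage, package names as in Debian):

<pre>
apt-get install module-assistant
m-a update     # refresh the list of available module packages
m-a prepare    # install build tools and the headers for the running kernel
</pre>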

And to load it into the kernel:

<code>
depmod -ae
</code>

<code>
modprobe drbd
</code>

=== Configure DRBD8 ===

Now that we have the essentials installed we can configure DRBD. Again, I will not go into the details of all the options here, so check out the default config and http://www.drbd.org/ to find a match for your set up.

<code>
mv /etc/drbd.conf /etc/drbd.conf.original
</code>

<code>
nano /etc/drbd.conf
</code>

<pre>
global {
    usage-count no;
}

common {
    syncer { rate 100M; }
}

resource r0 {
    protocol C;
    handlers {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
        outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    }

    startup {
        degr-wfc-timeout 120;    # 2 minutes.
    }

    disk {
        on-io-error detach;
    }

    net {
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }

    syncer {
        rate 100M;
        al-extents 257;
    }

    on node1 {
        device    /dev/drbd0;
        disk      /dev/cciss/c0d0p6;
        address   192.168.1.100:7788;
        meta-disk internal;
    }

    on node2 {
        device    /dev/drbd0;
        disk      /dev/cciss/c0d0p6;
        address   192.168.1.200:7788;
        meta-disk internal;
    }
}
</pre>

Before we start DRBD we change some permissions; these allow Heartbeat's drbd-peer-outdater (referenced in the handlers section above) to run drbdsetup and drbdmeta as a non-root user, otherwise DRBD will complain about it.
So on both nodes:
<pre>
chgrp haclient /sbin/drbdsetup
chmod o-x /sbin/drbdsetup
chmod u+s /sbin/drbdsetup
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta
</pre>

=== Create the DRBD devices ===

On both nodes:

node1

<code>
drbdadm create-md r0
</code>

node2

<code>
drbdadm create-md r0
</code>

node1

<code>
drbdadm up r0
</code>

node2

<code>
drbdadm up r0
</code>

<blockquote>
'''The following should be done on the node that will be the primary!'''
</blockquote>

On node1

<code>
drbdadm -- --overwrite-data-of-peer primary r0
</code>

Now <code>watch cat /proc/drbd</code> should show you something like this:

<pre>
version: 8.0.13 (api:86/proto:86)
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07
 0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0
        [===>................] sync'ed: 22.1% (208411/267331)M
        finish: 4:04:44 speed: 14,472 (12,756) K/sec
        resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172
        act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102
</pre>
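
Once the initial sync has finished, both disk states should be UpToDate. A quick way to confirm this with the drbd8 userland tools:

<pre>
drbdadm state r0    # should print Primary/Secondary on node1
drbdadm dstate r0   # should print UpToDate/UpToDate after the sync
</pre>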

== Configure LVM2 ==

<blockquote>
'''Important:''' LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same device, this will lead to errors where LVM reads and writes the same data through both paths.
So, to limit it to scanning /dev/drbd devices only, we do the following on both nodes.
</blockquote>

<code>
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original
</code>

<code>
nano /etc/lvm/lvm.conf
</code>

<pre>
#filter = [ "a/.*/" ]
filter = [ "a|/dev/drbd|", "r|.*|" ]
</pre>

To re-scan with the new settings, on both nodes:

<code>
vgscan
</code>

=== Create the Physical Volume ===

The following only needs to be done on the node that is the primary!!

On node1

<code>
pvcreate /dev/drbd0
</code>

=== Create the Volume Group ===

The following only needs to be done on the node that is the primary!!

On node1

<code>
vgcreate drbdvg0 /dev/drbd0
</code>
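
To confirm that the filter works and the volume group came out as expected, the standard LVM2 reporting tools can be used:

<pre>
pvs    # should list /dev/drbd0 as the only physical volume
vgs    # should show drbdvg0 with the expected size
</pre>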

=== Create the Logical Volume ===

Yes, again only on the node that is primary!!!

For this example about 50GB; this leaves plenty of space to expand the volumes or to add extra volumes later on.

On node1

<code>
lvcreate -L50000 -n web drbdvg0
</code>

Then we put a file system on the logical volume:

<code>
mkfs.ext3 /dev/drbdvg0/web
</code>

Create the directory where we want to mount the Vserver:

<code>
mkdir -p /VSERVERS/web
</code>

and mount the logical volume on the mount point:

<code>
mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/
</code>
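
Expanding such a volume later on works as usual with LVM; a minimal sketch, assuming your kernel and ext3 filesystem support online growing (otherwise unmount first), run on the node that is currently primary:

<pre>
lvextend -L+10G /dev/drbdvg0/web   # grow the logical volume by 10GB
resize2fs /dev/drbdvg0/web         # grow the ext3 filesystem to match
</pre>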

== Get informed ==

Of course we want to be informed later on by Heartbeat in case a node goes down, so we install postfix to send the mail.

This should be done on both nodes:

<code>
apt-get install postfix mailx
</code>

and go for the defaults, "internet site" and "node1.example.com".

We don't want postfix to listen on all interfaces,

<code>
nano /etc/postfix/main.cf
</code>

so change the line at the bottom to read like this, otherwise we get into trouble later with postfix blocking port 25 for all the Vservers:

<code>
inet_interfaces = loopback-only
</code>
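
A quick test that mail actually leaves the box before we rely on it for alerts (the address is just an example):

<code>
echo "test from node1" | mail -s "HA test" admin@example.com
</code>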

== Heartbeat ==

=== Get acquainted ===
Add the other node to the hosts file of both nodes; this way Heartbeat knows who is who.

So for node1 do

<code>
nano /etc/hosts
</code>

and add node2:

<pre>
192.168.1.200   node2
</pre>

=== Get intimate ===

Set up some keys on both boxes so we can log in through ssh without a password (defaults, no passphrase):

<code>
ssh-keygen
</code>

then copy over the public keys:

<code>
scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys
</code>

<code>
scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys
</code>
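
Note that these scp commands overwrite any authorized_keys already present on the target. Where the ssh-copy-id helper from the OpenSSH client is available, it appends instead, which is a little safer:

<pre>
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.200   # run on node1
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.100   # run on node2
</pre>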

=== Configure Heartbeat ===

Without the ha.cf file Heartbeat will not start. This only needs to be done on 1 of the nodes.

<code>
nano /etc/ha.d/ha.cf
</code>

<pre>
autojoin none
#crm on             # would enable the heartbeat2 cluster manager - not used here, we stay with R1 style
use_logd on
logfacility syslog
keepalive 1
deadtime 10
warntime 10
udpport 694
auto_failback on    # resources move back once the node is back online
mcast bond0 239.0.0.43 694 1 0
bcast eth2
node node1          # hostnames of the nodes
node node2
</pre>

This one also on 1 of the nodes:

<code>
nano /etc/ha.d/authkeys
</code>

<pre>
auth 3
3 md5 failover   ## the third field is just a shared secret string, enter what you want! "md5" selects MD5 hashing
</pre>

<code>
chmod 600 /etc/ha.d/authkeys
</code>

<blockquote>
We will be using the Heartbeat R1-style configuration here, simply because I don't understand the R2 XML-based syntax.
</blockquote>
We only created the above 2 config files on 1 node, but we need them on both; Heartbeat can do that for us.

<code>
/usr/lib/heartbeat/ha_propagate
</code>

=== Heartbeat behaviour ===

With the above 2 files in place, /etc/ha.d/haresources is where we control Heartbeat's behaviour.
This is an example for the 1 Vserver that we will set up later on.

<code>
nano /etc/ha.d/haresources
</code>

<pre>
node1 drbddisk::r0 LVM::drbdvg0 Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 Vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure
</pre>

The above makes node1 the default node for the Vserver named web and specifies the mount points; the Vserver-web script will start and stop the Vserver, and the SendArp is for notifying the network that this IP can now be found somewhere else than before. (I have added the SendArp an extra time below for better results.)

Another example for more than 1 Vserver:
we only specify 1 default node here for all Vservers and the same DRBD disk and Volume Group; the individual start scripts and mount points are specified separately. Mind the backslashes, logically it is all 1 line; the last MailTo entry is only needed once.

<pre>
node1 \
drbddisk::r0 \
LVM::drbdvg0 \
Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \
Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \
Vserver-web \
Vserver-ns1 \
SendArp::123.123.123.125/bond0 \
SendArp::123.123.123.126/bond0 \
MailTo::randall@songshu.org::DRBDFailure
</pre>
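
Once haresources is on both nodes and Heartbeat is running, a controlled failover can be triggered for testing; hb_standby and hb_takeover should live in the same directory as the ha_propagate tool used above, assuming the Debian heartbeat-2 package layout:

<pre>
/etc/init.d/heartbeat start      # on both nodes
/usr/lib/heartbeat/hb_standby    # on node1: hand all resources over to node2
/usr/lib/heartbeat/hb_takeover   # on node1: claim them back again
</pre>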

=== start/stop script ===

The Vserver-web script that Heartbeat calls as specified above is basically a stripped-down version of the original R2-style agent by Martin Fick from http://www.theficks.name/bin/lib/ocf/VServer.

What I did is comment out the OCF-specific top part and replace "$OCF_RESKEY_vserver" with the specific Vserver name; I also added an extra

<pre>
/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start
</pre>

to the start part, because I had varying results when this was done by Heartbeat in the first tests I did. Not sure if it is still needed, but I guess it doesn't hurt.

<code>
nano /etc/ha.d/resource.d/Vserver-web
</code>

<pre>
#!/bin/sh
#
# License:      GNU General Public License (GPL)
# Author:       Martin Fick <mogulguy@yahoo.com>
# Date:         04/19/07
# Version:      1.1
#
# This script manages a VServer instance
#
# It can start or stop a VServer
#
# usage: $0 {start|stop|status|monitor|meta-data}
#
#
# OCF parameters are as below
# OCF_RESKEY_vserver
#
#######################################################################
# Initialization (the OCF-specific parts are commented out, the script
# is called R1 style with a hardcoded Vserver name):
#
#. /usr/lib/heartbeat/ocf-shellfuncs
#
#######################################################################
#
#
#meta_data() {
#       cat <<END
#<?xml version="1.0"?>
#<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
#<resource-agent name="VServer">
#  <version>1.0</version>
#  <longdesc lang="en">
#This script manages a VServer instance.
#It can start or stop a VServer.
#  </longdesc>
#  <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc>
#
#  <parameters>
#
#  <parameter name="vserver" unique="1" required="1">
#    <longdesc lang="en">
#The vserver name is the name as found under /etc/vservers
#    </longdesc>
#    <shortdesc lang="en">VServer Name</shortdesc>
#    <content type="string" default="" />
#  </parameter>
#
#  </parameters>
#
#  <actions>
#    <action name="start" timeout="2m" />
#    <action name="stop" timeout="1m" />
#    <action name="monitor" depth="0" timeout="1m" interval="5s" start-delay="2m" />
#    <action name="status" depth="0" timeout="1m" interval="5s" start-delay="2m" />
#    <action name="meta-data" timeout="1m" />
#  </actions>
#</resource-agent>
#END
#}

USAGE="usage: $0 {start|stop|status|monitor|reload|info|usage}"

vserver_reload() {
  vserver_stop || return
  vserver_start
}

vserver_stop() {
  #
  # Is the VServer already stopped?
  #
  vserver_status
  [ $? -ne 0 ] && return 0

  /usr/sbin/vserver "web" "stop"

  vserver_status
  [ $? -ne 0 ] && return 0

  return 1
}

vserver_start() {
  vserver_status
  [ $? -eq 0 ] && return 0

  /usr/sbin/vserver "web" "start"
  vserver_status
  /etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start
}

vserver_status() {
  /usr/sbin/vserver "web" "status"
  rc=$?
  if [ $rc -eq 0 ]; then
    echo "running"
    return 0
  elif [ $rc -eq 3 ]; then
    echo "stopped"
  else
    echo "unknown"
  fi
  return 7
}

vserver_monitor() {
  vserver_status
}


vserver_usage() {
  echo $USAGE >&2
}

vserver_info() {
  cat - <<!INFO
Abstract=VServer Instance takeover
Argument=VServer Name
Description:
A Vserver is a simulated server which is fairly hardware independent
so it can be easily setup to run on several machines.
Please rerun with the meta-data command for a list of \\
valid arguments and their defaults.
!INFO
}

#
# Start or Stop the given VServer...
#

if [ $# -ne 1 ] ; then
  vserver_usage
  exit 2
fi

case "$1" in
  start|stop|status|monitor|reload|info|usage)  vserver_$1 ;;
  meta-data)                                    exit 3 ;;   # meta_data() is commented out above
  validate-all|notify|promote|demote)           exit 3 ;;

  *)  vserver_usage ; exit 2 ;;
esac
</pre>
To make this file executable by Heartbeat:

<code>
chmod a+x /etc/ha.d/resource.d/Vserver-web
</code>
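
Before letting Heartbeat drive it, the script can be exercised by hand like any R1 resource script:

<pre>
/etc/ha.d/resource.d/Vserver-web status   # prints running, stopped or unknown
/etc/ha.d/resource.d/Vserver-web start
/etc/ha.d/resource.d/Vserver-web stop
</pre>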

=== not needed???? ===

There is some more interesting discussion going on at [[Advanced_DRBD_mount_issues]], for those who have multiple Vservers on multiple DRBD devices. Not sure if it also applies to this setup, but I'm using it without any drawbacks at the moment.

Below is a changed version of option 4 by Christian Balzer; it goes into the stop branch of the drbddisk resource script.

<code>
nano /etc/ha.d/resource.d/drbddisk
</code>

<pre>
stop)
        # Kill off any vserver mounts that might hog this
        VNSPACE=/usr/sbin/vnamespace

        for CTX in `/usr/sbin/vserver-stat | tail -n +2 | awk '{print $1}'`
        do
          MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"
          echo Unmounting mount point $MPOINT from within context $CTX
          ### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!
          $VNSPACE -e $CTX /bin/umount $MPOINT || continue;
        done
        # exec, so the exit code of drbdadm propagates
        exec $DRBDADM secondary $RES
</pre>

== Create a Vserver ==

Note that we already mounted the LVM volume on /VSERVERS/web in an earlier step. We're going to place both the /var and /etc directories of the Vserver on that mount point and symlink to them; this way the complete Vserver and its config are available on the other node when mounted there.

<code>
mkdir -p /VSERVERS/web/etc
</code>

<code>
mkdir -p /VSERVERS/web/barrier/var
</code>

When making the Vserver it will be in the default location /var/lib/vservers/web, with its config in /etc/vservers/web.

<pre>
newvserver --hostname web --domain example.com --ip 123.123.123.125/24 --dist etch --mirror http://123.123.123.81:3142/debian.apt-get.eu/debian --interface bond0
</pre>

<pre>
enter the root password
</pre>

<pre>
Create a normal user account now?
<No>
</pre>

<pre>
Choose software to install:
<Ok>
</pre>

On node1 we move the Vserver directories to the LVM volume on the DRBD disk and make symlinks from the normal locations.

On node1

<code>
mv /etc/vservers/web/* /VSERVERS/web/etc/
</code>

<code>
rmdir /etc/vservers/web/
</code>

<code>
ln -s /VSERVERS/web/etc /etc/vservers/web
</code>

<code>
mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var
</code>

<code>
rmdir /var/lib/vservers/web/
</code>

<code>
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web
</code>

We need to set the same symlinks on node2, but then we need the Vserver directories available there first.
The mounting should be handled by Heartbeat by now, so we make our resources move to the other machine.

On node1

<code>
/etc/init.d/heartbeat stop
</code>

On node2

<code>
ln -s /VSERVERS/web/etc /etc/vservers/web
</code>

<code>
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web
</code>

On node1

<code>
/etc/init.d/heartbeat start
</code>

<code>
vserver web start
</code>

and enjoy!
== Advanced DRBD mount issues ==

'''This is currently under construction - think before using it'''

This HowTo covers the problem of multiple vServers depending on multiple DRBD-mounted devices, as discussed on the mailing list in August 2005.

== Problem ==

You run more than one vServer guest and have more than one DRBD device on your host system. You are now unable to unmount the drbd devices and always get messages about "filesystem in use".

This is because whenever you start a new vServer, the kernel's mount table is copied into the new namespace, and thus you also copy the references to the DRBD mounts, which then cannot be shut down.

== Solution 1: mount per vServer ==

This approach is the favoured one if you have a setup like mine:

I run the root partition of the server on a non-drbd device and mount one drbd partition as data storage inside each vServer.

All you have to do is to mount the drbd device inside the fstab of the vServer:

<code>
<vserver>/etc/fstab
</code>

<pre>
none            /proc     proc    defaults            0 0
none            /tmp      tmpfs   size=16m,mode=1777  0 0
none            /dev/pts  devpts  gid=5,mode=620      0 0
/dev/drbd/www1  /data     ext3    none                0 0
</pre>

This will result in the DRBD device being mounted on /data inside the vServer and only being visible inside this namespace. So the mount is not copied to other vServers outside, and thus, if you shut down this instance, you immediately free the DRBD device and can shut it down.

<blockquote>
Note: You cannot bind-mount on this mountpoint from inside the fstab because of the different visibility of the nodes - if you need bind-mounts on this device see the other approaches.
</blockquote>
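
To see the namespace isolation at work, compare the mount tables on the host and inside the guest's namespace; a quick check using the vnamespace tool from util-vserver (replace <CTX> with the guest's context id from vserver-stat):

<pre>
grep drbd /proc/mounts                              # on the host: the drbd mount does not show up
vnamespace -e <CTX> cat /proc/mounts | grep drbd    # inside the guest namespace it does
</pre>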

== Solution 2: script based positive mounting ==

vServers have a script architecture that enables you to do some things during startup and shutdown.

Positive mounting is similar to the above approach, with the difference that the mount operation is done via a script.

Put your mount command into the file /etc/vservers/<servername>/scripts/prepre-start - note that the file must not have the x-bit set!

<pre>
#!/bin/bash
mount -t ext3 /dev/drbd/www1 /vservers/www1/data
</pre>

Note that the mountpoint is referenced against the root server's filesystem!

If all your vServers are identical and you want to do this for all guests, you can put the prepre-start file into /etc/vservers/.defaults/scripts; you can grab the name of the vServer through the shell arg $2.

As the mounting is done prior to the execution of the fstab, but already running in the right namespace, you can now do bind-mounting inside the vServer's fstab:

<pre>
/vservers/www1/data/webtree  /webtree  none  bind
/vservers/www1/data/var      /var      none  bind
</pre>

Note that the mount source is relative to the root-fs again, while the target is relative to the guest root.

== Solution 3: script based unmounting (bit of brute-force....) ==

Using the script architecture mentioned above, we now force an unmount of certain DRBD devices when firing up the server.

I got a draft version of the attached script from a guy on the mailing list. It worked, but I didn't like the approach and worked out the other ones - it might be helpful if you use one drbd device inside multiple vservers and so can't use the above-mentioned ideas.

The script first tries to detect the mount points occupied by the current vServer, then runs through all mounts and unmounts all that are not related to the current vServer. Please see the script as an idea - it might not work out of the box for you, because you have to adjust the detection of the occupied mounts.

The code must go into the prepre-start file as mentioned above.

<pre>
#!/bin/bash
# published version done by Oliver Welter, mail-at-oliwel.de
# based on a script from martin rueegg, metaworx.ch, mrueegg-at-metaworx.ch
# provided without warranty, as is, and free to modify and copy with this notice kept intact

DF=/bin/df
CUT=/bin/cut
TAIL=/usr/bin/tail
GREP=/bin/grep
CAT=/bin/cat

# I don't mirror the vServers themselves, just the data, so all vServers share one volume - I think most people must adjust this
vs_dir=/vservers
vs_etc=/etc/vservers
vs_data=/data/www1

# get the device the vserver is located on
vs_device=`$DF -kh $vs_dir | $CUT -f1 -d' ' | $TAIL -n 1`

# get the mountpoint the vserver config dir is located on
vs_etc_mount=`$DF -kh $vs_etc | $TAIL -n 1 | $GREP -Eo '[^[:space:]]+$'`

vs_data_device=`$DF -kh $vs_data | $CUT -f1 -d' ' | $TAIL -n 1`

for i in `$CAT /proc/mounts \
        | $CUT -f1,2 -d' ' --output-delimiter='|' \
        | $GREP -E '^/dev/drbd/[^|]+\|'`; do

        # extract the device
        device=`echo $i | $CUT -f1 -d'|'`

        # extract the mountpoint
        mountpoint=`echo $i | $CUT -f2 -d'|'`

        # unmount the file system unless it's the
        # - device the vserver's on
        # - device the vserver config dir is on
        if ! [ ."$vs_device" == ."$device" -o ."$vs_etc_mount" == ."$mountpoint" -o ."$vs_data_device" == ."$device" ] ; then
                echo "umount -nv $mountpoint"
                echo `umount -nv $mountpoint || exit $?`
        fi
done
</pre>

== Solution 4: Modifying Heartbeat's Drbddisk Script ==

This solution/principle is valid for the combination of Vserver + DRBD + Heartbeat, where the latter is used to transfer virtual servers between the nodes of an HA cluster. "Out of the box", Heartbeat will reboot the cluster node if it cannot unmount the vserver mount point when it tries to shut down a virtual server. Unfortunately, this happens rather often if there is more than one virtual server on a cluster. Every time a vserver is about to be shut down, the vserver itself is stopped, then its file system is unmounted, and the underlying DRBD device is set into "secondary" state (to allow the other node of the cluster to take over the DRBD block device). Now, if there are any references to the vserver mount point remaining in the namespaces of any other running virtual server (copies from the master namespace made when a vserver is started), switching the DRBD device into "secondary" mode will fail, and, alas, unmounting the mount point consequentially also. There goes your cluster node...

To circumvent this problem I wrote a little script, which should be hooked into the Heartbeat DRBD control script "/usr/etc/ha.d/resource.d/drbddisk", right before the line "exec $DRBDADM secondary $RES" in the "stop" branch.

It removes the mount point of the virtual server that is about to be shut down from all running virtual server contexts:

<pre>
VNSPACE=/usr/local/sbin/vnamespace
for CTX in `/usr/local/sbin/vserver-stat | tail +3 | awk '{print $1}'`
do
  MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"
  echo Unmounting mount point $MPOINT from within context $CTX
  ### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!
  $VNSPACE -e $CTX /bin/umount $MPOINT || continue;
done
# here shall be the original line then (uncommented, of course ;-))
# exec $DRBDADM secondary $RES
</pre>
<hr />
<div>= Getting High with Lenny =<br />
<br />
The aim here is to set up some highly available services on Debian Lenny (at this moment, October 1st, still due to be released).<br />
<br />
<br />
There has been a lot of buzz about virtualisation and High Availability for a while now, and while Vserver is very well capable of this job, the number of documented examples is a little lacking compared to some other virtualisation techniques, so I thought I'd do my share. <br />
<br />
I prefer to use Vserver for the "virtualisation" because of its configurability, shared memory and CPU resources, and basically the raw speed.<br />
DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly.<br />
In my experience it takes a few seconds to have several Vservers fail over to another machine with this setup.<br />
<br />
The main attempt here is to give a single working example without going too much into the details of every option; the scenario is relatively simple, but different variations can be made.<br />
<br />
For this set up we will have <br />
<br />
** 2 machines<br />
** both machines have 1 single large DRBD partition<br />
** primary/secondary: there is always 1 machine active and 1 on standby<br />
** 1 LVM partition per Vserver on top of the DRBD partition, for quota support from within the guest and LVM snapshots<br />
** the Vservers' /etc/vservers and /var/lib/vservers directories will be placed on the DRBD partition.<br />
<br />
In case the main machine that runs the Vservers goes down, the synchronized second machine should take over and automatically start the Vservers.<br />
<br />
Basically this is an on-line RAID solution that can keep your services running in case of hardware failure, it is NOT a back-up replacement.<br />
<br />
The cost for this setup is that you always have 1 idle machine on standby; this cost can be justified by the fact that Linux-Vserver enables you to make full use of the 1 machine that is running. You could also consider running this on slightly less expensive (reliable) hardware.<br />
<br />
Also note that I will be using the R1 style configuration for heartbeat. R1 style can be considered deprecated when using Heartbeat2, but I could not get my head around the R2 xml configuration, so if you want R2 you might want to have a look at [[Fail-over]].<br />
<br />
The partitioning looks as follows (sizes in MB)<br />
<br />
<code> <br />
c0d0p1 Boot Primary Linux ext3 10001.95<br />
c0d0p5 Logical Linux swap / Solaris 1003.49<br />
c0d0p6 Logical Linux 280325.77<br />
<br />
</code><br />
<br />
<br />
<blockquote><br />
* '''machine1''' will use the following names. <br><br />
* hostname = node1 <br> <br />
* IP number = 192.168.1.100<br><br />
* is primary for r0 on disk c0d0p6 <br><br />
* physical volume on r0 is /dev/drbd0 <br><br />
* volume group on /dev/drbd0 is called drbdvg0 <br><br />
</blockquote><br />
<br />
<blockquote><br />
* '''machine2''' will use the following names. <br><br />
* hostname = node2 <br><br />
* IP number = 192.168.1.200 <br><br />
* is secondary for r0 on disk c0d0p6 <br><br />
<br />
The Volume Group and the Physical Volume will be identical on node2 if this one becomes the primary for r0.<br />
</blockquote><br />
<br />
== Loadbalance-Failover the network cards ==<br />
<br />
Maybe not very specific to Vserver, Heartbeat or DRBD, but loadbalancing your network cards for failover is always useful. Some more in-depth details by Carla Schroder can be found here: [http://www.enterprisenetworkingplanet.com/nethub/article.php/3696561]<br />
I did not do it for the DRBD crossover cable between the nodes, although this is actually highly recommended.<br />
We need both mii-tool and ethtool.<br />
<br />
<code><br />
apt-get install ethtool ifenslave-2.6<br />
</code><br />
<br />
<code><br />
nano /etc/modprobe.d/arch/i386<br />
</code><br />
<br />
To load the modules with the correct options at boot time.<br />
<br />
<pre><br />
alias bond0 bonding<br />
options bond0 mode=balance-alb miimon=100 <br />
</pre><br />
<br />
And set the interfaces eth0 and eth1 as slaves to bond0, also eth2 is set here for the crossover cable for the DRBD connection to the fail over machine.<br />
<br />
<code><br />
nano /etc/network/interfaces<br />
</code><br />
<pre><br />
# This file describes the network interfaces available on your system<br />
# and how to activate them. For more information, see interfaces(5).<br />
<br />
# The loopback network interface<br />
auto lo<br />
iface lo inet loopback<br />
<br />
# The primary network interface<br />
auto bond0<br />
iface bond0 inet static<br />
address 123.123.123.100<br />
netmask 255.255.255.0<br />
network 123.123.123.0<br />
broadcast 123.123.123.255<br />
gateway 123.123.123.1<br />
# dns-* options are implemented by the resolvconf package, if installed<br />
dns-nameservers 123.123.123.45<br />
dns-search example.com<br />
up /sbin/ifenslave bond0 eth0 eth1<br />
down ifenslave -d bond0 eth0 eth1<br />
<br />
<br />
auto eth2<br />
iface eth2 inet static<br />
address 192.168.1.100<br />
netmask 255.255.255.0<br />
</pre><br />
<br />
Done this way, the system needs to be rebooted before the changes take effect; otherwise you should load the drivers and ifdown eth0 and eth1 first before ifup bond0, but I'm planning to install a new kernel anyway in the next step.<br />
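<br />
Once bond0 is up (after the reboot) you can sanity-check the bond. This is just a quick check, assuming the standard bonding driver, which exposes its status under /proc/net/bonding:<br />
<br />
<pre><br />
# the bonding driver's own view of the bond and its slaves<br />
cat /proc/net/bonding/bond0<br />
<br />
# link status of the individual NICs<br />
mii-tool eth0 eth1<br />
ethtool eth0 | grep 'Link detected'<br />
</pre><br />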
<br />
== Install the Vserver packages ==<br />
<br />
<code><br />
apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools<br />
</code><br />
<br />
As usual a reboot is needed to boot this kernel.<br />
<br />
<blockquote><br />
With Etch I found that the Vserver kernel often ended up as second in the grub list; not so in Lenny, but to be safe check the kernel stanza in /boot/grub/menu.lst, especially when doing this from a remote location.<br />
</blockquote><br />
<br />
== Install DRBD8, LVM2 and Heartbeat ==<br />
<br />
<code><br />
apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2<br />
</code><br />
<br />
<blockquote><br />
I'm not sure about this, but DRBD always needed to be compiled against the running kernel; is this still the case with the kernel-specific modules? I did not check, but it would be good to know in case of a kernel upgrade.<br />
</blockquote><br />
<br />
== Build DRBD8 ==<br />
<br />
Although packages for DRBD8 are available in the repository, the purpose of these packages is that you can easily build the module from source for the running kernel.<br />
<br />
To do this we just issue this command<br />
<br />
<code><br />
m-a a-i drbd8<br />
</code><br />
<br />
And to load it into the kernel..<br />
<br />
<code><br />
depmod -ae<br />
</code><br />
<br />
<code><br />
modprobe drbd<br />
</code><br />
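<br />
If the module loaded correctly, /proc/drbd should now exist and print the module version (no devices are configured yet at this point):<br />
<br />
<pre><br />
lsmod | grep drbd<br />
cat /proc/drbd<br />
</pre><br />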
<br />
==== Configure DRBD8 ====<br />
<br />
Now that we have the essentials installed we can configure DRBD. Again, I will not go into the details of all the options here, so check out the default config and http://www.drbd.org/ to find a match for your set up.<br />
<br />
<code><br />
mv /etc/drbd.conf /etc/drbd.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/drbd.conf<br />
</code><br />
<br />
<pre><br />
global {<br />
usage-count no;<br />
}<br />
<br />
common {<br />
syncer { rate 100M; } <br />
}<br />
<br />
resource r0 {<br />
protocol C;<br />
handlers {<br />
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";<br />
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";<br />
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";<br />
outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";<br />
}<br />
<br />
startup {<br />
degr-wfc-timeout 120; # 2 minutes.<br />
}<br />
<br />
disk {<br />
on-io-error detach;<br />
}<br />
<br />
net { <br />
after-sb-0pri disconnect;<br />
after-sb-1pri disconnect;<br />
after-sb-2pri disconnect;<br />
rr-conflict disconnect;<br />
}<br />
<br />
syncer {<br />
rate 100M;<br />
al-extents 257;<br />
}<br />
<br />
<br />
on node1 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.100:7788;<br />
meta-disk internal;<br />
}<br />
<br />
on node2 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.200:7788;<br />
meta-disk internal;<br />
}<br />
}<br />
</pre><br />
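<br />
Before bringing anything up, it doesn't hurt to let drbdadm parse the new configuration on both nodes; if it prints the resource back without complaining, the syntax is fine:<br />
<br />
<code><br />
drbdadm dump r0<br />
</code><br />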
<br />
Before we start DRBD we change some permissions; otherwise it will ask us to do so at startup.<br />
So on both nodes<br />
<pre><br />
chgrp haclient /sbin/drbdsetup<br />
chmod o-x /sbin/drbdsetup<br />
chmod u+s /sbin/drbdsetup<br />
chgrp haclient /sbin/drbdmeta<br />
chmod o-x /sbin/drbdmeta<br />
chmod u+s /sbin/drbdmeta<br />
</pre><br />
<br />
==== Create the DRBD devices ====<br />
<br />
On both nodes<br />
<br />
node1<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node1<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
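<br />
At this point both nodes should be connected but still inconsistent, since neither side has been declared primary yet. A look at /proc/drbd on either node should show something along these lines (the exact fields vary per version):<br />
<br />
<pre><br />
cat /proc/drbd<br />
# 0: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r---<br />
</pre><br />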
<br />
<blockquote><br />
'''The following should be done on the node that will be the primary!'''<br />
</blockquote><br />
<br />
On node1<br />
<br />
<code><br />
drbdadm -- --overwrite-data-of-peer primary r0<br />
</code><br />
<br />
<br />
Running <code>watch cat /proc/drbd</code> should show you something like this<br />
<pre><br />
version: 8.0.13 (api:86/proto:86)<br />
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07<br />
0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---<br />
ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0<br />
[===>................] sync'ed: 22.1% (208411/267331)M<br />
finish: 4:04:44 speed: 14,472 (12,756) K/sec<br />
resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172<br />
act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102<br />
<br />
<br />
</pre><br />
<br />
== Configure LVM2 ==<br />
<br />
<br />
<note important><br />
LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same, this will lead to errors where LVM sees the same data through both devices.<br />
So to limit it to scan /dev/drbd devices only we do the following on both nodes.<br />
<br />
</note><br />
<br />
<code><br />
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/lvm/lvm.conf<br />
</code><br />
<br />
<pre><br />
#filter = [ "a/.*/" ]<br />
filter = [ "a|/dev/drbd|", "r|.*|" ]<br />
</pre><br />
<br />
to re-scan with the new settings on both nodes<br />
<code><br />
<br />
vgscan<br />
</code><br />
<br />
=== Create the Physical Volume ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
On node1<br />
<br />
<code><br />
pvcreate /dev/drbd0<br />
</code><br />
<br />
=== Create the Volume Group ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
On node1<br />
<br />
<code><br />
vgcreate drbdvg0 /dev/drbd0<br />
</code><br />
<br />
=== Create the Logical Volume ===<br />
<br />
Yes, again only on the node that is primary!!!<br />
<br />
For this example about 50GB, this leaves plenty of space to expand the volumes or to add extra volumes later on.<br />
<br />
On node1<br />
<br />
<code><br />
lvcreate -L50000 -n web drbdvg0<br />
</code><br />
<br />
Then we put a file system on the logical volume<br />
<br />
<code><br />
mkfs.ext3 /dev/drbdvg0/web<br />
</code><br />
<br />
create the directory where we want to mount the Vservers<br />
<br />
<code><br />
mkdir -p /VSERVERS/web<br />
</code><br />
<br />
and mount the volume group to the mount point<br />
<br />
<code><br />
mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/<br />
</code><br />
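<br />
A quick check that the logical volume is really mounted where we expect it (the device-mapper name below is the standard LVM naming for drbdvg0/web):<br />
<br />
<pre><br />
df -h /VSERVERS/web<br />
# /dev/mapper/drbdvg0-web ... /VSERVERS/web<br />
</pre><br />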
<br />
== Get informed ==<br />
<br />
Of course we want to be informed later on by heartbeat in case a node goes down, so we install postfix to send the mail.<br />
<br />
This should be done on both nodes<br />
<br />
<code><br />
apt-get install postfix mailx<br />
</code><br />
<br />
and go for the defaults, "internet site" and "node1.example.com"<br />
<br />
We don't want postfix to listen to all interfaces,<br />
<br />
<code><br />
nano /etc/postfix/main.cf<br />
</code><br />
<br />
and change the line at the bottom to read like this; otherwise we get into trouble with postfix occupying port 25 for all the Vservers later.<br />
<br />
<code><br />
inet_interfaces = loopback-only<br />
</code><br />
<br />
<br />
== Heartbeat ==<br />
<br />
=== Get acquainted ===<br />
Add the other node to the hosts file on both nodes; this way Heartbeat knows who is who.<br />
<br />
so for node1 do<br />
<br />
<code><br />
nano /etc/hosts<br />
</code><br />
<br />
and add node2 (on node2, add the matching line for node1: 192.168.1.100 node1)<br />
<br />
<pre><br />
192.168.1.200 node2<br />
</pre><br />
<br />
=== Get intimate ===<br />
<br />
Set up some keys on both boxes so we can log in over ssh without a password (defaults, no passphrase)<br />
<br />
<code><br />
ssh-keygen<br />
</code><br />
<br />
then copy over the public keys (run the first command from node2, the second from node1)<br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys<br />
</code><br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys<br />
</code><br />
<br />
=== Configure Heartbeat ===<br />
<br />
Without the ha.cf file Heartbeat will not start; this should only be done on 1 of the nodes.<br />
<br />
<code><br />
nano /etc/ha.d/ha.cf<br />
</code><br />
<br />
<pre><br />
autojoin none <br />
#crm on #enables the heartbeat2 cluster manager - left commented out, since we use the R1 style configuration here<br />
use_logd on<br />
logfacility syslog<br />
keepalive 1<br />
deadtime 10<br />
warntime 10<br />
udpport 694<br />
auto_failback on #resources move back once node is back online<br />
mcast bond0 239.0.0.43 694 1 0 <br />
bcast eth2 <br />
node node1 #hostnames of the nodes<br />
node node2<br />
</pre><br />
<br />
This one also on 1 of the nodes<br />
<br />
<code><br />
nano /etc/ha.d/authkeys<br />
</code><br />
<br />
<pre><br />
auth 3<br />
3 md5 failover ## the third field is just a shared secret string, enter what you want! "auth 3" selects line 3, which uses md5 hashing<br />
</pre><br />
<br />
<code><br />
chmod 600 /etc/ha.d/authkeys<br />
</code><br />
<br />
<note><br />
We will be using heartbeat R1-style configuration here simply because i don't understand the R2 xml based syntax.<br />
</note><br />
We only did the above 2 config files on 1 node, but we need them on both; heartbeat can do that for us.<br />
<br />
<code><br />
/usr/lib/heartbeat/ha_propagate<br />
</code><br />
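<br />
Now heartbeat can be started on both nodes. To see whether the nodes have found each other you can use the cl_status tool, assuming it is shipped with your heartbeat package:<br />
<br />
<pre><br />
/etc/init.d/heartbeat start # on both nodes<br />
cl_status listnodes # should print node1 and node2<br />
cl_status nodestatus node2 # should print "active"<br />
</pre><br />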
<br />
=== Heartbeat behaviour ===<br />
<br />
After the above 2 files are set, haresources is where we control Heartbeat's behaviour.<br />
This is an example for 1 Vserver that we will set up later on.<br />
<br />
<code><br />
nano /etc/ha.d/haresources<br />
</code><br />
<br />
<pre><br />
node1 drbddisk::r0 LVM::drbdvg0 Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 Vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
The above will default the Vserver named web to node1 and specify the mount points; the Vserver-web script is used by heartbeat to start and stop the Vserver, and the SendArp notifies the network that this IP can now be found somewhere else than before. (I have added the SendArp an extra time below for better results.)<br />
<br />
Another example for more than 1 Vserver.<br />
We only specify 1 default node here for all Vservers and the same DRBD disk and Volume Group; the individual start scripts and mount points are specified separately. Mind the \, it's all logically 1 line. The last MailTo entry is only needed once.<br />
<br />
<pre><br />
node1 \<br />
drbddisk::r0 \<br />
LVM::drbdvg0 \<br />
Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \<br />
Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \<br />
Vserver-web \<br />
Vserver-ns1 \<br />
SendArp::123.123.123.125/bond0 \<br />
SendArp::123.123.123.126/bond0 \<br />
MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
=== start/stop script ===<br />
<br />
The Vserver-web script specified to be called by heartbeat above is basically a demolished version of the original R2 style agent by Martin Fick, from http://www.theficks.name/bin/lib/ocf/VServer.<br />
<br />
What I did is disable the OCF-specific top part and replace "$OCF_RESKEY_vserver" with the specific Vserver name; I also added an extra<br />
<br />
<pre><br />
/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start<br />
</pre><br />
<br />
to the start part, because I had varying results when it was done by Heartbeat in the first tests I did. I'm not sure if it is still needed, but I guess it doesn't hurt.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
<pre><br />
#!/bin/sh<br />
#<br />
# License: GNU General Public License (GPL) <br />
# Author: Martin Fick <mogulguy@yahoo.com><br />
# Date: 04/19/07<br />
# Version: 1.1<br />
#<br />
# This script manages a VServer instance<br />
#<br />
# It can start or stop a VServer<br />
#<br />
# usage: $0 {start|stop|status|monitor|meta-data}<br />
#<br />
#<br />
# OCF parameters are as below<br />
# OCF_RESKEY_vserver<br />
#<br />
#######################################################################<br />
# Initialization:<br />
#<br />
#. /usr/lib/heartbeat/ocf-shellfuncs<br />
#<br />
#USAGE="usage: $0 {start|stop|status|monitor|meta-data}";<br />
#<br />
#######################################################################<br />
#<br />
#<br />
#meta_data() {<br />
# cat <<END<br />
#<?xml version="1.0"?><br />
#<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd"><br />
#<resource-agent name="VServer"><br />
# <version>1.0</version><br />
# <longdesc lang="en"><br />
#This script manages a VServer instance.<br />
#It can start or stop a VServer.<br />
# </longdesc><br />
# <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc><br />
#<br />
# <parameters><br />
#<br />
# <parameter name="vserver" unique="1" required="1"><br />
# <longdesc lang="en"><br />
#The vserver name is the name as found under /etc/vservers<br />
# </longdesc><br />
# <shortdesc lang="en">VServer Name</shortdesc><br />
# <content type="string" default="" /><br />
# </parameter><br />
#<br />
# </parameters><br />
#<br />
# <actions><br />
# <action name="start" timeout="2m" /><br />
# <action name="stop" timeout="1m" /><br />
# <action name="monitor" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="status" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="meta-data" timeout="1m" /><br />
# </actions><br />
#</resource-agent><br />
#END<br />
#}<br />
<br />
vserver_reload() {<br />
vserver_stop || return<br />
vserver_start<br />
}<br />
<br />
vserver_stop() {<br />
#<br />
# Is the VServer already stopped?<br />
#<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "stop"<br />
<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
return 1<br />
}<br />
<br />
vserver_start() {<br />
vserver_status<br />
[ $? -eq 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "start"<br />
vserver_status<br />
/etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start<br />
}<br />
<br />
vserver_status() {<br />
/usr/sbin/vserver "web" "status"<br />
rc=$?<br />
if [ $rc -eq 0 ]; then<br />
echo "running"<br />
return 0<br />
elif [ $rc -eq 3 ]; then<br />
echo "stopped"<br />
else<br />
echo "unknown"<br />
fi<br />
return 7<br />
}<br />
<br />
vserver_monitor() {<br />
vserver_status<br />
}<br />
<br />
<br />
vserver_usage() {<br />
<br />
 echo "usage: $0 {start|stop|status|monitor|reload|info|usage}" >&2 # the original $USAGE variable is commented out above<br />
}<br />
<br />
vserver_info() {<br />
cat - <<!INFO<br />
Abstract=VServer Instance takeover<br />
Argument=VServer Name<br />
Description:<br />
A Vserver is a simulated server which is fairly hardware independent<br />
so it can be easily setup to run on several machines.<br />
Please rerun with the meta-data command for a list of \\<br />
valid arguments and their defaults.<br />
!INFO<br />
}<br />
<br />
#<br />
# Start or Stop the given VServer...<br />
#<br />
<br />
if [ $# -ne 1 ] ; then<br />
vserver_usage<br />
exit 2<br />
fi<br />
<br />
case "$1" in<br />
start|stop|status|monitor|reload|info|usage) vserver_$1 ;;<br />
 meta-data|validate-all|notify|promote|demote) exit 3 ;; # the meta_data function is commented out above<br />
<br />
*) vserver_usage ; exit 2 ;;<br />
esac<br />
<br />
<br />
</pre><br />
To make this file executable by Heartbeat<br />
<br />
<code><br />
chmod a+x /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
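<br />
Since this is a plain resource script you can test it by hand before handing it over to Heartbeat; the status output comes from the vserver_status function above:<br />
<br />
<pre><br />
/etc/ha.d/resource.d/Vserver-web status # prints "running" or "stopped"<br />
/etc/ha.d/resource.d/Vserver-web start<br />
/etc/ha.d/resource.d/Vserver-web stop<br />
</pre><br />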
<br />
=== Diazepam ===<br />
<br />
Add a modification to the drbddisk resource, as pointed out by Christian Balzer on the Vserver mailing list http://list.linux-vserver.org/archive?mss:835:200803:cgehldioambmojimggpf; it seems to help Heartbeat to be a little more patient if it wants to close down r0 while not all Vservers are stopped yet, which is not unimportant.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/drbddisk<br />
</code><br />
<br />
<pre><br />
stop)<br />
# Kill off any vserver mounts that might hog this<br />
VNSPACE=/usr/sbin/vnamespace<br />
<br />
for CTX in `/usr/sbin/vserver-stat | tail -n +2 | awk '{print $1}'`<br />
do<br />
MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"<br />
echo Unmounting mount point $MPOINT from within context $CTX<br />
### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!<br />
$VNSPACE -e $CTX /bin/umount $MPOINT || continue;<br />
done<br />
# exec, so the exit code of drbdadm propagates<br />
exec $DRBDADM secondary $RES<br />
<br />
</pre><br />
<br />
== Create a Vserver ==<br />
<br />
Note that we already have mounted the LVM partition on /VSERVERS/web in an earlier step. We're going to place both the /var and /etc directories on the mountpoint and symlink to them; this way the complete Vserver and its config are available on the other node when it is mounted there.<br />
<br />
<code><br />
mkdir -p /VSERVERS/web/etc<br />
</code><br />
<br />
<code><br />
mkdir -p /VSERVERS/web/barrier/var<br />
</code><br />
<br />
When making the Vserver it will be in the default location /var/lib/vservers/web and its config in /etc/vservers/web <br />
<br />
<pre><br />
newvserver --hostname web --domain example.com --ip 123.123.123.125/24 --dist etch --mirror http://123.123.123.81:3142/debian.apt-get.eu/debian --interface bond0<br />
</pre><br />
<br />
<pre><br />
enter the root password<br />
</pre><br />
<br />
<pre><br />
Create a normal user account now? <br />
<No> <br />
</pre><br />
<br />
<pre><br />
Choose software to install: <br />
<Ok> <br />
</pre><br />
<br />
On node1 we move the Vserver directories to the LVM volume on the DRBD disks and make symlinks from the normal locations.<br />
<br />
On node1<br />
<br />
<code><br />
mv /etc/vservers/web/* /VSERVERS/web/etc/<br />
</code><br />
<br />
<code><br />
rmdir /etc/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<code><br />
mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var<br />
</code><br />
<br />
<code><br />
rmdir /var/lib/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
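<br />
The two symlinks should now point into the DRBD-backed mount:<br />
<br />
<pre><br />
ls -ld /etc/vservers/web /var/lib/vservers/web<br />
# /etc/vservers/web -> /VSERVERS/web/etc<br />
# /var/lib/vservers/web -> /VSERVERS/web/barrier/var<br />
</pre><br />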
<br />
We need to set the same symlinks on node2, but then we need the Vserver directories available there first.<br />
The mounting should be handled by heartbeat by now, so we make our resources move to the other machine.<br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat stop<br />
</code><br />
<br />
On node2<br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat start<br />
</code><br />
<br />
<code><br />
vserver web start<br />
</code><br />
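<br />
As a final check you may want to test an actual failover; this sketch assumes both nodes are up and the resources currently live on node1:<br />
<br />
<pre><br />
# on node1: trigger a controlled failover<br />
/etc/init.d/heartbeat stop<br />
<br />
# on node2: the resources should arrive within seconds<br />
cat /proc/drbd # should show st:Primary/...<br />
vserver-stat # the web guest should be listed<br />
df -h /VSERVERS/web<br />
</pre><br />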
<br />
and enjoy!</div>
</pre><br />
<br />
And set the interfaces eth0 and eth1 as slaves to bond0, also eth2 is set here for the crossover cable for the DRBD connection to the fail over machine.<br />
<br />
<code><br />
nano /etc/network/interfaces<br />
</code><br />
<pre><br />
# This file describes the network interfaces available on your system<br />
# and how to activate them. For more information, see interfaces(5).<br />
<br />
# The loopback network interface<br />
auto lo<br />
iface lo inet loopback<br />
<br />
# The primary network interface<br />
auto bond0<br />
iface bond0 inet static<br />
address 123.123.123.100<br />
netmask 255.255.255.0<br />
network 123.123.123.0<br />
broadcast 123.123.123.255<br />
gateway 123.123.123.1<br />
# dns-* options are implemented by the resolvconf package, if installed<br />
dns-nameservers 123.123.123.45<br />
dns-search example.com<br />
up /sbin/ifenslave bond0 eth0 eth1<br />
down ifenslave -d bond0 eth0 eth1<br />
<br />
<br />
auto eth2<br />
iface eth2 inet static<br />
address 192.168.1.100<br />
netmask 255.255.255.0<br />
</pre><br />
<br />
This way the system needs to be rebooted before the changes take effect, otherwise you should load the drivers and ifdown eth0 and eth1 first before ifup bond0 but i'm planning to install a new kernel anyway in the next step.<br />
<br />
== Install the Vserver packages ==<br />
<br />
<code><br />
apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools<br />
</code><br />
<br />
As usual a reboot is needed to boot this kernel.<br />
<br />
<blockquote><br />
With Etch i found that the Vserver kernel often ended up as second in the grub list, not so in Lenny but to be safe check the kernel stanza in /boot/grub/menu.lst especially when doing this from a remote location.<br />
</blockquote><br />
<br />
== Install DRBD8, LVM2 and Heartbeat ==<br />
<br />
<code><br />
apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2<br />
</code><br />
<br />
<blockquote><br />
not sure about this, but DRBD always needed to be compiled against the running kernel, is this still the case with the kernel specific modules? I did not check but it would be good to know in case of a kernel upgrade.<br />
</blockquote><br />
<br />
== Build DRBD8 ==<br />
<br />
Although packages are available in the repositorie for DRBD8, the purpose of these packages is that you can built it easily from source and patch the running kernel.<br />
<br />
To do this we just issue this command<br />
<br />
<code><br />
m-a a-i drbd8<br />
</code><br />
<br />
And to load it into the kernel..<br />
<br />
<code><br />
depmod -ae<br />
</code><br />
<br />
<code><br />
modprobe drbd<br />
</code><br />
<br />
==== Configure DRBD8 ====<br />
<br />
Now that we have the essentials installed we can configure DRBD. Again, i will not go in to the details of all the options here so check out the default config and http://www.drbd.org/ to find a match for your set up.<br />
<br />
<code><br />
mv /etc/drbd.conf /etc/drbd.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/drbd.conf<br />
</code><br />
<br />
<pre><br />
global {<br />
usage-count no;<br />
}<br />
<br />
common {<br />
syncer { rate 100M; } <br />
}<br />
<br />
resource r0 {<br />
protocol C;<br />
handlers {<br />
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt f";<br />
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt f";<br />
local-io-error "echo o > /proc/sysrq-trigger ; halt f";<br />
outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";<br />
}<br />
<br />
startup {<br />
degr-wfc-timeout 120; # 2 minutes.<br />
}<br />
<br />
disk {<br />
on-io-error detach;<br />
}<br />
<br />
net { <br />
after-sb-0pri disconnect;<br />
after-sb-1pri disconnect;<br />
after-sb-2pri disconnect;<br />
rr-conflict disconnect;<br />
}<br />
<br />
syncer {<br />
rate 100M;<br />
al-extents 257;<br />
}<br />
<br />
<br />
on node1 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.100:7788;<br />
meta-disk internal;<br />
}<br />
<br />
on node2 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.200:7788;<br />
meta-disk internal;<br />
}<br />
}<br />
</pre><br />
<br />
Before we start DRBD we change some permissions, otherwise it will ask for it.<br />
So on both nodes<br />
<pre><br />
chgrp haclient /sbin/drbdsetup<br />
chmod o-x /sbin/drbdsetup<br />
chmod u+s /sbin/drbdsetup<br />
chgrp haclient /sbin/drbdmeta<br />
chmod o-x /sbin/drbdmeta<br />
chmod u+s /sbin/drbdmeta<br />
</pre><br />
<br />
==== Create the DRBD devices ====<br />
<br />
On both nodes<br />
<br />
node1<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node1<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
<blockquote><br />
'''The following should be done on the node that will be the primary!'''<br />
</blockquote><br />
<br />
On node1<br />
<br />
<code><br />
drbdadm -- --overwrite-data-of-peer primary r0<br />
</code><br />
<br />
<br />
watch cat /proc/drbd should show you something like this<br />
<pre><br />
version: 8.0.13 (api:86/proto:86)<br />
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07<br />
0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---<br />
ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0<br />
[===>................] sync'ed: 22.1% (208411/267331)M<br />
finish: 4:04:44 speed: 14,472 (12,756) K/sec<br />
resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172<br />
act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102<br />
<br />
<br />
</pre><br />
<br />
== Configure LVM2 ==<br />
<br />
<br />
<note important><br />
LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same this will lead to errors where LVM reads and writes the same data to both devices.<br />
So to limit it to scan /dev/drbd devices only we do the following on both nodes.<br />
<br />
</note><br />
<br />
<code><br />
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/lvm/lvm.conf<br />
</code><br />
<br />
<pre><br />
#filter = [ "a/.*/" ]<br />
filter = [ "a|/dev/drbd|", "r|.*|" ]<br />
</pre><br />
<br />
to re-scan with the new settings on both nodes<br />
<code><br />
<br />
vgscan<br />
</code><br />
<br />
=== Create the Physical Volume ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
On node1<br />
<br />
<code><br />
pvcreate /dev/drbd0<br />
</code><br />
<br />
=== Create the Volume Group ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
One node1<br />
<br />
<code><br />
vgcreate drbdvg0 /dev/drbd0<br />
</code><br />
<br />
=== Create the Logical Volume ===<br />
<br />
Yes, again only on the node that is primary!!!<br />
<br />
For this example about 50GB, this leaves plenty of space to expand the volumes or to add extra volumes later on.<br />
<br />
On node1<br />
<br />
<code><br />
lvcreate -L50000 -n web drbdvg0<br />
</code><br />
<br />
Then we put a file system on the logical volumes<br />
<br />
<code><br />
mkfs.ext3 /dev/drbdvg0/web<br />
</code><br />
<br />
create the directory where we want to mount the Vservers<br />
<br />
<code><br />
mkdir -p /VSERVERS/web<br />
</code><br />
<br />
and mount the volume group to the mount point<br />
<br />
<code><br />
mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/<br />
</code><br />
<br />
== Get informed ==<br />
<br />
Offcourse we want to be informed later on by heartbeat in case a node goes down, so we install postfix to send the mail.<br />
<br />
This should be done on both nodes<br />
<br />
<code><br />
apt-get install postfix mailx<br />
</code><br />
<br />
and go for the defaults, "internet site" and node1.example.com"<br />
<br />
We don't want postfix to listen to all interfaces,<br />
<br />
<code><br />
nano /etc/postfix/main.cf<br />
</code><br />
<br />
and change the line at the bottom to read like this, otherwise we get into trouble with postfix blocking port 25 for all the Vservers later.<br />
<br />
<code><br />
inet_interfaces = loopback-only<br />
</code><br />
<br />
<br />
== Heartbeat ==<br />
<br />
=== Get aquinted ===<br />
Add the other node in the hosts file of both nodes, this way Heartbeat knows who is who.<br />
<br />
so for node1 do<br />
<br />
<code><br />
nano /etc/hosts<br />
</code><br />
<br />
and add node2<br />
<br />
<pre><br />
192.168.1.200 node2<br />
</pre><br />
<br />
=== Get intimate ===<br />
<br />
Set up some keys on both boxes so we can ssh login without a password (defaults, no passphrase)<br />
<br />
<code><br />
ssh-keygen<br />
</code><br />
<br />
then copy over the public keys<br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys<br />
</code><br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys<br />
</code><br />
<br />
=== Configure Heartbeat ===<br />
<br />
Without the ha.cf file Heartbeat wil not start, this should only be done on 1 of the nodes.<br />
<br />
<code><br />
nano /etc/ha.d/ha.cf<br />
</code><br />
<br />
<pre><br />
autojoin none <br />
#crm on #enables heartbeat2 cluster manager - we want that!<br />
use_logd on<br />
logfacility syslog<br />
keepalive 1<br />
deadtime 10<br />
warntime 10<br />
udpport 694<br />
auto_failback on #resources move back once node is back online<br />
mcast bond0 239.0.0.43 694 1 0 <br />
bcast eth2 <br />
node node1 #hostnames of the nodes<br />
node node2<br />
</pre><br />
<br />
This one also on 1 of the nodes<br />
<br />
<code><br />
nano /etc/ha.d/authkeys<br />
</code><br />
<br />
<pre><br />
auth 3<br />
3 md5 failover ## this is just a string, enter what you want ! auth 3 md5 uses md5 encryption<br />
</pre><br />
<br />
<code><br />
chmod 600 /etc/ha.d/authkeys<br />
</code><br />
<br />
<note><br />
We will be using heartbeat R1-style configuration here simply because i don't understand the R2 xml based syntax.<br />
</note><br />
We only did the above 2 config files on 1 node but we need it on both, heartbeat can do that for us.<br />
<br />
<code><br />
/usr/lib/heartbeat/ha_propagate<br />
</code><br />
<br />
=== Heatbeat behavior ===<br />
<br />
After above 2 files are set, the haresources is where we want to be to control Heartbeats behaviour.<br />
This is an example for 1 Vserver that we will set up later on.<br />
<br />
<code><br />
nano /etc/ha.d/haresources<br />
</code><br />
<br />
<pre><br />
node1 drbddisk::r1 LVM::drbdvg1 Filesystem::/dev/drbdvg1/web::/VSERVERS/web::ext3 vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
The above will default the Vserver named web to node1 and specify the mount points, the vserver-web script will start and stop heartbeat, the sendarp is for notifying the network that this IP can be found somewhere else then before. (have added the SendArp an extra time below for better result)<br />
<br />
Another example for more than 1 Vserver,<br />
We only specify 1 default node here for all Vservers and the same DRBD disk and Volume Group, the individual start scripts and mount points are specified separately, mind the \, its all in 1 line. the last mail command is only needed once.<br />
<br />
<pre><br />
node1 \<br />
drbddisk::r0 \<br />
LVM::drbdvg0 \<br />
Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \<br />
Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \<br />
Vserver-web \<br />
Vserver-ns1 \<br />
SendArp::123.123.123.125/bond0 \<br />
SendArp::123.123.123.126/bond0 \<br />
MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
=== start/stop script ===<br />
<br />
The vserver-web script as specified to be called by heartbeat above is basically a demolished version of the original R2 style agent by Martin Fick from here http://www.theficks.name/bin/lib/ocf/VServer.<br />
<br />
What i did is remove the sensible top part and replace "$OCF_RESKEY_vserver" with the specific Vserver name, also added an extra<br />
<br />
<pre><br />
/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start<br />
</pre><br />
<br />
to the start part because i had various results when done by Heartbeat in the first tests i did, not sure if it is still needed but i guess it doesn't hurt.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
<pre><br />
#!/bin/sh<br />
#<br />
# License: GNU General Public License (GPL) <br />
# Author: Martin Fick <mogulguy@yahoo.com><br />
# Date: 04/19/07<br />
# Version: 1.1<br />
#<br />
# This script manages a VServer instance<br />
#<br />
# It can start or stop a VServer<br />
#<br />
# usage: $0 {start|stop|status|monitor|meta-data}<br />
#<br />
#<br />
# OCF parameters are as below<br />
# OCF_RESKEY_vserver<br />
#<br />
#######################################################################<br />
# Initialization:<br />
#<br />
#. /usr/lib/heartbeat/ocf-shellfuncs<br />
#<br />
#USAGE="usage: $0 {start|stop|status|monitor|meta-data}";<br />
#<br />
#######################################################################<br />
#<br />
#<br />
#meta_data() {<br />
# cat <<END<br />
#<?xml version="1.0"?><br />
#<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd"><br />
#<resource-agent name="VServer"><br />
# <version>1.0</version><br />
# <longdesc lang="en"><br />
#This script manages a VServer instance.<br />
#It can start or stop a VServer.<br />
# </longdesc><br />
# <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc><br />
#<br />
# <parameters><br />
#<br />
# <parameter name="vserver" unique="1" required="1"><br />
# <longdesc lang="en"><br />
#The vserver name is the name as found under /etc/vservers<br />
# </longdesc><br />
# <shortdesc lang="en">VServer Name</shortdesc><br />
# <content type="string" default="" /><br />
# </parameter><br />
#<br />
# </parameters><br />
#<br />
# <actions><br />
# <action name="start" timeout="2m" /><br />
# <action name="stop" timeout="1m" /><br />
# <action name="monitor" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="status" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="meta-data" timeout="1m" /><br />
# </actions><br />
#</resource-agent><br />
#END<br />
#}<br />
<br />
vserver_reload() {<br />
vserver_stop || return<br />
vserver_start<br />
}<br />
<br />
vserver_stop() {<br />
#<br />
# Is the VServer already stopped?<br />
#<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "stop"<br />
<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
return 1<br />
}<br />
<br />
vserver_start() {<br />
vserver_status<br />
[ $? -eq 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "start"<br />
vserver_status<br />
/etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start<br />
}<br />
<br />
vserver_status() {<br />
/usr/sbin/vserver "web" "status"<br />
rc=$?<br />
if [ $rc -eq 0 ]; then<br />
echo "running"<br />
return 0<br />
elif [ $rc -eq 3 ]; then<br />
echo "stopped"<br />
else<br />
echo "unknown"<br />
fi<br />
return 7<br />
}<br />
<br />
vserver_monitor() {<br />
vserver_status<br />
}<br />
<br />
<br />
vserver_usage() {<br />
<br />
echo $USAGE >&2<br />
}<br />
<br />
vserver_info() {<br />
cat - <<!INFO<br />
Abstract=VServer Instance takeover<br />
Argument=VServer Name<br />
Description:<br />
A Vserver is a simulated server which is fairly hardware independent<br />
so it can be easily setup to run on several machines.<br />
Please rerun with the meta-data command for a list of \\<br />
valid arguments and their defaults.<br />
!INFO<br />
}<br />
<br />
#<br />
# Start or Stop the given VServer...<br />
#<br />
<br />
if [ $# -ne 1 ] ; then<br />
vserver_usage<br />
exit 2<br />
fi<br />
<br />
case "$1" in<br />
start|stop|status|monitor|reload|info|usage) vserver_$1 ;;<br />
meta-data) meta_data ;;<br />
validate-all|notify|promote|demote) exit 3 ;;<br />
<br />
*) vserver_usage ; exit 2 ;;<br />
esac<br />
<br />
<br />
</pre><br />
To make this file executable by Heartbeat<br />
<br />
<code><br />
chmod a+x /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
=== Diazepam ===<br />
<br />
Add a modificaton to the drbddisk resource, as pointed out by Christian Balzer on the Vserver mailing list http://list.linux-vserver.org/archive?mss:835:200803:cgehldioambmojimggpf, it seems to help Heartbeat to be a little more patient if it wants to close down the r0 while not all Vservers are stopped yet, not unimportant.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/drbddisk<br />
</code><br />
<br />
<pre><br />
stop)<br />
# Kill off any vserver mounts that might hog this<br />
VNSPACE=/usr/sbin/vnamespace<br />
<br />
for CTX in `/usr/sbin/vserver-stat | tail -n +2 | awk '{print $1}'`<br />
do<br />
MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"<br />
echo Unmounting mount point $MPOINT from within context $CTX<br />
### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!<br />
$VNSPACE -e $CTX /bin/umount $MPOINT || continue;<br />
done<br />
# exec, so the exit code of drbdadm propagates<br />
exec $DRBDADM secondary $RES<br />
<br />
</pre><br />
<br />
== Create a Vserver ==<br />
<br />
Note that we already have mounted the LVM partition on /VSERVERS/web in an earlier step, we're going to place both the /var and /etc directories on the mountpoint and symlink to it, this way the complete Vserver and its config are available on the other node when mounted.<br />
<br />
<code><br />
mkdir -p /VSERVERS/web/etc<br />
</code><br />
<br />
<code><br />
mkdir -p /VSERVERS/web/barrier/var<br />
</code><br />
<br />
When making the Vserver it will be in the default location /var/lib/vservers/web and its config in /etc/vservers/web <br />
<br />
<pre><br />
newvserver --hostname web --domain example.com --ip 123.123.123.125/24 --dist etch --mirror http://123.123.123.81:3142/debian.apt-get.eu/debian --interface bond0<br />
</pre><br />
<br />
<pre><br />
enter the root password<br />
</pre><br />
<br />
<pre><br />
Create a normal user account now? <br />
<No> <br />
</pre><br />
<br />
<pre><br />
Choose software to install: <br />
<Ok> <br />
</pre><br />
<br />
On node1 we move the Vserver directories to the LVM volume on the DRBD disks and make symlinks from the normal locations.<br />
<br />
On node1<br />
<br />
<code><br />
mv /etc/vservers/web/* /VSERVERS/web/etc/<br />
</code><br />
<br />
<code><br />
rmdir /etc/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<code><br />
mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var<br />
</code><br />
<br />
<code><br />
rmdir /var/lib/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
We need to set the same symlinks on node2, but the we need the Vserver directories available there first.<br />
The mounting should be handled by heartbeat by now so we make our resources move to the other machine.<br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat stop<br />
</code><br />
<br />
On node2<br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat start<br />
</code><br />
<br />
<code><br />
Vserver web start<br />
</code><br />
<br />
and enjoy!</div>212.123.252.242http://linux-vserver.org/Getting_high_with_lennyGetting high with lenny2008-10-02T07:03:55Z<p>212.123.252.242: /* Getting High with Lenny */</p>
<hr />
<div>= Getting High with Lenny =<br />
<br />
The aim here is to set up some high available services on Debian Lenny (at this moment October 1st still due to be released)<br />
<br />
<br />
There is a lot of buzz going on for a while now about virtualisation and High Availability and while Vserver is very well capable for this job the number of documented examples compared to some other virtualisation techniques are a little lacking so i thought i'd do my share. <br />
<br />
I prefer to use Vserver for the "virtualisation" because of its configurability, shared memory and cpu resources and basically the raw speed.<br />
DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly.<br />
In my experience it takes a few seconds to have several Vservers fail over to another machine with this setup.<br />
<br />
The main attempt here is to give a single working example without going to much in to the details of every option, the scenario is relatively simple but different variations can be made.<br />
<br />
For this set up we will have <br />
<br />
** 2 machines<br />
** both machines have 1 single large DRBD partition<br />
** primary/seconday there is always 1 machine active and 1 on standby<br />
** 1 LVM partition per Vserver on top of the DRBD partition, for quota support from within the guest and LVM snapshots<br />
** the Vservers /etc/vserver and /var/lib/vservers directories will be placed on the DRBD partition.<br />
<br />
In case the main machine that runs the Vservers goes down, the synchronized second machine should take over and automatically start the Vservers.<br />
<br />
Basically this is an on-line RAID solution that can keep your services running in case of hardware failure, it is NOT a back-up replacement.<br />
<br />
The cost for this setup is that you always have 1 idle machine standby, this cost can be justified by the fact that Linux-Vserver enables you to make full use of the 1 machine that is running, you also could consider to run this on a little less expensive (reliable) hardware.<br />
<br />
Also note that i will be using R1 style configuration for heartbeat, R1 style can be considered to be depreciated when using Heartbeat2 but i could not get my head around the R2 xml configuration, so if you want R2 you might want to have a look here.<br />
[[Fail-over]])<br />
<br />
The partitioning looks as follows<br />
<br />
<code> <br />
c0d0p1 Boot Primary Linux ext3 10001.95<br />
c0d0p5 Logical Linux swap / Solaris 1003.49<br />
c0d0p6 Logical Linux 280325.77<br />
<br />
</code><br />
<br />
<br />
<blockquote><br />
'''machine1''' will use the following names.<br />
hostname = node1.<br />
IP number = 192.168.1.100.<br />
is primary for r0 on disk c0d0p6.<br />
physical volume on r0 is /dev/drbd0.<br />
volume group on /dev/drbd0 is called drbdvg0.<br />
</blockquote><br />
<br />
<blockquote><br />
'''machine2''' will use the following names<br />
hostname = node2.<br />
IP number = 192.168.1.200.<br />
is secondary for r0 on disk c0d0p6.<br />
<br />
The Volume Group and the Physical Volume will be identical on node2 if this one becomes the primary for r0.<br />
</blockquote><br />
<br />
== Loadbalance-Failover the network cards ==<br />
<br />
Maybe not very specific to Vserver, Heartbeat or DRBD, but loadbalancing your network cards for failover is always usefull. Some more indepth details by Carla Schroder can be found here. <br />
[[http://www.enterprisenetworkingplanet.com/nethub/article.php/3696561]]<br />
I did not do it for the DRBD crossover cable between the nodes while this is actually highly recomended.<br />
We need both mii-tool and ethtool.<br />
<br />
<code><br />
apt-get install ethtool ifenslave-2.6<br />
</code><br />
<br />
<code><br />
nano /etc/modprobe.d/arch/i386<br />
</code><br />
<br />
To load the modules with the correct options at boot time.<br />
<br />
<pre><br />
alias bond0 bonding<br />
options bond0 mode=balance-alb miimon=100 <br />
</pre><br />
<br />
And set the interfaces eth0 and eth1 as slaves to bond0, also eth2 is set here for the crossover cable for the DRBD connection to the fail over machine.<br />
<br />
<code><br />
nano /etc/network/interfaces<br />
</code><br />
<pre><br />
# This file describes the network interfaces available on your system<br />
# and how to activate them. For more information, see interfaces(5).<br />
<br />
# The loopback network interface<br />
auto lo<br />
iface lo inet loopback<br />
<br />
# The primary network interface<br />
auto bond0<br />
iface bond0 inet static<br />
address 123.123.123.100<br />
netmask 255.255.255.0<br />
network 123.123.123.0<br />
broadcast 123.123.123.255<br />
gateway 123.123.123.1<br />
# dns-* options are implemented by the resolvconf package, if installed<br />
dns-nameservers 123.123.123.45<br />
dns-search example.com<br />
up /sbin/ifenslave bond0 eth0 eth1<br />
down ifenslave -d bond0 eth0 eth1<br />
<br />
<br />
auto eth2<br />
iface eth2 inet static<br />
address 192.168.1.100<br />
netmask 255.255.255.0<br />
</pre><br />
<br />
This way the system needs to be rebooted before the changes take effect, otherwise you should load the drivers and ifdown eth0 and eth1 first before ifup bond0 but i'm planning to install a new kernel anyway in the next step.<br />
<br />
== Install the Vserver packages ==<br />
<br />
<code><br />
apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools<br />
</code><br />
<br />
As usual a reboot is needed to boot this kernel.<br />
<br />
<blockquote><br />
With Etch i found that the Vserver kernel often ended up as second in the grub list, not so in Lenny but to be safe check the kernel stanza in /boot/grub/menu.lst especially when doing this from a remote location.<br />
</blockquote><br />
<br />
== Install DRBD8, LVM2 and Heartbeat ==<br />
<br />
<code><br />
apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2<br />
</code><br />
<br />
<blockquote><br />
not sure about this, but DRBD always needed to be compiled against the running kernel, is this still the case with the kernel specific modules? I did not check but it would be good to know in case of a kernel upgrade.<br />
</blockquote><br />
<br />
== Build DRBD8 ==<br />
<br />
Although packages are available in the repositorie for DRBD8, the purpose of these packages is that you can built it easily from source and patch the running kernel.<br />
<br />
To do this we just issue this command<br />
<br />
<code><br />
m-a a-i drbd8<br />
</code><br />
<br />
And to load it into the kernel..<br />
<br />
<code><br />
depmod -ae<br />
</code><br />
<br />
<code><br />
modprobe drbd<br />
</code><br />
<br />
==== Configure DRBD8 ====<br />
<br />
Now that we have the essentials installed we can configure DRBD. Again, i will not go in to the details of all the options here so check out the default config and http://www.drbd.org/ to find a match for your set up.<br />
<br />
<code><br />
mv /etc/drbd.conf /etc/drbd.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/drbd.conf<br />
</code><br />
<br />
<pre><br />
global {<br />
usage-count no;<br />
}<br />
<br />
common {<br />
syncer { rate 100M; } <br />
}<br />
<br />
resource r0 {<br />
protocol C;<br />
handlers {<br />
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt f";<br />
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt f";<br />
local-io-error "echo o > /proc/sysrq-trigger ; halt f";<br />
outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";<br />
}<br />
<br />
startup {<br />
degr-wfc-timeout 120; # 2 minutes.<br />
}<br />
<br />
disk {<br />
on-io-error detach;<br />
}<br />
<br />
net { <br />
after-sb-0pri disconnect;<br />
after-sb-1pri disconnect;<br />
after-sb-2pri disconnect;<br />
rr-conflict disconnect;<br />
}<br />
<br />
syncer {<br />
rate 100M;<br />
al-extents 257;<br />
}<br />
<br />
<br />
on node1 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.100:7788;<br />
meta-disk internal;<br />
}<br />
<br />
on node2 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.200:7788;<br />
meta-disk internal;<br />
}<br />
}<br />
</pre><br />
<br />
Before we start DRBD we change some permissions, otherwise it will ask for it.<br />
So on both nodes<br />
<pre><br />
chgrp haclient /sbin/drbdsetup<br />
chmod o-x /sbin/drbdsetup<br />
chmod u+s /sbin/drbdsetup<br />
chgrp haclient /sbin/drbdmeta<br />
chmod o-x /sbin/drbdmeta<br />
chmod u+s /sbin/drbdmeta<br />
</pre><br />
<br />
==== Create the DRBD devices ====<br />
<br />
On both nodes<br />
<br />
node1<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node1<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
<blockquote><br />
'''The following should be done on the node that will be the primary!'''<br />
</blockquote><br />
<br />
On node1<br />
<br />
<code><br />
drbdadm -- --overwrite-data-of-peer primary r0<br />
</code><br />
<br />
<br />
watch cat /proc/drbd should show you something like this<br />
<pre><br />
version: 8.0.13 (api:86/proto:86)<br />
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07<br />
0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---<br />
ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0<br />
[===>................] sync'ed: 22.1% (208411/267331)M<br />
finish: 4:04:44 speed: 14,472 (12,756) K/sec<br />
resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172<br />
act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102<br />
<br />
<br />
</pre><br />
<br />
== Configure LVM2 ==<br />
<br />
<br />
<note important><br />
LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same this will lead to errors where LVM reads and writes the same data to both devices.<br />
So to limit it to scan /dev/drbd devices only we do the following on both nodes.<br />
<br />
</note><br />
<br />
<code><br />
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/lvm/lvm.conf<br />
</code><br />
<br />
<pre><br />
#filter = [ "a/.*/" ]<br />
filter = [ "a|/dev/drbd|", "r|.*|" ]<br />
</pre><br />
<br />
to re-scan with the new settings on both nodes<br />
<code><br />
<br />
vgscan<br />
</code><br />
<br />
=== Create the Physical Volume ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
On node1<br />
<br />
<code><br />
pvcreate /dev/drbd0<br />
</code><br />
<br />
=== Create the Volume Group ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
One node1<br />
<br />
<code><br />
vgcreate drbdvg0 /dev/drbd0<br />
</code><br />
<br />
=== Create the Logical Volume ===<br />
<br />
Yes, again only on the node that is primary!!!<br />
<br />
For this example about 50GB, this leaves plenty of space to expand the volumes or to add extra volumes later on.<br />
<br />
On node1<br />
<br />
<code><br />
lvcreate -L50000 -n web drbdvg0<br />
</code><br />
<br />
Then we put a file system on the logical volumes<br />
<br />
<code><br />
mkfs.ext3 /dev/drbdvg0/web<br />
</code><br />
<br />
create the directory where we want to mount the Vservers<br />
<br />
<code><br />
mkdir -p /VSERVERS/web<br />
</code><br />
<br />
and mount the volume group to the mount point<br />
<br />
<code><br />
mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/<br />
</code><br />
<br />
== Get informed ==<br />
<br />
Offcourse we want to be informed later on by heartbeat in case a node goes down, so we install postfix to send the mail.<br />
<br />
This should be done on both nodes<br />
<br />
<code><br />
apt-get install postfix mailx<br />
</code><br />
<br />
and go for the defaults, "internet site" and node1.example.com"<br />
<br />
We don't want postfix to listen to all interfaces,<br />
<br />
<code><br />
nano /etc/postfix/main.cf<br />
</code><br />
<br />
and change the line at the bottom to read like this, otherwise we get into trouble with postfix blocking port 25 for all the Vservers later.<br />
<br />
<code><br />
inet_interfaces = loopback-only<br />
</code><br />
<br />
<br />
== Heartbeat ==<br />
<br />
=== Get aquinted ===<br />
Add the other node in the hosts file of both nodes, this way Heartbeat knows who is who.<br />
<br />
so for node1 do<br />
<br />
<code><br />
nano /etc/hosts<br />
</code><br />
<br />
and add node2<br />
<br />
<pre><br />
192.168.1.200 node2<br />
</pre><br />
<br />
=== Get intimate ===<br />
<br />
Set up some keys on both boxes so we can ssh login without a password (defaults, no passphrase)<br />
<br />
<code><br />
ssh-keygen<br />
</code><br />
<br />
then copy over the public keys<br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys<br />
</code><br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys<br />
</code><br />
<br />
=== Configure Heartbeat ===<br />
<br />
Without the ha.cf file Heartbeat wil not start, this should only be done on 1 of the nodes.<br />
<br />
<code><br />
nano /etc/ha.d/ha.cf<br />
</code><br />
<br />
<pre><br />
autojoin none <br />
#crm on #enables heartbeat2 cluster manager - we want that!<br />
use_logd on<br />
logfacility syslog<br />
keepalive 1<br />
deadtime 10<br />
warntime 10<br />
udpport 694<br />
auto_failback on #resources move back once node is back online<br />
mcast bond0 239.0.0.43 694 1 0 <br />
bcast eth2 <br />
node node1 #hostnames of the nodes<br />
node node2<br />
</pre><br />
<br />
This one also on 1 of the nodes<br />
<br />
<code><br />
nano /etc/ha.d/authkeys<br />
</code><br />
<br />
<pre><br />
auth 3<br />
3 md5 failover ## this is just a string, enter what you want ! auth 3 md5 uses md5 encryption<br />
</pre><br />
<br />
<code><br />
chmod 600 /etc/ha.d/authkeys<br />
</code><br />
<br />
<note><br />
We will be using heartbeat R1-style configuration here simply because i don't understand the R2 xml based syntax.<br />
</note><br />
We only did the above 2 config files on 1 node but we need it on both, heartbeat can do that for us.<br />
<br />
<code><br />
/usr/lib/heartbeat/ha_propagate<br />
</code><br />
<br />
=== Heatbeat behavior ===<br />
<br />
After above 2 files are set, the haresources is where we want to be to control Heartbeats behaviour.<br />
This is an example for 1 Vserver that we will set up later on.<br />
<br />
<code><br />
nano /etc/ha.d/haresources<br />
</code><br />
<br />
<pre><br />
node1 drbddisk::r1 LVM::drbdvg1 Filesystem::/dev/drbdvg1/web::/VSERVERS/web::ext3 vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
The above will default the Vserver named web to node1 and specify the mount points, the vserver-web script will start and stop heartbeat, the sendarp is for notifying the network that this IP can be found somewhere else then before. (have added the SendArp an extra time below for better result)<br />
<br />
Another example for more than 1 Vserver,<br />
We only specify 1 default node here for all Vservers and the same DRBD disk and Volume Group, the individual start scripts and mount points are specified separately, mind the \, its all in 1 line. the last mail command is only needed once.<br />
<br />
<pre><br />
node1 \<br />
drbddisk::r0 \<br />
LVM::drbdvg0 \<br />
Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \<br />
Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \<br />
Vserver-web \<br />
Vserver-ns1 \<br />
SendArp::123.123.123.125/bond0 \<br />
SendArp::123.123.123.126/bond0 \<br />
MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
=== start/stop script ===<br />
<br />
The vserver-web script as specified to be called by heartbeat above is basically a demolished version of the original R2 style agent by Martin Fick from here http://www.theficks.name/bin/lib/ocf/VServer.<br />
<br />
What i did is remove the sensible top part and replace "$OCF_RESKEY_vserver" with the specific Vserver name, also added an extra<br />
<br />
<pre><br />
/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start<br />
</pre><br />
<br />
to the start part because i had various results when done by Heartbeat in the first tests i did, not sure if it is still needed but i guess it doesn't hurt.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
<pre><br />
#!/bin/sh<br />
#<br />
# License: GNU General Public License (GPL) <br />
# Author: Martin Fick <mogulguy@yahoo.com><br />
# Date: 04/19/07<br />
# Version: 1.1<br />
#<br />
# This script manages a VServer instance<br />
#<br />
# It can start or stop a VServer<br />
#<br />
# usage: $0 {start|stop|status|monitor|meta-data}<br />
#<br />
#<br />
# OCF parameters are as below<br />
# OCF_RESKEY_vserver<br />
#<br />
#######################################################################<br />
# Initialization:<br />
#<br />
#. /usr/lib/heartbeat/ocf-shellfuncs<br />
#<br />
#USAGE="usage: $0 {start|stop|status|monitor|meta-data}";<br />
#<br />
#######################################################################<br />
#<br />
#<br />
#meta_data() {<br />
# cat <<END<br />
#<?xml version="1.0"?><br />
#<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd"><br />
#<resource-agent name="VServer"><br />
# <version>1.0</version><br />
# <longdesc lang="en"><br />
#This script manages a VServer instance.<br />
#It can start or stop a VServer.<br />
# </longdesc><br />
# <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc><br />
#<br />
# <parameters><br />
#<br />
# <parameter name="vserver" unique="1" required="1"><br />
# <longdesc lang="en"><br />
#The vserver name is the name as found under /etc/vservers<br />
# </longdesc><br />
# <shortdesc lang="en">VServer Name</shortdesc><br />
# <content type="string" default="" /><br />
# </parameter><br />
#<br />
# </parameters><br />
#<br />
# <actions><br />
# <action name="start" timeout="2m" /><br />
# <action name="stop" timeout="1m" /><br />
# <action name="monitor" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="status" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="meta-data" timeout="1m" /><br />
# </actions><br />
#</resource-agent><br />
#END<br />
#}<br />
<br />
vserver_reload() {<br />
vserver_stop || return<br />
vserver_start<br />
}<br />
<br />
vserver_stop() {<br />
#<br />
# Is the VServer already stopped?<br />
#<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "stop"<br />
<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
return 1<br />
}<br />
<br />
vserver_start() {<br />
vserver_status<br />
[ $? -eq 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "start"<br />
vserver_status<br />
/etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start<br />
}<br />
<br />
vserver_status() {<br />
/usr/sbin/vserver "web" "status"<br />
rc=$?<br />
if [ $rc -eq 0 ]; then<br />
echo "running"<br />
return 0<br />
elif [ $rc -eq 3 ]; then<br />
echo "stopped"<br />
else<br />
echo "unknown"<br />
fi<br />
return 7<br />
}<br />
<br />
vserver_monitor() {<br />
vserver_status<br />
}<br />
<br />
<br />
vserver_usage() {<br />
<br />
echo $USAGE >&2<br />
}<br />
<br />
vserver_info() {<br />
cat - <<!INFO<br />
Abstract=VServer Instance takeover<br />
Argument=VServer Name<br />
Description:<br />
A Vserver is a simulated server which is fairly hardware independent<br />
so it can be easily setup to run on several machines.<br />
Please rerun with the meta-data command for a list of \\<br />
valid arguments and their defaults.<br />
!INFO<br />
}<br />
<br />
#<br />
# Start or Stop the given VServer...<br />
#<br />
<br />
if [ $# -ne 1 ] ; then<br />
vserver_usage<br />
exit 2<br />
fi<br />
<br />
case "$1" in<br />
start|stop|status|monitor|reload|info|usage) vserver_$1 ;;<br />
meta-data) meta_data ;;<br />
validate-all|notify|promote|demote) exit 3 ;;<br />
<br />
*) vserver_usage ; exit 2 ;;<br />
esac<br />
<br />
<br />
</pre><br />
To make this file executable by Heartbeat<br />
<br />
<code><br />
chmod a+x /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
=== Diazepam ===<br />
<br />
Add a modificaton to the drbddisk resource, as pointed out by Christian Balzer on the Vserver mailing list http://list.linux-vserver.org/archive?mss:835:200803:cgehldioambmojimggpf, it seems to help Heartbeat to be a little more patient if it wants to close down the r0 while not all Vservers are stopped yet, not unimportant.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/drbddisk<br />
</code><br />
<br />
<pre><br />
stop)<br />
# Kill off any vserver mounts that might hog this<br />
VNSPACE=/usr/sbin/vnamespace<br />
<br />
for CTX in `/usr/sbin/vserver-stat | tail -n +2 | awk '{print $1}'`<br />
do<br />
MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"<br />
echo Unmounting mount point $MPOINT from within context $CTX<br />
### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!<br />
$VNSPACE -e $CTX /bin/umount $MPOINT || continue;<br />
done<br />
# exec, so the exit code of drbdadm propagates<br />
exec $DRBDADM secondary $RES<br />
<br />
</pre><br />
<br />
== Create a Vserver ==<br />
<br />
Note that we already have mounted the LVM partition on /VSERVERS/web in an earlier step, we're going to place both the /var and /etc directories on the mountpoint and symlink to it, this way the complete Vserver and its config are available on the other node when mounted.<br />
<br />
<code><br />
mkdir -p /VSERVERS/web/etc<br />
</code><br />
<br />
<code><br />
mkdir -p /VSERVERS/web/barrier/var<br />
</code><br />
<br />
When making the Vserver it will be in the default location /var/lib/vservers/web and its config in /etc/vservers/web <br />
<br />
<pre><br />
newvserver --hostname web --domain example.com --ip 123.123.123.125/24 --dist etch --mirror http://123.123.123.81:3142/debian.apt-get.eu/debian --interface bond0<br />
</pre><br />
<br />
<pre><br />
enter the root password<br />
</pre><br />
<br />
<pre><br />
Create a normal user account now? <br />
<No> <br />
</pre><br />
<br />
<pre><br />
Choose software to install: <br />
<Ok> <br />
</pre><br />
<br />
On node1 we move the Vserver directories to the LVM volume on the DRBD disks and make symlinks from the normal locations.<br />
<br />
On node1<br />
<br />
<code><br />
mv /etc/vservers/web/* /VSERVERS/web/etc/<br />
</code><br />
<br />
<code><br />
rmdir /etc/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<code><br />
mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var<br />
</code><br />
<br />
<code><br />
rmdir /var/lib/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
We need to set the same symlinks on node2, but the we need the Vserver directories available there first.<br />
The mounting should be handled by heartbeat by now so we make our resources move to the other machine.<br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat stop<br />
</code><br />
<br />
On node2<br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat start<br />
</code><br />
<br />
<code><br />
Vserver web start<br />
</code><br />
<br />
and enjoy!</div>212.123.252.242http://linux-vserver.org/Getting_high_with_lennyGetting high with lenny2008-10-02T07:00:18Z<p>212.123.252.242: /* Getting High with Lenny */</p>
<hr />
<div>= Getting High with Lenny =<br />
<br />
The aim here is to set up some high available services on Debian Lenny (at this moment October 1st still due to be released)<br />
<br />
<br />
There is a lot of buzz going on for a while now about virtualisation and High Availability and while Vserver is very well capable for this job the number of documented examples compared to some other virtualisation techniques are a little lacking so i thought i'd do my share. <br />
<br />
I prefer to use Vserver for the "virtualisation" because of its configurability, shared memory and cpu resources and basically the raw speed.<br />
DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly.<br />
In my experience it takes a few seconds to have several Vservers fail over to another machine with this setup.<br />
<br />
The main attempt here is to give a single working example without going to much in to the details of every option, the scenario is relatively simple but different variations can be made.<br />
<br />
For this set up we will have <br />
<br />
** 2 machines<br />
** both machines have 1 single large DRBD partition<br />
** primary/seconday there is always 1 machine active and 1 on standby<br />
** 1 LVM partition per Vserver on top of the DRBD partition, for quota support from within the guest and LVM snapshots<br />
** the Vservers /etc/vserver and /var/lib/vservers directories will be placed on the DRBD partition.<br />
<br />
In case the main machine that runs the Vservers goes down, the synchronized second machine should take over and automatically start the Vservers.<br />
<br />
Basically this is an on-line RAID solution that can keep your services running in case of hardware failure, it is NOT a back-up replacement.<br />
<br />
The cost for this setup is that you always have 1 idle machine standby, this cost can be justified by the fact that Linux-Vserver enables you to make full use of the 1 machine that is running, you also could consider to run this on a little less expensive (reliable) hardware.<br />
<br />
Also note that i will be using R1 style configuration for heartbeat, R1 style can be considered to be depreciated when using Heartbeat2 but i could not get my head around the R2 xml configuration, so if you want R2 you might want to have a look here.<br />
[[Fail-over]])<br />
<br />
The partitioning looks as follows<br />
<br />
<code> <br />
c0d0p1 Boot Primary Linux ext3 10001.95<br />
c0d0p5 Logical Linux swap / Solaris 1003.49<br />
c0d0p6 Logical Linux 280325.77<br />
<br />
</code><br />
<br />
<br />
<blockquote><br />
*'''machine1''' will use the following names<br />
*hostname = node1<br />
IP number = 192.168.1.100 <br />
is primary for r0 on disk c0d0p6<br />
physical volume on r0 is /dev/drbd0<br />
volume group on /dev/drbd0 is called drbdvg0<br />
</blockquote><br />
<br />
<blockquote><br />
'''machine2''' will use the following names<br />
hostname = node2<br />
IP number = 192.168.1.200 <br />
is secondary for r0 on disk c0d0p6<br />
<br />
The Volume Group and the Physical Volume will be identical on node2 if this one becomes the primary for r0.<br />
</blockquote><br />
<br />
== Loadbalance-Failover the network cards ==<br />
<br />
Maybe not very specific to Vserver, Heartbeat or DRBD, but loadbalancing your network cards for failover is always usefull. Some more indepth details by Carla Schroder can be found here. <br />
[[http://www.enterprisenetworkingplanet.com/nethub/article.php/3696561]]<br />
I did not do it for the DRBD crossover cable between the nodes while this is actually highly recomended.<br />
We need both mii-tool and ethtool.<br />
<br />
<code><br />
apt-get install ethtool ifenslave-2.6<br />
</code><br />
<br />
<code><br />
nano /etc/modprobe.d/arch/i386<br />
</code><br />
<br />
To load the modules with the correct options at boot time.<br />
<br />
<pre><br />
alias bond0 bonding<br />
options bond0 mode=balance-alb miimon=100 <br />
</pre><br />
<br />
And set the interfaces eth0 and eth1 as slaves to bond0, also eth2 is set here for the crossover cable for the DRBD connection to the fail over machine.<br />
<br />
<code><br />
nano /etc/network/interfaces<br />
</code><br />
<pre><br />
# This file describes the network interfaces available on your system<br />
# and how to activate them. For more information, see interfaces(5).<br />
<br />
# The loopback network interface<br />
auto lo<br />
iface lo inet loopback<br />
<br />
# The primary network interface<br />
auto bond0<br />
iface bond0 inet static<br />
address 123.123.123.100<br />
netmask 255.255.255.0<br />
network 123.123.123.0<br />
broadcast 123.123.123.255<br />
gateway 123.123.123.1<br />
# dns-* options are implemented by the resolvconf package, if installed<br />
dns-nameservers 123.123.123.45<br />
dns-search example.com<br />
up /sbin/ifenslave bond0 eth0 eth1<br />
down ifenslave -d bond0 eth0 eth1<br />
<br />
<br />
auto eth2<br />
iface eth2 inet static<br />
address 192.168.1.100<br />
netmask 255.255.255.0<br />
</pre><br />
<br />
This way the system needs to be rebooted before the changes take effect, otherwise you should load the drivers and ifdown eth0 and eth1 first before ifup bond0 but i'm planning to install a new kernel anyway in the next step.<br />
<br />
== Install the Vserver packages ==<br />
<br />
<code><br />
apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools<br />
</code><br />
<br />
As usual a reboot is needed to boot this kernel.<br />
<br />
<blockquote><br />
With Etch i found that the Vserver kernel often ended up as second in the grub list, not so in Lenny but to be safe check the kernel stanza in /boot/grub/menu.lst especially when doing this from a remote location.<br />
</blockquote><br />
<br />
== Install DRBD8, LVM2 and Heartbeat ==<br />
<br />
<code><br />
apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2<br />
</code><br />
<br />
<blockquote><br />
not sure about this, but DRBD always needed to be compiled against the running kernel, is this still the case with the kernel specific modules? I did not check but it would be good to know in case of a kernel upgrade.<br />
</blockquote><br />
<br />
== Build DRBD8 ==<br />
<br />
Although packages are available in the repositorie for DRBD8, the purpose of these packages is that you can built it easily from source and patch the running kernel.<br />
<br />
To do this we just issue this command<br />
<br />
<code><br />
m-a a-i drbd8<br />
</code><br />
<br />
And to load it into the kernel..<br />
<br />
<code><br />
depmod -ae<br />
</code><br />
<br />
<code><br />
modprobe drbd<br />
</code><br />
<br />
==== Configure DRBD8 ====<br />
<br />
Now that we have the essentials installed we can configure DRBD. Again, i will not go in to the details of all the options here so check out the default config and http://www.drbd.org/ to find a match for your set up.<br />
<br />
<code><br />
mv /etc/drbd.conf /etc/drbd.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/drbd.conf<br />
</code><br />
<br />
<pre><br />
global {<br />
usage-count no;<br />
}<br />
<br />
common {<br />
syncer { rate 100M; } <br />
}<br />
<br />
resource r0 {<br />
protocol C;<br />
handlers {<br />
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt f";<br />
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt f";<br />
local-io-error "echo o > /proc/sysrq-trigger ; halt f";<br />
outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";<br />
}<br />
<br />
startup {<br />
degr-wfc-timeout 120; # 2 minutes.<br />
}<br />
<br />
disk {<br />
on-io-error detach;<br />
}<br />
<br />
net { <br />
after-sb-0pri disconnect;<br />
after-sb-1pri disconnect;<br />
after-sb-2pri disconnect;<br />
rr-conflict disconnect;<br />
}<br />
<br />
syncer {<br />
rate 100M;<br />
al-extents 257;<br />
}<br />
<br />
<br />
on node1 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.100:7788;<br />
meta-disk internal;<br />
}<br />
<br />
on node2 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.200:7788;<br />
meta-disk internal;<br />
}<br />
}<br />
</pre><br />
<br />
Before we start DRBD we change some permissions, otherwise it will ask for it.<br />
So on both nodes<br />
<pre><br />
chgrp haclient /sbin/drbdsetup<br />
chmod o-x /sbin/drbdsetup<br />
chmod u+s /sbin/drbdsetup<br />
chgrp haclient /sbin/drbdmeta<br />
chmod o-x /sbin/drbdmeta<br />
chmod u+s /sbin/drbdmeta<br />
</pre><br />
<br />
==== Create the DRBD devices ====<br />
<br />
On both nodes<br />
<br />
node1<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node1<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
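<br />
At this point both nodes are connected but neither side has valid data yet; cat /proc/drbd on either node should report a state along these lines (counters omitted, yours will differ):<br />
<br />
<pre><br />
0: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r---<br />
</pre><br />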
<br />
<blockquote><br />
'''The following should be done on the node that will be the primary!'''<br />
</blockquote><br />
<br />
On node1<br />
<br />
<code><br />
drbdadm -- --overwrite-data-of-peer primary r0<br />
</code><br />
<br />
<br />
watch cat /proc/drbd should show you something like this<br />
<pre><br />
version: 8.0.13 (api:86/proto:86)<br />
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07<br />
0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---<br />
ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0<br />
[===>................] sync'ed: 22.1% (208411/267331)M<br />
finish: 4:04:44 speed: 14,472 (12,756) K/sec<br />
resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172<br />
act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102<br />
<br />
<br />
</pre><br />
<br />
== Configure LVM2 ==<br />
<br />
<br />
<note important><br />
LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same device, LVM would detect the same physical volume twice and read and write the same data through both paths.<br />
So to limit it to scan /dev/drbd devices only we do the following on both nodes.<br />
<br />
</note><br />
<br />
<code><br />
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/lvm/lvm.conf<br />
</code><br />
<br />
<pre><br />
#filter = [ "a/.*/" ]<br />
filter = [ "a|/dev/drbd|", "r|.*|" ]<br />
</pre><br />
<br />
To re-scan with the new settings, on both nodes:<br />
<code><br />
vgscan<br />
</code><br />
<br />
=== Create the Physical Volume ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
On node1<br />
<br />
<code><br />
pvcreate /dev/drbd0<br />
</code><br />
<br />
=== Create the Volume Group ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
On node1<br />
<br />
<code><br />
vgcreate drbdvg0 /dev/drbd0<br />
</code><br />
<br />
=== Create the Logical Volume ===<br />
<br />
Yes, again only on the node that is primary!!!<br />
<br />
For this example we create a volume of about 50GB; this leaves plenty of space to expand it or to add extra volumes later on.<br />
<br />
On node1<br />
<br />
<code><br />
lvcreate -L50000 -n web drbdvg0<br />
</code><br />
<br />
Then we put a file system on the logical volume<br />
<br />
<code><br />
mkfs.ext3 /dev/drbdvg0/web<br />
</code><br />
<br />
Create the directory where we want to mount the Vserver volume<br />
<br />
<code><br />
mkdir -p /VSERVERS/web<br />
</code><br />
<br />
and mount the logical volume on the mount point<br />
<br />
<code><br />
mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/<br />
</code><br />
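<br />
A quick df should confirm the volume is mounted where we expect it:<br />
<br />
<code><br />
df -h /VSERVERS/web<br />
</code><br />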
<br />
== Get informed ==<br />
<br />
Of course we want Heartbeat to inform us later on in case a node goes down, so we install Postfix to send the mail.<br />
<br />
This should be done on both nodes<br />
<br />
<code><br />
apt-get install postfix mailx<br />
</code><br />
<br />
and go for the defaults, "Internet Site" and "node1.example.com".<br />
<br />
We don't want postfix to listen to all interfaces,<br />
<br />
<code><br />
nano /etc/postfix/main.cf<br />
</code><br />
<br />
and change the line at the bottom to read like this; otherwise Postfix on the host will occupy port 25 on all addresses, including those of the Vservers later on.<br />
<br />
<code><br />
inet_interfaces = loopback-only<br />
</code><br />
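<br />
Reload Postfix and, if you like, send yourself a quick test mail with the mailx we just installed (the address is of course just an example):<br />
<br />
<pre><br />
/etc/init.d/postfix reload<br />
echo "test from node1" | mail -s "postfix test" root@example.com<br />
</pre><br />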
<br />
<br />
== Heartbeat ==<br />
<br />
=== Get acquainted ===<br />
Add the other node to the hosts file of both nodes; this way Heartbeat knows who is who.<br />
<br />
so for node1 do<br />
<br />
<code><br />
nano /etc/hosts<br />
</code><br />
<br />
and add node2<br />
<br />
<pre><br />
192.168.1.200 node2<br />
</pre><br />
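<br />
And the mirror image on node2, so it can resolve node1 as well:<br />
<br />
<pre><br />
192.168.1.100 node1<br />
</pre><br />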
<br />
=== Get intimate ===<br />
<br />
Set up keys on both boxes so we can log in over ssh without a password (accept the defaults, no passphrase).<br />
<br />
<code><br />
ssh-keygen<br />
</code><br />
<br />
then copy over the public keys; run the first command on node2 and the second on node1<br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys<br />
</code><br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys<br />
</code><br />
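<br />
A login from one node to the other should now work without a password prompt, for example from node1:<br />
<br />
<code><br />
ssh 192.168.1.200 hostname<br />
</code><br />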
<br />
=== Configure Heartbeat ===<br />
<br />
Without the ha.cf file Heartbeat will not start. The following should only be done on 1 of the nodes; we propagate it to the second node further down.<br />
<br />
<code><br />
nano /etc/ha.d/ha.cf<br />
</code><br />
<br />
<pre><br />
autojoin none <br />
#crm on #would enable the heartbeat2 cluster manager (R2 style), we stay with R1-style resources here<br />
use_logd on<br />
logfacility syslog<br />
keepalive 1<br />
deadtime 10<br />
warntime 10<br />
udpport 694<br />
auto_failback on #resources move back once node is back online<br />
mcast bond0 239.0.0.43 694 1 0 <br />
bcast eth2 <br />
node node1 #hostnames of the nodes<br />
node node2<br />
</pre><br />
<br />
This one also on 1 of the nodes<br />
<br />
<code><br />
nano /etc/ha.d/authkeys<br />
</code><br />
<br />
<pre><br />
auth 3<br />
3 md5 failover ## "failover" is just a shared secret, enter what you want; "auth 3" selects key 3, which uses md5 hashing<br />
</pre><br />
<br />
<code><br />
chmod 600 /etc/ha.d/authkeys<br />
</code><br />
<br />
<note><br />
We will be using heartbeat R1-style configuration here simply because I don't understand the R2 XML-based syntax.<br />
</note><br />
We only created the above 2 config files on 1 node but we need them on both; heartbeat can do that for us.<br />
<br />
<code><br />
/usr/lib/heartbeat/ha_propagate<br />
</code><br />
<br />
=== Heartbeat behaviour ===<br />
<br />
After the above 2 files are set, haresources is where we control Heartbeat's behaviour.<br />
This is an example for the single Vserver that we will set up later on.<br />
<br />
<code><br />
nano /etc/ha.d/haresources<br />
</code><br />
<br />
<pre><br />
node1 drbddisk::r0 LVM::drbdvg0 Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 Vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
The above defaults the Vserver named web to node1 and specifies the mount points; the Vserver-web script is what Heartbeat calls to start and stop the guest, and SendArp notifies the network that this IP can now be found somewhere else than before. (I have added the SendArp an extra time inside the script below for better results.)<br />
<br />
Another example, for more than 1 Vserver.<br />
We specify 1 default node for all Vservers and the same DRBD disk and Volume Group; the individual start scripts and mount points are listed separately. Mind the backslashes, logically it is all 1 line, and the final MailTo entry is only needed once.<br />
<br />
<pre><br />
node1 \<br />
drbddisk::r0 \<br />
LVM::drbdvg0 \<br />
Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \<br />
Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \<br />
Vserver-web \<br />
Vserver-ns1 \<br />
SendArp::123.123.123.125/bond0 \<br />
SendArp::123.123.123.126/bond0 \<br />
MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
=== start/stop script ===<br />
<br />
The Vserver-web script that Heartbeat calls above is basically a stripped-down version of the original R2-style agent by Martin Fick from http://www.theficks.name/bin/lib/ocf/VServer.<br />
<br />
What I did is strip the OCF-specific top part, replace "$OCF_RESKEY_vserver" with the specific Vserver name, and add an extra<br />
<br />
<pre><br />
/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start<br />
</pre><br />
<br />
to the start part, because I got mixed results when the ARP announcement was left to Heartbeat alone in my first tests; I am not sure it is still needed, but it doesn't hurt.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
<pre><br />
#!/bin/sh<br />
#<br />
# License: GNU General Public License (GPL) <br />
# Author: Martin Fick <mogulguy@yahoo.com><br />
# Date: 04/19/07<br />
# Version: 1.1<br />
#<br />
# This script manages a VServer instance<br />
#<br />
# It can start or stop a VServer<br />
#<br />
# usage: $0 {start|stop|status|monitor|meta-data}<br />
#<br />
#<br />
# OCF parameters are as below<br />
# OCF_RESKEY_vserver<br />
#<br />
#######################################################################<br />
# Initialization:<br />
#<br />
#. /usr/lib/heartbeat/ocf-shellfuncs<br />
#<br />
# USAGE is used by vserver_usage() below, so it stays defined:<br />
USAGE="usage: $0 {start|stop|status|monitor|reload|info|usage|meta-data}";<br />
#<br />
#######################################################################<br />
#<br />
#<br />
#meta_data() {<br />
# cat <<END<br />
#<?xml version="1.0"?><br />
#<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd"><br />
#<resource-agent name="VServer"><br />
# <version>1.0</version><br />
# <longdesc lang="en"><br />
#This script manages a VServer instance.<br />
#It can start or stop a VServer.<br />
# </longdesc><br />
# <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc><br />
#<br />
# <parameters><br />
#<br />
# <parameter name="vserver" unique="1" required="1"><br />
# <longdesc lang="en"><br />
#The vserver name is the name as found under /etc/vservers<br />
# </longdesc><br />
# <shortdesc lang="en">VServer Name</shortdesc><br />
# <content type="string" default="" /><br />
# </parameter><br />
#<br />
# </parameters><br />
#<br />
# <actions><br />
# <action name="start" timeout="2m" /><br />
# <action name="stop" timeout="1m" /><br />
# <action name="monitor" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="status" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="meta-data" timeout="1m" /><br />
# </actions><br />
#</resource-agent><br />
#END<br />
#}<br />
<br />
vserver_reload() {<br />
vserver_stop || return<br />
vserver_start<br />
}<br />
<br />
vserver_stop() {<br />
#<br />
# Is the VServer already stopped?<br />
#<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "stop"<br />
<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
return 1<br />
}<br />
<br />
vserver_start() {<br />
vserver_status<br />
[ $? -eq 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "start"<br />
vserver_status<br />
/etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start<br />
}<br />
<br />
vserver_status() {<br />
/usr/sbin/vserver "web" "status"<br />
rc=$?<br />
if [ $rc -eq 0 ]; then<br />
echo "running"<br />
return 0<br />
elif [ $rc -eq 3 ]; then<br />
echo "stopped"<br />
else<br />
echo "unknown"<br />
fi<br />
return 7<br />
}<br />
<br />
vserver_monitor() {<br />
vserver_status<br />
}<br />
<br />
<br />
vserver_usage() {<br />
<br />
echo $USAGE >&2<br />
}<br />
<br />
vserver_info() {<br />
cat - <<!INFO<br />
Abstract=VServer Instance takeover<br />
Argument=VServer Name<br />
Description:<br />
A Vserver is a simulated server which is fairly hardware independent<br />
so it can be easily setup to run on several machines.<br />
Please rerun with the meta-data command for a list of \\<br />
valid arguments and their defaults.<br />
!INFO<br />
}<br />
<br />
#<br />
# Start or Stop the given VServer...<br />
#<br />
<br />
if [ $# -ne 1 ] ; then<br />
vserver_usage<br />
exit 2<br />
fi<br />
<br />
case "$1" in<br />
start|stop|status|monitor|reload|info|usage) vserver_$1 ;;<br />
    meta-data) exit 3 ;;  # meta_data() is commented out above<br />
validate-all|notify|promote|demote) exit 3 ;;<br />
<br />
*) vserver_usage ; exit 2 ;;<br />
esac<br />
<br />
<br />
</pre><br />
To make this file executable by Heartbeat<br />
<br />
<code><br />
chmod a+x /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
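<br />
Once the guest exists (we create it further down) it is worth exercising the script by hand on the active node, so you know Heartbeat will get sensible answers from it:<br />
<br />
<pre><br />
/etc/ha.d/resource.d/Vserver-web status<br />
/etc/ha.d/resource.d/Vserver-web start<br />
/etc/ha.d/resource.d/Vserver-web status<br />
</pre><br />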
<br />
=== Diazepam ===<br />
<br />
Add a modification to the drbddisk resource, as pointed out by Christian Balzer on the Vserver mailing list http://list.linux-vserver.org/archive?mss:835:200803:cgehldioambmojimggpf. It makes Heartbeat a little more patient when it wants to demote r0 while not all Vservers are stopped yet, which is not unimportant. Only the stop) case is shown below; the $RES and $DRBDADM variables are already defined earlier in the stock drbddisk script.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/drbddisk<br />
</code><br />
<br />
<pre><br />
stop)<br />
# Kill off any vserver mounts that might hog this<br />
VNSPACE=/usr/sbin/vnamespace<br />
<br />
for CTX in `/usr/sbin/vserver-stat | tail -n +2 | awk '{print $1}'`<br />
do<br />
MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"<br />
echo Unmounting mount point $MPOINT from within context $CTX<br />
### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!<br />
$VNSPACE -e $CTX /bin/umount $MPOINT || continue;<br />
done<br />
# exec, so the exit code of drbdadm propagates<br />
exec $DRBDADM secondary $RES<br />
<br />
</pre><br />
<br />
== Create a Vserver ==<br />
<br />
Note that we already mounted the LVM partition on /VSERVERS/web in an earlier step. We are going to place both the /var and /etc directories of the guest on that mount point and symlink to them; this way the complete Vserver and its config are available on the other node once the volume is mounted there.<br />
<br />
<code><br />
mkdir -p /VSERVERS/web/etc<br />
</code><br />
<br />
<code><br />
mkdir -p /VSERVERS/web/barrier/var<br />
</code><br />
<br />
When created, the Vserver will be in the default location /var/lib/vservers/web and its config in /etc/vservers/web.<br />
<br />
<pre><br />
newvserver --hostname web --domain example.com --ip 123.123.123.125/24 --dist etch --mirror http://123.123.123.81:3142/debian.apt-get.eu/debian --interface bond0<br />
</pre><br />
<br />
<pre><br />
enter the root password<br />
</pre><br />
<br />
<pre><br />
Create a normal user account now? <br />
<No> <br />
</pre><br />
<br />
<pre><br />
Choose software to install: <br />
<Ok> <br />
</pre><br />
<br />
On node1 we move the Vserver directories to the LVM volume on the DRBD disks and make symlinks from the normal locations.<br />
<br />
On node1<br />
<br />
<code><br />
mv /etc/vservers/web/* /VSERVERS/web/etc/<br />
</code><br />
<br />
<code><br />
rmdir /etc/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<code><br />
mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var<br />
</code><br />
<br />
<code><br />
rmdir /var/lib/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
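<br />
A quick listing should now show both symlinks pointing into /VSERVERS/web:<br />
<br />
<code><br />
ls -ld /etc/vservers/web /var/lib/vservers/web<br />
</code><br />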
<br />
We need to set the same symlinks on node2, but then we need the Vserver directories available there first.<br />
The mounting is handled by heartbeat by now, so we make our resources move to the other machine by stopping heartbeat on node1.<br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat stop<br />
</code><br />
<br />
On node2<br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat start<br />
</code><br />
<br />
<code><br />
vserver web start<br />
</code><br />
<br />
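Before relying on the setup, simulate a failure once: stop Heartbeat on node1 and watch the resources land on node2 (they move back afterwards because auto_failback is on). A minimal test round, assuming everything above is in place:<br />
<br />
<pre><br />
node1# /etc/init.d/heartbeat stop<br />
node2# cat /proc/drbd     # node2 should now be Primary<br />
node2# vserver-stat       # the web guest should be running here<br />
node1# /etc/init.d/heartbeat start<br />
</pre><br />
<br />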
and enjoy!
<hr />
<div>= Getting High with Lenny =<br />
<br />
The aim here is to set up some high available services on Debian Lenny (at this moment October 1st still due to be released)<br />
<br />
<br />
There is a lot of buzz going on for a while now about virtualisation and High Availability and while Vserver is very well capable for this job the number of documented examples compared to some other virtualisation techniques are a little lacking so i thought i'd do my share. <br />
<br />
I prefer to use Vserver for the "virtualisation" because of its configurability, shared memory and cpu resources and basically the raw speed.<br />
DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly.<br />
In my experience it takes a few seconds to have several Vservers fail over to another machine with this setup.<br />
<br />
The main attempt here is to give a single working example without going to much in to the details of every option, the scenario is relatively simple but different variations can be made.<br />
<br />
For this set up we will have <br />
<br />
** 2 machines<br />
** both machines have 1 single large DRBD partition<br />
** primary/seconday there is always 1 machine active and 1 on standby<br />
** 1 LVM partition per Vserver on top of the DRBD partition, for quota support from within the guest and LVM snapshots<br />
** the Vservers /etc/vserver and /var/lib/vservers directories will be placed on the DRBD partition.<br />
<br />
In case the main machine that runs the Vservers goes down, the synchronized second machine should take over and automatically start the Vservers.<br />
<br />
Basically this is an on-line RAID solution that can keep your services running in case of hardware failure, it is NOT a back-up replacement.<br />
<br />
The cost for this setup is that you always have 1 idle machine standby, this cost can be justified by the fact that Linux-Vserver enables you to make full use of the 1 machine that is running, you also could consider to run this on a little less expensive (reliable) hardware.<br />
<br />
Also note that i will be using R1 style configuration for heartbeat, R1 style can be considered to be depreciated when using Heartbeat2 but i could not get my head around the R2 xml configuration, so if you want R2 you might want to have a look here.<br />
[[Fail-over]])<br />
<br />
The partitioning looks as follows<br />
<br />
<code> <br />
c0d0p1 Boot Primary Linux ext3 10001.95<br />
c0d0p5 Logical Linux swap / Solaris 1003.49<br />
c0d0p6 Logical Linux 280325.77<br />
<br />
</code><br />
<br />
<br />
<blockquote><br />
'''machine1''' will use the following names<br />
hostname = node1<br />
IP number = 192.168.1.100 <br />
is primary for r0 on disk c0d0p6<br />
physical volume on r0 is /dev/drbd0<br />
volume group on /dev/drbd0 is called drbdvg0<br />
</blockquote><br />
<br />
<blockquote><br />
'''machine2''' will use the following names<br />
hostname = node2<br />
IP number = 192.168.1.200 <br />
is secondary for r0 on disk c0d0p6<br />
<br />
The Volume Group and the Physical Volume will be identical on node2 if this one becomes the primary for r0.<br />
</blockquote><br />
<br />
== Loadbalance-Failover the network cards ==<br />
<br />
Maybe not very specific to Vserver, Heartbeat or DRBD, but loadbalancing your network cards for failover is always usefull. Some more indepth details by Carla Schroder can be found here. <br />
[[http://www.enterprisenetworkingplanet.com/nethub/article.php/3696561]]<br />
I did not do it for the DRBD crossover cable between the nodes while this is actually highly recomended.<br />
We need both mii-tool and ethtool.<br />
<br />
<code><br />
apt-get install ethtool ifenslave-2.6<br />
</code><br />
<br />
<code><br />
nano /etc/modprobe.d/arch/i386<br />
</code><br />
<br />
To load the modules with the correct options at boot time.<br />
<br />
<pre><br />
alias bond0 bonding<br />
options bond0 mode=balance-alb miimon=100 <br />
</pre><br />
<br />
And set the interfaces eth0 and eth1 as slaves to bond0, also eth2 is set here for the crossover cable for the DRBD connection to the fail over machine.<br />
<br />
<code><br />
nano /etc/network/interfaces<br />
</code><br />
<pre><br />
# This file describes the network interfaces available on your system<br />
# and how to activate them. For more information, see interfaces(5).<br />
<br />
# The loopback network interface<br />
auto lo<br />
iface lo inet loopback<br />
<br />
# The primary network interface<br />
auto bond0<br />
iface bond0 inet static<br />
address 123.123.123.100<br />
netmask 255.255.255.0<br />
network 123.123.123.0<br />
broadcast 123.123.123.255<br />
gateway 123.123.123.1<br />
# dns-* options are implemented by the resolvconf package, if installed<br />
dns-nameservers 123.123.123.45<br />
dns-search example.com<br />
up /sbin/ifenslave bond0 eth0 eth1<br />
down ifenslave -d bond0 eth0 eth1<br />
<br />
<br />
auto eth2<br />
iface eth2 inet static<br />
address 192.168.1.100<br />
netmask 255.255.255.0<br />
</pre><br />
<br />
This way the system needs to be rebooted before the changes take effect, otherwise you should load the drivers and ifdown eth0 and eth1 first before ifup bond0 but i'm planning to install a new kernel anyway in the next step.<br />
<br />
== Install the Vserver packages ==<br />
<br />
<code><br />
apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools<br />
</code><br />
<br />
As usual a reboot is needed to boot this kernel.<br />
<br />
<blockquote><br />
With Etch i found that the Vserver kernel often ended up as second in the grub list, not so in Lenny but to be safe check the kernel stanza in /boot/grub/menu.lst especially when doing this from a remote location.<br />
</blockquote><br />
<br />
== Install DRBD8, LVM2 and Heartbeat ==<br />
<br />
<code><br />
apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2<br />
</code><br />
<br />
<blockquote><br />
not sure about this, but DRBD always needed to be compiled against the running kernel, is this still the case with the kernel specific modules? I did not check but it would be good to know in case of a kernel upgrade.<br />
</blockquote><br />
<br />
== Build DRBD8 ==<br />
<br />
Although packages are available in the repositorie for DRBD8, the purpose of these packages is that you can built it easily from source and patch the running kernel.<br />
<br />
To do this we just issue this command<br />
<br />
<code><br />
m-a a-i drbd8<br />
</code><br />
<br />
And to load it into the kernel..<br />
<br />
<code><br />
depmod -ae<br />
</code><br />
<br />
<code><br />
modprobe drbd<br />
</code><br />
<br />
==== Configure DRBD8 ====<br />
<br />
Now that we have the essentials installed we can configure DRBD. Again, i will not go in to the details of all the options here so check out the default config and http://www.drbd.org/ to find a match for your set up.<br />
<br />
<code><br />
mv /etc/drbd.conf /etc/drbd.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/drbd.conf<br />
</code><br />
<br />
<pre><br />
global {<br />
usage-count no;<br />
}<br />
<br />
common {<br />
syncer { rate 100M; } <br />
}<br />
<br />
resource r0 {<br />
protocol C;<br />
handlers {<br />
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt f";<br />
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt f";<br />
local-io-error "echo o > /proc/sysrq-trigger ; halt f";<br />
outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";<br />
}<br />
<br />
startup {<br />
degr-wfc-timeout 120; # 2 minutes.<br />
}<br />
<br />
disk {<br />
on-io-error detach;<br />
}<br />
<br />
net { <br />
after-sb-0pri disconnect;<br />
after-sb-1pri disconnect;<br />
after-sb-2pri disconnect;<br />
rr-conflict disconnect;<br />
}<br />
<br />
syncer {<br />
rate 100M;<br />
al-extents 257;<br />
}<br />
<br />
<br />
on node1 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.100:7788;<br />
meta-disk internal;<br />
}<br />
<br />
on node2 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.200:7788;<br />
meta-disk internal;<br />
}<br />
}<br />
</pre><br />
<br />
Before we start DRBD we change some permissions, otherwise it will ask for it.<br />
So on both nodes<br />
<pre><br />
chgrp haclient /sbin/drbdsetup<br />
chmod o-x /sbin/drbdsetup<br />
chmod u+s /sbin/drbdsetup<br />
chgrp haclient /sbin/drbdmeta<br />
chmod o-x /sbin/drbdmeta<br />
chmod u+s /sbin/drbdmeta<br />
</pre><br />
<br />
==== Create the DRBD devices ====<br />
<br />
On both nodes<br />
<br />
node1<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node1<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
<blockquote><br />
'''The following should be done on the node that will be the primary!'''<br />
</blockquote><br />
<br />
On node1<br />
<br />
<code><br />
drbdadm -- --overwrite-data-of-peer primary r0<br />
</code><br />
<br />
<br />
watch cat /proc/drbd should show you something like this<br />
<pre><br />
version: 8.0.13 (api:86/proto:86)<br />
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07<br />
0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---<br />
ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0<br />
[===>................] sync'ed: 22.1% (208411/267331)M<br />
finish: 4:04:44 speed: 14,472 (12,756) K/sec<br />
resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172<br />
act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102<br />
<br />
<br />
</pre><br />
<br />
== Configure LVM2 ==<br />
<br />
<br />
<note important><br />
LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same this will lead to errors where LVM reads and writes the same data to both devices.<br />
So to limit it to scan /dev/drbd devices only we do the following on both nodes.<br />
<br />
</note><br />
<br />
<code><br />
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/lvm/lvm.conf<br />
</code><br />
<br />
<pre><br />
#filter = [ "a/.*/" ]<br />
filter = [ "a|/dev/drbd|", "r|.*|" ]<br />
</pre><br />
<br />
to re-scan with the new settings on both nodes<br />
<code><br />
<br />
vgscan<br />
</code><br />
<br />
=== Create the Physical Volume ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
On node1<br />
<br />
<code><br />
pvcreate /dev/drbd0<br />
</code><br />
<br />
=== Create the Volume Group ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
One node1<br />
<br />
<code><br />
vgcreate drbdvg0 /dev/drbd0<br />
</code><br />
<br />
=== Create the Logical Volume ===<br />
<br />
Yes, again only on the node that is primary!!!<br />
<br />
For this example about 50GB, this leaves plenty of space to expand the volumes or to add extra volumes later on.<br />
<br />
On node1<br />
<br />
<code><br />
lvcreate -L50000 -n web drbdvg0<br />
</code><br />
<br />
Then we put a file system on the logical volumes<br />
<br />
<code><br />
mkfs.ext3 /dev/drbdvg0/web<br />
</code><br />
<br />
create the directory where we want to mount the Vservers<br />
<br />
<code><br />
mkdir -p /VSERVERS/web<br />
</code><br />
<br />
and mount the volume group to the mount point<br />
<br />
<code><br />
mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/<br />
</code><br />
<br />
== Get informed ==<br />
<br />
Offcourse we want to be informed later on by heartbeat in case a node goes down, so we install postfix to send the mail.<br />
<br />
This should be done on both nodes<br />
<br />
<code><br />
apt-get install postfix mailx<br />
</code><br />
<br />
and go for the defaults, "internet site" and node1.example.com"<br />
<br />
We don't want postfix to listen to all interfaces,<br />
<br />
<code><br />
nano /etc/postfix/main.cf<br />
</code><br />
<br />
and change the line at the bottom to read like this, otherwise we get into trouble with postfix blocking port 25 for all the Vservers later.<br />
<br />
<code><br />
inet_interfaces = loopback-only<br />
</code><br />
<br />
<br />
== Heartbeat ==<br />
<br />
=== Get aquinted ===<br />
Add the other node in the hosts file of both nodes, this way Heartbeat knows who is who.<br />
<br />
so for node1 do<br />
<br />
<code><br />
nano /etc/hosts<br />
</code><br />
<br />
and add node2<br />
<br />
<pre><br />
192.168.1.200 node2<br />
</pre><br />
<br />
=== Get intimate ===<br />
<br />
Set up some keys on both boxes so we can ssh login without a password (defaults, no passphrase)<br />
<br />
<code><br />
ssh-keygen<br />
</code><br />
<br />
then copy over the public keys<br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys<br />
</code><br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys<br />
</code><br />
<br />
=== Configure Heartbeat ===<br />
<br />
Without the ha.cf file Heartbeat wil not start, this should only be done on 1 of the nodes.<br />
<br />
<code><br />
nano /etc/ha.d/ha.cf<br />
</code><br />
<br />
<pre><br />
autojoin none <br />
#crm on #enables heartbeat2 cluster manager - we want that!<br />
use_logd on<br />
logfacility syslog<br />
keepalive 1<br />
deadtime 10<br />
warntime 10<br />
udpport 694<br />
auto_failback on #resources move back once node is back online<br />
mcast bond0 239.0.0.43 694 1 0 <br />
bcast eth2 <br />
node node1 #hostnames of the nodes<br />
node node2<br />
</pre><br />
<br />
This one also on 1 of the nodes<br />
<br />
<code><br />
nano /etc/ha.d/authkeys<br />
</code><br />
<br />
<pre><br />
auth 3<br />
3 md5 failover ## this is just a string, enter what you want ! auth 3 md5 uses md5 encryption<br />
</pre><br />
<br />
<code><br />
chmod 600 /etc/ha.d/authkeys<br />
</code><br />
<br />
<note><br />
We will be using heartbeat R1-style configuration here simply because i don't understand the R2 xml based syntax.<br />
</note><br />
We only did the above 2 config files on 1 node but we need it on both, heartbeat can do that for us.<br />
<br />
<code><br />
/usr/lib/heartbeat/ha_propagate<br />
</code><br />
<br />
=== Heatbeat behavior ===<br />
<br />
After above 2 files are set, the haresources is where we want to be to control Heartbeats behaviour.<br />
This is an example for 1 Vserver that we will set up later on.<br />
<br />
<code><br />
nano /etc/ha.d/haresources<br />
</code><br />
<br />
<pre><br />
node1 drbddisk::r1 LVM::drbdvg1 Filesystem::/dev/drbdvg1/web::/VSERVERS/web::ext3 vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
The above will default the Vserver named web to node1 and specify the mount points, the vserver-web script will start and stop heartbeat, the sendarp is for notifying the network that this IP can be found somewhere else then before. (have added the SendArp an extra time below for better result)<br />
<br />
Another example for more than 1 Vserver,<br />
We only specify 1 default node here for all Vservers and the same DRBD disk and Volume Group, the individual start scripts and mount points are specified separately, mind the \, its all in 1 line. the last mail command is only needed once.<br />
<br />
<pre><br />
node1 \<br />
drbddisk::r0 \<br />
LVM::drbdvg0 \<br />
Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \<br />
Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \<br />
Vserver-web \<br />
Vserver-ns1 \<br />
SendArp::123.123.123.125/bond0 \<br />
SendArp::123.123.123.126/bond0 \<br />
MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
=== start/stop script ===<br />
<br />
The vserver-web script as specified to be called by heartbeat above is basically a demolished version of the original R2 style agent by Martin Fick from here http://www.theficks.name/bin/lib/ocf/VServer.<br />
<br />
What i did is remove the sensible top part and replace "$OCF_RESKEY_vserver" with the specific Vserver name, also added an extra<br />
<br />
<pre><br />
/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start<br />
</pre><br />
<br />
to the start part because i had various results when done by Heartbeat in the first tests i did, not sure if it is still needed but i guess it doesn't hurt.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
<pre><br />
#!/bin/sh<br />
#<br />
# License: GNU General Public License (GPL) <br />
# Author: Martin Fick <mogulguy@yahoo.com><br />
# Date: 04/19/07<br />
# Version: 1.1<br />
#<br />
# This script manages a VServer instance<br />
#<br />
# It can start or stop a VServer<br />
#<br />
# usage: $0 {start|stop|status|monitor|meta-data}<br />
#<br />
#<br />
# OCF parameters are as below<br />
# OCF_RESKEY_vserver<br />
#<br />
#######################################################################<br />
# Initialization:<br />
#<br />
#. /usr/lib/heartbeat/ocf-shellfuncs<br />
#<br />
#USAGE="usage: $0 {start|stop|status|monitor|meta-data}";<br />
#<br />
#######################################################################<br />
#<br />
#<br />
#meta_data() {<br />
# cat <<END<br />
#<?xml version="1.0"?><br />
#<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd"><br />
#<resource-agent name="VServer"><br />
# <version>1.0</version><br />
# <longdesc lang="en"><br />
#This script manages a VServer instance.<br />
#It can start or stop a VServer.<br />
# </longdesc><br />
# <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc><br />
#<br />
# <parameters><br />
#<br />
# <parameter name="vserver" unique="1" required="1"><br />
# <longdesc lang="en"><br />
#The vserver name is the name as found under /etc/vservers<br />
# </longdesc><br />
# <shortdesc lang="en">VServer Name</shortdesc><br />
# <content type="string" default="" /><br />
# </parameter><br />
#<br />
# </parameters><br />
#<br />
# <actions><br />
# <action name="start" timeout="2m" /><br />
# <action name="stop" timeout="1m" /><br />
# <action name="monitor" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="status" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="meta-data" timeout="1m" /><br />
# </actions><br />
#</resource-agent><br />
#END<br />
#}<br />
<br />
vserver_reload() {<br />
vserver_stop || return<br />
vserver_start<br />
}<br />
<br />
vserver_stop() {<br />
#<br />
# Is the VServer already stopped?<br />
#<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "stop"<br />
<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
return 1<br />
}<br />
<br />
vserver_start() {<br />
vserver_status<br />
[ $? -eq 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "start"<br />
vserver_status<br />
/etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start<br />
}<br />
<br />
vserver_status() {<br />
/usr/sbin/vserver "web" "status"<br />
rc=$?<br />
if [ $rc -eq 0 ]; then<br />
echo "running"<br />
return 0<br />
elif [ $rc -eq 3 ]; then<br />
echo "stopped"<br />
else<br />
echo "unknown"<br />
fi<br />
return 7<br />
}<br />
<br />
vserver_monitor() {<br />
vserver_status<br />
}<br />
<br />
<br />
vserver_usage() {<br />
<br />
echo $USAGE >&2<br />
}<br />
<br />
vserver_info() {<br />
cat - <<!INFO<br />
Abstract=VServer Instance takeover<br />
Argument=VServer Name<br />
Description:<br />
A Vserver is a simulated server which is fairly hardware independent<br />
so it can be easily setup to run on several machines.<br />
Please rerun with the meta-data command for a list of \\<br />
valid arguments and their defaults.<br />
!INFO<br />
}<br />
<br />
#<br />
# Start or Stop the given VServer...<br />
#<br />
<br />
if [ $# -ne 1 ] ; then<br />
vserver_usage<br />
exit 2<br />
fi<br />
<br />
case "$1" in<br />
start|stop|status|monitor|reload|info|usage) vserver_$1 ;;<br />
meta-data) meta_data ;;<br />
validate-all|notify|promote|demote) exit 3 ;;<br />
<br />
*) vserver_usage ; exit 2 ;;<br />
esac<br />
<br />
<br />
</pre><br />
To make this file executable by Heartbeat<br />
<br />
<code><br />
chmod a+x /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
=== Diazepam ===<br />
<br />
Add a modificaton to the drbddisk resource, as pointed out by Christian Balzer on the Vserver mailing list http://list.linux-vserver.org/archive?mss:835:200803:cgehldioambmojimggpf, it seems to help Heartbeat to be a little more patient if it wants to close down the r0 while not all Vservers are stopped yet, not unimportant.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/drbddisk<br />
</code><br />
<br />
<pre><br />
stop)<br />
# Kill off any vserver mounts that might hog this<br />
VNSPACE=/usr/sbin/vnamespace<br />
<br />
for CTX in `/usr/sbin/vserver-stat | tail -n +2 | awk '{print $1}'`<br />
do<br />
MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"<br />
echo Unmounting mount point $MPOINT from within context $CTX<br />
### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!<br />
$VNSPACE -e $CTX /bin/umount $MPOINT || continue;<br />
done<br />
# exec, so the exit code of drbdadm propagates<br />
exec $DRBDADM secondary $RES<br />
<br />
</pre><br />
<br />
== Create a Vserver ==<br />
<br />
Note that we already have mounted the LVM partition on /VSERVERS/web in an earlier step, we're going to place both the /var and /etc directories on the mountpoint and symlink to it, this way the complete Vserver and its config are available on the other node when mounted.<br />
<br />
<code><br />
mkdir -p /VSERVERS/web/etc<br />
</code><br />
<br />
<code><br />
mkdir -p /VSERVERS/web/barrier/var<br />
</code><br />
<br />
When making the Vserver it will be in the default location /var/lib/vservers/web and its config in /etc/vservers/web <br />
<br />
<pre><br />
newvserver --hostname web --domain example.com --ip 123.123.123.125/24 --dist etch --mirror http://123.123.123.81:3142/debian.apt-get.eu/debian --interface bond0<br />
</pre><br />
<br />
<pre><br />
enter the root password<br />
</pre><br />
<br />
<pre><br />
Create a normal user account now? <br />
<No> <br />
</pre><br />
<br />
<pre><br />
Choose software to install: <br />
<Ok> <br />
</pre><br />
<br />
On node1 we move the Vserver directories to the LVM volume on the DRBD disks and make symlinks from the normal locations.<br />
<br />
On node1<br />
<br />
<code><br />
mv /etc/vservers/web/* /VSERVERS/web/etc/<br />
</code><br />
<br />
<code><br />
rmdir /etc/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<code><br />
mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var<br />
</code><br />
<br />
<code><br />
rmdir /var/lib/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
We need to set the same symlinks on node2, but the we need the Vserver directories available there first.<br />
The mounting should be handled by heartbeat by now so we make our resources move to the other machine.<br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat stop<br />
</code><br />
<br />
On node2<br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat start<br />
</code><br />
<br />
<code><br />
Vserver web start<br />
</code><br />
<br />
and enjoy!</div>212.123.252.242http://linux-vserver.org/Getting_high_with_lennyGetting high with lenny2008-10-02T06:57:14Z<p>212.123.252.242: /* Getting High with Lenny */</p>
<hr />
<div>= Getting High with Lenny =<br />
<br />
The aim here is to set up some high available services on Debian Lenny (at this moment October 1st still due to be released)<br />
<br />
<br />
There is a lot of buzz going on for a while now about virtualisation and High Availability and while Vserver is very well capable for this job the number of documented examples compared to some other virtualisation techniques are a little lacking so i thought i'd do my share. <br />
<br />
I prefer to use Vserver for the "virtualisation" because of its configurability, shared memory and cpu resources and basically the raw speed.<br />
DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly.<br />
In my experience it takes a few seconds to have several Vservers fail over to another machine with this setup.<br />
<br />
The main attempt here is to give a single working example without going to much in to the details of every option, the scenario is relatively simple but different variations can be made.<br />
<br />
For this set up we will have <br />
<br />
** 2 machines<br />
** both machines have 1 single large DRBD partition<br />
** primary/seconday there is always 1 machine active and 1 on standby<br />
** 1 LVM partition per Vserver on top of the DRBD partition, for quota support from within the guest and LVM snapshots<br />
** the Vservers /etc/vserver and /var/lib/vservers directories will be placed on the DRBD partition.<br />
<br />
In case the main machine that runs the Vservers goes down, the synchronized second machine should take over and automatically start the Vservers.<br />
<br />
Basically this is an on-line RAID solution that can keep your services running in case of hardware failure, it is NOT a back-up replacement.<br />
<br />
The cost for this setup is that you always have 1 idle machine standby, this cost can be justified by the fact that Linux-Vserver enables you to make full use of the 1 machine that is running, you also could consider to run this on a little less expensive (reliable) hardware.<br />
<br />
Also note that i will be using R1 style configuration for heartbeat, R1 style can be considered to be depreciated when using Heartbeat2 but i could not get my head around the R2 xml configuration, so if you want R2 you might want to have a look here.<br />
[[Fail-over]])<br />
<br />
The partitioning looks as follows<br />
<br />
<code> <br />
c0d0p1 Boot Primary Linux ext3 10001.95<br />
c0d0p5 Logical Linux swap / Solaris 1003.49<br />
c0d0p6 Logical Linux 280325.77<br />
<br />
</code><br />
<br />
<br />
<blockquote><br />
'''machine1''' will use the following names<br />
hostname = node1<br />
IP number = 192.168.1.100 <br />
is primary for r0 on disk c0d0p6<br />
physical volume on r0 is /dev/drbd0<br />
volume group on /dev/drbd0 is called drbdvg0<br />
</blockquote><br />
<br />
<blockquote><br />
'''machine2''' will use the following names<br />
hostname = node2<br />
IP number = 192.168.1.200 <br />
is secondary for r0 on disk c0d0p6<br />
<br />
The Volume Group and the Physical Volume will be identical on node2 if this one becomes the primary for r0.<br />
</blockquote><br />
<br />
== Loadbalance-Failover the network cards ==<br />
<br />
Maybe not very specific to Vserver, Heartbeat or DRBD, but loadbalancing your network cards for failover is always usefull. Some more indepth details by Carla Schroder can be found here. <br />
[[http://www.enterprisenetworkingplanet.com/nethub/article.php/3696561]]<br />
I did not do it for the DRBD crossover cable between the nodes while this is actually highly recomended.<br />
We need both mii-tool and ethtool.<br />
<br />
<code><br />
apt-get install ethtool ifenslave-2.6<br />
</code><br />
<br />
<code><br />
nano /etc/modprobe.d/arch/i386<br />
</code><br />
<br />
To load the modules with the correct options at boot time.<br />
<br />
<pre><br />
alias bond0 bonding<br />
options bond0 mode=balance-alb miimon=100 <br />
</pre><br />
<br />
And set the interfaces eth0 and eth1 as slaves to bond0, also eth2 is set here for the crossover cable for the DRBD connection to the fail over machine.<br />
<br />
<code><br />
nano /etc/network/interfaces<br />
</code><br />
<pre><br />
# This file describes the network interfaces available on your system<br />
# and how to activate them. For more information, see interfaces(5).<br />
<br />
# The loopback network interface<br />
auto lo<br />
iface lo inet loopback<br />
<br />
# The primary network interface<br />
auto bond0<br />
iface bond0 inet static<br />
address 123.123.123.100<br />
netmask 255.255.255.0<br />
network 123.123.123.0<br />
broadcast 123.123.123.255<br />
gateway 123.123.123.1<br />
# dns-* options are implemented by the resolvconf package, if installed<br />
dns-nameservers 123.123.123.45<br />
dns-search example.com<br />
up /sbin/ifenslave bond0 eth0 eth1<br />
down ifenslave -d bond0 eth0 eth1<br />
<br />
<br />
auto eth2<br />
iface eth2 inet static<br />
address 192.168.1.100<br />
netmask 255.255.255.0<br />
</pre><br />
<br />
This way the system needs to be rebooted before the changes take effect, otherwise you should load the drivers and ifdown eth0 and eth1 first before ifup bond0 but i'm planning to install a new kernel anyway in the next step.<br />
<br />
== Install the Vserver packages ==<br />
<br />
<code><br />
apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools<br />
</code><br />
<br />
As usual a reboot is needed to boot this kernel.<br />
<br />
<blockquote><br />
With Etch i found that the Vserver kernel often ended up as second in the grub list, not so in Lenny but to be safe check the kernel stanza in /boot/grub/menu.lst especially when doing this from a remote location.<br />
</blockquote><br />
<br />
== Install DRBD8, LVM2 and Heartbeat ==<br />
<br />
<code><br />
apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2<br />
</code><br />
<br />
<blockquote><br />
not sure about this, but DRBD always needed to be compiled against the running kernel, is this still the case with the kernel specific modules? I did not check but it would be good to know in case of a kernel upgrade.<br />
</blockquote><br />
<br />
== Build DRBD8 ==<br />
<br />
Although packages are available in the repositorie for DRBD8, the purpose of these packages is that you can built it easily from source and patch the running kernel.<br />
<br />
To do this we just issue this command<br />
<br />
<code><br />
m-a a-i drbd8<br />
</code><br />
<br />
And to load it into the kernel..<br />
<br />
<code><br />
depmod -ae<br />
</code><br />
<br />
<code><br />
modprobe drbd<br />
</code><br />
<br />
==== Configure DRBD8 ====<br />
<br />
Now that we have the essentials installed we can configure DRBD. Again, i will not go in to the details of all the options here so check out the default config and http://www.drbd.org/ to find a match for your set up.<br />
<br />
<code><br />
mv /etc/drbd.conf /etc/drbd.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/drbd.conf<br />
</code><br />
<br />
<pre><br />
global {<br />
usage-count no;<br />
}<br />
<br />
common {<br />
syncer { rate 100M; } <br />
}<br />
<br />
resource r0 {<br />
protocol C;<br />
handlers {<br />
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt f";<br />
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt f";<br />
local-io-error "echo o > /proc/sysrq-trigger ; halt f";<br />
outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";<br />
}<br />
<br />
startup {<br />
degr-wfc-timeout 120; # 2 minutes.<br />
}<br />
<br />
disk {<br />
on-io-error detach;<br />
}<br />
<br />
net { <br />
after-sb-0pri disconnect;<br />
after-sb-1pri disconnect;<br />
after-sb-2pri disconnect;<br />
rr-conflict disconnect;<br />
}<br />
<br />
syncer {<br />
rate 100M;<br />
al-extents 257;<br />
}<br />
<br />
<br />
on node1 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.100:7788;<br />
meta-disk internal;<br />
}<br />
<br />
on node2 {<br />
device /dev/drbd0;<br />
disk /dev/cciss/c0d0p6;<br />
address 192.168.1.200:7788;<br />
meta-disk internal;<br />
}<br />
}<br />
</pre><br />
<br />
Before we start DRBD we change some permissions, otherwise it will ask for it.<br />
So on both nodes<br />
<pre><br />
chgrp haclient /sbin/drbdsetup<br />
chmod o-x /sbin/drbdsetup<br />
chmod u+s /sbin/drbdsetup<br />
chgrp haclient /sbin/drbdmeta<br />
chmod o-x /sbin/drbdmeta<br />
chmod u+s /sbin/drbdmeta<br />
</pre><br />
<br />
==== Create the DRBD devices ====<br />
<br />
On both nodes<br />
<br />
node1<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm create-md r0<br />
</code><br />
<br />
node1<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
node2<br />
<br />
<code><br />
drbdadm up r0<br />
</code><br />
<br />
<blockquote><br />
'''The following should be done on the node that will be the primary!'''<br />
</blockquote><br />
<br />
On node1<br />
<br />
<code><br />
drbdadm -- --overwrite-data-of-peer primary r0<br />
</code><br />
<br />
<br />
watch cat /proc/drbd should show you something like this<br />
<pre><br />
version: 8.0.13 (api:86/proto:86)<br />
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07<br />
0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---<br />
ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0<br />
[===>................] sync'ed: 22.1% (208411/267331)M<br />
finish: 4:04:44 speed: 14,472 (12,756) K/sec<br />
resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172<br />
act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102<br />
<br />
<br />
</pre><br />
<br />
== Configure LVM2 ==<br />
<br />
<br />
<note important><br />
LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same this will lead to errors where LVM reads and writes the same data to both devices.<br />
So to limit it to scan /dev/drbd devices only we do the following on both nodes.<br />
<br />
</note><br />
<br />
<code><br />
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original<br />
</code><br />
<br />
<code><br />
nano /etc/lvm/lvm.conf<br />
</code><br />
<br />
<pre><br />
#filter = [ "a/.*/" ]<br />
filter = [ "a|/dev/drbd|", "r|.*|" ]<br />
</pre><br />
<br />
to re-scan with the new settings on both nodes<br />
<code><br />
<br />
vgscan<br />
</code><br />
<br />
=== Create the Physical Volume ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
On node1<br />
<br />
<code><br />
pvcreate /dev/drbd0<br />
</code><br />
<br />
=== Create the Volume Group ===<br />
<br />
The following only needs to be done on the node that is the primary!!<br />
<br />
One node1<br />
<br />
<code><br />
vgcreate drbdvg0 /dev/drbd0<br />
</code><br />
<br />
=== Create the Logical Volume ===<br />
<br />
Yes, again only on the node that is primary!!!<br />
<br />
For this example about 50GB, this leaves plenty of space to expand the volumes or to add extra volumes later on.<br />
<br />
On node1<br />
<br />
<code><br />
lvcreate -L50000 -n web drbdvg0<br />
</code><br />
<br />
Then we put a file system on the logical volumes<br />
<br />
<code><br />
mkfs.ext3 /dev/drbdvg0/web<br />
</code><br />
<br />
create the directory where we want to mount the Vservers<br />
<br />
<code><br />
mkdir -p /VSERVERS/web<br />
</code><br />
<br />
and mount the volume group to the mount point<br />
<br />
<code><br />
mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/<br />
</code><br />
<br />
== Get informed ==<br />
<br />
Offcourse we want to be informed later on by heartbeat in case a node goes down, so we install postfix to send the mail.<br />
<br />
This should be done on both nodes<br />
<br />
<code><br />
apt-get install postfix mailx<br />
</code><br />
<br />
and go for the defaults, "internet site" and node1.example.com"<br />
<br />
We don't want postfix to listen to all interfaces,<br />
<br />
<code><br />
nano /etc/postfix/main.cf<br />
</code><br />
<br />
and change the line at the bottom to read like this, otherwise we get into trouble with postfix blocking port 25 for all the Vservers later.<br />
<br />
<code><br />
inet_interfaces = loopback-only<br />
</code><br />
<br />
<br />
== Heartbeat ==<br />
<br />
=== Get aquinted ===<br />
Add the other node in the hosts file of both nodes, this way Heartbeat knows who is who.<br />
<br />
so for node1 do<br />
<br />
<code><br />
nano /etc/hosts<br />
</code><br />
<br />
and add node2<br />
<br />
<pre><br />
192.168.1.200 node2<br />
</pre><br />
<br />
=== Get intimate ===<br />
<br />
Set up some keys on both boxes so we can ssh login without a password (defaults, no passphrase)<br />
<br />
<code><br />
ssh-keygen<br />
</code><br />
<br />
then copy over the public keys<br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys<br />
</code><br />
<br />
<code><br />
scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys<br />
</code><br />
<br />
=== Configure Heartbeat ===<br />
<br />
Without the ha.cf file Heartbeat wil not start, this should only be done on 1 of the nodes.<br />
<br />
<code><br />
nano /etc/ha.d/ha.cf<br />
</code><br />
<br />
<pre><br />
autojoin none <br />
#crm on #enables heartbeat2 cluster manager - we want that!<br />
use_logd on<br />
logfacility syslog<br />
keepalive 1<br />
deadtime 10<br />
warntime 10<br />
udpport 694<br />
auto_failback on #resources move back once node is back online<br />
mcast bond0 239.0.0.43 694 1 0 <br />
bcast eth2 <br />
node node1 #hostnames of the nodes<br />
node node2<br />
</pre><br />
<br />
This one also on 1 of the nodes<br />
<br />
<code><br />
nano /etc/ha.d/authkeys<br />
</code><br />
<br />
<pre><br />
auth 3<br />
3 md5 failover ## this is just a string, enter what you want ! auth 3 md5 uses md5 encryption<br />
</pre><br />
<br />
<code><br />
chmod 600 /etc/ha.d/authkeys<br />
</code><br />
<br />
<note><br />
We will be using heartbeat R1-style configuration here simply because i don't understand the R2 xml based syntax.<br />
</note><br />
We only did the above 2 config files on 1 node but we need it on both, heartbeat can do that for us.<br />
<br />
<code><br />
/usr/lib/heartbeat/ha_propagate<br />
</code><br />
<br />
=== Heatbeat behavior ===<br />
<br />
After above 2 files are set, the haresources is where we want to be to control Heartbeats behaviour.<br />
This is an example for 1 Vserver that we will set up later on.<br />
<br />
<code><br />
nano /etc/ha.d/haresources<br />
</code><br />
<br />
<pre><br />
node1 drbddisk::r1 LVM::drbdvg1 Filesystem::/dev/drbdvg1/web::/VSERVERS/web::ext3 vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
The above will default the Vserver named web to node1 and specify the mount points, the vserver-web script will start and stop heartbeat, the sendarp is for notifying the network that this IP can be found somewhere else then before. (have added the SendArp an extra time below for better result)<br />
<br />
Another example for more than 1 Vserver,<br />
We only specify 1 default node here for all Vservers and the same DRBD disk and Volume Group, the individual start scripts and mount points are specified separately, mind the \, its all in 1 line. the last mail command is only needed once.<br />
<br />
<pre><br />
node1 \<br />
drbddisk::r0 \<br />
LVM::drbdvg0 \<br />
Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \<br />
Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \<br />
Vserver-web \<br />
Vserver-ns1 \<br />
SendArp::123.123.123.125/bond0 \<br />
SendArp::123.123.123.126/bond0 \<br />
MailTo::randall@songshu.org::DRBDFailure<br />
</pre><br />
<br />
=== start/stop script ===<br />
<br />
The vserver-web script as specified to be called by heartbeat above is basically a demolished version of the original R2 style agent by Martin Fick from here http://www.theficks.name/bin/lib/ocf/VServer.<br />
<br />
What i did is remove the sensible top part and replace "$OCF_RESKEY_vserver" with the specific Vserver name, also added an extra<br />
<br />
<pre><br />
/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start<br />
</pre><br />
<br />
to the start part because i had various results when done by Heartbeat in the first tests i did, not sure if it is still needed but i guess it doesn't hurt.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
<br />
<pre><br />
#!/bin/sh<br />
#<br />
# License: GNU General Public License (GPL) <br />
# Author: Martin Fick <mogulguy@yahoo.com><br />
# Date: 04/19/07<br />
# Version: 1.1<br />
#<br />
# This script manages a VServer instance<br />
#<br />
# It can start or stop a VServer<br />
#<br />
# usage: $0 {start|stop|status|monitor|meta-data}<br />
#<br />
#<br />
# OCF parameters are as below<br />
# OCF_RESKEY_vserver<br />
#<br />
#######################################################################<br />
# Initialization:<br />
#<br />
#. /usr/lib/heartbeat/ocf-shellfuncs<br />
#<br />
#USAGE="usage: $0 {start|stop|status|monitor|meta-data}";<br />
#<br />
#######################################################################<br />
#<br />
#<br />
#meta_data() {<br />
# cat <<END<br />
#<?xml version="1.0"?><br />
#<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd"><br />
#<resource-agent name="VServer"><br />
# <version>1.0</version><br />
# <longdesc lang="en"><br />
#This script manages a VServer instance.<br />
#It can start or stop a VServer.<br />
# </longdesc><br />
# <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc><br />
#<br />
# <parameters><br />
#<br />
# <parameter name="vserver" unique="1" required="1"><br />
# <longdesc lang="en"><br />
#The vserver name is the name as found under /etc/vservers<br />
# </longdesc><br />
# <shortdesc lang="en">VServer Name</shortdesc><br />
# <content type="string" default="" /><br />
# </parameter><br />
#<br />
# </parameters><br />
#<br />
# <actions><br />
# <action name="start" timeout="2m" /><br />
# <action name="stop" timeout="1m" /><br />
# <action name="monitor" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="status" depth="0" timeout="1m" interval="5s" start-delay="2m" /><br />
# <action name="meta-data" timeout="1m" /><br />
# </actions><br />
#</resource-agent><br />
#END<br />
#}<br />
<br />
vserver_reload() {<br />
vserver_stop || return<br />
vserver_start<br />
}<br />
<br />
vserver_stop() {<br />
#<br />
# Is the VServer already stopped?<br />
#<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "stop"<br />
<br />
vserver_status<br />
[ $? -ne 0 ] && return 0<br />
<br />
return 1<br />
}<br />
<br />
vserver_start() {<br />
vserver_status<br />
[ $? -eq 0 ] && return 0<br />
<br />
/usr/sbin/vserver "web" "start"<br />
vserver_status<br />
/etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start<br />
}<br />
<br />
vserver_status() {<br />
/usr/sbin/vserver "web" "status"<br />
rc=$?<br />
if [ $rc -eq 0 ]; then<br />
echo "running"<br />
return 0<br />
elif [ $rc -eq 3 ]; then<br />
echo "stopped"<br />
else<br />
echo "unknown"<br />
fi<br />
return 7<br />
}<br />
<br />
vserver_monitor() {<br />
vserver_status<br />
}<br />
<br />
<br />
vserver_usage() {<br />
<br />
echo "usage: $0 {start|stop|status|monitor|reload|info|usage}" >&2 # $USAGE was commented out above, so echo the string directly<br />
<br />
vserver_info() {<br />
cat - <<!INFO<br />
Abstract=VServer Instance takeover<br />
Argument=VServer Name<br />
Description:<br />
A Vserver is a simulated server which is fairly hardware independent<br />
so it can be easily setup to run on several machines.<br />
Please rerun with the meta-data command for a list of \\<br />
valid arguments and their defaults.<br />
!INFO<br />
}<br />
<br />
#<br />
# Start or Stop the given VServer...<br />
#<br />
<br />
if [ $# -ne 1 ] ; then<br />
vserver_usage<br />
exit 2<br />
fi<br />
<br />
case "$1" in<br />
start|stop|status|monitor|reload|info|usage) vserver_$1 ;;<br />
meta-data|validate-all|notify|promote|demote) exit 3 ;; # meta_data() is commented out above<br />
<br />
*) vserver_usage ; exit 2 ;;<br />
esac<br />
<br />
<br />
</pre><br />
To make this file executable by Heartbeat<br />
<br />
<code><br />
chmod a+x /etc/ha.d/resource.d/Vserver-web<br />
</code><br />
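<br />
Before letting Heartbeat drive it, the agent is worth testing by hand; with the script above the expected behaviour is:<br />
<br />
<pre><br />
/etc/ha.d/resource.d/Vserver-web start<br />
/etc/ha.d/resource.d/Vserver-web status    # prints "running", exit code 0<br />
/etc/ha.d/resource.d/Vserver-web stop<br />
/etc/ha.d/resource.d/Vserver-web status    # prints "stopped" or "unknown", exit code 7<br />
</pre><br />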
<br />
=== Diazepam ===<br />
<br />
Add a modification to the stop) case of the drbddisk resource, as pointed out by Christian Balzer on the Vserver mailing list http://list.linux-vserver.org/archive?mss:835:200803:cgehldioambmojimggpf. It makes Heartbeat a little more patient when it wants to demote r0 while guest contexts still hold mounts on it, which is not unimportant.<br />
<br />
<code><br />
nano /etc/ha.d/resource.d/drbddisk<br />
</code><br />
<br />
<pre><br />
stop)<br />
# Kill off any vserver mounts that might hog this<br />
# (the grep below assumes the mounted device path contains the resource name $RES)<br />
VNSPACE=/usr/sbin/vnamespace<br />
<br />
for CTX in `/usr/sbin/vserver-stat | tail -n +2 | awk '{print $1}'`<br />
do<br />
MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"<br />
echo Unmounting mount point $MPOINT from within context $CTX<br />
### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!<br />
$VNSPACE -e $CTX /bin/umount $MPOINT || continue;<br />
done<br />
# exec, so the exit code of drbdadm propagates<br />
exec $DRBDADM secondary $RES<br />
<br />
</pre><br />
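<br />
The same commands the modification uses can be run by hand to see what it would unmount, which is handy when debugging a refused failover (replace CTX with a context id from vserver-stat):<br />
<br />
<pre><br />
/usr/sbin/vserver-stat                          # list the running contexts<br />
/usr/sbin/vnamespace -e CTX cat /proc/mounts    # show the mounts inside context CTX<br />
</pre><br />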
<br />
== Create a Vserver ==<br />
<br />
Note that we already mounted the LVM volume on /VSERVERS/web in an earlier step. We are going to place both the /var/lib/vservers and /etc/vservers directories of the guest on that mountpoint and symlink to them; this way the complete Vserver and its config are available on the other node once it mounts the volume.<br />
<br />
<code><br />
mkdir -p /VSERVERS/web/etc<br />
</code><br />
<br />
<code><br />
mkdir -p /VSERVERS/web/barrier/var<br />
</code><br />
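<br />
The extra barrier directory is there so the chroot barrier can sit on the parent of the guest root. Normally the util-vserver tools take care of this, but if the guest later refuses to start with a barrier complaint, setting it manually might help (a hedged sketch):<br />
<br />
<pre><br />
setattr --barrier /VSERVERS/web/barrier<br />
showattr /VSERVERS/web/barrier    # the barrier flag should now be visible<br />
</pre><br />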
<br />
When creating the Vserver it will land in the default location /var/lib/vservers/web, with its config in /etc/vservers/web.<br />
<br />
<pre><br />
newvserver --hostname web --domain example.com --ip 123.123.123.125/24 --dist etch --mirror http://123.123.123.81:3142/debian.apt-get.eu/debian --interface bond0<br />
</pre><br />
<br />
<pre><br />
enter the root password<br />
</pre><br />
<br />
<pre><br />
Create a normal user account now? <br />
<No> <br />
</pre><br />
<br />
<pre><br />
Choose software to install: <br />
<Ok> <br />
</pre><br />
<br />
On node1 we move the Vserver directories to the LVM volume on the DRBD disks and make symlinks from the normal locations.<br />
<br />
On node1<br />
<br />
<code><br />
mv /etc/vservers/web/* /VSERVERS/web/etc/<br />
</code><br />
<br />
<code><br />
rmdir /etc/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<code><br />
mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var<br />
</code><br />
<br />
<code><br />
rmdir /var/lib/vservers/web/<br />
</code><br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
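<br />
A quick check that the relocation worked; both locations should now be symlinks into the DRBD-backed mount:<br />
<br />
<pre><br />
ls -ld /etc/vservers/web /var/lib/vservers/web   # both should point into /VSERVERS/web<br />
vserver web status                               # the guest should still be found<br />
</pre><br />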
<br />
We need to set the same symlinks on node2, but we need the Vserver directories available there first.<br />
The mounting is handled by Heartbeat by now, so we make our resources move over to the other machine.<br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat stop<br />
</code><br />
<br />
On node2<br />
<br />
<code><br />
ln -s /VSERVERS/web/etc /etc/vservers/web<br />
</code><br />
<br />
<br />
<code><br />
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web<br />
</code><br />
<br />
On node1<br />
<br />
<code><br />
/etc/init.d/heartbeat start<br />
</code><br />
<br />
<code><br />
vserver web start<br />
</code><br />
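<br />
To confirm everything came back home after the failback, something like this should do on node1:<br />
<br />
<pre><br />
vserver-stat      # the web guest should be listed with its context id<br />
cat /proc/drbd    # node1 should show st:Primary/Secondary again<br />
</pre><br />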
<br />
and enjoy!</div>212.123.252.242