== Getting High with Lenny ==

The aim here is to set up some highly available services on Debian Lenny (at the time of writing, October 1st, it is still due to be released).


There has been a lot of buzz about virtualisation and High Availability for a while now, and while Vserver is very capable of this job, the number of documented examples is a little lacking compared to some other virtualisation techniques, so I thought I'd do my share.

I prefer to use Vserver for the "virtualisation" because of its configurability, shared memory and CPU resources, and basically the raw speed. DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly. In my experience it takes only a few seconds to have several Vservers fail over to another machine with this setup.

The main aim here is to give a single working example without going too much into the details of every option; the scenario is relatively simple, but different variations can of course be made.

For this setup we will have:

  • 2 machines
  • both machines have 1 single large DRBD partition
  • primary/secondary: there is always 1 machine active and 1 on standby
  • 1 LVM partition per Vserver on top of the DRBD partition, for quota support from within the guest and for LVM snapshots
  • the Vservers' /etc/vservers and /var/lib/vservers directories will be placed on the DRBD partition.

In case the main machine that runs the Vservers goes down, the synchronized second machine should take over and automatically start the Vservers.

Basically this is an on-line RAID solution that can keep your services running in case of hardware failure, it is NOT a back-up replacement.

The cost of this setup is that you always have 1 idle machine on standby. This cost can be justified by the fact that Linux-Vserver enables you to make full use of the 1 machine that is running; you could also consider running this on slightly less expensive (reliable) hardware.

Also note that I will be using the R1 style configuration for Heartbeat. R1 style can be considered deprecated when using Heartbeat2, but I could not get my head around the R2 xml configuration, so if you want R2 you might want to have a look at [[Fail-over]].

The partitioning looks as follows

     c0d0p1             Boot              Primary         Linux ext3                                        10001.95
     c0d0p5                               Logical         Linux swap / Solaris                               1003.49
     c0d0p6                               Logical         Linux                                            280325.77


machine1 will use the following names.
  • hostname = node1
  • IP number = 192.168.1.100
  • is primary for r0 on disk c0d0p6
  • physical volume on r0 is /dev/drbd0
  • volume group on /dev/drbd0 is called drbdvg0
machine2 will use the following names.
  • hostname = node2
  • IP number = 192.168.1.200
  • is secondary for r0 on disk c0d0p6
The Volume Group and the Physical Volume will be identical on node2 when it becomes the primary for r0.

== Loadbalance-Failover the network cards ==

This is maybe not very specific to Vserver, Heartbeat or DRBD, but loadbalancing your network cards for failover is always useful. Some more in-depth details by Carla Schroder can be found here [1]. I did not do it for the DRBD crossover cable between the nodes, although this is actually highly recommended. We need both mii-tool and ethtool.

apt-get install ethtool ifenslave-2.6

nano /etc/modprobe.d/arch/i386

To load the modules with the correct options at boot time.

alias bond0 bonding
options bond0 mode=balance-alb miimon=100 

And set the interfaces eth0 and eth1 as slaves to bond0; eth2 is also set here, for the crossover cable of the DRBD connection to the failover machine.

nano /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto bond0
iface bond0 inet static
        address 123.123.123.100
        netmask 255.255.255.0
        network 123.123.123.0
        broadcast 123.123.123.255
        gateway 123.123.123.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 123.123.123.45
        dns-search example.com
        up /sbin/ifenslave bond0 eth0 eth1
        down ifenslave -d bond0 eth0 eth1


auto eth2
iface eth2 inet static
        address 192.168.1.100
        netmask 255.255.255.0

Done this way, the system needs to be rebooted before the changes take effect; otherwise you should load the drivers and ifdown eth0 and eth1 first before ifup bond0, but I'm planning to install a new kernel anyway in the next step.
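
If you would rather not reboot at this point, the manual route described above could look roughly like this (an untested sketch; do it from the console rather than over the network, since connectivity drops in between):

modprobe bonding mode=balance-alb miimon=100
ifdown eth0
ifdown eth1
ifup bond0
ifup eth2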

== Install the Vserver packages ==

apt-get install linux-image-2.6-vserver-686-bigmem util-vserver

As usual a reboot is needed to boot this kernel.

With Etch I found that the Vserver kernel often ended up as second in the grub list; not so in Lenny, but to be safe check the kernel stanza in /boot/grub/menu.lst, especially when doing this from a remote location.
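
A quick way to eyeball which stanza grub will boot (the default value is a 0-based index into the list of title lines):

grep ^default /boot/grub/menu.lst
grep ^title /boot/grub/menu.lst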

== Install DRBD8, LVM2 and Heartbeat ==

apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2

I am not sure about this, but DRBD always needed to be compiled against the running kernel; is this still the case with the kernel-specific modules? I did not check, but it would be good to know in case of a kernel upgrade.
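
One way to check whether an installed drbd module matches the running kernel is to compare the module's vermagic with uname (a simple sanity check, not from the original article):

uname -r
modinfo drbd | grep vermagic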

== Build DRBD8 ==

Although packages for DRBD8 are available in the repository, the purpose of these packages is that you can easily build the module from source for the running kernel.

To do this we just issue this command

m-a a-i drbd8

And to load it into the kernel:

depmod -ae

modprobe drbd
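
To confirm the module actually loaded (optional check):

lsmod | grep drbd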

=== Configure DRBD8 ===

Now that we have the essentials installed we can configure DRBD. Again, I will not go into the details of all the options here, so check out the default config and http://www.drbd.org/ to find a match for your setup.

mv /etc/drbd.conf /etc/drbd.conf.original

nano /etc/drbd.conf

global {
        usage-count no;
}

common {
  syncer { rate 100M; }                                                                                            
}

resource r0 {
  protocol C;
  handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
    outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
  }

  startup {
    degr-wfc-timeout 120;    # 2 minutes.
  }

  disk {
    on-io-error   detach;
  }

  net {                   
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
    
  syncer {
    rate 100M;
    al-extents 257;
  }


        on node1 {
                device     /dev/drbd0;
                disk       /dev/cciss/c0d0p6;
                address    192.168.1.100:7788;
                meta-disk  internal;
        }

        on node2 {
                device     /dev/drbd0;
                disk       /dev/cciss/c0d0p6;
                address    192.168.1.200:7788;
                meta-disk  internal;
        }
}

Before we start DRBD we change some permissions, otherwise it will ask us to do so later. So, on both nodes:

chgrp haclient /sbin/drbdsetup
chmod o-x /sbin/drbdsetup
chmod u+s /sbin/drbdsetup
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta

=== Create the DRBD devices ===

On both nodes

node1

drbdadm create-md r0

node2

drbdadm create-md r0

node1

drbdadm up r0

node2

drbdadm up r0

The following should be done on the node that will be the primary!

On node1

drbdadm -- --overwrite-data-of-peer primary r0


watch cat /proc/drbd should show you something like this:

version: 8.0.13 (api:86/proto:86)
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07
 0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0
	[===>................] sync'ed: 22.1% (208411/267331)M
	finish: 4:04:44 speed: 14,472 (12,756) K/sec
	resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172
	act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102
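
Once the initial sync has finished (the finish estimate above suggests this can take hours), the status line should settle into something like the following before you move on; this is my assumption of the final state, based on the same 8.0 output format:

 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---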


== Configure LVM2 ==

LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same device, this will lead to errors where LVM reads and writes the same data to both. So, to limit it to scanning /dev/drbd devices only, we do the following on both nodes.

cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original

nano /etc/lvm/lvm.conf

    #filter = [ "a/.*/" ]
    filter = [ "a|/dev/drbd|", "r|.*|" ]

To re-scan with the new settings, run this on both nodes:

vgscan

=== Create the Physical Volume ===

The following only needs to be done on the node that is the primary!!

On node1

pvcreate /dev/drbd0

=== Create the Volume Group ===

The following only needs to be done on the node that is the primary!!

On node1

vgcreate drbdvg0 /dev/drbd0
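
To verify the volume group exists and see how much free space it has (optional check):

vgdisplay drbdvg0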

=== Create the Logical Volume ===

Yes, again only on the node that is primary!!!

For this example we use about 50GB; this leaves plenty of space to expand the volumes or to add extra volumes later on.

On node1

lvcreate -L50000 -n web drbdvg0

Then we put a file system on the logical volume

mkfs.ext3 /dev/drbdvg0/web

create the directory where we want to mount the Vserver

mkdir -p /VSERVERS/web

and mount the logical volume on the mount point

mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/
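
Since the volume group still has plenty of free space, growing a Vserver's volume later could look roughly like this (a sketch with an example size; resize2fs can grow a mounted ext3 file system online, but make sure you have backups first):

lvextend -L +10G /dev/drbdvg0/web
resize2fs /dev/drbdvg0/web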

== Get informed ==

Of course we want to be informed later on by Heartbeat in case a node goes down, so we install postfix to send the mail.

This should be done on both nodes

apt-get install postfix mailx

and go for the defaults, "internet site" and "node1.example.com"

We don't want postfix to listen on all interfaces,

nano /etc/postfix/main.cf

and change the line at the bottom to read like this; otherwise we get into trouble later with postfix occupying port 25 for all the Vservers.

inet_interfaces = loopback-only
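
Afterwards, restart postfix and check that it now only listens on the loopback address (a quick optional check):

/etc/init.d/postfix restart
netstat -tlnp | grep ':25'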


== Heartbeat ==

=== Get acquainted ===

Add the other node to the hosts file of both nodes; this way Heartbeat knows who is who.

so for node1 do

nano /etc/hosts

and add node2

192.168.1.200   node2
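
And correspondingly on node2, add node1:

192.168.1.100   node1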

=== Get intimate ===

Set up some keys on both boxes so we can log in over ssh without a password (accept the defaults, no passphrase).

ssh-keygen

then copy over the public keys

scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys

scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys
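
Passwordless login can then be verified in both directions, for example (the hostnames resolve thanks to the hosts entries above):

ssh node2 uptime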

=== Configure Heartbeat ===

Without the ha.cf file Heartbeat will not start. This should only be done on 1 of the nodes.

nano /etc/ha.d/ha.cf

autojoin none 
#crm             on      #enables the heartbeat2 cluster manager - not used here, we stick to R1 style
use_logd        on
logfacility     syslog
keepalive       1
deadtime        10
warntime        10
udpport         694
auto_failback   on      #resources move back once node is back online
mcast bond0 239.0.0.43 694 1 0 
bcast eth2      
node node1       #hostnames of the nodes
node node2

This one also goes on just 1 of the nodes:

nano /etc/ha.d/authkeys

auth 3
3 md5 failover  ## this is just a string, enter what you want ! auth 3 md5 uses md5 encryption

chmod 600 /etc/ha.d/authkeys

Note: we will be using the Heartbeat R1-style configuration here simply because I don't understand the R2 xml-based syntax. We only created the above 2 config files on 1 node, but we need them on both; heartbeat can do that for us.

/usr/lib/heartbeat/ha_propagate

=== Heartbeat behaviour ===

After the above 2 files are set, haresources is where we control Heartbeat's behaviour. This is an example for the 1 Vserver that we will set up later on.

nano /etc/ha.d/haresources

node1 drbddisk::r0 LVM::drbdvg0 Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 Vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure

The above will default the Vserver named web to node1 and specify the mount points; the Vserver-web script is what Heartbeat calls to start and stop the guest, and SendArp notifies the network that this IP can now be found somewhere else than before. (I have added the SendArp an extra time below, in the start script, for better results.)

Another example, for more than 1 Vserver. We only specify 1 default node here for all Vservers, and the same DRBD disk and Volume Group; the individual start scripts and mount points are specified separately. Mind the \, it's all 1 logical line. The last MailTo entry is only needed once.

node1 \
drbddisk::r0 \
LVM::drbdvg0 \
Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \
Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \
Vserver-web \
Vserver-ns1 \
SendArp::123.123.123.125/bond0 \
SendArp::123.123.123.126/bond0 \
MailTo::randall@songshu.org::DRBDFailure
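
Once everything below is in place, a crude failover test is to stop Heartbeat on the active node and watch the DRBD role, mounts and Vservers move over; the same trick is used near the end of this guide to move the resources. On node1:

/etc/init.d/heartbeat stop

cat /proc/drbd on node2 should then show it as Primary, and the Vservers should come up there.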

=== start/stop script ===

The Vserver-web script that Heartbeat is told to call above is basically a stripped-down version of the original R2 style agent by Martin Fick, from http://www.theficks.name/bin/lib/ocf/VServer.

To handle ARP properly, a small script has been added to make sure the IP brought up on the new host (should a failover occur) can still communicate with the network. Begin by installing "arping".

apt-get install arping

Then create the file /bin/garp and set gatewayIp to the IP of your router/host gateway.
#!/bin/sh

gatewayIp="XXX.XXX.XXX.XXX"
host=$1

# Send a gratuitous arp to claim ownership of the new IP
for i in `cat /etc/vservers/$host/interfaces/*/ip`; do
  echo "ARPing $i"
  arping -c 1 -S $i $gatewayIp
done;

exit
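
Don't forget to make the script executable, otherwise the start script below cannot call it:

chmod a+x /bin/garp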

What I did is strip the OCF-specific top part (left commented out below) and replace "$OCF_RESKEY_vserver" with the specific Vserver name, and I also added an extra

/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start

to the start part, because I had varying results when it was done by Heartbeat in the first tests I did; not sure if it is still needed, but I guess it doesn't hurt.

nano /etc/ha.d/resource.d/Vserver-web

#!/bin/sh
#
# License: GNU General Public License (GPL) 
# Author:  Martin Fick <mogulguy@yahoo.com>
# Date:    04/19/07
# Version: 1.1
#
#	This script manages a VServer instance
#
#	It can start or stop a VServer
#
#	usage: $0 {start|stop|status|monitor|meta-data}
#
#
#       OCF parameters are as below
#       OCF_RESKEY_vserver
#
#######################################################################
# Initialization:
#
#. /usr/lib/heartbeat/ocf-shellfuncs
#
#USAGE="usage: $0 {start|stop|status|monitor|meta-data}";
#
#######################################################################
#
#
#meta_data() {
#        cat <<END
#<?xml version="1.0"?>
#<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
#<resource-agent name="VServer">
# <version>1.0</version>
# <longdesc lang="en">
#This script manages a VServer instance.
#It can start or stop a VServer.
# </longdesc>
# <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc>
#
# <parameters>
#
#  <parameter name="vserver" unique="1" required="1">
#   <longdesc lang="en">
#The vserver name is the name as found under /etc/vservers
#   </longdesc>
#   <shortdesc lang="en">VServer Name</shortdesc>
#    <content type="string" default="" />
#   </parameter>
#
# </parameters>
#
# <actions>
#  <action name="start"   timeout="2m" />
#  <action name="stop"    timeout="1m" />
#  <action name="monitor" depth="0"  timeout="1m" interval="5s" start-delay="2m" />
#  <action name="status" depth="0"  timeout="1m" interval="5s" start-delay="2m" />
#  <action name="meta-data"  timeout="1m" />
# </actions>
#</resource-agent>
#END
#}

vserver_reload() {
    vserver_stop || return
    vserver_start
}

vserver_stop() {
  #
  #	Is the VServer already stopped?
  #
    vserver_status
    [ $? -ne 0 ] && return 0

    /usr/sbin/vserver "web" "stop"

    vserver_status
    [ $? -ne 0 ] && return 0

    return 1
}

vserver_start() {
    vserver_status
    [ $? -eq 0 ] && return 0

    /bin/garp web
    /usr/sbin/vserver "web" "start"
    vserver_status
    /etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start
}

vserver_status() {
    /usr/sbin/vserver "web" "status"
    rc=$?
    if [ $rc -eq 0 ]; then
	echo "running"
        return 0
    elif [ $rc -eq 3 ]; then
	echo "stopped"
    else
	echo "unknown"
    fi
    return 7
}

vserver_monitor() {
  vserver_status
}


vserver_usage() {

  echo $USAGE >&2
}

vserver_info() {
cat - <<!INFO
	Abstract=VServer Instance takeover
	Argument=VServer Name
	Description:
	A Vserver is a simulated server which is fairly hardware independent
        so it can be easily setup to run on several machines.
	Please rerun with the meta-data command for a list of \\
	valid arguments and their defaults.
!INFO
}

#
#	Start or Stop the given VServer...
#

if [ $# -ne 1 ] ; then
  vserver_usage
  exit 2
fi

  case "$1" in
    start|stop|status|monitor|reload|info|usage)    vserver_$1 ;;
    meta-data)   meta_data ;;
    validate-all|notify|promote|demote)  exit 3 ;;

    *)  vserver_usage ; exit 2 ;;
  esac


To make this file executable by Heartbeat

chmod a+x /etc/ha.d/resource.d/Vserver-web

=== not needed???? ===

There is some more interesting discussion going on at [[Advanced_DRBD_mount_issues]], for those who have multiple Vservers on multiple DRBD devices. Not sure if it also applies to this setup, but I'm using it without any drawbacks at the moment.

Below is a changed version of option 4 by Christian Balzer (originally pointed out on the Vserver mailing list, http://list.linux-vserver.org/archive?mss:835:200803:cgehldioambmojimggpf); it replaces the stop) branch of the drbddisk resource script.

nano /etc/ha.d/resource.d/drbddisk

stop)
        # Kill off any vserver mounts that might hog this
        VNSPACE=/usr/sbin/vnamespace

        for CTX in `/usr/sbin/vserver-stat | tail -n +2 | awk '{print $1}'`
        do
            MPOINT="`$VNSPACE -e $CTX cat /proc/mounts | grep $RES | awk '{print $2}'`"
            echo Unmounting mount point $MPOINT from within context $CTX
            ### MOUNT POINT IS COMPULSORY. DEVICE NAME DOES NOT WORK!!!
            $VNSPACE -e $CTX /bin/umount $MPOINT || continue;
        done
        # exec, so the exit code of drbdadm propagates
        exec $DRBDADM secondary $RES

== Create a Vserver ==

Note that we already mounted the LVM partition on /VSERVERS/web in an earlier step. We're going to place both the /var and /etc directories on that mount point and symlink to them; this way the complete Vserver and its config are available on the other node when it is mounted there.

mkdir -p /VSERVERS/web/etc

mkdir -p /VSERVERS/web/barrier/var

When making the Vserver it will be in the default location /var/lib/vservers/web and its config in /etc/vservers/web

vserver web build -m debootstrap --hostname web.example.com --interface bond0:123.123.123.125/24 -- -d etch -m http://123.123.123.81:3142/debian.apt-get.eu/debian

Then answer the prompts:

enter the root password
Create a normal user account now?        <No>
Choose software to install:              <Ok>

On node1 we move the Vserver directories to the LVM volume on the DRBD disks and make symlinks from the normal locations.

On node1

mv /etc/vservers/web/* /VSERVERS/web/etc/

rmdir /etc/vservers/web/

ln -s /VSERVERS/web/etc /etc/vservers/web

mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var

rmdir /var/lib/vservers/web/

ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web

We need to set the same symlinks on node2, but we need the Vserver directories available there first. The mounting should be handled by Heartbeat by now, so we make our resources move to the other machine.

On node1

/etc/init.d/heartbeat stop

On node2

ln -s /VSERVERS/web/etc /etc/vservers/web


ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web

On node1

/etc/init.d/heartbeat start

vserver web start

and enjoy!
