util-vserver:Cgroups

Kernel configuration

When configuring your kernel for cgroups with util-vserver, you must make sure CONFIG_CGROUP_NS is unset, otherwise guests will not start properly for the time being.
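
A quick way to verify this on a running system (a sketch, assuming your distribution installs the kernel config under /boot, as Debian does) is to grep the config; the output you want to see is "# CONFIG_CGROUP_NS is not set":

grep CONFIG_CGROUP_NS /boot/config-$(uname -r)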

Draft - Distributing cpu shares with cgroups

From what I gathered in sched-design-CFS.txt [1]:

This is simply done by adjusting the cpu.shares. Just do:

echo '512' > /dev/cgroup/<guest name>/cpu.shares

The share you get is equal to the guest's share divided by the sum of the cpu shares of all the guests. So for example:

vserver guest 1 => 512   
vserver guest 2 => 512
vserver guest 3 => 2048
vserver guest 4 => 512

so you have a total of 3584 cpu shares (2048 + 512 + 512 + 512), then you get:

vserver guest 1 => 512 / 3584 = 14%  cpu
vserver guest 2 => 512 / 3584 = 14%  cpu
vserver guest 3 => 2048 / 3584 = 57% cpu
vserver guest 4 => 512 / 3584 = 14%  cpu
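
To see this on a live system, here is a minimal sketch that reads every guest's cpu.shares from the mounted hierarchy and prints the resulting percentage (it assumes the cgroup filesystem is mounted at /dev/cgroup with one directory per running guest):

total=0
for f in /dev/cgroup/*/cpu.shares; do
    total=$((total + $(cat "$f")))
done
for f in /dev/cgroup/*/cpu.shares; do
    echo "$f: $((100 * $(cat "$f") / total))%"
done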



Note that this is fair scheduling and will not enforce a HARD limit (as far as I know).

Making shares permanent with util-vserver

You must use the "cgroup" directory. You can apply defaults to all vservers or choose different settings for each guest:

  • /etc/vservers/.defaults/cgroup , this directory contains settings applied to all guests when they start
  • /etc/vservers/<guestname>/cgroup , this directory contains settings for that guest when it starts.


Example :

mkdir /etc/vservers/.defaults/cgroup
mkdir /etc/vservers/<guestname>/cgroup
echo '2048' > /etc/vservers/<guestname>/cgroup/cpu.shares
# List of CPUs
echo 1 > /etc/vservers/<guestname>/cgroup/cpuset.cpus
# NUMA nodes
echo 1 > /etc/vservers/<guestname>/cgroup/cpuset.mems
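
To check that the values actually took effect, a minimal sketch (assuming the cgroup hierarchy is mounted at /dev/cgroup) is to restart the guest and read the live value back:

vserver <guestname> restart
cat /dev/cgroup/<guestname>/cpu.shares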

Note that /etc/vservers is an example; in my Aqueos install I use /usr/local/etc/vservers, but /etc/vservers seems to be the default for classic installs.

Regards, Ghislain.

cgroup and CFS based CPU hard limiting that replaces sched_hard

You can find documentation about the cfs hard limiting in Documentation/scheduler/sched-cfs-hard-limits.txt inside your kernel source dir.

This feature is currently available in patch-2.6.31.2-vs2.3.0.36.15.diff and is in the testing phase as of this patch set, so report any bugs to the mailing list.

To get the hard limit set up on every vserver start you need a recent utils package. It worked for me with 0.30.216-pre2864.
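
A quick way to check which util-vserver version you have (a sketch, assuming a Debian-style system such as the example installation described below):

dpkg -l util-vserver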

Before trying to set up limits for a guest you should mount the cgroup filesystem:

[ -d /dev/cgroup ] || mkdir /dev/cgroup
mount -t cgroup -ocpu none /dev/cgroup
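
To confirm the hierarchy is mounted, one can check /proc/mounts (the exact options shown will vary):

grep cgroup /proc/mounts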

Example for an upper bound of 2/5ths (or 40%) of all the CPU power that a guest/cgroup can use:

# force CFS hard limit (only needed for older kernel versions)
# echo 1 > /etc/vservers/<guestname>/cgroup/cpu.cfs_hard_limit
# time assigned to guest (in microseconds) 200000 = 0.2 sec
echo 200000 > /etc/vservers/<guestname>/cgroup/cpu.cfs_runtime_us
# in each specified period (in microseconds) 500000 = 0.5 sec
echo 500000 > /etc/vservers/<guestname>/cgroup/cpu.cfs_period_us

This limit is a hard limit: think of it as a ceiling on the resources used by the cgroup. If you set both a cpu share AND a hard limit the system will do fine, but hard limits take priority over cpu share scheduling; cpu shares will do their job, but each cgroup will have an upper bound that it cannot cross even if the cpu share you gave it is higher.

The hard limit feature adds 3 cgroup files for the CFS group scheduler:

cfs_runtime_us: Hard limit for the group in microseconds.
cfs_period_us: Time period in microseconds within which the hard limit is enforced.
cfs_hard_limit: The control file to enable or disable hard limiting for the group.
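
To inspect the live values for a running guest, a minimal sketch (again assuming the cpu controller is mounted at /dev/cgroup):

cat /dev/cgroup/<guestname>/cpu.cfs_runtime_us
cat /dev/cgroup/<guestname>/cpu.cfs_period_us
cat /dev/cgroup/<guestname>/cpu.cfs_hard_limit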


Real world examples of scheduling

This part is to be filled with examples that you have put in place, that are working, and that have been tested. Please add the patch and kernel version for each example you put here.

Ben's install on Debian Lenny

I used the kernels from [2], described at [3]. I've done this on a few versions; it works for 2.6.31.7 with patch vs2.3.0.36.27 on amd64, and also for 2.6.31.11 with patch vs2.3.0.36.28. I used the stock Lenny util-vserver, patched as described below. The kernel config is critically important, with specific cgroup options necessary in order to get cgroups working in this way. The options used in the kernels from repo.psand.net were:

CONFIG_CGROUP_SCHED=y
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
# CONFIG_CGROUP_NS is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_MEM_RES_CTLR=y
# CONFIG_CGROUP_MEM_RES_CTLR_SWAP is not set
CONFIG_NET_CLS_CGROUP=y

Getting Lenny Ready

There's a very old version of util-vserver on Lenny; it needs this patch applied before it will set the cgroups properly (it basically only adds one line). I patched mine with this patch found on the Linux-VServer mailing list:

--- /usr/lib/util-vserver/vserver.suexec.orig	2008-12-12 22:56:25.000000000 -0600
+++ /usr/lib/util-vserver/vserver.suexec	2009-08-20 02:11:42.000000000 -0500
@@ -22,7 +22,8 @@ test -z "$is_stopped" -o "$OPTION_INSECU
     exit 1
 }
 generateOptions  "$VSERVER_DIR"
-addtoCPUSET  "$VSERVER_DIR"
+addtoCPUSET      "$VSERVER_DIR"
+attachToCgroup   "$VSERVER_DIR"
 
 user=$1
 shift
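
To apply it, a minimal sketch (assuming the diff above has been saved as vserver-suexec-cgroup.patch):

$ cd /usr/lib/util-vserver
$ patch vserver.suexec < /path/to/vserver-suexec-cgroup.patch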

Next I mounted a cgroup file system on /dev/cgroup/:

$ mkdir /dev/cgroup
$ mount -t cgroup vserver /dev/cgroup
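
If the mount worked, the top of the hierarchy should contain the control files of every compiled-in controller (a sketch; the exact files depend on your kernel config):

$ ls /dev/cgroup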

For the util-vserver to do the right thing, this directory needs adding too:

$ mkdir /etc/vservers/.defaults/cgroup

Sharing out the CPU between guest servers

I have a few test guests hanging around that I play with, called onetime, twotime, threetime, fourtime and fivetime. In order to set the shares for each guest I did this:

mkdir /etc/vservers/fivetime/cgroup/ /etc/vservers/fourtime/cgroup/ /etc/vservers/threetime/cgroup/ /etc/vservers/twotime/cgroup/ /etc/vservers/onetime/cgroup/
echo "512" > /etc/vservers/fivetime/cgroup/cpu.shares
echo "1024" > /etc/vservers/fourtime/cgroup/cpu.shares
echo "1024" > /etc/vservers/threetime/cgroup/cpu.shares
echo "1536" > /etc/vservers/twotime/cgroup/cpu.shares
echo "1024" > /etc/vservers/onetime/cgroup/cpu.shares

Then I started the guests. When the system was loaded (I used one instance of cpuburn on each guest - not advised, but it worked for me) they should each have got the following percentage of CPU (the shares total 5120, so for example twotime gets 1536/5120 = 30%):

Guest name   cpu.shares given   Percentage of CPU
fivetime     512                10%
fourtime     1024               20%
threetime    1024               20%
twotime      1536               30%
onetime      1024               20%

This didn't quite happen, as each process could migrate to other CPUs. I fixed every guest to use only the same single CPU (see below for how I did this) and the percentages were pretty much exact! Each process was given exactly its designated percentage of time according to vtop.

Dishing out different processors to different guest servers

To limit each guest, the cpuset of each cgroup needs to be changed. I found out the number of CPUs available by doing this:

$ cat /dev/cgroup/cpuset.cpus

This gave me the result 0-1, meaning that the set consists of CPUs 0 and 1 (for a quad core system one would expect the result 0-3, or for quad core with HT, 0-7). I stopped my guests, then specified a cpuset limited to CPU 0 for each of them:

$ echo "0" > /etc/vservers/onetime/cgroup/cpuset.cpus
$ echo "0" > /etc/vservers/twotime/cgroup/cpuset.cpus
$ echo "0" > /etc/vservers/threetime/cgroup/cpuset.cpus
$ echo "0" > /etc/vservers/fourtime/cgroup/cpuset.cpus
$ echo "0" > /etc/vservers/fivetime/cgroup/cpuset.cpus

This meant that, on restarting, I could see with vtop that these guests were only using CPU 0 (the column "Last used cpu (SMP)" needs to be switched on in vtop in order to see this). This setup isn't particularly useful, but it did allow me to check that the percentages I had intended for my cpu shares were working.

Doing this to servers live

The parameters in the last two sections can be set while the servers are running. For example, to move the guest "threetime" so that it could use both CPUs, I did this:

$ cat "0-1" > /dev/cgroup/threetime/cpuset.cpus

The processes running on threetime were instantly allocated cycles on both CPUs. Then:

$ cat "1" > /dev/cgroup/threetime/cpuset.cpus

This shifts them all to CPU 1. One can change where cycles are allocated with impunity. The same goes for CPU shares:

$ cat "4096" > /dev/cgroup/threetime/cpu.shares

This gave threetime a much bigger slice of the processors when it was under load.

NOTE: The range "0-1" is not the only way of specifying a set of CPUs; I could have used "0,1". On bigger systems with, say, 8 CPUs, one could use "0-2,4,5", which would be the same as "0,1,2,4,5" or "0-2,4-5".
