util-vserver:Cgroups

Latest revision as of 14:16, 6 July 2012

Bears run away when you yell at them, even lynxes.

Kernel configuration

When configuring your kernel for cgroups with util-vserver, you must make sure CONFIG_CGROUP_NS (CGroup Namespaces) is unset if your util-vserver version is lower than 0.30.216-pre2882.

CGroup Namespaces are a different approach to namespaces than that used by Linux vServer, and are not currently supported.
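
A quick way to check this on the running kernel (a sketch; it assumes your distribution installs a /boot/config-* file or enables /proc/config.gz):

# Either of these usually works, depending on the distribution:
grep CONFIG_CGROUP_NS /boot/config-$(uname -r)
zgrep CONFIG_CGROUP_NS /proc/config.gz
# You want to see "# CONFIG_CGROUP_NS is not set" (or no match at all on
# kernels where the option has been removed).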

Prerequisites

To use util-vserver's Control Groups (cgroups) support, you need to have /dev/cgroup mounted.

Recent versions of util-vserver sort this out for you by including the appropriate mount command in the init (i.e. runlevel) script shipped with the util-vserver distribution; however, this apparently only works for the sysv init script, and not the Debian or Gentoo ones.

If you were to mount the cgroup Control Groups filesystem manually, you would use something like:

# mkdir /dev/cgroup
# mount -t cgroup -o <subsystems> none /dev/cgroup

Where <subsystems> is something like cpuset,memory.
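
To see which subsystems the running kernel actually provides, you can read /proc/cgroups:

# Lists each subsystem (cpuset, cpu, cpuacct, memory, ...) and whether it is enabled
cat /proc/cgroups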

To avoid the need for manual configuration after reboot, on Gentoo you may wish to add the cgroup mount to /etc/fstab. For Debian see the live examples section at the bottom of this page.

none /dev/cgroup cgroup cpu,cpuset,memory 0 2

Draft - Distributing CPU shares with cgroups

From what I gathered in sched-design-CFS.txt (Documentation/scheduler/sched-design-CFS.txt in the kernel source tree):

This is simply done by adjusting the cpu.shares. Just do:

echo '512' > /dev/cgroup/<guest name>/cpu.shares

The share a guest gets is equal to the guest's share divided by the sum of the CPU shares of all the guests. So for example:

vserver guest 1 => 512   
vserver guest 2 => 512
vserver guest 3 => 2048
vserver guest 4 => 512

So you have a total of 3584 CPU shares (2048+512+512+512), and you get:

vserver guest 1 => 512 / 3584 = 14%  cpu
vserver guest 2 => 512 / 3584 = 14%  cpu
vserver guest 3 => 2048 / 3584 = 57% cpu
vserver guest 4 => 512 / 3584 = 14%  cpu



Note that this is fair scheduling and will not enforce a HARD limit (as far as I know).
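
As a quick sanity check you can list the share currently assigned to each running guest. This is only a sketch; it assumes /dev/cgroup is mounted and that each guest has its own directory there, as in the example above:

for g in /dev/cgroup/*/; do
    [ -f "$g/cpu.shares" ] && echo "$(basename "$g"): $(cat "$g/cpu.shares")"
done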

Making shares permanent with util-vserver

You must use the "cgroup" directory. You can apply defaults to all vservers or choose different settings for each guest:

  • /etc/vservers/.defaults/cgroup: this directory contains settings applied to all guests when they start
  • /etc/vservers/<guestname>/cgroup: this directory contains settings for that guest when it starts.


Example:

mkdir /etc/vservers/.defaults/cgroup
mkdir /etc/vservers/<guestname>/cgroup
echo '2048' > /etc/vservers/<guestname>/cgroup/cpu.shares
# List of CPUs
echo 1 > /etc/vservers/<guestname>/cgroup/cpuset.cpus
# NUMA nodes
echo 1 > /etc/vservers/<guestname>/cgroup/cpuset.mems

Note that /etc/vservers is an example; in my Aqueos install I use /usr/local/etc/vservers, but /etc/vservers seems to be the default for classic installs.

Regards, Ghislain.

cgroup and CFS-based CPU hard limiting that replaces sched_hard

References

You can find documentation about the CFS hard limiting in Documentation/scheduler/sched-cfs-hard-limits.txt inside your kernel source dir.

Requirements

This feature is currently available in patch-2.6.32.20-vs2.3.0.36.29.6.diff and is in the testing phase as of this patch set, so report any bugs to the mailing list.

To get the hard limit set up on every vServer start you need a recent utils package. It worked for me with 0.30.216-pre2864 (download from the util-vserver prereleases at http://people.linux-vserver.org/~dhozac/t/uv-testing/). Also see the note at the top of this page regarding CONFIG_CGROUP_NS, which can usually be checked by grepping /proc/config.gz or /boot/config-`uname -r`.
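
If you are unsure which util-vserver release is installed, vserver-info prints a summary (assuming your build provides the SYSINFO tag, which recent releases do):

# Shows kernel, VS-API and util-vserver versions
vserver-info - SYSINFO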

Before trying to set up limits for a guest you should mount the cgroup filesystem:

[ -d /dev/cgroup ] || mkdir /dev/cgroup
mount -t cgroup -ocpu none /dev/cgroup

Configuration

Example for an upper bound of 2/5 (or 40%) of all the CPU power that a guest/cgroup can use:

# time assigned to guest (in microseconds): 200000 = 0.2 sec
echo 200000 > /etc/vservers/<guestname>/cgroup/cpu.cfs_runtime_us
# in each specified period (in microseconds): 500000 = 0.5 sec
echo 500000 > /etc/vservers/<guestname>/cgroup/cpu.cfs_period_us
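
Those two values give the 40% cap (200000 / 500000 = 0.4). Once the guest has been (re)started you can check that the limit was picked up by reading the live cgroup; this sketch assumes the per-guest cgroup under /dev/cgroup is named after the guest, as elsewhere on this page:

cat /dev/cgroup/<guestname>/cpu.cfs_runtime_us    # should print 200000
cat /dev/cgroup/<guestname>/cpu.cfs_period_us     # should print 500000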

This limit is a hard limit; think of it as a ceiling on the resources used by the cgroup.

If you set both a CPU share AND a hard limit the system will do fine, but hard limits take priority over CPU share scheduling: CPU shares will still do their job, but each cgroup will have an upper bound that it cannot cross even if the CPU share you gave it is higher.

The hard limit feature adds 2 cgroup files for the CFS group scheduler:

  • cfs_runtime_us: Hard limit for the group in microseconds.
  • cfs_period_us: Time period in microseconds within which the hard limit is enforced.

Using cgroups to enforce memory limits

In Linux-VServer patch version vs2.3.0.36.29, memory limiting by cgroup was introduced. To use it you need the following config lines in your kernel build (in addition to the others mentioned for the cgroup CPU limits):

  • CONFIG_RESOURCE_COUNTERS=y
  • CONFIG_CGROUP_MEM_RES_CTLR=y
  • CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y

Make sure /dev/cgroup is mounted with -o ...,memory to be able to use this feature. The following files let you adjust the memory limits of a running vserver (create them in /etc/vservers/<vserver-name>/cgroup/ to make them permanent):

  • memory.memsw.limit_in_bytes: the combined memory + swap limit of your cgroup context
  • memory.limit_in_bytes: the memory (RAM) limit

Values are stored in bytes. When writing to these files you can use the suffixes K, M and G.
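
A minimal sketch (the guest name and the values are only examples): limit a guest to 512M of RAM and 1G of RAM + swap, both on the running guest and persistently for the next start:

# Live, on a running guest:
echo 512M > /dev/cgroup/<guestname>/memory.limit_in_bytes
echo 1G > /dev/cgroup/<guestname>/memory.memsw.limit_in_bytes

# Persistent, applied when the guest starts:
mkdir -p /etc/vservers/<guestname>/cgroup
echo 512M > /etc/vservers/<guestname>/cgroup/memory.limit_in_bytes
echo 1G > /etc/vservers/<guestname>/cgroup/memory.memsw.limit_in_bytes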

Note: cgroup memory limits are to replace rss.soft and rss.hard some time in the future.

Note: from kernel 3.2.x onwards you HAVE to boot with the kernel parameter swapaccount=1, or swap accounting is disabled.

Limiting the memory and not the swap means that as soon as the memory limit is reached the guest will swap until it fills the swap completely, so swap limits are somewhat necessary to prevent runaway processes.

When you wish the guests to see only their limited memory pool, be sure to include VIRT_MEM in your cflags config file.
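
A sketch, assuming the standard util-vserver layout where per-guest context flags live one per line in /etc/vservers/<guestname>/flags (the file this page calls the cflags config file):

# Make the guest see only its limited memory pool (takes effect on the next start)
echo 'virt_mem' >> /etc/vservers/<guestname>/flags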

BUG-ALERT: If you're getting into trouble (an "Unable to handle kernel paging request at [..]" error plus a trace in dmesg or /var/log/messages), please use the following experimental patch provided by Bertl: http://vserver.13thfloor.at/ExperimentalT/delta-memcg-fix04.diff

For a deeper understanding, check out Documentation/cgroups/memory.txt in your kernel source tree.

Real world examples of scheduling

This section is for working and tested examples you have put in place.

Please add the following information for each example you put here (use vserver-info).

  • Base kernel version
  • vServer version
  • Other kernel patches in use (grsec, etc.)
  • util-vserver release

Ben's install on Debian Lenny

I used the kernels from http://repo.psand.net, described at http://kernels.bristolwireless.net/. I've done this on a few versions; it works for 2.6.31.7 with patch vs2.3.0.36.27 on amd64, and also for 2.6.31.11 with patch vs2.3.0.36.28. I used the stock Lenny util-vserver, patched as described below. The kernel config is critically important, with specific cgroup options necessary in order to get cgroups working in this way. Check the configs for the repo.psand.net kernels to see which ones I used.

Getting Lenny Ready

There's a very old version of util-vserver in Lenny; it needs this patch applied before it will set up the cgroups properly. It basically only adds one line:

--- /usr/lib/util-vserver/vserver.suexec.orig	2008-12-12 22:56:25.000000000 -0600
+++ /usr/lib/util-vserver/vserver.suexec	2009-08-20 02:11:42.000000000 -0500
@@ -22,7 +22,8 @@ test -z "$is_stopped" -o "$OPTION_INSECU
     exit 1
 }
 generateOptions  "$VSERVER_DIR"
-addtoCPUSET  "$VSERVER_DIR"
+addtoCPUSET      "$VSERVER_DIR"
+attachToCgroup   "$VSERVER_DIR"
 
 user=$1
 shift

Next I added a correctly mounted cgroup file system on /dev/cgroup/.

$ mkdir /dev/cgroup
$ mount -t cgroup vserver /dev/cgroup

For util-vserver to do the right thing, this directory needed adding too:

$ mkdir /etc/vservers/.defaults/cgroup

Sharing out the CPU between guest servers

I have a few test guests hanging around that I play with, called onetime, twotime, threetime, fourtime and fivetime. In order to set the shares for each guest I did this:

mkdir /etc/vservers/fivetime/cgroup/ /etc/vservers/fourtime/cgroup/ /etc/vservers/threetime/cgroup/ /etc/vservers/twotime/cgroup/ /etc/vservers/onetime/cgroup/
echo "512" > /etc/vservers/fivetime/cgroup/cpu.shares
echo "1024" > /etc/vservers/fourtime/cgroup/cpu.shares
echo "1024" > /etc/vservers/threetime/cgroup/cpu.shares
echo "1536" > /etc/vservers/twotime/cgroup/cpu.shares
echo "1024" > /etc/vservers/onetime/cgroup/cpu.shares

Then I started the guests. When the system was loaded (I used one instance of cpuburn on each guest; not advised, but a useful test) each should have got the following percentage of CPU.

Guest name    cpu.shares given    Percentage of CPU
fivetime      512                 10%
fourtime      1024                20%
threetime     1024                20%
twotime       1536                30%
onetime       1024                20%

This didn't quite happen, as each process could migrate to other CPUs. When I fixed every guest to use only one of the available CPUs (see below for how I did this), the percentage of processing time allotted to each guest was then pretty much exact! Each process was given exactly its designated percentage of time according to vtop.

Dishing out different processor sets to different guest servers

The "cpuset" for each guest is the subset of CPUs which it is permitted to use. I found out the number of CPUs available on my system by doing this:

$ cat /dev/cgroup/cpuset.cpus

This gave me the result 0-1, meaning that the overall set for my cgroups consists of CPUs 0 and 1 (for a quad core system one would expect the result 0-3, or for quad core with HT, 0-7). I stopped my guests, then specified a cpuset containing only CPU 0 for each of them:

$ echo "0" > /etc/vservers/onetime/cgroup/cpuset.cpus
$ echo "0" > /etc/vservers/twotime/cgroup/cpuset.cpus
$ echo "0" > /etc/vservers/threetime/cgroup/cpuset.cpus
$ echo "0" > /etc/vservers/fourtime/cgroup/cpuset.cpus
$ echo "0" > /etc/vservers/fivetime/cgroup/cpuset.cpus

On restarting the guests, I could see (using vtop) that they were only using CPU 0 (the column "Last used cpu (SMP)" needs to be switched on in vtop in order to see this). This setup isn't particularly useful, but it did allow me to check that the cpu.shares I specified for my guests were working as expected.

Doing this to servers live

The parameters in the last two sections can be set while the servers are running. For example, to move the guest "threetime" so that it could use both CPUs, I did this:

$ echo "0-1" > /dev/cgroup/threetime/cpuset.cpus

The processes running on threetime were instantly allocated cycles on both CPUs. Then:

$ echo "1" > /dev/cgroup/threetime/cpuset.cpus

Shifts them all to CPU 1. One can change where cycles are allocated with impunity. The same with CPU shares:

$ echo "4096" > /dev/cgroup/threetime/cpu.shares

Gave threetime a much bigger slice of the processors when it was under load.

NOTE: The range "0-1" is not the only way of specifying a set of CPUs, I could have used "0,1". On bigger systems, with say 8 CPUs one could use "0-2,4,5", which would be the same as "0,1,2,4,5" or "0-2,4-5".

Making sure all of this gets set up after a reboot

This process will make sure /dev/cgroup is present at boot and correctly mounted:

  • patch util-vserver (see above)
  • mkdir /etc/vservers/.defaults/cgroup
  • mkdir /lib/udev/devices/cgroup (this means that /dev/cgroup is created early in the boot process)
  • add the following line to /etc/fstab
vserver    /dev/cgroup    cgroup    cpu,cpuset,memory    0 0
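
After the next reboot a quick check confirms that the cgroup filesystem came up as expected:

grep cgroup /proc/mounts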

Ben's install on Debian Squeeze/Sid

These instructions are for Debian's own packages.

Squeeze ships with the 2.6.32 kernel. Currently the package linux-image-2.6.32-5-vserver-amd64 works well for cgroup scheduling. The following steps are the simplest way to set it up:

  • mkdir /etc/vservers/.defaults/cgroup
  • mkdir /lib/udev/devices/cgroup (this means that /dev/cgroup is created early in the boot process)
  • add the following line to /etc/fstab
vserver    /dev/cgroup    cgroup   cpuset,cpu,cpuacct,devices,freezer,net_cls    0 0
  • reboot the server

Instructions for setting particular parameters are the same as for Lenny. The reason for specifying the cgroup subsystems is that if the namespace subsystem "ns" is included, Linux-Vserver will not work. The /etc/fstab line above mounts /dev/cgroup with all the available subsystems excluding "ns".

Note that the "memory" cgroup subsystem is omitted as Squeeze has the legacy memory controls through rlimits compiled in. It is possible to add "memory" to the cgroup fstab line and use the cgroup based memory control. Please add any success with this to this page.
