Frequently Asked Questions


We are currently migrating from our old installation to MediaWiki, but not all content has been migrated yet. Take a look at the Wiki Team page for instructions on how to help, or look at the old wiki to find the information that has not been migrated yet.

To ease migration we created a List of old Documentation pages.

CURRENTLY THE CONTENT OF THE OLD WIKI FAQ (AND MORE) IS BEING MIGRATED TO THIS PAGE (TASK: DERJOHN)



What is a 'Guest'?

To talk about things, we need some naming. The physical machine is called the 'Host', and the 'main' context running the host distribution is called the 'Host Context'. The virtual machine/distribution is called a 'Guest' and is basically a distribution (userspace) running inside a 'Guest Context'.
derjohn




What kind of Operating System (OS) can I run as guest?

A: With VServer you can only run Linux guests. The trick is that a guest does not run a kernel of its own (as XEN and UML guests do); it merely uses a virtualized interface to the host kernel. VServer offers so-called security contexts which make it possible to separate guests from one another, i.e. they cannot access each other's data. Imagine it as a chroot environment with much more security and many more features.
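
As a quick illustration, guests are managed from the host with the util-vserver tools. A minimal sketch of the day-to-day commands (the guest name 'myguest' is just a placeholder):

# vserver myguest start     # start the guest's userspace (its init/services)
# vserver myguest enter     # get a shell inside the guest's security context
# vserver-stat              # list the running contexts and their resource usage
# vserver myguest stop      # shut the guest down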
derjohn



Which distributions did you test?

A: Some. Check out the wiki for ready-made guest images. But you can easily build your own guest images, e.g. with Debian's debootstrap; see ((Building Guest Systems)) for how to do that.
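
For example, a Debian guest can be built with the debootstrap method of 'vserver ... build'. This is only a sketch; the guest name, context id, IP, Debian suite and mirror below are placeholders, and the exact options may differ between util-vserver versions:

# vserver myguest build -m debootstrap \
    --context 42 --hostname myguest \
    --interface eth0:192.168.1.100/24 \
    -- -d etch -m http://ftp.debian.org/debian

Afterwards the guest can be started with 'vserver myguest start'.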
derjohn



Is VServer comparable to XEN/UML/QEMU?

A: Nope. XEN/UML/QEMU and VServer are just good friends. Since you are asking, you probably know what XEN/UML/QEMU are. In contrast to XEN/UML/QEMU, VServer does not "emulate" any hardware on which you run a kernel. You can even run a VServer kernel inside a XEN/UML/QEMU guest; this is confirmed to work at least with Linux 2.6/vs2.0.
derjohn



Is VServer secure?

A: We hope so. It should be at least as secure as Linux itself. We consider it much more secure, though.
derjohn



Performance?

A: For a single guest, we basically have native performance. Some tests showed insignificant overhead (about 1-2%), others even ran faster than on an unpatched kernel. This is IMVHO significantly less overhead than other solutions incur, especially if you have more than a single guest (because of the resource sharing).
derjohn



Is SMP Supported?

A: Yes, on all SMP capable kernel architectures.
derjohn



Resource sharing?

A: Yes ....
  • memory: Dynamically.
  • CPU usage: Dynamically (token bucket; see the sketch below)
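
As a rough sketch, the CPU token bucket can be tuned per guest via the new-style configuration (the directory and file names follow the great flower page; the numbers are only illustrative assumptions):

# mkdir -p /etc/vservers/<vservername>/sched
# echo 30  > /etc/vservers/<vservername>/sched/fill-rate    # tokens added per interval
# echo 100 > /etc/vservers/<vservername>/sched/interval     # interval length in jiffies
# echo 100 > /etc/vservers/<vservername>/sched/tokens       # tokens the guest starts with
# echo 30  > /etc/vservers/<vservername>/sched/tokens-min   # minimum needed to run again
# echo 200 > /etc/vservers/<vservername>/sched/tokens-max   # size of the bucket

With these example values a guest gets roughly fill-rate/interval = 30% of a CPU when there is contention.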
derjohn



Resource limiting?

A: Yes, you can set maximum limits per guest, but guaranteed resource availability currently requires some tricks. Both ulimits and rlimits are available; rlimits are a new feature of kernel 2.6/vs2.0.
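
A minimal sketch of per-guest limits in the new-style configuration, assuming the rlimits directory described on the great flower page (resource names and numbers are just examples):

# mkdir -p /etc/vservers/<vservername>/rlimits
# echo 200    > /etc/vservers/<vservername>/rlimits/nproc   # maximum number of processes
# echo 100000 > /etc/vservers/<vservername>/rlimits/rss     # maximum resident set size (in pages)

The limits are applied the next time the guest is started.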
derjohn



Disk I/O limiting? Is that possible?

A: Well, since vs2.1.1 Linux-VServer supports a mechanism called 'I/O scheduling', which appeared in the 2.6 mainline some time ago. The mainline kernel offers several I/O schedulers:
# cat /sys/block/hdc/queue/scheduler
noop [anticipatory] deadline cfq

The default is anticipatory a.k.a. "AS". When running several guests on a host you probably want the I/O performance shared in a fair way among the different guests. The kernel comes with a "completely fair queueing" scheduler, CFQ, which can do that. (More on schedulers can be found at http://lwn.net/Articles/114770/)

This is how to set the scheduler to "cfq" manually:

root# echo "cfq" > /sys/block/hdc/queue/scheduler
root# cat /sys/block/hdc/queue/scheduler
noop anticipatory deadline [cfq]

Keep in mind that you have to do this for all physical disks. So if you run an md-softraid, do it for all physical /dev/hdXYZ disks!

If you run Debian there is a predefined way to set the /sys values at boot-time:

# apt-get install sysfsutils
[...]

# grep cfq /etc/sysfs.conf
block/sda/queue/scheduler = cfq
block/sdc/queue/scheduler = cfq

# /etc/init.d/sysfsutils restart

For non-vserver processes, CFQ lets you set the key by which the kernel decides fairness:

cat /sys/block/hdc/queue/iosched/key_type
pgid [tgid] uid gid

Hint: The 'key_type'-feature has been removed in the mainline kernel recently. Don't look for it any longer :(

The default is tgid, which means sharing fairly among process groups. Think of every guest as being treated like its own process group. It is not possible to set a scheduler strategy within a guest; all processes belonging to the same guest are treated like "noop" within the guest. So: if you run Apache and some FTP server within the _same_ guest, there is no fair scheduling between them, but there is fair scheduling between the whole guest and all other guests.

And: It's possible to tune the scheduler parameters in several ways. Have a look at /sys/block/hdc/queue/....
derjohn



Nice disk I/O scheduling, is that possible?

A: Well, since Linux 2.6.13 processes have another priority next to the CPU nice value; it is called I/O niceness.

It is split into three classes, called real-time, best-effort and idle. The default is best-effort, and within best-effort you can have a niceness from 0 to 7 inclusive. You can set this niceness with the ionice tool, which on Debian is in either the util-linux or the schedutils package. To change the I/O niceness you need CAP_SYS_NICE *and* the same uid as the process you want to ionice. If you want to increase the niceness of an I/O-hogging process within a vserver, you can do:

chcontext --xid sponlp1 sudo -u '#2089' ionice -c2 -n5 -p24409
with sudo and ionice installed on the host, this increases the I/O *nice*ness of pid 24409, which runs with uid 2089.
Groteblup



Why isn't there a device /dev/xyz within a guest?

A: Device nodes allow userspace to access hardware (or virtual resources). Creating a device node inside the guest's namespace will give access to that device, so for security reasons, the number of 'given' devices is small.
derjohn



What is unification (vunify)?

A: Unification is Hard Links on Steroids. Guests can 'share' common files (usually binaries and libraries) in a secure way, by creating hard links with special properties (immutable but unlinkable (removable)). The tool to identify common files and to unify them is called vunify.
derjohn



What is vhashify?

A: The successor of vunify, a tool which does unification based on hash values (which makes it possible to find common files in arbitrary paths).

It creates hardlinks to files named after a hash of the content of the file. If you have a recent version of the vserver patch (2.2+), with CONFIG_VSERVER_COWBL enabled, you can even modify the hardlinked files inside the vservers and the links will be broken automatically.

There seems to be a catch when a hashified file has multiple hardlinks inside a guest, or when another internal hardlink is added after hashification. Link breaking will remove all the internal hardlinks too, so the guest will end up with different copies of the original file. The correct solution would be to not hashify files that have multiple links prior to hashification, and to break the link to the hashified version when a new internal hardlink is created. Apparently, this is not implemented yet (?).
Guy-



How do I manage a multi-guest setup with vhashify?

A: For 'vhashify', just do these once:
mkdir /etc/vservers/.defaults/apps/vunify/hash /vservers/.hash
ln -s /vservers/.hash /etc/vservers/.defaults/apps/vunify/hash/root

Then, do this one line per vserver:

mkdir /etc/vservers/<vservername>/apps/vunify   # vhashify reuses vunify configuration

To hashify a running vserver, do (possibly from a cronjob):

vserver name-of-guest hashify

The guest needs to be running because vhashify tries to figure out what files not to hashify by calling the package manager of the guest via vserver enter.

In order for the OS cache to benefit from the hardlinking, you'll have to restart the vservers.

To clean up hashified files that are no longer referenced by any vserver, do (possibly from a cronjob):

find /vservers/.hash -type f -links 1 -print0 | xargs -0 rm
Until you do this, the files still take up space even though no vserver needs them.
Guy-



With which version should I begin?

A: If you are new to VServer, I recommend trying the latest stable kernel patch and the latest util-vserver "alpha" release.
derjohn



Is there a way to implement "user/group quota" per VServer?

A: Yes, but not on a shared partition for now. You need to put the guest on a separate partition, set up a vroot device (to make the quota access secure), copy that device node into the guest, and adjust the mtab line inside the guest.
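
A very rough sketch of those steps, assuming the vroot kernel module and the vrsetup tool from util-vserver are available; every device name and path below is only an example:

# vrsetup /dev/vroot/0 /dev/hdb1                 # bind the vroot proxy device to the guest's partition
# cp -a /dev/vroot/0 /vservers/myguest/dev/hdv1  # copy the device node into the guest

Inside the guest, /etc/mtab should then claim that device as the root filesystem, e.g. a line like '/dev/hdv1 / ext3 rw,usrquota,grpquota 0 0', so the guest's quota tools know which device to query.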
derjohn



What about "Quota" for a context?

A: Context quotas are now called Disk Limits (so that we can tell them apart from the user/group quotas :). They are supported out of the box (with vs2.0+) for all major filesystems (ext2/3, ReiserFS, JFS).
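
A rough sketch of a per-guest disk limit in the new-style configuration, following the dlimits layout from the great flower page (the numbers are only examples):

# mkdir -p /etc/vservers/<vservername>/dlimits/0
# echo /vservers/<vservername> > /etc/vservers/<vservername>/dlimits/0/directory
# echo 1000000 > /etc/vservers/<vservername>/dlimits/0/space_total   # space limit in blocks
# echo 100000  > /etc/vservers/<vservername>/dlimits/0/inodes_total  # inode limit
# echo 5       > /etc/vservers/<vservername>/dlimits/0/reserved      # percent reserved for root

The limits take effect when the guest is (re)started.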
derjohn



Does it support IPv6?

A: Currently it requires an additional patch, but the functionality should be available in 2.3+ soon. The IPv6 page has more information.
derjohn



I can't do all I want with the network interfaces inside the guest?

A: For now, networking is 'host business': the host is a router, and each guest is a server. You can set the capability ICMP_RAW in the context of the guest, or even the capability CAP_NET_RAW (which would also allow sniffing the interfaces of other guests!). This is likely to change with ngnet.
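
A minimal sketch of how such capabilities are granted in the new-style configuration; the file names follow the great flower page, but treat the exact capability names as assumptions and check your util-vserver version:

# echo RAW_ICMP >> /etc/vservers/<vservername>/ccapabilities   # context capability, e.g. to allow ping inside the guest
# echo NET_RAW  >> /etc/vservers/<vservername>/bcapabilities   # grants CAP_NET_RAW (use with care!)

The guest has to be restarted for the change to take effect.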
derjohn



Is there a web-based interface for vserver that will allow creation/deletion/configuration etc. of vserver guests?

A: There are a few:
  • http://OpenVPS.org: a set of scripts with a web interface for webhosters/ISPs.
  • http://Openvcp.org: a distributed system (agent!) with a web interface, with which you can build and remove guests.
  • http://vsmon.revolutionlinux.com/: a distributed monitoring-only solution that allows you to search for a particular vserver in your park.
derjohn



What is old-style and new-style config?

A: Old-style config refers to a single text file that contains all the configuration settings. With new-style config, the configuration is split into several directories and files under /etc/vservers/<vservername>/. You should probably go for the new-style config if you are asking.
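
As a rough illustration, a small new-style configuration could look like this (only a sketch; which files you actually need depends on your setup):

/etc/vservers/<vservername>/context           # the context id (xid) of the guest
/etc/vservers/<vservername>/interfaces/0/ip   # first IP address of the guest
/etc/vservers/<vservername>/interfaces/0/dev  # network device the IP is bound to
/etc/vservers/<vservername>/apps/init/style   # how the guest is started (e.g. plain or sysv)

The great flower page (next question) documents every file and directory in detail.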
derjohn



What is the "great flower page"?

A: Well, this page contains all configuration options for util-vserver. The name of the page is derived from the stylesheet(s) it contains.
derjohn



How do I add several IPs to a vserver?

A: First of all, a single vserver guest only supports up to 16 IPs (there is a 64-IP patch available, which is included in "derjohn's kernel").

Here is a little helper script that adds a list of IPs defined in a text file ('myiplist'), one per line.

#!/bin/bash
# run this from within /etc/vservers/<vservername>/interfaces/
j=1
for i in `cat myiplist`; do
        j=$(($j+1))
        mkdir $j
        echo $i > $j/ip
done



Why doesn't setting kernel.shmmax on the host affect my guests?

A: This has to do with the 'ipc namespace' feature that was added to the mainline kernel in version 2.6.19. Linux-VServer uses that feature to give each guest a separate 'ipc namespace' and thus its 'own' sysctl values. Because shmmax is such a sysctl value, you have to set it per guest.

Here is an example of how to do so:

# mkdir /etc/vservers/<vserver>/sysctl/0 -p
# echo kernel.shmall > /etc/vservers/<vserver>/sysctl/0/setting
# echo 134217728 > /etc/vservers/<vserver>/sysctl/0/value
# mkdir /etc/vservers/<vserver>/sysctl/1 -p
# echo kernel.shmmax > /etc/vservers/<vserver>/sysctl/1/setting
# echo 134217728 > /etc/vservers/<vserver>/sysctl/1/value

It is also explained on the great flower page: see http://www.nongnu.org/util-vserver/doc/conf/configuration.html and look for "sysctl".

After changing those values, restart your guest, enter it and check if the values are set:

# sysctl -a | grep shm
...
kernel.shmall = 134217728
kernel.shmmax = 134217728
derjohn

