Installation on Linux 2.6

This guide will explain how to install a Linux-VServer kernel and util-vserver manually from source. It is assumed that you have basic knowledge about building a custom kernel, i.e. that you know which options to enable in the kernel configuration. The Linux-VServer specific options are, of course, explained here.


Manual Kernel Compilation

You might ask yourself: why should I build a custom kernel? Manually configuring a kernel is often seen as the most difficult procedure a Linux user ever has to perform. Nothing could be further from the truth -- after configuring a couple of kernels you won't even remember that it was difficult ;)

However, one thing is true: you must know your system when you start configuring a kernel manually.

There are good reasons to build your kernel manually:

  • Your distribution does not have a prebuilt Linux-VServer kernel
    • or maybe it does, but it hasn't been compiled with the options you need
  • Your distribution does not have the latest and greatest
  • You don't want to install bloated prebuilt kernels
  • You want a monolithic kernel and your distribution uses modules
  • You can tell everyone that you built your kernels manually ;)

If you still don't want to build your own kernel, have a look at our Documentation section for how to install a prebuilt Linux-VServer kernel for your distribution. Otherwise, read on.

Getting the Sources

You'll need the vanilla kernel sources (i.e. those from kernel.org) and (of course) a Linux-VServer patch for the kernel version you intend to use. You can find links to both files in our Downloads section. (Note that for recent kernels, only development versions of the vserver patch exist. You can obtain these from http://vserver.13thfloor.at/Experimental.)


In this document we will use Linux 2.6.22.19 with Linux-VServer 2.2.0.7.

First, you have to create a directory for the sources. If you already have one, feel free to skip this step and/or adjust the paths to your needs.

# Create a directory for our sources
mkdir ~/src

# Switch to that directory
cd ~/src

Now that we have a place to store our sources, we need to fetch them. We start with the vanilla sources.

# Get Linux 2.6.22.19 sources
wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.19.tar.bz2

# Extract them
tar xjf linux-2.6.22.19.tar.bz2

Now it is time to get the Linux-VServer patch and apply it to the sources. While we're at it, here's a nice trick I learned from Bertl that allows you to keep a lot of source trees on your disk without using up much disk space (it also speeds up 'diff' a lot, which is really nice if you do kernel hacking). What we do is create a hard-linked copy of our sources and patch that copy with the Linux-VServer patch. That way, only the patched files use additional disk space (and because hard-linked files are identical by definition, diff doesn't need to compare them).

# Get the Linux-VServer 2.2.0.7 patch
wget http://ftp.linux-vserver.org/pub/kernel/vs2.2/patch-2.6.22.19-vs2.2.0.7.diff

# Create a hard-linked copy of the vanilla sources, this will get the Linux-VServer patch applied
cp -la linux-2.6.22.19 linux-2.6.22.19-vs2.2.0.7

# Switch to that new directory
cd linux-2.6.22.19-vs2.2.0.7

# Patch the sources
cat ../patch-2.6.22.19-vs2.2.0.7.diff | patch -p1

Now you have two source trees: the vanilla sources for 2.6.22.19 and the Linux-VServer sources for 2.6.22.19-vs2.2.0.7. You might ask "Why do I need two source trees at all? I only want one kernel!" and that's a good question.

Here's one answer: updates! If a new vanilla kernel is released, you can just download the patch from your version to the new one. Had you applied the Linux-VServer patch to your one and only vanilla source tree, you would not be able to do this. The same applies to new Linux-VServer releases: if a new Linux-VServer patch is available, you simply create another hard-linked copy of your vanilla sources and apply the new patch to that copy. This can really save you time (and bandwidth), since you can keep everything you might need without wasting a lot of disk space.
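
For example, picking up a newer Linux-VServer patch would look like this (a sketch; the vs2.2.0.8 version number is purely hypothetical):

# Create another hard-linked copy of the vanilla tree for the (hypothetical) newer patch
cp -la linux-2.6.22.19 linux-2.6.22.19-vs2.2.0.8

# Apply the new patch to that copy only
cd linux-2.6.22.19-vs2.2.0.8
patch -p1 < ../patch-2.6.22.19-vs2.2.0.8.diff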

But be aware that this needs some discipline when hacking the source. Because hard-linked files share the same data on the disk, you need to make sure that your editor does The Right Thing, otherwise you might mess up all your source trees...

An even better way of dealing with this, if using more disk space is not a big issue, is to use a version control system. Git, for example, is specifically designed for dealing with many different but similar versions of a project, and is in fact used to manage the source code of the Linux kernel itself. Rather than downloading and extracting a tarball, you can clone the kernel sources from the Git repository at git.kernel.org into your own local repository.
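
A minimal sketch of that workflow, assuming the stable kernel tree on git.kernel.org carries a tag for the release you want (the repository URL and tag name shown here are assumptions and may differ for your setup):

# Clone the stable kernel repository (this fetches the full history, which is large)
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable

# Create a branch at the release you want and apply the Linux-VServer patch on it
git checkout -b vserver-2.2.0.7 v2.6.22.19
patch -p1 < ../patch-2.6.22.19-vs2.2.0.7.diff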

Configuring the Kernel

Under Ubuntu (tested on 8.04 Hardy x86_64), the configuration file of the installed kernel can be found in the /boot directory under a name like config-'uname -r' (typically ending in -generic). Copied into the kernel source directory, it can be used as a starting point for configuring the rest of the kernel. The filename must be .config.

Now go to your kernel source directory and execute make menuconfig. This will fire up an ncurses-based configuration menu. (Of course you can use whatever configuration method you like: there is a text-based one (make config), a GTK-based one (make gconfig), and even a Qt-based one (make xconfig).) make oldconfig is particularly useful because it only asks you about kernel options that are supported by the kernel but don't already have a value assigned in your .config. Looking through all the available options can be tedious; make oldconfig saves you time by only showing the new ones (in this case, the vserver-related options).

# Configure the kernel using a ncurses based menu
make menuconfig
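
If you started from a distribution configuration as described above, a typical sequence might look like this (a sketch; the config file path is an example and depends on your system):

# Start from the distribution's kernel configuration (example path; adjust as needed)
cp /boot/config-$(uname -r) .config

# Only ask about options that have no value yet (here: the vserver-related ones)
make oldconfig

# Fine-tune the result in the ncurses menu
make menuconfig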

It is beyond the scope of this guide to explain all the available configuration options. If you feel unsure about certain options, either use the default value or consult your distribution manuals and the documentation shipped with the kernel for help.

Nevertheless, we will of course explain the Linux-VServer configuration options. Depending on your version, they may look similar to the following:

Linux VServer --->
  [*] Enable Legacy Kernel API                    (<2.3)
  [ ]   Show a Legacy Version ID
  [*]   Enable dynamic context IDs                (2.1 - 2.2)
  [ ] Disable Legacy Networking Kernel API        (2.0.x only)
  [*] Enable Legacy Networking Kernel API         (2.1 - 2.2)
  [*] Automatically Assign Loopback IP            (2.3+)
  [*] Automatic Single IP Special Casing          (2.3+)
  [ ] Remap Source IP Address                     (<2.3)
  [*] Enable COW Immutable Link Breaking          (2.1+)
  [ ] Enable Virtualized Guest Time               (2.1+)
  [ ] Enable Guest Device Mapping                 (2.1, 2.3)
  [*] Enable Proc Security
  [ ] Enable Hard CPU Limits
  [ ]   Avoid idle CPUs by skipping Time          (2.1+)
  [ ]   Limit the IDLE task
      Persistent Inode Tagging (UID24/GID24)  --->
  [ ] Tag NFSD User Auth and Files
  [ ] Enable Inode Tag Propagation                (2.1+)
  [ ] Honor Privacy Aspects of Guests             (2.1+)
  [256] Maximum number of Contexts (1-65533)      (2.2+)
  [*] VServer Warnings                            (2.2+)
  [ ] VServer Debugging Code
  [ ]   VServer History Tracing
  (64)  Per-CPU History Size (32-65536)
  [ ]   VServer Scheduling Monitor                (2.1+)
  (1024) Per-CPU Monitor Queue Size (32-65536)    (2.1+)
  (256)  Per-CPU Monitor Sync Interval (0-65536)  (2.1+)
Enable Legacy Kernel API
This enables the legacy API used in vs1.xx, maintaining compatibility with older vserver tools and guest images that are configured using the legacy method. You shouldn't enable it for new Linux-VServer installations.
Show a Legacy Version ID
This shows a special legacy version to very old tools which do not handle the current version correctly. This will probably disable some features of newer tools so better avoid it, unless you really, really need it for backwards compatibility.
Enable dynamic context IDs
This enables support for in-kernel dynamic context IDs which are deprecated and soon to be removed.
Enable/Disable Legacy Networking Kernel API
This enables/disables the legacy networking API which is required by the chbind tool in util-vserver <= 0.30.209. That is a fairly old version of util-vserver, so unless you know you'll be using something that ancient, feel free to disable this option.
Automatically Assign Loopback IP
Enable this to get a unique 127.x.y.1 address (where x.y matches the context ID) for each network context automatically, and enable the NXF_LBACK_REMAP and NXF_HIDE_LBACK flags. This creates a per-guest, isolated 127.0.0.1 address. This has the side effect that services bound to 127.0.0.1 on the host will be inaccessible from guests by default; be sure this is what you want before enabling this option.
Automatic Single IP Special Casing
Enabling this option will make the kernel automatically set NXF_SINGLE_IP for contexts which have only one IP address (note: a loopback address does not count). (TODO: add a link here to a page that explains what NXF_SINGLE_IP does, or briefly explain it here.)
Remap Source IP Address
This allows the source IP address of 'local' connections to be remapped from 127.0.0.1 to the first assigned guest IP.
Enable COW Immutable Link Breaking
This enables the COW (Copy-On-Write) link break code. It allows you to treat unified files like normal files when writing to them (which will implicitly break the link and create a copy of the unified file). Note that this currently doesn't work on xfs; on xfs, the new copy of the unified file will contain only binary zeroes.
Enable Virtualized Guest Time
This enables per guest time offsets to allow for adjusting the system clock individually per guest. This adds some overhead to the time functions and therefore should not be enabled without good reason.
Enable Guest Device Mapping
This enables a generic remapping/access control interface for device nodes used inside the guest. For example, you could rewrite a guest's attempts to use /dev/hda to /dev/sda. This is normally not needed; guests don't normally use device nodes associated with physical hardware at all. (Of course, remapping can also be applied to device nodes that don't correspond to physical hardware.)
Enable Proc Security
This configures ProcFS security to initially hide non-process entries for all contexts except the main and spectator context (i.e. for all guests), which is a secure default.
Enable Hard CPU Limits
This will compile in code that allows the Token Bucket Scheduler to put processes on hold when a context's tokens are depleted (provided that its per-context sched_hard flag is set).
Avoid idle CPUs by skipping Time
This option allows the scheduler to artificially advance time (per cpu) when otherwise the idle task would be scheduled, thus keeping the cpu busy and sharing the available resources among certain contexts.
Limit the IDLE task
Limit the idle slices, so that the next context will be scheduled as soon as possible. This might improve interactivity and latency, but will also marginally increase scheduling overhead.
Persistent Inode Tagging
This adds persistent context information to filesystems mounted with the tagxid option. Tagging is a requirement for per-context Disk Limits and Quota.
Tag NFSD User Auth and Files
Enable this if you do want the in-kernel NFS Server to use the xid tagging specified above.
Enable Inode Tag Propagation
This allows for the tagid= mount option to specify a tagid which is to be used for the entire mount tree.
Honor Privacy Aspects of Guests
When enabled, most context checks will disallow access to structures assigned to a specific context, like ptys or loop devices.
Maximum number of Contexts
This makes sure that at least this many contexts can be created, by making sure that this much per-CPU memory is available.
VServer Warnings
Enables warnings (sent to the kernel log during runtime). There's really no good reason to disable this.
VServer Debugging Code
Set this to yes if you want to be able to activate debugging output at runtime. It adds some (probably small) overhead to all vserver-related functions and increases the kernel size by about 20k.
VServer History Tracing
This records a history of Linux-VServer events that can be replayed in the event of a panic or an oops.
Per-CPU History Size
This allows you to set the size of the per-CPU history buffer.
VServer Scheduling Monitor
Set this to yes if you want to record the scheduling decisions, so that they can be relayed to userspace for detailed analysis.
Per-CPU Monitor Queue Size
This allows you to specify the number of entries in the per-CPU scheduling monitor buffer.
Per-CPU Monitor Sync Interval
This allows you to specify the interval in ticks when a time sync entry is inserted.

Compiling and Installing

Now that your kernel is configured, it is time to compile and install it. Exit the configuration and start the compilation process:

# Build the kernel and install the modules
make && make modules_install

(Note: this will copy the resulting modules to your filesystem directly, bypassing any package manager you may have. It may be a better idea to build a kernel package you can install using your package manager. For rpm-based distributions, make rpm might work; dpkg-based distributions provide a package called kernel-package which you can use. We won't be covering these methods here.)

Unless you happen to have a really fast box, now is a good time to get a fresh cup of coffee ;)

When the kernel has finished compiling, you have to copy the kernel image to your /boot partition and configure your boot loader. If you don't know how to do this, please consult your distribution manual or ask Google for help.
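
A minimal sketch of the manual route, assuming a 32-bit x86 build of the kernel used in this guide (adjust paths, file names and boot loader details to your distribution):

# Copy the kernel image and System.map to /boot
# (on 2.6.22 the image is at arch/i386/boot/bzImage for 32-bit x86,
#  arch/x86_64/boot/bzImage for 64-bit)
cp arch/i386/boot/bzImage /boot/vmlinuz-2.6.22.19-vs2.2.0.7
cp System.map /boot/System.map-2.6.22.19-vs2.2.0.7

# Then add a matching entry to your boot loader configuration
# (e.g. GRUB's menu.lst or lilo.conf) and reinstall/update the boot loader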

Manual util-vserver Compilation

The kernel alone does not help you much; you also need some tools to exploit all those new features, so let's get them.

Getting the Sources

You will have to download the latest util-vserver source tarball from our Downloads section. In this guide we will use util-vserver-0.30.215, but note that for recent kernels and especially development versions of the vserver kernel patch, you'll need a much more recent development version.

As a first step, of course, we need to get the sources.

# Go to our source directory
cd ~/src

# Get the sources for util-vserver
wget http://ftp.linux-vserver.org/pub/utils/util-vserver/util-vserver-0.30.215.tar.bz2

# Extract the sources
tar xjf util-vserver-0.30.215.tar.bz2

Compiling and Installing

Now that we have extracted the util-vserver sources, we have to do the usual configure, make, make install chain. While configuring the tools you may get some error messages about missing components, for example dietlibc, vconfig and the e2fs headers. The error messages are accompanied by explanations of what you should do, so read them carefully.

# Switch to the util-vserver source directory
cd util-vserver-0.30.215

# Configure the sources (you may want to adjust settings here, the defaults work, but may not suit your needs)
./configure --prefix=... --sysconfdir=... --localstatedir=...

# Build the tools
make

# Install the tools
make install install-distribution

# It's a good point to fix the /proc entries for the guests
/etc/init.d/vprocunhide restart (this path depends on configuration, see output of 'vserver-info')

Enabling VServers on startup

You need to enable 2 initscripts:

  • vprocunhide - makes the required /proc entries visible to guests (see the Enable Proc Security option above)
  • vservers-default - runs vservers marked as 'default' (echo "default" > /etc/vservers/XXX/apps/init/mark) on startup

To do so, you can use 'update-rc.d' or 'rcconf' (Debian), 'chkconfig' or 'ntsysv' (Fedora).
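
For example (a sketch; the initscripts are assumed to be installed under the names shown above):

# Debian-style
update-rc.d vprocunhide defaults
update-rc.d vservers-default defaults

# Fedora-style
chkconfig vprocunhide on
chkconfig vservers-default on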

If you get errors like:

/proc/uptime can not be accessed. Usually, this is caused by
procfs-security. Please read the FAQ for more details
http://linux-vserver.org/Proc-Security

then you probably need to enable vprocunhide.

Testing your setup

To ensure that your setup works, we have created two small test scripts. The testme.sh script checks basic functionality, whereas the testfs.sh script tests inode attributes on various filesystems.

# get the script
wget http://vserver.13thfloor.at/Stuff/SCRIPT/testme.sh

# make it executable
chmod +x testme.sh

# run the test script
./testme.sh

Be careful! The testfs.sh script might easily reformat your hard disk :)

# get the script
wget http://vserver.13thfloor.at/Stuff/SCRIPT/testfs.sh

# make it executable
chmod +x testfs.sh

# make a loopback file
dd bs=1024k count=1024 if=/dev/zero of=1gb.testfile

# setup the loopback
losetup /dev/loop0 1gb.testfile

# run the test script for legacy mode
./testfs.sh -l -t -D /dev/loop0 -M /mnt

# run the test script for new-style config
./testfs.sh -t -D /dev/loop0 -M /mnt

If the scripts show any error, be sure to read how to report a bug and contact the Linux-VServer Developers for help. See Communicate for details.

Where to go from here

Now that your setup is complete and working as expected, it is time to create your first guest system. Read on at Building Guest Systems.
