Installation on Linux 2.6
This guide will explain how to install a Linux-VServer kernel and util-vserver manually from source. It is assumed that you have basic knowledge of building a custom kernel, i.e. that you know which options to enable in the kernel configuration. The Linux-VServer specific options are, of course, explained here.
Manual Kernel Compilation
You might ask yourself: why should I build a custom kernel? Manually configuring a kernel is often seen as the most difficult procedure a Linux user ever has to perform. Nothing could be further from the truth -- after configuring a couple of kernels you don't even remember that it was difficult ;)
However, one thing is true: you must know your system when you start configuring a kernel by hand. That said, there are good reasons to build your kernel manually:
- Your distribution does not have a prebuilt Linux-VServer kernel
- Your distribution does not have the latest and greatest
- You don't want to install bloated prebuilt kernels
- You want a monolithic kernel and your distribution uses modules
- You can tell everyone that you built your kernels manually ;)
If you still intend to build your own kernel, read on. Otherwise have a look at our Documentation section for how to install a prebuilt Linux-VServer kernel for your distribution.
Getting the Sources
You'll need the vanilla kernel sources (i.e. those from kernel.org) and (of course) a Linux-VServer patch for the kernel version you intend to use. You can find links to both files in our Downloads section.
In this document we will use Linux 2.6.22.19 with Linux-VServer 2.2.0.7.
First, create a directory for the sources. If you already have one, feel free to skip this step and/or adjust the paths to your needs.
# Create a directory for our sources
mkdir ~/src
# Switch to that directory
cd ~/src
Now that we have a place to store our sources, we need to fetch them. We start with the vanilla sources.
# Get Linux 2.6.22.19 sources
wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.19.tar.bz2
# Extract them
tar xjf linux-2.6.22.19.tar.bz2
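Optionally, you can verify the download against the detached GPG signature that kernel.org publishes next to each tarball (a small sketch, not part of the original steps; it assumes the kernel.org signing key is already in your GPG keyring):
# Fetch the detached signature and verify the tarball against it
wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.22.19.tar.bz2.sign
gpg --verify linux-2.6.22.19.tar.bz2.sign linux-2.6.22.19.tar.bz2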
Now it is time to get the Linux-VServer patch and apply it to the sources. While we're at it, I'll show you a nice trick I learned from Bertl that allows you to keep a lot of source trees on your disk without using up much disk space (it also speeds up 'diff' a lot, which is really nice if you do kernel hacking). What we do is create a hard-linked copy of our sources and patch that copy with the Linux-VServer patch. That way, only the patched files use additional disk space (and because hard-linked files are identical by definition, diff doesn't need to compare them).
# Get the Linux-VServer 2.2.0.7 patch
wget http://ftp.linux-vserver.org/pub/kernel/vs2.2/patch-2.6.22.19-vs2.2.0.7.diff
# Create a hard-linked copy of the vanilla sources, this will get the Linux-VServer patch applied
cp -la linux-2.6.22.19 linux-2.6.22.19-vs2.2.0.7
# Switch to that new directory
cd linux-2.6.22.19-vs2.2.0.7
# Patch the sources
cat ../patch-2.6.22.19-vs2.2.0.7.diff | patch -p1
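If the patch did not apply cleanly, patch leaves *.rej files behind; a quick way to check for them (a small addition, not part of the original steps):
# A clean patch run leaves no reject files behind
find . -name '*.rej'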
Now you have two sources, the vanilla sources for 2.6.22.19 and the Linux-VServer sources for 2.6.22.19-vs2.2.0.7. You might ask "Why do I need two source trees at all? I only want one kernel!" and that's a good question.
Here's one answer: updates! If a new vanilla kernel is released, you can just download the patch that takes you from your version to the new one. Had you applied the Linux-VServer patch to your one and only vanilla source tree, this would not be possible. The same applies to new Linux-VServer releases: if a new Linux-VServer patch is available, you simply create another hard-linked copy of your vanilla sources and apply the new patch to that copy. This can really save you time (and bandwidth), since you can keep everything you might need without wasting a lot of disk space.
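To illustrate the update path (a sketch only; the 2.6.22.20 patch name is hypothetical, and it assumes the kernel.org stable patches apply against the plain 2.6.22 base tree, with patch files shown uncompressed for brevity):
# Move the vanilla tree from 2.6.22.19 to a hypothetical 2.6.22.20:
# reverse the old stable patch, then apply the new one
cd ~/src/linux-2.6.22.19
patch -p1 -R < ../patch-2.6.22.19
patch -p1 < ../patch-2.6.22.20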
But be aware that this requires some discipline when hacking the sources. Because hard-linked files share the same data on disk, you need to make sure that your editor does The Right Thing (i.e. writes a new file instead of modifying the shared one in place); otherwise you might mess up all your source trees...
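One way to stay on the safe side is to check a file's link count before editing it, and to break the link by hand when in doubt (a small sketch; the file name is just an example):
# Check the hard-link count of a file before editing it (a count > 1 means it is shared)
stat -c '%h %n' kernel/sched.c
# Break the link by hand: the copy gets its own inode, the other trees keep the old data
cp kernel/sched.c kernel/sched.c.new && mv kernel/sched.c.new kernel/sched.c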
Configuring the Kernel
Under Ubuntu (tested on 8.04 Hardy, x86_64), the configuration file of the running kernel can be found in the /boot directory with a name like config-`uname -r` (the flavour suffix, e.g. -generic, is part of the uname -r output). Copied into the kernel source directory under the name .config, it serves as a starting point for configuring the rest of the kernel; a sketch of this is shown below. Now go to your kernel source directory and execute make menuconfig. This will fire up an ncurses-based configuration menu. (Of course you can use whatever configuration method you like; there is a text-based one (make config), a GTK-based one (make gconfig), and even a Qt-based one (make xconfig).)
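For example, seeding the configuration from the distribution kernel could look like this (a sketch; the config file name and the source directory are the ones used above, adjust them to your system):
# Copy the distribution config into the source tree as .config
cp /boot/config-$(uname -r) ~/src/linux-2.6.22.19-vs2.2.0.7/.config
cd ~/src/linux-2.6.22.19-vs2.2.0.7
# Answer the questions for options that are new in this kernel/patch
make oldconfig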
# Configure the kernel using an ncurses-based menu
make menuconfig
It is beyond the scope of this guide to explain all the available configuration options. If you feel unsure about certain options, either leave them at their default values or consult your distribution manuals for help.
Nevertheless, we will explain the Linux-VServer configuration options, of course. Depending on your version your configuration options may look similar to the following:
Linux VServer  --->
  [*] Enable Legacy Kernel API (<2.3)
  [ ]   Show a Legacy Version ID
  [*] Enable dynamic context IDs (2.1 - 2.2)
  [ ] Disable Legacy Networking Kernel API (2.0.x only)
  [*] Enable Legacy Networking Kernel API (2.1 - 2.2)
  [*] Automatically Assign Loopback IP (2.3+)
  [*] Automatic Single IP Special Casing (2.3+)
  [ ] Remap Source IP Address (<2.3)
  [*] Enable COW Immutable Link Breaking (2.1+)
  [ ] Enable Virtualized Guest Time (2.1+)
  [ ] Enable Guest Device Mapping (2.1, 2.3)
  [*] Enable Proc Security
  [ ] Enable Hard CPU Limits
  [ ]   Avoid idle CPUs by skipping Time (2.1+)
  [ ]   Limit the IDLE task
      Persistent Inode Tagging (UID24/GID24)  --->
  [ ] Tag NFSD User Auth and Files
  [ ] Enable Inode Tag Propagation (2.1+)
  [ ] Honor Privacy Aspects of Guests (2.1+)
  (256) Maximum number of Contexts (1-65533) (2.2+)
  [*] VServer Warnings (2.2+)
  [ ] VServer Debugging Code
  [ ]   VServer History Tracing
  (64)    Per-CPU History Size (32-65536)
  [ ] VServer Scheduling Monitor (2.1+)
  (1024)  Per-CPU Monitor Queue Size (32-65536) (2.1+)
  (256)   Per-CPU Monitor Sync Interval (0-65536) (2.1+)
- Enable Legacy Kernel API
- This enables the legacy API used in vs1.xx, maintaining compatibility with older vserver tools, and guest images that are configured using the legacy method.
- Show a Legacy Version ID
- This shows a special legacy version to very old tools which do not handle the current version correctly. This will probably disable some features of newer tools so better avoid it, unless you really, really need it for backwards compatibility.
- Enable dynamic context IDs
- This enables support for in-kernel dynamic context IDs which are deprecated and soon to be removed.
- Enable/Disable Legacy Networking Kernel API
- This enables/disables the legacy networking API which is required by the chbind tool in util-vserver <= 0.30.209. Do not disable it unless you know exactly what you are doing.
- Automatically Assign Loopback IP
- Enable this to get a unique 127.x.y.1 address for each network context automatically, and enable the NXF_LBACK_REMAP and NXF_HIDE_LBACK flags. This creates a per-guest, isolated 127.0.0.1 address.
- Automatic Single IP Special Casing
- Enabling this option will make the kernel automatically set NXF_SINGLE_IP for contexts which have only one IP address (note: an lback address does not count).
- Remap Source IP Address
- This allows the source IP address of 'local' connections to be remapped from 127.0.0.1 to the first assigned guest IP.
- Enable COW Immutable Link Breaking
- This enables the COW (Copy-On-Write) link break code. It allows you to treat unified files like normal files when writing to them (which will implicitly break the link and create a copy of the unified file).
- Enable Virtualized Guest Time
- This enables per guest time offsets to allow for adjusting the system clock individually per guest. This adds some overhead to the time functions and therefore should not be enabled without good reason.
- Enable Guest Device Mapping
- This enables a generic remapping/access control interface for device nodes used inside the guest.
- Enable Proc Security
- This configures ProcFS security to initially hide non-process entries for all contexts except the main and spectator context (i.e. for all guests), which is a secure default.
- Enable Hard CPU Limits
- This will compile in code that allows the Token Bucket Scheduler to put processes on hold when a context's tokens are depleted (provided that its per-context sched_hard flag is set).
- Avoid idle CPUs by skipping Time
- This option allows the scheduler to artificially advance time (per cpu) when otherwise the idle task would be scheduled, thus keeping the cpu busy and sharing the available resources among certain contexts.
- Limit the IDLE task
- Limit the idle slices, so that the next context will be scheduled as soon as possible. This might improve interactivity and latency, but will also marginally increase scheduling overhead.
- Persistent Inode Tagging
- This adds persistent context information to filesystems mounted with the tagxid option. Tagging is a requirement for per-context Disk Limits and Quota.
- Tag NFSD User Auth and Files
- Enable this if you do want the in-kernel NFS Server to use the xid tagging specified above.
- Enable Inode Tag Propagation
- This allows the tagid= mount option to specify a tag ID to be used for the entire mount tree.
- Honor Privacy Aspects of Guests
- When enabled, most context checks will disallow access to structures assigned to a specific context, like ptys or loop devices.
- Maximum number of Contexts
- This makes sure that at least this many contexts can be created, by ensuring that sufficient per-CPU memory is available.
- VServer Warnings
- Enables warnings. There's not really a good reason to disable it.
- VServer Debugging Code
- Set this to yes if you want to be able to activate debugging output at runtime. It adds a probably small overhead to all vserver related functions and increases the kernel size by about 20k.
- VServer History Tracing
- This records a history of Linux-VServer events that can be replayed in the event of a panic or an oops.
- Per-CPU History Size
- This allows you to set the size of the per-CPU history buffer.
- VServer Scheduling Monitor
- Set this to yes if you want to record the scheduling decisions, so that they can be relayed to userspace for detailed analysis.
- Per-CPU Monitor Queue Size
- This allows you to specify the number of entries in the per-CPU scheduling monitor buffer.
- Per-CPU Monitor Sync Interval
- This allows you to specify the interval in ticks when a time sync entry is inserted.
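Once you have saved the configuration, you can quickly double-check which of the options described above ended up enabled (a small sketch; the path matches the source directory used earlier, and most of the relevant symbols start with CONFIG_VSERVER):
# Inspect the VServer-related options in the generated .config before compiling
grep 'CONFIG_VSERVER' ~/src/linux-2.6.22.19-vs2.2.0.7/.config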
Compiling and Installing
Now that your kernel is configured, it is time to compile and install it. Exit the configuration and start the compilation process:
# Compile the kernel and its modules, then install the modules
make && make modules_install
If you don't happen to have a really fast box, it is a good time to get a new cup of coffee now ;)
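If your box has more than one CPU, you can shorten the wait by running the build in parallel (a sketch; the -j value is illustrative and should roughly match your CPU count):
# Build with two parallel jobs, then install the modules
make -j2 && make modules_install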
When the kernel has finished compiling, you have to copy the kernel image to your /boot partition and configure your boot loader. If you don't know how to do this, please consult your distribution manual or ask Google for help.
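For reference, a typical manual install on an x86_64 box might look like this (a sketch only; the image path depends on your architecture, and the boot loader entry still has to be added by hand):
# Copy the kernel image and supporting files to /boot
# (on 2.6.22/x86_64 the image lives under arch/x86_64/boot; adjust for your architecture)
cp arch/x86_64/boot/bzImage /boot/vmlinuz-2.6.22.19-vs2.2.0.7
cp System.map /boot/System.map-2.6.22.19-vs2.2.0.7
cp .config /boot/config-2.6.22.19-vs2.2.0.7
# Then add an entry for the new kernel to your boot loader (e.g. GRUB or LILO) configuration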
Manual util-vserver Compilation
The kernel alone does not help you much; you also need some tools to make use of all those new features, so let's get them.
Getting the Sources
You will have to download the latest util-vserver source tarball from our Downloads section. In this guide we will use util-vserver-0.30.215.
As a first step, of course, we need to get the sources.
# Go to our source directory
cd ~/src
# Get the sources for util-vserver
wget http://ftp.linux-vserver.org/pub/utils/util-vserver/util-vserver-0.30.215.tar.bz2
# Extract the sources
tar xjf util-vserver-0.30.215.tar.bz2
Compiling and Installing
Now that we have extracted the util-vserver sources, we have to do the usual configure, make, make install chain. While configuring the tools you may get some error messages about missing components, for example dietlibc, vconfig and the e2fs headers. The error messages are accompanied by explanations of what you should do, so read them carefully.
# Switch to the util-vserver source directory
cd util-vserver-0.30.215
# Configure the sources (you may want to adjust settings here, the defaults work, but may not suit your needs)
./configure --prefix=
# Build the tools
make
# Install the tools
make install install-distribution
# It's a good point to fix the /proc entries for the guests
# (this path depends on configuration, see output of 'vserver-info')
/etc/init.d/vprocunhide restart
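After the install you can get a quick overview of the paths and settings the tools were built with (the exact output format varies between util-vserver versions):
# Show compiled-in paths, the detected kernel API version, and other build settings
vserver-info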
Testing your setup
To ensure that your setup works, we have created two small test scripts. The testme.sh script checks basic functionality, whereas the testfs.sh script tests inode attributes on various filesystems.
# get the script
wget http://vserver.13thfloor.at/Stuff/SCRIPT/testme.sh
# make it executable
chmod +x testme.sh
# run the test script
./testme.sh
Be careful! The testfs.sh script might easily reformat your hard disk :)
# get the script
wget http://vserver.13thfloor.at/Stuff/SCRIPT/testfs.sh
# make it executable
chmod +x testfs.sh
# make a loopback file
dd bs=1024k count=1024 if=/dev/zero of=1gb.testfile
# setup the loopback
losetup /dev/loop0 1gb.testfile
# run the test script for legacy mode
./testfs.sh -l -t -D /dev/loop0 -M /mnt
# run the test script for new-style config
./testfs.sh -t -D /dev/loop0 -M /mnt
If the scripts show any error, be sure to read how to report a bug and contact the Linux-VServer Developers for help. See Communicate for details.
Where to go from here
Now that your setup is complete and working as expected, it is time to create your first guest system. Read on at Building Guest Systems.