Installation Considerations
This guide will give you an idea of the prerequisites and installation considerations for the host system. It is targeted at production systems, so some of the information provided here may or may not be important to your setup; decide what applies to your own needs.
Hardware Compatibility
The Linux-VServer kernel runs on many platforms, including those listed below:
- alpha
- arm
- ia64
- m68k
- mips
- ppc
- ppc64
- s390
- sparc
- sparc64
- x86
- x86_64
See Tested Configurations for details.
Hardware Availability
The availability of the host system is more critical than that of a typical server. Since it runs multiple Virtual Private Servers, each providing a number of critical services, an outage of the host system may be very costly; it can be as disastrous as the simultaneous outage of a number of servers running critical services.
Discussing all aspects of high availability is out of the scope of this document, but the following guidelines will help keep the host system clean and secure:
- Use RAID storage for guest filesystems.
- Do not run application software on the host system. Instead, create guest systems to host the necessary services. The only service needed on the host system is probably sshd (a quick way to check this is shown after the list).
- Do not create users on the host system. You can create as many users as you need in any guest system.
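One simple way to verify that the host stays this bare is to look at what is actually listening for connections on it. This is just a generic spot check with standard tools, not part of the Linux-VServer tool set:

  # On the host: list all TCP sockets in LISTEN state, together with the owning process.
  # Ideally only sshd (and perhaps a purely local service or two) should show up here.
  ss -tlnp

  # Roughly the same information on older systems without iproute2's ss:
  netstat -tlnp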
Hardware Requirements
The exact hardware configuration depends on how many Virtual Private Servers you are going to run on the machine and what load these VPSs are going to produce. To choose the right configuration, follow the recommendations below:
- CPUs: The more Virtual Private Servers you plan to run simultaneously, the more CPUs you need.
- Memory: The more memory you have, the more Virtual Private Servers you can run. The exact figure depends on the number and nature of the applications you are planning to run in your Virtual Private Servers. However, on average, at least 1 GB of RAM is recommended for every 20-30 Virtual Private Servers.
- Disk space: Each Virtual Private Server occupies 10–500 MB of hard disk space for system files (depending on the use of Unification), in addition to the user data inside the Virtual Private Server (for example, web site content). Take this into account when planning disk partitioning and the number of Virtual Private Servers to run; a rough sizing sketch follows this list.
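To illustrate how these rules of thumb combine, here is a small sizing sketch. The guest count is made up, and the figures are only the guideline values above, not measurements:

  # Rough sizing for a host meant to run 40 guests, using only the
  # rule-of-thumb figures from this section (not measured values).
  guests=40
  ram_gb=$(( (guests + 19) / 20 ))   # at least 1 GB per 20-30 guests -> 2 GB here
  disk_mb=$(( guests * 500 ))        # worst case ~500 MB of system files per guest
  echo "RAM:  at least ${ram_gb} GB, plus whatever the applications themselves need"
  echo "Disk: about ${disk_mb} MB for system files, before any user data"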
Choose Your Distribution
There are many different Linux distributions, or versions. A distribution is the compiled Linux source code, usually combined with extra features and software. Some distributions are available for download at no charge while others are available at affordable prices on CD-ROM from Linux retailers worldwide.
Each distribution has its own purpose, and a number of factors should go into deciding which distribution is best for each user. Some distributions are better suited to home users, others are excellent for commercial settings. Some are better suited for Intel or Macintosh PCs, others are excellent for use on high-performance computers.
Any current Linux distribution most likely contains the software needed to do the job, including kernel and drivers, libraries, utilities and applications programs. Still, one of the most common questions people ask is "which distribution should I get?" This question is often answered by an assortment of people, each proclaiming their favorite distribution is better than all the rest.
Probably most people favor the first distribution they successfully installed. Or, if they had problems with the first, they favor the next distribution they install which addresses the problems of the first, and so on.
It is not in the scope of this document to discuss the features of all the distributions out there, but nearly all of them should work with Linux-VServer, so it is up to you to decide which distribution fits your requirements. For an overview of available distributions, check out DistroWatch.
Choose Your Kernel Version
Versioning explained
The Linux-VServer project maintains several branches of the kernel patch. Since version 1.00 the versioning is similar to the kernel versioning scheme. Even numbered releases (a.X.z with even X) are stable, reasonably well tested and expected not to change feature-wise. Odd numbered (a.Y.z with odd Y) releases are development releases. The last digit/number (z) is a subversion identifier. Experimental versions and Release Candidates might add a fourth identifier to that scheme.
Basically the stable and development releases should be similar in functionality, but the development releases will include features and enhancements not present in the stable branch. Once those features mature (and get well tested), they will be incorporated into the stable branch. Check out the Feature Matrix for a comparison.
For example, the first stable release (1.00) uses two system calls, as the previous releases did. However, the vserver system calls were changed in the first development release (1.1.0): Linus assigned the vserver project a single system call, so a System Call Switch has been implemented. Running a development release usually requires using the most recent tools from the util-vserver development branch.
1.X.z and 1.Y.z releases are for the 2.4 kernel series, while 1.9.x (now obsolete) and 2.X.y releases are for the 2.6 series.
All downloads are available in the Downloads section. Also take a look at the ChangeLogs.
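Building a patched kernel typically looks roughly like the sketch below. The file names are placeholders; the exact kernel and patch versions depend on what you pick from the Downloads section:

  # Unpack a matching vanilla kernel and apply the Linux-VServer patch
  # (file names are placeholders - substitute the versions you actually downloaded).
  tar xjf linux-2.6.22.tar.bz2
  cd linux-2.6.22
  patch -p1 < ../patch-2.6.22-vs2.2.0.diff

  # Configure, build and install as usual for your distribution; enable the
  # Linux-VServer options during configuration (the menu entries vary per version).
  make menuconfig
  make && make modules_install && make install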
Disk Partitioning
Since each guest is a separate root filesystem somewhere in the host system's filesystem hierarchy, it is advisable to create a partitioning scheme that fits your needs. Discussing all possible disk configurations is beyond the scope of this document, but you should take the following guidelines into account while setting up disk space for your guest systems:
- Generally, one big partition for all guest systems should suffice.
- If you don't want to use Disk Limits, you can use one partition per guest system to limit its available disk space. However, this will (in most cases) prevent an easy enlargement of disk space later on.
- If you want to use Quota inside your guest system, you have to use separate partitions per guest system. This will probably change in the future.
- You should plan for hard disk failure, e.g. use RAID systems, make regular backups, etc.
- For more flexible space management, consider using a volume management solution such as LVM, LVM2 or EVMS (see the sketch after this list).
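As an illustration of the volume management approach, a minimal LVM sketch might look like the following. The device name, volume sizes and the mount points under /vservers are assumptions; adjust them to your own setup:

  # Turn a spare partition into an LVM volume group and carve out space for guests.
  pvcreate /dev/sdb1
  vgcreate vg_vservers /dev/sdb1

  # Either one big logical volume for all guests ...
  lvcreate -L 100G -n guests vg_vservers
  mkfs.ext3 /dev/vg_vservers/guests
  mount /dev/vg_vservers/guests /vservers

  # ... or one logical volume per guest, e.g. if you need per-guest quota:
  lvcreate -L 10G -n guest1 vg_vservers
  mkfs.ext3 /dev/vg_vservers/guest1
  mount /dev/vg_vservers/guest1 /vservers/guest1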
Networking
As the host and guest systems share the same physical network connection, it is advisable to have as few network daemons as possible using the same port on both the host and a guest system. If, as with ssh, it is unavoidable for both daemons to share a port, you have to make sure the daemon on the host only listens on the host's IP address. The guest system is isolated by default.
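For the ssh example, restricting the host's sshd to the host address is a matter of setting ListenAddress in sshd_config. The address below is just a placeholder for the host's real IP:

  # /etc/ssh/sshd_config on the host (excerpt)
  # Listen only on the host's own address so guests can bind their sshd to their IPs.
  ListenAddress 192.0.2.1

  # Reload or restart sshd afterwards, e.g.:
  # /etc/init.d/ssh restart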
Final Notes
If you think the information provided in this document is too vague, feel free to contact the Linux-VServer community and ask for help on your specific setup.