CPU and 'vsched'

note: all information is taken from the linux-vserver mailing list or other pages in this wiki.

The token-bucket scheduler principle is pretty well explained here: http://www.linux-vserver.org/index.php?page=Linux-VServer-Paper-06

vscheduling a vserver

vsched takes the following arguments:

  --fill-rate
       The number of tokens added to the bucket at each interval.

  --interval
       How often the above number of tokens is added to the bucket.
       This is in jiffies.

note: The important factor is the ratio:

   fill-rate
   --------- * 100   =   % CPU allocation
    interval

Note that this is the proportion of a *single* CPU in the system. So, if you have four CPUs and you want one context to get an average of one whole CPU to itself, you'd set fill-rate to 1 and interval to 4. For smooth operation of the algorithm it is advantageous to make the interval as small as possible (or at least much smaller than the bucket size). In most cases you can simplify the fraction, e.g. changing --fill-rate=30 and --interval=100 to --fill-rate=3 and --interval=10.

For simple cases, like evenly distributing CPU time between vservers, you probably just want to set the ratio to somewhere between 1/N (where N is the number of vservers) and 1/P (where P is the maximum expected peak load per CPU), and not bother with hard scheduling. Process count ulimits will put an upper bound on possible abuse by a context.
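
As a quick worked example (illustrative numbers only, not from the original page): with five vservers sharing the machine evenly, the 1/N rule gives each context, say, --fill-rate=1 and --interval=5, i.e.:

   1
   -  * 100   =   20% of a single CPU for each context
   5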

  --tokens
       The number of tokens the bucket starts out with. tokens_max takes
       precedence here, so this value cannot be higher than tokens_max.

  --tokens_min
       When a bucket is empty, the context is on hold _until_ at least
       this many tokens are in the bucket.

  --tokens_max
       The size of the bucket. When tokens aren't being consumed, the
       bucket keeps filling up, but only to this value. In effect this
       is your CPU burst parameter.

  --cpu_mask
       This is obsolete, but I've found the current vsched is a little
       picky and will segfault if you omit parameters, so I always
       specify 0 here.
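
Putting the parameters together, a complete invocation against an already-running context might look like this (a sketch only: 42 is a placeholder xid, and whether your vsched build accepts --xid for an existing context should be checked against your util-vserver version):

vsched --xid 42          \
       --fill-rate 1     \
       --interval 4      \
       --tokens 100      \
       --tokens_min 1    \
       --tokens_max 500  \
       --cpu_mask 0

That gives the context 25% of a single CPU (1/4), with up to 500 tokens of burst.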

According to the VServer paper, "At each timer tick, a running process consumes exactly one token from the bucket". Here running means actually needing the CPU as opposed to "running" as in "existing". Most processes are not running most of the time, e.g. an httpd waiting on a socket isn't running, even though ps would list it.

To put it another way, processes can have various states:

  • R (runnable)
  • S (sleeping)
  • T,Z,D ... (see man ps(1))

Processes in the 'R' state can be either scheduled (running on a CPU) or not scheduled (waiting to be run); those which are scheduled consume one token for every tick.

A token is quite a bit of CPU time: ticks (jiffies, for now) are generated at a (usually) constant rate, HZ, which was 100 on 2.4 kernels and is typically 1000 on 2.6, so on a 2.6 kernel you can assume a tick every 1 ms (1000 ticks per second).
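
For example, at HZ=1000 a fill-rate of 30 with an interval of 100 works out to:

   30 tokens per 100 jiffies  =  300 tokens per second
                              =  300 ms of CPU time per second
                              =  30% of a single CPU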

Here are some guidelines. All this is very much unscientific and without a lot of testing or theory behind it, so if someone has better guidelines, please pitch in. [Not sure if this is CPU speed dependent; tests were on a 2.8 GHz Xeon. Typing "python" on the command line (which is a huge operation IMHO) consumes 17 tokens in my tests. Having 100000 tokens in your bucket is probably sufficient for a medium-size compile job.] When trying to come up with a good setting in my environment (basically hosting), I was looking for values that would not cripple the snappiness of the server, but would prevent people from being stupid (e.g. cat /dev/zero | bzip2 | bzip2 | bzip2 > /dev/null).

To achieve this, it is important that contexts that are hogging the CPU are penalised fairly quickly. As the tokens in the bucket deplete, the "nice" value of the context is adjusted - it loses its vavavoom. As this happens, its processes get shorter and shorter timeslices, while other, more deserving processes get longer timeslices and hence more CPU time.

Additionally, bear in mind that individual processes also get a minor nice boost or penalty, depending on whether those processes have been CPU hogs recently or not. This is diminished in vserver kernels compared to standard kernels, but should still have sufficient effect to counter extreme conditions.

The fill interval should be short enough not to be noticeable, so something like 100 jiffies. The fill rate should be relatively small, something like 30 tokens. Tokens_min seems like it should simply equal the fill rate. Tokens_max should be generous so that people can do short CPU-intensive things when they need to, so something like 10000 tokens.

From the experimentation I did, I'd say 10,000 tokens is quite large - 10 seconds of real CPU time. Compare this with the default value of 500. If you've given a context 30% of the CPU as described above, then that actually means about 10-15 wall clock seconds of CPU hogging before the context gets appreciably penalised. For the algorithm to work best, I think you would want to reduce this to about 1-2 seconds' worth of jiffies. You are right in saying that tokens_max is the "burst" CPU rate, so setting it to a large value like 10000, while setting the interval to a large value like 100, would indicate that you are optimising your system for batch scheduling (long time slices, higher overall throughput), not interactive use (short time slices, reduced throughput). My guess is that min_tokens (not in my original implementation) is a batch optimisation as well, but perhaps small values (~10) are useful to avoid excessive context switching.

But then, I didn't really experiment with the hard scheduling side of things, so maybe if you are hard scheduling it is more important to make sure that the buckets don't normally run out.

Of course just because I wrote the original algorithm does not by any means lend much extra weight to my opinion on how to use it, and I invite others to respond with their experience.
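
Putting the guidelines above together, an interactive-leaning configuration might look something like this (a sketch only: 42 is a placeholder xid, the --xid form assumes your util-vserver's vsched supports it, and at HZ=1000 a tokens_max of 1500 is roughly 1.5 seconds of burst):

vsched --xid 42           \
       --fill-rate 3      \
       --interval 10      \
       --tokens 100       \
       --tokens_min 3     \
       --tokens_max 1500  \
       --cpu_mask 0

That allocates 30% of a single CPU with a short interval for smoothness, keeps tokens_min equal to the fill rate, and caps bursting at about 1.5 seconds so a hog gets penalised fairly quickly.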

You can see current token stats by looking at /proc/virtual/<xid>/sched on the mother server.
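
For example (with 42 standing in for a real xid; the exact fields in that file differ between kernel patch versions):

# watch context 42's token bucket drain and refill, refreshing every second
watch -n 1 cat /proc/virtual/42/sched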

vscheduling an application inside a vserver

You can also use vsched to pace any CPU-intensive command, e.g.:

vcontext --create --      \
 vsched --fill-rate 30    \
        --interval 100    \
        --tokens 100      \
        --tokens_min 30   \
        --tokens_max 200  \
        --cpu_mask 0 -- /bin/my_cpu_hog

However, this depends on the hard scheduler actually being enabled. If either the minimum has not been reached yet or the context is paused (a special flag), the process enters the new 'H' (on hold) state, which doesn't allow it to do anything until the minimum fill (tokens_min) has been reached again.

While playing with this stuff I've run into situations where a context has no tokens left, at which point you cannot even kill the processes in it. Don't panic - you can always reenter the context and call vsched with new parameters.
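
For instance, something along these lines should give a starved context enough tokens to respond again (a sketch; 42 is a placeholder xid, and check that your util-vserver's vsched accepts --xid for an existing context):

# refill context 42 generously so its processes (and your kill) can run again
vsched --xid 42 --fill-rate 100 --interval 100 \
       --tokens 500 --tokens_min 10 --tokens_max 1000 --cpu_mask 0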

So the pacing example should really be:

vcontext --create --       \
  vsched --fill-rate 30    \
         --interval 100    \
         --tokens 100      \
         --tokens_min 30   \
         --tokens_max 200  \
         --cpu_mask 0 --   \
    vattribute --flag sched_hard -- /bin/my_cpu_hog

A load of 30 is not a real problem (in terms of CPU, anyway) if those processes have such a low priority that everything else on the system effectively runs in real time. What you are seeing is probably just the context not getting enough penalisation by the time the load hits 30, or some secondary effect like disk load or memory exhaustion. Try it with a smaller bucket size.

When a context goes on hold with runnable processes, those processes might not contribute to the visible load factor, but they could be said to still be runnable. So all you're doing is hiding the problem and underutilising your CPUs.

Having said that, because the CPU scheduler tries to avoid process starvation (where a process gets no CPU at all), a context with a *lot* of processes will "exploit" the anti-starvation code into getting more CPU than allocated, without hard scheduling. To give you an indication, the threshold is around the number of minimum timeslices (MIN_TIMESLICE, 5 ms) that fit into the starvation limit (MAX_SLEEP_AVG, 2.5 s) - i.e. a per-second CPU load of around 500.
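
Spelled out with the default values quoted above, that estimate is simply:

   MAX_SLEEP_AVG     2500 ms
   -------------  =  -------  =  500 minimum timeslices
   MIN_TIMESLICE        5 ms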

Turning on the hard scheduling means letting processes starve. If you're happy to let processes starve then you can make the scheduler perform better in other ways - that is a classic trade-off in CPU scheduling.


Heh. I don't know if this is current behaviour or not, but I think the signals should really queue and the context will close as soon as the processes wake up and receive enough cycles to process them and exit. Sending -KILL signals would clean it up pretty quickly (as soon as enough tokens are allocated for the processes to run), as chances are they won't consume any tokens to receive a KILL signal. Though, it would be nice if they didn't need tokens allocated to be stopped via KILL.

