On Mon, Mar 19, 2007 at 11:40:01PM +0100, Herbert Poetzl wrote:
> On Mon, Mar 19, 2007 at 01:52:42PM -0700, Albert Mak (almak) wrote:
> > Hi Herbert,
> > 
> > I repeated the same experiment with sched_hard. The result is the
> > same: vserver is not able to enforce the CPU limit. I am under the
> > impression that sched_prio will also make use of the priority scheme
> > to limit CPU utilization per Vserver context ...
> 
> sounds really strange, as it is working fine here ...
> (with linux-2.6.19.7-vs2.2.0-rc19)
> 
> here is a short example of how you can test it, eliminating
> all possible sources of error:
> 
>  - get and compile the vcmd tool [1] and the cpuhog [2]
>  - do the following incantations:
> 
>    vcmd -i 100 -BC ctx_create .flagword=^34^33^32^8 -- cpuhog
> 
>  - check the results with 'vtop' which should show something
>    like this:
> 
>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>     29 root      25   0  1312  252  200 H 24.5  0.5   0:33.81 cpuhog
>     30 root      16   0  1808  900  728 R  1.5  1.6   0:14.84 top
> 
> by default, the CPU limit will be roughly 25% without
> doing any adjustments to the token buckets ...
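> 
> btw, you do not strictly need the cpuhog from [2]; any plain busy
> loop will do as a stand-in (just a sketch, the actual cpuhog.c may
> differ), e.g. in C:
> 
>    /* burn CPU forever so the token bucket has something to limit */
>    int main(void)
>    {
>        volatile unsigned long n = 0;
> 
>        for (;;)
>            n++;
>        return 0;
>    }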
> 
> also note that a working token bucket looks like this:
> 
>  FillRate:           1,1
>  Interval:           4,8
>  TokensMin:          6
>  TokensMax:         50
>  PrioBias:           0
>  cpu 0: 5296 11 17101 5288 0 R- 6 6 50 1/4 1/8 0 0
>                 ~~~~~~ hold ticks
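> 
> (the ~25% above simply follows from those bucket parameters: with a
>  fill rate of 1 token per interval of 4 ticks, the context can run
>  for roughly 1/4 of the time, i.e. about 25% of one cpu)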
> 
> I will check that with your ancient kernel and patch 
> version shortly ...

tested now with 2.6.14.3-vs2.0.1 ...

works fine here as expected:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
   22 root      25   0  1304  252  200 R 24.1  0.4   0:09.59 cpuhog             
   23 root      17   0  1800  896  728 R  2.5  1.5   0:02.72 top                

best,
Herbert

> HTH,
> Herbert
> 
> [1] http://vserver.13thfloor.at/Experimental/TOOLS/vcmd-0.08.tar.bz2
> [2] http://vserver.13thfloor.at/Experimental/TOOLS/cpuhog.c
> 
> > Thanks for your help.
> > -Albert
> > 
> > -bash-2.05b# cat /proc/virtual/2/status
> > UseCnt: 9
> > Tasks:  3
> > Flags:  0000000202020110
> > BCaps:  00000000354c24ff
> > CCaps:  0000000000000101
> > Ticks:  0
> > 
> > -bash-2.05b# cat /proc/virtual/3/status
> > UseCnt: 9
> > Tasks:  3
> > Flags:  0000000202020110
> > BCaps:  00000000354c24ff
> > CCaps:  0000000000000101
> > Ticks:  0
> > 
> > 
> > top - 14:02:25 up  2:34,  3 users,  load average: 1.91, 0.88, 0.34
> > Tasks: 132 total,   3 running, 129 sleeping,   0 stopped,   0 zombie
> > Cpu(s): 100.0% us,  0.0% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
> > Mem:    513084k total,   118572k used,   394512k free,    16704k buffers
> > Swap:        0k total,        0k used,        0k free,    46648k cached
> > 
> >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > 32600 root      25   0  1336  232  184 R 49.8  0.0   1:38.31 exceed_cpu_limi
> > 32697 root      25   0  1336  232  184 R 49.8  0.0   1:14.18 exceed_cpu_limi
> > 
> > 
> > -bash-2.05b# cat /proc/virtual/2/sched
> > Token:               140
> > FillRate:             80
> > Interval:            100
> > TokensMin:            50
> > TokensMax:           140
> > PrioBias:              0
> > VaVaVoom:              0
> > cpu 0: 127657 47 0
> > 
> > -bash-2.05b# cat /proc/virtual/3/sched
> > Token:               140
> > FillRate:             10
> > Interval:            100
> > TokensMin:            50
> > TokensMax:           140
> > PrioBias:              0
> > VaVaVoom:              0
> > cpu 0: 113825 45 0
> > 
> > 
> > 
> > -----Original Message-----
> > From: Herbert Poetzl [mailto:[EMAIL PROTECTED]
> > Sent: Sun 3/18/2007 7:45 AM
> > To: Albert Mak (almak)
> > Cc: vserver@list.linux-vserver.org
> > Subject: Re: [Vserver] Vserver CPU limit question
> >  
> > On Sat, Mar 17, 2007 at 10:17:47PM -0700, Albert Mak (almak) wrote:
> > > Hi Herbert
> > > 
> > > Here is the output of /proc/virtual/2/status as requested. Both
> > > contexts 2 and 3 have the same settings.
> > > 
> > > -bash-2.05b# cat /proc/virtual/2/status 
> > > UseCnt: 7
> > > Tasks:  2
> > > Flags:  0000000202020210
> >             ~~~~~~~~~~
> > http://linux-vserver.org/Capabilities_and_Flags
> > 
> >       0000000000000100 sched_hard
> >       0000000000000200 sched_prio
> > 
> > so you haven't enabled sched_hard here, which explains
> > why you do not see hard scheduling behaviour :)
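> > 
> > if you want to double check a flag word quickly, a tiny helper like
> > this will do (just a sketch, not part of the vserver tools):
> > 
> >    #include <stdio.h>
> >    #include <stdlib.h>
> > 
> >    /* decode the two scheduler bits from the Flags value found
> >       in /proc/virtual/<xid>/status, passed as a hex argument */
> >    int main(int argc, char *argv[])
> >    {
> >        unsigned long long f;
> > 
> >        if (argc < 2)
> >            return 1;
> >        f = strtoull(argv[1], NULL, 16);
> >        printf("sched_hard: %s\n", (f & 0x100) ? "set" : "unset");
> >        printf("sched_prio: %s\n", (f & 0x200) ? "set" : "unset");
> >        return 0;
> >    }
> > 
> > which for your 0000000202020210 reports sched_prio set but
> > sched_hard unset ...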
> > 
> > HTC,
> > Herbert
> > 
> > > BCaps:  00000000354c24ff
> > > CCaps:  0000000000000101
> > > Ticks:  0
> > > 
> > > Thanks.
> > > 
> > > -Albert
> > > -----Original Message-----
> > > From: Herbert Poetzl [mailto:[EMAIL PROTECTED] 
> > > Sent: Saturday, March 17, 2007 11:36 AM
> > > To: Albert Mak (almak)
> > > Cc: vserver@list.linux-vserver.org
> > > Subject: Re: [Vserver] Vserver CPU limit question
> > > 
> > > On Fri, Mar 16, 2007 at 06:54:26PM -0700, Albert Mak (almak) wrote:
> > > > Hi,
> > > > 
> > > > I have Linux (2.6.14.3 kernel) with Vserver 2.0.1 and am testing the
> > > > CPU limit capabilities. I have 2 vserver contexts, both running a
> > > > CPU-intensive app capable of using up 100% CPU. I am setting up
> > > > vserver to limit one context to 10% CPU and the second to 80% CPU,
> > > > both using the sched_prio flag.
> > > > I am seeing CPU usage split 50/50 between the 2 contexts. I repeated
> > > > the same test using sched_hard with the same result (kernel
> > > > VSERVER_HARDCPU config set to y). I am expecting to see CPU usage
> > > > at least close to the Vserver limits.
> > > > 
> > > > Have I got the wrong settings, or is there some other issue? Your
> > > > help is really appreciated.
> > > > 
> > > > -Albert
> > > > 
> > > > top - 18:37:04 up 26 min,  1 user,  load average: 2.04, 1.40, 0.62
> > > > Tasks: 127 total,   3 running, 124 sleeping,   0 stopped,   0 zombie
> > > > Cpu(s): 98.7% us,  1.3% sy,  0.0% ni,  0.0% id,  0.0% wa,  0.0% hi,  0.0% si
> > > > Mem:    513084k total,   115660k used,   397424k free,    10200k buffers
> > > > Swap:        0k total,        0k used,        0k free,    39332k cached
> > > > 
> > > >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > > >  6616 root      20   0  1332  228  184 R 49.8  0.0   2:23.12 exceed_cpu_limi
> > > >  6513 root      20   0  1336  232  184 R 48.1  0.0   2:43.79 exceed_cpu_limi
> > > > 
> > > > -bash-2.05b# vps
> > > >   PID CONTEXT             TTY          TIME CMD
> > > >  3672     0 MAIN          pts/0    00:00:00 bash
> > > >  6513     2 APP1          pts/0    00:03:01 exceed_cpu_limi
> > > >  6616     3 APP2          pts/0    00:02:40 exceed_cpu_limi
> > > >  7655     1 ALL_PROC      pts/0    00:00:00 vps
> > > >  7656     1 ALL_PROC      pts/0    00:00:00 ps
> > > > 
> > > > -bash-2.05b# pwd
> > > > /etc/vservers/APP1
> > > > -bash-2.05b# cat flags
> > > > sched_prio
> > > 
> > > you want to add sched_hard here if you want hard scheduling; the prio
> > > scheduler will only adjust priorities according to the token buckets ...
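> > > 
> > > i.e. your /etc/vservers/APP1/flags (and APP2 likewise) would then
> > > contain both flags, one per line:
> > > 
> > >    sched_hard
> > >    sched_prio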
> > > 
> > > I'd also suggest using a more recent kernel (and probably Linux-VServer
> > > patch) than this one, as the scheduler was enhanced quite a lot in 2.2.x
> > > 
> > > > -bash-2.05b# cat schedule
> > > > 80
> > > > 100
> > > > 200
> > > > 50
> > > > 140
> > > > dummy
> > > > 
> > > > -bash-2.05b# pwd
> > > > /etc/vservers/APP2
> > > > -bash-2.05b# cat flags
> > > > sched_prio
> > > > -bash-2.05b# cat schedule
> > > > 10
> > > > 100
> > > > 200
> > > > 50
> > > > 140
> > > > dummy
> > > > 
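> > > (for reference, if I remember the util-vserver schedule file format
> > >  correctly, those six lines are: fill rate, interval, tokens, tokens
> > >  min, tokens max and priority bias, so 80/100 above should mean a
> > >  fill rate of 80 tokens every 100 ticks)
> > > 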
> > > > -bash-2.05b# cat /proc/virtual/2/sched
> > > > Token:               140
> > > > FillRate:              1
> > > > Interval:            100
> > > > TokensMin:            50
> > > > TokensMax:           140
> > > > PrioBias:              0
> > > > VaVaVoom:             -5
> > > > cpu 0: 229674 71 0
> > > > 
> > > > -bash-2.05b# cat /proc/virtual/3/sched
> > > > Token:               140
> > > > FillRate:             10
> > > > Interval:            100
> > > > TokensMin:            50
> > > > TokensMax:           140
> > > > PrioBias:              0
> > > > VaVaVoom:             -5
> > > > cpu 0: 217275 54 0
> > > 
> > > looks like none of the token buckets is active here; what does
> > > /proc/virtual/2/status show?
> > > 
> > > TIA,
> > > Herbert
> > > 
> > 
_______________________________________________
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver
