[Devel] Re: Pid namespaces approaches testing results

2007-06-06 Thread Pavel Emelianov
Cedric Le Goater wrote:
>> The flat model allows many optimizations that the multilevel one does not.
>> For example, we can cache the pid value on structs, and some others.
>>
>> Moreover, generic nesting sounds reasonable. Single-level nesting does too,
>> as all the namespaces we have today are singly nested. But a fixed 4-level
>> nesting sounds strange... Why 4? Why not 5? What if I don't know exactly
>> how many levels I will need, but do know that it will definitely be more
>> than 1?
>>
>> Moreover, I have shown that the generic nesting model can cost 1% or less
>> in performance, so why not keep it?
> 
> did you send that patchset ? is it included in the one you sent ?

The patchset I sent earlier has changed slightly. The tests were performed
on the version I sent. Right now I'm waiting for your results to make
a final decision on whether or not to develop the flat model together with
the hierarchical one.

So what are we going to do? The options we have:
1. Make two models - hierarchical and flat. Maybe we'll see how to merge
   them later;
2. Optimize the hierarchical model to produce no performance hit on the
   first 2 levels (init and VS). I don't see a way to do this
   gracefully, but maybe this can be solved ... somehow. Anyway, if
   the latest patches from Suka do not produce any noticeable overhead,
   I am OK to go on with them;
3. Make the CONFIG_MAX_NS_DEPTH model. This is likely to be fast in the
   flat case, but I doubt whether Andrew will like it :)

> sorry if i missed something :( 
> 
> C.
> 

Thanks,
Pavel


[Devel] Re: Pid namespaces approaches testing results

2007-06-05 Thread Cedric Le Goater

> The flat model allows many optimizations that the multilevel one does not.
> For example, we can cache the pid value on structs, and some others.
> 
> Moreover, generic nesting sounds reasonable. Single-level nesting does too,
> as all the namespaces we have today are singly nested. But a fixed 4-level
> nesting sounds strange... Why 4? Why not 5? What if I don't know exactly
> how many levels I will need, but do know that it will definitely be more
> than 1?
> 
> Moreover, I have shown that the generic nesting model can cost 1% or less
> in performance, so why not keep it?

did you send that patchset ? is it included in the one you sent ?

sorry if i missed something :( 

C.


[Devel] Re: Pid namespaces approaches testing results

2007-06-01 Thread Pavel Emelianov
Cedric Le Goater wrote:
> Serge E. Hallyn wrote:
>> Quoting Pavel Emelianov ([EMAIL PROTECTED]):
>>> Dave Hansen wrote:
>>>> On Tue, 2007-05-29 at 15:45 +0400, Pavel Emelianov wrote:
>>>>> The detailed results are the following:
>>>>> Test name: spawn  execl  shell   ps (sys time)
>>>>> 1(no ns) : 579.1  618.3  1623.2  3.052s
>>>>> 2(suka's): 570.7  610.8  1600.2  3.107s
>>>>> Slowdown : 1.5%   1.3%   1.4%    1.8%
>>>>>
>>>>> 3(no ns) : 580.6  616.0  1633.8  3.050s
>>>>> 4(flat)  : 580.8  615.1  1632.2  3.054s
>>>>> Slowdown : 0%     0.1%   <0.1%   0.1%
>>>>> 5(multi) : 576.9  611.0  1618.8  3.065s
>>>>> Slowdown : 0.6%   0.8%   0.9%    0.5%
>>>> Wow, thanks so much for running those.  You're a step ahead of us,
>>>> there!
>>> Thanks :) Maybe we shall cooperate then and make three series
>>> of patches like
>>>
>>> 1. * The Kconfig options;
>>>
>>>* The API, i.e. calls like task_pid_nr(), task_session_nr_ns(), etc.
>>>This part is rather important, as I found that some places in the kernel
>>>where I had to look up the hash in the multilevel model were just a
>>>pid->vpid dereference in the flat model. This is a good optimization.
>>>
>>>* The changes in the generic code that introduce a bunch of 
>>>#ifdef CONFIG_PID_NS
>>> ...
>>>#else
>>>#ifdef CONFIG_PID_NS_FLAT
>>>#endif
>>>#ifdef CONFIG_PID_NS_MULTILEVEL
>>>#endif
>>>#endif
>>>code in pid.c, sched.c, fork.c etc
>>>
>>>This patchset will have to prepare the kernel for namespace injection
>>>and (!) must not break normal kernel operation with CONFIG_PID_NS=n.
>> In principle there's nothing at all wrong with that (imo).  But the
>> thing is, given the way Suka's patchset is set up, there really isn't
>> any reason why it should be slower when using only one or two pid
>> namespaces.
>>
>> Suka, right now are you allocating the struct upid separately from the
>> struct pid?  That alone might slow things down quite a bit.  By
>> allocating them as one large struct - saving both an alloc at clone, and
>> a dereference when looking at pid.upid[0] to get the pid_ns for instance
>> - you might get some of this perf back.
>>
>> (Hmm, taking a quick look, it seems you're allocating the memory as one
>> chunk, but then even though the struct upid is just at the end of the
>> struct pid, you use a pointer to find the struct upid.  That could slow
>> things down a bit)
> 
> What about being more aggressive and defining:
> 
>   struct pid
>   {
>   atomic_t count;
>   /* lists of tasks that use this pid */
>   struct hlist_head tasks[PIDTYPE_MAX];
>   int num_upids;
>   struct upid upid_list[CONFIG_MAX_NESTED_PIDNS];
>   struct rcu_head rcu;
>   };
> 
> If CONFIG_MAX_NESTED_PIDNS is 1, then pid namespaces are not available;
> at 2, the model is flat, and at 3, we start nesting them.

The flat model allows many optimizations that the multilevel one does not.
For example, we can cache the pid value on structs, and some others.

Moreover, generic nesting sounds reasonable. Single-level nesting does too,
as all the namespaces we have today are singly nested. But a fixed 4-level
nesting sounds strange... Why 4? Why not 5? What if I don't know exactly
how many levels I will need, but do know that it will definitely be more
than 1?

Moreover, I have shown that the generic nesting model can cost 1% or less
in performance, so why not keep it?

> It should improve performance, as profiling showed higher memory usage
> in the current 2.6.21-mm2-pidns3 patchset.
> 
> C.
> 
> 
> C.
> 



[Devel] Re: Pid namespaces approaches testing results

2007-06-01 Thread Cedric Le Goater
Serge E. Hallyn wrote:
> Quoting Pavel Emelianov ([EMAIL PROTECTED]):
>> Dave Hansen wrote:
>>> On Tue, 2007-05-29 at 15:45 +0400, Pavel Emelianov wrote:
>>>> The detailed results are the following:
>>>> Test name: spawn  execl  shell   ps (sys time)
>>>> 1(no ns) : 579.1  618.3  1623.2  3.052s
>>>> 2(suka's): 570.7  610.8  1600.2  3.107s
>>>> Slowdown : 1.5%   1.3%   1.4%    1.8%
>>>>
>>>> 3(no ns) : 580.6  616.0  1633.8  3.050s
>>>> 4(flat)  : 580.8  615.1  1632.2  3.054s
>>>> Slowdown : 0%     0.1%   <0.1%   0.1%
>>>> 5(multi) : 576.9  611.0  1618.8  3.065s
>>>> Slowdown : 0.6%   0.8%   0.9%    0.5%
>>> Wow, thanks so much for running those.  You're a step ahead of us,
>>> there!
>> Thanks :) Maybe we shall cooperate then and make three series
>> of patches like
>>
>> 1. * The Kconfig options;
>>
>>* The API, i.e. calls like task_pid_nr(), task_session_nr_ns(), etc.
>>This part is rather important, as I found that some places in the kernel
>>where I had to look up the hash in the multilevel model were just a
>>pid->vpid dereference in the flat model. This is a good optimization.
>>
>>* The changes in the generic code that introduce a bunch of 
>>#ifdef CONFIG_PID_NS
>> ...
>>#else
>>#ifdef CONFIG_PID_NS_FLAT
>>#endif
>>#ifdef CONFIG_PID_NS_MULTILEVEL
>>#endif
>>#endif
>>code in pid.c, sched.c, fork.c etc
>>
>>This patchset will have to prepare the kernel for namespace injection
>>and (!) must not break normal kernel operation with CONFIG_PID_NS=n.
> 
> In principle there's nothing at all wrong with that (imo).  But the
> thing is, given the way Suka's patchset is set up, there really isn't
> any reason why it should be slower when using only one or two pid
> namespaces.
> 
> Suka, right now are you allocating the struct upid separately from the
> struct pid?  That alone might slow things down quite a bit.  By
> allocating them as one large struct - saving both an alloc at clone, and
> a dereference when looking at pid.upid[0] to get the pid_ns for instance
> - you might get some of this perf back.
> 
> (Hmm, taking a quick look, it seems you're allocating the memory as one
> chunk, but then even though the struct upid is just at the end of the
> struct pid, you use a pointer to find the struct upid.  That could slow
> things down a bit)

What about being more aggressive and defining:

struct pid
{
atomic_t count;
/* lists of tasks that use this pid */
struct hlist_head tasks[PIDTYPE_MAX];
int num_upids;
struct upid upid_list[CONFIG_MAX_NESTED_PIDNS];
struct rcu_head rcu;
};

If CONFIG_MAX_NESTED_PIDNS is 1, then pid namespaces are not available;
at 2, the model is flat, and at 3, we start nesting them.
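
A minimal user-space sketch of how a lookup over such a fixed-size upid_list
could behave; the field and function names here are only illustrative, not
taken from any posted patch:

#include <stdio.h>

#define CONFIG_MAX_NESTED_PIDNS 3    /* compile-time nesting limit */

struct pid_namespace { int level; };

/* one pid value as seen from one namespace */
struct upid {
        int nr;                      /* pid number in that namespace */
        struct pid_namespace *ns;    /* namespace the number is valid in */
};

/* simplified model of the struct pid proposed above */
struct pid {
        int num_upids;
        struct upid upid_list[CONFIG_MAX_NESTED_PIDNS];
};

/* return the pid number as seen from 'ns', or 0 if it is not visible
 * there; with CONFIG_MAX_NESTED_PIDNS == 1 the loop is a single compare */
static int pid_nr_in_ns(const struct pid *pid, const struct pid_namespace *ns)
{
        int i;

        for (i = 0; i < pid->num_upids; i++)
                if (pid->upid_list[i].ns == ns)
                        return pid->upid_list[i].nr;
        return 0;
}

int main(void)
{
        struct pid_namespace init_ns = { 0 }, sub_ns = { 1 };
        struct pid p = {
                .num_upids = 2,
                .upid_list = { { 1042, &init_ns }, { 7, &sub_ns } },
        };

        printf("init ns: %d, sub ns: %d\n",
               pid_nr_in_ns(&p, &init_ns), pid_nr_in_ns(&p, &sub_ns));
        return 0;
}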

It should improve performance, as profiling showed higher memory usage
in the current 2.6.21-mm2-pidns3 patchset.

C.


C.


[Devel] Re: Pid namespaces approaches testing results

2007-05-30 Thread Pavel Emelianov
Serge E. Hallyn wrote:
> Quoting Pavel Emelianov ([EMAIL PROTECTED]):
>> Dave Hansen wrote:
>>> On Tue, 2007-05-29 at 15:45 +0400, Pavel Emelianov wrote:
>>>> The detailed results are the following:
>>>> Test name: spawn  execl  shell   ps (sys time)
>>>> 1(no ns) : 579.1  618.3  1623.2  3.052s
>>>> 2(suka's): 570.7  610.8  1600.2  3.107s
>>>> Slowdown : 1.5%   1.3%   1.4%    1.8%
>>>>
>>>> 3(no ns) : 580.6  616.0  1633.8  3.050s
>>>> 4(flat)  : 580.8  615.1  1632.2  3.054s
>>>> Slowdown : 0%     0.1%   <0.1%   0.1%
>>>> 5(multi) : 576.9  611.0  1618.8  3.065s
>>>> Slowdown : 0.6%   0.8%   0.9%    0.5%
>>> Wow, thanks so much for running those.  You're a step ahead of us,
>>> there!
>> Thanks :) Maybe we shall cooperate then and make three series
>> of patches like
>>
>> 1. * The Kconfig options;
>>
>>* The API, i.e. calls like task_pid_nr(), task_session_nr_ns(), etc.
>>This part is rather important, as I found that some places in the kernel
>>where I had to look up the hash in the multilevel model were just a
>>pid->vpid dereference in the flat model. This is a good optimization.
>>
>>* The changes in the generic code that introduce a bunch of 
>>#ifdef CONFIG_PID_NS
>> ...
>>#else
>>#ifdef CONFIG_PID_NS_FLAT
>>#endif
>>#ifdef CONFIG_PID_NS_MULTILEVEL
>>#endif
>>#endif
>>code in pid.c, sched.c, fork.c etc
>>
>>This patchset will have to prepare the kernel for namespace injection
>>and (!) must not break normal kernel operation with CONFIG_PID_NS=n.
> 
> In principle there's nothing at all wrong with that (imo).  But the
> thing is, given the way Suka's patchset is set up, there really isn't
> any reason why it should be slower when using only one or two pid
> namespaces.

One of the main bottlenecks I see is that the routine struct_pid_to_number()
is "pid->vnr" in my case and a for() loop in your. 

Nevertheless, that's just a guess.
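
To illustrate the difference described above, here is a rough user-space
model of the two lookup strategies; field names such as vnr and upids[] are
illustrative, and this is not the code of either patchset:

struct pid_namespace { int level; };

struct upid {
        int nr;
        struct pid_namespace *ns;
};

/* flat model: the value in the (single) sub-namespace is cached right
 * on the struct, so converting a struct pid to a number is one load */
struct flat_pid {
        int nr;     /* number in the init namespace */
        int vnr;    /* cached number in the sub-namespace */
};

static int flat_pid_to_number(const struct flat_pid *pid)
{
        return pid->vnr;
}

/* multilevel model: walk the per-level entries until the requested
 * namespace is found */
struct ml_pid {
        int level;
        struct upid upids[4];
};

static int ml_pid_to_number(const struct ml_pid *pid,
                            const struct pid_namespace *ns)
{
        int i;

        for (i = 0; i <= pid->level; i++)
                if (pid->upids[i].ns == ns)
                        return pid->upids[i].nr;
        return 0;
}

int main(void)
{
        struct pid_namespace init_ns = { 0 }, sub_ns = { 1 };
        struct flat_pid fp = { .nr = 1042, .vnr = 7 };
        struct ml_pid mp = {
                .level = 1,
                .upids = { { 1042, &init_ns }, { 7, &sub_ns } },
        };

        /* both return 7, but via a field read vs a loop */
        return flat_pid_to_number(&fp) == ml_pid_to_number(&mp, &sub_ns) ? 0 : 1;
}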

> Suka, right now are you allocating the struct upid separately from the
> struct pid?  That alone might slow things down quite a bit.  By
> allocating them as one large struct - saving both an alloc at clone, and
> a dereference when looking at pid.upid[0] to get the pid_ns for instance
> - you might get some of this perf back.
> 
> (Hmm, taking a quick look, it seems you're allocating the memory as one
> chunk, but then even though the struct upid is just at the end of the
> struct pid, you use a pointer to find the struct upid.  That could slow
> things down a bit)

Right now Suka is allocating a struct pid and struct pid_elem as one chunk.
There even exists a kmem cache named pid+1elem :)

> Anyway, Pavel, I'd like to look at some profiling data (when Suka or I
> collect some) and see whether the slowdown is fixable.  If it isn't,
> then we should definitely look at combining the patchsets.

OK. Please, keep me advised.

> thanks,
> -serge
> 
>> 2. The flat pid namespaces (my part)
>> 3. The multilevel pid namespaces (suka's part)
>>
>>> Did you happen to collect any profiling information during your runs? 
>> Unfortunately no :( My intention was to prove that hierarchy has
>> performance implications and should be considered carefully.
>>
>>> -- Dave
>>>
>>>


[Devel] Re: Pid namespaces approaches testing results

2007-05-30 Thread Serge E. Hallyn
Quoting Pavel Emelianov ([EMAIL PROTECTED]):
> Dave Hansen wrote:
> > On Tue, 2007-05-29 at 15:45 +0400, Pavel Emelianov wrote:
> >> The detailed results are the following:
> >> Test name: spawn  execl  shell   ps (sys time)
> >> 1(no ns) : 579.1  618.3  1623.2  3.052s
> >> 2(suka's): 570.7  610.8  1600.2  3.107s
> >> Slowdown : 1.5%   1.3%   1.4%    1.8%
> >>
> >> 3(no ns) : 580.6  616.0  1633.8  3.050s
> >> 4(flat)  : 580.8  615.1  1632.2  3.054s
> >> Slowdown : 0%     0.1%   <0.1%   0.1%
> >> 5(multi) : 576.9  611.0  1618.8  3.065s
> >> Slowdown : 0.6%   0.8%   0.9%    0.5%
> > 
> > Wow, thanks so much for running those.  You're a step ahead of us,
> > there!
> 
> Thanks :) Maybe we shall cooperate then and make three series
> of patches like
> 
> 1. * The Kconfig options;
> 
>* The API, i.e. calls like task_pid_nr(), task_session_nr_ns(), etc.
>This part is rather important, as I found that some places in the kernel
>where I had to look up the hash in the multilevel model were just a
>pid->vpid dereference in the flat model. This is a good optimization.
> 
>* The changes in the generic code that introduce a bunch of 
>#ifdef CONFIG_PID_NS
> ...
>#else
>#ifdef CONFIG_PID_NS_FLAT
>#endif
>#ifdef CONFIG_PID_NS_MULTILEVEL
>#endif
>#endif
>code in pid.c, sched.c, fork.c etc
> 
>This patchset will have to prepare the kernel for namespace injection
>and (!) must not break normal kernel operation with CONFIG_PID_NS=n.

In principle there's nothing at all wrong with that (imo).  But the
thing is, given the way Suka's patchset is set up, there really isn't
any reason why it should be slower when using only one or two pid
namespaces.

Suka, right now are you allocating the struct upid separately from the
struct pid?  That alone might slow things down quite a bit.  By
allocating them as one large struct - saving both an alloc at clone, and
a dereference when looking at pid.upid[0] to get the pid_ns for instance
- you might get some of this perf back.

(Hmm, taking a quick look, it seems you're allocating the memory as one
chunk, but then even though the struct upid is just at the end of the
struct pid, you use a pointer to find the struct upid.  That could slow
things down a bit)
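
A small user-space sketch of the single-chunk layout being suggested here;
in the kernel the allocation would come from a kmem cache, and the field
names are only illustrative:

#include <stdlib.h>

struct pid_namespace { int level; };

struct upid {
        int nr;
        struct pid_namespace *ns;
};

/* simplified model: the per-namespace entries live at the tail of
 * struct pid itself, so a single allocation covers both, and
 * pid->upid[0] is reached by indexing, not by chasing a pointer */
struct pid {
        int count;
        int level;
        struct upid upid[];    /* flexible array member at the tail */
};

static struct pid *alloc_pid(int level)
{
        /* one chunk: the struct plus (level + 1) trailing upid slots */
        struct pid *pid = malloc(sizeof(*pid) +
                                 (level + 1) * sizeof(struct upid));

        if (pid) {
                pid->count = 1;
                pid->level = level;
        }
        return pid;
}

int main(void)
{
        struct pid *pid = alloc_pid(1);

        if (!pid)
                return 1;
        pid->upid[0].nr = 1042;    /* direct index, no extra dereference */
        pid->upid[1].nr = 7;
        free(pid);
        return 0;
}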

Anyway, Pavel, I'd like to look at some profiling data (when Suka or I
collect some) and see whether the slowdown is fixable.  If it isn't,
then we should definitely look at combining the patchsets.

thanks,
-serge

> 2. The flat pid namespaces (my part)
> 3. The multilevel pid namespaces (suka's part)
> 
> > Did you happen to collect any profiling information during your runs? 
> 
> Unfortunately no :( My intention was to prove that hierarchy has
> performance implications and should be considered carefully.
> 
> > -- Dave
> > 
> > 
> 


[Devel] Re: Pid namespaces approaches testing results

2007-05-30 Thread Pavel Emelianov
Dave Hansen wrote:
> On Tue, 2007-05-29 at 15:45 +0400, Pavel Emelianov wrote:
>> The detailed results are the following:
>> Test name: spawn  execl  shell   ps (sys time)
>> 1(no ns) : 579.1  618.3  1623.2  3.052s
>> 2(suka's): 570.7  610.8  1600.2  3.107s
>> Slowdown : 1.5%   1.3%   1.4%    1.8%
>>
>> 3(no ns) : 580.6  616.0  1633.8  3.050s
>> 4(flat)  : 580.8  615.1  1632.2  3.054s
>> Slowdown : 0%     0.1%   <0.1%   0.1%
>> 5(multi) : 576.9  611.0  1618.8  3.065s
>> Slowdown : 0.6%   0.8%   0.9%    0.5%
> 
> Wow, thanks so much for running those.  You're a step ahead of us,
> there!

Thanks :) Maybe we shall cooperate then and make three series
of patches like

1. * The Kconfig options;

   * The API, i.e. calls like task_pid_nr(), task_session_nr_ns(), etc.
   This part is rather important, as I found that some places in the kernel
   where I had to look up the hash in the multilevel model were just a
   pid->vpid dereference in the flat model. This is a good optimization.

   * The changes in the generic code that introduce a bunch of 
   #ifdef CONFIG_PID_NS
...
   #else
   #ifdef CONFIG_PID_NS_FLAT
   #endif
   #ifdef CONFIG_PID_NS_MULTILEVEL
   #endif
   #endif
   code in pid.c, sched.c, fork.c etc

   This patchset will have to prepare the kernel for namespace injection
   and (!) must not break normal kernel operation with CONFIG_PID_NS=n.

2. The flat pid namespaces (my part)
3. The multilevel pid namespaces (suka's part)

> Did you happen to collect any profiling information during your runs? 

Unfortunately no :( My intention was to prove that hierarchy has
performance implications and should be considered carefully.

> -- Dave
> 
> 



[Devel] Re: Pid namespaces approaches testing results

2007-05-29 Thread Dave Hansen
On Tue, 2007-05-29 at 15:45 +0400, Pavel Emelianov wrote:
> The detailed results are the following:
> Test name: spawn  execl  shell   ps (sys time)
> 1(no ns) : 579.1  618.3  1623.2  3.052s
> 2(suka's): 570.7  610.8  1600.2  3.107s
> Slowdown : 1.5%   1.3%   1.4%    1.8%
>
> 3(no ns) : 580.6  616.0  1633.8  3.050s
> 4(flat)  : 580.8  615.1  1632.2  3.054s
> Slowdown : 0%     0.1%   <0.1%   0.1%
> 5(multi) : 576.9  611.0  1618.8  3.065s
> Slowdown : 0.6%   0.8%   0.9%    0.5%

Wow, thanks so much for running those.  You're a step ahead of us,
there!

Did you happen to collect any profiling information during your runs? 

-- Dave



[Devel] Re: Pid namespaces approaches testing results

2007-05-29 Thread Pavel Emelianov
Eric W. Biederman wrote:
> Pavel Emelianov <[EMAIL PROTECTED]> writes:
> 
>> Hi Eric, Suka, guys.
>>
>> I have tested the following configurations:
>>
>> 1. 2.6.21-mm2 kernel with Suka's patches with CONFIG_PID_NS=n
>> 2. the same with CONFIG_PID_NS=y
>>
>> 3. 2.6.22-rc1-mm1 kernel with my own realisation (patches will
>>be sent later if interesting) with CONFIG_PID_NS=n;
>> 4. the same with CONFIG_PID_NS=y and flat model (OpenVZ view)
>>I sent earlier;
>> 5. the same with multilevel model of my own. The difference is
>>that I use hash to lookup pid_elem from struct pid/pid_t nr, 
>>not a plain "for" loop like in Suka's patches.
> 
> For small levels of nesting a for loop should actually be faster.

Nope. I thought the same when I worked on OpenVZ RSS fractions accounting
and found out that a loop and a hash lookup cost almost the same even for
a one-element list. I don't know exactly what the problem is, but since
then I tend to measure my guesses.
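
For reference, a rough user-space sketch of what a hash-keyed lookup like
the one mentioned in point 5 above could look like; the pid_elem layout and
the hash function are purely illustrative, not the ones from the actual
patches:

#include <stddef.h>
#include <stdint.h>

#define PIDHASH_BITS 8
#define PIDHASH_SZ   (1 << PIDHASH_BITS)

struct pid_namespace { int level; };

/* one (number, namespace) pair, chained into a hash bucket */
struct pid_elem {
        int nr;
        struct pid_namespace *ns;
        struct pid_elem *next;
};

static struct pid_elem *pid_hash[PIDHASH_SZ];

/* hash the (nr, ns) pair into a bucket index */
static unsigned int pid_hashfn(int nr, const struct pid_namespace *ns)
{
        return ((unsigned int)nr ^ (unsigned int)(uintptr_t)ns) &
               (PIDHASH_SZ - 1);
}

/* find the pid_elem for a number in a namespace with one bucket walk,
 * instead of scanning a per-pid list of levels */
static struct pid_elem *find_pid_elem(int nr, const struct pid_namespace *ns)
{
        struct pid_elem *e;

        for (e = pid_hash[pid_hashfn(nr, ns)]; e; e = e->next)
                if (e->nr == nr && e->ns == ns)
                        return e;
        return NULL;
}

int main(void)
{
        struct pid_namespace ns = { 1 };
        struct pid_elem e = { .nr = 1042, .ns = &ns, .next = NULL };

        pid_hash[pid_hashfn(e.nr, e.ns)] = &e;
        return find_pid_elem(1042, &ns) == &e ? 0 : 1;
}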

> These tests were all taken in the initial pid namespace?
> Yes.  You mention that below.
> 
>> The tests run were:
>> 1. Unixbench spawn test
>> 2. Unixbench execl test
>> 3. Unixbench shell test
>> 4. System time for ps -xaf run in a loop (1000 times)
> 
> If these tests accurately measure what they purport to measure,
> they appear to be fair and useful for discussion.  Although we may have
> cache-hot vs cache-cold effects doing weird things to us.
> 
> These results need to be reproduced.
> 
> We need to get all of the patches against the same kernel
> so we can truly have an apples to apples comparison.
> 
> The rough number of pids in the system when the tests are taken needs
> to be known.

Sure. cat /proc/slabinfo | grep pid shows ~500 pids/pid+1upids
on each kernel (roughly) before the tests.

>> The hardware used is 2x Intel(R) Xeon(TM) CPU 3.20GHz box with
>> 2Gb of RAM. All the results are reproducible with 0.1% accuracy.
>> The slowdown is shown in comparison to the according results for
>> CONFIG_PID_NS=n kernel.
>>
>> Summary:
>> Suka's model gives us about 1.5% of overhead.
>> My multilevel model gives us about 0.7% of overhead.
>> My flat model gives us an overhead comparable to 
>> the accuracy of the measurement, i.e. zero overhead.
>>
>> The detailed results are the following:
>> Test name: spawn  execl  shell   ps (sys time)
>> 1(no ns) : 579.1  618.3  1623.2  3.052s
>> 2(suka's): 570.7  610.8  1600.2  3.107s
>> Slowdown : 1.5%   1.3%   1.4%    1.8%
>>
>> 3(no ns) : 580.6  616.0  1633.8  3.050s
>> 4(flat)  : 580.8  615.1  1632.2  3.054s
>> Slowdown : 0%     0.1%   <0.1%   0.1%
>> 5(multi) : 576.9  611.0  1618.8  3.065s
>> Slowdown : 0.6%   0.8%   0.9%    0.5%
> 
> Just for my own amusement.

Of course - the base kernels differ.

>> 1(no ns) : 579.1  618.3  1623.2  3.052s
>> 3(no ns) : 580.6  616.0  1633.8  3.050s
>            -0.25%  0.3%   -0.65%  0.065%

Not minus but plus - the larger the number, the better the result.

I emphasize - the results for the namespace patches were obtained against
*the base kernel*. I.e. Suka's patches slow down 2.6.21 by 1.5%, while
my patches with the flat model slow down the 2.6.22 kernel by 0%.

I believe that the flat model would slow down even the 2.6.21 kernel by
0%, while Suka's would slow down even 2.6.22 by something similar (about 1-2%).

Yet again: the intention of my measurements is not to prove that my
multilevel model is better than Suka's, but to prove that the *flat*
model is faster than the multilevel one and thus must be present
in the kernel as well.

> 
>> For the first three tests, the higher the number, the better the
>> result. For the last test, the lower the number, the better the
>> result (since it is time spent in the kernel).
>>
>> The results inside a namespace may be worse.
>>
>> If you are interested I can send my patches for pre-review and
>> cooperation. With the results shown, I think we must have
>> the flat model as an option in the kernel for those who don't
>> need infinite nesting but care about kernel performance.
> 
> Your results do seem to indicate there is measurable overhead,
> although in all cases it is slight.  So if we care about performance
> we need to look at things very carefully.

This is slight for the init namespace. Inside a sub-namespace the results
may be worse.

IMHO 1.5% is significant enough. 1.5% here, 0.4% there and 0.6%
over there, and in the end we have Xen-level overhead :) And no way to find
out what happened.

> Eric
> 

Thanks,
Pavel


[Devel] Re: Pid namespaces approaches testing results

2007-05-29 Thread Eric W. Biederman
Pavel Emelianov <[EMAIL PROTECTED]> writes:

> Hi Eric, Suka, guys.
>
> I have tested the following configurations:
>
> 1. 2.6.21-mm2 kernel with Suka's patches with CONFIG_PID_NS=n
> 2. the same with CONFIG_PID_NS=y
>
> 3. 2.6.22-rc1-mm1 kernel with my own realisation (patches will
>be sent later if interesting) with CONFIG_PID_NS=n;
> 4. the same with CONFIG_PID_NS=y and flat model (OpenVZ view)
>I sent earlier;
> 5. the same with multilevel model of my own. The difference is
>that I use hash to lookup pid_elem from struct pid/pid_t nr, 
>not a plain "for" loop like in Suka's patches.

For small levels of nesting a for loop should actually be faster.

These tests were all taken in the initial pid namespace?
Yes.  You mention that below.

> The tests run were:
> 1. Unixbench spawn test
> 2. Unixbench execl test
> 3. Unixbench shell test
> 4. System time for ps -xaf run in a loop (1000 times)

If these tests accurately measure what they purport to measure,
they appear to be fair and useful for discussion.  Although we may have
cache-hot vs cache-cold effects doing weird things to us.

These results need to be reproduced.

We need to get all of the patches against the same kernel
so we can truly have an apples to apples comparison.

The rough number of pids in the system when the tests are taken needs
to be known.

> The hardware used is 2x Intel(R) Xeon(TM) CPU 3.20GHz box with
> 2Gb of RAM. All the results are reproducible with 0.1% accuracy.
> The slowdown is shown in comparison to the according results for
> CONFIG_PID_NS=n kernel.
>
> Summary:
> Suka's model gives us about 1.5% of overhead.
> My multilevel model gives us about 0.7% of overhead.
> My flat model gives us an overhead comparable to 
> the accuracy of the measurement, i.e. zero overhead.
>
> The detailed results are the following:
> Test name: spawn  execl  shell   ps (sys time)
> 1(no ns) : 579.1  618.3  1623.2  3.052s
> 2(suka's): 570.7  610.8  1600.2  3.107s
> Slowdown : 1.5%   1.3%   1.4%    1.8%
>
> 3(no ns) : 580.6  616.0  1633.8  3.050s
> 4(flat)  : 580.8  615.1  1632.2  3.054s
> Slowdown : 0%     0.1%   <0.1%   0.1%
> 5(multi) : 576.9  611.0  1618.8  3.065s
> Slowdown : 0.6%   0.8%   0.9%    0.5%

Just for my own amusement.
> 1(no ns) : 579.1  618.3  1623.2  3.052s
> 3(no ns) : 580.6  616.0  1633.8  3.050s
           -0.25%  0.3%   -0.65%  0.065%


> For the first three tests, the higher the number, the better the
> result. For the last test, the lower the number, the better the
> result (since it is time spent in the kernel).
>
> The results inside a namespace may be worse.
>
> If you are interested I can send my patches for pre-review and
> cooperation. With the results shown, I think we must have
> the flat model as an option in the kernel for those who don't
> need infinite nesting but care about kernel performance.

Your results do seem to indicate there is measurable overhead,
although in all cases it is slight.  So if we care about performance
we need to look at things very carefully.

Eric