Hi,
A first round of virtualisation benchmarks can be found here:
http://lxc.sourceforge.net/bench/
These benchmarks run with vanilla kernels and the patched versions of
well-known virtualisation solutions: VServer and OpenVZ. Some benchmarks also
run inside the virtual 'guest', but we ran into trouble
Clement,
Thanks for sharing the results! A few comments...
(1) General
1.1 It would be nice to run vmstat (say, vmstat 10) for the duration of
the tests, and put the vmstat output logs to the site.
1.2 Can you tell us how you run the tests? I am particularly interested in
- how many iterations
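The vmstat logging suggested in 1.1 could be wrapped around each test run; here is a minimal sketch, assuming a generic harness (the function and file names are mine, not part of any framework mentioned in the thread):

```python
import subprocess

def run_with_monitor(cmd, logfile, monitor=("vmstat", "10")):
    """Run a benchmark command while a monitor (vmstat by default)
    logs system statistics to `logfile` for the whole duration."""
    with open(logfile, "w") as log:
        mon = subprocess.Popen(list(monitor), stdout=log)
        try:
            rc = subprocess.call(list(cmd))
        finally:
            mon.terminate()  # stop logging once the benchmark exits
            mon.wait()
    return rc
```

The resulting log file can then simply be published next to the benchmark results.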
Hi Kirill,
Thanks for the feedback. Not sure whether you are referring to Clement's work
or our paper. I'll assume you are referring to our paper.
> From what I see after just a 1-minute check of your results:
>
> DBench:
> - different disk I/O schedulers are compared. This makes
> co
Hi,
> 1.1 It would be nice to run vmstat (say, vmstat 10) for the duration of
> the tests, and put the vmstat output logs to the site.
Our benchmark framework allows us to use oprofile during the tests...
wouldn't that be better than vmstat?
> Basically, the detailed description of a process would be
Hi,
Sorry, just forgot one part of your email...
> 1.2 Can you tell us how you run the tests? I am particularly interested in
> - how many iterations do you do?
> - what result do you choose from those iterations?
> - how reproducible are the results?
> - are you rebooting the box between the iterations?
Hi,
Sorry, I just forgot one part of your email... (and sorry for the mail
spamming; I probably have too-big fingers or too tiny a keyboard)
> 1.2 Can you tell us how you run the tests? I am particularly interested in
> - how many iterations do you do?
> - what result do you choose from those iterations?
On Fri, Jun 30, 2006 at 07:28:06PM +0200, Clément Calmels wrote:
> Hi,
>
> A first round of virtualisation benchmarks can be found here:
> http://lxc.sourceforge.net/bench/
very interesting results, tx ...
> These benchmarks run with vanilla kernels and the patched versions of
> well-known virtualisation solutions: VServer and OpenVZ.
Hi,
> from the tests:
> "For benchs inside real 'guest' nodes (OpenVZ/VServer) you should
> take into account that the FS tested is not the 'host' node one's."
>
> at least for Linux-VServer it should not be hard to avoid the
> chroot/filesystem namespace part and have it run on the host fs.
Clément Calmels wrote:
> Hi,
>
> Sorry, just forgot one part of your email...
>
>> 1.2 Can you tell us how you run the tests? I am particularly interested in
>> - how many iterations do you do?
>> - what result do you choose from those iterations?
>> - how reproducible are the results?
>> - are you rebooting the box between the iterations?
Clément,
Thanks for addressing my concerns! See comments below.
Clément Calmels wrote:
Hi,
1.1 It would be nice to run vmstat (say, vmstat 10) for the duration of
the tests, and put the vmstat output logs to the site.
Our benchmark framework allows us to use oprofile during test...
Hi,
> > I'm wondering why a default 'guest' creation implies some resource
> > restrictions? Couldn't the resources be unlimited? I understand the need
> > for resource management, but the default values look a little bit
> > tiny...
> >
> The reason is security. A guest is untrusted by default.
from the tests:
"For benchs inside real 'guest' nodes (OpenVZ/VServer) you should
take into account that the FS tested is not the 'host' node one's."
at least for Linux-VServer it should not be hard to avoid the
chroot/filesystem namespace part and have it run on the host fs.
a bind mount into
Clément Calmels wrote:
Hi,
I'm wondering why a default 'guest' creation implies some resource
restrictions? Couldn't the resources be unlimited? I understand the need
for resource management, but the default values look a little bit
tiny...
The reason is security. A guest is untrusted by default.
Clément Calmels wrote:
> Hi,
>
> Sorry, I just forgot one part of your email... (and sorry for the mail
> spamming; I probably have too-big fingers or too tiny a keyboard)
>
>> 1.2 Can you tell us how you run the tests? I am particularly interested in
>> - how many iterations do you do?
>> - what result do you choose from those iterations?
Cedric,
this information is not explicit yet, but please check the raw data, for
example :
http://lxc.sourceforge.net/bench/r3/dbenchraw
you will see that each test is run nearly 100 times. The 5% min and max
values are stripped before doing an average. Min, max and std dev are
missing.
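The stripping procedure described above (run each test close to 100 times, drop the 5% lowest and 5% highest values, then average) can be sketched as follows; the function name and the extra min/max/std-dev reporting are my own illustration, not code from the framework:

```python
def trimmed_stats(samples, trim=0.05):
    """Sort, drop the `trim` fraction of lowest and of highest samples,
    then report mean, min, max and population std dev of the rest."""
    s = sorted(samples)
    k = int(len(s) * trim)
    kept = s[k:len(s) - k] if k else s
    n = len(kept)
    mean = sum(kept) / n
    var = sum((x - mean) ** 2 for x in kept) / n
    return {"mean": mean, "min": kept[0], "max": kept[-1], "std": var ** 0.5}
```

Trimming like this makes the average robust against the occasional outlier run (cron jobs, cache warm-up and the like), which matters when comparing kernels that differ by only a few percent.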
Kirill Korotaev wrote:
> For OpenVZ it is also possible to test different subsystems separately
> (virtualization/isolation, resource management, disk quota, CPU scheduler).
> I would also note that in OpenVZ all these features are ON by default.
hmm, we didn't realize that. Good, it will make
Kirill Korotaev wrote:
> Cedric,
>
>> this information is not explicit yet, but please check the raw data,
>> for
>> example :
>>
>> http://lxc.sourceforge.net/bench/r3/dbenchraw
>>
>> you will see that each test is run nearly 100 times. the 5% min and max
>> values are stripped before doing
Kir Kolyshkin wrote:
> In case you are testing performance (but not, say, isolation), you can
> definitely set all the UBCs to unlimited values (i.e. both barrier and
> limit for each parameter should be set to MAX_LONG). The only issue is
> with vmguarpages parameter, because this is a guarantee
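For illustration, UBC settings of that kind live in the guest's config file; a hypothetical fragment follows (KMEMSIZE, PRIVVMPAGES, NUMPROC and VMGUARPAGES are standard UBC parameters, but the file path and the use of 32-bit LONG_MAX (2147483647) as the "unlimited" value are my assumptions, so check your vzctl documentation):

```
# /etc/vz/conf/<veid>.conf (hypothetical fragment), values are barrier:limit
KMEMSIZE="2147483647:2147483647"
PRIVVMPAGES="2147483647:2147483647"
NUMPROC="2147483647:2147483647"
# VMGUARPAGES is a guarantee rather than a limit, so it needs separate care
```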
See my comments below.
In general - please don't get the impression that I'm trying to be
fastidious. I'm just trying to help you create a system in which results
can be reproducible and trusted. There are a lot of factors that influence
performance; some of them are far from obvious.
Clément Calmels wrote:
Hi,
> In general - please don't get the impression I try to be fastidious. I'm
> just trying to help you create a system in which results can be
> reproducible and trusted. There are a lot of factors that influence the
> performance; some of those are far from being obvious.
Don't get me wrong
Clément Calmels wrote:
What do you think of
something like this:
o reboot
o run dbench (or whatever) X times
o reboot
Perfectly fine with me.
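A driver for that reboot-bracketed protocol might look roughly like this (entirely illustrative: the function name, paths and output format are assumptions, and the reboots themselves are left to the surrounding harness):

```python
import subprocess

def bench_round(test_cmd, iterations, results_path):
    """Run one benchmark `iterations` times in a row and append the raw
    output of each run to `results_path`.  The surrounding harness is
    expected to reboot the node before and after calling this."""
    with open(results_path, "a") as out:
        for i in range(iterations):
            r = subprocess.run(test_cmd, capture_output=True, text=True)
            out.write(f"run {i} rc={r.returncode}: {r.stdout.strip()}\n")
```

Rebooting only once per benchmark (rather than per iteration) keeps the nodes busy running tests while still giving each benchmark a clean starting state.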
Here you do not have to reboot. The OpenVZ tools do not require an OpenVZ
kernel to be built.
You got me... I was still believing the VZKERNEL_HEAD
- All binaries are always built on the test node.
I am assuming you are doing your tests on the same system (i.e. same
compiler/libs/whatever else), and that you do not change that system over
time (i.e. you do not upgrade gcc on it between the tests).
I hope! :)
All binaries should be built statically to work the same way inside host/guest.
On Tue, Jul 04, 2006 at 06:32:04PM +0400, Kirill Korotaev wrote:
>>> from the tests:
>>> "For benchs inside real 'guest' nodes (OpenVZ/VServer) you should
>>> take into account that the FS tested is not the 'host' node one's."
>>>
>>> at least for Linux-VServer it should not be hard to avoid the
>>> chroot/filesystem namespace part and have it run on the host fs.
Clément Calmels wrote:
> I don't do this, first because I didn't want the test nodes to waste
> their time rebooting instead of running tests. What do you think of
> something like this:
> o reboot
> o run dbench (or whatever) X times
> o reboot
[ ... ]
> I can split the "launch a guest" part i
Kirill Korotaev wrote:
- All binaries are always build in the test node.
>>>
>>> I am assuming you are doing your tests on the same system (i.e. same
>>> compiler/libs/whatever else), and that you do not change that system over
>>> time (i.e. you do not upgrade gcc on it between the tests).
On Tue, Jul 04, 2006 at 04:19:02PM +0400, Kir Kolyshkin wrote:
[lot of stuff zapped here]
> >The different configs used are available on the lxc site. You will
> >notice that I used a minimal config file for most of the tests, but for
> >Openvz I had to use the one I found on the OpenVZ site because
On Wed, 05 Jul 2006 14:43:17 +0400, Kirill Korotaev wrote:
> >>>- All binaries are always build in the test node.
> >>>
> >>
> >>I am assuming you are doing your tests on the same system (i.e. same
> >>compiler/libs/whatever else), and that you do not change that system over
> >>time (i.e. you do not upgrade gcc on it between the tests).
Gerrit,
I am assuming you are doing your tests on the same system (i.e. same
compiler/libs/whatever else), and that you do not change that system over
time (i.e. you do not upgrade gcc on it between the tests).
I hope! :)
All binaries should be built statically to work the same way inside host/guest.
On Tue, Jul 04, 2006 at 05:34:23PM +0200, Cedric Le Goater wrote:
> Kirill Korotaev wrote:
> > Cedric,
> >
> >> this information is not explicit yet, but please check the raw data,
> >> for
> >> example :
> >>
> >> http://lxc.sourceforge.net/bench/r3/dbenchraw
> >>
> >> you will see that each test is run nearly 100 times.
On Tue, Jul 04, 2006 at 03:02:54PM +0200, Clément Calmels wrote:
> Hi,
>
> Sorry, I just forgot one part of your email... (and sorry for the mail
> spamming; I probably have too-big fingers or too tiny a keyboard)
>
> > 1.2 Can you tell us how you run the tests? I am particularly interested in
> > - how many iterations do you do?
On Wed, Jul 05, 2006 at 02:43:17PM +0400, Kirill Korotaev wrote:
> >>>- All binaries are always build in the test node.
> >>>
> >>
> >>I am assuming you are doing your tests on the same system (i.e. same
> >>compiler/libs/whatever else), and that you do not change that system over
> >>time (i.e. you do not upgrade gcc on it between the tests).
ok, we will, with and without. It will add one bar to the graph.
I'd suggest testing _all_ available schedulers if possible,
for example Linux-VServer decided to favor the cfq scheduler
for 'fair' I/O scheduling per context, and OVZ did similar
(IIRC)
For OpenVZ the reason is not CFQ fair scheduling
Herbert Poetzl wrote:
sidenote: on a 'typical' Linux-VServer guest, /tmp
will be mounted as tmpfs, so be careful with that
OVZ might do the same, as might your host distro :)
good point. Can we document all these issues somewhere?
Kirill
On Thu, 06 Jul 2006 14:44:23 +0400, Kirill Korotaev wrote:
> Gerrit,
>
> I am assuming you are doing your tests on the same system (i.e. same
> compiler/libs/whatever else), and that you do not change that system over
> time (i.e. you do not upgrade gcc on it between the tests).
> >>>
>
Gerrit,
Great! this is what I wanted to hear :) Fully agree.
Thanks,
Kirill
On Thu, 06 Jul 2006 14:44:23 +0400, Kirill Korotaev wrote:
Gerrit,
I am assuming you are doing your tests on the same system (i.e. same
compiler/libs/whatever else), and that you do not change that system over
time (i.e. you do not upgrade gcc on it between the tests).