Gerrit,
Great! This is what I wanted to hear :) Fully agree.
Thanks,
Kirill
On Thu, 06 Jul 2006 14:44:23 +0400, Kirill Korotaev wrote:
Gerrit,
I assume you are doing your tests on the same system (i.e. same
compiler/libs/whatever else), and you do not change that system over
time (i.e. you do not upgrade gcc on it in between the tests).
Herbert Poetzl wrote:
sidenote: on a 'typical' Linux-VServer guest, /tmp
will be mounted as tmpfs, so be careful with that.
OVZ might do the same, as might your host distro :)
Good point. Can we document all these issues somewhere?
Kirill
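One way to check the tmpfs point above before benchmarking is to look at the kernel's mount table. A minimal sketch (the `mounts` parameter is only there so the parser can be pointed at a sample file; the function name is illustrative):

```python
def fs_type(mountpoint, mounts="/proc/mounts"):
    """Return the filesystem type of a mountpoint, or None if not mounted.

    Each /proc/mounts line is: device mountpoint fstype options dump pass.
    """
    with open(mounts) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 3 and fields[1] == mountpoint:
                return fields[2]
    return None
```

On a guest where /tmp is tmpfs-backed, `fs_type("/tmp")` would return `"tmpfs"`, which is worth recording alongside the benchmark results.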
ok, we will, with and without. It will add one bar to the graph.
I'd suggest testing _all_ available schedulers if possible;
for example, Linux-VServer decided to favor the cfq scheduler
for 'fair' I/O scheduling per context, and OVZ did similar
(IIRC)
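When recording which scheduler a benchmark ran under, sysfs exposes the active elevator per block device. A minimal sketch, assuming the usual `/sys/block/<dev>/queue/scheduler` layout (the device name `sda` is just an example):

```python
def parse_active_scheduler(sysfs_line):
    """The kernel marks the active elevator in brackets,
    e.g. "noop anticipatory deadline [cfq]"."""
    return sysfs_line[sysfs_line.index("[") + 1:sysfs_line.index("]")]

def current_scheduler(dev="sda"):
    """Read the active I/O scheduler for a block device from sysfs."""
    with open("/sys/block/%s/queue/scheduler" % dev) as f:
        return parse_active_scheduler(f.read())
```

Logging `current_scheduler()` with each run would make the per-scheduler bars in the graph unambiguous.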
For OpenVZ the reason is not CFQ fair s
Gerrit,
I assume you are doing your tests on the same system (i.e. same
compiler/libs/whatever else), and you do not change that system over
time (i.e. you do not upgrade gcc on it in between the tests).
I hope! :)
All binaries should be built statically to work the same way inside host/guest.
- All binaries are always built on the test node.
Cedric,
this information is not explicit yet, but please check the raw data, for
example:
http://lxc.sourceforge.net/bench/r3/dbenchraw
You will see that each test is run nearly 100 times. The 5% min and max
values are stripped before the average is computed. Min, max and std dev are
missing
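The trimming procedure described above (drop the extreme 5% of runs, then average) can be sketched as follows; the function name and the exact rounding of the trim count are illustrative assumptions, not the benchmark's actual code:

```python
def trimmed_mean(samples, trim_frac=0.05):
    """Average the runs after dropping the lowest and highest
    trim_frac of values (5% at each end by default)."""
    s = sorted(samples)
    k = int(len(s) * trim_frac)          # how many runs to drop per end
    kept = s[k:len(s) - k] if k else s   # too few samples: keep them all
    return sum(kept) / len(kept)
```

With ~100 runs per test, this drops the five slowest and five fastest runs, so a single outlier (a cron job firing mid-run, say) cannot skew the reported number.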
from the tests:
"For benchs inside real 'guest' nodes (OpenVZ/VServer) you should
take into account that the FS tested is not the 'host' node one's."
At least for Linux-VServer it should not be hard to avoid the
chroot/filesystem namespace part and have it run on the host fs.
A bind mount into
Eugen Leitl wrote:
Before I try OpenVZ I would like to hear comments from people
who've run both VServer and OpenVZ, preferably on the same
hardware, on how the two compare.
Factors of interest are stability, Debian support, hardware
utilization, documentation, community support, and
security.