Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread onlyjob
Regarding the virtual loopback: it seems that in the standard builds of
VServer-enabled 2.6.32 kernels available from the Debian repositories this
problem does not exist. I'm not entirely sure, but I don't remember
experiencing it. Besides, it is possible to change the localhost address in
/etc/hosts.
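
For example, a guest's /etc/hosts can point localhost at the guest's own
assigned address instead of 127.0.0.1 - a rough sketch only, where
192.0.2.10 and "myguest" are just placeholders:

  # /etc/hosts inside a VServer guest (sketch)
  192.0.2.10   localhost
  192.0.2.10   myguest.example.com myguest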

The absence of network virtualization in VServer is deliberate, and for a
good reason.

My bad - for some reason I mistakenly thought that the OpenVZ license was
not the GPL - thanks for pointing out this error.

I don't see how the RHEL upstream kernel is relevant here - a year ago
there was no OpenVZ support for 2.6.32 whatsoever, and frankly this was one
of the reasons I chose VServer for a machine hosting around 20 VMs.
Obviously 2.6.32 has a number of important features, notably KSM, which
makes a lot of sense for a virtualisation host, and also ext4.
At that time (a year ago) I installed a VServer-patched 2.6.32 kernel from
the native Debian repository.

*more performant: I agree with you that the difference in network
performance between VServer and OpenVZ is not large. Perhaps it could be
shown with some sort of artificial benchmark. In any case, here I was
quoting Herbert Poetzl (the VServer developer).
While the performance difference is not big, there is another thing which I
believe is equally important - simplicity. If the same result can be
achieved more simply, without even a little virtualization overhead, it is
certainly better: more maintainable, probably with fewer bugs, and so on.
Simplicity matters.

Easier:
Well, this is really quite a subjective matter. The available tools are a
different argument.
I got my first experience with OpenVZ about 18 months ago, when I created
several VMs, but there were some problems that motivated my migration to
VServer - a decision I have never regretted. Somehow I found memory
management easier under VServer. It could be just my perception, but to me
VServer is easier to configure and use. Debian makes VServer installation
trivial.
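
For what it's worth, roughly what that looks like - a sketch from memory
only, where the kernel package name varies by release and "myguest" /
192.0.2.10 are placeholders:

  # on the host: VServer-patched kernel plus the userspace tools
  apt-get install linux-image-vserver-amd64 util-vserver

  # build a Debian guest with debootstrap
  vserver myguest build -m debootstrap \
      --hostname myguest --interface eth0:192.0.2.10/24 -- -d lenny

  vserver myguest start
  vserver myguest enter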

One of my problems with OpenVZ was understanding how its memory limits
work. It is admittedly a problem related to lack of experience, but a
number of times services inside an OpenVZ VM failed to allocate the RAM
they needed, so I tweaked some parameter until it happened again, then I
had to tweak another setting, and so on. After a week of struggling I had
no confidence in the settings I was using, and I had to read a lot to get a
detailed understanding of all those parameters. Obviously the defaults were
not good.
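
To illustrate the kind of loop I was stuck in - container ID 101 and the
values are purely illustrative:

  # see which limits are being hit: a non-zero failcnt means failed allocations
  vzctl exec 101 cat /proc/user_beancounters

  # raise the offending barrier:limit pair, e.g. private memory pages
  vzctl set 101 --privvmpages 262144:294912 --save

  # ...and then the next service trips over kmemsize, oomguarpages, etc.
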
Then I had a chat with a guy from another company who had tried to adopt
OpenVZ for a large Java-based application spanning a dozen VMs. They gave
up after running into problems with Java memory management under OpenVZ, so
they ended up using KVM for the project. (Personally, I believe they just
didn't have enough patience to chase down all the problems.)

When I decided to try VServer I already had 5 or 6 OpenVZ VMs.
Using VServer was surprisingly easy (though the documentation lacks some
up-to-date examples), and soon enough I found myself migrating physical
servers into VServer and creating more VMs, mostly Debian- or CentOS-based.
Migration to VServer was trivial for me - in a year I have had no problems
with memory allocation in any of my more than 20 VServer VMs, many of which
run Java application servers. VServer's defaults are not restrictive, so it
is easier to set up a VM first and restrict it later, once its
configuration is finalised and its memory usage is known.
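
For what it is worth, this is roughly how I restrict a guest afterwards -
paths and units are from memory, so treat it as a sketch and check the
util-vserver documentation before relying on it:

  # cap resident memory for the guest at ~1 GiB (value is in 4 KiB pages)
  mkdir -p /etc/vservers/myguest/rlimits
  echo 262144 > /etc/vservers/myguest/rlimits/rss.hard

  # takes effect on the next guest start
  vserver myguest stop && vserver myguest start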

I like VServer more, particularly the way things are done in VServer.
To me the administration effort is lower with VServer, though you may argue
that this is a matter of experience.

Regards,
Onlyjob.


On 12 January 2011 14:56, Daniel Pittman dan...@rimspace.net wrote:
 On Tue, Jan 11, 2011 at 16:24, onlyjob only...@gmail.com wrote:

 No, no, please not OpenVZ. It is certainly not for beginners.
 Better use VServer instead.
 I used both, first OpenVZ (but was never really happy with it) and then 
 VServer.

 Have VServer added network virtualization yet?  Last time I used it
 they hadn't, so your containers didn't have, for example, the loopback
 interface, or a 127.0.0.1 address they could use.

 That made for a constant, ongoing pain in the neck compared to OpenVZ,
 which *did* provide them.  Every single distribution package that
 assumed, for example, that it could talk to 'localhost' would do the
 wrong thing.

 Ah.  I see the experimental releases do add support for a virtualized
 loopback adapter, along with IPv6, which is nice, and probably
 addresses my biggest operational issue with VServer.

 There are a number of benefits of VServer over OpenVZ:

 * GPL License

 http://openvz.org/documentation/licenses
 The OpenVZ software — the kernel and the user-level tools — is
 licensed under GNU GPL version 2.

 It is also notable that a bunch of the upstream, in-kernel code *is*
 from OpenVZ, including a bunch of the namespace support that underpins
 the LXC implementations and, these days, OpenVZ itself.

 Can you tell me where you got the 

Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread Daniel Pittman
On Wed, Jan 12, 2011 at 16:11, onlyjob only...@gmail.com wrote:

 Regarding the virtual loopback: it seems that in the standard builds of
 VServer-enabled 2.6.32 kernels available from the Debian repositories this
 problem does not exist. I'm not entirely sure, but I don't remember
 experiencing it.

It was definitely there in the stable releases of Debian before Lenny,
when they presumably decided the experimental VServer patches were
sufficiently stable or whatever.  Anyway, it is nice they have solved it. :)

 Besides, it is possible to change the localhost address in /etc/hosts.

...but that does not let you remap 127.0.0.1, or easily create another
interface named lo, or bind to 0.0.0.0 - all of which a surprisingly large
number of packages assume will always be present and working in Debian
(and, in that respect, Ubuntu is generally worse. :)

 The absence of network virtualization in VServer is deliberate, and for a
 good reason.

I can't find much useful information on their website about why, but
Wikipedia claims that this is based on isolation rather than virtualization
in order to avoid overhead, which seems fairly bogus to me given that the
in-kernel network namespace support is pretty much directly isolation
based.

Do you know of a good source of information on the VServer side?  I am
curious to know what the technical differences are (and whether that is,
indeed, the correct argument from their side) so I can better understand
what the trade-off here might actually be.

[...]

 I don't see how the RHEL upstream kernel is relevant here - a year ago
 there was no OpenVZ support for 2.6.32 whatsoever, and frankly this was one
 of the reasons I chose VServer for a machine hosting around 20 VMs.

...er, OK.  So, the use of the RHEL kernel is relevant because RedHat
invest substantially in stabilizing the kernel and backporting newer
drivers to it.  This means that unlike the Linus 2.6.18 kernel (for
example) you can get support for modern Intel NICs and other
controllers, so it doesn't suffer nearly the trouble running on modern
hardware that the version number might suggest.

Given that, in many cases, security and driver support are the primary
motivators for using a newer kernel this can, indeed, help with
(though not universally solve) the issue that the OpenVZ kernel
version lags the upstream kernel version available in distributions.

 Obviously 2.6.32 has a number of important features, notably KSM, which
 makes a lot of sense for a virtualisation host, and also ext4.

Er, as of 2.6.36 the KSM feature still scans only memory specifically
registered to it by the application.  So far as I can see the VServer
patches don't do anything special to mark memory in the guests as
shareable, and pretty much nothing in user-space other than KVM/qemu does
so, so I wouldn't have thought that KSM made much difference.
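
A quick sanity check is the KSM sysfs interface - anything that wants its
pages merged has to madvise(2) them as MADV_MERGEABLE first, so on a
VServer host I would expect these counters to stay near zero (sketch, run
as root):

  # enable the ksmd scanner
  echo 1 > /sys/kernel/mm/ksm/run
  # after a while, see whether any pages were actually merged
  cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing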

As to ext4 ... *shrug*  I think that is a religious debate that would
distract from the real discussion at hand; I regard anyone running
ext4 on a production system as vaguely strange since it is still so
young and experimental compared to more tested code but, obviously,
you don't share that reluctance to move. :)

[...]

 *more performant: I agree with you that the difference in network
 performance between VServer and OpenVZ is not large. Perhaps it could be
 shown with some sort of artificial benchmark. In any case, here I was
 quoting Herbert Poetzl (the VServer developer).
 While the performance difference is not big, there is another thing which I
 believe is equally important - simplicity. If the same result can be
 achieved more simply, without even a little virtualization overhead, it is
 certainly better: more maintainable, probably with fewer bugs, and so on.
 Simplicity matters.

*nod*  Part of my view on the subject is that VServer made some bad
technical decisions that kept their kernel code simple in exchange for
adding a huge amount of complexity to every container; part of that
(lo virtualization) they have obviously decided can be corrected these
days.

So, I agree, but I think that whole-system complexity is a much more
important metric than kernel complexity alone.  (OTOH, I also think
that a bunch of the OpenVZ bits – like UBC – are disasters of
complexity, and I am very glad they will not make it into the mainline
kernel. :)

Anyway, thanks for sharing your experiences and discussing this.  I
like to talk through these issues so I can better understand where
things stand – and I already learned some useful stuff I didn't know
from you. :)

Regards,
Daniel
-- 
✉ Daniel Pittman dan...@rimspace.net
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread dave b
Also has anyone looked at LXC :P ?

Having run identical KVM guests, I found that KSM actually wasn't that
much of a benefit for its CPU cost (and while it doesn't seem to use much,
the CPU could potentially down-clock to save power instead of running KSM).
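
For what it's worth, ksmd's scan rate is tunable through sysfs, so its CPU
cost can be dialled down - the values below are only illustrative:

  # scan fewer pages per wakeup and sleep longer between wakeups
  echo 64  > /sys/kernel/mm/ksm/pages_to_scan
  echo 200 > /sys/kernel/mm/ksm/sleep_millisecs
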
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread Daniel Pittman
On Wed, Jan 12, 2011 at 20:10, dave b db.pub.m...@gmail.com wrote:

 Also has anyone looked at LXC :P ?

In my previous job, where we did the other bits, we did some testing
on developer systems; our conclusion was that LXC was at least a
couple of years from being useful in the real world based on a pretty
much endless collection of shortfalls and bugs in testing.

My very strong hope is that it will stabilise and one of the
implementations built on the kernel tools (because the libvirt LXC and
plain LXC projects are entirely different user-space code) will become
the standard for doing this inside Linux.

 Having run identical KVM guests, I found that KSM actually wasn't that
 much of a benefit for its CPU cost (and while it doesn't seem to use much,
 the CPU could potentially down-clock to save power instead of running KSM).

We never saw much benefit, and were pretty happy that we got better
memory use overall from OpenVZ containers than from KVM machines,
even where they were identical.

Regards,
Daniel
-- 
✉ Daniel Pittman dan...@rimspace.net
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html