Re: [SLUG] Which Virtualisation, and why?

2011-01-27 Thread Rod Butcher
The only downside I've found with KVM is its need for processor
virtualisation features. I use a cheap laptop without that feature, on
which I need to keep a load of virtual images, and for these VirtualBox
works great - I use the images for support and portability purposes
rather than production data crunching.
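For anyone wondering whether a given box has the feature, the CPU
flags will tell you - a quick check (kvm-ok is from Ubuntu's
cpu-checker package):

  # count cores advertising hardware virtualisation - 0 means no KVM
  egrep -c '(vmx|svm)' /proc/cpuinfo

  # on Ubuntu, kvm-ok gives a plain-English verdict
  kvm-ok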
Are the benefits of KVM related to its production throughput
capabilities? RedHat looks to me to be pushing it as part of
"branding" - it needs flagship technologies to differentiate itself.
Any pointers to articles on the subject?
thanks
Rod

On 01/10/11 21:03, Dean Hamstead wrote:
> Hi David
> 
> All the linux big boys are moving fast to KVM. Redhat and IBM have
> abandoned Xen completely, making it an out-of-kernel patch set
> maintained by Citrix and perhaps code from Oracle. You'll find that
> Debian has also elected to discontinue Xen in the next release.
> 
> Virtualbox is still nice for desktop quasi-trivial virtualisation. (I'm
> sure someone objects to that, and has taken it to a huge scale...)
> 
> KVM is still the only in-kernel hypervisor (if that's what it is, which
> it sort of isn't).
> 
> VMware is free as in beer.
> 
> At my telco of employ, we are using KVM extensively. I'm of the opinion
> it's the most sane design, gives you the most control and follows the unix
> way of re-using existing components to the nth degree.
> 
> Chances are it's already installed on your reasonably recent release
> distribution of choice.
> 
> Dean
> 
> On 10/01/11 20:57, david wrote:
>> I've migrated a server to virtualbox for the purpose of experimentation
>> (namely, to resolve upgrade issues going from Ubuntu 8.04 to 10.04). I
>> used MondoArchive to clone the hardware server onto a Virtualbox virtual
>> server. All good so far.
>>
>> I'm thinking of building future servers within virtual environments -
>> ie: the server built as a solitary virtual machine within its host.
>>
>> I'm hoping that will make future upgrades, migration and back-up easier.
>> I currently run 3 public servers, none of which are heavily loaded.
>>
>> What virtualisation solutions would people suggest? and is there any
>> reason this is not a good idea?
>>
>> thanks..
>>
>> David.
>>
>>
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-13 Thread onlyjob
Hi Daniel,

> Do you know of a good source of information on the VServer side?  I am
> curious to know what the technical differences are (and if that is,
> indeed, the correct argument from their side) so I can better
> understand what the trade-off here might actually be.
>

Sorry, I'm not the best person to ask - perhaps someone from the
VServer mailing list would know?

> ...er, OK.  So, the use of the RHEL kernel is relevant because RedHat
> invest substantially in stabilizing the kernel and backporting newer
> drivers to it.  This means that unlike the Linus 2.6.18 kernel (for
> example) you can get support for modern Intel NICs and other
> controllers, so it doesn't suffer nearly the trouble running on modern
> hardware that the version number might suggest.
>
> Given that, in many cases, security and driver support are the primary
> motivators for using a newer kernel this can, indeed, help with
> (though not universally solve) the issue that the OpenVZ kernel
> version lags the upstream kernel version available in distributions.
>

This is true, but I'm not a fan of RHEL, partly because they are so
over-conservative about kernel versions.
That is nothing but a pain in the arse for me: for example, their
2.6.18 kernel didn't support IO accounting, so when I tried
'dstat -M topbio' it returned 'Module topbio failed to load. (Kernel
has no I/O accounting, use at least 2.6.20)
None of the stats you selected are available.'
So useful tools like 'iotop' do not work (and they don't even have it
in the native repository), and it is very difficult to find out which
process is consuming the most disk IO.
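For what it's worth, you can check up front whether a kernel was built
with the accounting these tools need - a quick sketch (config path per
the usual Debian/RHEL convention):

  # iotop and 'dstat -M topbio' need per-task I/O accounting
  grep CONFIG_TASK_IO_ACCOUNTING /boot/config-$(uname -r)
  # expect CONFIG_TASK_IO_ACCOUNTING=y on 2.6.20+ kernels that enable it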


>> Obviously 2.6.32 has a number of important features, notably KSM
>> (which makes a lot of sense for a virtualisation host) and also ext4.
>
> Er, as of 2.6.36 the KSM feature still scans only memory specifically
> registered to it by the application.  So far as I can see the VServer
> patches don't do anything special to mark memory as possible to share
> for code running in them, and pretty much nothing other than KVM/qemu
> does out in user-space, so I wouldn't have thought that KSM made much
> difference.
>

Fantastic, thanks very much for this. I didn't know that memory has
to be tagged in order for KSM to work. But this makes another
fantastic VServer feature (hashify) even more important.
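(For anyone else following along: as I understand it, applications opt
their pages in with madvise(MADV_MERGEABLE), and sysfs will show
whether ksmd actually has anything to merge - a rough sketch:)

  # enable the KSM daemon (as root) and see what it is merging
  echo 1 > /sys/kernel/mm/ksm/run
  cat /sys/kernel/mm/ksm/pages_sharing   # pages deduplicated right now
  # near zero here means nothing has tagged its memory MADV_MERGEABLE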

Here is how Gordon Bobic describes it: "The absolute killer feature of
vservers for me is hashify. In a nutshell, it adds a feature to
provide copy-on-write hard-links, which means that once you have
hashified your guests, all the DLLs have the same inode number and
mmap into the same memory. That means that if you have 100 guests
running the same base setup, you don't have 100 instances of glibc
wasting RAM, but only one. On top of that, since identical files are
hard-linked, it makes the cache efficiency much greater. This means
you can overbook your memory way, way more than you otherwise could
and gain some performance at the same time."
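You can verify the hard-linking yourself by comparing inode numbers
across guests - a sketch, with guest paths that are purely
illustrative of a typical /var/lib/vservers layout:

  # identical files in hashified guests should share one inode
  stat -c '%i %n' /var/lib/vservers/guest1/lib/libc-2.7.so \
                  /var/lib/vservers/guest2/lib/libc-2.7.so
  # matching inodes = one copy on disk and one copy in the page cache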


> As to ext4 ... *shrug*  I think that is a religious debate that would
> distract from the real discussion at hand; I regard anyone running
> ext4 on a production system as vaguely strange since it is still so
> young and experimental compared to more tested code but, obviously,
> you don't share that reluctance to move. :)

That's because I did my research and experimentation. ;)
At the moment I consider ext4 safe if mounted with the parameters
"data=ordered,journal_checksum".
I trust ext4 with sensitive data. You may argue that it is risky or
irresponsible, but so far I have had no reason to regret it.
On large partitions (>2 TB) it makes sense. Again, VServer on ext4
appears to perform better than OpenVZ on ext3 (as observed on the very
same hardware).
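For the record, that is just an ordinary mount option string - e.g.
an fstab line like this (device and mount point illustrative):

  # /etc/fstab - ext4 with ordered data mode plus journal checksumming
  /dev/sdb1  /srv/guests  ext4  data=ordered,journal_checksum  0  2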

> Anyway, thanks for sharing your experiences and discussing this.  I
> like to talk through these issues so I can better understand where
> things stand – and I already learned some useful stuff I didn't know
> from you. :)
>
> Regards,
>    Daniel

Indeed I should thank you, because I learned from you as well. Thank you! :)

Regards,
Onlyjob.
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread Daniel Pittman
On Wed, Jan 12, 2011 at 20:10, dave b  wrote:

> Also has anyone looked at LXC :P ?

In my previous job we did some testing on developer systems; our
conclusion was that LXC was at least a couple of years from being
useful in the real world, based on a pretty much endless collection of
shortfalls and bugs found in testing.

My very strong hope is that it will stabilise, and that one of the
implementations built on the kernel tools will become the standard for
doing this inside Linux (the libvirt LXC and plain LXC projects are
entirely different user-space code).

> Having run identical KVM guests, I found that KSM actually wasn't
> that much of a benefit relative to its CPU use (while it doesn't seem
> to use much ... potentially the CPU could downclock to save power
> instead of running KSM).

We never saw much benefit, and were pretty happy that we got better
memory use overall from OpenVZ containers than from KVM machines, even
where the workloads were identical.

Regards,
Daniel
-- 
✉ Daniel Pittman 
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread dave b
Also has anyone looked at LXC :P ?

Having run identical KVM guests, I found that KSM actually wasn't
that much of a benefit relative to its CPU use (while it doesn't seem
to use much ... potentially the CPU could downclock to save power
instead of running KSM).
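(If the scanning cost bothers anyone, ksmd's appetite is tunable
through sysfs - a sketch, values purely illustrative:)

  # make ksmd scan fewer pages, less often - or stop it entirely
  echo 100  > /sys/kernel/mm/ksm/pages_to_scan
  echo 2000 > /sys/kernel/mm/ksm/sleep_millisecs
  echo 0    > /sys/kernel/mm/ksm/run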
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread Daniel Pittman
On Wed, Jan 12, 2011 at 16:11, onlyjob  wrote:

> Regarding the virtual loopback, it seems that in standard builds of
> the VServer-enabled 2.6.32 kernels available from the Debian
> repositories this problem does not exist. I'm not too sure, though; I
> don't remember experiencing it.

It was definitely there in the stable releases of Debian before Lenny;
presumably they have since decided the experimental VServer patches
are sufficiently stable, or whatever.  Anyway, it is nice that they
have solved it. :)

> Besides, it is possible to change the localhost address in /etc/hosts

...but not to remap 127.0.0.1, easily create another interface named
lo, or bind to 0.0.0.0 - all of which a surprisingly large number of
packages assumed would always be present and operational in Debian
(and Ubuntu is generally worse about this. :)
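A quick illustration of why the /etc/hosts trick only goes so far
(addresses purely illustrative, assuming a guest whose only address is
an external one):

  # /etc/hosts can point the *name* at the guest's external address...
  echo '192.168.1.10 localhost' >> /etc/hosts
  getent hosts localhost          # now resolves to 192.168.1.10

  # ...but software that binds the *address* 127.0.0.1 still breaks,
  # since there is no lo interface carrying that address
  nc -l 127.0.0.1 8080            # (OpenBSD netcat syntax) fails here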

> Absence of network virtualization in VServer is deliberate and for a
> good reason.

I can't find much useful information on their website about why, but
Wikipedia claims that this is based on isolation rather than
virtualization to "avoid overhead", which seems relatively bogus to me
given that the in-kernel network namespace support is pretty much
directly "isolation" based.

Do you know of a good source of information on the VServer side?  I am
curious to know what the technical differences are (and if that is,
indeed, the correct argument from their side) so I can better
understand what the trade-off here might actually be.

[...]

> I don't see how kernel support is relevant to RHEL upstream - a year
> ago there was no OpenVZ support for 2.6.32 whatsoever. And frankly
> this was one of the reasons I chose VServer for a machine hosting
> around 20 VMs.

...er, OK.  So, the use of the RHEL kernel is relevant because RedHat
invest substantially in stabilizing the kernel and backporting newer
drivers to it.  This means that unlike the Linus 2.6.18 kernel (for
example) you can get support for modern Intel NICs and other
controllers, so it doesn't suffer nearly the trouble running on modern
hardware that the version number might suggest.

Given that, in many cases, security and driver support are the primary
motivators for using a newer kernel this can, indeed, help with
(though not universally solve) the issue that the OpenVZ kernel
version lags the upstream kernel version available in distributions.

> Obviously 2.6.32 has a number of important features, notably KSM
> (which makes a lot of sense for a virtualisation host) and also ext4.

Er, as of 2.6.36 the KSM feature still scans only memory specifically
registered to it by the application.  So far as I can see the VServer
patches don't do anything special to mark memory as possible to share
for code running in them, and pretty much nothing other than KVM/qemu
does out in user-space, so I wouldn't have thought that KSM made much
difference.

As to ext4 ... *shrug*  I think that is a religious debate that would
distract from the real discussion at hand; I regard anyone running
ext4 on a production system as vaguely strange since it is still so
young and experimental compared to more tested code but, obviously,
you don't share that reluctance to move. :)

[...]

> "*more performant": I agree with you that difference in network
> performance between VServer and OpenVZ is not terribly different.
> Perhaps it can be manifested with some sort of artificial testing.
> However here I was quoting Herbert Poetzl (VServer developer).
> While performance difference is not too big there is another thing
> which I believe is equally important - simplicity. If the same result
> can be achieved easier, without even little virtualization overhead it
> is certainly better, more maintainable, probably has less bugs etc.
> Simplicity matters.

*nod*  Part of my view on the subject is that VServer made some bad
technical decisions that kept their kernel code simple in exchange for
adding a huge amount of complexity to every container; part of that
(lo virtualization) they have obviously decided can be corrected these
days.

So, I agree, but I think that whole system complexity is a much more
important metric than just kernel complexity.  (OTOH, I also think
that a bunch of the OpenVZ bits – like UBC – are disasters of
complexity and I am very glad they will not make it in to the mainline
kernel. :)

Anyway, thanks for sharing your experiences and discussing this.  I
like to talk through these issues so I can better understand where
things stand – and I already learned some useful stuff I didn't know
from you. :)

Regards,
Daniel
-- 
✉ Daniel Pittman 
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread onlyjob
Regarding the virtual loopback, it seems that in standard builds of
the VServer-enabled 2.6.32 kernels available from the Debian
repositories this problem does not exist. I'm not too sure, though; I
don't remember experiencing it. Besides, it is possible to change the
localhost address in /etc/hosts.

Absence of network virtualization in VServer is deliberate and for a
good reason.

My bad - for some reason I mistakenly thought that the OpenVZ license
was not GPL - thanks for highlighting this error.

I don't see how kernel support is relevant to RHEL upstream - a year
ago there was no OpenVZ support for 2.6.32 whatsoever. And frankly
this was one of the reasons I chose VServer for a machine hosting
around 20 VMs.
Obviously 2.6.32 has a number of important features, notably KSM
(which makes a lot of sense for a virtualisation host) and also ext4.
At that time (a year ago) I installed a VServer-patched 2.6.32 kernel
from the native Debian repository.

"*more performant": I agree with you that difference in network
performance between VServer and OpenVZ is not terribly different.
Perhaps it can be manifested with some sort of artificial testing.
However here I was quoting Herbert Poetzl (VServer developer).
While performance difference is not too big there is another thing
which I believe is equally important - simplicity. If the same result
can be achieved easier, without even little virtualization overhead it
is certainly better, more maintainable, probably has less bugs etc.
Simplicity matters.

"Easier":
Well this is really quite a subjective matter. Available tools is a
different argument.
I got my first experience with OpenVZ about 18 months ago when I
created several VMs but there were some problems motivating my
migration to VServer - a decision I've never regret about. Somehow I
found memory management easier for VServer. It could be just my
perception but to me Vserver is easier to configure and use. Debian
make VServer installation trivial.

One of my problems with OpenVZ was understanding how its memory
limits work. It is indeed a problem related to lack of experience, but
a number of times services inside an OpenVZ VM failed to allocate the
required amount of RAM, so I tweaked some parameter until it happened
again, then I had to tweak another setting, and so on and so on. After
a week of struggling I had no confidence in the settings I was using,
and I had to read a lot to get a detailed understanding of all those
parameters. Obviously the defaults were not good.
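For anyone fighting the same thing: the failcnt column in
/proc/user_beancounters shows which limit a guest is actually hitting
- a rough sketch (container ID illustrative):

  # nonzero failcnt in the last column = that limit has been hit
  vzctl exec 101 cat /proc/user_beancounters | \
      awk '$NF ~ /^[0-9]+$/ && $NF > 0'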
Then I had a chat with a guy from another company who tried to adopt
OpenVZ for a large Java-based application spanning a dozen VMs.
They gave up after running into problems with Java memory management
in OpenVZ, so they ended up using KVM for the project. (Personally I
believe they just didn't have enough patience to chase down all the
problems.)

When I decided to try VServer I already had 5 or 6 OpenVZ VMs.
Using VServer was surprisingly easy (though perhaps the documentation
lacks some up-to-date examples), so soon enough I found myself
migrating physical servers into VServer guests and creating more VMs,
mostly Debian- or CentOS-based.
Migration to VServer was trivial for me - in a year I have had no
problems with memory allocations in any of more than 20 VServer VMs,
many of which run Java application servers. Defaults are not
restrictive in VServer, so it is easier to set up a VM and restrict it
later, once its configuration is finalized and its memory usage is
known.

I like VServer more, particularly the way we do things in VServer.
To me the administration effort is less with VServer, but you may
argue that this is a matter of experience.

Regards,
Onlyjob.


On 12 January 2011 14:56, Daniel Pittman  wrote:
> On Tue, Jan 11, 2011 at 16:24, onlyjob  wrote:
>
>> No, no, please not OpenVZ. It is certainly not for beginners.
>> Better use VServer instead.
>> I used both, first OpenVZ (but was never really happy with it) and then 
>> VServer.
>
> Have VServer added network virtualization yet?  Last time I used it
> they hadn't, so your containers didn't have, for example, the loopback
> interface, or a 127.0.0.1 address they could use.
>
> That made for a constant, ongoing pain in the neck compared to OpenVZ,
> which *did* look like a normal system.  Every single distribution
> package that assumed, for example, that it could talk to 'localhost'
> would do the wrong thing.
>
> Ah.  I see the experimental releases do add support for a virtualized
> loopback adapter, along with IPv6, which is nice, and probably
> addresses my biggest operational issue with VServer.
>
>> There are a number of benefits of VServer over OpenVZ:
>>
>> * GPL License
>
> http://openvz.org/documentation/licenses
> The OpenVZ software — the kernel and the user-level tools — is
> licensed under GNU GPL version 2.
>
> It is also notable that a bunch of the upstream, in-kernel code *is*
> from OpenVZ, including a bunch of the namespace support that underpins
> the LXC implementations and, these days, OpenVZ itself.
>
> Can you tell me where you got the impression that OpenVZ was not GPL?

Re: [SLUG] Which Virtualisation, and why?

2011-01-11 Thread Dean Hamstead

It's FOSS, GPL etc.

Dean

On 12/01/11 15:20, david wrote:
> dave b wrote:
>> Hum ... easy virtualisation for those who don't want to do it manually ...
>> http://www.proxmox.com/ - you can use both kvm and openvz and it has a
>> nice webgui.
>
> From their homepage:
>
>   Search Keyword: licence
>   Total: 0 results found
>
> --
> http://fragfest.com.au
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-11 Thread david



dave b wrote:
> Hum ... easy virtualisation for those who don't want to do it manually ...
> http://www.proxmox.com/ - you can use both kvm and openvz and it has a
> nice webgui.

From their homepage:

  Search Keyword: licence

  Total: 0 results found

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-11 Thread Daniel Pittman
On Tue, Jan 11, 2011 at 16:24, onlyjob  wrote:

> No, no, please not OpenVZ. It is certainly not for beginners.
> Better use VServer instead.
> I used both, first OpenVZ (but was never really happy with it) and then 
> VServer.

Have VServer added network virtualization yet?  Last time I used it
they hadn't, so your containers didn't have, for example, the loopback
interface, or a 127.0.0.1 address they could use.

That made for a constant, ongoing pain in the neck compared to OpenVZ,
which *did* look like a normal system.  Every single distribution
package that assumed, for example, that it could talk to 'localhost'
would do the wrong thing.

Ah.  I see the experimental releases do add support for a virtualized
loopback adapter, along with IPv6, which is nice, and probably
addresses my biggest operational issue with VServer.

> There are a number of benefits of VServer over OpenVZ:
>
> * GPL License

http://openvz.org/documentation/licenses
The OpenVZ software — the kernel and the user-level tools — is
licensed under GNU GPL version 2.

It is also notable that a bunch of the upstream, in-kernel code *is*
from OpenVZ, including a bunch of the namespace support that underpins
the LXC implementations and, these days, OpenVZ itself.

Can you tell me where you got the impression that OpenVZ was not GPL?

> * Better kernel support:
> OpenVZ kernel 2.6.32 became available only recently.
> VServer has supported 2.6.32 for a while - much, much longer. OpenVZ's
> adoption of new kernels is quite slow - perhaps just too slow...

FWIW, because their upstream kernel is based on the RHEL kernel
releases, we often found that they had sufficiently recent drivers
despite the older core version.  This is a genuine drawback, however,
and makes it hard to have upstream support if you are not using RHEL
as your base system (eg: Debian, Ubuntu.)

Er, also, am I looking at the right place?  I went to check out the
"feature equivalent" stuff because I am quite interested in keeping
up, and the linux-vserver site tells me that the latest stable release
is vs2.2.0.7 for 2.6.22.19 – they have an *experimental* patch for
2.6.32, but I presume there must be some other stable release for the
.32 series or something?

[...]

> * more performant:
>  Linux-VServer has no measurable overhead for
>  network isolation and allows full performance
>  (OpenVZ reports 1-3% overhead, not verified)

Our measurements show pretty much identical performance cost for
either tool, FWIW, and we generally found that either of them was able
to exhaust the IOPS or memory capacity of a modern server well before
they could make a couple of percent of CPU overhead matter.  (KVM-like
tools were far worse for this, of course, because of their increased
IO and memory overheads.)
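(Our methodology was nothing exotic - roughly the sketch below, which
anyone can reproduce; hostname and durations illustrative:)

  # server on the bare host, client inside a container, then swap;
  # compare throughput plus CPU from a concurrent vmstat
  iperf -s                       # receiver
  iperf -c testhost -t 60 -P 4   # sender: 60 seconds, 4 streams
  vmstat 5                       # watch CPU overhead while it runs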

[...]

> * Easier.

I don't agree here: other than the more modern set of front ends (like
Proxmox) for OpenVZ, I never found there to be a detectable difference
in tooling overhead between VServer and OpenVZ, and OpenVZ was actually
a bit easier to fiddle with outside the rules and all.

Regards,
Daniel
-- 
✉ Daniel Pittman 
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-11 Thread dave b
Hum ... easy virtualisation for those who don't want to do it manually ...
http://www.proxmox.com/ - you can use both kvm and openvz and it has a
nice webgui.
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-11 Thread onlyjob
No, no, please not OpenVZ. It is certainly not for beginners.
Better use VServer instead.
I used both, first OpenVZ (but was never really happy with it) and then VServer.

There are a number of benefits of VServer over OpenVZ:

* GPL License

* Better kernel support:
OpenVZ kernel 2.6.32 became available only recently.
VServer has supported 2.6.32 for a while - much, much longer. OpenVZ's
adoption of new kernels is quite slow - perhaps just too slow...

* less intrusive:
 the Linux-VServer patch against 2.6.36 is 753K;
 the OpenVZ patch for 2.6.36 does not exist; the
 patch for 2.6.32 (seemingly the latest) is 4.9M;
 note that the features are roughly the same

* more performant:
 Linux-VServer has no measurable overhead for
 network isolation and allows full performance
 (OpenVZ reports 1-3% overhead, not verified)

* better integrated:
 Linux-VServer has been around for 10 years now and
 supports all Linux platforms and architectures
 (OpenVZ supports only 6, and mainly RH(EL))

* Easier.

Regards,
Onlyjob.


On 11 January 2011 14:22, Nick Andrew  wrote:
> On Mon, Jan 10, 2011 at 08:57:14PM +1100, david wrote:
>> What virtualisation solutions would people suggest?
>
> OpenVZ ...
>
>  - lightweight
>  - flexible resource limits
>  - linux based (i.e. containers and process isolation, not machine emulation)
>  - uses host filesystem, which helps with dynamic resizeable rootfs
>  - minimal functional installs from around 300 megs on disk
>  - can live-migrate across hosts with some constraints
>  - can create, install and start a new VM in under 1 minute.
>
> Nick.
> --
> PGP Key ID = 0x418487E7                      http://www.nick-andrew.net/
> PGP Key fingerprint = B3ED 6894 8E49 1770 C24A  67E3 6266 6EB9 4184 87E7
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
>
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-10 Thread Nick Andrew
On Mon, Jan 10, 2011 at 08:57:14PM +1100, david wrote:
> What virtualisation solutions would people suggest?

OpenVZ ...

  - lightweight
  - flexible resource limits
  - linux based (i.e. containers and process isolation, not machine emulation)
  - uses host filesystem, which helps with dynamic resizeable rootfs
  - minimal functional installs from around 300 megs on disk
  - can live-migrate across hosts with some constraints
  - can create, install and start a new VM in under 1 minute.
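The last point in practice looks something like this (container ID
and template name illustrative):

  # create, start and enter a container - comfortably under a minute
  vzctl create 101 --ostemplate debian-5.0-x86_64 --ipadd 10.0.0.101
  vzctl start 101
  vzctl enter 101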

Nick.
-- 
PGP Key ID = 0x418487E7  http://www.nick-andrew.net/
PGP Key fingerprint = B3ED 6894 8E49 1770 C24A  67E3 6266 6EB9 4184 87E7
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-10 Thread Jam
On Tuesday 11 January 2011 09:00:02 slug-requ...@slug.org.au wrote:
> Hi David
> 
> All the linux big boys are moving fast to KVM. Redhat and IBM have
> abandoned Xen completely, making it an out-of-kernel patch set
> maintained by Citrix and perhaps code from Oracle. You'll find that
> Debian has also elected to discontinue Xen in the next release.
> 
> Virtualbox is still nice for desktop quasi-trivial virtualisation. (I'm
> sure someone objects to that, and has taken it to a huge scale...)
> 
> KVM is still the only in-kernel hypervisor (if that's what it is, which
> it sort of isn't).
> 
> VMware is free as in beer.
> 
> At my telco of employ, we are using KVM extensively. I'm of the opinion
> it's the most sane design, gives you the most control and follows the unix
> way of re-using existing components to the nth degree.
> 
> Chances are it's already installed on your reasonably recent release
> distribution of choice.
> 
> Dean
> 
> On 10/01/11 20:57, david wrote:
> > I've migrated a server to virtualbox for the purpose of experimentation
> > (namely, to resolve upgrade issues going from Ubuntu 8.04 to 10.04). I
> > used MondoArchive to clone the hardware server onto a Virtualbox virtual
> > server. All good so far.
> > 
> > I'm thinking of building future servers within virtual environments -
> > ie: the server built as a solitary virtual machine within its host.
> > 
> > I'm hoping that will make future upgrades, migration and back-up easier.
> > I currently run 3 public servers, none of which are heavily loaded.
> > 
> > What virtualisation solutions would people suggest? and is there any
> > reason this is not a good idea?

David, I totally agree, but I think for me at home, wanting to
virtualize a server or two, VirtualBox is a simple, quick and easy
option - much easier than KVM.
My two servers have no desktop; rdesktop makes it easy from boot
onwards. I use ssh and the console.
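(For the curious, headless operation is one command per VM - a sketch,
VM name illustrative, assuming VRDP is enabled in the VM's settings:)

  # start the guest with no GUI, then reach it over RDP or just ssh in
  VBoxManage startvm "server1" --type headless
  rdesktop localhost:3389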

James
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-10 Thread Mark Walkom
I'd suggest something like XenServer (if you use Windows by day).
It is very simple to use and manage, which is handy when it's just a
hobby-type approach, and it saves having to worry about installing and
managing an OS and then the hypervisor, as you would with KVM; plus
their GUI tools are great (again, *if* you use Windows at any point).

You could turn your existing servers into a hardware pool too, irrespective
of the platform you end up using.

On 10 January 2011 20:57, david  wrote:

> I've migrated a server to virtualbox for the purpose of experimentation
> (namely, to resolve upgrade issues going from Ubuntu 8.04 to 10.04). I used
> MondoArchive to clone the hardware server onto a Virtualbox virtual server.
> All good so far.
>
> I'm thinking of building future servers within virtual environments - ie:
> the server built as a solitary virtual machine within its host.
>
> I'm hoping that will make future upgrades, migration and back-up easier. I
> currently run 3 public servers, none of which are heavily loaded.
>
> What virtualisation solutions would people suggest? and is there any reason
> this is not a good idea?
>
> thanks..
>
> David.
>
>
> --
> SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
> Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
>
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which Virtualisation, and why?

2011-01-10 Thread Dean Hamstead

Hi David

All the linux big boys are moving fast to KVM. Redhat and IBM have
abandoned Xen completely, making it an out-of-kernel patch set
maintained by Citrix and perhaps code from Oracle. You'll find that
Debian has also elected to discontinue Xen in the next release.

Virtualbox is still nice for desktop quasi-trivial virtualisation. (I'm
sure someone objects to that, and has taken it to a huge scale...)

KVM is still the only in-kernel hypervisor (if that's what it is, which
it sort of isn't).

VMware is free as in beer.

At my telco of employ, we are using KVM extensively. I'm of the opinion
it's the most sane design, gives you the most control and follows the unix
way of re-using existing components to the nth degree.

Chances are it's already installed on your reasonably recent release
distribution of choice.
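Easy enough to verify - a quick sketch:

  # kernel side: module loaded and the device node present?
  lsmod | grep kvm
  ls -l /dev/kvm
  # user side: look for a qemu-kvm (or kvm) package in your distro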


Dean

On 10/01/11 20:57, david wrote:

> I've migrated a server to virtualbox for the purpose of experimentation
> (namely, to resolve upgrade issues going from Ubuntu 8.04 to 10.04). I
> used MondoArchive to clone the hardware server onto a Virtualbox virtual
> server. All good so far.
>
> I'm thinking of building future servers within virtual environments -
> ie: the server built as a solitary virtual machine within its host.
>
> I'm hoping that will make future upgrades, migration and back-up easier.
> I currently run 3 public servers, none of which are heavily loaded.
>
> What virtualisation solutions would people suggest? and is there any
> reason this is not a good idea?
>
> thanks..
>
> David.



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html