Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Amos Shapira
On 13 May 2010 13:45, Jake Anderson ya...@vapourforge.com wrote:
 Amos Shapira wrote:

 We use Xen+CentOS 5+DRBD+Linux-HA to achieve similar goals.
 We actually build each side of the cluster separately using automatic
 deployment tools (puppet and some glue around it).
 We use ext3 on the DRBD partition; the DRBD is actually managed from
 inside the Xen guests, not the host (we have different DRBD partitions
 for different guests).


 That's an interesting idea, so you're giving the VM the raw partition
 /dev/sdfoo and running DRBD in the guest on that.

We give the Xen guests an LV from the host (mapped with a disk =
['phy:...'] line in /etc/xen/guest), then we run DRBD inside the Xen
guest on that partition (it looks like just another physical disk from
inside the guest).
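
For anyone following along, the guest definition looks roughly like
this (a sketch only: the LV names, device names and sizes here are
invented for illustration, not our actual config):

```
# /etc/xen/guest1 - hypothetical example
name   = "guest1"
memory = 1024
disk   = [ 'phy:/dev/vg0/guest1-root,xvda,w',
           'phy:/dev/vg0/guest1-data,xvdb,w' ]
vif    = [ 'bridge=xenbr0' ]
# DRBD is then configured inside the guest on top of xvdb, which the
# guest sees as an ordinary physical disk.
```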

 how are you getting around booting? or are you doing something with xen for
 that, feeding it a running kernel it can mount / as a drbd or some such?

What do you mean by that? The Xen guest's OS is not on the DRBD, just
a data partition. We build the guests using the usual kickstart. CentOS
comes with DRBD built-in (I think I saw that RHEL 6 beta dropped it).


 Linux-HA gives automatic fail-over (been tested a few times under
 fire when hardware failed - the other side took over automatically
 and all we saw from this was an SMS from Nagios about the crashed
 server being down).


 That is pretty much the optimal solution, nice to hear it working in the
 real world.

 But DRBD could come at a performance cost, depends on how much you are
 pushing the setup it could hurt and we are looking at cheap SAN
 replacements for the development/office stuff.


 It depends on the settings for your DRBD setup as well, doesn't it? If you
 turn its paranoia level down somewhat I was under the impression its
 performance hit wasn't that large. I.e. set it to OK on transmission.

You must be referring to DRBD's protocol A|B|C settings. We use
protocol B already (complete upon arrival to the remote buffer
cache). We can use that because we use controllers with battery
backup for their cache.
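
For reference, the protocol choice lives in the resource section of
drbd.conf; something like this (resource, host and device names are
invented for the example):

```
resource r0 {
  protocol B;  # write completes once the data reaches the peer's buffer cache

  on node1 {
    device    /dev/drbd0;
    disk      /dev/xvdb;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/xvdb;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```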




 If you want seamless transitions you're going to want something like OCFS or


 We tried to set up GFS on top of DRBD (+ on top of Xen) in order to move
 some of the functions to primary/primary mode but the performance was
 horrendous. Maybe we could have got it to work if we'd spent more time
 tweaking it, but we just switched back to a primary/secondary and ext3
 setup for now.


 What sort of load were you running, it sounds disk intensive, I've found
 that even raw with paravirt drivers diskio tasks are not VM friendly.

It wasn't very heavy and we never got to the bottom of it, but the guy
who worked on it noticed that it sometimes lost the connection between
the nodes and wasted lots of time on auto-healing. Even when that didn't
happen the disk was very sluggish. That's not typical for DRBD, which
is usually very, very smart, and our internal network is very reliable.
We just never had the justification to investigate this all the way.




 Correct.

 Another option brought by a hosting provider we talked to was to setup
 a couple of CentOS servers (or FreeNAS/Openfiler as was mentioned
 before) to replicate the disks between them using DRBD and serve
 access to the disks through iSCSI to the application servers.
 Effectively building a highly-available SAN cluster from existing
 hardware. The possible advantage there might be that you have hosts
 (CPU, bus, disk controller) dedicated for disk access so even though
 the applications access the disks over a network it could still free
 up other resources and make the app actually run faster.


 I was thinking about the possibility of running iSCSI nodes and using mdadm
 to perform the equivalent of DRBD, but I figured if it tried to stripe reads
 it would be a massive performance hit.

I'd guess so though I never tried this. I guess there is a lot of
smarts in DRBD that just plain mdadm over iSCSI won't have.

 I.e. run hosts A and B as iSCSI nodes, create your VM, and mount a node on
 host A and one on host B under mdadm.

 As far as I saw on the web (a bit to my surprise), ext3 journaling is
 supposed to be good enough to allow live snapshots, so you don't have
 to take the client down for this. Many people on the net report doing
 backups that way. Windows NTFS might be different but it might also be
 good enough for such a trick.


 It'd be basically like restoring the power on a machine after yanking the
 cable. I wouldn't bet on that working reliably even at hobby scale; I've
 had enough corrupted tables on my MythTV install at home resulting from
 that that I converted the thing to InnoDB rather than MyISAM and stuck it
 on a UPS. When I said snapshot, I was referring to the practice where you
 take an

Well, that could be just MyISAM being crap, regardless of the
underlying disk and file system.

 image of the running machine's RAM, meaning you can restore it to a known
 working state exactly, with no risk of really screwing things up. (well no
 

Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Dean Hamstead
Stay away from Xen as IBM and Red Hat have both abandoned it in favour of
KVM.
Stay away from VMware as it's closed source and only developed by VMware :)

KVM is in CentOS 5.4 and every other distribution (Debian etc). CentOS
4.8 supports virtio for much faster IO and network performance.

At my undisclosed business we are running 14 physical machines (128 GB
RAM, 2x 6-core AMD each), each with ~100 VMs.

Pretty mind-boggling stuff. But much more easily managed with KVM on
Linux than that lock-you-out-make-you-use-our-GUI VMware thing.

Stuff like SELinux around VMs, for example, and KSM really works :)



Dean

On 5/13/2010, Jake Anderson ya...@vapourforge.com wrote:

Personally I'd go with the max memory setup you were talking about but I
wouldn't bother with the NAS.
With only 2 nodes DRBD is fairly easy to set up; it gives you complete
synchronisation of partitions, i.e. when you write in one place that write
will only come back as OK if it has made it across the network and been
written to disk on the remote machine (depending on settings). If you're
OK with a manual changeover with a little downtime (in the case of an
intentional transition between servers) I'd put something like ext4 on an
LVM on top of the DRBD partition, mainly to keep things fairly simple. To
migrate machines you shut down the guests, unmount the file system on
host A, mount it on host B and start the guests there.
If you want seamless transitions you're going to want something like OCFS
or somesuch for the file system, which gives you the ability to have it
mounted at both locations and hence live migration. You might be able to
feed your VMs raw LVM partitions on the DRBD system and not bother with
OCFS, which would make life easier, but I haven't looked into that.
The upside to this system is you don't have a NAS that can go down as a
single point of failure.

For your offsite backup I'd then snapshot the machines and LVMs and
rsync them to your remote location.
rsync of the memory snapshot could consume a decent amount of bandwidth;
it's probably going to be pretty volatile. If you can shut down the guest,
snapshot its disk, then boot it back up again, the rsync traffic should
only be a little over the quantity of changes made to the disk, i.e.
files added/changed, so not much more than your existing offsite backup
needs.
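
The shutdown-snapshot-boot cycle sketched above would look something
like this (volume names, sizes and paths are placeholders, and I'm
assuming a libvirt/KVM guest whose data sits on an LVM volume):

```shell
# Illustrative sketch only - adjust names and sizes for your setup.
virsh shutdown guest1                 # cleanly stop the guest
lvcreate -s -L 5G -n guest1-snap /dev/vg0/guest1-disk   # COW snapshot
virsh start guest1                    # guest is down only briefly

# rsync the (now consistent) snapshot offsite, then drop it
mount -o ro /dev/vg0/guest1-snap /mnt/snap
rsync -a --delete /mnt/snap/ backuphost:/backups/guest1/
umount /mnt/snap
lvremove -f /dev/vg0/guest1-snap
```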


I'm using KVM for my virtualisation and it seems to be working well;
very simple to use, and the host has a full OS there to do whatever you
want with. Currently I run mysql on the host to get a bit more
performance out of the machines (with a ~20 GB database) and the
application servers in VMs on the same machine, with mysql replication
to pass the data between the hosts.




Nigel Allen wrote:
 Greetings

 I need to formulate a DRP for a customer and thought that I would ask
 the slug for it's collective wisdom.

 Customer currently has 3 x HP rackmounted servers running CentOS 4.8
 and a Dell rackmounted server running Windows Server 2003.

 Backups are currently done to tape every night using Amanda.

 Given the nature of the business and the reliance it places on
 computer availability, we're looking at replication and virtualization
 as a first step and off-site replication of some sort as step two.

 First thought was to max out the memory on two of the servers, one for
 normal running and one as a hot or warm standby, and then virtualize
 all of the servers onto the two machines. An external consultant has
 already suggested doing this with VMware, installing the ESXi
 hypervisor on the two main servers and installing a NAS shared between
 the two systems (hot and cold) so that if the hot server fails, we can
 simply switch over to the cold server using the images from the NAS.

 Couple of things concern me about this approach. The first is using
 VMWare rather than a GPL solution. The second is where we would
 install the NAS. Physically, the office space is all under one roof
 but half the building has concrete floors and half has wooden. (The
 hot server is in the wooden main office, while the cold server was
 to go in the concrete floor area. There is also a firewall (a real
 one) in between the two areas).

 Questions:

 1) Can anyone offer any gotchas, regardless of how obvious they may
 seem to you?

 2) Is there a GPL solution that fits this scenario? Even if it's not
 a bare metal hypervisor and needs an O/S. Remember it has to virtualize
 both Server 2003 and CentOS.

 3) What's the minimum connection we would need between the NAS and
 the two servers sharing it?

 4) What kind of speed/bandwidth should we be looking at for the
 off-site replication?

 I'll happily take anything else anyone would like to throw at this -
 suggestions, reading matter etc - it's not an area of great expertise
 for us, having only paddled around the edges with VirtualBox.

 TIA

 Nigel.


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Piers Rowan



On 13/05/10 18:08, Dean Hamstead wrote:


At my undisclosed business we are running 14 physical machines, 128gig
ram 2x6 core amd, each with ~100 VMs.



What server hardware? Sun Fire X4140 Server?

Just curious.


P*



[SLUG] web dav setup

2010-05-13 Thread Ken Foskey

I need to set up a simple read-only WebDAV. No security. I installed
mod_dav_fs and it starts, but I cannot browse to the machine.

<Location /books>
  DAV On
  Order allow,deny
  Allow from All
</Location>

Anyone got any hints?

Ta
Ken



Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Mark Walkom
Just thought I would chime in and say this has been an awesome thread, lots
of stuff learnt just from a few hours today.

Thanks for sharing all.


Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Amos Shapira
On 13 May 2010 18:38, Dean Hamstead d...@fragfest.com.au wrote:
 Stay away from Xen as IBM and RedHat have both abandoned it in favour of
 KVM.
 Stay away from vmware as its closed source and only developed by vmware :)

 KVM is in centos 5.4 and every other distribution (debian etc). Centos
 4.8 supports virtio for much faster io and network performance.

 At my undisclosed business we are running 14 physical machines, 128gig
 ram 2x6 core amd, each with ~100 VMs.

 Pretty mind boggling stuff. But much more easily managed with KVM on
 linux than that lock-you-out-make-you-use-our-gui vmware thing.

 Stuff like SElinux around vm's for example, and KSM really works :)

Thanks for the input Dean.

Just to clarify - are you using KVM successfully on CentOS 5.4 (both
Dom0 and domU) today?
I got the impression it's in a "Technology Preview" (euphemism for
"beta testing"?) stage and there are still missing tools in 5.4.

I'd love to switch to KVM even though Xen works well for us simply
because I keep hearing that its performance is much better, and the
Xen in CentOS 5 is at least one generation behind the current version.

Cheers,

--Amos


Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Amos Shapira
On 13 May 2010 22:15, Phil Manuel p...@pkje.net wrote:
 We successfully run kvm on CentOS 5.4 as well, running a mix of windows XP,
 Ubuntu desktops, further CentOS 5.4 instances.
 Currently, we use virt-manager to manage the instances, but I'll be looking
 at Convirture: Enterprise-class management for open source virtualization in
 the near future.

Thanks very much Phil.

How is the stability and performance you see? The Release Notes and
Technical Notes for RHEL 5.5
(http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.5/html/Technical_Notes/libvirt.html)
left me with the impression that there is still bug fixing and
stability work being done on it.

Cheers,

--Amos


Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Phil Manuel
We successfully run KVM on CentOS 5.4 as well, running a mix of Windows XP,
Ubuntu desktops, and further CentOS 5.4 instances.

Currently, we use virt-manager to manage the instances, but I'll be looking at 
Convirture: Enterprise-class management for open source virtualization in the 
near future.


On 13/05/2010, at 10:05 PM, Amos Shapira wrote:

 On 13 May 2010 18:38, Dean Hamstead d...@fragfest.com.au wrote:
 Stay away from Xen as IBM and RedHat have both abandoned it in favour of
 KVM.
 Stay away from vmware as its closed source and only developed by vmware :)
 
 KVM is in centos 5.4 and every other distribution (debian etc). Centos
 4.8 supports virtio for much faster io and network performance.
 
 At my undisclosed business we are running 14 physical machines, 128gig
 ram 2x6 core amd, each with ~100 VMs.
 
 Pretty mind boggling stuff. But much more easily managed with KVM on
 linux than that lock-you-out-make-you-use-our-gui vmware thing.
 
 Stuff like SElinux around vm's for example, and KSM really works :)
 
 Thanks for the input Dean.
 
 Just to clarify - are you using KVM successfully on CentOS 5.4 (both
 Dom0 and domU) today?
 I got the impression it's in a Technology Preview (euphemism for
 beta testing?) stage and there are still missing tools in 5.4.
 
 I'd love to switch to KVM even though Xen works well for us simply
 because I keep hearing that its performance is much better, and the
 Xen in CentOS 5 is at least one generation behind the current version.
 
 Cheers,
 
 --Amos



Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Phil Manuel
I have to say they have been very stable. We don't do anything fancy to them
once they are built, as we can rebuild the CentOS ones from kickstart easily.
We rarely migrate instances to other machines, and when we do we just
rsync everything over and start up on the other machine.

Phil
On 13/05/2010, at 10:19 PM, Amos Shapira wrote:

 On 13 May 2010 22:15, Phil Manuel p...@pkje.net wrote:
 We successfully run kvm on CentOS 5.4 as well, running a mix of windows XP,
 Ubuntu desktops, further CentOS 5.4 instances.
 Currently, we use virt-manager to manage the instances, but I'll be looking
 at Convirture: Enterprise-class management for open source virtualization in
 the near future.
 
 Thanks very much Phil.
 
 How is the stability and performance you see? The Release Notes and
 Technical Notes for RHEL 5.5
 (http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.5/html/Technical_Notes/libvirt.html)
 left me with the impression that there is still bug fixing and
 stability work being done on it.
 
 Cheers,
 
 --Amos



Re: [SLUG] web dav setup

2010-05-13 Thread Ben Donohue

Hey Foskey

<Location /books>
  DAV On
  Order allow,deny
  Allow from All
  Deny from none    --- do you need this?
</Location>

Ben



On 13/05/2010 8:39 PM, Ken Foskey wrote:

 <Location /books>
   DAV On
   Order allow,deny
   Allow from All
 </Location>



Re: [SLUG] web dav setup

2010-05-13 Thread Rick Welykochy

Ben Donohue wrote:


<Location /books>
  DAV On
  Order allow,deny
  Allow from All
  Deny from none    --- do you need this?
</Location>


Also check your Apache error log. It will indicate denials.


cheers
rickw


--
_
Rick Welykochy || Praxis Services

In the modern world the stupid are cocksure while the intelligent are full of 
doubt.
  -- Bertrand Russell



Re: [SLUG] Virtualization - Whither goes thou?

2010-05-13 Thread Amos Shapira
On 13 May 2010 22:41, Phil Manuel p...@pkje.net wrote:
 I have to say they have been very stable. We don't do anything fancy to them 
 once they are built as we can rebuild the centos ones from kickstart easily.  
 We rarely migrate instances to other machines, and when we do we just
 rsync everything over and start up on the other machine.

Thanks very much!

I suspect I'll stick to Xen until RHEL/CentOS 6 comes out and
officially supports KVM (unless I missed the change of status of KVM
in 5.5 from "Technology Preview" (its status in 5.4) to "Supported",
have I?)

Cheers,

--Amos


 Phil
 On 13/05/2010, at 10:19 PM, Amos Shapira wrote:

 On 13 May 2010 22:15, Phil Manuel p...@pkje.net wrote:
 We successfully run kvm on CentOS 5.4 as well, running a mix of windows XP,
 Ubuntu desktops, further CentOS 5.4 instances.
 Currently, we use virt-manager to manage the instances, but I'll be looking
 at Convirture: Enterprise-class management for open source virtualization in
 the near future.

 Thanks very much Phil.

 How is the stability and performance you see? The Release Notes and
 Technical Notes for RHEL 5.5
 (http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.5/html/Technical_Notes/libvirt.html)
 left me with the impression that there is still bug fixing and
 stability work being done on it.

 Cheers,

 --Amos




Re: [SLUG] web dav setup

2010-05-13 Thread Jobst Schmalenbach

check whether the webdav lock db exists:

<IfModule mod_dav_fs.c>
  # Location of the WebDAV lock database.
  DAVLockDB /var/lib/dav/lockdb
</IfModule>

check whether the module is loaded:

LoadModule dav_module modules/mod_dav.so

I am not sure whether it should be directory and not location:

<Directory /books>
  DAV On
  Options Indexes Includes FollowSymLinks
  Order allow,deny
  Allow from All
</Directory>

and make sure you have Options Indexes turned on!

As for a sane security setting use this as well

  AllowOverride none
  <LimitExcept GET POST PUT>
    Deny from all
  </LimitExcept>




On Thu, May 13, 2010 at 08:39:18PM +1000, Ken Foskey (kfos...@tpg.com.au) wrote:
 
 I need to set up a simple read only webdav.   No security.   I installed
 the dav_fs and it starts but I cannot browse to the machine.
 
 <Location /books>
   DAV On
   Order allow,deny
   Allow from All
 </Location>
 
 Anyone got any hints.
 
 Ta
 Ken
 

-- 
Never share a foxhole with anyone braver than yourself.

  | |0| |   Jobst Schmalenbach, jo...@barrett.com.au, General Manager
  | | |0|   Barrett Consulting Group P/L  The Meditation Room P/L
  |0|0|0|   +61 3 9532 7677, POBox 277, Caulfield South, 3162, Australia