Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Daniel Pittman
Nigel Allen  writes:

> I need to formulate a DRP for a customer and thought that I would ask the
> slug for its collective wisdom.

[...]

> First thought was to max out the memory on two of the servers, one for normal
> running and one as a hot or warm standby, and then virtualize all of the
> servers onto the two machines.

If you can, having two machines in an active/active setup is much better: it
means you spend less money on idle hardware, and it also means that you don't
face a *sudden* load on the passive machine.

The reason that last point is valuable is that the time you discover, say, a
disk is having trouble is when you start putting load on it, not when it sits
idle.  Guess when you really don't want to find out you have disk issues on
your second machine?

> An external consultant has already suggested doing this with VMware,
> installing the ESXi hypervisor on the two main servers and installing a NAS
> shared between the two systems (hot and cold) so that if the hot server
> fails, we can simply switch over to the cold server using the images from
> the NAS.

This would let you load-balance also, which is quite nice.



> Couple of things concern me about this approach. The first is using VMWare
> rather than a GPL solution.

*shrug*  You say you plan to run Win32 under this; you are going to need
binary PV drivers for the disk and network to get acceptable performance
anyway, so you are already looking down the barrel of non-GPL software.

> The second is where we would install the NAS. Physically, the office space
> is all under one roof but half the building has concrete floors and half has
> wooden. (The hot server is in the wooden "main" office, while the cold
> server was to go in the concrete floor area. There is also a firewall (a
> real one) in between the two areas).

In your server room, connected by at least one Gigabit link to the servers.

Your replicated NAS, of course, lives in your DR location, wherever that is,
since you don't want a DR solution that works as long as the server room never
catches fire[1].


> Questions:
>
> 1) Can anyone offer any gotchas, regardless of how obvious they may seem to
> you?

ESXi hardware support is exciting; make sure you have capable hardware.

Pay for commercial support on whatever solution you end up with.  At the end
of year one, think about dropping it, but keep it until then.

Test.  If you don't test this stuff routinely it will never, ever work when
you need it to.

You need PV disk and network drivers to get the performance you expect.

You don't need a PV kernel under Linux, though it probably doesn't hurt:
almost all the cost comes from the disk and network, and almost everything has
PV drivers for those.


Make sure you understand what happens if you pull the network from the (or an)
active machine without otherwise turning it off.

Make sure you don't spend millions on the best servers, the best NAS, then
connect them together through a single network cable that gets cut, bringing
the entire thing to a grinding halt.


> 2) Is there a GPL solution that fits this scenario? Even if it's not a bare
> metal hypervisor and needs an O/S. Remember it has to virtualize both Server
> 2003 and CentOS

KVM can do what you want, but I don't believe there are PV disk drivers
available that are open source.  You need those.
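(For what it's worth, the Linux guests can at least use the in-kernel virtio
PV disk and network drivers, selected when the VM is defined. A rough sketch
with made-up names, paths and mirror URL, assuming libvirt's virt-install:)

    virt-install --name centos-guest --ram 2048 \
        --disk path=/var/lib/libvirt/images/centos-guest.img,size=20,bus=virtio \
        --network bridge=br0,model=virtio \
        --location http://mirror.example.com/centos/5/os/x86_64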

> 3) What's the minimum connection we would need between the NAS and the two
> servers sharing it?

A 9600bps GSM modem, provided your users have very low expectations. ;)

More seriously: assuming your disk I/O use is low enough you could get away
with 100Mbit, but you really want Gigabit — and, given that you want to live
through failure, you want several Gigabit links between the NAS and the
servers so that a single *cable* failure doesn't take down your entire system.


> 4) What kind of speed/bandwidth should we be looking at for the off-site
> replication.

That depends entirely on how much data you write during normal operation, and
how big your acceptable window for data loss is.  Talk to the vendor of your
NAS for details.

Generally, though, you want something with very low *latency* more than you
want something with very high bandwidth: having a *safe* write means that both
the local and DR NAS have acked the write as "being on disk".

If your latency is 10ms you have at least 10ms delay for every "safe" write.
If your latency is 20ms you double that, and cut your write performance in
half...

> I'll happily take anything else anyone would like to throw at this -
> suggestions, reading matter etc - it's not an area of great expertise for us
> having only paddled around the edges with Virtualbox.

This is *hard*.  Harder than it sounds.  Imagine it being as hard as you
think, and then it will likely be harder than that.

Daniel

No, seriously, still harder.  Don't forget to test it, and expect to find
things go pear shaped and die *anyway* during normal running.

Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Jake Anderson

Amos Shapira wrote:

> We use Xen+CentOS 5+DRBD+Linux-HA to achieve similar goals.
> We actually build each side of the cluster separately using automatic
> deployment tools (puppet and some glue around it).
> We use ext3 on the DRBD partition; the DRBD is actually managed from
> inside the Xen guests, not the host (we have different DRBD partitions
> for different guests).
  
That's an interesting idea: so you're giving the VM the raw partition
/dev/sdfoo and running DRBD in the guest on that.
How are you getting around booting? Or are you doing something with Xen for
that, feeding it a running kernel it can mount / as a DRBD device, or some such?



> Linux-HA gives automatic fail-over (been tested a few times "under
> fire" when hardware failed - the other side took over automatically
> and all we saw from this was an SMS from Nagios about the crashed
> server being down).
  
That is pretty much the optimal solution; nice to hear it working in the
real world.

> But DRBD could come at a performance cost; depending on how much you are
> pushing the setup it could hurt, and we are looking at cheap SAN
> replacements for the development/office stuff.
  
It depends on the settings for your DRBD setup as well, doesn't it? If
you turn its paranoia level down somewhat I was under the impression its
performance hit wasn't that large, i.e. set it to ok on transmission.
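Presumably the knob in question is the DRBD replication protocol: C only acks
a write once it is on the remote disk, B ("ok on transmission") acks once it
has reached the peer, and A acks as soon as it is on the local disk and in the
local send buffer. A minimal resource stanza as a sketch, with host names,
addresses and devices made up:

    resource r0 {
      protocol B;                 # ack once the write has reached the peer
      on hosta {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.1:7789;
        meta-disk internal;
      }
      on hostb {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.2:7789;
        meta-disk internal;
      }
    }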
  

>> If you want seamless transitions you're going to want something like OCFS or



> We tried to set up GFS on top of DRBD (+ on top of Xen) in order to move
> some of the functions to primary/primary mode but the performance was
> horrendous. Maybe we could get it to work if we spent more time
> tweaking it, but we just switched back to primary/secondary and an ext3 setup
> for now.
  
What sort of load were you running? It sounds disk intensive; I've found
that even raw with paravirt drivers, disk-I/O-heavy tasks are not VM friendly.
  
> Correct.


> Another option brought up by a hosting provider we talked to was to set up
> a couple of CentOS servers (or FreeNAS/Openfiler as was mentioned
> before) to replicate the disks between them using DRBD and serve
> access to the disks through iSCSI to the "application servers".
> Effectively building a highly-available SAN cluster from existing
> hardware. The possible advantage there might be that you have hosts
> (CPU, bus, disk controller) dedicated for disk access, so even though
> the applications access the disks over a network it could still free
> up other resources and make the app actually run faster.
  
I was thinking about the possibility of running iSCSI nodes and using
mdadm to perform the equivalent of DRBD, but I figured if it tried to
stripe reads it would be a massive performance hit.
I.e. run hosts A and B as iSCSI nodes, create your VM, and mount a node on
host A and one on host B under mdadm.
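If anyone does try that, RAID1 (rather than any striped level) with the remote
iSCSI leg marked "write-mostly" should keep reads off the network; a sketch
with made-up device names:

    # /dev/sdb = local disk, /dev/sdc = the disk exported over iSCSI by the other host
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/sdb --write-mostly /dev/sdc
    # reads are then served from the local leg; writes still go to both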



> As far as I saw on the web (a bit to my surprise), ext3 journaling is
> supposed to be good enough to allow live snapshots, so you don't have
> to take the client down for this. Many people on the net report doing
> backups that way. Windows NTFS might be different but it might also be
> good enough for such a trick.
  
It'd be basically like restoring the power on a machine after yanking
the cable; I wouldn't bet on that working reliably even at hobby
scale. I've had enough corrupted tables on my MythTV install at home
resulting from that that I converted the thing to InnoDB rather than
MyISAM and stuck it on a UPS. When I said snapshot, I was referring to
the practice where you take an image of the running machine's RAM, meaning
you can restore it to a known working state exactly, with no risk of
really screwing things up (well, no more than normal). The worst thing
the client machine would notice would be a clock warp.
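With Xen that is the save/restore pair (domain name and path made up; virsh
save/restore is the libvirt equivalent):

    xm save guest1 /var/lib/xen/save/guest1.chk    # pause the guest and dump its RAM to disk
    xm restore /var/lib/xen/save/guest1.chk        # bring it back exactly where it left off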




> In general - I try to stick to the tools which come bundled with
> CentOS. It comes with Xen 3.0 so that's what we use. CentOS 6 is
> expected to support KVM, and then we'll gladly switch to it (from what I
> heard about its performance over Xen I'd love to switch to it).
> libvirt should help avoid virtualisation solution dependency but so
> far we haven't got around to updating our home-grown scripts to use it.
>
> Cheers,
>
> --Amos
  

I like libvirt and virt-manager, pointy clicky goodness ;->


Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Nigel Allen

On 13/05/2010 11:54, Lindsay Holmwood wrote:

> On 13 May 2010 10:52, Nigel Allen wrote:
>>
>> Questions:
>>
>> 2) Is there a GPL solution that fits this scenario? Even if it's not a bare
>> metal hypervisor and needs an O/S. Remember it has to virtualize both Server
>> 2003 and CentOS
>
> KVM, Sheepdog[0], and libvirt. Sheepdog eliminates the need for a SAN
> or NAS; it uses the local storage on the machines to host the images.
> You can scale it horizontally pretty easily by adding more machines
> with big disks.
>
> [0] http://www.osrg.net/sheepdog/
>
>> 4) What kind of speed/bandwidth should we be looking at for the off-site
>> replication.
>
> Wholly depends on how much IO you're doing. Is this being hosted out
> of a data center? If not, it'll probably cost more than you can
> reasonably afford.


Understood - I knew it was a stupid question when I asked that one. 
Without going through a sizing exercise I figure it's probably going to 
be left until later when we can look at some kind of async replication.


N/



Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Amos Shapira
On 13 May 2010 11:45, Jake Anderson  wrote:
> Personally I'd go with the max memory setup you were talking about but I
> wouldn't bother with the NAS.
> With only 2 nodes DRBD is fairly easy to set up; it gives you complete
> synchronisation of partitions, i.e. when you write in one place that write
> will only come back as OK if it has made it across the network and been
> written to disk on the remote machine (depending on settings). If you're OK
> with a manual change-over with a little downtime (in the case of an
> intentional transition between servers) I'd put something like ext4 on an LVM
> on top of the DRBD partition, mainly to keep things fairly simple. To migrate
> machines you shut down the guests, unmount the file system on host A, mount
> it on host B and start the guests there.

We use Xen+CentOS 5+DRBD+Linux-HA to achieve similar goals.
We actually build each side of the cluster separately using automatic
deployment tools (puppet and some glue around it).
We use ext3 on the DRBD partition; the DRBD is actually managed from
inside the Xen guests, not the host (we have different DRBD partitions
for different guests).
Linux-HA gives automatic fail-over (been tested a few times "under
fire" when hardware failed - the other side took over automatically
and all we saw from this was an SMS from Nagios about the crashed
server being down).
But DRBD could come at a performance cost; depending on how much you are
pushing the setup it could hurt, and we are looking at cheap SAN
replacements for the development/office stuff.

> If you want seamless transitions you're going to want something like OCFS or

We tried to set up GFS on top of DRBD (+ on top of Xen) in order to move
some of the functions to primary/primary mode but the performance was
horrendous. Maybe we could get it to work if we spent more time
tweaking it, but we just switched back to primary/secondary and an ext3 setup
for now.

> somesuch for the file system, which gives you the ability to have it mounted
> at both locations and hence live migration, you might be able to feed your
> VMs raw LVM partitions on the DRBD system and not bother with OCFS which

Feeding the LVMs to the Xen guests will work. You can set up the DRBD
partition as a PV if you like, or set up "PC-style" partitions on it, or
just use it straight. "kpartx" + "losetup" are very handy tools for such
games (mainly for accessing the Xen DomU's "disks" from the Dom0).
However, if you want to use the DRBD in write/write mode and put an LVM
on top of it then I think you'll have to use Clustered LVM. Not sure,
though.
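For anyone who hasn't met them, the kpartx trick looks roughly like this
(volume and mount point names made up); it maps the partitions inside a DomU's
disk so the Dom0 can mount them:

    kpartx -av /dev/vg0/domudisk     # adds maps like /dev/mapper/domudisk1 (exact names vary)
    mount /dev/mapper/domudisk1 /mnt/domu-root
    # poke around, then undo it:
    umount /mnt/domu-root
    kpartx -d /dev/vg0/domudisk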

> would make life easier but I haven't looked into that.
> Upside to this system is you don't have a NAS that can go down as a single
> point of failure.

Correct.

Another option brought up by a hosting provider we talked to was to set up
a couple of CentOS servers (or FreeNAS/Openfiler as was mentioned
before) to replicate the disks between them using DRBD and serve
access to the disks through iSCSI to the "application servers".
Effectively building a highly-available SAN cluster from existing
hardware. The possible advantage there might be that you have hosts
(CPU, bus, disk controller) dedicated for disk access, so even though
the applications access the disks over a network it could still free
up other resources and make the app actually run faster.
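On the "application server" side that is just the standard open-iscsi
initiator dance (portal address and IQN made up here):

    iscsiadm -m discovery -t sendtargets -p 10.0.0.10
    iscsiadm -m node -T iqn.2010-05.local.san:vmstore -p 10.0.0.10 --login
    # the exported LUN then appears as an ordinary /dev/sdX on the initiator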

If a 1Gb network is not enough then you can add NICs/cables and bond
them together.

Actually, having at least 2x1Gb cables and two separate network
switches will keep the switch from becoming a SPOF. This could be
critical not only for plain functionality but to avoid the dreaded
split-brain situation.
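On CentOS the bonding half of that is just a couple of ifcfg files, sketched
below with made-up addresses; mode=active-backup is the safe choice when the
two legs go to two separate switches (older releases put the mode/miimon
options in /etc/modprobe.conf instead of BONDING_OPTS):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=10.0.0.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none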

Whatever you do for HA - make sure you do the fencing right. DRBD is
very smart but the other parts should also work right.

> For your offsite backup I'd then snapshot the machines and LVMs and rsync
> them to your remote location.
> rsync of the memory snapshot could consume a decent amount of bandwidth; it's
> probably going to be pretty volatile. If you can shut down the guest, snapshot
> its disk, then boot it back up again, then the rsync traffic should only be a
> little over the quantity of changes made to the disk, i.e. files added/changed,
> so not much more than your existing offsite backup needs.

As far as I saw on the web (a bit to my surprise), ext3 journaling is
supposed to be good enough to allow live snapshots, so you don't have
to take the client down for this. Many people on the net report doing
backups that way. Windows NTFS might be different but it might also be
good enough for such a trick.

In general - I try to stick to the tools which come bundled with
CentOS. It comes with Xen 3.0 so that's what we use. CentOS 6 is
expected to support KVM, and then we'll gladly switch to it (from what I
heard about its performance over Xen I'd love to switch to it).
libvirt should help avoid virtualisation solution dependency but so
far we haven't got around to updating our home-grown scripts to use it.

Cheers,

--Amos

Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Lindsay Holmwood
On 13 May 2010 10:52, Nigel Allen  wrote:
>
> Questions:
>
> 2) Is there a GPL solution that fits this scenario? Even if it's not a bare
> metal hypervisor and needs an O/S. Remember it has to virtualize both Server
> 2003 and CentOS

KVM, Sheepdog[0], and libvirt. Sheepdog eliminates the need for a SAN
or NAS; it uses the local storage on the machines to host the images.
You can scale it horizontally pretty easily by adding more machines
with big disks.

[0] http://www.osrg.net/sheepdog/

>
> 4) What kind of speed/bandwidth should we be looking at for the off-site
> replication.

Wholly depends on how much IO you're doing. Is this being hosted out
of a data center? If not, it'll probably cost more than you can
reasonably afford.

Lindsay

-- 
w: http://holmwood.id.au/~lindsay/
t: @auxesis


Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Mark Walkom
Good points, especially the NAS/SAN being the single point of failure.

DRBD and OCFS work well together; I had a play around with those as well.

On 13 May 2010 11:45, Jake Anderson  wrote:

> Personally I'd go with the max memory setup you were talking about but I
> wouldn't bother with the NAS.
> With only 2 nodes DRBD is fairly easy to set up; it gives you complete
> synchronisation of partitions, i.e. when you write in one place that write
> will only come back as OK if it has made it across the network and been
> written to disk on the remote machine (depending on settings). If you're OK
> with a manual change-over with a little downtime (in the case of an
> intentional transition between servers) I'd put something like ext4 on an LVM
> on top of the DRBD partition, mainly to keep things fairly simple. To migrate
> machines you shut down the guests, unmount the file system on host A, mount
> it on host B and start the guests there.
> If you want seamless transitions you're going to want something like OCFS or
> somesuch for the file system, which gives you the ability to have it mounted
> at both locations and hence live migration. You might be able to feed your
> VMs raw LVM partitions on the DRBD system and not bother with OCFS, which
> would make life easier, but I haven't looked into that.
> Upside to this system is you don't have a NAS that can go down as a single
> point of failure.
>
> For your offsite backup I'd then snapshot the machines and LVMs and rsync
> them to your remote location.
> rsync of the memory snapshot could consume a decent amount of bandwidth;
> it's probably going to be pretty volatile. If you can shut down the guest,
> snapshot its disk, then boot it back up again, then the rsync traffic should
> only be a little over the quantity of changes made to the disk, i.e. files
> added/changed, so not much more than your existing offsite backup needs.
>
>
> I'm using KVM for my virtualisation and it seems to be working well, very
> simple to use, and the host has a full OS there to do whatever you want with.
> Currently I run MySQL on the host to get a bit more performance out of the
> machines (with a ~20Gb database) and the application servers in VMs on the
> same machine, with MySQL replication to pass the data between the hosts.
>
>


Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Jake Anderson
Personally I'd go with the max memory setup you were talking about but I
wouldn't bother with the NAS.
With only 2 nodes DRBD is fairly easy to set up; it gives you complete
synchronisation of partitions, i.e. when you write in one place that write
will only come back as OK if it has made it across the network and been
written to disk on the remote machine (depending on settings). If you're
OK with a manual change-over with a little downtime (in the case of an
intentional transition between servers) I'd put something like ext4 on an
LVM on top of the DRBD partition, mainly to keep things fairly simple. To
migrate machines you shut down the guests, unmount the file system on
host A, mount it on host B and start the guests there.
If you want seamless transitions you're going to want something like OCFS
or somesuch for the file system, which gives you the ability to have it
mounted at both locations and hence live migration. You might be able to
feed your VMs raw LVM partitions on the DRBD system and not bother with
OCFS, which would make life easier, but I haven't looked into that.
Upside to this system is you don't have a NAS that can go down as a
single point of failure.
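The manual change-over described above boils down to something like this
(resource name, mount point and guest names are made up; a sketch rather than
a tested recipe, assuming the guests are defined on both hosts):

    # on host A (current primary)
    virsh shutdown guest1            # or xm shutdown under Xen; repeat per guest
    umount /srv/vm
    drbdadm secondary r0

    # on host B
    drbdadm primary r0
    mount /dev/drbd0 /srv/vm
    virsh start guest1               # or xm create /etc/xen/guest1.cfg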


For your offsite backup I'd then snapshot the machines and LVMs and
rsync them to your remote location.
rsync of the memory snapshot could consume a decent amount of bandwidth;
it's probably going to be pretty volatile. If you can shut down the guest,
snapshot its disk, then boot it back up again, then the rsync traffic
should only be a little over the quantity of changes made to the disk, i.e.
files added/changed, so not much more than your existing offsite backup
needs.
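I.e. roughly the following, with made-up volume and host names, and assuming
the LV holds a plain filesystem (the snapshot size only has to cover writes
made while the copy runs):

    lvcreate -s -L 5G -n guest1-snap /dev/vg0/guest1-disk
    mount -o ro /dev/vg0/guest1-snap /mnt/snap
    rsync -a --delete /mnt/snap/ backup@offsite.example.com:/backups/guest1/
    umount /mnt/snap
    lvremove -f /dev/vg0/guest1-snap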



I'm using KVM for my virtualisation and it seems to be working well,
very simple to use, and the host has a full OS there to do whatever you
want with. Currently I run MySQL on the host to get a bit more
performance out of the machines (with a ~20Gb database) and the
application servers in VMs on the same machine, with MySQL replication
to pass the data between the hosts.





Nigel Allen wrote:

Greetings

I need to formulate a DRP for a customer and thought that I would ask 
the slug for its collective wisdom.


Customer currently has 3 x HP rackmounted servers running CentOS 4.8 
and a Dell rackmounted server running Windows Server 2003.


Backups are currently done to tape every night using Amanda.

Given the nature of the business and the reliance it places on 
computer availability, we're looking at replication and virtualization 
as a first step and off-site replication of some sort as step two.


First thought was to max out the memory on two of the servers, one for 
normal running and one as a hot or warm standby, and then virtualize 
all of the servers onto the two machines. An external consultant has 
already suggested doing this with VMware, installing the ESXi 
hypervisor on the two main servers and installing a NAS shared between 
the two systems (hot and cold) so that if the hot server fails, we can 
simply switch over to the cold server using the images from the NAS.


Couple of things concern me about this approach. The first is using 
VMWare rather than a GPL solution. The second is where we would 
install the NAS. Physically, the office space is all under one roof 
but half the building has concrete floors and half has wooden. (The 
hot server is in the wooden "main" office, while the cold server was 
to go in the concrete floor area. There is also a firewall (a real 
one) in between the two areas).


Questions:

1) Can anyone offer any gotchas, regardless of how obvious they may 
seem to you?


2) Is there a GPL solution that fits this scenario? Even if it's not 
a bare metal hypervisor and needs an O/S. Remember it has to virtualize 
both Server 2003 and CentOS


3) What's the minimum connection we would need between the NAS and 
the two servers sharing it?


4) What kind of speed/bandwidth should we be looking at for the 
off-site replication.


I'll happily take anything else anyone would like to throw at this - 
suggestions, reading matter etc - it's not an area of great expertise 
for us having only paddled around the edges with Virtualbox.


TIA

Nigel.





Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Mark Walkom
There are lots of gotchas; it depends on what path you want to take :)

Citrix's XenServer offering would do what you want but it's not GPL (but it
does have some upsides). Otherwise Xen would do it, no worries. If you
haven't played with many offerings then XenServer might be a better option;
it's a bit more user friendly than Xen (in my opinion, that is).
But I know Xen and XenServer can run CentOS and 2K3 with no major hassle.
You can even automatically virtualise the physical boxes with XenServer once
you have some hosting set up for the VMs, which makes setup easier.

Couple that with FreeNAS/Openfiler for an iSCSI backend and you'd be set.
You'd want gig ethernet between the server and storage, but switches are
cheap these days. If you need them, server level NICs (ie ones that match
HCLs) can be had for $150 or less.

Regarding replication: you need to consider whether it will be constantly
active (ie always syncing), or snapshots every X hours, or however else you
want. Of course that would still mean you'd need to have a handle on how much
data that would be and go from there.

We've gone down a similar path at work so I will happily share any thoughts
you want to hear or questions you have.

On 13 May 2010 10:52, Nigel Allen  wrote:

> Greetings
>
> I need to formulate a DRP for a customer and thought that I would ask the
> slug for its collective wisdom.
>
> Customer currently has 3 x HP rackmounted servers running CentOS 4.8 and a
> Dell rackmounted server running Windows Server 2003.
>
> Backups are currently done to tape every night using Amanda.
>
> Given the nature of the business and the reliance it places on computer
> availability, we're looking at replication and virtualization as a first step
> and off-site replication of some sort as step two.
>
> First thought was to max out the memory on two of the servers, one for
> normal running and one as a hot or warm standby, and then virtualize all of
> the servers onto the two machines. An external consultant has already
> suggested doing this with VMware, installing the ESXi hypervisor on the two
> main servers and installing a NAS shared between the two systems (hot and
> cold) so that if the hot server fails, we can simply switch over to the cold
> server using the images from the NAS.
>
> Couple of things concern me about this approach. The first is using VMWare
> rather than a GPL solution. The second is where we would install the NAS.
> Physically, the office space is all under one roof but half the building has
> concrete floors and half has wooden. (The hot server is in the wooden "main"
> office, while the cold server was to go in the concrete floor area. There is
> also a firewall (a real one) in between the two areas).
>
> Questions:
>
> 1) Can anyone offer any gotchas, regardless of how obvious they may seem
> to you?
>
> 2) Is there a GPL solution that fits this scenario? Even if it's not a
> bare metal hypervisor and needs an O/S. Remember it has to virtualize both
> Server 2003 and CentOS
>
> 3) What's the minimum connection we would need between the NAS and the
> two servers sharing it?
>
> 4) What kind of speed/bandwidth should we be looking at for the off-site
> replication.
>
> I'll happily take anything else anyone would like to throw at this -
> suggestions, reading matter etc - it's not an area of great expertise for us
> having only paddled around the edges with Virtualbox.
>
> TIA
>
> Nigel.
>


[SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Nigel Allen

Greetings

I need to formulate a DRP for a customer and thought that I would ask 
the slug for its collective wisdom.


Customer currently has 3 x HP rackmounted servers running CentOS 4.8 
and a Dell rackmounted server running Windows Server 2003.


Backups are currently done to tape every night using Amanda.

Given the nature of the business and the reliance it places on computer 
availability, we're looking at replication and virtualization as a first 
step and off-site replication of some sort as step two.


First thought was to max out the memory on two of the servers, one for 
normal running and one as a hot or warm standby, and then virtualize all 
of the servers onto the two machines. An external consultant has already 
suggested doing this with VMware, installing the ESXi hypervisor on the 
two main servers and installing a NAS shared between the two systems 
(hot and cold) so that if the hot server fails, we can simply switch 
over to the cold server using the images from the NAS.


Couple of things concern me about this approach. The first is using 
VMWare rather than a GPL solution. The second is where we would install 
the NAS. Physically, the office space is all under one roof but half the 
building has concrete floors and half has wooden. (The hot server is in 
the wooden "main" office, while the cold server was to go in the 
concrete floor area. There is also a firewall (a real one) in between 
the two areas).


Questions:

1) Can anyone offer any gotchas, regardless of how obvious they may 
seem to you?


2) Is there a GPL solution that fits this scenario? Even if it's not a 
bare metal hypervisor and needs an O/S. Remember it has to virtualize 
both Server 2003 and CentOS


3) What's the minimum connection we would need between the NAS and 
the two servers sharing it?


4) What kind of speed/bandwidth should we be looking at for the off-site 
replication.


I'll happily take anything else anyone would like to throw at this - 
suggestions, reading matter etc - it's not an area of great expertise 
for us having only paddled around the edges with Virtualbox.


TIA

Nigel.

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html