On Mon, 2 Mar 2009, Paul Hieromnimon wrote:
If you replicate pairs of servers, how come you're still using RAID 6?
This is actually a very good question. I guess there are three ways of
doing this:
1) Use JBOD, put a filesystem on each of the 16 drives, and mount them
on 16 mount points. However
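For option 1, each drive would get its own storage/posix brick in the server volfile. A minimal sketch for one drive, in GlusterFS 2.0-era volfile syntax (paths and volume names here are illustrative, not from the original message):

```
# Server-side fragment: one posix brick per JBOD drive,
# each drive mounted at its own mount point (e.g. /mnt/disk1).
volume posix-disk1
  type storage/posix
  option directory /mnt/disk1/export
end-volume

volume disk1
  type features/locks
  subvolumes posix-disk1
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.disk1.allow *
  subvolumes disk1
end-volume
```

Repeating the posix/locks pair for each of the 16 drives and listing them all under protocol/server exposes every drive as a separately mountable brick.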
Thanks for your response, Nathan.
>> 3. Use Gluster for redundancy instead of RAID. It would be nice if I can
>> lose any single hard drive and/or entire server and still have access to
>> 100% of all the data in the pool. In this sort of setup, is it possible
>> to limit the number of copies o
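On the number-of-copies question: in GlusterFS's replicate (AFR) translator, the copy count equals the number of subvolumes listed, so it is limited simply by how many bricks you put under one replicate volume. A client-side sketch assuming two servers (the host and brick names are assumptions, not from the thread):

```
# cluster/replicate keeps one full copy of each file on every
# subvolume it is given, so two subvolumes = exactly two copies.
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/replicate
  subvolumes remote1 remote2
end-volume
```

Losing either server leaves the other copy readable, which is the single-failure guarantee asked about; adding a third subvolume would raise the copy count to three.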
Hi.
I wanted to ask: how stable is GlusterFS for production?
I ask because, alongside the success stories, I see a lot of issue
reports, and even people announcing rollbacks to other solutions.
This is only a question we asked ourselves during infrastructure
planning, not an intention to show any disrespect to the developers
On Mon, 2 Mar 2009, Paul Hieromnimon wrote:
I am considering using Gluster to build a Xen hosting cluster/cloud. Xen
requires shared storage in order to do a "live migration" (move a virtual
machine from one host to another without taking it down) so that the VM's
disk image is available on the machine it gets moved to. The typical
deployment
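For the shared-storage requirement, it is enough that both dom0s mount the same GlusterFS client volume at the same path, so the migrated domU finds its disk image where it expects it. A sketch of the client side (hostname and paths are assumptions, not from the message):

```
# Both dom0s mount the same client volfile at the same path, e.g.:
#   glusterfs -f /etc/glusterfs/client.vol /var/lib/xen/images
# A live migration (xm migrate --live <domU> <otherhost>) then finds
# the disk image at an identical path on the destination host.
volume images
  type protocol/client
  option transport-type tcp
  option remote-host storage1
  option remote-subvolume brick
end-volume
```

In practice the volume behind `images` would itself be replicated or distributed so the storage is not a single point of failure.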
Hello!
I'm using glusterfs version 2.0.0rc2 on both client and servers, with the
configuration copied literally from
http://gluster.org/docs/index.php/Automatic_File_Replication_(Mirror)_across_Two_Storage_Servers
with the only change being the replacement of "storage*.example.com" with 2
different "10.*.*.*" I
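The change described amounts to swapping the remote-host lines in the wiki page's client volfile. A sketch of the changed fragment with placeholder addresses (the poster's actual 10.* addresses are not in the message):

```
# Only the remote-host options differ from the wiki example:
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.1   # was storage1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.2   # was storage2.example.com
  option remote-subvolume brick
end-volume
```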
Hi!
I have been playing around with GlusterFS for a while (with a strong
intent to use it in production, and to use it soon -- if only I get AFR
healing to work).
General impression: wow, it's /impressive/! It's so configurable, so
stackable -- building complicated configurations doesn't look like an
issue
Hi Jordan,
Replies Inline.
At 11:02 PM 2/20/2009, Jordan Mendler wrote:
>
>> I am prototyping GlusterFS with ~50-60TB of raw disk space across
>> non-raided disks in ~30 compute nodes. I initially separated the nodes into
>> groups of two, and did a replicate across each set of single drives in a
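The layout described (replicate within each pair of nodes, then pool the pairs) is usually expressed by stacking cluster/distribute on top of cluster/replicate. A sketch for the first two pairs, with illustrative brick names rather than the poster's actual ones:

```
# pair1 and pair2 each mirror two single-drive bricks; distribute then
# spreads files across the pairs, so ~half the raw capacity is usable.
volume pair1
  type cluster/replicate
  subvolumes node1-brick node2-brick
end-volume

volume pair2
  type cluster/replicate
  subvolumes node3-brick node4-brick
end-volume

volume pool
  type cluster/distribute
  subvolumes pair1 pair2
end-volume
```

Extending the pattern to all ~15 pairs would cover the full ~30 nodes, with any single drive or node loss absorbed by its pair's mirror.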