Joe Julian wrote:
It would probably be better to ask this with end-goal questions instead of with an unspecified "critical feature" list and "performance problems".

Ok... I'm running a 2-node cluster that's essentially a mini cloud stack, with storage and processing combined on the same boxes. I'm running a production VM that hosts a mail server, list server, web server, and database; another production VM providing a backup server for the cluster and for a bunch of desktop machines; and several VMs used for a variety of development and testing purposes. It's all backed by a storage stack consisting of Linux RAID10 -> LVM -> DRBD, and uses Pacemaker for high-availability failover of the production VMs. It all performs reasonably well under moderate load (mail flows, web servers respond, database transactions complete without notable user-level delays; queues don't back up; CPU and I/O loads stay within reasonable bounds).
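For concreteness, each production VM carries roughly this much Pacemaker plumbing (a simplified sketch - resource and VM names are placeholders, not my actual config):

    # crm configure: a DRBD master/slave pair plus the Xen guest it backs
    primitive p_drbd_vm0 ocf:linbit:drbd \
            params drbd_resource=vm0 op monitor interval=15s
    ms ms_drbd_vm0 p_drbd_vm0 \
            meta master-max=1 master-node-max=1 \
            clone-max=2 clone-node-max=1 notify=true
    primitive p_xen_vm0 ocf:heartbeat:Xen \
            params xmfile=/etc/xen/vm0.cfg op monitor interval=30s
    # keep the guest on the node where its DRBD resource is primary,
    # and promote DRBD before starting the guest
    colocation c_vm0_with_drbd inf: p_xen_vm0 ms_drbd_vm0:Master
    order o_drbd_then_vm0 inf: ms_drbd_vm0:promote p_xen_vm0:start

Multiply that by every VM, and by every pairwise DRBD resource behind it, and you can see why I'd like something simpler.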

The goals are to:
- add storage and processing capacity by adding two more nodes, each with several CPU cores and 4 disks
- maintain the flexibility to create/delete/migrate/failover virtual machines, across 4 nodes instead of 2
- avoid having to play games with pairwise DRBD configurations by moving to a clustered filesystem (sketched below)
- in essence, do what Sheepdog purports to do, except in a Xen environment
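To make the end state concrete, here's the sort of volume I have in mind, in Gluster 3.3 syntax (a sketch only - hostnames, brick paths, and mount point are invented for illustration):

    # from node1, after peering the other three nodes
    gluster peer probe node2
    gluster peer probe node3
    gluster peer probe node4
    # distributed-replicated volume: consecutive bricks form replica
    # pairs, so node1:/bricks/d1 mirrors node2:/bricks/d1, and so on
    gluster volume create vmstore replica 2 transport tcp \
        node1:/bricks/d1 node2:/bricks/d1 \
        node3:/bricks/d1 node4:/bricks/d1
    gluster volume start vmstore
    # on each node, mount the volume where Xen expects disk images
    mount -t glusterfs localhost:/vmstore /var/lib/xen/images

(Repeat the brick pattern for the remaining disks, d2 through d4, on each node.) Whether VM images perform acceptably on a FUSE mount like that is exactly what I'm asking about.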

Earlier versions of Gluster reportedly had problems with:
- supporting databases
- supporting VMs
- locking and performance problems during disk rebuilds
- and most of the Gluster documentation implies that it's preferable to separate storage nodes from processing nodes

It looks like Gluster 3.2 and 3.3 have addressed some of these issues, and I'm trying to get a general read on whether it's worth putting in the effort to experiment, or whether this is a non-starter. Is there anyone out there who's tried to run this kind of mini-cloud with Gluster? What kind of results have you had?



On 12/26/2012 08:24 PM, Miles Fidelman wrote:
Hi Folks,

I find myself trying to expand a 2-node high-availability cluster to a 4-node cluster. I'm running Xen virtualization, currently using DRBD to mirror data and Pacemaker to fail over cleanly.

The thing is, I'm trying to add 2 nodes to the cluster, and DRBD doesn't scale beyond paired nodes. Also, given rack-space limits and the hardware at hand, I can't separate storage nodes from compute nodes - instead, I have to live with 4 nodes, each with 4 large drives (but also with 4 gigE ports per server).
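Presumably those four gigE ports could be bonded to give the storage traffic more headroom - a minimal sketch, assuming Debian-style ifupdown (the ifenslave package) and a switch that speaks 802.3ad; interface names and addresses are invented:

    # /etc/network/interfaces fragment, per node
    auto bond0
    iface bond0 inet static
        address 10.0.0.1
        netmask 255.255.255.0
        bond-slaves eth0 eth1 eth2 eth3
        bond-mode 802.3ad
        bond-miimon 100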

The obvious thought is to use Gluster to assemble all the drives into one large storage pool, with replication. But... the last time I looked at this (6 months or so back), it looked like some of the critical features were brand new, and performance seemed to be a problem in the configuration I'm thinking of.

Which leads me to my question: Has the situation improved to the point that I can use Gluster this way?

Thanks very much,

Miles Fidelman





--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
