Re: [Gluster-devel] Architecture advice

2009-02-03 Thread Krishna Srinivas
On Wed, Jan 14, 2009 at 5:30 PM, Gordan Bobic wrote: > On Tue 13/01/09 01:15 , Martin Fick wrote: >> --- On Mon, 1/12/09, Gordan Bobic wrote: >> Ding, ding, ding ding!!! I get it, you are >> using NFS to achieve blocking, exactly my #1 >> remaining gripe with glusterfs, it does not >> block! Pl

Re: [Gluster-devel] Architecture advice

2009-01-14 Thread Gordan Bobic
On Tue 13/01/09 01:15 , Martin Fick wrote: > --- On Mon, 1/12/09, Gordan Bobic wrote: > Ding, ding, ding ding!!! I get it, you are > using NFS to achieve blocking, exactly my #1 > remaining gripe with glusterfs, it does not > block! Please try explaining why this is > important to you to the g

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Martin Fick
--- On Mon, 1/12/09, Gordan Bobic wrote: Ding, ding, ding ding!!! I get it, you are using NFS to achieve blocking, exactly my #1 remaining gripe with glusterfs, it does not block! Please try explaining why this is important to you to the glusterfs devs! I am not sure that I made my case clear

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Gordan Bobic
Martin Fick wrote: Why is that the correct way? There's nothing wrong with having "bonding" at the glusterfs protocol level, is there? The problem is that it only covers a very narrow edge case that isn't all that likely. A bonded NIC over separate switches all the way to both servers is a muc
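
A minimal sketch of such a bonded setup in active-backup mode, assuming a Debian-style /etc/network/interfaces with the ifenslave package (interface names and addresses are hypothetical, and stanza spellings vary slightly between ifenslave versions):

  auto bond0
  iface bond0 inet static
      address 10.0.0.11
      netmask 255.255.255.0
      bond-slaves eth0 eth1      # one slave cabled to each switch
      bond-mode active-backup    # only one link active at a time
      bond-miimon 100            # check link state every 100 ms

With each slave cabled to a different switch, either switch can fail without the server losing connectivity, which is the failure scenario being weighed against protocol-level "bonding" above.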

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Martin Fick
> > Why is that the correct way? There's nothing > wrong with having "bonding" at the glusterfs > protocol level, is there? > > The problem is that it only covers a very narrow edge case > that isn't all that likely. A bonded NIC over separate > switches all the way to both servers is a much more

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Gordan Bobic
Martin Fick wrote: --- On Mon, 1/12/09, Gordan Bobic wrote: ... No need for fencing simply because you now use HA translator. The assumption in this case is that the servers can still talk to each other but that one server's connection to the clients may have died. That means that 50% of t

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Martin Fick
--- On Mon, 1/12/09, Gordan Bobic wrote: > > > ... > > No need for fencing simply because you now use HA > > translator. The assumption in this case is that the > > servers can still talk to each other but that one > > server's connection to the clients may have died. > > That means that 50%

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Gordan Bobic
Martin Fick wrote: Not on the client, anyway. But if you're AFR-ing on server side, then your client always talks to one server anyway. The traditional way to handle server failure in that case is to set up Heartbeat or RHCS to fail over the IP address resource to the surviving server. The TCP
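
A minimal sketch of the Heartbeat v1 piece of that, assuming an /etc/ha.d/haresources file and a hypothetical virtual IP that clients connect to:

  # /etc/ha.d/haresources -- server1 normally owns the service address
  server1 IPaddr::192.168.1.50/24/eth0

If server1 fails, Heartbeat brings the address up on the surviving node; RHCS achieves the same with an <ip> resource in cluster.conf.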

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Martin Fick
--- On Mon, 1/12/09, Gordan Bobic wrote: Gordan, > Not on the client, anyway. But if you're AFR-ing on > server side, then your client always talks to one server > anyway. The traditional way to handle server failure in that > case is to set up Heartbeat or RHCS to fail over the IP > address re

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Gordan Bobic
It just occurs to me that if you're using server-side AFR and want HA of servers, likely the best, fastest and most graceful way to achieve it is to have AFR GlusterFS on the servers and export the share via NFS/UDP. You'll need the patched fuse package to allow you to do this. If you're using NF
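
A rough sketch of that export, assuming the GlusterFS volume is mounted at a hypothetical /mnt/glusterfs on both servers and that the fuse package in use supports NFS export (hence the patched fuse mentioned above):

  # /etc/exports on each server -- fsid= is required when exporting a FUSE mount
  /mnt/glusterfs  192.168.1.0/24(rw,fsid=10,no_subtree_check,sync)

  # on a client, mounting over UDP through the floating service address
  mount -t nfs -o udp,vers=3 192.168.1.50:/mnt/glusterfs /mnt/data

Because NFSv3 over UDP carries no transport-level connection state, the mount keeps working when the service IP moves between servers, with no TCP reconnect needed.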

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Gordan Bobic
On Mon 12/01/09 13:32 , Daniel Maher wrote: > Gordan Bobic wrote: > >> How does the HA translator choose a node, exactly ? Does it > randomly > >> select one from the list of available subvolumes, or does it pick > them > >> in the order they're specified in the config file, or some other > way

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Daniel Maher
Gordan Bobic wrote: How does the HA translator choose a node, exactly ? Does it randomly select one from the list of available subvolumes, or does it pick them in the order they're specified in the config file, or some other way entirely ? Not sure what the default is, but you can specify a

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Gordan Bobic
Daniel Maher wrote: Basavanagowda Kanur wrote: I am curious to know how the client decides which of the subvols to use at any given time, and if there is a way to specify a "preferred" subvol. For example, i have two servers AFR'ing each other. I have two clients set up to acc
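
One way to express a "preferred" copy at the AFR level is the read-subvolume option; a minimal client-side sketch, assuming that option exists in the release in use (hostnames and volume names are hypothetical):

  volume remote1
    type protocol/client
    option transport-type tcp
    option remote-host server1        # hypothetical hostname
    option remote-subvolume brick
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp
    option remote-host server2        # hypothetical hostname
    option remote-subvolume brick
  end-volume

  volume afr
    type cluster/afr
    option read-subvolume remote1     # prefer reads from remote1; writes still go to both
    subvolumes remote1 remote2
  end-volume

Each client could point read-subvolume at its nearest server so reads stay local while writes remain replicated.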

Re: [Gluster-devel] Architecture advice

2009-01-12 Thread Daniel Maher
Basavanagowda Kanur wrote: I am curious to know how the client decides which of the subvols to use at any given time, and if there is a way to specify a "preferred" subvol. For example, i have two servers AFR'ing each other. I have two clients set up to access the AFR cluster v

Re: [Gluster-devel] Architecture advice

2009-01-10 Thread Harald Stürzebecher
2009/1/10 Basavanagowda Kanur : > > > On Fri, Jan 9, 2009 at 3:22 PM, Daniel Maher wrote: >> >> Krishna Srinivas wrote: >> >>> Daniel, >>> Imagine 2 servers AFR'd with each other. On the client side you >>> configure HA translator with its subvols as the afrs on the two >>> servers. (Previously we

Re: [Gluster-devel] Architecture advice

2009-01-09 Thread Basavanagowda Kanur
On Fri, Jan 9, 2009 at 3:22 PM, Daniel Maher > wrote: > Krishna Srinivas wrote: > > Daniel, >> Imagine 2 servers AFR'd with each other. On the client side you >> configure HA translator with its subvols as the afrs on the two >> servers. (Previously we used DNS round robin system) If 1st server

Re: [Gluster-devel] Architecture advice

2009-01-09 Thread Daniel Maher
Krishna Srinivas wrote: Daniel, Imagine 2 servers AFR'd with each other. On the client side you configure HA translator with its subvols as the afrs on the two servers. (Previously we used DNS round robin system) If 1st server goes down, the HA translator will continue working with the second af
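
A minimal sketch of the client volfile Krishna describes, assuming the HA translator registers as cluster/ha and each server already exports its server-side AFR under the name "afr" (hostnames are hypothetical, and option spellings vary between releases):

  volume server1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.1.1    # first server's AFR export
    option remote-subvolume afr
  end-volume

  volume server2
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.1.2    # second server's AFR export
    option remote-subvolume afr
  end-volume

  volume ha
    type cluster/ha
    subvolumes server1 server2        # fail over between the two AFR'd servers
  end-volume

If the first server goes down, the HA translator keeps working through the second, which is the behaviour described above and replaces the older DNS round-robin approach.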

Re: [Gluster-devel] Architecture advice

2009-01-08 Thread Krishna Srinivas
On Thu, Jan 8, 2009 at 6:25 PM, Daniel Maher wrote: > Krishna Srinivas wrote: > >> >> HA is also useful when we use server side AFRs. >> > > This statement is highly interesting. Would it be possible to have more > information on how the HA translator could be intelligently implemented in a > ser

Re: [Gluster-devel] Architecture advice

2009-01-08 Thread Dan Parsons
On Jan 8, 2009, at 5:23 AM, Joe Landman wrote: Dan Parsons wrote: Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time and rearchitect things. Hardware: Gluster servers: 4 blades connected via 4gbit fc to fast, dedicated storage. Each server has two bonded Gig-E links to the r

Re: [Gluster-devel] Architecture advice

2009-01-08 Thread Joe Landman
Dan Parsons wrote: Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time and rearchitect things. Hardware: Gluster servers: 4 blades connected via 4gbit fc to fast, dedicated storage. Each server has two bonded Gig-E links to the rest of my network, for 8gbit/s theoretical throug

Re: [Gluster-devel] Architecture advice

2009-01-08 Thread Daniel Maher
Krishna Srinivas wrote: HA is also useful when we use server side AFRs. This statement is highly interesting. Would it be possible to have more information on how the HA translator could be intelligently implemented in a server-side AFR setup, and what the benefits / drawbacks (if any) w

Re: [Gluster-devel] Architecture advice

2009-01-08 Thread Krishna Srinivas
Dan, On Thu, Jan 8, 2009 at 1:39 PM, Dan Parsons wrote: > Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time and > rearchitect things. > > Hardware: > Gluster servers: > 4 blades connected via 4gbit fc to fast, dedicated storage. Each server has > two bonded Gig-E links to th

[Gluster-devel] Architecture advice

2009-01-08 Thread Dan Parsons
Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time and rearchitect things. Hardware: Gluster servers: 4 blades connected via 4gbit fc to fast, dedicated storage. Each server has two bonded Gig-E links to the rest of my network, for 8gbit/s theoretical throughput. Gluster cli
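
For context, a minimal per-server export volfile for this kind of setup might look like the following sketch (directory, volume names and the auth option are hypothetical, and option spellings differ between GlusterFS 1.x and 2.0):

  volume posix
    type storage/posix
    option directory /fc-storage/export   # the FC-backed filesystem on each blade
  end-volume

  volume server
    type protocol/server
    option transport-type tcp
    option auth.addr.posix.allow *        # restrict this in production
    subvolumes posix
  end-volume

Each blade would run one such glusterfsd instance exporting its FC-backed directory; how the four exports are aggregated on the client side is the architecture question the rest of the thread discusses.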