On Wed, Jan 14, 2009 at 5:30 PM, Gordan Bobic wrote:
> On Tue 13/01/09 01:15 , Martin Fick wrote:
>> --- On Mon, 1/12/09, Gordan Bobic wrote:
>> Ding, ding, ding, ding!!! I get it, you are
>> using NFS to achieve blocking, exactly my #1
>> remaining gripe with glusterfs: it does not
>> block! Please try explaining why this is
>> important to you to the glusterfs devs! I
>> am not sure that I made my case clear
Martin Fick wrote:
> Why is that the correct way? There's nothing
> wrong with having "bonding" at the glusterfs
> protocol level, is there?

The problem is that it only covers a very narrow edge case
that isn't all that likely. A bonded NIC over separate
switches all the way to both servers is a much more
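The OS-level bonding Gordan prefers is configured in the Linux bonding driver rather than in glusterfs. On a 2009-era Red Hat system that might look like the sketch below; the interface name and mode are assumptions, not from the thread:

```
# /etc/modprobe.conf -- hypothetical active-backup bond across two switches.
# active-backup keeps one slave link active and fails over on link loss,
# so the bond survives the failure of an entire switch.
alias bond0 bonding
options bond0 mode=active-backup miimon=100
```

With each slave NIC cabled to a different switch, either switch can die without the servers losing connectivity, which is the broader failure case being argued for here.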
Martin Fick wrote:
--- On Mon, 1/12/09, Gordan Bobic wrote:
> ...
> No need for fencing simply because you now use the HA
> translator. The assumption in this case is that the
> servers can still talk to each other but that one
> server's connection to the clients may have died.

That means that 50% of t
Martin Fick wrote:
--- On Mon, 1/12/09, Gordan Bobic wrote:

Gordan,
> Not on the client, anyway. But if you're AFR-ing on
> server side, then your client always talks to one server
> anyway. The traditional way to handle server failure in that
> case is to set up Heartbeat or RHCS to fail over the IP
> address resource to the surviving server.
> The TCP
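The Heartbeat failover Gordan describes can be sketched with a v1-style haresources entry; the node name, address, and interface below are placeholders:

```
# /etc/ha.d/haresources -- hypothetical primary node and floating IP.
# If server1 fails, Heartbeat raises the address on the surviving node,
# and clients reconnect to the same IP without any volfile changes.
server1 IPaddr::192.168.0.10/24/eth0
```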
It just occurs to me that if you're using server-side AFR and want HA of
servers, likely the best, fastest and most graceful way to achieve it is to
run AFR GlusterFS on the servers and export the share via NFS/UDP. You'll need
the patched fuse package to allow you to do this. If you're using NF
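A server-side AFR export of the kind discussed in this thread might be sketched as the following 1.3/2.0-era volfile. Hostnames, paths, and volume names are placeholders, not taken from the thread:

```
# Hypothetical volfile for server1; server2 would mirror it with hosts swapped.
volume posix
  type storage/posix
  option directory /data/export        # local brick (placeholder path)
end-volume

volume locks
  type features/posix-locks            # POSIX locking on top of the brick
  subvolumes posix
end-volume

volume remote
  type protocol/client                 # the other server's brick
  option transport-type tcp/client
  option remote-host server2           # placeholder hostname
  option remote-subvolume locks
end-volume

volume afr
  type cluster/afr                     # replicate local and remote bricks
  subvolumes locks remote
end-volume

volume server
  type protocol/server                 # export the replicated volume
  option transport-type tcp/server
  option auth.addr.afr.allow *
  subvolumes afr
end-volume
```

Each server then exports a volume that is already mirrored to its peer, so clients only need to reach one server at a time.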
On Mon 12/01/09 13:32 , Daniel Maher wrote:
> Gordan Bobic wrote:
>> How does the HA translator choose a node, exactly? Does it randomly
>> select one from the list of available subvolumes, or does it pick them
>> in the order they're specified in the config file, or some other way
>> entirely?

Not sure what the default is, but you can specify a
Daniel Maher wrote:
> Basavanagowda Kanur wrote:
> I am curious to know how the client decides which of the subvols to
> use at any given time, and if there is a way to specify a
> "preferred" subvol. For example, I have two servers AFR'ing each
> other. I have two clients set up to access the AFR cluster v
2009/1/10 Basavanagowda Kanur:
> On Fri, Jan 9, 2009 at 3:22 PM, Daniel Maher wrote:
>> Krishna Srinivas wrote:
>>> Daniel,
>>> Imagine 2 servers AFR'ed with each other. On the client side you
>>> configure the HA translator with its subvols as the AFRs on the two
>>> servers. (Previously we used a DNS round-robin system.) If the 1st server
>>> goes down, the HA translator will continue working with the second af
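Krishna's client-side arrangement might look roughly like the volfile sketch below. The addresses and the exported volume name `afr` are assumptions, and `cluster/ha` option defaults may vary by release:

```
# Hypothetical client volfile: HA translator over two server-side AFRs.
volume server1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.1       # placeholder address of server1
  option remote-subvolume afr          # the AFR volume each server exports
end-volume

volume server2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2       # placeholder address of server2
  option remote-subvolume afr
end-volume

volume ha
  type cluster/ha                      # fail over between the two servers
  subvolumes server1 server2
end-volume
```

If the first server goes down, the HA translator continues issuing operations against the second, as Krishna describes, replacing the older DNS round-robin approach.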
On Thu, Jan 8, 2009 at 6:25 PM, Daniel Maher wrote:
> Krishna Srinivas wrote:
>> HA is also useful when we use server-side AFRs.
>
> This statement is highly interesting. Would it be possible to have more
> information on how the HA translator could be intelligently implemented
> in a server-side AFR setup, and what the benefits / drawbacks (if any)
> w
On Jan 8, 2009, at 5:23 AM, Joe Landman wrote:
> Dan Parsons wrote:
>> Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time
>> and rearchitect things.
>> Hardware: Gluster servers: 4 blades connected via 4gbit FC to fast,
>> dedicated storage. Each server has two bonded Gig-E links to the rest
>> of my network, for 8gbit/s theoretical throughput.
Dan,
On Thu, Jan 8, 2009 at 1:39 PM, Dan Parsons wrote:
> Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time and
> rearchitect things.
Now that I'm upgrading to gluster 1.4/2.0, I'm going to take the time and
rearchitect things.

Hardware:
Gluster servers:
4 blades connected via 4gbit FC to fast, dedicated storage. Each server has two
bonded Gig-E links to the rest of my network, for 8gbit/s theoretical
throughput.
Gluster cli