[Gluster-devel] Arbiter design for 3 way replica

2015-02-15 Thread Ravishankar N

Hello,

I have submitted the design doc for an arbiter mechanism for 3-way
replication using AFR:
http://review.gluster.org/#/c/9656/1/doc/features/afr-arbiter.txt

Review comments and feedback are very much appreciated.

Thanks and regards,
Ravi


Re: [Gluster-devel] Multi-network support proposal

2015-02-15 Thread Ben England
Hope this message makes as much sense to me on Tuesday as it did at 3 AM in the 
airport ;-) Inline...

- Original Message -
> From: "Jeff Darcy" 
> To: "Ben England" 
> Cc: "Gluster Devel" , "Manoj Pillai" 
> 
> Sent: Sunday, February 15, 2015 1:49:17 AM
> Subject: Re: [Gluster-devel] Multi-network support proposal
> 
> > It's really important for glusterfs not to require that clients mount
> > volumes using the same subnet as the servers, and clearly your very
> > general-purpose proposal could address that.  For example, in a site where
> > non-glusterfs protocols are used, there are already good reasons for using
> > multiple subnets, and we want glusterfs to be able to coexist with
> > non-glusterfs protocols at a site.
> > 
> > However, is there a simpler way to allow glusterfs clients to connect to
> > servers through more than one subnet?  For example, suppose your Gluster
> > volume subnet is 172.17.50.0/24 and your "public" network used by glusterfs
> > clients is 1.2.3.0/22, but one of the servers also has an interface on
> > subnet 4.5.6.0/24.  So at the time that the volume is either created or
> > bricks are added/removed:
> > 
> > - determine what servers are actually in the volume
> > - ask each server to return the subnet for each of its active network
> > interfaces
> > - determine set of subnets that are directly accessible to ALL the volume's
> > servers
> > - write a glusterfs volfile for each of these subnets and save it
> > 
> > This process is O(N), where N is the number of servers, but it only happens
> > at volume creation or at addition/removal of bricks, and these events do
> > not happen very often (do they?).  In this example, 1.2.3.0/22 and
> > 172.17.50.0/24 would have glusterfs volfiles, but 4.5.6.0/24 would not.
> > 
> > So now when a client connects, the server knows which subnet the request
> > came through (getsockname), so it can just return the volfile for that
> > subnet.  If there is no volfile for that subnet, the client mount request
> > is rejected.  But what about existing Gluster volumes?  When software is
> > upgraded, we should provide a mechanism for triggering this volfile
> > generation process to open up additional subnets for glusterfs clients.
> > 
> > This proposal requires additional work where volfiles are generated and
> > where glusterfs mount processing is done, but it does not require any
> > additional configuration commands or extra user knowledge of Gluster.
> > glusterfs clients can then use *any* subnet that is accessible to all the
> > servers.
> 
> That does have the advantage of not requiring any special configuration,
> and might work well enough for front-end traffic, but it has the
> drawback of not giving any control over back-end traffic.  How do
> *servers* choose which interfaces to use for NSR normal traffic,
> reconciliation/self-heal, DHT rebalance, and so on?  Which network
> should Ganesha/Samba servers use to communicate with bricks?  Even on
> the front end, what happens when we do get around to adding per-subnet
> access control or options?  For those kinds of use cases we need
> networks to be explicit parts of our model, not implicit or inferred.
> So maybe we need to reconcile the two approaches, and hope that the
> combined result isn't too complicated.  I'm open to suggestions.
> 

In defense of your proposal, you are right that it is difficult to manage each 
node's network configuration independently or by volfile, and it would be 
useful to a system manager to be able to configure Gluster network behavior 
across the entire volume.  For example, you can use pdsh to issue commands to 
any subset of Gluster servers, but what if some of them are down at the time 
the command is issued?  How do you make these configuration changes persistent?
What happens when you add or remove servers from the volume?  That to me is
the real selling point of your proposal - if we have a 60-node or even a 
1000-node Gluster volume, we could provide a way to control network behavior in 
a persistent, highly-available, scalable way with as few sysadmin operations as 
possible. 

I have two concerns:


1) Do we have to specify each host's address rewriting, as in your example?
Why not something like this:

# gluster network add client-net 1.2.3.0/24 

glusterd could then use a discovery process, as I described earlier, to
determine each server's IP address on that subnet and rewrite the volfiles
accordingly.
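
To make this concrete, a rough sketch of the discovery step (this assumes
Linux with iproute2; the glusterd wiring, the file names, and the host
addresses below are illustrative, not an existing command):

# On each server, report the IPv4 networks its interfaces sit on
# (loopback omitted):
$ ip -o -4 addr show | awk '{print $4}'
172.17.50.7/24
1.2.3.7/22

# After normalizing each entry to its network address (1.2.3.7/22
# normalizes to 1.2.0.0/22) and sorting, the subnets eligible for a
# volfile are just the intersection of the per-server lists:
$ comm -12 server1.subnets server2.subnets
1.2.0.0/22
172.17.50.0/24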

The advantage of this subnet-based specification, IMHO, is that it scales: as
you add and remove nodes, you do not have to change the "client-net" entity;
you just make sure that the Gluster servers provide the appropriate network
interface, with the appropriate IP address and subnet mask.
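
The mount-time half from my earlier sketch then stays a simple table
lookup: the server maps the local address the client connected to
(getsockname) onto one of the generated volfiles.  A toy shell
illustration, with invented volfile names:

local_addr=1.2.3.4    # address the client reached us on (getsockname)
case $local_addr in
  172.17.50.*) volfile=myvol.172.17.50.0.vol ;;
  1.2.[0-3].*) volfile=myvol.1.2.0.0.vol ;;  # the /22 spans 1.2.0.0-1.2.3.255
  *) echo "no volfile for this subnet, rejecting mount" >&2; exit 1 ;;
esac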


2) Could we keep the number of roles, and the sysadmin interface in general,
from getting too complicated?  Here's an oversimplified model of Gluster
networking: there are at most 2 kinds of subnets on each server in use by

Re: [Gluster-devel] GlusterFS - Bug 1144527

2015-02-15 Thread Vijay Bellur

On 02/14/2015 11:07 PM, Marc Mance wrote:

I see that you were working on this issue for glusterfs 3.4.5, built
on Aug 6 2014 19:15:07.

I am running glusterfs 3.5.2 built on Aug  6 2014 19:33:49 and still
have the very same issue.  Did something change that re-introduced the
logging?



No, the fix has not yet made its way into a 3.5.x release.

I have backported the fix to release-3.5 now [1]. Niels, can you please
consider it for inclusion in 3.5.4? I have added bug 1192832 to the
3.5.4 tracker.
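
For reference, the backport is essentially the master commit
cherry-picked onto the release branch and submitted as a new review
(the commit hash here is illustrative):

$ git checkout release-3.5
$ git cherry-pick -x abc1234   # the fix as merged on master
$ ./rfc.sh                     # post the backport to review.gluster.org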


Thanks,
Vijay

[1] http://review.gluster.org/9655


Re: [Gluster-devel] Where is glusterd-conn-mgmt.c ?

2015-02-15 Thread Emmanuel Dreyfus
Emmanuel Dreyfus  wrote:

> Someone can explain what is going on?

I found the problem: nbslave70:/autobuild/autobuild.sh includes a git
fetch of refs/changes/28/9428/4.

That explains everything, except I do not recall adding that, but
perhaps I am getting old :-)
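
For the record, fetching a Gerrit change ref works like this (remote URL
illustrative):

$ git fetch https://review.gluster.org/glusterfs refs/changes/28/9428/4
$ git checkout FETCH_HEAD

That quietly checks out patch set 4 of change 9428 - presumably the
change that adds glusterd-conn-mgmt.c - so the tree on that slave was
never plain master.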

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org


[Gluster-devel] Where is glusterd-conn-mgmt.c ?

2015-02-15 Thread Emmanuel Dreyfus
Hi

There is a race condition in glusterd-conn-mgmt.c that I would like to
fix, but depending on the host on which I run git clone, the file may or
may not show up:

$ git clone https://git.gluster.org/glusterfs.git glusterfs 
Cloning into 'glusterfs'...

remote: Counting objects: 89197, done
remote: Finding sources: 100% (89197/89197)
remote: Total 89197 (delta 55280), reused 85227 (delta 55280)
Receiving objects: 100% (89197/89197), 28.92 MiB | 1.17 MiB/s, done.
Resolving deltas: 100% (55280/55280), done.
$ 
$ find glusterfs -name glusterd-conn-mgmt.c  
(nothing!)

Can someone explain what is going on?

-- 
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
m...@netbsd.org