[Gluster-devel] Fwd: Impact of direct-io-mode on mail-box work load

2017-03-31 Thread Bipin Kunal
Hello,

Can someone explain to me the importance of "direct-io-mode"?
What I understand is that enabling "direct-io-mode" bypasses the kernel page
cache, so reads and writes go straight through FUSE to the gluster client.

Will it be beneficial to enable "direct-io-mode", or will it have an adverse
effect, for a very-small-files workload such as dovecot and other mail
boxes? As the use case here is mail boxes, files will be written once and
mostly read once.
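
For reference, my understanding is that the mode is toggled per mount,
something along these lines (server1 and mailvol are placeholder names;
please correct me if the option spelling differs):

# mount -t glusterfs -o direct-io-mode=enable server1:/mailvol /mnt/mail
# mount -t glusterfs -o direct-io-mode=disable server1:/mailvol /mnt/mail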


-- 
Thanks,

Bipin Kunal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Gluster RPC Internals - Lecture #1 - recording

2017-03-01 Thread Bipin Kunal
Milind,

Please allow download of the recording.

Thanks,
Bipin Kunal


On Wed, Mar 1, 2017 at 3:19 PM, Pavel Szalbot  wrote:
> Hi Milind,
>
> is there a non-Flash version available?
>
> -ps
>
> On Wed, Mar 1, 2017 at 9:28 AM, Milind Changire  wrote:
>>
>> https://bluejeans.com/s/e59Wh/
>>
>> --
>> Milind
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster Developer Summit Program Committee

2016-08-16 Thread Bipin Kunal
If more volunteers are needed, I would like to volunteer too.

Thanks,
Bipin

On Wed, Aug 17, 2016 at 7:36 AM, Ankit Raj  wrote:
> Hello,
>
> I would like to volunteer.
>
> Thanks,
> Ankit Raj
>
> On Wed, Aug 17, 2016 at 4:57 AM, Michael Adam  wrote:
>>
>> On 2016-08-16 at 11:30 -0700, Amye Scavarda wrote:
>> > Hi all,
> > As we get closer to the CfP wrapping up (August 31, per
> > http://www.gluster.org/pipermail/gluster-users/2016-August/028002.html) -
> > we'll be looking for 3-4 people for the program committee to help arrange
> > the schedule.
>> >
> > Go ahead and respond here if you're interested, and I'll work to gather us
> > together after September 1st.
>> > Thanks!
>> > - amye
>>
>> If you're interested in someone who's looking from a few miles higher
>> than many hard-core gluster engineers, I would help out.
>> But happy to step back if enough high profile Gluster people
>> speak up! :-)
>>
>> Cheers - Michael
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Default quorum for 2 way replication

2016-03-04 Thread Bipin Kunal
Hi Pranith,

Thanks for starting this mail thread.

Looking at it from a user perspective, the most important thing is to get a
"good copy" of the data. I agree that people use replication for HA, but
having stale data with HA has no value.
So I suggest making 'auto' quorum the default configuration even for 2-way
replication.

If a user is willing to risk data for the sake of HA, they always have the
option to disable it. But the default preference should be data and its
integrity.
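
For reference, the knob I have in mind is the client-quorum volume option
(assuming the usual option names):

# gluster volume set VOLNAME cluster.quorum-type auto
# gluster volume set VOLNAME cluster.quorum-type none

so an admin who truly prefers HA over integrity can still opt out per volume.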

Thanks,
Bipin Kunal

On Fri, Mar 4, 2016 at 5:43 PM, Ravishankar N  wrote:
> On 03/04/2016 05:26 PM, Pranith Kumar Karampuri wrote:
>>
>> hi,
>>  So far the default quorum for 2-way replication is 'none' (i.e.
>> files/directories may go into split-brain), while for 3-way replication and
>> arbiter-based replication it is 'auto' (files/directories won't go into
>> split-brain). There are requests to make the default 'auto' for 2-way
>> replication as well. The line of reasoning is that people value data
>> integrity (files not going into split-brain) more than HA (operation of the
>> mount even when bricks go down), and admins should explicitly change it to
>> 'none' when they are fine with split-brains in 2-way replication. We were
>> wondering if you have any input about what a sane default for 2-way
>> replication is.
>>
>> I like the default to be 'none'. Reason: if we have 'auto' as the quorum for
>> 2-way replication and the first brick dies, there is no HA.
>
>
>
> +1.  Quorum does not make sense when there are only 2 parties. There is no
> majority voting. Arbiter volumes are a better option.
> If someone wants some background, please see 'Client quorum' and 'Replica 2
> and Replica 3 volumes' section of
> http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
>
> -Ravi
>
>> If users are fine with that, it is better to use a plain distribute volume
>> rather than replication with quorum set to 'auto'. What are your thoughts
>> on the matter? Please guide us in the right direction.
>>
>> Pranith
>
>
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] make install throwing python error

2016-01-28 Thread Bipin Kunal
Hi,

I have downloaded glusterfs source rpm from :
http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/fedora-21/SRPMS/

I extracted the source and tried compiling and installing it. While
running "make install" I started getting an error.


Here are the steps performed:

1) ./autogen.sh
2) ./configure
3) make
4) make install

Steps 1, 2 and 3 were error-free.

Here is the error during "make install"

Making install in glupy
Making install in src
Making install in glupy
 /usr/bin/mkdir -p '/usr/lib/python2.7/site-packages/gluster/glupy'
 /usr/bin/install -c -m 644 __init__.py
'/usr/lib/python2.7/site-packages/gluster/glupy'
../../../../../py-compile: Missing argument to --destdir.
Makefile:414: recipe for target 'install-pyglupyPYTHON' failed
make[6]: *** [install-pyglupyPYTHON] Error 1
Makefile:511: recipe for target 'install-am' failed
make[5]: *** [install-am] Error 2
Makefile:658: recipe for target 'install-recursive' failed
make[4]: *** [install-recursive] Error 1
Makefile:445: recipe for target 'install-recursive' failed
make[3]: *** [install-recursive] Error 1
Makefile:448: recipe for target 'install-recursive' failed
make[2]: *** [install-recursive] Error 1
Makefile:447: recipe for target 'install-recursive' failed
make[1]: *** [install-recursive] Error 1
Makefile:576: recipe for target 'install-recursive' failed
make: *** [install-recursive] Error 1

Am I missing some binaries?

Please help me in installing.
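
In case it helps: the failing rule appears to invoke py-compile with an empty
--destdir value, so one workaround I have seen suggested (an untested
assumption on my part) is to pass a non-empty DESTDIR explicitly:

# make install DESTDIR=/

or to re-run ./autogen.sh with a newer automake whose py-compile tolerates an
empty --destdir.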

Thanks,
Bipin Kunal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] mixed 3.7 and 3.6 environment

2015-11-13 Thread Bipin Kunal
Hi David,

I don't think that is possible or recommended.

Clients are only expected to be compatible when they run the same version as
the server or a lower one, so a 3.7.6 client mounting a 3.6.6 server is
outside that guarantee.

Thanks,
Bipin Kunal

On Thu, Nov 12, 2015 at 10:41 PM, David Robinson <
david.robin...@corvidtec.com> wrote:

> Is there any way to force a mount of a 3.6 server using a 3.7.6 FUSE
> client?
> My production machine is 3.6.6 and my test platform is 3.7.6.  I would
> like to test the 3.7.6 FUSE client but would need for this client to be
> able to mount both a 3.6.6 and a 3.7.6 server.
>
> When I try to mount the 3.6.6 server using a 3.7.6 client, I get the
> following:
>
> [root@ff01bkp glusterfs]# cat homegfs.log
> [2015-11-12 16:55:56.860663] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.6 (args: /usr/sbin/glusterfs --volfile-server=gfsib01a.corvidtec.com --volfile-server-transport=tcp --volfile-id=/homegfs.tcp /homegfs)
> [2015-11-12 16:55:56.868032] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
> [2015-11-12 16:55:56.871923] W [socket.c:588:__socket_rwv] 0-glusterfs: readv on 10.200.70.1:24007 failed (No data available)
> [2015-11-12 16:55:56.872236] E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f9e5507ea82] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f9e54e49a3e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f9e54e49b4e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7f9e54e4b4da] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f9e54e4bd08] ) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2015-11-12 16:55:56.871822 (xid=0x1)
> [2015-11-12 16:55:56.872254] E [glusterfsd-mgmt.c:1603:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/homegfs.tcp)
> [2015-11-12 16:55:56.872283] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(saved_frames_unwind+0x205) [0x7f9e54e49a65] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x490) [0x7f9e6450] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f9e06d9] ) 0-: received signum (0), shutting down
> [2015-11-12 16:55:56.872299] I [fuse-bridge.c:5683:fini] 0-fuse: Unmounting '/homegfs'.
> [2015-11-12 16:55:56.872616] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df3) [0x7f9e53ee5df3] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f9e0855] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f9e06d9] ) 0-: received signum (15), shutting down
>
> David
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Glusterfs mainline BZs to close...

2015-10-29 Thread Bipin Kunal
Hello Niels/Humble,

I would like to help you here; this way I can also start collaborating
upstream.

I will try to send the first pull request very soon, by next week at the
latest.

Thanks,
Bipin Kunal

On Wed, Oct 28, 2015 at 5:04 PM, Niels de Vos  wrote:

> On Wed, Oct 28, 2015 at 04:02:16PM +0530, Humble Devassy Chirammal wrote:
> > Hi Niels,
> >
> > >
> >   Our shiny docs (or my bookmarks?) are broken again...
> >
> >
> http://gluster.readthedocs.org/en/latest/Developer-guide/Bug%20report%20Life%20Cycle/
> >   http://gluster.readthedocs.org/en/latest/Developer-guide/Bug%20Triage/
> >
> > >
> > As you know, the "Developer Guide" is now part of the glusterfs source code
> > (https://github.com/gluster/glusterfs/tree/master/doc/developer-guide) [1].
> > The decision to split the glusterfs documentation into three parts
> > (developer, spec/feature, administration) came late and it caused the
> > bookmarks to break.
>
> Right, but I do not think the documentation of procedures and workflows
> should be part of the developers guide. This counts for topics related
> to bugs, releases and probably other things. Can we have a section on
> readthedocs where procedures and workflows can be placed?
>
> > With the MediaWiki being read-only, we thought this type of document could
> > be part of the Developer Guide for now. Maybe we need to sort the
> > "developer guide" in the source code repo again and put it into the
> > correct buckets, but this needs some time and effort.
>
> Yeah, we definitely should do that. And, also make sure that the old
> wiki gets replaced by appropriate redirects to prevent confusion.
>
> Thanks,
> Niels
>
> >
> >
> > [1]
> >
> > Because of the gerrit plugin issue, the commits in gerrit have not been
> > synced to github since Sep 10. However, you can see the change here
> > http://review.gluster.org/#/c/12227/
> >
> > --Humble
> >
> >
> > On Mon, Oct 26, 2015 at 2:44 PM, Niels de Vos  wrote:
> >
> > > On Wed, Oct 21, 2015 at 05:22:49AM -0400, Nagaprasad Sathyanarayana wrote:
> > > > I came across the following BZs which are still open in mainline. But
> > > > they are fixed and made available in an upstream release.  Planning to
> > > > close them this week, unless there are any objections.
> > >
> > > We have a policy to close bugs when their patches land in a released
> > > version. The bugs against mainline will get closed when a release is
> > > made that contains those fixes. For many of the current mainline bugs,
> > > this would be the case when glusterfs-3.8 is released.
> > >
> > > What is the concern of having bugs against the mainline version open
> > > until a release contains that particular fix? There are many bugs that
> > > also get backports to stable versions (3.7, 3.6 and 3.5). Those bugs get
> > > closed with each minor release for that stable version.
> > >
> > > Of course we can change our policy to close mainline bugs earlier. But,
> > > we need to be consistent in setting the rules for that, and document
> > > them well. There is an ongoing task about automatically changing the
> > > status of bugs when patches get posted/merged and releases made. Closing
> > > mainline bugs should be part of that process.
> > >
> > > When do you suggest that a bug against mainline should be closed, when
> > > one of the stable releases containst the fix, or when all of them do?
> > > What version with fix should we point the users at when we closed a
> > > mainline bug?
> > >
> > >   Our shiny docs (or my bookmarks?) are broken again...
> > >
> > >
> http://gluster.readthedocs.org/en/latest/Developer-guide/Bug%20report%20Life%20Cycle/
> > >
> http://gluster.readthedocs.org/en/latest/Developer-guide/Bug%20Triage/
> > >
> > >   This is the old contents:
> > >
> > >
> http://www.gluster.org/community/documentation/index.php/Bug_report_life_cycle
> > >   http://www.gluster.org/community/documentation/index.php/Bug_triage
> > >
> > > I was about to suggest to send a pull request so that we can discuss
> > > your proposal during the weekly Bug Triage meeting on Tuesdays.
> > > Unfortunately I don't know where the latest documents moved to, so
> > > please send your changes by email.
> > >
> > > Could you explain what Bugzilla query you used to find these bugs

Re: [Gluster-devel] Gluster SOS plugin expansion

2015-08-27 Thread Bipin Kunal
Hi Hari/Humble,

You might have already seen my reply on a different mail thread; sending it
here again.

Here are a few more areas I would like to see covered by the gluster SOS
plugin (a rough implementation sketch follows below):
1) geo-replication status for each geo-replication session (wherever applicable)

# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status

2) Quota-related information (wherever applicable)
# gluster volume quota VOLNAME list
# gluster volume quota VOLNAME status

3) Heal information
# gluster volume heal VOLNAME info
4) Split-brain files
# gluster volume heal VOLNAME info split-brain
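
For concreteness, a minimal sketch of how these could be wired into
sos/plugins/gluster.py, assuming the sos Plugin API (add_cmd_output() and
get_command_output()); _volume_names() is a hypothetical helper standing in
for however the plugin actually enumerates volumes:

from sos.plugins import Plugin, RedHatPlugin

class Gluster(Plugin, RedHatPlugin):
    """GlusterFS storage"""
    plugin_name = 'gluster'

    def _volume_names(self):
        # Hypothetical helper: parse volume names out of 'gluster volume list'.
        result = self.get_command_output('gluster volume list')
        return result['output'].split() if result['status'] == 0 else []

    def setup(self):
        # Cluster-wide view, already collected by the existing plugin.
        self.add_cmd_output('gluster volume info')
        # Per-volume diagnostics proposed above.
        for vol in self._volume_names():
            for cmd in ('gluster volume quota %s list' % vol,
                        'gluster volume quota %s status' % vol,
                        'gluster volume heal %s info' % vol,
                        'gluster volume heal %s info split-brain' % vol):
                self.add_cmd_output(cmd)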

Thanks,
Bipin Kunal

On Thu, Aug 27, 2015 at 6:43 PM, Hari Gowtham  wrote:

> Hi Vijay,
>
> Thanks for the feed back. We will add those two in the plugin.
>
> - Original Message -
> From: "Vijay Bellur" 
> To: "Humble Devassy Chirammal" , "Gluster
> Devel" 
> Cc: "hari gowtham005" 
> Sent: Thursday, August 27, 2015 6:35:05 PM
> Subject: Re: [Gluster-devel] Gluster SOS plugin expansion
>
> On Thursday 27 August 2015 01:16 PM, Humble Devassy Chirammal wrote:
> > Hi All,
> >
> > We have been working on the gluster plugin of SOS package [1].
> >
> > There were three commands (volume info, volume status and peer status)
> > in it already and to expand it, we have added a few more( snapshot
> > status, snapshot list, snapshot info, pool list, rebalance status).
> >
> > We would be happy if you could suggest the changes for this, as of to
> > remove any command from the list or add anything more which can help
> > gluster troubleshooting.
> >
> > [1] https://github.com/sosreport/sos/blob/master/sos/plugins/gluster.py
> >
>
> Adding the following could be useful:
>
> volume quota  info
>
> volume heal  info
>
> Regards,
> Vijay
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
> --
> Regards,
> Hari.
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How to find total number of gluster mounts?

2015-06-12 Thread Bipin Kunal
Thanks Pranith and Raghavendra for your valuable inputs.

Both the use-cases you mentioned fit my requirement completely.

I will raise an RFE for this.

Thanks,
Bipin Kunal

- Original Message -
From: "Raghavendra Talur" 
To: "Bipin Kunal" 
Cc: gluster-devel@gluster.org, "Niels de Vos" , "Soumya 
Koduri" , "Poornima Gurusiddaiah" , 
"Pranith Kumar Karampuri" 
Sent: Wednesday, June 3, 2015 11:37:58 AM
Subject: Re: [Gluster-devel] How to find total number of gluster mounts?

On Wednesday 03 June 2015 09:13 AM, Pranith Kumar Karampuri wrote:
>
>
> On 06/01/2015 11:07 AM, Bipin Kunal wrote:
> > Hi All,
> >
> >   Is there a way to find total number of gluster mounts?
> >
> >   If not, what would be the complexity for this RFE?
> >
> >   As far as I understand finding the number of fuse mount should be
> > possible but seems unfeasible for nfs and samba mounts.
> True. Bricks have connections from each of the clients. Each of the
> fuse/nfs/glustershd/quotad/gfapi-based clients (samba/glfsheal) would
> have a separate client-context set on the bricks. So we can get this
> information. But, like you said, I am not sure how it can be done for the
> nfs server/samba. Adding more people.

Depends on why you would want to know about the clients:

1. For most use cases, the admin might just need to know how many Samba/NFS
servers are currently using the given volume (say, just to perform umount
everywhere). In this case, each Samba/NFS server is just like a FUSE mount,
and we can use the same technique as in the case Pranith has mentioned above.

2. If the requirement is to identify all the machines which are accessing a
volume (probable use case: you may want an end-user to close a file, etc.),
the above method won't be sufficient. To get details of SMB clients, you
would have to run the 'smbstatus' command on all SMB server nodes; it outputs
the connected SMB clients as two tables, with headers in this format:

PID      Username   Group      Machine       Protocol Version
Service  pid        machine    Connected at
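
As a concrete sketch of case 1, the brick-side client list can be dumped with
(assuming the subcommand exists in the version in question):

# gluster volume status VOLNAME clients

which prints, per brick, the host:port of each connected client, so counting
mounts largely reduces to counting distinct client endpoints.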

Thanks,
Raghavendra Talur
>
> Pranith
> >
> >   Please let me know your precious thoughts on this.
> >
> > Thanks,
> > Bipin Kunal
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
>

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] How to find total number of gluster mounts?

2015-05-31 Thread Bipin Kunal
Hi All,

 Is there a way to find the total number of gluster mounts?

 If not, what would be the complexity of this RFE?

 As far as I understand, finding the number of FUSE mounts should be
possible, but it seems unfeasible for NFS and Samba mounts.

 Please let me know your thoughts on this.

Thanks,
Bipin Kunal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gluster volume heal info takes a very long time even when there are no entries.

2015-05-25 Thread Bipin Kunal
Hi Anuradha,

It's RHS-2.1.2 with gluster version 3.4.0.59rhs-1.

Thanks,
Bipin Kunal

- Original Message -
From: "Anuradha Talur" 
To: "Bipin Kunal" 
Cc: gluster-devel@gluster.org
Sent: Monday, May 25, 2015 5:01:01 PM
Subject: Re: [Gluster-devel] gluster volume heal  info takes a very long
time even when there are no entries.

Hi Bipin,

Could you please provide gluster version?

- Original Message -
> From: "Bipin Kunal" 
> To: gluster-devel@gluster.org
> Sent: Monday, May 25, 2015 4:53:32 PM
> Subject: [Gluster-devel] gluster volume heal  info takes a very long
> time even when there are no entries.
> 
> Hi All,
> 
> My gluster volume heal  info takes a very long time even when there are
> no entries.
> 
> # time gluster volume heal  info
> Brick :/rhs//brick/
> Number of entries: 0
> 
> Brick :/rhs//brick/
> Number of entries: 0
> 
> Brick :/rhs//brick/
> Number of entries: 0
> 
> Brick :/rhs//brick/
> Number of entries: 0
> 
> 
> real 44m52.828s
> user 0m0.322s
> sys 0m0.109s
> 
> 
> Can anybody explain this weird behavior to me? What should I look into for
> debugging?
> 
> How does the volume heal info command work?
> 
> Thanks,
> Bipin Kunal
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

-- 
Thanks,
Anuradha.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] gluster volume heal info takes a very long time even when there are no entries.

2015-05-25 Thread Bipin Kunal
Hi All, 

My gluster volume heal  info takes a very long time even when there are
no entries.

# time gluster volume heal  info 
Brick :/rhs//brick/ 
Number of entries: 0 

Brick :/rhs//brick/ 
Number of entries: 0 

Brick :/rhs//brick/ 
Number of entries: 0 

Brick :/rhs//brick/ 
Number of entries: 0 


real 44m52.828s 
user 0m0.322s 
sys 0m0.109s 


Can anybody explain this weird behavior to me? What should I look into for
debugging?

How does the volume heal info command work?
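
One thing I plan to check, on the assumption that heal info crawls the
.glusterfs/indices/xattrop directory on each brick to enumerate pending
entries:

# ls /rhs/<brick>/.glusterfs/indices/xattrop | wc -l

If that index directory is huge, or a brick responds slowly, the crawl itself
could take long even with zero entries reported.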

Thanks, 
Bipin Kunal 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Fwd: How are port number for bricks incremented in gluster

2015-05-11 Thread Bipin Kunal
Hi All,

I have referred to
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
 

1. What ports does Gluster need?
Preferably, your storage environment should be located on a safe segment of 
your network where firewall is not necessary. In the real world, that simply 
isn't possible for all environments. If you are willing to accept the potential 
performance loss of running a firewall, you need to know that Gluster makes use 
of the following ports:
- 24007 TCP for the Gluster Daemon
- 24008 TCP for Infiniband management (optional unless you are using IB)
- One TCP port for each brick in a volume. So, for example, if you have 4 
bricks in a volume, port 24009 – 24012 would be used in GlusterFS 3.3 & below, 
49152 - 49155 from GlusterFS 3.4 & later.
- 38465, 38466 and 38467 TCP for the inline Gluster NFS server.
- Additionally, port 111 TCP and UDP (since always) and port 2049 TCP-only 
(from GlusterFS 3.4 & later) are used for port mapper and should be open.
Note: by default Gluster/NFS does not provide services over UDP, it is TCP 
only. You would need to enable the nfs.mount-udp option if you want to add UDP 
support for the MOUNT protocol. That's completely optional and is up to your 
judgement to use.

Here it says that one port is associated with each brick of a volume.

I would like to know how this port number is incremented. Is it always
sequential, meaning that if the first brick has port 49152, the second brick
will have port 49153?
Is there any scenario where the port numbers would not be sequential?

Is there a way to give a range of ports for bricks?
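
For reference, the ports actually assigned to bricks can be checked with:

# gluster volume status VOLNAME

and my understanding (an assumption worth verifying) is that newer releases
let you pin the allocation range in /etc/glusterfs/glusterd.vol, e.g.:

    option base-port 49152
    option max-port  49251

Since ports are handed out from a pool as bricks start, restarts and volume
deletions can leave the numbering non-sequential.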

Thanks,
Bipin Kunal


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel