Re: [Gluster-users] Slow write times to gluster disk

2017-04-13 Thread Pranith Kumar Karampuri
On Sat, Apr 8, 2017 at 10:28 AM, Ravishankar N 
wrote:

> Hi Pat,
>
> I'm assuming you are using gluster native (fuse mount). If it helps, you
> could try mounting it via gluster NFS (gnfs) and then see if there is an
> improvement in speed. Fuse mounts are slower than gnfs mounts but you get
> the benefit of avoiding a single point of failure. Unlike fuse mounts, if
> the gluster node containing the gnfs server goes down, all mounts done
> using that node will fail. For fuse mounts, you could try tweaking the
> write-behind xlator settings to see if it helps. See the
> performance.write-behind and performance.write-behind-window-size options
> in `gluster volume set help`. Of course, even for gnfs mounts, you can
> achieve fail-over by using CTDB.
>

Ravi,
  Do you have any data that suggests fuse mounts are slower than gNFS
servers?

Pat,
  I see that I am late to the thread, but do you happen to have
"profile info" of the workload?

You can follow
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/
to get the information.
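
If it helps, a minimal sketch of collecting that data (the volume name is a
placeholder for yours):

  gluster volume profile <VOLNAME> start
  (run the slow dd/cp workload on the gluster mount while profiling is on)
  gluster volume profile <VOLNAME> info
  gluster volume profile <VOLNAME> stop

The "info" output lists per-brick latencies and call counts for each FOP,
which usually shows where the time is going.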


>
> Thanks,
> Ravi
>
>
> On 04/08/2017 12:07 AM, Pat Haley wrote:
>
>
> Hi,
>
> We noticed a dramatic slowness when writing to a gluster disk when
> compared to writing to an NFS disk. Specifically when using dd (data
> duplicator) to write a 4.3 GB file of zeros:
>
>- on NFS disk (/home): 9.5 Gb/s
>- on gluster disk (/gdata): 508 Mb/s
>
> The gluster disk is 2 bricks joined together, no replication or anything
> else. The hardware is (literally) the same:
>
>- one server with 70 hard disks  and a hardware RAID card.
>- 4 disks in a RAID-6 group (the NFS disk)
>- 32 disks in a RAID-6 group (the max allowed by the card, /mnt/brick1)
>- 32 disks in another RAID-6 group (/mnt/brick2)
>- 2 hot spares
>
> Some additional information and more test results (after changing the log
> level):
>
> glusterfs 3.7.11 built on Apr 27 2016 14:09:22
> CentOS release 6.8 (Final)
> RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108
> [Invader] (rev 02)
>
>
>
> *Create the file to /gdata (gluster)*
> [root@mseas-data2 gdata]# dd if=/dev/zero of=/gdata/zero1 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 1.91876 s, *546 MB/s*
>
> *Create the file to /home (ext4)*
> [root@mseas-data2 gdata]# dd if=/dev/zero of=/home/zero1 bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 0.686021 s, *1.5 GB/s - *3 times as fast
>
>
>
> *Copy from /gdata to /gdata (gluster to gluster)*
> [root@mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
> 2048000+0 records in
> 2048000+0 records out
> 1048576000 bytes (1.0 GB) copied, 101.052 s, *10.4 MB/s* - realllyyy
> slooowww
>
>
> *Copy from /gdata to /gdata* *2nd time (gluster to gluster)*
> [root@mseas-data2 gdata]# dd if=/gdata/zero1 of=/gdata/zero2
> 2048000+0 records in
> 2048000+0 records out
> 1048576000 bytes (1.0 GB) copied, 92.4904 s, *11.3 MB/s* - realllyyy
> slooowww again
>
>
>
> *Copy from /home to /home (ext4 to ext4)*
> [root@mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero2
> 2048000+0 records in
> 2048000+0 records out
> 1048576000 bytes (1.0 GB) copied, 3.53263 s, *297 MB/s* - 30 times as fast
>
>
> *Copy from /home to /home (ext4 to ext4)*
> [root@mseas-data2 gdata]# dd if=/home/zero1 of=/home/zero3
> 2048000+0 records in
> 2048000+0 records out
> 1048576000 bytes (1.0 GB) copied, 4.1737 s, *251 MB/s* - 30 times as fast
>
>
> As a test, can we copy data directly to the xfs mountpoint (/mnt/brick1)
> and bypass gluster?
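>
> A minimal sketch of such a direct-to-brick test (the file name below is just
> a scratch placeholder, and the file should be removed right after the test so
> the brick stays consistent with the volume):
>
> [root@mseas-data2 ~]# dd if=/dev/zero of=/mnt/brick1/ddtest.tmp bs=1M count=1000
> [root@mseas-data2 ~]# rm -f /mnt/brick1/ddtest.tmp
>
> If that runs at roughly the RAID speed, the slow-down would be in the
> gluster/fuse layers rather than in the underlying xfs/RAID.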
>
>
> Any help you could give us would be appreciated.
>
> Thanks
>
> --
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley  Email:  pha...@mit.edu
> Center for Ocean Engineering   Phone:  (617) 253-6824
> Dept. of Mechanical EngineeringFax:(617) 253-8125
> MIT, Room 5-213http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA  02139-4301
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Slow write times to gluster disk

2017-04-13 Thread Ravishankar N
I'm not sure if the version you are running (glusterfs 3.7.11) works 
with NFS-Ganesha, as the link seems to suggest version >=3.8 as a 
prerequisite. Adding Soumya for help. If it is not supported, then you 
might have to go the plain gluster NFS (gnfs) way.
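
If you do end up going the gnfs route, a minimal sketch of enabling it 
(the volume name is a placeholder; this is only needed if gnfs was 
disabled earlier on the volume):

  gluster volume set <VOLNAME> nfs.disable off
  gluster volume status <VOLNAME>

The status output should then list an NFS Server process for the node, 
which is what clients would mount over NFSv3.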

Regards,
Ravi

On 04/14/2017 03:48 AM, Pat Haley wrote:


Hi Ravi (and list),

We are planning on testing the NFS route to see what kind of speed-up 
we get.  A little research led us to the following:


https://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/

Is this the correct path to take to mount 2 xfs volumes as a single 
gluster file system volume?  If not, what would be a better path?



Pat



On 04/11/2017 12:21 AM, Ravishankar N wrote:

On 04/11/2017 12:42 AM, Pat Haley wrote:


Hi Ravi,

Thanks for the reply.  And yes, we are using the gluster native 
(fuse) mount.  Since this is not my area of expertise I have a few 
questions (mostly clarifications)


Is a factor-of-20 slow-down typical when comparing a fuse-mounted 
filesystem versus an NFS-mounted filesystem, or should we also be 
looking for additional issues?  (Note the first dd test described 
below was run on the server that hosts the file-systems, so no 
network communication was involved.)


Though both the gluster bricks and the mounts are on the same 
physical machine in your setup, the I/O still passes through the 
different layers of the kernel/user-space fuse stack, although I 
don't know if a 20x slow-down on gluster vs. an NFS share is normal. 
Why don't you try doing a gluster NFS mount on the machine, run the 
same dd test, and compare it with the gluster fuse mount results?
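
For instance, a rough sketch of that comparison (the mount point is just an 
example; the volume is mounted back from the same server):

  mount -t nfs -o vers=3 mseas-data2:/<VOLNAME> /mnt/gnfs-test
  dd if=/dev/zero of=/mnt/gnfs-test/zero-gnfs bs=1M count=1000

and then compare the reported throughput with the ~546 MB/s you saw on the 
fuse mount.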




You also mention tweaking "write-behind xlator settings". Would you 
expect better speed improvements from switching the mount from 
fuse to gnfs or from tweaking the settings? Also, are these mutually 
exclusive, or would there be additional benefits from both switching to 
gnfs and tweaking?

You should test these out and find the answers yourself. :-)
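
For reference, a minimal sketch of the write-behind tuning being discussed 
(the window size is only an example value, not a recommendation):

  gluster volume set <VOLNAME> performance.write-behind on
  gluster volume set <VOLNAME> performance.write-behind-window-size 4MB
  gluster volume info <VOLNAME>
  (the changed options show up under "Options Reconfigured")

and then rerun the same dd test on the fuse mount to see whether the window 
size makes any difference.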



My next question is to make sure I'm clear on the comment "if the 
gluster node containing the gnfs server goes down, all mounts done 
using that node will fail".  If you have 2 servers, each contributing 
1 brick to the overall gluster FS, and one server fails, then for gnfs 
nothing on either server is visible to other nodes, while under fuse 
only the files on the dead server are not visible.  Is this what you meant?
Yes, for gnfs mounts, all I/O from the various mounts goes to the gnfs 
server process (on the machine whose IP was used at the time of 
mounting) which then sends the I/O to the brick processes. For fuse, 
the gluster fuse mount itself talks directly to the bricks.


Finally, you mention "even for gnfs mounts, you can achieve 
fail-over by using CTDB".  Do you know if CTDB would have any 
performance impact (i.e. in a worst-case scenario could adding CTDB 
to gnfs erase the speed benefits of going to gnfs in the first place)?
I don't think it would. You can even achieve load balancing via CTDB 
to use different gnfs servers for different clients. But I don't know 
if this is needed/helpful in your current setup where everything 
(bricks and clients) seems to be on just one server.


-Ravi

Thanks

Pat


On 04/08/2017 12:58 AM, Ravishankar N wrote:

Hi Pat,

I'm assuming you are using gluster native (fuse mount). If it 
helps, you could try mounting it via gluster NFS (gnfs) and then 
see if there is an improvement in speed. Fuse mounts are slower 
than gnfs mounts but you get the benefit of avoiding a single point 
of failure. Unlike fuse mounts, if the gluster node containing the 
gnfs server goes down, all mounts done using that node will fail. 
For fuse mounts, you could try tweaking the write-behind xlator 
settings to see if it helps. See the performance.write-behind and 
performance.write-behind-window-size options in `gluster volume set 
help`. Of course, even for gnfs mounts, you can achieve fail-over 
by using CTDB.


Thanks,
Ravi

On 04/08/2017 12:07 AM, Pat Haley wrote:


Hi,

We noticed a dramatic slowness when writing to a gluster disk when 
compared to writing to an NFS disk. Specifically when using dd 
(data duplicator) to write a 4.3 GB file of zeros:


  * on NFS disk (/home): 9.5 Gb/s
  * on gluster disk (/gdata): 508 Mb/s

The gluster disk is 2 bricks joined together, no replication or 
anything else. The hardware is (literally) the same:


  * one server with 70 hard disks  and a hardware RAID card.
  * 4 disks in a RAID-6 group (the NFS disk)
  * 32 disks in a RAID-6 group (the max allowed by the card,
/mnt/brick1)
  * 32 disks in another RAID-6 group (/mnt/brick2)
  * 2 hot spares

Some additional information and more test results (after changing 
the log level):


glusterfs 3.7.11 built on Apr 27 2016 14:09:22
CentOS release 6.8 (Final)
RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 
[Invader] (rev 02)




*Create the file to /gdata (gluster)*
[root@mseas-data2 gdata]# dd if=/dev/zero 

Re: [Gluster-users] Slow write times to gluster disk

2017-04-13 Thread Pat Haley


Hi Ravi (and list),

We are planning on testing the NFS route to see what kind of speed-up we 
get.  A little research led us to the following:


https://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/

Is this the correct path to take to mount 2 xfs volumes as a single gluster 
file system volume?  If not, what would be a better path?



Pat



On 04/11/2017 12:21 AM, Ravishankar N wrote:

On 04/11/2017 12:42 AM, Pat Haley wrote:


Hi Ravi,

Thanks for the reply.  And yes, we are using the gluster native 
(fuse) mount.  Since this is not my area of expertise I have a few 
questions (mostly clarifications)


Is a factor-of-20 slow-down typical when comparing a fuse-mounted 
filesystem versus an NFS-mounted filesystem, or should we also be 
looking for additional issues?  (Note the first dd test described 
below was run on the server that hosts the file-systems, so no network 
communication was involved.)


Though both the gluster bricks and the mounts are on the same physical 
machine in your setup, the I/O still passes through the different layers 
of the kernel/user-space fuse stack, although I don't know if a 20x 
slow-down on gluster vs. an NFS share is normal. Why don't you try doing 
a gluster NFS mount on the machine, run the same dd test, and compare it 
with the gluster fuse mount results?




You also mention tweaking "write-behind xlator settings". Would you 
expect better speed improvements from switching the mount from 
fuse to gnfs or from tweaking the settings?  Also, are these mutually 
exclusive, or would there be additional benefits from both switching to 
gnfs and tweaking?

You should test these out and find the answers yourself. :-)



My next question is to make sure I'm clear on the comment "if the 
gluster node containing the gnfs server goes down, all mounts done 
using that node will fail".  If you have 2 servers, each contributing 
1 brick to the overall gluster FS, and one server fails, then for gnfs 
nothing on either server is visible to other nodes, while under fuse 
only the files on the dead server are not visible.  Is this what you meant?
Yes, for gnfs mounts, all I/O from the various mounts goes to the gnfs 
server process (on the machine whose IP was used at the time of 
mounting) which then sends the I/O to the brick processes. For fuse, 
the gluster fuse mount itself talks directly to the bricks.


Finally, you mention "even for gnfs mounts, you can achieve fail-over 
by using CTDB".  Do you know if CTDB would have any performance 
impact (i.e. in a worst-case scenario could adding CTDB to gnfs erase 
the speed benefits of going to gnfs in the first place)?
I don't think it would. You can even achieve load balancing via CTDB 
to use different gnfs servers for different clients. But I don't know 
if this is needed/helpful in your current setup where everything 
(bricks and clients) seems to be on just one server.


-Ravi

Thanks

Pat


On 04/08/2017 12:58 AM, Ravishankar N wrote:

Hi Pat,

I'm assuming you are using gluster native (fuse mount). If it helps, 
you could try mounting it via gluster NFS (gnfs) and then see if 
there is an improvement in speed. Fuse mounts are slower than gnfs 
mounts but you get the benefit of avoiding a single point of 
failure. Unlike fuse mounts, if the gluster node containing the gnfs 
server goes down, all mounts done using that node will fail. For 
fuse mounts, you could try tweaking the write-behind xlator settings 
to see if it helps. See the performance.write-behind and 
performance.write-behind-window-size options in `gluster volume set 
help`. Of course, even for gnfs mounts, you can achieve fail-over by 
using CTDB.


Thanks,
Ravi

On 04/08/2017 12:07 AM, Pat Haley wrote:


Hi,

We noticed a dramatic slowness when writing to a gluster disk when 
compared to writing to an NFS disk. Specifically when using dd 
(data duplicator) to write a 4.3 GB file of zeros:


  * on NFS disk (/home): 9.5 Gb/s
  * on gluster disk (/gdata): 508 Mb/s

The gluster disk is 2 bricks joined together, no replication or 
anything else. The hardware is (literally) the same:


  * one server with 70 hard disks  and a hardware RAID card.
  * 4 disks in a RAID-6 group (the NFS disk)
  * 32 disks in a RAID-6 group (the max allowed by the card,
/mnt/brick1)
  * 32 disks in another RAID-6 group (/mnt/brick2)
  * 2 hot spares

Some additional information and more test results (after changing 
the log level):


glusterfs 3.7.11 built on Apr 27 2016 14:09:22
CentOS release 6.8 (Final)
RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 
[Invader] (rev 02)




*Create the file to /gdata (gluster)*
[root@mseas-data2 gdata]# dd if=/dev/zero of=/gdata/zero1 bs=1M 
count=1000

1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.91876 s, *546 MB/s*

*Create the file to /home (ext4)*
[root@mseas-data2 gdata]# dd if=/dev/zero of=/home/zero1 bs=1M 
count=1000

1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied,

Re: [Gluster-users] [gluster-devel][gluster-users] Usability Initiative for Gluster: Documentation

2017-04-13 Thread Amar Tumballi
Thanks for the initiative Amye!

On Thu, Apr 13, 2017 at 11:36 PM, Amye Scavarda  wrote:

> One of the things that we're taking on for Gluster as a project is
> improving our usability overall - user experience, developer experience,
> contributor experience. We want Gluster to be fun to work with.
>
> As part of this, we'll be working on improving our overall documentation.
> Improving documentation is one of our big focuses as a project as we move
> into being more friendly to containers overall, through improving the table
> of contents, the style guide for how we're communicating with our users and
> outlining a better template structure for contribution. Finally, we'll be
> looking to match the look and feel of our new website as well.
>
>
+1


> How is this happening?
> We'll be creating another branch to make these changes without changing
> our current documentation. However, we're going to need your help - as
> you're creating documentation now, please check with either myself, Amar or
> Nithya so that we can keep release notes running in tandem for the new
> changes happening in the new structure, and what needs to be moved over
> from the current structure. We'll be manually diffing these as we work on
> going live with the new structure.
>
> This is of vital importance for us as a project and it's something we've
> been looking at for a long time. Our timelines for this are the next month
> to six weeks, and we'll be in a better place with our documentation to
> improve from here.
>
Documentation improvements should reduce a *lot* of load on the few members
here who have to respond to similar questions over and over. Also, many times I
have heard that the flow of our documentation is not great, hence it took more
time to get to the right documentation. Hope to see this fixed.

Thanks,
Amar


> Thanks!
> - amye
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster geo-replication failure

2017-04-13 Thread Tanner Bruce
I have a 4-node gluster cluster, and a separate 1-node cluster that I want to 
set up geo-replication on.


I have my main volume on the 4 node cluster.


On the slave node, I've set up the geoaccount/geogroup and mountbroker according 
to these steps: 
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/


I am able to ssh as both the root user and the ubuntu user to all of 
gluster-{0,1,2,3}.grandukevpc as well as gluster-replica.grandukevpc. However, 
when I call sudo gluster-georep-sshkey generate I receive a Python stack trace 
that ends with the rather unintuitive error:


gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Commit failed on 
gluster-2.grandukevpc. Error: Unable to end. Error : Success\nCommit failed on 
gluster-3.grandukevpc. Error: Unable to end. Error : Success\nCommit failed on 
gluster-1.grandukevpc. Error: Unable to end. Error : Success\n', 'gluster 
system:: execute georep-sshkey.py node-generate .')


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] [gluster-devel][gluster-users] Usability Initiative for Gluster: Documentation

2017-04-13 Thread Amye Scavarda
One of the things that we're taking on for Gluster as a project is
improving our usability overall - user experience, developer experience,
contributor experience. We want Gluster to be fun to work with.

As part of this, we'll be working on improving our overall documentation.
Improving documentation is one of our big focuses as a project as we move
into being more friendly to containers overall, through improving the table
of contents, the style guide for how we're communicating with our users and
outlining a better template structure for contribution. Finally, we'll be
looking to match the look and feel of our new website as well.

How is this happening?
We'll be creating another branch to make these changes without changing our
current documentation. However, we're going to need your help - as you're
creating documentation now, please check with either myself, Amar or Nithya
so that we can keep release notes running in tandem for the new changes
happening in the new structure, and what needs to be moved over from the
current structure. We'll be manually diffing these as we work on going live
with the new structure.

This is of vital importance for us as a project and it's something we've
been looking at for a long time. Our timelines for this are the next month
to six weeks, and we'll be in a better place with our documentation to
improve from here.

Thanks!
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS Shard Feature: Max number of files in .shard-Folder

2017-04-13 Thread David Spisla
Dear Gluster-Community,

If I use the shard feature, it may happen that I will have a huge number of 
shard chunks in the hidden folder .shard.
Does anybody have experience with what the maximum number of files in one 
.shard folder is?

If I have 1 million files in such a folder, some operations like self-healing 
or other internal operations would need a lot of time, I guess.
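
For context, a rough sketch of the knobs that control this (the block size 
below is only an example): a sharded file is split into chunks of 
features.shard-block-size, so the number of entries in .shard grows roughly 
with file size divided by block size, and a larger block size means fewer 
chunks per file.

  gluster volume set <VOLNAME> features.shard on
  gluster volume set <VOLNAME> features.shard-block-size 512MB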

Sincerely


David Spisla
Software Developer
david.spi...@iternity.com
www.iTernity.com
Tel:   +49 761-590 34 841


iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg - Germany
---
you can reach our technical support at +49 761-387 36 66
---
Managing Director: Ralf Steinemann
Registered at the District Court of Freiburg: HRB No. 701332
VAT ID de-24266431


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Questions concerning tiering

2017-04-13 Thread David Spisla
Dear Gluster Community,

at the moment I am playing around with the Gluster tiering feature. It seems 
that there are always 2 tiers --> hot and cold.
Is there a way to have more than 2 tiers? I think it's not possible...

If I write some data, e.g. big video files, to which tier will it be written 
first (hot or cold tier)?
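
For reference, a rough sketch of how the single hot tier is attached and 
detached (the brick paths are placeholders); as far as I know the CLI only 
supports the one hot/cold pair:

  gluster volume tier <VOLNAME> attach replica 2 server1:/ssd/brick1 server2:/ssd/brick2
  gluster volume tier <VOLNAME> detach start
  gluster volume tier <VOLNAME> detach commit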


Regards

David Spisla
Software Developer
david.spi...@iternity.com
www.iTernity.com
Tel:   +49 761-590 34 841


iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg - Germany
---
you can reach our technical support at +49 761-387 36 66
---
Managing Director: Ralf Steinemann
Registered at the District Court of Freiburg: HRB No. 701332
VAT ID de-24266431


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-13 Thread Shyam

On 04/13/2017 11:32 AM, Atin Mukherjee wrote:



On Thu, Apr 13, 2017 at 8:17 PM, Shyam <srang...@redhat.com> wrote:

On 02/28/2017 10:17 AM, Shyam wrote:

Hi,

With release 3.10 shipped [1], it is time to set the dates for
release
3.11 (and subsequently 4.0).

This mail has the following sections, so please read or revisit
as needed,
  - Release 3.11 dates (the schedule)
  - 3.11 focus areas


Pinging the list on the above 2 items.

*Release 3.11 dates:*
Based on our release schedule [2], 3.11 would be 3 months from
the 3.10
release and would be a Short Term Maintenance (STM) release.

This puts 3.11 schedule as (working from the release date
backwards):
- Release: May 30th, 2017
- Branching: April 27th, 2017


Branching is about 2 weeks away, other than the initial set of
overflow features from 3.10 nothing else has been raised on the
lists and in github as requests for 3.11.

So, a reminder to folks who are working on features, to raise the
relevant github issue for the same, and post it to devel list for
consideration in 3.11 (also this helps tracking and ensuring we are
waiting for the right things at the time of branching).


https://github.com/gluster/glusterfs/issues/158 will get into 3.11. This
feature enhances the existing get-state CLI implementation to add a few
more parameters, like client details and brick consumption, to the output
file so that these new attributes can be considered by the tendrl project.


Thank you, added to the board.






*3.11 focus areas:*
As maintainers of gluster, we want to harden testing around the
various
gluster features in this release. Towards this the focus area
for this
release are,

1) Testing improvements in Gluster
  - Primary focus would be to get automated test cases to determine
release health, rather than repeating a manual exercise every 3
months
  - Further, we would also attempt to focus on maturing
Glusto[7] for
this, and other needs (as much as possible)

2) Merge all (or as much as possible) Facebook patches into
master, and
hence into release 3.11
  - Facebook has (as announced earlier [3]) started posting their
patches mainline, and this needs some attention to make it into
master


Further to the above, we are also considering the following features
for this release, request feature owners to let us know if these are
actively being worked on and if these will make the branching dates.
(calling out folks that I think are the current feature owners for
the same)

1) Halo - Initial Cut (@pranith)
2) IPv6 support (@kaushal)
3) Negative lookup (@poornima)
4) Parallel Readdirp - More changes to default settings. (@poornima,
@du)


[1] 3.10 release announcement:

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052188.html



[2] Gluster release schedule:
https://www.gluster.org/community/release-schedule/


[3] Mail regarding facebook patches:

http://lists.gluster.org/pipermail/gluster-devel/2016-December/051784.html



[4] Release scope:
https://github.com/gluster/glusterfs/projects/1


[5] glusterfs github issues:
https://github.com/gluster/glusterfs/issues


[6] github issues for features and major fixes:
https://hackmd.io/s/BkgH8sdtg#

[7] Glusto tests: https://github.com/gluster/glusto-tests

___
Gluster-devel mailing list
gluster-de...@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
gluster-de...@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-devel





--

ATin Mukherjee

Associate Manager, RHGS Development

Red Hat



amukh...@redhat.com M: +919739491377
IM: IRC: atinm, twitter: @mukherjee_atin



___
Gluster-users mailing list

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-13 Thread Atin Mukherjee
On Thu, Apr 13, 2017 at 8:17 PM, Shyam  wrote:

> On 02/28/2017 10:17 AM, Shyam wrote:
>
>> Hi,
>>
>> With release 3.10 shipped [1], it is time to set the dates for release
>> 3.11 (and subsequently 4.0).
>>
>> This mail has the following sections, so please read or revisit as needed,
>>   - Release 3.11 dates (the schedule)
>>   - 3.11 focus areas
>>
>
> Pinging the list on the above 2 items.
>
> *Release 3.11 dates:*
>> Based on our release schedule [2], 3.11 would be 3 months from the 3.10
>> release and would be a Short Term Maintenance (STM) release.
>>
>> This puts 3.11 schedule as (working from the release date backwards):
>> - Release: May 30th, 2017
>> - Branching: April 27th, 2017
>>
>
> Branching is about 2 weeks away, other than the initial set of overflow
> features from 3.10 nothing else has been raised on the lists and in github
> as requests for 3.11.
>
> So, a reminder to folks who are working on features, to raise the relevant
> github issue for the same, and post it to devel list for consideration in
> 3.11 (also this helps tracking and ensuring we are waiting for the right
> things at the time of branching).
>

https://github.com/gluster/glusterfs/issues/158 will get into 3.11. This
feature enhances the existing get-state CLI implementation to add a few more
parameters, like client details and brick consumption, to the output file so
that these new attributes can be considered by the tendrl project.


>
>
>> *3.11 focus areas:*
>> As maintainers of gluster, we want to harden testing around the various
>> gluster features in this release. Towards this the focus area for this
>> release are,
>>
>> 1) Testing improvements in Gluster
>>   - Primary focus would be to get automated test cases to determine
>> release health, rather than repeating a manual exercise every 3 months
>>   - Further, we would also attempt to focus on maturing Glusto[7] for
>> this, and other needs (as much as possible)
>>
>> 2) Merge all (or as much as possible) Facebook patches into master, and
>> hence into release 3.11
>>   - Facebook has (as announced earlier [3]) started posting their
>> patches mainline, and this needs some attention to make it into master
>>
>>
> Further to the above, we are also considering the following features for
> this release, request feature owners to let us know if these are actively
> being worked on and if these will make the branching dates. (calling out
> folks that I think are the current feature owners for the same)
>
> 1) Halo - Initial Cut (@pranith)
> 2) IPv6 support (@kaushal)
> 3) Negative lookup (@poornima)
> 4) Parallel Readdirp - More changes to default settings. (@poornima, @du)
>
>
> [1] 3.10 release announcement:
>> http://lists.gluster.org/pipermail/gluster-devel/2017-Februa
>> ry/052188.html
>>
>> [2] Gluster release schedule:
>> https://www.gluster.org/community/release-schedule/
>>
>> [3] Mail regarding facebook patches:
>> http://lists.gluster.org/pipermail/gluster-devel/2016-Decemb
>> er/051784.html
>>
>> [4] Release scope: https://github.com/gluster/glusterfs/projects/1
>>
>> [5] glusterfs github issues: https://github.com/gluster/glusterfs/issues
>>
>> [6] github issues for features and major fixes:
>> https://hackmd.io/s/BkgH8sdtg#
>>
>> [7] Glusto tests: https://github.com/gluster/glusto-tests
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 

ATin Mukherjee

Associate Manager, RHGS Development

Red Hat



amukh...@redhat.com M: +919739491377
IM: IRC: atinm, twitter: @mukherjee_atin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo replication stuck (rsync: link_stat "(unreachable)")

2017-04-13 Thread mabi
Hi Kotresh,

Thanks for your feedback.

So do you mean I can simply log in to the geo-replication slave node, mount 
the volume with fuse, delete the problematic directory, and finally restart 
geo-replication?
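
In case it helps to be concrete, this is roughly what I have in mind (the 
volume and host names are placeholders):

  mount -t glusterfs localhost:/<SLAVEVOL> /mnt/slave
  (remove the problematic directory under /mnt/slave, or from the brick path
   as you suggested)
  umount /mnt/slave
  gluster volume geo-replication <MASTERVOL> <slavehost>::<SLAVEVOL> stop
  gluster volume geo-replication <MASTERVOL> <slavehost>::<SLAVEVOL> start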

I am planning to migrate to 3.8 as soon as I have a backup (geo-replication). 
Is this issue with DHT fixed in the latest 3.8.x release?

Regards,
M.

 Original Message 
Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat 
"(unreachable)")
Local Time: April 13, 2017 7:57 AM
UTC Time: April 13, 2017 5:57 AM
From: khire...@redhat.com
To: mabi 
Gluster Users 

Hi,

I think the directory Workhours_2017 was deleted on the master, and on the
slave the delete is failing because there might be stale linkto files
on the back end. These issues are fixed in DHT in the latest versions.
Upgrading to the latest version would solve these issues.

To work around the issue, you might need to clean up the problematic
directory on the slave from the backend.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "mabi" 
> To: "Kotresh Hiremath Ravishankar" 
> Cc: "Gluster Users" 
> Sent: Thursday, April 13, 2017 12:28:50 AM
> Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat 
> "(unreachable)")
>
> Hi Kotresh,
>
> Thanks for your hint, adding the "--ignore-missing-args" option to rsync and
> restarting geo-replication worked but it only managed to sync approximately
> 1/3 of the data until it put the geo replication in status "Failed" this
> time. Now I have a different type of error as you can see below from the log
> extract on my geo replication slave node:
>
> [2017-04-12 18:01:55.268923] I [MSGID: 109066] [dht-rename.c:1574:dht_rename]
> 0-myvol-private-geo-dht: renaming
> /.gfid/1678ff37-f708-4197-bed0-3ecd87ae1314/Workhours_2017
> empty.xls.ocTransferId2118183895.part
> (hash=myvol-private-geo-client-0/cache=myvol-private-geo-client-0) =>
> /.gfid/1678ff37-f708-4197-bed0-3ecd87ae1314/Workhours_2017 empty.xls
> (hash=myvol-private-geo-client-0/cache=myvol-private-geo-client-0)
> [2017-04-12 18:01:55.269842] W [fuse-bridge.c:1787:fuse_rename_cbk]
> 0-glusterfs-fuse: 4786:
> /.gfid/1678ff37-f708-4197-bed0-3ecd87ae1314/Workhours_2017
> empty.xls.ocTransferId2118183895.part ->
> /.gfid/1678ff37-f708-4197-bed0-3ecd87ae1314/Workhours_2017 empty.xls => -1
> (Directory not empty)
> [2017-04-12 18:01:55.314062] I [fuse-bridge.c:5016:fuse_thread_proc] 0-fuse:
> unmounting /tmp/gsyncd-aux-mount-PNSR8s
> [2017-04-12 18:01:55.314311] W [glusterfsd.c:1251:cleanup_and_exit]
> (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f97d3129064]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f97d438a725]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x57) [0x7f97d438a5a7] ) 0-:
> received signum (15), shutting down
> [2017-04-12 18:01:55.314335] I [fuse-bridge.c:5720:fini] 0-fuse: Unmounting
> '/tmp/gsyncd-aux-mount-PNSR8s'.
>
> How can I fix now this issue and have geo-replication continue synchronising
> again?
>
> Best regards,
> M.
>
>  Original Message 
> Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
> "(unreachable)")
> Local Time: April 11, 2017 9:18 AM
> UTC Time: April 11, 2017 7:18 AM
> From: khire...@redhat.com
> To: mabi 
> Gluster Users 
>
> Hi,
>
> Then please use set the following rsync config and let us know if it helps.
>
> gluster vol geo-rep  :: config rsync-options
> "--ignore-missing-args"
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> > From: "mabi" 
> > To: "Kotresh Hiremath Ravishankar" 
> > Cc: "Gluster Users" 
> > Sent: Tuesday, April 11, 2017 2:15:54 AM
> > Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
> > "(unreachable)")
> >
> > Hi Kotresh,
> >
> > I am using the official Debian 8 (jessie) package which has rsync version
> > 3.1.1.
> >
> > Regards,
> > M.
> >
> >  Original Message 
> > Subject: Re: [Gluster-users] Geo replication stuck (rsync: link_stat
> > "(unreachable)")
> > Local Time: April 10, 2017 6:33 AM
> > UTC Time: April 10, 2017 4:33 AM
> > From: khire...@redhat.com
> > To: mabi 
> > Gluster Users 
> >
> > Hi Mabi,
> >
> > What's the rsync version being used?
> >
> > Thanks and Regards,
> > Kotresh H R
> >
> > - Original Message -
> > > From: "mabi" 
> > > To: "Gluster Users" 
> > > Sent: Saturday, April 8, 2017 4:20:25 PM
> > > Subject: [Gluster-users] Geo replication stuck (rsync: link_stat
> > > "(unreachable)")
> > >
> > > Hello,
> > >
> > > I am using distributed geo replication with two of my GlusterFS 3.7.20
> > > replicated volumes and just noticed that the geo replication for one
> > > volume
> > > is not working anymore. It is stuck since the 2017-02-23 22:39 and I
> > > tried
> > > to stop and restart geo replication but still it stays stuck at that
> > > specific date and time under the DATA field of the geo replication
> > > "status
> > > detail" command I can see 3879 and that it has "Active" as STATUS but
> > 

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-13 Thread Shyam

On 02/28/2017 10:17 AM, Shyam wrote:

Hi,

With release 3.10 shipped [1], it is time to set the dates for release
3.11 (and subsequently 4.0).

This mail has the following sections, so please read or revisit as needed,
  - Release 3.11 dates (the schedule)
  - 3.11 focus areas


Pinging the list on the above 2 items.


*Release 3.11 dates:*
Based on our release schedule [2], 3.11 would be 3 months from the 3.10
release and would be a Short Term Maintenance (STM) release.

This puts 3.11 schedule as (working from the release date backwards):
- Release: May 30th, 2017
- Branching: April 27th, 2017


Branching is about 2 weeks away, other than the initial set of overflow 
features from 3.10 nothing else has been raised on the lists and in 
github as requests for 3.11.


So, a reminder to folks who are working on features, to raise the 
relevant github issue for the same, and post it to devel list for 
consideration in 3.11 (also this helps tracking and ensuring we are 
waiting for the right things at the time of branching).




*3.11 focus areas:*
As maintainers of gluster, we want to harden testing around the various
gluster features in this release. Towards this the focus area for this
release are,

1) Testing improvements in Gluster
  - Primary focus would be to get automated test cases to determine
release health, rather than repeating a manual exercise every 3 months
  - Further, we would also attempt to focus on maturing Glusto[7] for
this, and other needs (as much as possible)

2) Merge all (or as much as possible) Facebook patches into master, and
hence into release 3.11
  - Facebook has (as announced earlier [3]) started posting their
patches mainline, and this needs some attention to make it into master



Further to the above, we are also considering the following features for 
this release, request feature owners to let us know if these are 
actively being worked on and if these will make the branching dates. 
(calling out folks that I think are the current feature owners for the same)


1) Halo - Initial Cut (@pranith)
2) IPv6 support (@kaushal)
3) Negative lookup (@poornima)
4) Parallel Readdirp - More changes to default settings. (@poornima, @du)


[1] 3.10 release announcement:
http://lists.gluster.org/pipermail/gluster-devel/2017-February/052188.html

[2] Gluster release schedule:
https://www.gluster.org/community/release-schedule/

[3] Mail regarding facebook patches:
http://lists.gluster.org/pipermail/gluster-devel/2016-December/051784.html

[4] Release scope: https://github.com/gluster/glusterfs/projects/1

[5] glusterfs github issues: https://github.com/gluster/glusterfs/issues

[6] github issues for features and major fixes:
https://hackmd.io/s/BkgH8sdtg#

[7] Glusto tests: https://github.com/gluster/glusto-tests
___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Community Meeting minutes for 12 April 2017

2017-04-13 Thread Kaleb S. KEITHLEY


No meeting was held on 2017-03-29. Zero people responded to the roll 
call. (Possibly due to many being at the Vault storage conference.)


Also a very low turnout for this meeting — only five people and myself.

The next meeting is on 26 April 2017 at 15:00 UTC  (11AM EDT, 8AM PDT) 
or `date -d "15:00 UTC"` at the shell prompt for your local timezone.


Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-12/gluster_community_weekly_meeting.2017-04-12-15.00.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-12/gluster_community_weekly_meeting.2017-04-12-15.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-12/gluster_community_weekly_meeting.2017-04-12-15.00.log.html


==
#gluster-meeting: Gluster community weekly meeting
==


Meeting started by kkeithley at 15:00:45 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-12/gluster_community_weekly_meeting.2017-04-12-15.00.log.html
.



Meeting summary
---
* roll call  (kkeithley, 15:01:19)

* next meeting's host  (kkeithley, 15:05:59)
  * kshlm will host in two weeks  (kkeithley, 15:07:21)

* old pending reviews  (kkeithley, 15:07:56)
  * ACTION: nigelb to start deleting old patches in gerrit  (kkeithley,
15:11:36)

* snapshot on btrfs  (kkeithley, 15:12:18)
  * JoeJulian will check with major on status of snapshot-on-btrfs
(kkeithley, 15:16:00)

* AIs from last meeting  (kkeithley, 15:16:28)

* jdarcy and nigelb to make reverts easier  (kkeithley, 15:17:06)

* nigelb will document packaging  (kkeithley, 15:17:29)

* shyam backport whine job and feedback  (kkeithley, 15:29:25)

* amye and vbellur to work on revised maintainers' draft?  (kkeithley,
  15:31:27)

* rafi will start discussion of abandoning old reviews in gerrit
  (kkeithley, 15:33:13)
  * shyam will send a 3.11 feature nag  (kkeithley, 15:35:49)
  * 3.12 and 4.0 scope and dates to be out by end of the week
(kkeithley, 15:36:03)
  * Software Defined Storage meetup tomorrow:
https://www.meetup.com/Seattle-Storage-Meetup/events/238684916/
(kkeithley, 15:36:36)

* Open Floor  (kkeithley, 15:42:54)

Meeting ended at 15:44:30 UTC.




Action Items

* nigelb to start deleting old patches in gerrit




Action Items, by person
---
* **UNASSIGNED**
  * nigelb to start deleting old patches in gerrit




People Present (lines said)
---
* kkeithley (75)
* ndevos (25)
* JoeJulian (19)
* shyam (17)
* kshlm (10)
* amye (7)
* zodbot (5)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-13 Thread Pranith Kumar Karampuri
On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL 
wrote:

> yes it is ext4. but what is the impact of this.
>

Did you have a lot of data before and then delete all of it? ext4, if I
remember correctly, doesn't decrease the size of a directory once it has
expanded it. So in ext4, if you create lots and lots of files inside a
directory and then delete them all, the directory size increases at creation
time but won't decrease after deletion. I don't have any system with ext4 at
the moment to test this now. This is something we faced 5-6 years back, but
I'm not sure if it is fixed in ext4 in the latest releases.
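
A quick way to check this on any machine that does have ext4 handy (the path
and file count are just examples):

  mkdir /some/ext4/path/dirsize-test
  ls -ld /some/ext4/path/dirsize-test     (typically 4096 bytes to start with)
  for i in $(seq 1 50000); do touch /some/ext4/path/dirsize-test/f$i; done
  ls -ld /some/ext4/path/dirsize-test     (the directory inode itself has grown)
  find /some/ext4/path/dirsize-test -type f -delete
  ls -ld /some/ext4/path/dirsize-test     (on ext4 the size tends to stay at the grown value)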


>
> On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> Yes
>>
>> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Means the fs where this brick has been created?
>>> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" 
>>> wrote:
>>>
 Is your backend filesystem ext4?

 On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
 abhishpali...@gmail.com> wrote:

> No,we are not using sharding
> On Apr 12, 2017 7:29 PM, "Alessandro Briosi"  wrote:
>
>> Il 12/04/2017 14:16, ABHISHEK PALIWAL ha scritto:
>>
>> I have did more investigation and find out that brick dir size is
>> equivalent to gluster mount point but .glusterfs having too much 
>> difference
>>
>>
>> You are probably using sharding?
>>
>>
>> Buon lavoro.
>> *Alessandro Briosi*
>>
>> *METAL.it Nord S.r.l.*
>> Via Maioliche 57/C - 38068 Rovereto (TN)
>> Tel.+39.0464.430130 - Fax +39.0464.437393
>> www.metalit.com
>>
>>
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



 --
 Pranith

>>>
>>
>>
>> --
>> Pranith
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] rebalance fix layout necessary

2017-04-13 Thread Amudhan P
I have another issue now: after expanding the cluster, folder listing time
has increased by 400%.

I have also tried enabling readdir-ahead & parallel-readdir, but they showed
no improvement in folder listing and instead introduced an issue where random
folders disappeared from the listing and data reads showed IO errors.

I tried disabling cluster.readdir-optimize and remounting the fuse client, but
the issue continued. So I disabled readdir-ahead & parallel-readdir and
enabled cluster.readdir-optimize, and everything works fine again.

How do I bring down folder listing time?


Below is my config in Volume :
Options Reconfigured:
nfs.disable: yes
cluster.disperse-self-heal-daemon: enable
cluster.weighted-rebalance: off
cluster.rebal-throttle: aggressive
performance.readdir-ahead: off
cluster.min-free-disk: 10%
features.default-soft-limit: 80%
performance.force-readdirp: no
dht.force-readdirp: off
cluster.readdir-optimize: on
cluster.heal-timeout: 43200
cluster.data-self-heal: on
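
(For reference, these are the options being toggled above; whether they help
appears to be workload-dependent, as seen with parallel-readdir earlier:)

  gluster volume set gfs-vol performance.readdir-ahead on
  gluster volume set gfs-vol performance.parallel-readdir on
  gluster volume set gfs-vol cluster.readdir-optimize on
  gluster volume info gfs-vol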

On Fri, Apr 7, 2017 at 7:35 PM, Amudhan P  wrote:

> Volume type:
> Disperse Volume  8+2  = 1080 bricks
>
> The first time, I added 8+2 * 3 sets and it started giving an issue in listing
> folders. So I remounted the mount point and it was working fine.
>
> The second time, I added 8+2 * 13 sets and it also had the same issue.
>
> When listing a folder, it was returning an empty folder or not showing all
> the folders.
>
> When an ongoing write was interrupted, it threw an error that the destination
> folder was not available.
>
> adding few more lines from log.. let me know if you need full log file.
>
> [2017-04-05 13:40:03.702624] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec]
> 0-mgmt: Volume file changed
> [2017-04-05 13:40:04.970055] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-123: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.971194] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-122: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.972144] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-121: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.973131] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-120: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.974072] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-119: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.975005] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-118: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.975936] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-117: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.976905] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-116: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.977825] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-115: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.978755] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-114: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.979689] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-113: Using 'sse' CPU extensions
> [2017-04-05 13:40:04.980626] I [MSGID: 122067] [ec-code.c:1046:ec_code_detect]
> 2-gfs-vol-disperse-112: Using 'sse' CPU extensions
> [2017-04-05 13:40:07.270412] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> 2-gfs-vol-client-736: changing port to 49153 (from 0)
> [2017-04-05 13:40:07.271902] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> 2-gfs-vol-client-746: changing port to 49154 (from 0)
> [2017-04-05 13:40:07.272076] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> 2-gfs-vol-client-756: changing port to 49155 (from 0)
> [2017-04-05 13:40:07.273154] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> 2-gfs-vol-client-766: changing port to 49156 (from 0)
> [2017-04-05 13:40:07.273193] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> 2-gfs-vol-client-776: changing port to 49157 (from 0)
> [2017-04-05 13:40:07.273371] I [MSGID: 114046] 
> [client-handshake.c:1216:client_setvolume_cbk]
> 2-gfs-vol-client-579: Connected to gfs-vol-client-579, attached to remote
> volume '/media/disk22/brick22'.
> [2017-04-05 13:40:07.273388] I [MSGID: 114047] 
> [client-handshake.c:1227:client_setvolume_cbk]
> 2-gfs-vol-client-579: Server and Client lk-version numbers are not same,
> reopening the fds
> [2017-04-05 13:40:07.273435] I [MSGID: 114035] 
> [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-433: Server lk version = 1
> [2017-04-05 13:40:07.275632] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> 2-gfs-vol-client-786: changing port to 49158 (from 0)
> [2017-04-05 13:40:07.275685] I [MSGID: 114046] 
> [client-handshake.c:1216:client_setvolume_cbk]
> 2-gfs-vol-client-589: Connected to gfs-vol-client-589, attached to remote
> volume '/media/disk23/brick23'.
> [2017-04-05 13:40:07.275707] I [MSGID: 114047] 
> [client-handshake.c:1227:client_setvolume_cbk]
> 2-gfs-vol-client-589: Server and Client lk-version numbers are not same,
> reopening the fds
> [2017-04-05 13:40:07.087011] I [rpc-clnt.c:2