[Gluster-users] Gluster Monthly Newsletter, December 2018

2019-01-07 Thread Amye Scavarda
Gluster Monthly Newsletter, December 2018

See you at FOSDEM! We have a jam-packed Software Defined Storage day on
Sunday, Feb 3rd (with a few sessions on the previous day):
https://fosdem.org/2019/schedule/track/software_defined_storage/
We also have a shared stand with Ceph, come find us!

Gluster 6 - We’re planning our Gluster 6 release, currently scheduled for
Feb 2019. More details are on the mailing lists at
https://lists.gluster.org/pipermail/gluster-devel/2018-November/055672.html

Want swag for your meetup? https://www.gluster.org/events/ has a contact
form you can use to tell us about your Gluster meetup! We’d love to hear
about upcoming Gluster presentations, conference talks, and gatherings. Let
us know!

Contributors
Top Contributing Companies:  Red Hat, Comcast, DataLab, Gentoo Linux,
Facebook, BioDec, Samsung, Etersoft
Top Contributors in December: Sunny Kumar, Amar Tumballi, Sheetal Pamecha,
Harpreet Kaur Lalwani, Sanju Rakonde

Noteworthy Threads:
[Gluster-users] Update from GlusterFS project (November -2018)
https://lists.gluster.org/pipermail/gluster-users/2018-December/035446.html
[Gluster-users] Glusterd2 project updates (github.com/gluster/glusterd2)
https://lists.gluster.org/pipermail/gluster-users/2018-December/035448.html
[Gluster-users] GCS 0.4 release
https://lists.gluster.org/pipermail/gluster-users/2018-December/035457.html
[Gluster-users] Announcing Gluster release 5.2
https://lists.gluster.org/pipermail/gluster-users/2018-December/035461.html
[Gluster-users] Gluster meetup: India
https://lists.gluster.org/pipermail/gluster-users/2018-December/035476.html
[Gluster-users] Update on GCS 0.5 release
https://lists.gluster.org/pipermail/gluster-users/2018-December/035505.html
[Gluster-devel] Gluster Weekly Report : Static Analyser
https://lists.gluster.org/pipermail/gluster-devel/2018-December/055711.html
[Gluster-devel] FOSDEM stand - February 2 & 3, 2019
https://lists.gluster.org/pipermail/gluster-devel/2018-December/055715.html
[Gluster-devel] Infra Update for Nov and Dec
https://lists.gluster.org/pipermail/gluster-devel/2018-December/055735.html
[Gluster-devel] Latency analysis of GlusterFS' network layer for pgbench
https://lists.gluster.org/pipermail/gluster-devel/2018-December/055741.html
[Gluster-devel] Implementing multiplexing for self heal client.
https://lists.gluster.org/pipermail/gluster-devel/2018-December/055742.html
[Gluster-devel] include-what-you-use run on Gluster
https://lists.gluster.org/pipermail/gluster-devel/2018-December/055750.html
[Gluster-devel] [DHT] serialized readdir(p) across subvols and effect on
performance
https://lists.gluster.org/pipermail/gluster-devel/2018-December/055762.html

Events:

FOSDEM, Feb 2-3 2019 in Brussels, Belgium - https://fosdem.org/2019/
Vault: February 25–26, 2019 - https://www.usenix.org/conference/vault19/

Open CFPs:
KubeCon EU - Barcelona: May 19-21 - CFP closes Jan 19!
https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/

CFP:
https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/cfp/



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Glusterfs backup and restore

2019-01-07 Thread Kannan V
Hi,
  I am able to take a GlusterFS snapshot and activate it.
Now I want to send the snapshot to another machine for backup (preferably
as a tar file).
When there is a problem, I want to take the backed-up data from the other
machine and restore it.
I could not compress the data. I mean the snapshot has been created at
"/var/lib/glusterd/snaps/", but if I compress that directory, the actual
data is not present.
Where exactly do I have to compress the data from, and how do I restore it?
Kindly provide your suggestions.
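
For context, /var/lib/glusterd/snaps holds snapshot metadata, not the file
data itself; a common approach is to mount the activated snapshot and
archive from the mount point. A minimal sketch, assuming a volume named
"myvol", a snapshot named "snap1", and illustrative paths:

  # activate the snapshot if it is not already active
  gluster snapshot activate snap1
  # mount the activated snapshot through the snaps namespace
  mkdir -p /mnt/snap1
  mount -t glusterfs localhost:/snaps/snap1/myvol /mnt/snap1
  # archive the snapshot contents for transfer to the backup machine
  tar -czf /backup/myvol-snap1.tar.gz -C /mnt/snap1 .
  umount /mnt/snap1

Restoring would then amount to extracting the tar onto a mounted volume on
the target machine.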
Thanks,
Kannan V
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [External] Re: Input/output error on FUSE log

2019-01-07 Thread Davide Obbi
Then my last idea would be to try creating the same files, or running the
application, on the other volumes. Sorry, but I will be interested in the
solution!
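
A quick way to exercise the other volumes, given that the report below
involves writes larger than about 2GB, might be (mount point is
illustrative):

  # write a ~3GB test file to try to reproduce the large-write failure
  dd if=/dev/zero of=/mnt/othervol/testfile bs=1M count=3000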

On Mon, Jan 7, 2019 at 7:52 PM Matt Waymack  wrote:

> Yep, first unmounted/remounted, then rebooted clients.  Stopped/started the
> volumes, and rebooted all nodes.
>
>
>
> *From:* Davide Obbi 
> *Sent:* Monday, January 7, 2019 12:47 PM
> *To:* Matt Waymack 
> *Cc:* Raghavendra Gowdappa ;
> gluster-users@gluster.org List 
> *Subject:* Re: [External] Re: [Gluster-users] Input/output error on FUSE
> log
>
>
>
> I guess you already tried unmounting, stopping/starting, and remounting?
>
>
>
> On Mon, Jan 7, 2019 at 7:44 PM Matt Waymack  wrote:
>
> Yes, all volumes use sharding.
>
>
>
> *From:* Davide Obbi 
> *Sent:* Monday, January 7, 2019 12:43 PM
> *To:* Matt Waymack 
> *Cc:* Raghavendra Gowdappa ;
> gluster-users@gluster.org List 
> *Subject:* Re: [External] Re: [Gluster-users] Input/output error on FUSE
> log
>
>
>
> Are all the volumes configured with sharding?
>
>
>
> On Mon, Jan 7, 2019 at 5:35 PM Matt Waymack  wrote:
>
> I think that I can rule out network as I have multiple volumes on the same
> nodes and not all volumes are affected.  Additionally, access via SMB using
> samba-vfs-glusterfs is not affected, even on the same volumes.   This is
> seemingly only affecting the FUSE clients.
>
>
>
> *From:* Davide Obbi 
> *Sent:* Sunday, January 6, 2019 12:26 PM
> *To:* Raghavendra Gowdappa 
> *Cc:* Matt Waymack ; gluster-users@gluster.org List <
> gluster-users@gluster.org>
> *Subject:* Re: [External] Re: [Gluster-users] Input/output error on FUSE
> log
>
>
>
> Hi,
>
>
>
> I would start with some checks: "(Input/output error)" seems to be
> returned by the operating system; this happens, for instance, when trying
> to access a file system on a device that is not available. So I would
> check the network connectivity between the clients and servers, and
> between the servers, during the reported time.
>
>
>
> Regards
>
> Davide
>
>
>
> On Sun, Jan 6, 2019 at 3:32 AM Raghavendra Gowdappa 
> wrote:
>
>
>
>
>
> On Sun, Jan 6, 2019 at 7:58 AM Raghavendra Gowdappa 
> wrote:
>
>
>
>
>
> On Sun, Jan 6, 2019 at 4:19 AM Matt Waymack  wrote:
>
> Hi all,
>
>
>
> I'm having a problem writing to our volume.  When writing files larger
> than about 2GB, I get an intermittent issue where the write will fail and
> return Input/Output error.  This is also shown in the FUSE log of the
> client (this is affecting all clients).  A snip of a client log is below:
>
> [2019-01-05 22:39:44.581371] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51040978: WRITE => -1
> gfid=82a0b5c4-7ef3-43c2-ad86-41e16673d7c2 fd=0x7f949839a368 (Input/output
> error)
>
> [2019-01-05 22:39:44.598392] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51040979: FLUSH() ERR => -1 (Input/output error)
>
> [2019-01-05 22:39:47.420920] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51041266: WRITE => -1
> gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949809b7f8 (Input/output
> error)
>
> [2019-01-05 22:39:47.433377] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51041267: FLUSH() ERR => -1 (Input/output error)
>
> [2019-01-05 22:39:50.441531] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51041548: WRITE => -1
> gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949839a368 (Input/output
> error)
>
> [2019-01-05 22:39:50.451914] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51041549: FLUSH() ERR => -1 (Input/output error)
>
> The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search]
> 0-gv1-dht: no subvolume for hash (value) = 1311504267" repeated 1721 times
> between [2019-01-05 22:39:33.906241] and [2019-01-05 22:39:44.598371]
>
> The message "E [MSGID: 101046] [dht-common.c:1502:dht_lookup_dir_cbk]
> 0-gv1-dht: dict is null" repeated 1714 times between [2019-01-05
> 22:39:33.925981] and [2019-01-05 22:39:50.451862]
>
> The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search]
> 0-gv1-dht: no subvolume for hash (value) = 1137142622" repeated 1707 times
> between [2019-01-05 22:39:39.636552] and [2019-01-05 22:39:50.451895]
>
>
>
> This looks to be a DHT issue. Some questions:
>
> * Are all subvolumes of DHT up, and is the client connected to them?
> Particularly the subvolume which contains the file in question.
>
> * Can you get all extended attributes of the parent directory of the file
> from all bricks?
>
> * Set diagnostics.client-log-level to TRACE, capture these errors again,
> and attach the client log file.
>
>
>
> I spoke a bit too early. dht_writev doesn't search the hashed subvolume,
> as it has already been looked up in lookup. So these messages look to be a
> different issue, not the writev failure.
>
>
>
>
>
> This is intermittent for most files, but eventually, if a file is large
> enough, it will not write.  The workflow is SFTP to the client, which then
> writes to the volume over FUSE.  When files get to a certain point, we can
> no longer write to 

Re: [Gluster-users] [External] Re: Input/output error on FUSE log

2019-01-07 Thread Matt Waymack
Yep, first unmounted/remounted, then rebooted clients.  Stopped/started the
volumes, and rebooted all nodes.

From: Davide Obbi 
Sent: Monday, January 7, 2019 12:47 PM
To: Matt Waymack 
Cc: Raghavendra Gowdappa ; gluster-users@gluster.org List 

Subject: Re: [External] Re: [Gluster-users] Input/output error on FUSE log

I guess you already tried unmounting, stopping/starting, and remounting?

On Mon, Jan 7, 2019 at 7:44 PM Matt Waymack 
mailto:mwaym...@nsgdv.com>> wrote:
Yes, all volumes use sharding.

From: Davide Obbi mailto:davide.o...@booking.com>>
Sent: Monday, January 7, 2019 12:43 PM
To: Matt Waymack mailto:mwaym...@nsgdv.com>>
Cc: Raghavendra Gowdappa mailto:rgowd...@redhat.com>>; 
gluster-users@gluster.org List 
mailto:gluster-users@gluster.org>>
Subject: Re: [External] Re: [Gluster-users] Input/output error on FUSE log

Are all the volumes configured with sharding?

On Mon, Jan 7, 2019 at 5:35 PM Matt Waymack 
mailto:mwaym...@nsgdv.com>> wrote:
I think that I can rule out network as I have multiple volumes on the same 
nodes and not all volumes are affected.  Additionally, access via SMB using 
samba-vfs-glusterfs is not affected, even on the same volumes.   This is 
seemingly only affecting the FUSE clients.

From: Davide Obbi mailto:davide.o...@booking.com>>
Sent: Sunday, January 6, 2019 12:26 PM
To: Raghavendra Gowdappa mailto:rgowd...@redhat.com>>
Cc: Matt Waymack mailto:mwaym...@nsgdv.com>>; 
gluster-users@gluster.org List 
mailto:gluster-users@gluster.org>>
Subject: Re: [External] Re: [Gluster-users] Input/output error on FUSE log

Hi,

I would start with some checks: "(Input/output error)" seems to be returned by
the operating system; this happens, for instance, when trying to access a file
system on a device that is not available. So I would check the network
connectivity between the clients and servers, and between the servers, during
the reported time.
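
Those checks might look something like the following; a rough sketch,
assuming the gv1 volume from this thread (server name is illustrative):

  # all peers should show State: Peer in Cluster (Connected)
  gluster peer status
  # list the clients connected to each brick of the volume
  gluster volume status gv1 clients
  # basic reachability from the client to a server
  ping -c 3 tpc-glus4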

Regards
Davide

On Sun, Jan 6, 2019 at 3:32 AM Raghavendra Gowdappa 
mailto:rgowd...@redhat.com>> wrote:


On Sun, Jan 6, 2019 at 7:58 AM Raghavendra Gowdappa 
mailto:rgowd...@redhat.com>> wrote:


On Sun, Jan 6, 2019 at 4:19 AM Matt Waymack 
mailto:mwaym...@nsgdv.com>> wrote:

Hi all,



I'm having a problem writing to our volume.  When writing files larger than 
about 2GB, I get an intermittent issue where the write will fail and return 
Input/Output error.  This is also shown in the FUSE log of the client (this is 
affecting all clients).  A snip of a client log is below:

[2019-01-05 22:39:44.581371] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51040978: WRITE => -1 
gfid=82a0b5c4-7ef3-43c2-ad86-41e16673d7c2 fd=0x7f949839a368 (Input/output error)

[2019-01-05 22:39:44.598392] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51040979: FLUSH() ERR => -1 (Input/output error)

[2019-01-05 22:39:47.420920] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51041266: WRITE => -1 
gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949809b7f8 (Input/output error)

[2019-01-05 22:39:47.433377] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51041267: FLUSH() ERR => -1 (Input/output error)

[2019-01-05 22:39:50.441531] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51041548: WRITE => -1 
gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949839a368 (Input/output error)

[2019-01-05 22:39:50.451914] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51041549: FLUSH() ERR => -1 (Input/output error)

The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search] 0-gv1-dht: 
no subvolume for hash (value) = 1311504267" repeated 1721 times between 
[2019-01-05 22:39:33.906241] and [2019-01-05 22:39:44.598371]

The message "E [MSGID: 101046] [dht-common.c:1502:dht_lookup_dir_cbk] 
0-gv1-dht: dict is null" repeated 1714 times between [2019-01-05 
22:39:33.925981] and [2019-01-05 22:39:50.451862]

The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search] 0-gv1-dht: 
no subvolume for hash (value) = 1137142622" repeated 1707 times between 
[2019-01-05 22:39:39.636552] and [2019-01-05 22:39:50.451895]

This looks to be a DHT issue. Some questions:
* Are all subvolumes of DHT up, and is the client connected to them?
Particularly the subvolume which contains the file in question.
* Can you get all extended attributes of the parent directory of the file from
all bricks?
* Set diagnostics.client-log-level to TRACE, capture these errors again, and
attach the client log file.
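
A minimal sketch of those checks, assuming the gv1 volume from this thread
and an illustrative brick path:

  # confirm all bricks are up and clients are connected
  gluster volume status gv1
  # dump all extended attributes of the file's parent directory on a brick
  getfattr -d -m . -e hex /exp/b1/gv1/path/to/parent
  # raise client log verbosity, reproduce the error, then revert
  gluster volume set gv1 diagnostics.client-log-level TRACE
  gluster volume reset gv1 diagnostics.client-log-level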

I spoke a bit too early. dht_writev doesn't search the hashed subvolume, as it
has already been looked up in lookup. So these messages look to be a different
issue, not the writev failure.


This is intermittent for most files, but eventually, if a file is large enough,
it will not write.  The workflow is SFTP to the client, which then writes to the
volume over FUSE.  When files get to a certain point, we can no longer write to
them.  The file sizes are different as well, so it's not like they 

Re: [Gluster-users] [External] Re: Input/output error on FUSE log

2019-01-07 Thread Davide Obbi
Are all the volumes configured with sharding?

On Mon, Jan 7, 2019 at 5:35 PM Matt Waymack  wrote:

> I think that I can rule out network as I have multiple volumes on the same
> nodes and not all volumes are affected.  Additionally, access via SMB using
> samba-vfs-glusterfs is not affected, even on the same volumes.   This is
> seemingly only affecting the FUSE clients.
>
>
>
> *From:* Davide Obbi 
> *Sent:* Sunday, January 6, 2019 12:26 PM
> *To:* Raghavendra Gowdappa 
> *Cc:* Matt Waymack ; gluster-users@gluster.org List <
> gluster-users@gluster.org>
> *Subject:* Re: [External] Re: [Gluster-users] Input/output error on FUSE
> log
>
>
>
> Hi,
>
>
>
> I would start with some checks: "(Input/output error)" seems to be
> returned by the operating system; this happens, for instance, when trying
> to access a file system on a device that is not available. So I would
> check the network connectivity between the clients and servers, and
> between the servers, during the reported time.
>
>
>
> Regards
>
> Davide
>
>
>
> On Sun, Jan 6, 2019 at 3:32 AM Raghavendra Gowdappa 
> wrote:
>
>
>
>
>
> On Sun, Jan 6, 2019 at 7:58 AM Raghavendra Gowdappa 
> wrote:
>
>
>
>
>
> On Sun, Jan 6, 2019 at 4:19 AM Matt Waymack  wrote:
>
> Hi all,
>
>
>
> I'm having a problem writing to our volume.  When writing files larger
> than about 2GB, I get an intermittent issue where the write will fail and
> return Input/Output error.  This is also shown in the FUSE log of the
> client (this is affecting all clients).  A snip of a client log is below:
>
> [2019-01-05 22:39:44.581371] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51040978: WRITE => -1
> gfid=82a0b5c4-7ef3-43c2-ad86-41e16673d7c2 fd=0x7f949839a368 (Input/output
> error)
>
> [2019-01-05 22:39:44.598392] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51040979: FLUSH() ERR => -1 (Input/output error)
>
> [2019-01-05 22:39:47.420920] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51041266: WRITE => -1
> gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949809b7f8 (Input/output
> error)
>
> [2019-01-05 22:39:47.433377] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51041267: FLUSH() ERR => -1 (Input/output error)
>
> [2019-01-05 22:39:50.441531] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51041548: WRITE => -1
> gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949839a368 (Input/output
> error)
>
> [2019-01-05 22:39:50.451914] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51041549: FLUSH() ERR => -1 (Input/output error)
>
> The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search]
> 0-gv1-dht: no subvolume for hash (value) = 1311504267" repeated 1721 times
> between [2019-01-05 22:39:33.906241] and [2019-01-05 22:39:44.598371]
>
> The message "E [MSGID: 101046] [dht-common.c:1502:dht_lookup_dir_cbk]
> 0-gv1-dht: dict is null" repeated 1714 times between [2019-01-05
> 22:39:33.925981] and [2019-01-05 22:39:50.451862]
>
> The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search]
> 0-gv1-dht: no subvolume for hash (value) = 1137142622" repeated 1707 times
> between [2019-01-05 22:39:39.636552] and [2019-01-05 22:39:50.451895]
>
>
>
> This looks to be a DHT issue. Some questions:
>
> * Are all subvolumes of DHT up, and is the client connected to them?
> Particularly the subvolume which contains the file in question.
>
> * Can you get all extended attributes of the parent directory of the file
> from all bricks?
>
> * Set diagnostics.client-log-level to TRACE, capture these errors again,
> and attach the client log file.
>
>
>
> I spoke a bit too early. dht_writev doesn't search the hashed subvolume,
> as it has already been looked up in lookup. So these messages look to be a
> different issue, not the writev failure.
>
>
>
>
>
> This is intermittent for most files, but eventually, if a file is large
> enough, it will not write.  The workflow is SFTP to the client, which then
> writes to the volume over FUSE.  When files get to a certain point, we can
> no longer write to them.  The file sizes are different as well, so it's not
> like they all get to the same size and just stop either.  I've ruled out a
> free space issue: our files at their largest are only a few hundred GB, and
> we have tens of terabytes free on each brick.  We are also sharding at 1GB.
>
>
>
> I'm not sure where to go from here, as the error seems vague and I can only
> see it in the client log.  I'm not seeing these errors on the nodes
> themselves.  This is also seen if I mount the volume via FUSE on any of the
> nodes, and it is only reflected in the FUSE log.
>
>
>
> Here is the volume info:
>
> Volume Name: gv1
>
> Type: Distributed-Replicate
>
> Volume ID: 1472cc78-e2a0-4c3f-9571-dab840239b3c
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 8 x (2 + 1) = 24
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: tpc-glus4:/exp/b1/gv1
>
> Brick2: tpc-glus2:/exp/b1/gv1
>
> Brick3: tpc-arbiter1:/exp/b1/gv1 (arbiter)
>
> Brick4: 

Re: [Gluster-users] [External] Re: Input/output error on FUSE log

2019-01-07 Thread Davide Obbi
I guess you already tried unmounting, stopping/starting, and remounting?

On Mon, Jan 7, 2019 at 7:44 PM Matt Waymack  wrote:

> Yes, all volumes use sharding.
>
>
>
> *From:* Davide Obbi 
> *Sent:* Monday, January 7, 2019 12:43 PM
> *To:* Matt Waymack 
> *Cc:* Raghavendra Gowdappa ;
> gluster-users@gluster.org List 
> *Subject:* Re: [External] Re: [Gluster-users] Input/output error on FUSE
> log
>
>
>
> Are all the volumes configured with sharding?
>
>
>
> On Mon, Jan 7, 2019 at 5:35 PM Matt Waymack  wrote:
>
> I think that I can rule out network as I have multiple volumes on the same
> nodes and not all volumes are affected.  Additionally, access via SMB using
> samba-vfs-glusterfs is not affected, even on the same volumes.   This is
> seemingly only affecting the FUSE clients.
>
>
>
> *From:* Davide Obbi 
> *Sent:* Sunday, January 6, 2019 12:26 PM
> *To:* Raghavendra Gowdappa 
> *Cc:* Matt Waymack ; gluster-users@gluster.org List <
> gluster-users@gluster.org>
> *Subject:* Re: [External] Re: [Gluster-users] Input/output error on FUSE
> log
>
>
>
> Hi,
>
>
>
> I would start with some checks: "(Input/output error)" seems to be
> returned by the operating system; this happens, for instance, when trying
> to access a file system on a device that is not available. So I would
> check the network connectivity between the clients and servers, and
> between the servers, during the reported time.
>
>
>
> Regards
>
> Davide
>
>
>
> On Sun, Jan 6, 2019 at 3:32 AM Raghavendra Gowdappa 
> wrote:
>
>
>
>
>
> On Sun, Jan 6, 2019 at 7:58 AM Raghavendra Gowdappa 
> wrote:
>
>
>
>
>
> On Sun, Jan 6, 2019 at 4:19 AM Matt Waymack  wrote:
>
> Hi all,
>
>
>
> I'm having a problem writing to our volume.  When writing files larger
> than about 2GB, I get an intermittent issue where the write will fail and
> return Input/Output error.  This is also shown in the FUSE log of the
> client (this is affecting all clients).  A snip of a client log is below:
>
> [2019-01-05 22:39:44.581371] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51040978: WRITE => -1
> gfid=82a0b5c4-7ef3-43c2-ad86-41e16673d7c2 fd=0x7f949839a368 (Input/output
> error)
>
> [2019-01-05 22:39:44.598392] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51040979: FLUSH() ERR => -1 (Input/output error)
>
> [2019-01-05 22:39:47.420920] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51041266: WRITE => -1
> gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949809b7f8 (Input/output
> error)
>
> [2019-01-05 22:39:47.433377] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51041267: FLUSH() ERR => -1 (Input/output error)
>
> [2019-01-05 22:39:50.441531] W [fuse-bridge.c:2474:fuse_writev_cbk]
> 0-glusterfs-fuse: 51041548: WRITE => -1
> gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949839a368 (Input/output
> error)
>
> [2019-01-05 22:39:50.451914] W [fuse-bridge.c:1441:fuse_err_cbk]
> 0-glusterfs-fuse: 51041549: FLUSH() ERR => -1 (Input/output error)
>
> The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search]
> 0-gv1-dht: no subvolume for hash (value) = 1311504267" repeated 1721 times
> between [2019-01-05 22:39:33.906241] and [2019-01-05 22:39:44.598371]
>
> The message "E [MSGID: 101046] [dht-common.c:1502:dht_lookup_dir_cbk]
> 0-gv1-dht: dict is null" repeated 1714 times between [2019-01-05
> 22:39:33.925981] and [2019-01-05 22:39:50.451862]
>
> The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search]
> 0-gv1-dht: no subvolume for hash (value) = 1137142622" repeated 1707 times
> between [2019-01-05 22:39:39.636552] and [2019-01-05 22:39:50.451895]
>
>
>
> This looks to be a DHT issue. Some questions:
>
> * Are all subvolumes of DHT up, and is the client connected to them?
> Particularly the subvolume which contains the file in question.
>
> * Can you get all extended attributes of the parent directory of the file
> from all bricks?
>
> * Set diagnostics.client-log-level to TRACE, capture these errors again,
> and attach the client log file.
>
>
>
> I spoke a bit too early. dht_writev doesn't search the hashed subvolume,
> as it has already been looked up in lookup. So these messages look to be a
> different issue, not the writev failure.
>
>
>
>
>
> This is intermittent for most files, but eventually, if a file is large
> enough, it will not write.  The workflow is SFTP to the client, which then
> writes to the volume over FUSE.  When files get to a certain point, we can
> no longer write to them.  The file sizes are different as well, so it's not
> like they all get to the same size and just stop either.  I've ruled out a
> free space issue: our files at their largest are only a few hundred GB, and
> we have tens of terabytes free on each brick.  We are also sharding at 1GB.
>
>
>
> I'm not sure where to go from here, as the error seems vague and I can only
> see it in the client log.  I'm not seeing these errors on the nodes
> themselves.  This is also seen if I mount the volume via FUSE on any of the
> nodes, and it is only reflected in the 

Re: [Gluster-users] [External] Re: Input/output error on FUSE log

2019-01-07 Thread Matt Waymack
Yes, all volumes use sharding.
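
For completeness, shard settings can be confirmed per volume; a quick
sketch, assuming the gv1 volume from this thread:

  gluster volume get gv1 features.shard
  gluster volume get gv1 features.shard-block-size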

From: Davide Obbi 
Sent: Monday, January 7, 2019 12:43 PM
To: Matt Waymack 
Cc: Raghavendra Gowdappa ; gluster-users@gluster.org List 

Subject: Re: [External] Re: [Gluster-users] Input/output error on FUSE log

Are all the volumes configured with sharding?

On Mon, Jan 7, 2019 at 5:35 PM Matt Waymack 
mailto:mwaym...@nsgdv.com>> wrote:
I think that I can rule out network as I have multiple volumes on the same 
nodes and not all volumes are affected.  Additionally, access via SMB using 
samba-vfs-glusterfs is not affected, even on the same volumes.   This is 
seemingly only affecting the FUSE clients.

From: Davide Obbi mailto:davide.o...@booking.com>>
Sent: Sunday, January 6, 2019 12:26 PM
To: Raghavendra Gowdappa mailto:rgowd...@redhat.com>>
Cc: Matt Waymack mailto:mwaym...@nsgdv.com>>; 
gluster-users@gluster.org List 
mailto:gluster-users@gluster.org>>
Subject: Re: [External] Re: [Gluster-users] Input/output error on FUSE log

Hi,

I would start with some checks: "(Input/output error)" seems to be returned by
the operating system; this happens, for instance, when trying to access a file
system on a device that is not available. So I would check the network
connectivity between the clients and servers, and between the servers, during
the reported time.

Regards
Davide

On Sun, Jan 6, 2019 at 3:32 AM Raghavendra Gowdappa 
mailto:rgowd...@redhat.com>> wrote:


On Sun, Jan 6, 2019 at 7:58 AM Raghavendra Gowdappa 
mailto:rgowd...@redhat.com>> wrote:


On Sun, Jan 6, 2019 at 4:19 AM Matt Waymack 
mailto:mwaym...@nsgdv.com>> wrote:

Hi all,



I'm having a problem writing to our volume.  When writing files larger than 
about 2GB, I get an intermittent issue where the write will fail and return 
Input/Output error.  This is also shown in the FUSE log of the client (this is 
affecting all clients).  A snip of a client log is below:

[2019-01-05 22:39:44.581371] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51040978: WRITE => -1 
gfid=82a0b5c4-7ef3-43c2-ad86-41e16673d7c2 fd=0x7f949839a368 (Input/output error)

[2019-01-05 22:39:44.598392] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51040979: FLUSH() ERR => -1 (Input/output error)

[2019-01-05 22:39:47.420920] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51041266: WRITE => -1 
gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949809b7f8 (Input/output error)

[2019-01-05 22:39:47.433377] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51041267: FLUSH() ERR => -1 (Input/output error)

[2019-01-05 22:39:50.441531] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51041548: WRITE => -1 
gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949839a368 (Input/output error)

[2019-01-05 22:39:50.451914] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51041549: FLUSH() ERR => -1 (Input/output error)

The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search] 0-gv1-dht: 
no subvolume for hash (value) = 1311504267" repeated 1721 times between 
[2019-01-05 22:39:33.906241] and [2019-01-05 22:39:44.598371]

The message "E [MSGID: 101046] [dht-common.c:1502:dht_lookup_dir_cbk] 
0-gv1-dht: dict is null" repeated 1714 times between [2019-01-05 
22:39:33.925981] and [2019-01-05 22:39:50.451862]

The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search] 0-gv1-dht: 
no subvolume for hash (value) = 1137142622" repeated 1707 times between 
[2019-01-05 22:39:39.636552] and [2019-01-05 22:39:50.451895]

This looks to be a DHT issue. Some questions:
* Are all subvolumes of DHT up, and is the client connected to them?
Particularly the subvolume which contains the file in question.
* Can you get all extended attributes of the parent directory of the file from
all bricks?
* Set diagnostics.client-log-level to TRACE, capture these errors again, and
attach the client log file.

I spoke a bit too early. dht_writev doesn't search the hashed subvolume, as it
has already been looked up in lookup. So these messages look to be a different
issue, not the writev failure.


This is intermittent for most files, but eventually, if a file is large enough,
it will not write.  The workflow is SFTP to the client, which then writes to the
volume over FUSE.  When files get to a certain point, we can no longer write to
them.  The file sizes are different as well, so it's not like they all get to
the same size and just stop either.  I've ruled out a free space issue: our
files at their largest are only a few hundred GB, and we have tens of terabytes
free on each brick.  We are also sharding at 1GB.

I'm not sure where to go from here, as the error seems vague and I can only see
it in the client log.  I'm not seeing these errors on the nodes themselves.
This is also seen if I mount the volume via FUSE on any of the nodes, and it is
only reflected in the FUSE log.

Here is the volume info:
Volume Name: gv1
Type: Distributed-Replicate
Volume ID: 1472cc78-e2a0-4c3f-9571-dab840239b3c

Re: [Gluster-users] java application crashes while reading a zip file

2019-01-07 Thread Dmitry Isakbayev
This system is going into production.  I will try to replicate this problem
on the next installation.

On Wed, Jan 2, 2019 at 9:25 PM Raghavendra Gowdappa 
wrote:

>
>
> On Wed, Jan 2, 2019 at 9:59 PM Dmitry Isakbayev  wrote:
>
>> Still no JVM crashes.  Is it possible that running glusterfs with
>> performance options turned off for a couple of days cleared out the "stale
>> metadata issue"?
>>
>
> Restarting these options would've cleared the existing cache, and hence the
> previous stale metadata would've been cleared. Hitting stale metadata again
> depends on races; that might be the reason you are still not seeing the
> issue. Can you try enabling all perf xlators (the default configuration)?
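
One way to do that is to reset each option to its default; a minimal
sketch, assuming the gv0 volume from this thread:

  # "gluster volume reset" returns an option to its default value
  gluster volume reset gv0 performance.quick-read
  gluster volume reset gv0 performance.stat-prefetch
  gluster volume reset gv0 performance.io-cache
  gluster volume reset gv0 performance.parallel-readdir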
>
>
>>
>> On Mon, Dec 31, 2018 at 1:38 PM Dmitry Isakbayev 
>> wrote:
>>
>>> The software ran with all of the options turned off over the weekend
>>> without any problems.
>>> I will try to collect the debug info for you.  I have re-enabled the
>>> three options, but have yet to see the problem recur.
>>>
>>>
>>> On Sat, Dec 29, 2018 at 6:46 PM Raghavendra Gowdappa <
>>> rgowd...@redhat.com> wrote:
>>>
 Thanks Dmitry. Can you provide the following debug info I asked earlier:

 * strace -ff -v ... of java application
 * dump of the I/O traffic seen by the mountpoint (use --dump-fuse while
 mounting).
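
 A sketch of how those captures might be gathered (the application command
 and paths are illustrative):

   # trace the java application, following forks, with verbose decoding;
   # writes one /tmp/app.strace.<pid> file per process
   strace -ff -v -o /tmp/app.strace java -jar app.jar
   # mount the volume with FUSE traffic dumping enabled
   glusterfs --volfile-server=server1 --volfile-id=gv0 \
             --dump-fuse=/tmp/fuse.dump /mnt/gv0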

 regards,
 Raghavendra

 On Sat, Dec 29, 2018 at 2:08 AM Dmitry Isakbayev 
 wrote:

> These 3 options seem to trigger both problems (reading a zip file and
> renaming files).
>
> Options Reconfigured:
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.quick-read: off
> performance.parallel-readdir: off
> *performance.readdir-ahead: on*
> *performance.write-behind: on*
> *performance.read-ahead: on*
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
>
>
> On Fri, Dec 28, 2018 at 10:24 AM Dmitry Isakbayev 
> wrote:
>
>> Turning a single option on at a time still worked fine.  I will keep
>> trying.
>>
>> We had used 4.1.5 on KVM/CentOS 7.5 at AWS without these issues or log
>> messages.  Do you suppose these issues are triggered by the new
>> environment, or did they not exist in 4.1.5?
>>
>> [root@node1 ~]# glusterfs --version
>> glusterfs 4.1.5
>>
>> On AWS using
>> [root@node1 ~]# hostnamectl
>>Static hostname: node1
>>  Icon name: computer-vm
>>Chassis: vm
>> Machine ID: b30d0f2110ac3807b210c19ede3ce88f
>>Boot ID: 52bb159a0aa94043a40e7c7651967bd9
>> Virtualization: kvm
>>   Operating System: CentOS Linux 7 (Core)
>>CPE OS Name: cpe:/o:centos:centos:7
>> Kernel: Linux 3.10.0-862.3.2.el7.x86_64
>>   Architecture: x86-64
>>
>>
>>
>>
>> On Fri, Dec 28, 2018 at 8:56 AM Raghavendra Gowdappa <
>> rgowd...@redhat.com> wrote:
>>
>>>
>>>
>>> On Fri, Dec 28, 2018 at 7:23 PM Dmitry Isakbayev 
>>> wrote:
>>>
 Ok. I will try different options.

 This system is scheduled to go into production soon.  What version
 would you recommend to roll back to?

>>>
>>> These are long-standing issues, so rolling back may not make them go
>>> away. Instead, if the performance is agreeable to you, please keep these
>>> xlators off in production.
>>>
>>>
 On Thu, Dec 27, 2018 at 10:55 PM Raghavendra Gowdappa <
 rgowd...@redhat.com> wrote:

>
>
> On Fri, Dec 28, 2018 at 3:13 AM Dmitry Isakbayev <
> isak...@gmail.com> wrote:
>
>> Raghavendra,
>>
>> Thanks for the suggestion.
>>
>>
>> I am using
>>
>> [root@jl-fanexoss1p glusterfs]# gluster --version
>> glusterfs 5.0
>>
>> On
>> [root@jl-fanexoss1p glusterfs]# hostnamectl
>>  Icon name: computer-vm
>>Chassis: vm
>> Machine ID: e44b8478ef7a467d98363614f4e50535
>>Boot ID: eed98992fdda4c88bdd459a89101766b
>> Virtualization: vmware
>>   Operating System: Red Hat Enterprise Linux Server 7.5 (Maipo)
>>CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:GA:server
>> Kernel: Linux 3.10.0-862.14.4.el7.x86_64
>>   Architecture: x86-64
>>
>>
>> I have configured the following options
>>
>> [root@jl-fanexoss1p glusterfs]# gluster volume info
>> Volume Name: gv0
>> Type: Replicate
>> Volume ID: 5ffbda09-c5e2-4abc-b89e-79b5d8a40824
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 

Re: [Gluster-users] [External] Re: Input/output error on FUSE log

2019-01-07 Thread Matt Waymack
I think that I can rule out network as I have multiple volumes on the same 
nodes and not all volumes are affected.  Additionally, access via SMB using 
samba-vfs-glusterfs is not affected, even on the same volumes.   This is 
seemingly only affecting the FUSE clients.

From: Davide Obbi 
Sent: Sunday, January 6, 2019 12:26 PM
To: Raghavendra Gowdappa 
Cc: Matt Waymack ; gluster-users@gluster.org List 

Subject: Re: [External] Re: [Gluster-users] Input/output error on FUSE log

Hi,

I would start with some checks: "(Input/output error)" seems to be returned by
the operating system; this happens, for instance, when trying to access a file
system on a device that is not available. So I would check the network
connectivity between the clients and servers, and between the servers, during
the reported time.

Regards
Davide

On Sun, Jan 6, 2019 at 3:32 AM Raghavendra Gowdappa 
mailto:rgowd...@redhat.com>> wrote:


On Sun, Jan 6, 2019 at 7:58 AM Raghavendra Gowdappa 
mailto:rgowd...@redhat.com>> wrote:


On Sun, Jan 6, 2019 at 4:19 AM Matt Waymack 
mailto:mwaym...@nsgdv.com>> wrote:

Hi all,



I'm having a problem writing to our volume.  When writing files larger than 
about 2GB, I get an intermittent issue where the write will fail and return 
Input/Output error.  This is also shown in the FUSE log of the client (this is 
affecting all clients).  A snip of a client log is below:

[2019-01-05 22:39:44.581371] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51040978: WRITE => -1 
gfid=82a0b5c4-7ef3-43c2-ad86-41e16673d7c2 fd=0x7f949839a368 (Input/output error)

[2019-01-05 22:39:44.598392] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51040979: FLUSH() ERR => -1 (Input/output error)

[2019-01-05 22:39:47.420920] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51041266: WRITE => -1 
gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949809b7f8 (Input/output error)

[2019-01-05 22:39:47.433377] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51041267: FLUSH() ERR => -1 (Input/output error)

[2019-01-05 22:39:50.441531] W [fuse-bridge.c:2474:fuse_writev_cbk] 
0-glusterfs-fuse: 51041548: WRITE => -1 
gfid=0e8e1e13-97a5-478a-bc58-e81ddf3698a3 fd=0x7f949839a368 (Input/output error)

[2019-01-05 22:39:50.451914] W [fuse-bridge.c:1441:fuse_err_cbk] 
0-glusterfs-fuse: 51041549: FLUSH() ERR => -1 (Input/output error)

The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search] 0-gv1-dht: 
no subvolume for hash (value) = 1311504267" repeated 1721 times between 
[2019-01-05 22:39:33.906241] and [2019-01-05 22:39:44.598371]

The message "E [MSGID: 101046] [dht-common.c:1502:dht_lookup_dir_cbk] 
0-gv1-dht: dict is null" repeated 1714 times between [2019-01-05 
22:39:33.925981] and [2019-01-05 22:39:50.451862]

The message "W [MSGID: 109011] [dht-layout.c:163:dht_layout_search] 0-gv1-dht: 
no subvolume for hash (value) = 1137142622" repeated 1707 times between 
[2019-01-05 22:39:39.636552] and [2019-01-05 22:39:50.451895]

This looks to be a DHT issue. Some questions:
* Are all subvolumes of DHT up, and is the client connected to them?
Particularly the subvolume which contains the file in question.
* Can you get all extended attributes of the parent directory of the file from
all bricks?
* Set diagnostics.client-log-level to TRACE, capture these errors again, and
attach the client log file.

I spoke a bit too early. dht_writev doesn't search the hashed subvolume, as it
has already been looked up in lookup. So these messages look to be a different
issue, not the writev failure.


This is intermittent for most files, but eventually, if a file is large enough,
it will not write.  The workflow is SFTP to the client, which then writes to the
volume over FUSE.  When files get to a certain point, we can no longer write to
them.  The file sizes are different as well, so it's not like they all get to
the same size and just stop either.  I've ruled out a free space issue: our
files at their largest are only a few hundred GB, and we have tens of terabytes
free on each brick.  We are also sharding at 1GB.

I'm not sure where to go from here, as the error seems vague and I can only see
it in the client log.  I'm not seeing these errors on the nodes themselves.
This is also seen if I mount the volume via FUSE on any of the nodes, and it is
only reflected in the FUSE log.

Here is the volume info:
Volume Name: gv1
Type: Distributed-Replicate
Volume ID: 1472cc78-e2a0-4c3f-9571-dab840239b3c
Status: Started
Snapshot Count: 0
Number of Bricks: 8 x (2 + 1) = 24
Transport-type: tcp
Bricks:
Brick1: tpc-glus4:/exp/b1/gv1
Brick2: tpc-glus2:/exp/b1/gv1
Brick3: tpc-arbiter1:/exp/b1/gv1 (arbiter)
Brick4: tpc-glus2:/exp/b2/gv1
Brick5: tpc-glus4:/exp/b2/gv1
Brick6: tpc-arbiter1:/exp/b2/gv1 (arbiter)
Brick7: tpc-glus4:/exp/b3/gv1
Brick8: tpc-glus2:/exp/b3/gv1
Brick9: tpc-arbiter1:/exp/b3/gv1 (arbiter)
Brick10: tpc-glus4:/exp/b4/gv1
Brick11: tpc-glus2:/exp/b4/gv1
Brick12: tpc-arbiter1:/exp/b4/gv1 (arbiter)
Brick13: 

Re: [Gluster-users] update to 4.1.6-1 and fix-layout failing

2019-01-07 Thread Nithya Balachandran
On Fri, 4 Jan 2019 at 17:10, mohammad kashif  wrote:

> Hi Nithya
>
> the rebalance logs have only these warnings:
> [2019-01-04 09:59:20.826261] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-atlasglust-client-5: error returned while attempting to connect to host:(null), port:0
> [2019-01-04 09:59:20.828113] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-atlasglust-client-6: error returned while attempting to connect to host:(null), port:0
> [2019-01-04 09:59:20.832017] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-atlasglust-client-4: error returned while attempting to connect to host:(null), port:0
>

Please send me the rebalance logs if possible. Are 08 and 09 the newly
added nodes? Are no directories being created on those?
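
For reference, each node's rebalance log normally lives under
/var/log/glusterfs/; a quick way to pull recent errors and warnings,
assuming the default log location:

  grep -E '\] (E|W) \[' /var/log/glusterfs/atlasglust-rebalance.log | tail -n 50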

>
> gluster volume rebalance atlasglust status
>                            Node      status                  run time in h:m:s
>                       ---------      ----------------------  -----------------
>                       localhost      fix-layout in progress  1:0:59
>  pplxgluster02.physics.ox.ac.uk      fix-layout in progress  1:0:59
>  pplxgluster03.physics.ox.ac.uk      fix-layout in progress  1:0:59
>  pplxgluster04.physics.ox.ac.uk      fix-layout in progress  1:0:59
>  pplxgluster05.physics.ox.ac.uk      fix-layout in progress  1:0:59
>  pplxgluster06.physics.ox.ac.uk      fix-layout in progress  1:0:59
>  pplxgluster07.physics.ox.ac.uk      fix-layout in progress  1:0:59
>  pplxgluster08.physics.ox.ac.uk      fix-layout in progress  1:0:59
>  pplxgluster09.physics.ox.ac.uk      fix-layout in progress  1:0:59
>
> But there have been no new entries in the logs for the last hour, and I
> can't see any new directories being created.
>
> Thanks
>
> Kashif
>
>
> On Fri, Jan 4, 2019 at 10:42 AM Nithya Balachandran 
> wrote:
>
>>
>>
>> On Fri, 4 Jan 2019 at 15:48, mohammad kashif 
>> wrote:
>>
>>> Hi
>>>
>>> I have updated our distributed gluster storage from 3.12.9-1 to 4.1.6-1.
>>> The existing cluster had seven servers totalling around 450 TB; the OS is
>>> CentOS 7.  The update went OK and I could access files.
>>> Then I added two more servers of 90TB each to cluster and started
>>> fix-layout
>>>
>>> gluster volume rebalance atlasglust fix-layout start
>>>
>>> Some directories were created on the new servers, and then creation
>>> stopped, although rebalance status showed it was still running. I think it
>>> stopped creating new directories after this error:
>>>
>>> E [MSGID: 106061] [glusterd-utils.c:10697:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index
>>> The message "E [MSGID: 106061] [glusterd-utils.c:10697:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 7 times between [2019-01-03 13:16:31.146779] and [2019-01-03 13:16:31.158612]
>>>
>>>
>> There are also many warnings like this:
>>> [2019-01-03 16:04:34.120777] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume atlasglust
>>> [2019-01-03 17:04:28.541805] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-management: error returned while attempting to connect to host:(null), port:0
>>>
>>> These are the glusterd logs. Do you see any errors in the rebalance logs
>> for this volume?
>>
>>
>>> I waited for around 12 hours, then stopped fix-layout and started it
>>> again. I can see the same error again:
>>>
>>> [2019-01-04 09:59:20.825930] E [MSGID: 106061] [glusterd-utils.c:10697:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index
>>> The message "E [MSGID: 106061] [glusterd-utils.c:10697:glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index" repeated 7 times between [2019-01-04 09:59:20.825930] and [2019-01-04 09:59:20.837068]
>>>
>>> Please suggest as it is our production service.
>>>
>>> At the moment, I have stopped clients from using the file system. Would
>>> it be OK to allow clients to access the file system while fix-layout is
>>> still running?
>>>
>>> Thanks
>>>
>>> Kashif
>>>
>>>
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users