Re: [Gluster-users] Error logged in fuse-mount log file

2017-11-09 Thread Nithya Balachandran
Hi,

Comments inline.

Regards,
Nithya

On 9 November 2017 at 15:05, Amudhan Pandian  wrote:

> resending mail from another id, doubt on whether mail reaches mailing list.
>
>
> -- Forwarded message --
> From: *Amudhan P* 
> Date: Tue, Nov 7, 2017 at 6:43 PM
> Subject: error logged in fuse-mount log file
> To: Gluster Users 
>
>
> Hi,
>
> I am using glusterfs 3.10.1 and i am seeing below msg in fuse-mount log
> file.
>
> what does this error mean? should i worry about this and how do i resolve
> this?
>
> [2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed : 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
> [2017-11-07 11:59:17.218935] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies in /fol1/fol2/fol3/fol4/fol5 (gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb). Holes=1 overlaps=0
>


DHT found a problem in the layout of this directory, so it tries to recreate
and set the layout again.
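If you want to look at the layout yourself, each brick stores the directory's hash
range in the trusted.glusterfs.dht xattr. A minimal check (the brick path below is
only an example; substitute your own brick paths) is:

# run on every brick of the volume, against the brick-local copy of the directory
getfattr -n trusted.glusterfs.dht -e hex /bricks/brick1/fol1/fol2/fol3/fol4/fol5

A brick where this xattr is missing, or whose range does not line up with the
others, is usually the "hole" that dht_layout_normalize is reporting.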



> [2017-11-07 11:59:17.199917] W [MSGID: 122019]
> [ec-helpers.c:413:ec_loc_gfid_check] 0-glustervol-disperse-5: Mismatching
> GFID's in loc
>
>
There appears to be a GFID mismatch for this directory on one of the bricks
of this disperse set. Can you check the gfid xattr for this directory on
all the bricks of this set?
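For example (the brick path is only an example; run it against this directory on
every brick of the glustervol-disperse-5 set):

getfattr -n trusted.gfid -e hex /bricks/brick1/fol1/fol2/fol3/fol4/fol5

All bricks of the set should report the same hex value; the odd one out is the
brick with the mismatching GFID.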



> [2017-11-07 11:03:08.999769] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed : 1 subvolumes have unrecoverable errors. path = /sec_fol1/sec_fol2/sec_fol3/sec_fol4/sec_fol5, gfid = 59b9762e-f419-4d56-9fa2-eea9ebc055b2
> [2017-11-07 11:03:08.999749] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies in /sec_fol1/sec_fol2/sec_fol3/sec_fol4/sec_fol5 (gfid = 59b9762e-f419-4d56-9fa2-eea9ebc055b2). Holes=1 overlaps=0
> [2017-11-07 11:03:08.980275] W [MSGID: 122019]
> [ec-helpers.c:413:ec_loc_gfid_check] 0-glustervol-disperse-7: Mismatching
> GFID's in loc
>
>
> [2017-11-07 12:58:43.569801] I [MSGID: 109069] [dht-common.c:1324:dht_lookup_unlink_cbk] 0-glustervol-dht: lookup_unlink returned with op_ret -> 0 and op-errno -> 0 for /thi_fol1/thi_fol2/thi_fol3/thi_fol4/thi_fol5/thi_fol6/thi_file1
> [2017-11-07 12:58:43.528844] I [MSGID: 109045] [dht-common.c:2012:dht_lookup_everywhere_cbk] 0-glustervol-dht: attempting deletion of stale linkfile /thi_fol1/thi_fol2/thi_fol3/thi_fol4/thi_fol5/thi_fol6/thi_file1 on glustervol-disperse-77 (hashed subvol is glustervol-disperse-106)
>

 These can be ignored.


regards
> Amudhan P
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS healing questions

2017-11-09 Thread Serkan Çoban
Hi,

You can set disperse.shd-max-threads to 2 or 4 in order to make the heal
faster. This made my heal times 2-3x faster.
You can also play with disperse.self-heal-window-size to read more
bytes at a time, but I did not test it.
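For example, with <volname> as a placeholder for your volume name and <blocks>
for the window size you want to try:

gluster volume set <volname> disperse.shd-max-threads 4
gluster volume set <volname> disperse.self-heal-window-size <blocks>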

On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez  wrote:
> Hi Rolf,
>
> answers follow inline...
>
> On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen  wrote:
>>
>> Hi,
>>
>> We ran a test on GlusterFS 3.12.1 with erasurecoded volumes 8+2 with 10
>> bricks (default config,tested with 100gb, 200gb, 400gb bricksizes,10gbit
>> nics)
>>
>> 1.
>> Tests show that healing takes about double the time on healing 200gb vs
>> 100, and abit under the double on 400gb vs 200gb bricksizes. Is this
>> expected behaviour? In light of this would make 6,4 tb bricksizes use ~ 377
>> hours to heal.
>>
>> 100gb brick heal: 18 hours (8+2)
>> 200gb brick heal: 37 hours (8+2) +205%
>> 400gb brick heal: 59 hours (8+2) +159%
>>
>> Each 100gb is filled with 8 x 10mb files (200gb is 2x and 400gb is 4x)
>
>
> If I understand it correctly, you are storing 80.000 files of 10 MB each
> when you are using 100GB bricks, but you double this value for 200GB bricks
> (160.000 files of 10MB each). And for 400GB bricks you create 320.000 files.
> Have I understood it correctly ?
>
> If this is true, it's normal that twice the space requires approximately
> twice the heal time. The healing time depends on the contents of the brick,
> not brick size. The same amount of files should take the same healing time,
> whatever the brick size is.
>
>>
>>
>> 2.
>> Are there any possibility to show the progress of a heal? As per now we
>> run gluster volume heal volume info, but this exit's when a brick is done
>> healing and when we run heal info again the command contiunes showing gfid's
>> until the brick is done again. This gives quite a bad picture of the status
>> of a heal.
>
>
> The output of 'gluster volume heal  info' shows the list of files
> pending to be healed on each brick. The heal is complete when the list is
> empty. A faster alternative if you don't want to see the whole list of files
> is to use 'gluster volume heal  statistics heal-count'. This will
> only show the number of pending files on each brick.
>
> I don't know any other way to track progress of self-heal.
>
>>
>>
>> 3.
>> What kind of config tweaks is recommended for these kind of EC volumes?
>
>
> I usually use the following values (specific only for ec):
>
> client.event-threads 4
> server.event-threads 4
> performance.client-io-threads on
>
> Regards,
>
> Xavi
>
>
>
>>
>>
>>
>> $ gluster volume info
>> Volume Name: test-ec-100g
>> Type: Disperse
>> Volume ID: 0254281d-2f6e-4ac4-a773-2b8e0eb8ab27
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (8 + 2) = 10
>> Transport-type: tcp
>> Bricks:
>> Brick1: dn-304:/mnt/test-ec-100/brick
>> Brick2: dn-305:/mnt/test-ec-100/brick
>> Brick3: dn-306:/mnt/test-ec-100/brick
>> Brick4: dn-307:/mnt/test-ec-100/brick
>> Brick5: dn-308:/mnt/test-ec-100/brick
>> Brick6: dn-309:/mnt/test-ec-100/brick
>> Brick7: dn-310:/mnt/test-ec-100/brick
>> Brick8: dn-311:/mnt/test-ec-2/brick
>> Brick9: dn-312:/mnt/test-ec-100/brick
>> Brick10: dn-313:/mnt/test-ec-100/brick
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet
>>
>> Volume Name: test-ec-200
>> Type: Disperse
>> Volume ID: 2ce23e32-7086-49c5-bf0c-7612fd7b3d5d
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (8 + 2) = 10
>> Transport-type: tcp
>> Bricks:
>> Brick1: dn-304:/mnt/test-ec-200/brick
>> Brick2: dn-305:/mnt/test-ec-200/brick
>> Brick3: dn-306:/mnt/test-ec-200/brick
>> Brick4: dn-307:/mnt/test-ec-200/brick
>> Brick5: dn-308:/mnt/test-ec-200/brick
>> Brick6: dn-309:/mnt/test-ec-200/brick
>> Brick7: dn-310:/mnt/test-ec-200/brick
>> Brick8: dn-311:/mnt/test-ec-200_2/brick
>> Brick9: dn-312:/mnt/test-ec-200/brick
>> Brick10: dn-313:/mnt/test-ec-200/brick
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet
>>
>> Volume Name: test-ec-400
>> Type: Disperse
>> Volume ID: fe00713a-7099-404d-ba52-46c6b4b6ecc0
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (8 + 2) = 10
>> Transport-type: tcp
>> Bricks:
>> Brick1: dn-304:/mnt/test-ec-400/brick
>> Brick2: dn-305:/mnt/test-ec-400/brick
>> Brick3: dn-306:/mnt/test-ec-400/brick
>> Brick4: dn-307:/mnt/test-ec-400/brick
>> Brick5: dn-308:/mnt/test-ec-400/brick
>> Brick6: dn-309:/mnt/test-ec-400/brick
>> Brick7: dn-310:/mnt/test-ec-400/brick
>> Brick8: dn-311:/mnt/test-ec-400_2/brick
>> Brick9: dn-312:/mnt/test-ec-400/brick
>> Brick10: dn-313:/mnt/test-ec-400/brick
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet
>>
>> --
>>
>> Regards
>> Rolf Arne Larsen
>> Ops Engineer
>> r...@jottacloud.com
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>

Re: [Gluster-users] BUG: After stop and start wrong port is advertised

2017-11-09 Thread Mike Hulsman
Thanks, good to know. 

Met vriendelijke groet, 

Mike Hulsman 

Proxy Managed Services B.V. | www.proxy.nl | Enterprise IT-Infra, Open Source 
and Cloud Technology 
Delftweg 128 3043 NB Rotterdam The Netherlands | +31 10 307 0965 

> From: "Atin Mukherjee" 
> To: "Mike Hulsman" 
> Cc: "Jo Goossens" , "gluster-users"
> 
> Sent: Wednesday, November 8, 2017 2:12:02 PM
> Subject: Re: [Gluster-users] BUG: After stop and start wrong port is 
> advertised

> We've a fix in release-3.10 branch which is merged and should be available in
> the next 3.10 update.

> On Wed, Nov 8, 2017 at 4:58 PM, Mike Hulsman < mike.huls...@proxy.nl > wrote:

>> Hi,

>> This bug is hitting me hard on two different clients.
>> In RHGS 3.3 and on glusterfs 3.10.2 on Centos 7.4
>> in once case I had 59 differences in a total of 203 bricks.

>> I wrote a quick and dirty script to check all ports against the brick file 
>> and
>> the running process.
>> #!/bin/bash

>> Host=`uname -n | awk -F"." '{print $1}'`
>> GlusterVol=`ps -eaf | grep /usr/sbin/glusterfsd | grep -v grep | awk '{print $NF}' | awk -F"-server" '{print $1}' | sort | uniq`
>> Port=`ps -eaf | grep /usr/sbin/glusterfsd | grep -v grep | awk '{print $NF}' | awk -F"." '{print $NF}'`

>> for Volumes in ${GlusterVol};
>> do
>>   cd /var/lib/glusterd/vols/${Volumes}/bricks
>>   Bricks=`ls ${Host}*`
>>   for Brick in ${Bricks};
>>   do
>>     Onfile=`grep ^listen-port "${Brick}"`
>>     BrickDir=`echo "${Brick}" | awk -F":" '{print $2}' | cut -c2-`
>>     Daemon=`ps -eaf | grep "\${BrickDir}.pid" | grep -v grep | awk '{print $NF}' | awk -F"." '{print $2}'`
>>     #echo Onfile: ${Onfile}
>>     #echo Daemon: ${Daemon}
>>     if [ "${Onfile}" = "${Daemon}" ]; then
>>       echo "OK For ${Brick}"
>>     else
>>       echo "!!! NOT OK For ${Brick}"
>>     fi
>>   done
>> done

>> Met vriendelijke groet,

>> Mike Hulsman

>> Proxy Managed Services B.V. | www.proxy.nl | Enterprise IT-Infra, Open Source
>> and Cloud Technology
>> Delftweg 128 3043 NB Rotterdam The Netherlands | +31 10 307 0965

>>> From: "Jo Goossens" < jo.gooss...@hosted-power.com >
>>> To: "Atin Mukherjee" < amukh...@redhat.com >
>>> Cc: gluster-users@gluster.org
>>> Sent: Friday, October 27, 2017 11:06:35 PM
>>> Subject: Re: [Gluster-users] BUG: After stop and start wrong port is 
>>> advertised

>>> RE: [Gluster-users] BUG: After stop and start wrong port is advertised

>>> Hello Atin,

>>> I just read it and very happy you found the issue. We really hope this will 
>>> be
>>> fixed in the next 3.10.7 version!

>>> PS: Wow nice all that c code and those "goto out" statements (not always
>>> considered clean but the best way often I think). Can remember the days I 
>>> wrote
>>> kernel drivers myself in c :)

>>> Regards

>>> Jo Goossens

 -Original message-
 From: Atin Mukherjee < amukh...@redhat.com >
 Sent: Fri 27-10-2017 21:01
 Subject: Re: [Gluster-users] BUG: After stop and start wrong port is 
 advertised
 To: Jo Goossens < jo.gooss...@hosted-power.com >;
 CC: gluster-users@gluster.org ;
 We (finally) figured out the root cause, Jo!
 Patch https://review.gluster.org/#/c/18579 posted upstream for review.

 On Thu, Sep 21, 2017 at 2:08 PM, Jo Goossens < 
 jo.gooss...@hosted-power.com >
 wrote:

> Hi,

> We use glusterfs 3.10.5 on Debian 9.

> When we stop or restart the service, e.g.: service glusterfs-server 
> restart

> We see that the wrong port get's advertised afterwards. For example:

> Before restart:

> Status of volume: public
> Gluster process TCP Port RDMA Port Online Pid
> --
> Brick 192.168.140.41:/gluster/public 49153 0 Y 6364
> Brick 192.168.140.42:/gluster/public 49152 0 Y 1483
> Brick 192.168.140.43:/gluster/public 49152 0 Y 5913
> Self-heal Daemon on localhost N/A N/A Y 5932
> Self-heal Daemon on 192.168.140.42 N/A N/A Y 13084
> Self-heal Daemon on 192.168.140.41 N/A N/A Y 15499
> Task Status of Volume public
> --
> There are no active volume tasks
> After restart of the service on one of the nodes (192.168.140.43) the 
> port seems
> to have changed (but it didn't):
> root@app3:/var/log/glusterfs# gluster volume status
> Status of volume: public
> Gluster process TCP Port RDMA Port Online Pid
> --
> Brick 192.168.140.41:/gluster/public 49153 0 Y 6364
> Brick 192.168.140.42:/gluster/public 49152 0 Y 1483
> Brick 192.168.140.43:/gluster/public 49154 0 Y 5913
> Self-heal Daemon on localhost N/A N/A Y 4628
> Self-heal Daemon on 192.168.140.42 N/A N/A Y 3077
> Self-heal Daemon on 192.168.140.41 N/A N/A Y 28777
> Task Status of Volume public

[Gluster-users] Error logged in fuse-mount log file

2017-11-09 Thread Amudhan Pandian
Resending this mail from another ID; I am not sure whether my earlier mail reached the mailing list.


-- Forwarded message --
From: Amudhan P
Date: Tue, Nov 7, 2017 at 6:43 PM
Subject: error logged in fuse-mount log file
To: Gluster Users


Hi,

I am using glusterfs 3.10.1 and I am seeing the below messages in the fuse-mount log file.

What does this error mean? Should I worry about it, and how do I resolve it?

[2017-11-07 11:59:17.218973] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed : 1 subvolumes have unrecoverable errors. path = /fol1/fol2/fol3/fol4/fol5, gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb
[2017-11-07 11:59:17.218935] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies in /fol1/fol2/fol3/fol4/fol5 (gfid = 3f856ab3-f538-43ee-b408-53dd3da617fb). Holes=1 overlaps=0
[2017-11-07 11:59:17.199917] W [MSGID: 122019] [ec-helpers.c:413:ec_loc_gfid_check] 0-glustervol-disperse-5: Mismatching GFID's in loc

[2017-11-07 11:03:08.999769] W [MSGID: 109005] [dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory selfheal failed : 1 subvolumes have unrecoverable errors. path = /sec_fol1/sec_fol2/sec_fol3/sec_fol4/sec_fol5, gfid = 59b9762e-f419-4d56-9fa2-eea9ebc055b2
[2017-11-07 11:03:08.999749] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-glustervol-dht: Found anomalies in /sec_fol1/sec_fol2/sec_fol3/sec_fol4/sec_fol5 (gfid = 59b9762e-f419-4d56-9fa2-eea9ebc055b2). Holes=1 overlaps=0
[2017-11-07 11:03:08.980275] W [MSGID: 122019] [ec-helpers.c:413:ec_loc_gfid_check] 0-glustervol-disperse-7: Mismatching GFID's in loc


[2017-11-07 12:58:43.569801] I [MSGID: 109069] [dht-common.c:1324:dht_lookup_unlink_cbk] 0-glustervol-dht: lookup_unlink returned with op_ret -> 0 and op-errno -> 0 for /thi_fol1/thi_fol2/thi_fol3/thi_fol4/thi_fol5/thi_fol6/thi_file1
[2017-11-07 12:58:43.528844] I [MSGID: 109045] [dht-common.c:2012:dht_lookup_everywhere_cbk] 0-glustervol-dht: attempting deletion of stale linkfile /thi_fol1/thi_fol2/thi_fol3/thi_fol4/thi_fol5/thi_fol6/thi_file1 on glustervol-disperse-77 (hashed subvol is glustervol-disperse-106)

regards
Amudhan P

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS healing questions

2017-11-09 Thread Xavi Hernandez
Hi Rolf,

answers follow inline...

On Thu, Nov 9, 2017 at 3:20 PM, Rolf Larsen  wrote:

> Hi,
>
> We ran a test on GlusterFS 3.12.1 with erasurecoded volumes 8+2 with 10
> bricks (default config,tested with 100gb, 200gb, 400gb bricksizes,10gbit
> nics)
>
> 1.
> Tests show that healing takes about double the time on healing 200gb vs
> 100, and abit under the double on 400gb vs 200gb bricksizes. Is this
> expected behaviour? In light of this would make 6,4 tb bricksizes use ~ 377
> hours to heal.
>
> 100gb brick heal: 18 hours (8+2)
> 200gb brick heal: 37 hours (8+2) +205%
> 400gb brick heal: 59 hours (8+2) +159%
>
> Each 100gb is filled with 8 x 10mb files (200gb is 2x and 400gb is 4x)
>

If I understand it correctly, you are storing 80.000 files of 10 MB each
when you are using 100GB bricks, but you double this value for 200GB bricks
(160.000 files of 10MB each). And for 400GB bricks you create 320.000
files. Have I understood it correctly ?

If this is true, it's normal that twice the space requires approximately
twice the heal time. The healing time depends on the contents of the brick,
not brick size. The same amount of files should take the same healing time,
whatever the brick size is.


>
> 2.
> Are there any possibility to show the progress of a heal? As per now we
> run gluster volume heal volume info, but this exit's when a brick is done
> healing and when we run heal info again the command contiunes showing
> gfid's until the brick is done again. This gives quite a bad picture of the
> status of a heal.
>

The output of 'gluster volume heal  info' shows the list of files
pending to be healed on each brick. The heal is complete when the list is
empty. A faster alternative if you don't want to see the whole list of
files is to use 'gluster volume heal  statistics heal-count'. This
will only show the number of pending files on each brick.
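For example, with <volname> as a placeholder for your volume name:

gluster volume heal <volname> info
gluster volume heal <volname> statistics heal-count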

I don't know any other way to track progress of self-heal.


>
> 3.
> What kind of config tweaks is recommended for these kind of EC volumes?
>

I usually use the following values (specific only for ec):

client.event-threads 4
server.event-threads 4
performance.client-io-threads on
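These are applied with 'gluster volume set' (the volume name below is a placeholder):

gluster volume set <volname> client.event-threads 4
gluster volume set <volname> server.event-threads 4
gluster volume set <volname> performance.client-io-threads on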

Regards,

Xavi




>
>
> $ gluster volume info
> Volume Name: test-ec-100g
> Type: Disperse
> Volume ID: 0254281d-2f6e-4ac4-a773-2b8e0eb8ab27
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (8 + 2) = 10
> Transport-type: tcp
> Bricks:
> Brick1: dn-304:/mnt/test-ec-100/brick
> Brick2: dn-305:/mnt/test-ec-100/brick
> Brick3: dn-306:/mnt/test-ec-100/brick
> Brick4: dn-307:/mnt/test-ec-100/brick
> Brick5: dn-308:/mnt/test-ec-100/brick
> Brick6: dn-309:/mnt/test-ec-100/brick
> Brick7: dn-310:/mnt/test-ec-100/brick
> Brick8: dn-311:/mnt/test-ec-2/brick
> Brick9: dn-312:/mnt/test-ec-100/brick
> Brick10: dn-313:/mnt/test-ec-100/brick
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
>
> Volume Name: test-ec-200
> Type: Disperse
> Volume ID: 2ce23e32-7086-49c5-bf0c-7612fd7b3d5d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (8 + 2) = 10
> Transport-type: tcp
> Bricks:
> Brick1: dn-304:/mnt/test-ec-200/brick
> Brick2: dn-305:/mnt/test-ec-200/brick
> Brick3: dn-306:/mnt/test-ec-200/brick
> Brick4: dn-307:/mnt/test-ec-200/brick
> Brick5: dn-308:/mnt/test-ec-200/brick
> Brick6: dn-309:/mnt/test-ec-200/brick
> Brick7: dn-310:/mnt/test-ec-200/brick
> Brick8: dn-311:/mnt/test-ec-200_2/brick
> Brick9: dn-312:/mnt/test-ec-200/brick
> Brick10: dn-313:/mnt/test-ec-200/brick
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
>
> Volume Name: test-ec-400
> Type: Disperse
> Volume ID: fe00713a-7099-404d-ba52-46c6b4b6ecc0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (8 + 2) = 10
> Transport-type: tcp
> Bricks:
> Brick1: dn-304:/mnt/test-ec-400/brick
> Brick2: dn-305:/mnt/test-ec-400/brick
> Brick3: dn-306:/mnt/test-ec-400/brick
> Brick4: dn-307:/mnt/test-ec-400/brick
> Brick5: dn-308:/mnt/test-ec-400/brick
> Brick6: dn-309:/mnt/test-ec-400/brick
> Brick7: dn-310:/mnt/test-ec-400/brick
> Brick8: dn-311:/mnt/test-ec-400_2/brick
> Brick9: dn-312:/mnt/test-ec-400/brick
> Brick10: dn-313:/mnt/test-ec-400/brick
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
>
> --
>
> Regards
> Rolf Arne Larsen
> Ops Engineer
> r...@jottacloud.com 
> 
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS healing questions

2017-11-09 Thread Rolf Larsen
Hi,

We ran a test on GlusterFS 3.12.1 with erasure-coded 8+2 volumes with 10
bricks (default config, tested with 100 GB, 200 GB and 400 GB brick sizes,
10 Gbit NICs).

1.
Tests show that healing takes about double the time on healing 200gb vs
100, and abit under the double on 400gb vs 200gb bricksizes. Is this
expected behaviour? In light of this would make 6,4 tb bricksizes use ~ 377
hours to heal.

100gb brick heal: 18 hours (8+2)
200gb brick heal: 37 hours (8+2) +205%
400gb brick heal: 59 hours (8+2) +159%

Each 100gb is filled with 8 x 10mb files (200gb is 2x and 400gb is 4x)

2.
Are there any possibility to show the progress of a heal? As per now we run
gluster volume heal volume info, but this exit's when a brick is done
healing and when we run heal info again the command contiunes showing
gfid's until the brick is done again. This gives quite a bad picture of the
status of a heal.

3.
What kind of config tweaks is recommended for these kind of EC volumes?


$ gluster volume info
Volume Name: test-ec-100g
Type: Disperse
Volume ID: 0254281d-2f6e-4ac4-a773-2b8e0eb8ab27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 2) = 10
Transport-type: tcp
Bricks:
Brick1: dn-304:/mnt/test-ec-100/brick
Brick2: dn-305:/mnt/test-ec-100/brick
Brick3: dn-306:/mnt/test-ec-100/brick
Brick4: dn-307:/mnt/test-ec-100/brick
Brick5: dn-308:/mnt/test-ec-100/brick
Brick6: dn-309:/mnt/test-ec-100/brick
Brick7: dn-310:/mnt/test-ec-100/brick
Brick8: dn-311:/mnt/test-ec-2/brick
Brick9: dn-312:/mnt/test-ec-100/brick
Brick10: dn-313:/mnt/test-ec-100/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Volume Name: test-ec-200
Type: Disperse
Volume ID: 2ce23e32-7086-49c5-bf0c-7612fd7b3d5d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 2) = 10
Transport-type: tcp
Bricks:
Brick1: dn-304:/mnt/test-ec-200/brick
Brick2: dn-305:/mnt/test-ec-200/brick
Brick3: dn-306:/mnt/test-ec-200/brick
Brick4: dn-307:/mnt/test-ec-200/brick
Brick5: dn-308:/mnt/test-ec-200/brick
Brick6: dn-309:/mnt/test-ec-200/brick
Brick7: dn-310:/mnt/test-ec-200/brick
Brick8: dn-311:/mnt/test-ec-200_2/brick
Brick9: dn-312:/mnt/test-ec-200/brick
Brick10: dn-313:/mnt/test-ec-200/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

Volume Name: test-ec-400
Type: Disperse
Volume ID: fe00713a-7099-404d-ba52-46c6b4b6ecc0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 2) = 10
Transport-type: tcp
Bricks:
Brick1: dn-304:/mnt/test-ec-400/brick
Brick2: dn-305:/mnt/test-ec-400/brick
Brick3: dn-306:/mnt/test-ec-400/brick
Brick4: dn-307:/mnt/test-ec-400/brick
Brick5: dn-308:/mnt/test-ec-400/brick
Brick6: dn-309:/mnt/test-ec-400/brick
Brick7: dn-310:/mnt/test-ec-400/brick
Brick8: dn-311:/mnt/test-ec-400_2/brick
Brick9: dn-312:/mnt/test-ec-400/brick
Brick10: dn-313:/mnt/test-ec-400/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet

-- 

Regards
Rolf Arne Larsen
Ops Engineer
r...@jottacloud.com 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 回复: glusterfs segmentation fault in rdma mode

2017-11-09 Thread Mohammed Rafi K C
Hi,

For segfault problem,

Can you please give us more information, like the core dump as Ben
suggested, and/or log files? A reproducible method will also help.
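If you already have the core file, a backtrace extracted with gdb is usually a
good starting point (the binary and core paths below are only examples):

gdb /usr/sbin/glusterfs /path/to/core.<pid>
(gdb) bt
(gdb) thread apply all bt full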

For the problem with directory creation,

It looks like client A has some problem connecting to the hashed
subvolume.

Do you have a reproducer or more logs?

Regards

Rafi KC


On 11/06/2017 11:51 AM, acfreeman wrote:
> Hi, all
>
>  We found a strange problem. Some clients worked normally while some
> clients couldn't access special files. For example, Client A couldn't
> create the directory xxx, but Client B could. However, if Client B
> created the directory, Client A could access it and even delete it.
> But Client A still couldn't create the same directory later. If I
> changed the directory name, Client A worked without problems. It
> seemed that there were some problems with specific bricks for specific
> clients. But all the bricks were online.
>
> I saw this in the logs in the GlusterFS client after creating
> directory failure:
> [2017-11-06 11:55:18.420610] W [MSGID: 109011]
> [dht-layout.c:186:dht_layout_search] 0-data-dht: no subvolume for hash
> (value) = 4148753024
> [2017-11-06 11:55:18.457744] W [fuse-bridge.c:521:fuse_entry_cbk]
> 0-glusterfs-fuse: 488: MKDIR() /xxx => -1 (Input/output error)
> The message "W [MSGID: 109011] [dht-layout.c:186:dht_layout_search]
> 0-data-dht: no subvolume for hash (value) = 4148753024" repeated 3
> times between [2017-11-06 11:55:18.420610] and [2017-11-06
> 11:55:18.457731]
>
>
> -- Original Message --
> *From:* "Ben Turner";
> *Sent:* Sunday, November 5, 2017, 3:00 AM
> *To:* "acfreeman" <21291...@qq.com>;
> *Cc:* "gluster-users";
> *Subject:* Re: [Gluster-users] glusterfs segmentation fault in rdma mode
>
> This looks like there could be some problem requesting / leaking
> / whatever memory, but without looking at the core it's tough to tell
> for sure. Note:
>
> /usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618]
>
> Can you open up a bugzilla and get us the core file to review?
>
> -b
>
> - Original Message -
> > From: "自由人" <21291...@qq.com>
> > To: "gluster-users" 
> > Sent: Saturday, November 4, 2017 5:27:50 AM
> > Subject: [Gluster-users] glusterfs segmentation fault in rdma mode
> >
> >
> >
> > Hi, All,
> >
> >
> >
> >
> > I used Infiniband to connect all GlusterFS nodes and the clients.
> Previously
> > I run IP over IB and everything was OK. Now I used rdma transport mode
> > instead. And then I ran the traffic. After I while, the glusterfs
> process
> > exited because of segmentation fault.
> >
> >
> >
> >
> > Here were the messages when I saw segmentation fault:
> >
> > pending frames:
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(1) op(WRITE)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > frame : type(0) op(0)
> >
> > patchset: git:// git.gluster.org/glusterfs.git
> >
> > signal received: 11
> >
> > time of crash:
> >
> > 2017-11-01 11:11:23
> >
> > configuration details:
> >
> > argp 1
> >
> > backtrace 1
> >
> > dlfcn 1
> >
> > libpthread 1
> >
> > llistxattr 1
> >
> > setfsid 1
> >
> > spinlock 1
> >
> > epoll.h 1
> >
> > xattr.h 1
> >
> > st_atim.tv_nsec 1
> >
> > package-string: glusterfs 3.11.0
> >
> > /usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618]
> >
> > /usr/lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7f95bc557834]
> >
> > /lib64/libc.so.6(+0x32510)[0x7f95bace2510]
> >
> > The client OS was CentOS 7.3. The server OS was CentOS 6.5. The
> GlusterFS
> > version was 3.11.0 both in clients and servers. The Infiniband card was
> > Mellanox. The Mellanox IB driver version was v4.1-1.0.2 (27 Jun
> 2017) both
> > in clients and servers.
> >
> >
> > Is rdma code stable for GlusterFS? Need I upgrade the IB driver or
> apply a
> > patch?
> >
> > Thanks!
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list

Re: [Gluster-users] Fwd: Ignore failed connection messages during copying files with tiering

2017-11-09 Thread Hari Gowtham
Hi Paul,

We need the log messages to help you further with this. Can you get us
all the gluster logs and the output of 'gluster volume status' and
'gluster volume info'?
Also, when the copy process seems to be suspended, I need the volume
status at that time.
Then I would like to know what type of workload the machine is put through.
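For reference, the outputs being asked for can be collected with the commands
below (the volume name is a placeholder; the tier subcommand is relevant here
only because this is a tiered volume, and assumes it is present in your build):

gluster volume status <volname>
gluster volume status <volname> detail
gluster volume info <volname>
gluster volume tier <volname> status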


On Sat, Nov 4, 2017 at 5:47 PM, Paul  wrote:
> Hi,
>
>
> We create a GlusterFS cluster with tiers. The hot tier is
> distributed-replicated SSDs. The cold tier is a n*(6+2) disperse volume.
> When copy millions of files to the cluster, we find these logs:
>
>
> W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
> on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
> or directory)
> W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
> on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
> or directory)
> W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
> on /var/run/gluster/39668fb028de4b1bb6f4880e7450c064.socket, (No such file
> or directory)
> …
>
> And then the copy process seems to be suspended. The command df hangs in the
> client machine. But if I restart glusterd, then the copy process continues.
> However, several minutes later the problems happens again. Later we find the
> problem seems to happen when creating directories.
>
> The GlusterFS version is 3.11.0. Does anyone knows what’s the problem? Is it
> related to tiering?
>
> Thanks,
> Paul
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Regards,
Hari Gowtham.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread Erekle Magradze

Great idea to have the slack channel

+1

Cheers

Erekle


On 11/09/2017 08:59 AM, Martin Toth wrote:

@Amye +1 for this great Idea, I am 100% for it.
@Vijay for archiving purposes maybe it will be possible to use free 
service as https://slackarchive.io/


BR,
Martin

On 9 Nov 2017, at 00:09, Vijay Bellur > wrote:




On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda > wrote:


From today's community meeting, we had an item from the issue queue:
https://github.com/gluster/community/issues/13


Should we have a Gluster Community slack team? I'm interested in
everyone's thoughts on this.



+1 to the idea.

One of the limitations I have encountered in a few slack channels is 
the lack of archiving & hence limited search capabilities. If we 
establish a gluster channel, what would be the archiving strategy?


Thanks,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread Ravishankar N



On 11/09/2017 09:05 AM, Sam McLeod wrote:


On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda > wrote:


From today's community meeting, we had an item from the issue queue:
https://github.com/gluster/community/issues/13


Should we have a Gluster Community slack team? I'm interested in
everyone's thoughts on this.



As fancy as Slack is, I really don't like how dependant so many 
projects have become on it, it's not open source, the desktop client 
is dreadful (heavy javascript webframe), you're at the mercy of the 
companies decisions around the direction of their 'black box' software 
platform etc...


+1.  If I am not online on IRC, I am not 'ping-able' at that point in 
time. From what I understand, slack takes that away by being able to 
send offline messages. We already have email for that. For people like 
me who get anxious until I hit reply to an email that is addressed to 
me, slack doesn't sound all that fun. Searching IRC archives on a need 
basis works just fine. On a lighter note, 
https://twitter.com/iamdevloper/status/926458505355235328 :-)


Cheers,
Ravi



Personally I've found that discourse is a fantastic platform for 
project discussion, help and ideas: https://discourse.org


and for chat I've found that if IRC + a good web frontend for 
history/search isn't enough using either Mattermost 
(https://about.mattermost.com/) or Rocket Chat (https://rocket.chat/) 
has been very successful.


Just my 2c and I'll be happy to be a part of the community no matter 
where it ends up.


--
Sam McLeod
https://smcleod.net
https://twitter.com/s_mcleod


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread lemonnierk
> 
> and for chat I've found that if IRC + a good web frontend for history/search 
> isn't enough using either Mattermost (https://about.mattermost.com/ 
> ) or Rocket Chat (https://rocket.chat/ 
> ) has been very successful.
> 

+1 for Rocket.Chat, we've switched to that when the team started asking
about slack (and I just never want to hear about that for us) and
everyone is very happy with it.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] file shred

2017-11-09 Thread Kingsley Tart
> When I strace `shred filename`, it just seems to write + fsync random 
> values into the file based on some seed from /dev/urandom. Since these 
> are normal syscalls, and since gluster just sends these FOPS to the 
> bricks, it should work fine.
> All caveats listed in `man shred` for the on-disk file system (XFS) 
> still apply.

Thanks Ravi.

-- 
Cheers,
Kingsley.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] [Gluster-devel] Poor performance of block-store with RDMA

2017-11-09 Thread Ji-Hyeon Gim
Hi Kalever!

First of all, I really appreciate your test results for
block-store(https://github.com/pkalever/iozone_results_gluster/tree/master/block-store)
:-)

My teammate and I tested block-store (glfs backstore with tcmu-runner),
but we ran into a performance problem.

We tested some cases with one server that has an RDMA volume and one client
that is connected to the same RDMA network.

The two machines have the same environment, as below.

- Distro : CentOS 6.9
- Kernel : 4.12.9
- GlusterFS : 3.10.5
- tcmu-runner : 1.2.0
- iscsi-initiator-utils : 6.2.0.873

and these are results from test.

1st. FILEIO on FUSE mounted - 333MB/sec
2nd. glfs user backstore - 140MB/sec
3rd. FILEIO on FUSE mounted with tgtd - 235MB/sec
4th. glfs user backstore with tgtd - 220MB/sec

5th. FILEIO on FUSE mounted (iSER) - 643MB/sec
6th. glfs user backstore (iSER) - 149MB/sec
7th. FILEIO on FUSE mounted with tgtd (iSER) - 677MB/sec
8th. glfs user backstore with tgtd(iSER) - 535MB/sec

Every test was done with the dd command.
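The exact dd parameters are not shown here; a representative sequential-write run
of the kind described (block size, count and target path below are assumptions,
not the values used in these tests) would be:

dd if=/dev/zero of=/mnt/testmount/ddfile bs=1M count=4096 oflag=direct
# or of=/dev/sdX (the iSCSI-attached LUN) for the backstore tests -- this overwrites the device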

As shown above, the 6th test showed results similar to the 2nd test.

IMHO, it may be caused by tcmu-runner itself or by the glfs backstore handler,
so I will run similar tests with other handlers (like qcow) in order to
narrow this down.

If there's anything I missed, Could you give me some tips for resolving
this issue? :-)


Best regards.

--

Ji-Hyeon Gim
Research Engineer, Gluesys

Address. Gluesys R Center, 5F, 11-31, Simin-daero 327beon-gil,
 Dongan-gu, Anyang-si,
 Gyeonggi-do, Korea
 (14055)
Phone.   +82-70-8787-1053
Fax. +82-31-388-3261
Mobile.  +82-10-7293-8858
E-Mail.  potato...@potatogim.net
Website. www.potatogim.net

The time I wasted today is the tomorrow the dead man was eager to see
yesterday.
  - Sophocles



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread Niels de Vos
On Wed, Nov 08, 2017 at 04:15:17PM -0800, Amye Scavarda wrote:
> On Wed, Nov 8, 2017 at 3:23 PM, Jim Kinney  wrote:
> > The archival process of the mailing list makes searching for past issues
> > possible. Slack, and irc in general, is a more closed garden than a public
> > archived mailing list.
> >
> > That said, irc/slack is good for immediate interaction between people, say,
> > gluster user with a nightmare and a knowledgeable developer with deep
> > understanding and willingness to assist.
> >
> > If there's a way to make a debug/help/fix session publicly available, and
> > crucially, referenced in the mailing list archive, then irc/slack is a great
> > additional communication channel.
> 
> So at the moment, we do have the logs from IRC made public.

That reminds me, we should probably advertise that a little more. The
current community page on gluster.org could link to the logs and a
web-client for irc.

I've filed an issue for that with some examples and links:
  https://github.com/gluster/glusterweb/issues/168

Thanks,
Niels


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread Sam McLeod

> On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda  > wrote:
> From today's community meeting, we had an item from the issue queue:
> https://github.com/gluster/community/issues/13 
> 
> 
> Should we have a Gluster Community slack team? I'm interested in
> everyone's thoughts on this.
> 

As fancy as Slack is, I really don't like how dependent so many projects have 
become on it: it's not open source, the desktop client is dreadful (a heavy 
JavaScript webframe), and you're at the mercy of the company's decisions around the 
direction of their 'black box' software platform, etc.

Personally I've found that discourse is a fantastic platform for project 
discussion, help and ideas: https://discourse.org 

and for chat I've found that if IRC + a good web frontend for history/search 
isn't enough, using either Mattermost (https://about.mattermost.com/) or 
Rocket Chat (https://rocket.chat/) has been very successful.

Just my 2c and I'll be happy to be a part of the community no matter where it 
ends up.

--
Sam McLeod
https://smcleod.net
https://twitter.com/s_mcleod
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] file shred

2017-11-09 Thread Ravishankar N



On 11/08/2017 11:36 PM, Kingsley Tart wrote:

Hi,

if we were to use shred to delete a file on a gluster volume, will the
correct blocks be overwritten on the bricks?

(still using Gluster 3.6.3 as have been too cautious to upgrade a
mission critical live system).
When I strace `shred filename`, it just seems to write + fsync random 
values into the file based on some seed from /dev/urandom. Since these 
are normal syscalls, and since gluster just sends these FOPS to the 
bricks, it should work fine.
All caveats listed in `man shred` for the on-disk file system (XFS) 
still apply.
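A typical invocation on the mounted volume (the path is only an example) looks like:

shred -v -n 3 -z -u /mnt/glustervol/path/to/file

-n sets the number of overwrite passes, -z adds a final pass of zeros, and -u
truncates and removes the file afterwards; as noted above, the usual `man shred`
caveats about the underlying file system still apply.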

Regards,
Ravi

Cheers,
Kingsley.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread Martin Toth
@Amye +1 for this great idea, I am 100% for it.
@Vijay, for archiving purposes maybe it would be possible to use a free service such as 
https://slackarchive.io/

BR,
Martin

> On 9 Nov 2017, at 00:09, Vijay Bellur  wrote:
> 
> 
> 
> On Wed, Nov 8, 2017 at 4:22 PM, Amye Scavarda  > wrote:
> From today's community meeting, we had an item from the issue queue:
> https://github.com/gluster/community/issues/13 
> 
> 
> Should we have a Gluster Community slack team? I'm interested in
> everyone's thoughts on this.
> 
> 
> 
> +1 to the idea.
> 
> One of the limitations I have encountered in a few slack channels is the lack 
> of archiving & hence limited search capabilities. If we establish a gluster 
> channel, what would be the archiving strategy?
> 
> Thanks,
> Vijay
>  
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfs brick server use too high memory

2017-11-09 Thread Nithya Balachandran
On 8 November 2017 at 17:16, Yao Guotao  wrote:

> Hi all,
> I'm glad to add glusterfs community.
>
> I have a glusterfs cluster:
> Nodes: 4
> System: Centos7.1
> Glusterfs: 3.8.9
> Each Node:
> CPU: 48 core
> Mem: 128GB
> Disk: 1*4T
>
> There is one Distributed Replicated volume. There are ~160 k8s pods as
> clients connecting to glusterfs. But, the memory of  glusterfsd process is
> too high, gradually increase to 100G every node.
> Then, I reboot the glusterfsd process. But the memory increase during
> approximate a week.
> How can I debug the problem?
>
> Hi,

Please take statedumps at intervals (a minimum of 2 at intervals of an
hour) of a brick process for which you see the memory increasing and send
them to us.  [1] describes how to take statedumps.
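For example (the volume name is a placeholder; by default the dumps are written
under /var/run/gluster on the brick nodes):

gluster volume statedump <volname>

Running it again an hour or so later gives us at least two dumps to compare.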

Regards,
Nithya

[1] http://docs.gluster.org/en/latest/Troubleshooting/statedump/




> Thanks.
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Adding a slack for communication?

2017-11-09 Thread Amye Scavarda
On Wed, Nov 8, 2017 at 3:12 PM, Bartosz Zięba  wrote:
> It's great idea! :)
>
> But think about creating Slack for all RedHat provided opensource projects.
> For example one Slack workspace with separated Gluster, Ceph, Fedora etc.
> channels.
>
> I can't wait for it!
>
> Bartosz
>

At the moment, we're just limited to Gluster's footprint in the world,
but I can take that feedback back to the teams at Red Hat.
Thanks!
-- amye
>
>
> On 08.11.2017 22:22, Amye Scavarda wrote:
>>
>>  From today's community meeting, we had an item from the issue queue:
>> https://github.com/gluster/community/issues/13
>>
>> Should we have a Gluster Community slack team? I'm interested in
>> everyone's thoughts on this.
>> - amye
>>
>
>



-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users