Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-06 Thread mabi
To my eyes this specific case looks like a split-brain scenario, but the output
of "volume heal info split-brain" does not show any files. Should I still use the
process for split-brain files documented in the GlusterFS documentation, or
what do you recommend here?
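For reference, the commands involved would be roughly these (volume name to be
filled in; "latest-mtime" is just one of the documented resolution policies):

  gluster volume heal <VOLNAME> info split-brain
  gluster volume heal <VOLNAME> split-brain latest-mtime <FILE-or-gfid>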


‐‐‐ Original Message ‐‐‐
On Monday, November 5, 2018 4:36 PM, mabi  wrote:

> Ravi, I did not yet modify the cluster.data-self-heal parameter to off
> because in the meantime node2 of my cluster had a memory shortage (this node
> has 32 GB of RAM) and I had to reboot it. After that reboot all locks
> got released and there are no more files left to heal on that volume. So the
> reboot of node2 did the trick (but this still looks like a bug to me).
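>
> For the record, the workaround command I had prepared (using the volume name
> from the heal output further down) was simply
>
>   gluster volume set myvol-private cluster.data-self-heal off
>
> but I never applied it, since the reboot cleared the heal queue.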
>
> Now, on another volume of this same cluster, I have a total of 8 entries (4 of
> them directories) shown as unsynced on node1 and node3 (arbiter), as you can
> see below:
>
> Brick node1:/data/myvol-pro/brick
> /data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir
> gfid:3c92459b-8fa1-4669-9a3d-b38b8d41c360
> /data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/le_dir
> gfid:aae4098a-1a71-4155-9cc9-e564b89957cf
> Status: Connected
> Number of entries: 4
>
> Brick node2:/data/myvol-pro/brick
> Status: Connected
> Number of entries: 0
>
> Brick node3:/srv/glusterfs/myvol-pro/brick
> /data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/oc_dir
> /data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/le_dir
> gfid:aae4098a-1a71-4155-9cc9-e564b89957cf
> gfid:3c92459b-8fa1-4669-9a3d-b38b8d41c360
> Status: Connected
> Number of entries: 4
>
> If I check the "/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/"
> directory with an "ls -l" on the client (GlusterFS FUSE mount), I get the
> following garbage:
>
> drwxr-xr-x 4 www-data www-data 4096 Nov 5 14:19 .
> drwxr-xr-x 31 www-data www-data 4096 Nov 5 14:23 ..
> d? ? ? ? ? ? le_dir
>
> I checked on the nodes and indeed node1 and node3 have the same directory
> with a timestamp of 14:19, but node2 has a directory with a timestamp of 14:12.
>
> Again, the self-heal daemon doesn't seem to be doing anything here... What do
> you recommend I do in order to heal these unsynced entries?
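>
> For completeness, this is roughly how the copies can be compared directly on
> the bricks of each node (the getfattr flags are my assumption; on node3 the
> brick prefix is /srv/glusterfs/myvol-pro/brick instead):
>
>   getfattr -d -m . -e hex /data/myvol-pro/brick/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/le_dir
>   stat /data/myvol-pro/brick/data/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/le_dir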
>
> ‐‐‐ Original Message ‐‐‐
> On Monday, November 5, 2018 2:42 AM, Ravishankar N ravishan...@redhat.com 
> wrote:
>
> > On 11/03/2018 04:13 PM, mabi wrote:
> >
> > > Ravi (or anyone else who can help), I now have even more files which are
> > > pending heal.
> >
> > If the count is increasing, there is likely a network (disconnect)
> > problem between the gluster clients and the bricks that needs fixing.
> >
> > > Here is the output of a "volume heal info summary":
> > > Brick node1:/data/myvol-private/brick
> > > Status: Connected
> > > Total Number of entries: 49845
> > > Number of entries in heal pending: 49845
> > > Number of entries in split-brain: 0
> > > Number of entries possibly healing: 0
> > > Brick node2:/data/myvol-private/brick
> > > Status: Connected
> > > Total Number of entries: 26644
> > > Number of entries in heal pending: 26644
> > > Number of entries in split-brain: 0
> > > Number of entries possibly healing: 0
> > > Brick node3:/srv/glusterfs/myvol-private/brick
> > > Status: Connected
> > > Total Number of entries: 0
> > > Number of entries in heal pending: 0
> > > Number of entries in split-brain: 0
> > > Number of entries possibly healing: 0
> > > Should I try to set the "cluster.data-self-heal" parameter of that volume 
> > > to "off" as mentioned in the bug?
> >
> > Yes, as mentioned in the workaround in the thread that I shared.
> >
> > > And by doing that, does it mean that my files pending heal are in danger 
> > > of being lost?
> >
> > No.
> >
> > > Also is it dangerous to leave "cluster.data-self-heal" to off?
> >
> > No. This is only disabling client-side data healing. The self-heal daemon
> > would still heal the files.
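> >
> > You can still trigger and watch it with the commands you have already been
> > using, for example:
> >
> >   gluster volume heal myvol-private
> >   gluster volume heal myvol-private info summary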
> > -Ravi
> >
> > > ‐‐‐ Original Message ‐‐‐
> > > On Saturday, November 3, 2018 1:31 AM, Ravishankar N 
> > > ravishan...@redhat.com wrote:
> > >
> > > > Mabi,
> > > > If bug 1637953 is what you are experiencing, then you need to follow the
> > > > workarounds mentioned in
> > > > https://lists.gluster.org/pipermail/gluster-users/2018-October/035178.html.
> > > > Can you see if this works?
> > > > -Ravi
> > > > On 11/02/2018 11:40 PM, mabi wrote:
> > > >
> > > > > I tried again to manually run a heal by using the "gluster volume
> > > > > heal" command because still no files have been healed, and noticed
> > > > > the following warning in the glusterd.log file:
> > > > > [2018-11-02 18:04:19.454702] I [MSGID: 106533] 
> > > > > [glusterd-volume-ops.c:938:__glusterd_handle_cli_heal_volume] 
> > > > > 0-management: Received heal vol req for volume myvol-private
> > > > > [2018-11-02 18:04:19.457311] W [rpc-clnt.c:1753:rpc_clnt_submit] 
> > > > > 0-glustershd: error returned while attempting to connect to 
> > > > > host:(null), port:0
> > > > > It looks like 

[Gluster-users] Failed to mount automatically Gluster Volume on Ubuntu 18.04.1 and GFS v5.0

2018-11-06 Thread MOISY Jérôme
Hello,

I just installed a new storage pool with 2 x 5 servers running version 5.0 and I
cannot mount the volume automatically on the client, although mounting it
manually works.

I found that the volume is mounted during boot and then unmounted.

With this line in fstab: srvstogfs-b11:/GFSVOL02 /gfsvol02 glusterfs
defaults,_netdev 0 0, the client logs show the following:
I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from 
remote-host: srvstogfs-b11
I [fuse-bridge.c:4259:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol 
versions: glusterfs 7.24 kernel 7.26
I [fuse-bridge.c:4870:fuse_graph_sync] 0-fuse: switched to graph 0
I [fuse-bridge.c:5134:fuse_thread_proc] 0-fuse: initating unmount of /gfsvol02
W [glusterfsd.c:1481:cleanup_and_exit] 
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f8bbe6456db] 
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xfd) [0x560315e4397d] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x560315e437d4] ) 0-: received 
signum (15), shutting down
I [fuse-bridge.c:5897:fini] 0-fuse: Unmounting '/gfsvol02'.
I [fuse-bridge.c:5902:fini] 0-fuse: Closing fuse connection to '/gfsvol02'

But when I add backupvolfile-server in fstab (srvstogfs-b11:/GFSVOL02 /gfsvol02
glusterfs defaults,_netdev,backupvolfile-server=srvstogfs-a11 0 0), it works and
the logs show this:
I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from 
remote-host: srvstogfs-b11
I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify] 0-glusterfsd-mgmt: connecting to 
next volfile server srvstogfs-a11
I [fuse-bridge.c:4259:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol 
versions: glusterfs 7.24 kernel 7.26
I [fuse-bridge.c:4870:fuse_graph_sync] 0-fuse: switched to graph 0

If I invert the order of the servers it also works (srvstogfs-a11:/GFSVOL02
/gfsvol02 glusterfs defaults,_netdev,backupvolfile-server=srvstogfs-b11 0 0)
and the logs show this:
I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from 
remote-host: srvstogfs-a11
I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify] 0-glusterfsd-mgmt: connecting to 
next volfile server srvstogfs-b11
I [fuse-bridge.c:4259:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol 
versions: glusterfs 7.24 kernel 7.26
I [fuse-bridge.c:4870:fuse_graph_sync] 0-fuse: switched to graph 0
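
To sum up, the fstab entry that mounts reliably for me is the following single
line (the failing variant is identical except that it lacks the
backupvolfile-server option):

srvstogfs-b11:/GFSVOL02 /gfsvol02 glusterfs defaults,_netdev,backupvolfile-server=srvstogfs-a11 0 0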

Servers and client OS: Ubuntu 18.04.1 LTS, fully updated
GlusterFS version: 5.0

Thank you in advance for your help

Jérôme MOISY
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster Monthly Newsletter, October 2018

2018-11-06 Thread Amye Scavarda
Gluster Monthly Newsletter, October 2018

Gluster 5 is out and our retrospective is currently open! This feedback is
anonymous and goes to our release team.
https://lists.gluster.org/pipermail/gluster-users/2018-October/035171.html
https://www.gluster.org/gluster-5-0-retrospective/

Upcoming Community Meeting  - November 7, November 21 - 15:00 UTC in
#gluster-meeting on freenode. https://bit.ly/gluster-community-meetings has
the agenda.

We’re participating in Outreachy this cycle!
https://www.outreachy.org/communities/cfp/gluster/

Want swag for your meetup? https://www.gluster.org/events/ has a contact
form for us to let us know about your Gluster meetup! We’d love to hear
about Gluster presentations coming up, conference talks and gatherings. Let
us know!
Contributors
Top Contributing Companies:  Red Hat, Comcast, DataLab, Gentoo Linux,
Facebook, BioDec, Samsung, Etersoft
Top Contributors in October: Sunny Kumar, Amar Tumballi, Kotresh HR,
Kinglong Mee, Sanju Rakonde

Noteworthy threads:
[Gluster-users] Glusterd2 project updates (
https://github.com/gluster/glusterd2)   -
https://lists.gluster.org/pipermail/gluster-users/2018-October/035006.html
[Gluster-users] Gluster performance updates  -
https://lists.gluster.org/pipermail/gluster-users/2018-October/035020.html
[Gluster-users] Update of work on fixing POSIX compliance issues in
Glusterfs  -
https://lists.gluster.org/pipermail/gluster-users/2018-October/035023.html
[Gluster-users] Maintainer meeting minutes : 1st Oct, 2018  -
https://lists.gluster.org/pipermail/gluster-users/2018-October/035025.html
[Gluster-users] gluster-block Dev Stream Update  -
https://lists.gluster.org/pipermail/gluster-users/2018-October/035063.html
[Gluster-users] GCS 0.1 release! -
https://lists.gluster.org/pipermail/gluster-users/2018-October/035076.html
[Gluster-users] FOSDEM Call for Participation: Software Defined Storage
devroom
https://lists.gluster.org/pipermail/gluster-users/2018-October/035100.html
[Gluster-users] Gluster Monitoring using Prometheus - Status Update
https://lists.gluster.org/pipermail/gluster-users/2018-October/035102.html
[Gluster-users] Maintainer meeting minutes : 15th Oct, 2018
https://lists.gluster.org/pipermail/gluster-users/2018-October/035115.html
[Gluster-users] GlusterFS Project Update - Week 1&2 of Oct
https://lists.gluster.org/pipermail/gluster-users/2018-October/035133.html
[Gluster-users] Announcing Glusterfs release 3.12.15 (Long Term Maintenance)
-
https://lists.gluster.org/pipermail/gluster-users/2018-October/035139.html
[Gluster-users] Announcing Gluster Release 5 -
https://lists.gluster.org/pipermail/gluster-users/2018-October/035171.html
[Gluster-users] Glusterd2 project updates (github.com/gluster/glusterd2)
https://lists.gluster.org/pipermail/gluster-users/2018-October/035209.html
[Gluster-users] Gluster Monitoring project updates (
github.com/gluster/gluster-prometheus)  -
https://lists.gluster.org/pipermail/gluster-users/2018-October/035210.html
[Gluster-users] Consolidating Feature Requests in github
https://lists.gluster.org/pipermail/gluster-users/2018-November/035252.html
[Gluster-devel] gluster-ansible: status of the project -
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055481.html
[Gluster-devel] [Gluster-Maintainers] Release 5: Performance comparisons
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055493.html
[Gluster-devel] POC- Distributed regression testing framework
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055497.html
[Gluster-devel] Infra Update for the last 2 weeks -
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055498.html
[Gluster-devel] Gluster Weekly Report : Static Analyser  -
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055545.html
[Gluster-devel] gluster-ansible: current status   -
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055546.html
[Gluster-devel] Adding ALUA support for Gluster-Block   -
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055623.html
[Gluster-devel] Gluster Components Tracing: Update -
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055622.html
[Gluster-devel] Gluster Weekly Report : Static Analyser
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055625.html
[Gluster-devel] Thin Arbiter Volume : Ready to Use/Trial   -
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055634.html
[Gluster-devel] Maintainer's meeting minutes : 29th October, 2018  -
https://lists.gluster.org/pipermail/gluster-devel/2018-October/055642.html

Events:
KubeCon North America 2018, Dec 11-13, in Seattle, US

Open CFPs:

FOSDEM, Feb 2-3 2019 in Brussels, Belgium - https://fosdem.org/2019/
FOSDEM Software defined Storage DevRoom:
https://lists.gluster.org/pipermail/gluster-users/2018-October/035100.html

Vault, February 25–26, 2019:
https://www.usenix.org/conference/vault19/call-for-participation
Red Hat Summit,  May 7-9, 2019 in Boston -

[Gluster-users] resetted node peers OK but say no volume

2018-11-06 Thread fsoyer

Hi all,
after some problems on a node, it ended up being marked as "rejected" by the
other nodes.
It is no longer part of any volume (after some remove-brick force operations),
so I reset it as described here:
https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Administrator%20Guide/Resolving%20Peer%20Rejected/
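Concretely, the steps from that page that I followed were roughly these (only
glusterd.info is kept; exact commands reproduced from memory):

# on the rejected node
systemctl stop glusterd
cp /var/lib/glusterd/glusterd.info /root/      # keep the node UUID
rm -rf /var/lib/glusterd/*
cp /root/glusterd.info /var/lib/glusterd/
systemctl start glusterd
gluster peer probe <one-of-the-other-nodes>
systemctl restart glusterd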
All seems to work fine now: it sees the other peers and the other peers see it
as "in cluster" and connected. But it has not re-imported the volume information:
# gluster volume info
No volumes present
while there are 5 volumes handled by the 2 other peers.
How can I recover the volume information on this node (without issues or
downtime on the other hosts, as they are used in production)?
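(If it is considered safe, the command I am tempted to try on the reset node is
something like "gluster volume sync <one-of-the-healthy-peers> all", but I would
rather have confirmation from the list first.)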

Thanks
--

Regards,

Frank
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-11-06 Thread Vlad Kopylov
Tiering is working on 3.12.4.
For the small-file load it didn't work for us because of excessive attr
calls. After adding an SSD tier, all speeds for the small-file load doubled.
On small-file loads the bottleneck appears to be the network. Unless you
have a 10G local cluster with RDMA you will probably get the same results.

Just tune the volume for small-file access.
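A sketch of the kind of options I mean, with illustrative values that are my
assumptions rather than anything tested in this thread:

gluster volume set <VOLNAME> features.cache-invalidation on
gluster volume set <VOLNAME> features.cache-invalidation-timeout 600
gluster volume set <VOLNAME> performance.cache-invalidation on
gluster volume set <VOLNAME> performance.md-cache-timeout 600
gluster volume set <VOLNAME> network.inode-lru-limit 200000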

v

On Tue, Nov 6, 2018 at 2:04 AM Jeevan Patnaik  wrote:

> Hi,
>
> We are doing a production deployment and I have tested 3.12.4 and found it
> okay to proceed. But we have decided to use the tiering feature at the last
> minute, and something is not right or missing with the tiering feature in
> 3.12.4; hence I'm thinking of using a higher version which may have
> fixed bugs with tiering. But we don't have enough time to test it
> completely for other possible issues before the production deployment. So I'm
> looking for the safest version with a stable tiering feature.
>
> Consolidating the files to backup storage and restoring them is not an
> efficient or feasible option for us, as we have terabytes of small-file
> data, which would take a huge amount of time. So downgrading or upgrading is
> something we can't do often, and only with scheduled downtime. Hence, trying
> version 5 and falling back is not a preferred option for us.
>
>
> Regards,
> Jeevan.
>
>
> On Tue, Nov 6, 2018, 12:24 PM Jeevan Patnaik  wrote:
>
>> Hi Vlad,
>>
>> I'm still confused about Gluster releases. :(
>> Is 3.13 an official Gluster release? It's not mentioned on
>> www.gluster.org/release-schedule
>>
>> Which is more stable: 3.13.2, 3.12.6 or 4.1.5?
>>
>> 3.13.2 was released in January and there have been no minor releases since
>> then, so I expect it's a stable release. Or maybe no one has used it enough
>> to report bugs?
>>
>> I understand we may see bugs even in the most stable version while using
>> it. I'm looking for a version that's safe to use, with the least chance of
>> corrupting or losing our files.
>>
>> I'm set to test the 3.13.2 tiering feature, but I'm wondering whether
>> 3.12.6 or 4.1.5 should be tested instead.
>>
>> Regards,
>> Jeevan.
>>
>>
>> On Nov 4, 2018 5:11 AM, "Vlad Kopylov"  wrote:
>>
>> If you're doing replica, start with 5 and run your load on it. You can always
>> fall back to 3.12 or 4; it is not like your files will be gone.
>> With distributed it might be harder: the files will still be on the bricks, but
>> you will have to consolidate them or copy them into a new volume after a downgrade.
>> The documentation is all the same.
>>
>>
>> v
>>
>> On Wed, Oct 31, 2018 at 2:54 AM Jeevan Patnaik 
>> wrote:
>>
>>> Hi Vlad,
>>>
>>> Can Gluster 4.1.5 also be used for production? There's no documentation
>>> for Gluster 4.
>>>
>>> Regards,
>>> Jeevan.
>>>
>>> On Wed, Oct 31, 2018, 9:37 AM Vlad Kopylov  wrote:
>>>
 3.12.14 is working fine in production for file access.
 You can find volume and mount settings in the mailing list archives.

 On Tue, Oct 30, 2018 at 11:05 AM Jeevan Patnaik 
 wrote:

> Hi All,
>
> I see gluster 3 has reached end of life and gluster 5 has just been
> introduced.
>
> Is gluster 4.1.5 stable enough for production deployment? I see that by
> default the gluster docs point to v3 only and there are no gluster docs
> for 4 or 5. Why is that? I'm mainly looking for a stable gluster tiering
> feature and kernel NFS support. I faced a few issues with tiering in 3.14
> and so I'm wondering if I should switch to 4.1.5, as it will be a production
> deployment.
>
> Thank you.
>
> Regards,
> Jeevan.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users


>>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Can't enable shared_storage with Glusterv5.0

2018-11-06 Thread David Spisla
Ok, thanks for the update.

On Tue, Nov 6, 2018 at 3:46 PM, Sanju Rakonde  wrote:

> Hi David,
>
> With commit 44e4db, the shared-storage functionality has broken. The test case
> we added couldn't catch this, since our .t framework simulates a cluster
> environment on a single node. We will send out a patch for this soon (into the
> release-5 branch as well).
>
> On Tue, Nov 6, 2018 at 4:15 PM David Spisla  wrote:
>
>> Hello folks,
>>
>> I tried to create the shared_storage on a Gluster v5.0 4-node cluster
>> with SLES15. Below you find the log output of glusterd.log after executing
>> these commands:
>> $ sudo systemctl start glusterd
>> $ sudo gluster vo set all cluster.enable-shared-storage enable
>>
>> The console output reports success, but there is no shared_storage volume:
>> it is not in the volume list and there are no volfiles in
>> /var/lib/glusterd/vols.
>> SSL support for glusterd is enabled and the directory
>> /var/run/gluster/shared_storage exists.
>>
>> Regards
>> David Spisla
>>
>> *[2018-11-06 10:26:26.381458] I [MSGID: 100030] [glusterfsd.c:2691:main]
>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 5.0 (args:
>> /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)[2018-11-06
>> 10:26:26.383793] I [MSGID: 106478] [glusterd.c:1435:init] 0-management:
>> Maximum allowed open file descriptors set to 65536[2018-11-06
>> 10:26:26.383818] I [MSGID: 106479] [glusterd.c:1491:init] 0-management:
>> Using /var/lib/glusterd as working directory[2018-11-06 10:26:26.383826] I
>> [MSGID: 106479] [glusterd.c:1497:init] 0-management: Using /var/run/gluster
>> as pid file working directory[2018-11-06 10:26:26.385081] I
>> [socket.c:4167:ssl_setup_connection_params] 0-socket.management: SSL
>> support on the I/O path is ENABLED[2018-11-06 10:26:26.385098] I
>> [socket.c:4170:ssl_setup_connection_params] 0-socket.management: SSL
>> support for glusterd is ENABLED[2018-11-06 10:26:26.385103] I
>> [socket.c:4180:ssl_setup_connection_params] 0-socket.management: using
>> certificate depth 1[2018-11-06 10:26:26.385248] I
>> [socket.c:4225:ssl_setup_connection_params] 0-socket.management: failed to
>> open /etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06
>> 10:26:26.387073] W [MSGID: 103071] [rdma.c:4475:__gf_rdma_ctx_create]
>> 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such
>> device][2018-11-06 10:26:26.387096] W [MSGID: 103055] [rdma.c:4774:init]
>> 0-rdma.management: Failed to initialize IB Device[2018-11-06
>> 10:26:26.387104] W [rpc-transport.c:339:rpc_transport_load]
>> 0-rpc-transport: 'rdma' initialization failed[2018-11-06 10:26:26.387179] W
>> [rpcsvc.c:1789:rpcsvc_create_listener] 0-rpc-service: cannot create
>> listener, initing the transport failed[2018-11-06 10:26:26.387190] E
>> [MSGID: 106244] [glusterd.c:1798:init] 0-management: creation of 1
>> listeners failed, continuing with succeeded transport[2018-11-06
>> 10:26:26.387295] I [socket.c:4170:ssl_setup_connection_params]
>> 0-socket.management: SSL support for glusterd is ENABLED[2018-11-06
>> 10:26:26.387302] I [socket.c:4180:ssl_setup_connection_params]
>> 0-socket.management: using certificate depth 1[2018-11-06 10:26:26.387428]
>> I [socket.c:4225:ssl_setup_connection_params] 0-socket.management: failed
>> to open /etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06
>> 10:26:29.263657] I [MSGID: 106513]
>> [glusterd-store.c:2282:glusterd_restore_op_version] 0-glusterd: retrieved
>> op-version: 5[2018-11-06 10:26:29.264613] I [MSGID: 106498]
>> [glusterd-handler.c:3647:glusterd_friend_add_from_peerinfo] 0-management:
>> connect returned 0The message "I [MSGID: 106498]
>> [glusterd-handler.c:3647:glusterd_friend_add_from_peerinfo] 0-management:
>> connect returned 0" repeated 2 times between [2018-11-06 10:26:29.264613]
>> and [2018-11-06 10:26:29.264716][2018-11-06 10:26:29.264769] W [MSGID:
>> 106061] [glusterd-handler.c:3453:glusterd_transport_inet_options_build]
>> 0-glusterd: Failed to get tcp-user-timeout[2018-11-06 10:26:29.264808] I
>> [rpc-clnt.c:1000:rpc_clnt_connection_init] 0-management: setting
>> frame-timeout to 600[2018-11-06 10:26:29.264915] I
>> [socket.c:4167:ssl_setup_connection_params] 0-management: SSL support on
>> the I/O path is ENABLED[2018-11-06 10:26:29.264922] I
>> [socket.c:4170:ssl_setup_connection_params] 0-management: SSL support for
>> glusterd is ENABLED[2018-11-06 10:26:29.264926] I
>> [socket.c:4180:ssl_setup_connection_params] 0-management: using certificate
>> depth 1[2018-11-06 10:26:29.265075] I
>> [socket.c:4225:ssl_setup_connection_params] 0-management: failed to open
>> /etc/ssl/dhparam.pem, DH ciphers are 

Re: [Gluster-users] Can't enable shared_storage with Glusterv5.0

2018-11-06 Thread Sanju Rakonde
Hi David,

With commit 44e4db, the shared-storage functionality has broken. The test case
we added couldn't catch this, since our .t framework simulates a cluster
environment on a single node. We will send out a patch for this soon (into the
release-5 branch as well).

On Tue, Nov 6, 2018 at 4:15 PM David Spisla  wrote:

> Hello folks,
>
> I tried to create the shared_storage on a Gluster v5.0 4-node cluster with
> SLES15. Below you find the log output of glusterd.log after executing these
> commands:
> $ sudo systemctl start glusterd
> $ sudo gluster vo set all cluster.enable-shared-storage enable
>
> The console output reports success, but there is no shared_storage volume:
> it is not in the volume list and there are no volfiles in
> /var/lib/glusterd/vols.
> SSL support for glusterd is enabled and the directory
> /var/run/gluster/shared_storage exists.
>
> Regards
> David Spisla
>
> *[2018-11-06 10:26:26.381458] I [MSGID: 100030] [glusterfsd.c:2691:main]
> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 5.0 (args:
> /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)[2018-11-06
> 10:26:26.383793] I [MSGID: 106478] [glusterd.c:1435:init] 0-management:
> Maximum allowed open file descriptors set to 65536[2018-11-06
> 10:26:26.383818] I [MSGID: 106479] [glusterd.c:1491:init] 0-management:
> Using /var/lib/glusterd as working directory[2018-11-06 10:26:26.383826] I
> [MSGID: 106479] [glusterd.c:1497:init] 0-management: Using /var/run/gluster
> as pid file working directory[2018-11-06 10:26:26.385081] I
> [socket.c:4167:ssl_setup_connection_params] 0-socket.management: SSL
> support on the I/O path is ENABLED[2018-11-06 10:26:26.385098] I
> [socket.c:4170:ssl_setup_connection_params] 0-socket.management: SSL
> support for glusterd is ENABLED[2018-11-06 10:26:26.385103] I
> [socket.c:4180:ssl_setup_connection_params] 0-socket.management: using
> certificate depth 1[2018-11-06 10:26:26.385248] I
> [socket.c:4225:ssl_setup_connection_params] 0-socket.management: failed to
> open /etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06
> 10:26:26.387073] W [MSGID: 103071] [rdma.c:4475:__gf_rdma_ctx_create]
> 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such
> device][2018-11-06 10:26:26.387096] W [MSGID: 103055] [rdma.c:4774:init]
> 0-rdma.management: Failed to initialize IB Device[2018-11-06
> 10:26:26.387104] W [rpc-transport.c:339:rpc_transport_load]
> 0-rpc-transport: 'rdma' initialization failed[2018-11-06 10:26:26.387179] W
> [rpcsvc.c:1789:rpcsvc_create_listener] 0-rpc-service: cannot create
> listener, initing the transport failed[2018-11-06 10:26:26.387190] E
> [MSGID: 106244] [glusterd.c:1798:init] 0-management: creation of 1
> listeners failed, continuing with succeeded transport[2018-11-06
> 10:26:26.387295] I [socket.c:4170:ssl_setup_connection_params]
> 0-socket.management: SSL support for glusterd is ENABLED[2018-11-06
> 10:26:26.387302] I [socket.c:4180:ssl_setup_connection_params]
> 0-socket.management: using certificate depth 1[2018-11-06 10:26:26.387428]
> I [socket.c:4225:ssl_setup_connection_params] 0-socket.management: failed
> to open /etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06
> 10:26:29.263657] I [MSGID: 106513]
> [glusterd-store.c:2282:glusterd_restore_op_version] 0-glusterd: retrieved
> op-version: 5[2018-11-06 10:26:29.264613] I [MSGID: 106498]
> [glusterd-handler.c:3647:glusterd_friend_add_from_peerinfo] 0-management:
> connect returned 0The message "I [MSGID: 106498]
> [glusterd-handler.c:3647:glusterd_friend_add_from_peerinfo] 0-management:
> connect returned 0" repeated 2 times between [2018-11-06 10:26:29.264613]
> and [2018-11-06 10:26:29.264716][2018-11-06 10:26:29.264769] W [MSGID:
> 106061] [glusterd-handler.c:3453:glusterd_transport_inet_options_build]
> 0-glusterd: Failed to get tcp-user-timeout[2018-11-06 10:26:29.264808] I
> [rpc-clnt.c:1000:rpc_clnt_connection_init] 0-management: setting
> frame-timeout to 600[2018-11-06 10:26:29.264915] I
> [socket.c:4167:ssl_setup_connection_params] 0-management: SSL support on
> the I/O path is ENABLED[2018-11-06 10:26:29.264922] I
> [socket.c:4170:ssl_setup_connection_params] 0-management: SSL support for
> glusterd is ENABLED[2018-11-06 10:26:29.264926] I
> [socket.c:4180:ssl_setup_connection_params] 0-management: using certificate
> depth 1[2018-11-06 10:26:29.265075] I
> [socket.c:4225:ssl_setup_connection_params] 0-management: failed to open
> /etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06 10:26:29.268184] I
> [rpc-clnt.c:1000:rpc_clnt_connection_init] 0-management: setting
> frame-timeout to 600[2018-11-06 10:26:29.268277] I
> [socket.c:4167:ssl_setup_connection_params] 0-management: SSL support on
> the I/O path is ENABLED[2018-11-06 

[Gluster-users] anyone using gluster-block?

2018-11-06 Thread Davide Obbi
Hi,

I am testing gluster-block and I am wondering if someone has used it and
has some feedback regarding its performance, just to set some
expectations. For example:
- I have deployed a block volume using Heketi on a 3-node Gluster 4.1
cluster; it's a replica 3 volume.
- I have mounted it via iSCSI using the suggested multipath config, created a
VG/LV and put XFS on it.
- All done without touching any volume settings or customizing XFS
parameters, etc.
- All bare metal running on 10Gb; Gluster has a single block device, an SSD
in use by Heketi.

So I tried a dd and I get 4.7 MB/s? (An approximation of the command I used is
at the end of this mail.)
- On the Gluster nodes I see, for writes, ~200 IOPS, ~15 MB/s, a steady 75%
util and spiky await times up to 100 ms, alternating between the servers.
CPUs are mostly idle but there is some waiting.
- glusterd and glusterfsd utilization is below 1%.

The thing is that a Gluster FUSE mount on the same platform does not have
this slowness, so there must be something wrong with my understanding of
gluster-block?
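
For reference, the dd invocation was along these lines (block size, flags and
mount point are from memory/placeholders, so treat it as an approximation):

dd if=/dev/zero of=/mnt/blockvol/testfile bs=1M count=1024 oflag=direct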
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Can't enable shared_storage with Glusterv5.0

2018-11-06 Thread David Spisla
Hello folks,

I tried to create the shared_storage on a Gluster v5.0 4-node cluster with
SLES15. Below you find the log output of glusterd.log after executing these
commands:
$ sudo systemctl start glusterd
$ sudo gluster vo set all cluster.enable-shared-storage enable

The console output reports success, but there is no shared_storage volume:
it is not in the volume list and there are no volfiles in
/var/lib/glusterd/vols.
SSL support for glusterd is enabled and the directory
/var/run/gluster/shared_storage exists.
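
For completeness, these are roughly the checks I ran afterwards (the exact grep
is my paraphrase):

$ sudo gluster volume list | grep -i shared
$ ls /var/lib/glusterd/vols
$ ls -d /var/run/gluster/shared_storage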

Regards
David Spisla

*[2018-11-06 10:26:26.381458] I [MSGID: 100030] [glusterfsd.c:2691:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 5.0 (args:
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)[2018-11-06
10:26:26.383793] I [MSGID: 106478] [glusterd.c:1435:init] 0-management:
Maximum allowed open file descriptors set to 65536[2018-11-06
10:26:26.383818] I [MSGID: 106479] [glusterd.c:1491:init] 0-management:
Using /var/lib/glusterd as working directory[2018-11-06 10:26:26.383826] I
[MSGID: 106479] [glusterd.c:1497:init] 0-management: Using /var/run/gluster
as pid file working directory[2018-11-06 10:26:26.385081] I
[socket.c:4167:ssl_setup_connection_params] 0-socket.management: SSL
support on the I/O path is ENABLED[2018-11-06 10:26:26.385098] I
[socket.c:4170:ssl_setup_connection_params] 0-socket.management: SSL
support for glusterd is ENABLED[2018-11-06 10:26:26.385103] I
[socket.c:4180:ssl_setup_connection_params] 0-socket.management: using
certificate depth 1[2018-11-06 10:26:26.385248] I
[socket.c:4225:ssl_setup_connection_params] 0-socket.management: failed to
open /etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06
10:26:26.387073] W [MSGID: 103071] [rdma.c:4475:__gf_rdma_ctx_create]
0-rpc-transport/rdma: rdma_cm event channel creation failed [No such
device][2018-11-06 10:26:26.387096] W [MSGID: 103055] [rdma.c:4774:init]
0-rdma.management: Failed to initialize IB Device[2018-11-06
10:26:26.387104] W [rpc-transport.c:339:rpc_transport_load]
0-rpc-transport: 'rdma' initialization failed[2018-11-06 10:26:26.387179] W
[rpcsvc.c:1789:rpcsvc_create_listener] 0-rpc-service: cannot create
listener, initing the transport failed[2018-11-06 10:26:26.387190] E
[MSGID: 106244] [glusterd.c:1798:init] 0-management: creation of 1
listeners failed, continuing with succeeded transport[2018-11-06
10:26:26.387295] I [socket.c:4170:ssl_setup_connection_params]
0-socket.management: SSL support for glusterd is ENABLED[2018-11-06
10:26:26.387302] I [socket.c:4180:ssl_setup_connection_params]
0-socket.management: using certificate depth 1[2018-11-06 10:26:26.387428]
I [socket.c:4225:ssl_setup_connection_params] 0-socket.management: failed
to open /etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06
10:26:29.263657] I [MSGID: 106513]
[glusterd-store.c:2282:glusterd_restore_op_version] 0-glusterd: retrieved
op-version: 5[2018-11-06 10:26:29.264613] I [MSGID: 106498]
[glusterd-handler.c:3647:glusterd_friend_add_from_peerinfo] 0-management:
connect returned 0The message "I [MSGID: 106498]
[glusterd-handler.c:3647:glusterd_friend_add_from_peerinfo] 0-management:
connect returned 0" repeated 2 times between [2018-11-06 10:26:29.264613]
and [2018-11-06 10:26:29.264716][2018-11-06 10:26:29.264769] W [MSGID:
106061] [glusterd-handler.c:3453:glusterd_transport_inet_options_build]
0-glusterd: Failed to get tcp-user-timeout[2018-11-06 10:26:29.264808] I
[rpc-clnt.c:1000:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600[2018-11-06 10:26:29.264915] I
[socket.c:4167:ssl_setup_connection_params] 0-management: SSL support on
the I/O path is ENABLED[2018-11-06 10:26:29.264922] I
[socket.c:4170:ssl_setup_connection_params] 0-management: SSL support for
glusterd is ENABLED[2018-11-06 10:26:29.264926] I
[socket.c:4180:ssl_setup_connection_params] 0-management: using certificate
depth 1[2018-11-06 10:26:29.265075] I
[socket.c:4225:ssl_setup_connection_params] 0-management: failed to open
/etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06 10:26:29.268184] I
[rpc-clnt.c:1000:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600[2018-11-06 10:26:29.268277] I
[socket.c:4167:ssl_setup_connection_params] 0-management: SSL support on
the I/O path is ENABLED[2018-11-06 10:26:29.268287] I
[socket.c:4170:ssl_setup_connection_params] 0-management: SSL support for
glusterd is ENABLED[2018-11-06 10:26:29.268293] I
[socket.c:4180:ssl_setup_connection_params] 0-management: using certificate
depth 1[2018-11-06 10:26:29.268483] I
[socket.c:4225:ssl_setup_connection_params] 0-management: failed to open
/etc/ssl/dhparam.pem, DH ciphers are disabled[2018-11-06 10:26:29.268969] I
[rpc-clnt.c:1000:rpc_clnt_connection_init] 0-management: setting
frame-timeout to 600[2018-11-06 10:26:29.269054] I
[socket.c:4167:ssl_setup_connection_params] 0-management: 

[Gluster-users] is Samba blind to quotas.

2018-11-06 Thread lejeczek

Hi guys,

I have a Samba server (CentOS 7.5) which does not pick up Gluster's quota. More
specifically, it shows 0 bytes free even if I increase the quota.

Where in Gluster could I start troubleshooting, if possible?
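
I assume a first check would be to compare what Gluster itself reports for the
quota, with something like:

gluster volume quota <VOLNAME> list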

many thanks, L.

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Shared Storage is unmount after stopping glusterd

2018-11-06 Thread David Spisla
Dear Gluster Community,
I have a v4.1.5 4-node cluster with SLES15 machines. Lately I have
observed that after stopping glusterd with
$ sudo systemctl stop glusterd

sometimes the FUSE mount of the shared_storage on the same node is also
unmounted, but shared_storage is still in the volume list. I really mean
"sometimes" because it is not reproducible in a deterministic way, and I am
not sure whether there is really a dependency between stopping glusterd and
shared_storage.

What is the expected behaviour when stopping glusterd? In my opinion all
volumes, including shared_storage, should still have their FUSE mounts.
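
For completeness, this is roughly how I check it when it happens (commands from
memory):

$ sudo systemctl stop glusterd
$ mount | grep shared_storage      # sometimes the FUSE mount is already gone here
$ sudo gluster volume list         # run on another node; shared_storage is still listed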

Regards
David Spisla
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users