Re: [Gluster-users] vfs_gluster broken

2018-09-11 Thread Anoop C S
On Tue, 2018-09-11 at 15:10 -0600, Terry McGuire wrote: > Hello list. I had happily been sharing a Gluster volume with Samba using vfs_gluster, but it has recently stopped working right. I think it might have been after updating Samba from 4.6.2 to 4.7.1 (as part of updating CentOS 7.4

Re: [Gluster-users] 4.1.x geo-replication "changelogs could not be processed completely" issue

2018-09-11 Thread Kotresh Hiremath Ravishankar
Answer inline. On Tue, Sep 11, 2018 at 4:19 PM, Kotte, Christian (Ext) <christian.ko...@novartis.com> wrote: > Hi all, I use glusterfs 4.1.3 non-root user geo-replication in a cascading setup. The gsyncd.log on the master is fine, but I have some strange changelog warnings and errors

Re: [Gluster-users] Expanding, rebalance and sharding on 3.12.13

2018-09-11 Thread Raghavendra Gowdappa
On Wed, Sep 12, 2018 at 1:43 AM, Jamie Lawrence wrote: > Hello, I have a 3-node cluster running 2 three-way dist/replicate volumes for oVirt, and three new nodes that I'd like to add. I've unfortunately not had time to closely follow this list the last few months, and am havi

[Gluster-users] Community Meeting, September 12th, 15:00 UTC

2018-09-11 Thread Amye Scavarda
We'll be in #gluster-meeting on IRC, agenda lives in: https://bit.ly/gluster-community-meetings No agenda items yet, maybe you have some? - amye -- Amye Scavarda | a...@redhat.com | Gluster Community Lead

[Gluster-users] vfs_gluster broken

2018-09-11 Thread Terry McGuire
Hello list. I had happily been sharing a Gluster volume with Samba using vfs_gluster, but it has recently stopped working right. I think it might have been after updating Samba from 4.6.2 to 4.7.1 (as part of updating CentOS 7.4 to 7.5). The shares suffer a variety of weird issues, including
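
For reference, a minimal smb.conf share using Samba's vfs_glusterfs module looks roughly like the sketch below; the share name, volume name, and log path are placeholders, not values from this post:

    [gluster-share]                     ; hypothetical share name
        vfs objects = glusterfs         ; serve the share through libgfapi instead of a FUSE mount
        path = /                        ; interpreted relative to the root of the Gluster volume
        glusterfs:volume = myvolume     ; hypothetical Gluster volume name
        glusterfs:logfile = /var/log/samba/glusterfs-myvolume.log
        glusterfs:loglevel = 7          ; raise while debugging, lower again for production
        kernel share modes = no
        read only = no

Diffing the working 4.6.x configuration against the post-upgrade one is a reasonable first step before suspecting the vfs module itself.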

[Gluster-users] Expanding, rebalance and sharding on 3.12.13

2018-09-11 Thread Jamie Lawrence
Hello, I have a 3-node cluster running 2 three-way dist/replicate volumes for oVirt, and three new nodes that I'd like to add. I've unfortunately not had time to closely follow this list the last few months, and am having trouble finding any status on the corruption issue with rebala
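
Separate from the sharding/corruption question raised here, the expansion itself is normally an add-brick followed by a rebalance; a rough sketch with hypothetical volume, host, and brick names:

    # add one more three-way replica subvolume to an existing replica-3 volume
    gluster volume add-brick myvol replica 3 \
        node4:/bricks/b1/brick node5:/bricks/b1/brick node6:/bricks/b1/brick
    # spread existing data onto the new subvolume and watch progress
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status

This is only command syntax; whether rebalance is safe on a sharded 3.12.x volume is exactly the open question in this thread.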

Re: [Gluster-users] Was: Upgrade to 4.1.2 geo-replication does not work Now: Upgraded to 4.1.3 geo node Faulty

2018-09-11 Thread Marcus Pedersén
Hi Milind, I do not know if this will help, but using ausearch on one of the master nodes gives this: time->Tue Sep 11 03:28:56 2018 type=PROCTITLE msg=audit(1536629336.548:1202535): proctitle=2F7573722F7362696E2F676C7573746572667364002D73007572642D6764732D303031002D2D766F6C66696C652D69
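
The proctitle field in audit records is a hex-encoded command line with NUL-separated arguments; it can be read back with something like the sketch below (substitute the full hex value from the record), or by letting ausearch interpret it:

    # turn the hex proctitle back into a readable command line
    echo <hex-proctitle> | xxd -r -p | tr '\0' ' '; echo
    # or have ausearch decode the fields of that event id
    ausearch -i -a 1202535

Decoded, the value above begins with /usr/sbin/glusterfsd -s urd-gds-001 --volfile-i..., i.e. a brick process.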

[Gluster-users] 4.1.x geo-replication "changelogs could not be processed completely" issue

2018-09-11 Thread Kotte, Christian (Ext)
Hi all, I use glusterfs 4.1.3 non-root user geo-replication in a cascading setup. The gsyncd.log on the master is fine, but I have some strange changelog warnings and errors on the interimmaster: gsyncd.log … [2018-09-11 10:38:35.575464] I [master(worker /bricks/brick1/brick):1460:crawl] _GMas
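
For debugging a session like this, the worker state and settings on the master side can usually be inspected with commands along these lines (volume, user, and host names are placeholders, not taken from this setup):

    # worker health, last-synced time, and crawl status per brick
    gluster volume geo-replication mastervol geouser@slavehost::slavevol status detail
    # effective session configuration, including log levels
    gluster volume geo-replication mastervol geouser@slavehost::slavevol config

A Faulty worker or a crawl that keeps restarting in status detail usually narrows down where changelog warnings like these originate.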

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-09-11 Thread Mauro Tridici
Hi Hari, thank you very much for the explanation and for your important support. Best regards, Mauro > On 11 Sep 2018, at 10:49, Hari Gowtham wrote: > Hi Mauro, It was because the quota crawl takes some time and it was working on it. When we ran the fix-issues it

Re: [Gluster-users] Gluster 3.10.5: used disk size reported by quota and du mismatch

2018-09-11 Thread Hari Gowtham
Hi Mauro, It was because the quota crawl takes some time and it was still working on it. When we ran the fix-issues, it made changes to the backend and did a lookup. It takes time for the whole thing to be reflected in the quota list command. Earlier, it didn't reflect the changes as it was still crawling. So this is th
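
To make that concrete, the mismatch under discussion is between the two views sketched below (volume name and paths are hypothetical); the first only catches up once the crawl triggered by the fix has finished:

    # usage as accounted by the quota translator
    gluster volume quota myvol list /some/dir
    # usage as seen with du on a FUSE mount of the volume
    du -sh /mnt/myvol/some/dir

A temporary disagreement between the two right after running the fix is therefore expected.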

Re: [Gluster-users] gluster connection interrupted during transfer

2018-09-11 Thread Richard Neuboeck
Hi, since I feared that the logs would fill up the partition (again), I checked the systems daily and finally found the reason. The glusterfs process on the client runs out of memory and gets killed by the OOM killer after about four days. Since rsync runs for a couple of days longer until it ends, I never che
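
A few generic checks that fit this kind of report are sketched below; the PID is a placeholder, and this only gathers evidence rather than fixing the leak:

    # confirm the kill from the kernel side
    dmesg -T | grep -i -A5 'out of memory'
    # watch the FUSE client's resident memory while the rsync runs
    ps -o pid,rss,etime,args -C glusterfs
    # ask the client process for a statedump (written under /var/run/gluster by default)
    kill -USR1 <glusterfs-client-pid>

Comparing statedumps taken a day or two apart usually shows which translator's allocations keep growing.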