On Tue, 2018-09-11 at 15:10 -0600, Terry McGuire wrote:
> Hello list. I had happily been sharing a Gluster volume with Samba using
> vfs_gluster, but it has
> recently stopped working right. I think it might have been after updating
> Samba from 4.6.2 to
> 4.7.1 (as part of updating CentOS 7.4
Answer inline.
On Tue, Sep 11, 2018 at 4:19 PM, Kotte, Christian (Ext) <
christian.ko...@novartis.com> wrote:
> Hi all,
>
>
>
> I use glusterfs 4.1.3 non-root user geo-replication in a cascading setup.
> The gsyncd.log on the master is fine, but I have some strange changelog
> warnings and errors
On Wed, Sep 12, 2018 at 1:43 AM, Jamie Lawrence
wrote:
> Hello,
>
> I have a 3 node cluster running 2 three-way dist/replicate volumes for
> Ovirt and three new nodes currently that I'd like to add. I've
> unfortunately not had time to closely follow this list the last few months,
> and am havi
We'll be in #gluster-meeting on IRC, agenda lives in:
https://bit.ly/gluster-community-meetings
No agenda items yet, maybe you have some?
- amye
--
Amye Scavarda | a...@redhat.com | Gluster Community Lead
Hello list. I had happily been sharing a Gluster volume with Samba using
vfs_gluster, but it has recently stopped working right. I think it might have
been after updating Samba from 4.6.2 to 4.7.1 (as part of updating CentOS 7.4
to 7.5). The shares suffer a variety of weird issues, including
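For comparison, a minimal vfs_glusterfs share definition (share and volume names here are placeholders, not taken from the thread) would be along these lines:

```
[gvol]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = gvol
    glusterfs:logfile = /var/log/samba/glusterfs-gvol.log
    kernel share modes = no
    read only = no
```

The vfs_glusterfs manual page recommends `kernel share modes = no`, since the module accesses the volume through libgfapi rather than a kernel mount.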
Hello,
I have a 3 node cluster running 2 three-way dist/replicate volumes for Ovirt
and three new nodes currently that I'd like to add. I've unfortunately not had
time to closely follow this list the last few months, and am having trouble
finding any status on the corruption issue with rebala
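Assuming the usual expansion flow (node and brick names below are placeholders), adding a new replica set to an existing replica-3 volume and then rebalancing looks roughly like:

```
gluster peer probe node4
gluster peer probe node5
gluster peer probe node6
gluster volume add-brick myvol replica 3 \
    node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```

This is only a sketch of the standard procedure; whether rebalance is safe on a given release is exactly the open question in the message above.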
Hi Milind,
I do not know if this will help, but using ausearch on one of the master nodes
gives this:
time->Tue Sep 11 03:28:56 2018
type=PROCTITLE msg=audit(1536629336.548:1202535):
proctitle=2F7573722F7362696E2F676C7573746572667364002D73007572642D6764732D303031002D2D766F6C66696C652D69
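The proctitle field in audit records is hex-encoded (`ausearch -i` will also interpret it). A quick way to make it readable by hand, assuming `xxd` is available, is:

```shell
# Audit records hex-encode proctitle; argv entries are separated by NUL bytes.
# xxd -r -p reverses the plain hex dump; tr maps NULs to spaces for readability.
echo '2F7573722F7362696E2F676C7573746572667364' | xxd -r -p | tr '\0' ' '
# → /usr/sbin/glusterfsd
```

The sample hex above is the first token of the proctitle in the record quoted here; the rest of the field is truncated in the preview.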
Hi all,
I use glusterfs 4.1.3 non-root user geo-replication in a cascading setup. The
gsyncd.log on the master is fine, but I have some strange changelog warnings
and errors on the interimmaster:
gsyncd.log
…
[2018-09-11 10:38:35.575464] I [master(worker /bricks/brick1/brick):1460:crawl]
_GMas
Hi Hari,
thank you very much for the explanation and for your valuable support.
Best regards,
Mauro
> On 11 Sep 2018, at 10:49, Hari Gowtham wrote:
>
> Hi Mauro,
>
> It was because the quota crawl takes some time and it was working on it.
> When we ran the fix-issues it
Hi Mauro,
It was because the quota crawl takes some time, and it was working on it.
When we run the fix-issues, it makes changes to the backend and does a lookup.
It takes time for the whole thing to reflect in the quota list command.
Earlier, it didn't reflect as it was still crawling. So this is th
Hi,
Since I feared that the logs would fill up the partition (again), I
checked the systems daily and finally found the reason. The glusterfs
process on the client runs out of memory and gets killed by the OOM killer
after about four days. Since rsync runs for a couple of days longer till it
ends, I never che
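For anyone hitting the same symptom, the kernel log records OOM kills; a rough check (log locations vary by distro, and on systemd systems `journalctl -k` reads the same ring buffer) is:

```shell
# Look for OOM-killer activity against the glusterfs client in the kernel log.
dmesg | grep -iE 'out of memory|killed process.*glusterfs'
```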
11 matches