>
> Hi Strahil,
>
In fact I'm running more bricks per host, around 12 bricks per host.
Nonetheless the feature doesn't really seem to work for me, since it
starts a separate glusterfsd process for each brick anyway; actually,
after a reboot or restart of glusterd, multiple glusterfsd for the
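If brick multiplexing is the feature in question, it may be worth verifying that the option is actually enabled cluster-wide and then counting the brick processes; a minimal sketch using the standard GlusterFS CLI (the expected process count of one per node assumes multiplexing is working):

```shell
# Verify that brick multiplexing is enabled cluster-wide:
gluster volume get all cluster.brick-multiplex

# With multiplexing active, a single glusterfsd process per node should
# host all local bricks; count the running brick processes to confirm:
pgrep -c glusterfsd
```

If the count still matches the number of bricks (e.g. 12), the bricks were not attached to a single process, which matches the behaviour described above.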
On January 30, 2020 8:21:18 AM GMT+02:00, Ravishankar N
wrote:
>
>On 30/01/20 11:41 am, Ravishankar N wrote:
>> I think that for some reason the AFR xattrs on the parent dir were
>> not set, which is why the files are stuck in split-brain (instead
>> of getting recreated on repo2 using the
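The state of the parent directory's AFR xattrs can be checked directly on each brick; a sketch, where the brick path /data/brick1/parent-dir and the volume name VOLNAME are placeholders:

```shell
# Dump all xattrs (hex-encoded) on the parent directory on each brick;
# trusted.afr.* entries with non-zero values indicate pending heals,
# while missing trusted.afr.* entries suggest the xattrs were never set:
getfattr -d -m . -e hex /data/brick1/parent-dir

# List the entries GlusterFS currently considers to be in split-brain:
gluster volume heal VOLNAME info split-brain
```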
On January 29, 2020 1:43:03 PM GMT+02:00, Olaf Buitelaar
wrote:
>Hi Strahil,
>
>Thank you for your reply. I found the issue: the "not connected"
>errors seem to originate from the ACL layer. Somehow it received a
>permission denied, and this was translated to a not-connected error.
>While the
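One way to confirm that the errors originate in the ACL translator is to search the client mount log for ACL-related permission errors near the "not connected" messages; a sketch, with the log path (FUSE mount logs are named after the mount point) given as an example:

```shell
# The log path below is an assumption; adjust to your mount point's log.
grep -E 'posix_acl|Permission denied|Transport endpoint is not connected' \
    /var/log/glusterfs/mnt-glusterfs.log | tail -n 20
```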
We see the same thing: after a reboot or downtime of one server there are
almost always unresolved heal entries, which renders the whole concept of 3x or
2+1 replication somewhat moot.
We could often resolve it by just running "touch" on the files through a FUSE
mount, after finding out which
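The workaround described above can be checked step by step with the standard heal commands; a minimal sketch, where VOLNAME and the mount path are placeholders:

```shell
# Show the entries still pending heal after the reboot:
gluster volume heal VOLNAME info

# Trigger an index heal explicitly:
gluster volume heal VOLNAME

# If entries remain, touching the affected file through a FUSE mount
# forces a fresh lookup, which can kick off the self-heal:
touch /mnt/glusterfs/path/to/affected-file
```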
Hello,
We had a similar issue when we upgraded one of our clusters to 6.5 while
clients were running 4.1.5 and 4.1.9; both crashed after a few seconds of
mounting. We did not dig into the issue; instead, we upgraded the clients to
6.5 and it worked fine.
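Before mounting with older clients, it may help to compare the cluster's operating version against what the clients ship; a sketch using standard GlusterFS CLI options:

```shell
# On a server, show the cluster's current and maximum operating version:
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version

# On each client, check the installed GlusterFS version:
glusterfs --version
```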
On Tue, Jan 28, 2020 at 1:35 AM Laurent