>
> Hi Strahil,
>
in fact I'm running more bricks per host, around 12 bricks per host.
Nonetheless the feature doesn't really seem to work for me, since it
starts a separate glusterfsd process for each brick anyway; after a
reboot or restart of glusterd, multiple glusterfsd processes for the
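For reference, the feature in question here is presumably brick multiplexing, which is toggled cluster-wide and only takes effect once bricks are restarted. A sketch of checking and enabling it, assuming the standard gluster CLI option names (adjust to your version's docs):

```shell
# Check the current setting (off/disable by default):
gluster volume get all cluster.brick-multiplex

# Enable multiplexing cluster-wide; bricks only share a glusterfsd
# process after the volumes/bricks are restarted:
gluster volume set all cluster.brick-multiplex on

# With multiplexing active, expect far fewer glusterfsd processes
# than bricks (ideally one per node):
pgrep -c glusterfsd
```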
On January 29, 2020 1:43:03 PM GMT+02:00, Olaf Buitelaar wrote:
Hi Strahil,
Thank you for your reply. I found the issue: the "not connected" errors
seem to come from the ACL layer. Somehow it received a permission denied,
and this was translated into a "not connected" error.
While the file permissions were listed as owner=vdsm and group=kvm, somehow
ACL saw thi
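For anyone hitting something similar: with POSIX ACLs, the mask entry caps the effective permissions of the group owner and of any named user/group entries, so access can be denied even when owner=vdsm and group=kvm look correct in a plain listing. A minimal illustrative model in Python (not gluster code, just the permission arithmetic):

```python
# Illustrative model of POSIX.1e ACL effective-permission calculation:
# an entry's effective rights are its permission bits ANDed with the
# ACL mask entry's bits.

R, W, X = 4, 2, 1  # read, write, execute bits


def effective(perm_bits: int, mask_bits: int) -> int:
    """Effective rights = entry permissions AND ACL mask."""
    return perm_bits & mask_bits


# group::rwx on paper, but a mask of --- (e.g. rewritten by a copy or
# restore) denies everything in practice:
group_perms = R | W | X  # what the listing suggests
mask = 0                 # mask::---
print(effective(group_perms, mask))  # -> 0, i.e. permission denied
```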
Dear Gluster users,
I'm a bit at a loss here, and any help would be appreciated.
I've lost a couple of virtual machines, since the disks suffered from
severe XFS errors, and some won't boot because they can't resolve the
size of the image as reported by vdsm:
"VM kube-large-01 is down with err