Hi Strahil,
I have tried removing the quota for that specific directory and setting it
again, but it didn't work (maybe it has to be a full quota disable and enable
in the volume options). I am currently testing a solution
from Hari with the quota_fsck.py script (https://medium.com/@harigowtham/glusterfs-quot
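In case it helps anyone else hitting this, my understanding of the idea behind
that script is that each brick stores the accounted size of a directory in an
xattr, and the mismatch means that xattr no longer agrees with what is really
on disk. Below is a rough sketch of that check in Python; the xattr name and
on-disk format are assumptions on my part and differ between Gluster versions,
and the real quota_fsck.py also deals with the dirty flag and the per-directory
contribution xattrs, so treat this as an illustration only:

#!/usr/bin/env python3
# Illustration only: compare the quota accounting xattr on a brick directory
# with the bytes actually on disk under it. Not quota_fsck.py itself.
# Assumption: the accounted size is the first 64-bit big-endian field of
# trusted.glusterfs.quota.size (newer versions use a different xattr/format).
import os
import struct
import sys

QUOTA_XATTR = "trusted.glusterfs.quota.size"

def disk_usage(path):
    """Add up the sizes of all files under 'path' on this brick."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass  # file vanished mid-walk, ignore
    return total

def accounted_size(path):
    """Read the size the quota translator thinks this directory uses."""
    raw = os.getxattr(path, QUOTA_XATTR)
    return struct.unpack(">Q", raw[:8])[0]

if __name__ == "__main__":
    brick_dir = sys.argv[1]   # e.g. /data/brick1/<directory with the bad quota>
    xattr_bytes = accounted_size(brick_dir)
    real_bytes = disk_usage(brick_dir)
    print("xattr accounting :", xattr_bytes)
    print("bytes on disk    :", real_bytes)
    print("MISMATCH" if xattr_bytes != real_bytes else "consistent")

On a distributed volume this would of course have to be run against the
directory on each brick separately, since each brick only accounts for its
own share.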
Hi João,
Based on your output it seems that the quota size differs between the two bricks.
Have you tried removing the quota and then recreating it? That may be the
easiest way to fix it.
Best Regards,
Strahil Nikolov
On 14 August 2020 at 4:35:14 GMT+03:00, "João Baúto"
wrote:
>Hi all,
On Fri, Aug 14, 2020 at 10:04 AM Gilberto Nunes wrote:
> Hi
> Could you improve the output to show "Possibly undergoing heal" as well?
> gluster vol heal VMS info
> Brick gluster01:/DATA/vms
> Status: Connected
> Number of entries: 0
>
> Brick gluster02:/DATA/vms
> /images/100/vm-100-disk-0.raw -
Yes! I see!
For many small files it is complicated...
Here I am generally using 2 or 3 large files (VM disk images!)...
I think there could at least be some progress bar or percentage for the
healing process... some ETA, or similar...
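For what it's worth, a crude rate/ETA can already be scraped from the existing
output; the sketch below just polls the pending-entry counts and reports how
fast they drop. The volume name and the "Number of entries:" lines are taken
from the example output above, so it is only an illustration, and with 2 or 3
big images the entry count barely moves, which is exactly why a byte-level
indicator would be nicer:

#!/usr/bin/env python3
# Sketch: poll `gluster volume heal <vol> info` and report a rough healing
# rate/ETA from the change in pending entries. Illustration only; assumes
# the "Number of entries:" lines shown in the output quoted above.
import re
import subprocess
import time

VOLUME = "VMS"      # assumption: volume name from the example
INTERVAL = 30       # seconds between samples

def pending_entries(volume):
    out = subprocess.run(
        ["gluster", "volume", "heal", volume, "info"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(int(n) for n in re.findall(r"Number of entries:\s*(\d+)", out))

prev = pending_entries(VOLUME)
while prev > 0:
    time.sleep(INTERVAL)
    cur = pending_entries(VOLUME)
    rate = (prev - cur) / INTERVAL               # entries healed per second
    eta_min = (cur / rate / 60) if rate > 0 else float("inf")
    print(f"pending={cur}  rate={rate:.2f}/s  ETA~{eta_min:.1f} min")
    prev = cur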
Otherwise the tool is nice and promising...
Thanks anyway.
---
Gilbe
Hi,
We are building a new storage system, and after geo-replication has been
running for a few hours the server runs out of memory and the OOM killer
starts killing brick processes. It runs fine with geo-replication off, and the
server has 64 GB of RAM. I have stopped geo-replication for now.
Any ideas?
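In the meantime I am thinking of simply logging the resident memory of the
gluster-related processes over time, to see whether it is a brick process or
the geo-replication worker (gsyncd) that keeps growing. A rough sketch of what
I have in mind (psutil is a third-party module, and the process names are
assumptions on my part):

#!/usr/bin/env python3
# Rough sketch: log the RSS of gluster-related processes once a minute while
# geo-replication runs, to see which one keeps growing.
# psutil is a third-party module (pip install psutil); the process names
# below are assumptions, adjust them to what actually runs on the node.
import time
import psutil

WATCH = ("glusterfsd", "glusterfs", "gsyncd")   # bricks, aux mounts, geo-rep worker
INTERVAL = 60                                    # seconds

while True:
    stamp = time.strftime("%H:%M:%S")
    for proc in psutil.process_iter(["name", "memory_info"]):
        name = proc.info["name"] or ""
        mem = proc.info["memory_info"]
        if mem is None or not any(w in name for w in WATCH):
            continue
        rss_mib = mem.rss / (1024 * 1024)
        print(f"{stamp} pid={proc.pid} {name:<12} rss={rss_mib:.0f} MiB")
    time.sleep(INTERVAL)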