I am continuing a thread from March last year; please
see those previous postings for the background.
I am having the same problem again, but this time I have
found the cause and a way to fix it. It looks to me like
a bug, though I can't be sure.
I have a live mail spool on a replica 3 volume. It has …
>> replicate-0: performing entry selfheal on
>> 94aefa13-9828-49e5-9bac-6f70453c100f
> Does this gfid correspond to the same directory path as last time?
No, it's one of the "two unrelated directories" that
I mentioned in a previous post. Both directories
exist on the volume mount and contain …
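
(In case it helps anyone retracing this: a gfid like the one above can
usually be mapped back to its path with the aux-gfid-mount method from
the Gluster docs. A rough sketch; the mount point /mnt/gfid is made up,
node01 and gv0 are the names used elsewhere in this thread:

  mount -t glusterfs -o aux-gfid-mount node01:/gv0 /mnt/gfid
  getfattr -n trusted.glusterfs.pathinfo -e text \
      /mnt/gfid/.gfid/94aefa13-9828-49e5-9bac-6f70453c100f

The pathinfo xattr then prints the backend brick paths holding that
gfid.)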
On 16/03/21 11:45 pm, Zenon Panoussis wrote:

Yes, if the dataset is small you can try rm -rf of the dir
from the mount (assuming no other application is accessing
them on the volume), launch heal once so that the heal info
becomes zero, and then copy it over again.
> Yes, if the dataset is small you can try rm -rf of the dir
> from the mount (assuming no other application is accessing
> them on the volume), launch heal once so that the heal info
> becomes zero, and then copy it over again.

I did approximately so; the rm -rf took its sweet time and the number …
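
(Spelled out, that procedure amounts to roughly the following; gv0 and
the Maildir path are from this thread, while /mnt/gv0 and /backup are
only stand-ins:

  rm -rf /mnt/gv0/Maildir/.Sent/cur    # on the fuse mount, never directly on a brick
  gluster volume heal gv0              # trigger a heal run
  gluster volume heal gv0 info         # repeat until every brick reports 0 entries
  cp -a /backup/Maildir/.Sent/cur /mnt/gv0/Maildir/.Sent/
)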
On 15/03/21 7:39 pm, Zenon Panoussis wrote:
I don't know how to interpret this, but it surely looks as if
Maildir/.Sent/cur needs to be healed on all three bricks. That
can't be right; logically it doesn't make sense, because if
not even one brick has the data of an object, that object should …
> Hmm, then the client4_0_mkdir_cbk failures in the glustershd.log
> must be for a parallel heal of a directory which contains subdirs.
Running volume heal info gives the following results:
node01:
3 gfids and one named directory, namely Maildir/.Sent/cur.
Running gfid2dirname.sh on the 3 gfids …
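
(For anyone without the script at hand, the same lookup can be done by
hand on a brick: each gfid is linked under .glusterfs by its first two
pairs of hex digits. The brick path below is made up; the gfid is the
one quoted earlier in the thread:

  ls -l /data/brick1/gv0/.glusterfs/94/ae/94aefa13-9828-49e5-9bac-6f70453c100f

For a directory this is a symlink whose target reveals the parent gfid
and the directory's name.)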
On 15/03/21 5:11 pm, Zenon Panoussis wrote:
> Indeed, enabling granular was only possible when there were
> 0 files to heal. Re-disabling it, however, did not impose this
> limitation.

Ah yes, this is expected behavior, because even if we disable it there
should be enough information to do the entry heal …
> - Was this an upgraded setup or a fresh v9.0 install?

It was a fresh install of the 8.3 CentOS RPMs, upgraded to 9.0.
I enabled granular after the upgrade.

> - When there are entries yet to be healed, the CLI should
> have prevented you from toggling this option - was that not
> the case?

Indeed, …
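
(For the record, the toggle in question; if I remember the syntax
right, newer releases refuse the enable while heals are pending, which
is the check being discussed. gv0 is the volume name from this thread:

  gluster volume heal gv0 info summary                # should be clean first
  gluster volume heal gv0 granular-entry-heal enable
)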
On 15/03/21 3:39 pm, Zenon Panoussis wrote:
Does anyone know what healing error 22 "invalid argument" is
and how to fix it, or at least how to troubleshoot it?

  while true; do
    date
    gluster volume heal gv0 statistics heal-count
    echo -e "--\n"
    sleep 297
  done

Fri Mar 12 14:58:36 CET 2021
Gathering count of entries to be healed …
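
(On the error 22 question: 22 is plain EINVAL, so the message alone
says little; the self-heal daemon log usually has the failing operation
next to it. Assuming the default log location, something like:

  grep -i 'invalid argument' /var/log/glusterfs/glustershd.log | tail -n 20
)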