Just to confirm I've got this correct: I'll move the directory with the different gfid on the Arbiter brick somewhere else, then touch this directory on another brick (the software is not sensitive to atime updates). I guess the healing should then take place automatically?
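If so, I'm assuming the mechanics are roughly the following. This is only a sketch of my understanding: the volume name (myvol), mount point (/mnt/myvol) and backup location below are placeholders, not my real ones.

# On the arbiter node: move the mismatched directory off the brick
mv /path_on_brick/subdir1/subdir2 /root/heal-backup-subdir2

# From a FUSE mount of the volume: touch the directory so the next
# lookup notices the missing entry on the arbiter and queues a heal
touch /mnt/myvol/subdir1/subdir2

# Kick off a heal and watch the pending entries drain
gluster volume heal myvol
gluster volume heal myvol info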
Thanks
David

On Thu, 23 Feb 2023 at 11:01, Strahil Nikolov <[email protected]> wrote:

> Move away the file located on the arbiter brick as it has a different gfid,
> and touch it (only if the software that consumes it is NOT sensitive to
> atime modification).
>
> Best Regards,
> Strahil Nikolov
>
> On Wed, Feb 22, 2023 at 13:09, David Dolan <[email protected]> wrote:
> Hi Strahil,
>
> The output in my previous email showed the directory the file is located
> in with a different GFID on the Arbiter node compared with the bricks on
> the other nodes.
>
> Based on that, do you know what my next step should be?
>
> Thanks
> David
>
> On Wed, 15 Feb 2023 at 09:21, David Dolan <[email protected]> wrote:
>
> Sorry, I didn't receive the previous email.
> I've run the command on all 3 nodes (bricks). See below. The directory
> only has one file.
> On the Arbiter, the file doesn't exist, and the directory the file should
> be in has a different GFID than the bricks on the other nodes.
>
> Node 1 Brick
> getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2/file
> trusted.gfid=0x7b1aa40dd1e64b7b8aac7fc6bcbc9e9b
> getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2
> trusted.gfid=0xdc99ac0db85d4b1c8a6af57a71bbe22c
> getfattr -d -m . -e hex /path_on_brick/subdir1
> trusted.gfid=0x2aa1fe9e65094e6188fc91a6d16dd2c4
>
> Node 2 Brick
> getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2/file
> trusted.gfid=0x7b1aa40dd1e64b7b8aac7fc6bcbc9e9b
> getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2
> trusted.gfid=0xdc99ac0db85d4b1c8a6af57a71bbe22c
> getfattr -d -m . -e hex /path_on_brick/subdir1
> trusted.gfid=0x2aa1fe9e65094e6188fc91a6d16dd2c4
>
> Node 3 Brick (Arbiter)
> Path to file doesn't exist
> getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2
> trusted.gfid=0x51cca97ac2974ceb9322fe21e6f8ea91
> getfattr -d -m . -e hex /path_on_brick/subdir1
> trusted.gfid=0x2aa1fe9e65094e6188fc91a6d16dd2c4
>
> Thanks
> David
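(A quick aside for anyone hitting the same thing: rather than running getfattr by hand on each node, a loop like the one below would pull those gfids side by side. The node names and brick path are placeholders, and it assumes passwordless ssh to the storage nodes:

for node in node1 node2 node3; do
    echo "== $node =="
    # trusted.gfid is only visible on the brick, not via the FUSE mount
    ssh "$node" "getfattr -h -d -m trusted.gfid -e hex /path_on_brick/subdir1/subdir2"
done
)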
> On Tue, 14 Feb 2023 at 20:38, Strahil Nikolov <[email protected]> wrote:
>
> I guess you didn't receive my last e-mail.
> Use getfattr and identify if the gfids mismatch. If yes, move away the
> mismatched one.
> In order for a dir to heal, you have to fix all files inside it before it
> can be healed.
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 14 February 2023 at 14:04:31 GMT+2, David Dolan <[email protected]> wrote:
>
> I've touched the directory one level above the directory with the I/O
> issue, as the one above that is the one showing as dirty.
> It hasn't healed. Should the self-heal daemon automatically kick in here?
>
> Is there anything else I can do?
>
> Thanks
> David
>
> On Tue, 14 Feb 2023 at 07:03, Strahil Nikolov <[email protected]> wrote:
>
> You can always mount it locally on any of the gluster nodes.
>
> Best Regards,
> Strahil Nikolov
>
> On Mon, Feb 13, 2023 at 18:13, David Dolan <[email protected]> wrote:
> Hi Strahil,
>
> Thanks for that. It's the first time I've been in this position, so I'm
> learning as I go along.
>
> Unfortunately I can't go into the directory on the client side, as I get
> an input/output error:
>
> Input/output error
> d????????? ? ? ? ? ? 01
>
> Thanks
> David
>
> On Sun, 12 Feb 2023 at 20:29, Strahil Nikolov <[email protected]> wrote:
>
> Setting blame on client-1 and client-2 will make a bigger mess.
> Can't you touch the affected file from the FUSE mount point?
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Feb 7, 2023 at 14:42, David Dolan <[email protected]> wrote:
> Hi All,
>
> Hoping you can help me with a healing problem. I have one file which
> didn't self-heal.
> It looks to be a problem with a directory in the path, as one node says
> it's dirty. I have a replica volume with arbiter.
> This is what the 3 nodes say, one brick on each:
>
> Node1
> getfattr -d -m . -e hex /path/to/dir | grep afr
> getfattr: Removing leading '/' from absolute path names
> trusted.afr.volume-client-2=0x000000000000000000000001
> trusted.afr.dirty=0x000000000000000000000000
>
> Node2
> getfattr -d -m . -e hex /path/to/dir | grep afr
> getfattr: Removing leading '/' from absolute path names
> trusted.afr.volume-client-2=0x000000000000000000000001
> trusted.afr.dirty=0x000000000000000000000000
>
> Node3 (Arbiter)
> getfattr -d -m . -e hex /path/to/dir | grep afr
> getfattr: Removing leading '/' from absolute path names
> trusted.afr.dirty=0x000000000000000000000001
>
> Since Node3 (the arbiter) sees it as dirty, and it looks like Node 1 and
> Node 2 have good copies, I was thinking of running the following on Node1,
> which I believe would tell Node 2 and Node 3 to sync from Node 1.
> I'd then kick off a heal on the volume.
>
> setfattr -n trusted.afr.volume-client-1 -v 0x000000010000000000000000 /path/to/dir
> setfattr -n trusted.afr.volume-client-2 -v 0x000000010000000000000000 /path/to/dir
>
> client-0 is node 1, client-1 is node 2 and client-2 is node 3. I've
> verified the hard links with gfid are in the xattrop directory.
> Is this the correct way to heal and resolve the issue?
>
> Thanks
> David
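(For my own notes, and happy to be corrected: my understanding is that each trusted.afr value above is three big-endian 32-bit pending-operation counters, so the arbiter's value breaks down as

trusted.afr.dirty = 0x 00000000 00000000 00000001
                       data     metadata entry

i.e. one pending entry operation on the directory, which matches an entry inside it needing heal.)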
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
