I have the same question. Do you have an excessively high --entry-timeout parameter on your FUSE mount? In any case, a "Structure needs cleaning" error should not surface up to FUSE, so this is still a bug.
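As a workaround sketch while the bug is investigated, client-side metadata caching can be effectively disabled at mount time. This is a hypothetical example only: SRV-1 and testvolume are taken from the volume info quoted below, /mnt/sharedfs is a placeholder mount point, and the entry-timeout/attribute-timeout options are assumed to be supported by your mount.glusterfs version:

```shell
# Remount the client with FUSE dentry and attribute caching set to zero,
# so renames and deletes done on other nodes are re-resolved immediately
# instead of being served from a stale cache.
umount /mnt/sharedfs
mount -t glusterfs \
      -o entry-timeout=0,attribute-timeout=0 \
      SRV-1:/testvolume /mnt/sharedfs
```

Note this trades away metadata-caching performance for correctness, so it is only a stopgap, not a fix.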
On Thu, Dec 12, 2013 at 12:46 PM, Maik Kulbe <i...@linux-web-development.de> wrote:

> How do you mount your client? FUSE? I had similar problems when playing
> around with the timeout options for the FUSE mount. If they are too high,
> they cache the metadata for too long. When you move the file, the inode
> should stay the same, and on the second node the path should stay in cache
> for a while, so it still knows the inode for that moved file's old path and
> can thus act on the file without knowing its path.
>
> The problems kick in when you delete a file and recreate it - the cache
> tries to access the old inode, which was deleted, thus throwing errors. If
> I recall correctly, "structure needs cleaning" is one of two error
> messages I got, depending on which of the timeout mount options was set to
> a higher value.
>
> -----Original Mail-----
> From: Johan Huysmans [johan.huysm...@inuits.be]
> Sent: 12.12.13 - 14:51:35
> To: gluster-users@gluster.org [gluster-users@gluster.org]
> Subject: Re: [Gluster-users] Structure needs cleaning on some files
>
>> I created a bug for this issue:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1041109
>>
>> gr.
>> Johan
>>
>> On 10-12-13 12:52, Johan Huysmans wrote:
>>
>> Hi All,
>>
>> It seems I can easily reproduce the problem:
>>
>> * on node 1, create a file (touch, cat, ...)
>> * on node 2, take the md5sum of the file directly (md5sum /path/to/file)
>> * on node 1, move the file to another name (mv file file1)
>> * on node 2, take the md5sum of the old path (md5sum /path/to/file);
>>   this still works although the file is no longer there
>> * on node 1, change the file content
>> * on node 2, take the md5sum of the old path again (md5sum /path/to/file);
>>   this still works and shows the changed md5sum
>>
>> This is really strange behaviour.
>> Is this normal? Can it be altered with a setting?
>>
>> Thanks for any info,
>> gr.
>> Johan
>>
>> On 10-12-13 10:02, Johan Huysmans wrote:
>>
>> I could reproduce this problem while my mount point is running in
>> debug mode. The logfile is attached.
>>
>> gr.
>> Johan Huysmans
>>
>> On 10-12-13 09:30, Johan Huysmans wrote:
>>
>> Hi All,
>>
>> When reading some files we get this error:
>>
>> md5sum: /path/to/file.xml: Structure needs cleaning
>>
>> In /var/log/glusterfs/mnt-sharedfs.log we see these errors:
>>
>> [2013-12-10 08:07:32.256910] W
>> [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0:
>> remote operation failed: No such file or directory
>> [2013-12-10 08:07:32.257436] W
>> [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-1:
>> remote operation failed: No such file or directory
>> [2013-12-10 08:07:32.259356] W [fuse-bridge.c:705:fuse_attr_cbk]
>> 0-glusterfs-fuse: 8230: STAT() /path/to/file.xml => -1 (Structure
>> needs cleaning)
>>
>> We are using gluster 3.4.1-3 on CentOS 6.
>> Our servers are 64-bit, our clients 32-bit (we are already using
>> --enable-ino32 on the mount point).
>>
>> This is my gluster configuration:
>>
>> Volume Name: testvolume
>> Type: Replicate
>> Volume ID: ca9c2f87-5d5b-4439-ac32-b7c138916df7
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: SRV-1:/gluster/brick1
>> Brick2: SRV-2:/gluster/brick2
>> Options Reconfigured:
>> performance.force-readdirp: on
>> performance.stat-prefetch: off
>> network.ping-timeout: 5
>>
>> And this is how the applications work:
>> We have two client nodes which both have a fuse.glusterfs mount point.
>> On one client node we have an application which writes files.
>> On the other client node we have an application which reads these files.
>> On the node where the files are written we don't see any problem and can
>> read those files without issues.
>> On the other node we have problems (error messages above) reading
>> those files.
>> The problem occurs when we run md5sum on the exact file; when we run
>> md5sum on all files in that directory, there is no problem.
>>
>> How can we solve this problem? It is quite annoying.
>> The problem occurs after some time (it can take days); an umount and
>> mount of the mount point solves it for some days.
>> Once it occurs (and we don't remount), it occurs every time.
>>
>> I hope someone can help me with this problem.
>>
>> Thanks,
>> Johan Huysmans
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
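The reproduction steps quoted in the thread can be sketched as a script. This is only an illustration: a local directory (a placeholder, not a real GlusterFS mount) stands in for the shared mount point, so on any correct filesystem the second md5sum must fail with "No such file or directory" - the reported bug is precisely that a client with a stale FUSE cache still returns a checksum for the old path:

```shell
#!/bin/sh
# Repro sketch: create, checksum, rename, then checksum the old path.
# DIR is a stand-in for the shared mount point (e.g. /mnt/sharedfs).
set -eu
DIR=${1:-/tmp/repro-dir}
mkdir -p "$DIR"

echo "content v1" > "$DIR/file"   # node 1: create the file
md5sum "$DIR/file"                # node 2: checksum via path -> succeeds

mv "$DIR/file" "$DIR/file1"       # node 1: rename the file

# node 2: checksum the old path again; correct behaviour is ENOENT.
if md5sum "$DIR/file" 2>/dev/null; then
    echo "BUG: stale cache served the renamed file's old path"
else
    echo "OK: old path correctly returns ENOENT"
fi
```

On a buggy client the rename (and even later content changes) stays invisible, which matches the thread's observation that the old path keeps returning an up-to-date md5sum.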