Nicolas Williams wrote:
> On Thu, Jan 04, 2007 at 12:04:14PM +0200, Benny Halevy wrote:
>> I agree that the way the client implements its cache is out of the protocol
>> scope. But how do you interpret "correct behavior" in section 4.2.1?
>> "Clients MUST use filehandle comparisons only to improve performance, not
>> for correct behavior. All clients need to be prepared for situations in
>> which it cannot be determined whether two filehandles denote the same object
>> and in such cases, avoid making invalid assumptions which might cause
>> incorrect behavior."
>> Don't you consider data corruption due to cache inconsistency an incorrect
>> behavior?
>
> If a file with multiple hardlinks appears to have multiple distinct
> filehandles then a client like Trond's will treat it as multiple
> distinct files (with the same hardlink count, and you won't be able to
> find the other links to them -- oh well). Can this cause data
> corruption? Yes, but only if there are applications that rely on the
> different file names referencing the same file, and backup apps on the
> client won't get the hardlinks right either.
The case I'm discussing is multiple filehandles for the same name, not
even for different hardlinks. This causes spurious EIO errors on the
client when the filehandle changes, and cache inconsistency when the
file is opened multiple times in parallel.

> What I don't understand is why getting the fileid is so hard -- always
> GETATTR when you GETFH and you'll be fine. I'm guessing that's not as
> difficult as it is to maintain a hash table of fileids.

It's not difficult at all; it's just that the client can't rely on the
fileids being unique in both space and time, because of server
non-compliance (e.g. netapp's snapshots) and fileid reuse after delete.