> as i understand the current implementation, it isnt quite clear to me
> what the use case would be in the current situation. client a has a
> file open. client b deletes it. the fileserver continues to keep that
> file around so client a can work with it (read/write) as if the file is
> still there.
Yes, that is normally the behaviour on a local UNIX file system if you
think of the same scenario but with process a and process b on the same
client.

> when client a is done, it closes the file and the file goes
> away. to what purpose?

Just before client a is done, process a on client a writes the contents
of the file to a _new_ file descriptor under a new name. Not unheard of.
If I remember correctly, one of the hard bugs in Arla was to make it
detect whether the file was incomplete and then fail instead of writing
crap into the file.

> it would be "better" to just return i/o errors immediately for client a.
> it would give you some indication that someone else is modifying the
> file to which you believe you have exclusive access.

The problem is that process b on client b is not modifying the file
contents but only taking away a reference. This can only be prevented
if process a on client a, which has the file open, increases some
reference count on the server so that the file is kept around in spite
of process b deleting it. Unless that happens, all scenarios are broken
in one way or another.

Now, should we think of how to make a test case that shows what happens,
or do you already have one?

Harald.
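[Editorial note: the following is a minimal, purely local sketch of the
scenario being discussed, assuming plain POSIX semantics on a local file
system; it is not AFS code, and the file names are made up. It shows
process a holding a file open, "process b" unlinking it, and process a
continuing to use the open descriptor and finally saving the contents
under a new name, as described above.]

/*
 * Local-FS sketch (not AFS code): the open descriptor survives the
 * unlink, and the data can still be copied out under a new name.
 * The interesting question in this thread is what an AFS client
 * should do when the unlink happens on a different client.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
    const char *oldname = "scratch.dat";    /* made-up names */
    const char *newname = "scratch.saved";
    char buf[4096];
    ssize_t n;

    /* process a: create the file and keep it open */
    int fd = open(oldname, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }
    if (write(fd, "important data\n", 15) != 15) { perror("write"); exit(1); }

    /* process b (simulated locally): remove the only name of the file */
    if (unlink(oldname) < 0) { perror("unlink"); exit(1); }

    /* process a: the open descriptor still works after the unlink */
    if (lseek(fd, 0, SEEK_SET) < 0) { perror("lseek"); exit(1); }

    /* just before it is done, process a writes the contents to a new
     * file descriptor under a new name */
    int out = open(newname, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("open new"); exit(1); }
    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        if (write(out, buf, (size_t)n) != n) { perror("copy"); exit(1); }
    }
    if (n < 0) { perror("read"); exit(1); }

    close(out);
    close(fd);    /* last reference goes away; now the old file is really gone */
    printf("contents saved to %s\n", newname);
    return 0;
}

Run locally, this succeeds and prints the new name. A test case for the
behaviour under discussion would do the same steps but with the file on
AFS and the unlink issued from a second client.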
