On Tue, 16 Apr 2013 14:27:12 -0400 Derrick Brashear <sha...@gmail.com> wrote:
> > contents for caching purposes anyway. In option (2) you can have a
> > collision by just removing a file and creating one. Maybe those
> > aren't _so_ different, but that's my impression.
>
> It's pretty easy to avoid the condition you mention in option 2, but
> it does mean additional "consumption" of the uniq space: on a remove,
> make sure the next uniq we'd allocate is not close to our current
> value, potentially by using a large increment if we are close. But I'm
> not sure it's worth that.

I don't think that really makes it any better. If you increment the
nextuniq to 2000 on a remove, you might have just incremented it to the
same uniq as a file we deleted 2 deletes ago (or 5, or 3000). It may
make it slightly less likely, but it's just raising the number of things
that need to coincidentally happen from 2 to 3.

-- 
Andrew Deason
adea...@sinenomine.net
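[Editor's sketch, not from the thread: a toy model of the quoted "large
increment on a remove" proposal. All names (UniqAllocator, SKIP, MOD)
are invented for illustration; this is not the OpenAFS fileserver code.
It shows why the tweak helps the immediate remove+create case but, once
the uniq counter wraps, a jump can still land on a uniq freed several
deletes earlier, which is the objection above.]

```python
MOD = 1 << 32   # assumed uniq width for this sketch; uniqs wrap here
SKIP = 2000     # assumed "large increment", echoing the 2000 in the reply

class UniqAllocator:
    """Toy per-volume uniquifier allocator (hypothetical model)."""

    def __init__(self, start=1):
        self.next_uniq = start

    def create(self):
        # Hand out the next uniq and advance the counter (with wrap).
        u = self.next_uniq
        self.next_uniq = (self.next_uniq + 1) % MOD
        return u

    def remove(self, removed_uniq):
        # Proposed tweak: if the next uniq we'd allocate is close to the
        # one just freed, jump well past it so an immediate
        # remove-then-create doesn't reuse a nearby value.
        if (self.next_uniq - removed_uniq) % MOD < SKIP:
            self.next_uniq = (removed_uniq + SKIP) % MOD

# The case the tweak fixes: quick remove + create no longer collides.
alloc = UniqAllocator()
a = alloc.create()      # uniq 1
alloc.remove(a)         # counter jumps to 2001
b = alloc.create()      # uniq 2001, far from the freed uniq 1

# The case it doesn't fix: after the counter wraps, the jump lands back
# in a region where old (freed) uniqs may still sit in client caches.
late = UniqAllocator(start=MOD - 1)
c = late.create()       # uniq MOD-1, counter wraps to 0
late.remove(c)          # jump: (MOD-1 + 2000) % MOD == 1999
d = late.create()       # uniq 1999 -- possibly a uniq deleted long ago
```

The jump only rules out reusing the uniq freed by *this* remove; it
cannot know which other freed uniqs a client still remembers, so a
collision now just needs one more coincidence, not zero.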