On 12/27/2013 12:02 AM, Jonathan Nieder wrote:
> Michael Haggerty wrote:
>> It could be that some other process is trying to clean up empty
>> directories at the same time that safe_create_leading_directories() is
>> attempting to create them. In this case, it could happen that
>> directory a/b was present at the end of one iteration of the loop
>> (either it was already present or we just created it ourselves), but
>> by the time we try to create directory a/b/c, directory a/b has
>> been deleted. In fact, directory a might also have been deleted.
> When does this happen in practice? Is this about git racing with
> itself or with some other program?
I think it could be triggered by a reference creation racing with a
reference packing. See below.
> I fear that the aggressive directory creator fighting the aggressive
> directory remover might be waging a losing battle.
That may be so, but it would be strange for a directory remover to be
aggressive. And even if it were, the worst consequence would be that
the directory creator would try three times before giving up.
> Is this about a push that creates a ref racing against a push that
> deletes a ref from the same hierarchy?
The race could be triggered in this scenario, but I don't think it
would result in a spurious error (at least not if there are only two
racers...). The reason is that empty loose reference directories are
not deleted when a reference is *deleted*, but rather when a new
d/f-conflicting reference is *created*. E.g., if

    git branch foo/bar
    git branch -d foo/bar   # this leaves directory foo behind

    # this removes directory foo and creates file foo:
    git branch foo
    git branch foo/baz
The last two commands could fight. However, in this scenario one of the
reference creations would ultimately have to fail anyway, so this patch
doesn't really help.
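
Concretely, the removal happens on the creation side, along these
lines (a rough sketch only, not git's actual refs code; the helper
name is made up, and a single rmdir() stands in for the real, more
thorough cleanup):

    #include <errno.h>
    #include <unistd.h>

    /*
     * Hypothetical helper (sketch only): before the loose ref file
     * "refs/heads/foo" can be written, a leftover empty directory
     * with the same name has to be removed. It is this removal,
     * done while *creating* the d/f-conflicting ref, that can race
     * with another process creating something under that directory.
     */
    static int clear_conflicting_dir(const char *path)
    {
        if (rmdir(path) == 0 || errno == ENOENT)
            return 0;   /* the directory is gone (or never existed) */
        return -1;      /* non-empty, or some other problem */
    }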
However, when packing references, the directories that used to hold
the old references are deleted right away. So

    git branch foo/bar
    git pack-refs --all
    git branch foo/baz

Here, the last two commands could fight.
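
To make the failure mode concrete, here is a rough sketch
(illustrative only, not git's actual implementation; the function
name is made up) of a component-by-component walk like the one
safe_create_leading_directories() performs, with the race window
marked:

    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /*
     * Illustrative sketch only: create each leading directory of
     * "path" in turn. Returns 0 on success, -1 on failure.
     */
    static int create_leading_dirs(char *path)
    {
        char *slash = path;

        while ((slash = strchr(slash + 1, '/')) != NULL) {
            struct stat st;

            *slash = '\0';  /* temporarily truncate to this component */
            if (stat(path, &st) == 0 && S_ISDIR(st.st_mode)) {
                /* this component already exists; nothing to do */
            } else if (mkdir(path, 0777) < 0 && errno != EEXIST) {
                /*
                 * RACE WINDOW: the parent component existed (or was
                 * created by us) in the previous iteration, but a
                 * concurrent "git pack-refs" may have rmdir'd it in
                 * the meantime, so this mkdir() fails with ENOENT.
                 */
                *slash = '/';
                return -1;
            }
            *slash = '/';   /* restore the full path */
        }
        return 0;
    }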
>> So, if a call to mkdir() fails with ENOENT, then try checking/making
>> all directories again from the beginning. Attempt up to three times
>> before giving up.
> If we are racing against a ref deletion, then we can retry while our
> rival keeps walking up the directory tree and deleting parent
> directories. As soon as we successfully create a directory, we have
> won the race.
>
> But what happens if the entire safe_create_leading_directories
> operation succeeds and *then* our racing partner deletes the
> directory? No one is putting in a file to reserve the directory for
> the directory creator.
>
> If we care enough to retry more than once, I fear this is retrying at
> the wrong level.
I realize that this change doesn't solve the whole problem. But you
make a good point: if the caller is going to retry anyway, then there
is no need to retry within this function. It would be sufficient for
this function to return a specific error value indicating that
creating the directory failed but that there is a chance of success
if the caller tries again.
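
For example (purely a sketch; this enum and its names are
hypothetical here, not an existing API), the return value could
distinguish "a directory vanished out from under us, retrying might
help" from a hard failure:

    /*
     * Hypothetical return values for
     * safe_create_leading_directories() (sketch only):
     */
    enum scld_result {
        SCLD_OK = 0,    /* all leading directories now exist */
        SCLD_FAILED,    /* hard failure: permissions, something
                         * non-directory in the way, etc. */
        SCLD_VANISHED   /* a directory disappeared while we were
                         * working; retrying might succeed */
    };

    /* Inside the mkdir() loop the distinction would be, roughly:
     *
     *      if (mkdir(path, 0777) < 0 && errno != EEXIST)
     *              return errno == ENOENT ? SCLD_VANISHED : SCLD_FAILED;
     */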
On the other hand, your argument assumes that all callers really *do*
retry, which isn't the case, and I doubt that all callers are going to
be fixed. So there might be some value in retrying within this
function as well (it's a game of averages we're playing here anyway).
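
A bounded retry at this level could look roughly like this (again
just a sketch, reusing the hypothetical names from above;
create_leading_dirs_once stands for a single-pass walk like the one
sketched earlier):

    /* Repeated from the sketch above so that this one stands alone. */
    enum scld_result { SCLD_OK = 0, SCLD_FAILED, SCLD_VANISHED };

    /* Hypothetical single-pass walk over the leading directories. */
    enum scld_result create_leading_dirs_once(char *path);

    enum scld_result create_leading_dirs_retry(char *path)
    {
        int attempts = 3;

        while (attempts--) {
            enum scld_result r = create_leading_dirs_once(path);
            if (r != SCLD_VANISHED)
                return r;   /* success, or a hard failure */
            /* a directory vanished under us; start the walk over */
        }
        /* still losing the race after a few tries; let the caller decide */
        return SCLD_VANISHED;
    }

Even with the internal retry, the caller still sees the "vanished"
result in the worst case, so the two approaches are not mutually
exclusive.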
I'll think some more about it.
> Tests?
I can't think of how to test this short of either instrumenting the code
(which I did for my own tests, but didn't include the test code in this
submission) or running the test within some kind of malicious virtual
filesystem. Ideas?
Michael
--
Michael Haggerty
mhag...@alum.mit.edu
http://softwareswirl.blogspot.com/