Stefan Beller <sbel...@google.com> writes:

> The problem comes from guessing the number of fds we're allowed to use.
> At first I thought it was a fundamental issue with the code being broken, but
> it turns out we just need a larger offset as we apparently have 9 files open
> already, before the transaction even starts.
> I did not expect the number to be that high, which is why I came up
> with the arbitrary offset of 8 (3 for stdin/stdout/stderr, plus maybe
> packed-refs and the reflog, so I guessed 8 would do fine).
>
> I am not yet sure whether the 9 is a constant or whether it scales
> with some property of the repository.
> So to make the series work, all we need is:
>
> - int remaining_fds = get_max_fd_limit() - 8;
> + int remaining_fds = get_max_fd_limit() - 9;
>
> I am going to try to understand where the 9 comes from and resend the patches.
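One way to sidestep the guessing entirely would be to measure how many
descriptors are actually open when the transaction starts, instead of
hard-coding 8 or 9.  A minimal sketch (the helper names here are
hypothetical, not git API):

```c
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

/* Hypothetical helper: count the descriptors actually open right
 * now, by probing each candidate fd with fcntl(F_GETFD).  A probe
 * that fails with EBADF means the fd is not open. */
static int count_open_fds(void)
{
	long max = sysconf(_SC_OPEN_MAX);
	int fd, count = 0;

	if (max < 0)
		max = 1024; /* conservative fallback */
	for (fd = 0; fd < max; fd++)
		if (fcntl(fd, F_GETFD) != -1 || errno != EBADF)
			count++;
	return count;
}

/* Instead of "get_max_fd_limit() - 9", subtract what is in use. */
static long remaining_fds(void)
{
	return sysconf(_SC_OPEN_MAX) - count_open_fds();
}
```

This trades a one-time probing loop at transaction start for not having
to know where the 9 comes from.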

I have a suspicion that the above is an indication that the approach
is fundamentally not sound.  9 may be OK in your test repository,
but it may fail in a repository with a different resource usage
pattern.

On the memory management side, xmalloc() and friends retry upon
failure, after attempting to free resources.  I wonder if your
codepath could do something similar, perhaps?
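That retry-on-failure pattern, carried over to descriptors, could look
roughly like this — a sketch only, assuming a hypothetical
close_some_cached_fds() hook that the transaction code would provide
(stubbed out below):

```c
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <sys/stat.h>

/* Hypothetical hook: release descriptors the transaction is caching
 * (e.g. lock files held open) so a retry can succeed.  Returns
 * nonzero if anything was freed.  Stubbed out in this sketch. */
static int close_some_cached_fds(void)
{
	return 0; /* real code would close some fds and return 1 */
}

/* open() wrapper in the spirit of xmalloc(): if we hit the fd limit
 * (EMFILE/ENFILE, "too many open files"), try to free descriptors
 * and retry once, instead of budgeting fds up front. */
static int xopen_retry(const char *path, int flags, mode_t mode)
{
	int fd = open(path, flags, mode);

	if (fd < 0 && (errno == EMFILE || errno == ENFILE) &&
	    close_some_cached_fds())
		fd = open(path, flags, mode);
	return fd;
}
```

The appeal is the same as with xmalloc(): the code reacts to actual
resource exhaustion rather than predicting it.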

On the other hand, it may be that this "let's keep the file open as
long as possible, because creat-close-open-write-close is more
expensive" optimization is not worth the complexity.  It might be
better to start with a simpler rule, e.g. "use creat-write-close for
ref updates outside transactions, and creat-close-open-write-close
inside transactions, as those are likely to be multi-ref updates" --
something stupid and simple like that?
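The two rules could be sketched like this (paths, function names, and
lock-file details are illustrative, not git's actual implementation):

```c
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <sys/stat.h>

/* Outside a transaction: a single ref update, so just hold the fd
 * and write immediately (creat-write-close). */
static int write_ref_simple(const char *lock, const char *sha1)
{
	int fd = open(lock, O_WRONLY | O_CREAT | O_EXCL, 0666);

	if (fd < 0)
		return -1;
	if (write(fd, sha1, strlen(sha1)) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}

/* Inside a multi-ref transaction: create the lock file to reserve
 * it, but close it right away so N refs never need N open fds... */
static int reserve_ref(const char *lock)
{
	int fd = open(lock, O_WRONLY | O_CREAT | O_EXCL, 0666);

	if (fd < 0)
		return -1;
	return close(fd); /* fd released until commit time */
}

/* ...then reopen and write when the transaction commits
 * (creat-close-open-write-close). */
static int commit_ref(const char *lock, const char *sha1)
{
	int fd = open(lock, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, sha1, strlen(sha1)) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}
```

The transaction path pays an extra open() per ref, but its peak fd
usage stays constant regardless of how many refs are updated.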

Michael?


--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
