On Tue, Oct 3, 2017 at 2:45 AM, Christian Couder
<christian.cou...@gmail.com> wrote:
> Yeah, some people need the faster solution, but my opinion is that
> many other people would prefer the single-shot protocol.
> If all you want to do is a simple resumable clone using bundles, for
> example, then the long-running process solution is very much overkill.
>
> For example, people are using filters to do keyword expansion (maybe
> to emulate the way Subversion and CVS substitute keywords like $Id$,
> $Author$ and so on). It would be really bad to deprecate the
> single-shot filters and tell those people they now have to use
> long-running processes just because we don't want to maintain the
> small amount of code that makes single-shot filters work.
>
> The Microsoft GVFS use case is just one use case, and it is very far
> from what most people need. My opinion is that many more people could
> benefit from the single-shot protocol. For example, many people and
> admins could benefit from resumable clones using bundles, and if I
> remove the single-shot protocol, this use case will be unnecessarily
> harder to implement, in the same way that keyword expansion would be
> unnecessarily harder to implement if we removed the single-shot
> filters.
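
For concreteness, the single-shot filters referred to above are the
plain clean/smudge commands that Git runs once per file, with the blob
content on stdin and the filtered result expected on stdout. A minimal
keyword-expansion setup of that kind could look like the following
(the "kw-smudge" and "kw-clean" scripts are hypothetical stand-ins):

    # .gitattributes
    *.c filter=kwexpand

    # .git/config: Git spawns the command once per file
    [filter "kwexpand"]
        smudge = kw-smudge %f
        clean = kw-clean %f

The long-running equivalent would instead set a single
filter.<driver>.process command speaking the pkt-line protocol, which
is the extra machinery such a simple case would rather avoid.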

The idea that some users will prefer writing to the single-shot
protocol is reasonable to me, but I think that providing a contrib/
Perl script that wraps something that speaks the single-shot protocol
is sufficient. This results in less C code and a better separation of
concerns (I prefer one exit point and one adapter over two exit points).
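
To sketch what I mean by such a wrapper (in shell rather than Perl
just to keep it short; the request format and the "odb-helper-oneshot"
command below are made up for illustration, since the real
long-running protocol is exactly what is being designed here): the
adapter stays alive, reads requests from Git on stdin, and invokes the
single-shot helper once per request.

    #!/bin/sh
    # Hypothetical adapter: reads one request per line from Git and
    # shells out to a single-shot helper for each one. The helper is
    # assumed to fetch the object into the object store and to signal
    # success or failure through its exit code.
    while read -r command oid
    do
            case "$command" in
            get)
                    if odb-helper-oneshot get "$oid"
                    then
                            echo "ok $oid"
                    else
                            echo "missing $oid"
                    fi
                    ;;
            *)
                    echo "unsupported $command"
                    ;;
            esac
    done

That way Git itself only has to know about the one long-running
interface, and anyone who prefers writing single-shot helpers can run
them behind the adapter.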

> I agree that your patch set already includes some infrastructure that
> could be used by my work, and it perhaps implements some of this
> infrastructure better than my work does (I haven't taken a deep
> look). But I really think that the right approach is to focus first
> on designing a flexible protocol between Git and external stores. The
> infrastructure work should then be about improving or enabling that
> flexible protocol and the communication between Git and external
> stores.
>
> Doing infrastructure work first, and improving things on top of that
> new infrastructure without first designing the protocol between Git
> and external stores, is not the best approach: I think we might
> over-engineer some of the infrastructure, or base some user
> interfaces on the infrastructure work rather than on the end goal.
>
> For example, if we improve the current protocol, which is not
> necessarily a bad thing in itself, we might forget that for resumable
> clone it is much better to just let external stores and helpers
> handle the transfer.
>
> I am not saying that doing infrastructure work is bad or will not
> let us reach our goals in the end, but I see it as something that can
> distract or mislead us from focusing first on the protocol between
> Git and external stores.

I think that the infrastructure really needs to be considered when
designing the protocol. In particular, we had to consider the needs of
the connectivity check in fsck and the repacking in GC when designing
what the promisor remote (or ODB, in this case) needs to tell us and
what, if any, postprocessing needs to be done. In the end, I settled
on tracking which objects came from the promisor remote and which did
not, which works in my design (which I have tried to ensure fits both
our use case and Microsoft's). But that design won't work in what I
understand to be the ODB case, because (at least) (i) you can have
multiple ODBs, and (ii) Git does not have direct access to the objects
stored within them. So some more design needs to be done.
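
(To make the "tracking" part concrete: one way to do such tracking is
to mark each packfile obtained from the promisor remote with a
companion file, for example

    $ ls .git/objects/pack/
    pack-1234abcd.idx
    pack-1234abcd.pack
    pack-1234abcd.promisor

where the ".promisor" suffix is just illustrative, so that fsck and
repack can treat an object that is missing, but is referred to only by
objects in such marked packs, as expected rather than as corruption.)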
