On 07/20, Jeff Hostetler wrote:
> 
> 
> On 7/18/2018 1:15 PM, Brandon Williams wrote:
> > On 07/18, Stefan Beller wrote:
> > > On Wed, Jul 18, 2018 at 6:31 AM Derrick Stolee <sto...@gmail.com> wrote:
> > > > 
> > > > On 7/17/2018 7:25 PM, Stefan Beller wrote:
> > > > > On Tue, Jul 17, 2018 at 2:09 PM Brandon Williams
> > > > > <bmw...@google.com> wrote:
> > > > > > Signed-off-by: Brandon Williams <bmw...@google.com>
> > > > > > ---
> > > > > > 
> > > > > > Since introducing protocol v2 and enabling fetch I've been thinking
> > > > > > about what its inverse 'push' would look like.  After talking with a
> > > > > > number of people I have a longish list of things that could be done
> > > > > > to improve push and I think I've been able to distill the core
> > > > > > features we want in push v2.
> > > > > It would be nice to know which things you want to improve.
> > > > 
> > > > Hopefully we can also get others to chime in with things they don't like
> > > > about the existing protocol. What pain points exist, and what can we do
> > > > to improve at the transport layer before considering new functionality?
> > > 
> > > Another thing that I realized last night was the possibility of
> > > chunking requests.  The web of today is driven by lots of small
> > > http(s) requests.  I know our server team fights with the internal
> > > tools all the time because the communication involved in git-fetch
> > > is usually a large http request (large packfile).  So it would be
> > > nice to have the possibility of chunking the request.  But I think
> > > that can be added as a capability? (Not sure how)
> > 
> > Fetch and push requests/responses are already "chunked" when using the
> > http transport.  So I'm not sure what you mean by adding a capability
> > because the protocol doesn't care about which transport you're using.
> > This is of course unless you're talking about a different "chunking"
> > from what it means to chunk an http request/response.
> > 
> 
> Internally, we've talked about wanting to have resumable pushes and
> fetches.  I realize this is difficult to do when the server is
> replicated and the repeated request might be talking to a different
> server instance.  And there's a problem with temp files littering the
> server as it waits for the repeated attempt.  But still, the packfile
> sent/received can be large and connections do get dropped.
> 
> That is all assuming we send 1 large packfile and just use a
> byte-range-like approach to resuming the transfer.
> 
> Alternatively, we could allow the request to send a series of
> packfiles, with each "chunk" being self-contained and usable.  Then if
> a push connection was dropped, the server could apply the successfully
> received packfile(s) (add the received objects and update the refs to
> the commits received so far), ignore the interrupted and unreceived
> packfile(s), and let the client retry later.  When/if the client
> retried the push, it would renegotiate haves/wants and send a new
> series of packfile(s), the assumption being that the server would have
> updated refs from the earlier aborted push, so the packfile(s) computed
> for the second attempt would not repeat the content successfully
> transmitted in the first attempt.
> 
> This would require that the client build an ordered set of packfiles
> from oldest to newest so that the server can apply them in-order and
> the graph remain connected.  That may be outside your scope here.
> 
> Also, we might have to add a few messages to the protocol after the
> negotiation, for the client to say that it is going to send the push
> content in 'n' packfiles, along with 'n' messages giving the
> intermediate ref values to be updated by each packfile.
> 
> Just thinking out loud here.
> Jeff

We've talked about working on resumable fetch/push (both of which are
out of the scope of this work), but we haven't started working on
anything just yet.

There are a couple of different ways to do this, like you've pointed
out.  One option is to have the server redirect the client to fetch from
a CDN (where it has put the packfile), and then the client can use
ranged requests to fetch until the server decides to remove it from the
CDN.  This can be tricky because every fetch can produce a unique
packfile, so maybe you don't want to put a freshly constructed, unique
packfile up on a CDN somewhere for each client request.

Breaking up a response into multiple packfiles and small ref-updates
could also work; that way, as long as some of the smaller packs/updates
are applied, the client is making headway towards being up to date with
the server.
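
To make that more concrete, here's roughly how a client could slice its
outgoing commits into an ordered series of self-contained packs plus the
intermediate ref values, using existing plumbing.  This is only a sketch
of the client side under a bunch of assumptions (the chunk size, the
'push-chunk' prefix, and the slicing policy are all made up; it says
nothing about what the actual wire format would look like):

    import subprocess

    def commits_to_push(remote_tip, local_tip):
        # Commits the server doesn't have yet, oldest first, so each
        # slice keeps the history connected when applied in order.
        out = subprocess.run(
            ["git", "rev-list", "--reverse",
             "%s..%s" % (remote_tip, local_tip)],
            capture_output=True, text=True, check=True)
        return out.stdout.split()

    def build_slice_pack(base, tip):
        # One self-contained pack: everything reachable from 'tip' but
        # not from 'base'.  pack-objects writes push-chunk-<hash>.pack
        # and prints the hash on stdout.
        out = subprocess.run(
            ["git", "pack-objects", "--revs", "push-chunk"],
            input="%s\n^%s\n" % (tip, base),
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def make_push_chunks(remote_tip, local_tip, chunk_size=100):
        commits = commits_to_push(remote_tip, local_tip)
        chunks = []
        base = remote_tip
        # Every chunk_size-th commit (and always the newest one) becomes
        # an intermediate ref value the server could fast-forward to.
        for i in range(chunk_size - 1, len(commits) + chunk_size - 1,
                       chunk_size):
            tip = commits[min(i, len(commits) - 1)]
            chunks.append((build_slice_pack(base, tip), tip))
            base = tip
        return chunks  # list of (pack hash, intermediate ref value)

Each (pack, tip) pair is something the server could in principle apply
and fast-forward to on its own, which is what would make a retry after a
dropped connection cheap.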

-- 
Brandon Williams
