From: "Junio C Hamano"
"Philip Oakley" writes:
From: "Junio C Hamano"
If you clone a repository, and the connection drops, the next attempt
will have to start from scratch. This can add significant time and
expense if you're on
"Philip Oakley" writes:
> From: "Junio C Hamano"
>>
>>> If you clone a repository, and the connection drops, the next attempt
>>> will have to start from scratch. This can add significant time and
>>> expense if you're on a low-bandwidth or metered
From: "Junio C Hamano"
Sent: Wednesday, March 02, 2016 8:41 AM
Josh Triplett writes:
If you clone a repository, and the connection drops, the next attempt
will have to start from scratch. This can add significant time and
expense if you're on a

Josh Triplett <j...@joshtriplett.org> writes:
> That does help in the case of cloning torvalds/linux.git from
> kernel.org, and I'd love to see it used transparently.
>
> However, even with that, I still also see value in a resumable git clone
> (or git pull) for many other r

> [...]sibly other forms
> of "CDN offload" material) transparently used by "git clone" was the
> proposal by Shawn Pearce mentioned elsewhere in this thread.

That does help in the case of cloning torvalds/linux.git from
kernel.org, and I'd love to see it used transparently.

However,

On Wed, Mar 02, 2016 at 12:31:16AM -0800, Junio C Hamano wrote:
> Josh Triplett writes:
> > I think several simpler optimizations seem
> > preferable, such as binary object names, and abbreviating complete
> > object sets ("I have these commits/trees and everything they

On Wed, Mar 02, 2016 at 03:22:17PM +0700, Duy Nguyen wrote:
> On Wed, Mar 2, 2016 at 3:13 PM, Josh Triplett wrote:
> > On Wed, Mar 02, 2016 at 02:30:24AM +, Al Viro wrote:
> >> On Tue, Mar 01, 2016 at 05:40:28PM -0800, Stefan Beller wrote:
> >>
> >> > So throwing away

On Wed, Mar 02, 2016 at 12:41:20AM -0800, Junio C Hamano wrote:
> Josh Triplett writes:
>
> > If you clone a repository, and the connection drops, the next attempt
> > will have to start from scratch. This can add significant time and
> > expense if you're on a

On 3/2/16 2:02 PM, Jeff King wrote:
> On Wed, Mar 02, 2016 at 03:22:17PM +0700, Duy Nguyen wrote:
> As a simple proposal, the server could send the list of hashes (in
> approximately the same order it would send the pack), the client could
> send back a bitmap where '0' means "send it" and '1' means "got that one
> already"
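
The hash-list-plus-bitmap idea above could be sketched roughly as follows. This is an illustration only, not git's actual wire protocol; the function names and abbreviated object names are hypothetical:

```python
# Sketch of the proposed resume negotiation: the server offers object
# names in pack order, the client answers with one character per name
# ('0' = send it, '1' = got that one already), and the server then
# transfers only the objects still marked '0'.

def client_bitmap(offered_hashes, have):
    """Build the client's reply bitmap over the server's offer."""
    return ''.join('1' if h in have else '0' for h in offered_hashes)

def server_filter(offered_hashes, bitmap):
    """Objects the server still needs to send."""
    return [h for h, bit in zip(offered_hashes, bitmap) if bit == '0']

# Hypothetical example: the client salvaged two objects from the
# truncated pack of a failed clone.
offered = ['aa11', 'bb22', 'cc33', 'dd44']   # pack order (abbreviated)
have = {'bb22', 'dd44'}

bits = client_bitmap(offered, have)
print(bits)                                  # 0101
print(server_filter(offered, bits))          # ['aa11', 'cc33']
```

Sending the offer in pack order keeps the bitmap cheap for both sides: one sequential pass each, no seeking.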

On Wed, Mar 2, 2016 at 3:31 PM, Junio C Hamano wrote:
> Josh Triplett writes:
>
>> I don't think it's worth the trouble and ambiguity to send abbreviated
>> object names over the wire.
>
> Yup. My unscientific experiment was to show that the list would

Josh Triplett writes:
> If you clone a repository, and the connection drops, the next attempt
> will have to start from scratch. This can add significant time and
> expense if you're on a low-bandwidth or metered connection trying to
> clone something like Linux.

For

On Wed, Mar 02, 2016 at 03:22:17PM +0700, Duy Nguyen wrote:
> > As a simple proposal, the server could send the list of hashes (in
> > approximately the same order it would send the pack), the client could
> > send back a bitmap where '0' means "send it" and '1' means "got that one
> > already",

Josh Triplett writes:
> I don't think it's worth the trouble and ambiguity to send abbreviated
> object names over the wire.

Yup. My unscientific experiment was to show that the list would be
far smaller than the actual transfer and between full binary and
full textual

On Wed, Mar 2, 2016 at 3:13 PM, Josh Triplett wrote:
> On Wed, Mar 02, 2016 at 02:30:24AM +, Al Viro wrote:
>> On Tue, Mar 01, 2016 at 05:40:28PM -0800, Stefan Beller wrote:
>>
>> > So throwing away half finished stuff while keeping the front load?
>>
>> Throw away the

On Wed, Mar 2, 2016 at 9:30 AM, Al Viro wrote:
> IIRC, the objection had been that the organisation of the pack will lead
> to many cases when deltas are transferred *first*, with base object not
> getting there prior to disconnect. I suspect that fraction of the objects

On Wed, Mar 02, 2016 at 02:30:24AM +, Al Viro wrote:
> On Tue, Mar 01, 2016 at 05:40:28PM -0800, Stefan Beller wrote:
>
> > So throwing away half finished stuff while keeping the front load?
>
> Throw away the object that got truncated and ones for which delta chain
> doesn't resolve

On Wed, Mar 02, 2016 at 02:37:53PM +0700, Duy Nguyen wrote:
> On Wed, Mar 2, 2016 at 1:31 PM, Junio C Hamano wrote:
> > Al Viro writes:
> >
> >> FWIW, I wasn't proposing to recreate the remaining bits of that _pack_;
> >> just do the normal pull with

On Wed, Mar 2, 2016 at 2:37 PM, Duy Nguyen wrote:
>> So in order to salvage some transfer out of 2.4MB, the hypothetical
>> Al protocol would first have the upload-pack give 20*1396 = 28kB
>
> It could be 10*1396 or less

Oops somehow I read previous mails as client sends
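
The arithmetic in the exchange above is just the object-name list size: 1396 objects at 20 bytes per binary SHA-1, versus the 10-byte abbreviated names Duy suggests. A quick check:

```python
# List-size overhead for the hypothetical "Al protocol" negotiation:
# one name per object in the interrupted 2.4MB transfer.
objects = 1396
full_binary = 20 * objects        # full binary SHA-1 names
abbreviated = 10 * objects        # 10-byte abbreviated names

print(full_binary)                # 27920 bytes, i.e. the ~28kB figure
print(abbreviated)                # 13960 bytes, roughly half
```

Either way the negotiation costs about 1% of the 2.4MB transfer it tries to salvage.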

On Wed, Mar 2, 2016 at 1:31 PM, Junio C Hamano wrote:
> Al Viro writes:
>
>> FWIW, I wasn't proposing to recreate the remaining bits of that _pack_;
>> just do the normal pull with one addition: start with sending the list
>> of sha1 of objects you are

Al Viro writes:
> FWIW, I wasn't proposing to recreate the remaining bits of that _pack_;
> just do the normal pull with one addition: start with sending the list
> of sha1 of objects you are about to send and let the recipient reply
> with "I already have [...], don't bother

On Tue, Mar 01, 2016 at 05:40:28PM -0800, Stefan Beller wrote:
> So throwing away half finished stuff while keeping the front load?

Throw away the object that got truncated and ones for which delta chain
doesn't resolve entirely in the transferred part.

> > indexing the objects it
> >
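
Al's salvage rule (drop the object cut off by the disconnect, and drop any delta whose base chain doesn't resolve within the received part) can be sketched over a toy model of a pack. This is a simplification under stated assumptions: entries are (name, base) pairs, base is None for a non-delta object, and bases are assumed to precede their deltas, which, as noted elsewhere in the thread, real pack ordering does not guarantee:

```python
# Toy salvage pass over a truncated pack download.

def salvage(entries, truncated_last=True):
    """Return the set of usable objects from a partial pack."""
    if truncated_last and entries:
        entries = entries[:-1]          # throw away the truncated object
    kept = set()
    for name, base in entries:
        # A delta is usable only if its whole base chain resolved
        # within what was received.
        if base is None or base in kept:
            kept.add(name)
    return kept

pack = [('A', None), ('B', 'A'), ('C', 'X'), ('D', 'B'), ('E', None)]
print(sorted(salvage(pack)))
# ['A', 'B', 'D'] -- C's base 'X' never arrived; E was truncated
```

Objects that fail the check are simply re-requested on the next attempt, so the rule errs toward safety rather than completeness.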

On Wed, Mar 2, 2016 at 8:30 AM, Josh Triplett wrote:
> If you clone a repository, and the connection drops, the next attempt
> will have to start from scratch. This can add significant time and
> expense if you're on a low-bandwidth or metered connection trying to
> clone

+ Duy, who tried resumable clone a few days/weeks ago

On Tue, Mar 1, 2016 at 5:30 PM, Josh Triplett wrote:
> If you clone a repository, and the connection drops, the next attempt
> will have to start from scratch. This can add significant time and
> expense if you're on a

If you clone a repository, and the connection drops, the next attempt
will have to start from scratch. This can add significant time and
expense if you're on a low-bandwidth or metered connection trying to
clone something like Linux.

Would it be possible to make git clone resumable after a