Stefan Beller writes:
> So I started looking into extending the buffer size as another 'first step'
> towards protocol version 2 again. But now I think the packet length
> limit of 64k is actually a good and useful thing to have and should be
> extended/fixed if and only if we run into seriou
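For context, the 64k limit Stefan mentions comes from pkt-line framing: every packet starts with a 4-hex-digit length prefix that counts itself, and Git additionally caps the total packet size (LARGE_PACKET_MAX is 65520 bytes in the C implementation). A rough Python sketch of that framing, for illustration only:

```python
MAX_PKT = 65520            # total packet cap, matching git's LARGE_PACKET_MAX
MAX_PAYLOAD = MAX_PKT - 4  # 4 bytes are consumed by the hex length prefix

def pkt_line(payload: bytes) -> bytes:
    """Frame one payload as a pkt-line: 4 hex digits of total length, then data."""
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("payload exceeds the pkt-line limit")
    return b"%04x" % (len(payload) + 4) + payload

FLUSH_PKT = b"0000"  # a zero length is the flush-pkt marker, not an empty payload
```

So the real ceiling on a single payload is 65516 bytes, which is what any "extend the buffer size" change would have to raise.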
On Tue, Mar 3, 2015 at 9:13 AM, Junio C Hamano wrote:
> Duy Nguyen writes:
>
>> Junio pointed out in private that I didn't address the packet length
>> limit (64k). I thought I could get away with a new capability
>> (i.e. not worry about it now) but I finally admit that was a bad
>> hack. So per
On Wed, Mar 4, 2015 at 5:03 PM, Stefan Beller wrote:
>
> If anyone wants to experiment with the data I gathered, I can make them
> available.
>
All the `ls-remote` data, including the gathering script, can be found at
(112 kB .tar.xz)
https://drive.google.com/file/d/0B7E93UKgFAfjcHRvM1N2YjBfTzA/view?us
On Wed, Mar 4, 2015 at 11:10 AM, Shawn Pearce wrote:
> On Wed, Mar 4, 2015 at 4:05 AM, Duy Nguyen wrote:
>> On Wed, Mar 4, 2015 at 11:27 AM, Shawn Pearce wrote:
>>> Let me go on a different tangent a bit from the current protocol.
>>>
>>> http://www.grpc.io/ was recently released and is built on
On Wed, Mar 4, 2015 at 4:05 AM, Duy Nguyen wrote:
> On Wed, Mar 4, 2015 at 11:27 AM, Shawn Pearce wrote:
>> Let me go on a different tangent a bit from the current protocol.
>>
>> http://www.grpc.io/ was recently released and is built on the HTTP/2
>> standard. It uses protobuf as a proven extens
On Wed, Mar 4, 2015 at 11:27 AM, Shawn Pearce wrote:
> Let me go on a different tangent a bit from the current protocol.
>
> http://www.grpc.io/ was recently released and is built on the HTTP/2
> standard. It uses protobuf as a proven extensibility mechanism.
> Including a full C based grpc stack
On Tue, Mar 3, 2015 at 5:54 PM, Duy Nguyen wrote:
> On Wed, Mar 4, 2015 at 12:13 AM, Junio C Hamano wrote:
>> My recollection is that the consensus from the last time we
>> discussed protocol revamping was to list one capability per packet
>> so that packet length limit does not matter, but you m
On Wed, Mar 4, 2015 at 12:13 AM, Junio C Hamano wrote:
> My recollection is that the consensus from the last time we
> discussed protocol revamping was to list one capability per packet
> so that packet length limit does not matter, but you may want to
> check with the list archive yourself.
I co
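The "one capability per packet" idea Junio recalls could look roughly like the sketch below. This is not what git emits today (in v1 the capability list rides on the first advertised ref line); it only illustrates why per-packet listing sidesteps the length limit:

```python
def advertise_capabilities(caps):
    """Emit each capability in its own pkt-line, so no single packet
    can approach the 64k limit no matter how many capabilities exist."""
    out = b""
    for cap in caps:
        payload = cap + b"\n"
        out += b"%04x" % (len(payload) + 4) + payload
    return out + b"0000"  # flush-pkt terminates the list
```

Each pkt-line then only needs to fit one capability name, so the overall list can grow without bound.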
Junio C Hamano writes:
> Duy Nguyen writes:
>
>> Junio pointed out in private that I didn't address the packet length
>> limit (64k). I thought I could get away with a new capability
>> (i.e. not worry about it now) but I finally admit that was a bad
>> hack. So perhaps this on top.
>
> No, I di
Duy Nguyen writes:
> Junio pointed out in private that I didn't address the packet length
> limit (64k). I thought I could get away with a new capability
> (i.e. not worry about it now) but I finally admit that was a bad
> hack. So perhaps this on top.
No, I didn't ;-) but I tend to agree that "
On Mon, Mar 02, 2015 at 04:21:36PM +0700, Duy Nguyen wrote:
> On Sun, Mar 01, 2015 at 07:47:40PM -0800, Junio C Hamano wrote:
> > It seems, however, that our current thinking is that it is OK to do
> > the "allow new v1 clients to notice the availability of v2 servers,
> > so that they can talk v2 t
On Sun, Mar 01, 2015 at 11:06:21PM -, Philip Oakley wrote:
> OK, maybe not exactly about protocol, but a possible option would be the
> ability to send the data as a bundle or multi-bundles, or perhaps as an
> archive, zip, or tar.
>
> Data can then be exchanged across an airgap or pigeon m
On Mon, Mar 02, 2015 at 04:21:36PM +0700, Duy Nguyen wrote:
> On Sun, Mar 01, 2015 at 07:47:40PM -0800, Junio C Hamano wrote:
> > It seems, however, that our current thinking is that it is OK to do
> > the "allow new v1 clients to notice the availability of v2 servers,
> > so that they can talk v2 t
On Sun, Mar 01, 2015 at 07:47:40PM -0800, Junio C Hamano wrote:
> It seems, however, that our current thinking is that it is OK to do
> the "allow new v1 clients to notice the availability of v2 servers,
> so that they can talk v2 the next time" thing, so my preference is
> to throw this "client fir
Stefan Beller writes:
> A race condition may be a serious objection then? Once people believe the
> refs can scale fairly well they will use it, which means blasting the ref
> advertisement will become much worse over time.
I think we are already in agreement about that case:
A misdetected
David Lang writes:
> how would these approaches be affected by a client that is pulling
> from different remotes into one local repository? For example, pulling
> from the main kernel repo and from the -stable repo.
>
> David Lang
As I said in $gmane/264000, which the above came from:
Note
On Sun, 1 Mar 2015, Junio C Hamano wrote:
and if the only time your refs/remotes/origin/* hierarchy changes is
when you fetch from there (which should be the norm), you can look
into remote.origin.fetch refspec (to learn that "refs/heads*" is
what you are asking) and your refs/remotes/origin/* r
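The refspec Junio refers to is the default one `git clone` writes into the repository config, along these lines (the URL is a placeholder):

```ini
[remote "origin"]
	url = https://example.com/repo.git
	fetch = +refs/heads/*:refs/remotes/origin/*
```

The left-hand side names which remote refs are fetched (`refs/heads/*`), the right-hand side where they are tracked locally, and the leading `+` permits non-fast-forward updates of the tracking refs.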
On Sun, 1 Mar 2015, Stefan Beller wrote:
The way I understand Junio here is to have predefined points which
make it easier to communicate. There are lots of clients, and they usually
want to catch up on a different number of commits, so we need to recompute it
all the time. The idea is then to comp
Duy Nguyen writes:
> On Sun, Mar 1, 2015 at 3:41 PM, Junio C Hamano wrote:
>>> - Because the protocol exchange starts by the server side
>>>advertising all its refs, even when the fetcher is interested in
>>>a single ref, the initial overhead is nontrivial, especially when
>>>you ar
From: "Junio C Hamano"
I earlier said:
So if we are going to discuss a new protocol, I'd prefer to see the
discussion without worrying too much about how to inter-operate
with the current vintage of Git. It is no longer an interesting
problem,
as we know how to solve it with minimum risk. In
On Sun, Mar 1, 2015 at 3:32 AM, Duy Nguyen wrote:
> On Sun, Mar 1, 2015 at 3:41 PM, Junio C Hamano wrote:
>>> - Because the protocol exchange starts by the server side
>>>advertising all its refs, even when the fetcher is interested in
>>>a single ref, the initial overhead is nontrivial,
On Sun, Mar 1, 2015 at 3:41 PM, Junio C Hamano wrote:
>> - Because the protocol exchange starts by the server side
>>advertising all its refs, even when the fetcher is interested in
>>a single ref, the initial overhead is nontrivial, especially when
>>you are doing a small incremental
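The overhead is easy to estimate: in v1 the server sends one pkt-line per ref (`<object-id> SP <refname> LF`) before the client has said anything. A back-of-the-envelope calculator (sizes approximate; it ignores the capability suffix carried on the first line):

```python
def advertisement_bytes(refnames, hash_len=40):
    # per ref: 4-byte pkt-line length prefix + hex object id + space
    #          + refname + newline
    return sum(4 + hash_len + 1 + len(name) + 1 for name in refnames)

# e.g. a server with 100,000 refs averaging 50-character names would ship
# roughly 9.6 MB of advertisement before the client can even say "want".
```

That fixed up-front cost is the same whether the client wants one ref or all of them, which is the asymmetry being complained about here.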
I earlier said:
> So if we are going to discuss a new protocol, I'd prefer to see the
> discussion without worrying too much about how to inter-operate
> with the current vintage of Git. It is no longer an interesting problem,
> as we know how to solve it with minimum risk. Instead, I'd like to
>
On Fri, Feb 27, 2015 at 4:33 PM, Junio C Hamano wrote:
> On Fri, Feb 27, 2015 at 3:44 PM, Stefan Beller wrote:
>> On Fri, Feb 27, 2015 at 3:05 PM, Junio C Hamano wrote:
>>>
>>> I am _not_ proposing that we should go this route, at least not yet.
>>> I am merely pointing out that an in-place side
On Fri, Feb 27, 2015 at 3:44 PM, Stefan Beller wrote:
> On Fri, Feb 27, 2015 at 3:05 PM, Junio C Hamano wrote:
>>
>> I am _not_ proposing that we should go this route, at least not yet.
>> I am merely pointing out that an in-place sidegrade from v1 to a
>> protocol that avoids the megabyte-advert
On Fri, Feb 27, 2015 at 4:07 PM, Duy Nguyen wrote:
>
> There may be another hole, if we send "want ", it looks
> like it will go through without causing errors. It's not exactly a no-op
> because an empty tree object will be bundled in the result pack. But that
> makes no difference in practice. I didn't
On Sat, Feb 28, 2015 at 6:05 AM, Junio C Hamano wrote:
> Just for fun, I was trying to see if there is a hole in the current
> protocol that allows a new client to talk a valid v1 protocol
> exchange with existing, deployed servers without breaking, while
> letting it to know a new server that it
On Fri, Feb 27, 2015 at 3:05 PM, Junio C Hamano wrote:
> Junio C Hamano writes:
>
>> I do not think v1 can be fixed by "send one ref with capability,
>> newer client may respond immediately so we can stop enumerating
>> remaining refs and older one will get stuck so we can have a timeout
>> to se
Junio C Hamano writes:
> I do not think v1 can be fixed by "send one ref with capability,
> newer client may respond immediately so we can stop enumerating
> remaining refs and older one will get stuck so we can have a timeout
> to see if the connection is from the newer one, and send the rest
>
+git@vger.kernel.org
On Thu, Feb 26, 2015 at 5:42 PM, Duy Nguyen wrote:
> https://github.com/pclouds/git/commits/uploadpack2
I rebased your branch, changed the order of commits slightly and
started to add some.
They are found at https://github.com/stefanbeller/git/commits/uploadpack2
I think th
On Thu, Feb 26, 2015 at 12:13 PM, Junio C Hamano wrote:
>
> I agree with the value assessment of these patches 98%, but these
> bits can be taken as the "we have v2 server available for you on the
> side, by the way" hint you mentioned in the older thread, I think.
The patches are not well polished
On Thu, Feb 26, 2015 at 12:13 PM, Junio C Hamano wrote:
> Duy Nguyen writes:
>
>> Step 1 then should be identifying these wrongdoings and assumptions.
>>
>> We can really go wild with these capabilities. The only thing that
>> can't be changed is perhaps sending the first ref. I don't know
>> whe
Duy Nguyen writes:
> Step 1 then should be identifying these wrongdoings and assumptions.
>
> We can really go wild with these capabilities. The only thing that
> can't be changed is perhaps sending the first ref. I don't know
> whether we can accept a dummy first ref... After that point, you can
On Thu, Feb 26, 2015 at 2:15 AM, Duy Nguyen wrote:
> On Thu, Feb 26, 2015 at 2:31 PM, Stefan Beller wrote:
>> On Wed, Feb 25, 2015 at 10:04 AM, Junio C Hamano wrote:
>>> Duy Nguyen writes:
>>>
On Wed, Feb 25, 2015 at 6:37 AM, Stefan Beller wrote:
> I can understand that we maybe want
On Thu, Feb 26, 2015 at 2:31 PM, Stefan Beller wrote:
> On Wed, Feb 25, 2015 at 10:04 AM, Junio C Hamano wrote:
>> Duy Nguyen writes:
>>
>>> On Wed, Feb 25, 2015 at 6:37 AM, Stefan Beller wrote:
I can understand that we maybe want to just provide one generic
"version 2" of the protoc
On Wed, Feb 25, 2015 at 10:04 AM, Junio C Hamano wrote:
> Duy Nguyen writes:
>
>> On Wed, Feb 25, 2015 at 6:37 AM, Stefan Beller wrote:
>>> I can understand that we maybe want to just provide one generic
>>> "version 2" of the protocol which is an all-rounder not doing badly in
>>> all of these as
Duy Nguyen writes:
> On Wed, Feb 25, 2015 at 6:37 AM, Stefan Beller wrote:
>> I can understand that we maybe want to just provide one generic
>> "version 2" of the protocol which is an all-rounder not doing badly in
>> all of these aspects, but I can see use cases of having the desire to
>> replace
On Wed, Feb 25, 2015 at 6:37 AM, Stefan Beller wrote:
> I can understand that we maybe want to just provide one generic
> "version 2" of the protocol which is an all-rounder not doing badly in
> all of these aspects, but I can see use cases of having the desire to
> replace the wire protocol by your
On Mon, Feb 23, 2015 at 10:15 PM, Junio C Hamano wrote:
> On Mon, Feb 23, 2015 at 8:02 PM, Duy Nguyen wrote:
>>
>> It's very hard to keep backward compatibility if you want to stop the
>> initial ref advertisement, costly when there are lots of refs. But we
>> can let both protocols run in paral
On Mon, Feb 23, 2015 at 8:02 PM, Duy Nguyen wrote:
>
> It's very hard to keep backward compatibility if you want to stop the
> initial ref advertisement, costly when there are lots of refs. But we
> can let both protocols run in parallel, with the old one advertise the
> presence of the new one.
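One hedged sketch of how "the old one advertises the presence of the new one" could look on the client side: v1 appends a NUL-separated capability list to the first advertised ref, so an extra token there (the name "protocol-v2" below is invented for illustration) is invisible to old clients but lets a new client remember to try v2 on its next connection:

```python
def parse_first_ref(line: bytes):
    """Split a v1 first-ref advertisement line into (oid, refname, capabilities)."""
    ref_part, _, cap_blob = line.partition(b"\x00")
    oid, refname = ref_part.split(b" ", 1)
    caps = cap_blob.split(b" ") if cap_blob else []
    return oid, refname, caps

def server_hints_v2(caps) -> bool:
    # "protocol-v2" is a hypothetical capability name, not one git defines
    return b"protocol-v2" in caps
```

Old clients ignore unknown capabilities by design, which is exactly why this deployment path is backward compatible.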
On Mon, Feb 23, 2015 at 8:02 PM, Duy Nguyen wrote:
> On Tue, Feb 24, 2015 at 10:12 AM, Stefan Beller wrote:
>> One of the biggest problems of a new protocol would be deployment
>> as the users probably would not care too deeply. It should just
>> work in the sense that the user should not even se
On Tue, Feb 24, 2015 at 10:12 AM, Stefan Beller wrote:
> One of the biggest problems of a new protocol would be deployment
> as the users probably would not care too deeply. It should just
> work in the sense that the user should not even sense that the
> protocol changed.
Agreed.
> To do so we
Inspired by a discussion on the scaling of git over the last few days,
I thought about starting the adventure to teach git a new transport
protocol.
One of the biggest problems of a new protocol would be deployment
as the users probably would not care too deeply. It should just
work in the sense that th