Re: [PATCH 2/2] http: use credential API to handle proxy authentication

2015-11-05 Thread Jeff King
On Wed, Nov 04, 2015 at 10:13:25AM +0100, Knut Franke wrote:

> For consistency reasons, add parsing of http_proxy/https_proxy/all_proxy
> environment variables, which would otherwise be evaluated as a fallback by 
> curl.
> Without this, we would have different semantics for git configuration and
> environment variables.

I can't say I'm excited about this, but I don't think there's a better
way.

There was a series similar to yours in 2012, and it ran into the same
problems. There's sadly no good way to ask curl "what is the proxy you
ended up using?".

There was also some discussion with curl upstream of providing a new
authentication interface, where we would provide curl with
authentication callbacks, and it would trigger them if and when
credentials were needed. Somebody upstream was working on a patch, but I
don't think it ever got merged. :(

Here's a relevant bit from that old series (which doesn't seem threaded,
but you can search for the author if you want to see more):

  http://thread.gmane.org/gmane.comp.version-control.git/192246

I have a few comments/questions below.

> +
> + curl_easy_getinfo(slot->curl, CURLINFO_HTTP_CONNECTCODE,
> + &slot->results->http_connectcode);

It looks like you use this to see the remote side's HTTP 407 code.  In
the 2012 series, I think we simply looked for a 407 in the HTTP return
code (I assume that if we fail in the CONNECT, we can't get any other
HTTP code and so curl just returns the proxy code).

I don't have a proxy to test against, but would that work (it's entirely
possible the other series was simply wrong)?

If we do need CONNECTCODE, do we need to protect it with an #ifdef on
the curl version? The manpage says it came in 7.10.7, which was released
in 2003. That's probably old enough not to worry about.

> + if (proxy_auth.password) {
> + memset(proxy_auth.password, 0, strlen(proxy_auth.password));
> + free(proxy_auth.password);

My understanding is that memset() like this is not sufficient for
zero-ing sensitive data, as they can be optimized out by the compiler. I
don't think there's a portable alternative, though, so it may be the
best we can do. OTOH, the rest of git does not worry about such zero-ing
anyway, so we could also simply omit it here.

> + free((void *)curl_proxyuserpwd);

This cast is necessary because curl_proxyuserpwd is declared const. But
I do not see anywhere that it needs to be const (we detach a strbuf into
it). Can we simply change the declaration?

For that matter, it is not clear to me why this needs to be a global at
all. Once we hand the value to curl_easy_setopt, curl keeps its own
copy.

>   free((void *) http_proxy_authmethod);

This one unfortunately does need to remain const, as it is used with
git_config_string (though given the number of void casts necessary in
patch 1, it may be less painful to simply cast it in the call to
git_config_string()).

> @@ -994,6 +1060,8 @@ static int handle_curl_result(struct slot_results *results)
>  
>   if (results->curl_result == CURLE_OK) {
>   credential_approve(&http_auth);
> + if (proxy_auth.password)
> + credential_approve(&proxy_auth);
>   return HTTP_OK;

Approving on success. Makes sense. You can drop the conditional here;
credential_approve() is a noop if the password isn't set.

> @@ -1008,6 +1076,8 @@ static int handle_curl_result(struct slot_results *results)
>   return HTTP_REAUTH;
>   }
>   } else {
> + if (results->http_connectcode == 407)
> + credential_reject(&proxy_auth);

Rejecting on a 407 makes sense (though again, can we check
results->http_code?). But if we get a 407 and we _don't_ have a
password, shouldn't we then prompt for one, similar to what we do with a
401?

That will require some refactoring around http_request_reauth, though
(because now we might potentially retry twice: once to get past the
proxy auth, and once to get past the real site's auth).

You prompt unconditionally for the password earlier, but only if the
proxy URL contains a username. We used to do the same thing for regular
http, but people got annoyed that they had to specify half the
credential in the URL. Perhaps it would be less so with proxies (which
are changed a lot less), so I don't think making this work is an
absolute requirement.

-Peff
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v2] gitk: add -C commandline parameter to change path

2015-11-05 Thread Juha-Pekka Heikkila
This patch adds a -C (change working directory) parameter to
gitk. With this parameter, instead of needing to cd to the
directory containing the .git folder, one can point gitk at the
correct folder from the command line.

v2: Adjusted the parameter as per Eric's suggestion. I think
it now works in a similar manner to many GNU tools as well
as git itself.

Signed-off-by: Juha-Pekka Heikkila 
---
 Documentation/gitk.txt |  7 +++
 gitk-git/gitk  | 26 +-
 2 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/Documentation/gitk.txt b/Documentation/gitk.txt
index 6ade002..d194d9b 100644
--- a/Documentation/gitk.txt
+++ b/Documentation/gitk.txt
@@ -146,6 +146,13 @@ gitk-specific options
Select the specified commit after loading the graph.
Default behavior is equivalent to specifying '--select-commit=HEAD'.
 
+-C <path>::
+
+   Run as if gitk was started in '<path>' instead of the current
+   working directory. When multiple `-C` options are given, each
+   subsequent non-absolute `-C <path>` is interpreted relative to
+   the preceding `-C <path>`.
+
 Examples
 
 gitk v2.6.12.. include/scsi drivers/scsi::
diff --git a/gitk-git/gitk b/gitk-git/gitk
index fcc606e..606474a 100755
--- a/gitk-git/gitk
+++ b/gitk-git/gitk
@@ -12279,20 +12279,14 @@ setui $uicolor
 
 setoptions
 
-# check that we can find a .git directory somewhere...
-if {[catch {set gitdir [exec git rev-parse --git-dir]}]} {
-show_error {} . [mc "Cannot find a git repository here."]
-exit 1
-}
-
 set selecthead {}
 set selectheadid {}
 
 set revtreeargs {}
 set cmdline_files {}
-set i 0
 set revtreeargscmd {}
-foreach arg $argv {
+for {set i 0} {$i < [llength $argv]} {incr i} {
+   set arg [lindex $argv [expr {$i}]]
 switch -glob -- $arg {
"" { }
"--" {
@@ -12305,11 +12299,25 @@ foreach arg $argv {
"--argscmd=*" {
set revtreeargscmd [string range $arg 10 end]
}
+   "-C*" {
+   if {[string length $arg] < 3} {
+   incr i
+   cd [lindex $argv [expr {$i}]]
+   continue
+   } else {
+   cd [string range $arg 2 end]
+   }
+   }
default {
lappend revtreeargs $arg
}
 }
-incr i
+}
+
+# check that we can find a .git directory somewhere...
+if {[catch {set gitdir [exec git rev-parse --git-dir]}]} {
+show_error {} . [mc "Cannot find a git repository here."]
+exit 1
 }
 
 if {$selecthead eq "HEAD"} {
-- 
1.9.1



Re: [PATCH] test: accept death by SIGPIPE as a valid failure mode

2015-11-05 Thread Lars Schneider

> On 05 Nov 2015, at 08:47, Jeff King  wrote:
> 
> On Fri, Oct 30, 2015 at 02:22:14PM -0700, Junio C Hamano wrote:
> 
>> On a local host, the object/history transport code often talks over
>> pipe with the other side.  The other side may notice some (expected)
>> failure, send the error message either to our process or to the
>> standard error and hung up.  In such codepaths, if timing were not
>> unfortunate, our side would receive the report of (expected) failure
>> from the other side over the pipe and die().  Otherwise, our side
>> may still be trying to talk to it and would die with a SIGPIPE.
>> 
>> This was observed as an intermittent breakage in t5516 by a few
>> people.
>> 
>> In the real-life scenario, either mode of death exits with a
>> non-zero status, and the user would learn that the command failed.
>> The test_must_fail helper should also know that dying with SIGPIPE
>> is one of the valid failure modes when we are expecting the tested
>> operation to notice a problem and fail.
> 
> Sorry for the slow review; before commenting I wanted to dig into
> whether this SIGPIPE ambiguity was avoidable in the first place.
> 
> I think the answer is "probably not". We do call write_or_die() pretty
> consistently in the network-aware programs. So we could ignore SIGPIPE,
> and then we would catch EPIPE (of course, we convert that into SIGPIPE
> in many places, but we do not have to do so). But since the SIGPIPE
> behavior is global, that carries the risk of us failing to check a write
> against some other descriptor. It's probably not worth it.
> 
> Teaching the tests to handle both cases seems like a reasonable
> workaround. Changing test_must_fail covers a lot of cases; I wondered if
> there are other tests that would not want to silently cover up a SIGPIPE
> death. But I could not really think of a plausible reason.
> 
> So I think your patch is the best thing to do.
> 
> -Peff

Oh, I missed this email thread. I am still working on a stable Travis-CI 
integration and I ran into this issue a few times. I fixed it in my (not yet 
published) patch with an additional function "test_must_fail_or_sigpipe" that 
I've used for all tests affected by this issue. Modifying the "test_must_fail" 
function seemed too risky for me as I don't understand all possible 
implications. However, if you don't see a problem then this is fine with me.

- Lars


[PATCH] git-svn: improve rebase/mkdirs performance

2015-11-05 Thread Dair Grant
Processing empty_dir directives becomes extremely slow for svn
repositories with a large enough history.

This is due to using a single hash to store the list of empty
directories, with the expensive step being purging items from
that hash using grep+delete.

Storing directories in a hash of hashes improves the performance
of this purge step and removes a potentially lengthy delay after
every rebase/mkdirs command.

The svn repository with this behaviour has 110K commits with
unhandled.log containing 170K empty_dir directives.

This takes 10 minutes to process when using a single hash, vs
3 seconds with a hash of hashes.

Signed-off-by: Dair Grant 
---
 perl/Git/SVN.pm | 82 +++--
 1 file changed, 74 insertions(+), 8 deletions(-)

diff --git a/perl/Git/SVN.pm b/perl/Git/SVN.pm
index 152fb7e..ebbdd37 100644
--- a/perl/Git/SVN.pm
+++ b/perl/Git/SVN.pm
@@ -1211,20 +1211,85 @@ sub do_fetch {
 sub mkemptydirs {
my ($self, $r) = @_;
 
+   # add/remove/collect a paths table
+   #
+   # Paths are split into a tree of nodes, stored as a hash of hashes.
+   #
+   # Each node contains a 'path' entry for the path (if any) associated with
+   # that node and a 'children' entry for any nodes under that location.
+   #
+   # Removing a path requires a hash lookup for each component then dropping
+   # that node (and anything under it), which is substantially faster than a
+   # grep slice into a single hash of paths for large numbers of paths.
+   #
+   # For a large (200K) number of empty_dir directives this reduces scanning
+   # time to 3 seconds vs 10 minutes for grep+delete on a single hash of paths.
+   sub add_path {
+   my ($paths_table, $path) = @_;
+   my $node_ref = undef;
+
+   foreach my $x (split('/', $path)) {
+   if (!exists($paths_table->{$x})) {
+   $paths_table->{$x} = {};
+   $paths_table->{$x}{"children"} = {};
+   }
+
+   $node_ref= $paths_table->{$x};
+   $paths_table = $paths_table->{$x}{"children"};
+   }
+
+   $node_ref->{"path"} = $path;
+   }
+
+   sub remove_path {
+   my ($paths_table, $path) = @_;
+   my $nodes_ref = undef;
+   my $node_name = undef;
+
+   foreach my $x (split('/', $path)) {
+   if (!exists($paths_table->{$x})) {
+   return;
+   }
+
+   $nodes_ref = $paths_table;
+   $node_name = $x;
+
+   $paths_table = $paths_table->{$x}{"children"};
+   }
+   
+   delete($nodes_ref->{$node_name});
+   }
+
+   sub collect_paths {
+   my ($paths_table, $paths_ref) = @_;
+
+   foreach my $v (values %$paths_table) {
+   my $p = $v->{"path"};
+   my $c = $v->{"children"};
+
+   collect_paths($c, $paths_ref);
+   
+   if (defined($p)) {
+   push(@$paths_ref, $p);
+   }
+   }
+   }
+
sub scan {
-   my ($r, $empty_dirs, $line) = @_;
+   my ($r, $paths_table, $line) = @_;
if (defined $r && $line =~ /^r(\d+)$/) {
return 0 if $1 > $r;
} elsif ($line =~ /^  \+empty_dir: (.+)$/) {
-   $empty_dirs->{$1} = 1;
+   add_path($paths_table, $1);
} elsif ($line =~ /^  \-empty_dir: (.+)$/) {
-   my @d = grep {m[^\Q$1\E(/|$)]} (keys %$empty_dirs);
-   delete @$empty_dirs{@d};
+   remove_path($paths_table, $1);
}
1; # continue
};
 
-   my %empty_dirs = ();
+   my @empty_dirs  = ();
+   my %paths_table = ();
+
my $gz_file = "$self->{dir}/unhandled.log.gz";
if (-f $gz_file) {
if (!can_compress()) {
@@ -1235,7 +1300,7 @@ sub mkemptydirs {
die "Unable to open $gz_file: $!\n";
my $line;
while ($gz->gzreadline($line) > 0) {
-   scan($r, \%empty_dirs, $line) or last;
+   scan($r, \%paths_table, $line) or last;
}
$gz->gzclose;
}
@@ -1244,13 +1309,14 @@ sub mkemptydirs {
if (open my $fh, '<', "$self->{dir}/unhandled.log") {
binmode $fh or croak "binmode: $!";
while (<$fh>) {
-   scan($r, \%empty_dirs, $_) or last;
+   scan($r, \%paths_ta

Re: [PATCH 2/2] http: use credential API to handle proxy authentication

2015-11-05 Thread Knut Franke
On 2015-11-05 03:24, Jeff King wrote:
> There was also some discussion with curl upstream of providing a new
> authentication interface, where we would provide curl with
> authentication callbacks, and it would trigger them if and when
> credentials were needed. Somebody upstream was working on a patch, but I
> don't think it ever got merged. :(

That would certainly be nice, also with respect to other credentials, such as
SSL key passphrase (presuming that'd be possible without modifying the SSL lib
as well).

> Here's a relevant bit from that old series (which doesn't seem threaded,
> but you can search for the author if you want to see more):
> 
>   http://thread.gmane.org/gmane.comp.version-control.git/192246

My main takeaway from this, apart from the points you mention below, is that
it'd be good to have a test case, similar to t/lib-httpd.sh. Since none of the
existent proxy-related code has an automated test, I think this would be an
improvement on top of the other patches. I'd need to look into how easy/hard
this would be to implement.

> > +
> > +   curl_easy_getinfo(slot->curl, CURLINFO_HTTP_CONNECTCODE,
> > +   &slot->results->http_connectcode);
> 
> It looks like you use this to see the remote side's HTTP 407 code.  In
> the 2012 series, I think we simply looked for a 407 in the HTTP return
> code

I'm not sure why that worked for the author of the old series - possibly curl
semantics changed at some point. In my setup at least (with curl 7.15.5), after
a failed proxy authentication, CURLINFO_HTTP_CODE returns 0 while
CURLINFO_HTTP_CONNECTCODE returns the 407. This is also consistent with the curl
documentation for CURLINFO_RESPONSE_CODE (which has replaced CURLINFO_HTTP_CODE
in 7.10.7, though the compatibility #define is still there): "Note that a
proxy's CONNECT response should be read with CURLINFO_HTTP_CONNECTCODE and not
this."

> If we do need CONNECTCODE, do we need to protect it with an #ifdef on
> the curl version? The manpage says it came in 7.10.7, which was released
> in 2003. That's probably old enough not to worry about.

As Junio pointed out earlier, since some people still care about ancient curl
versions, we don't want to knowingly break compatibility. So yes, an #ifdef
would be in order here.

> 
> > +   if (proxy_auth.password) {
> > +   memset(proxy_auth.password, 0, strlen(proxy_auth.password));
> > +   free(proxy_auth.password);
> 
> My understanding is that memset() like this is not sufficient for
> zero-ing sensitive data, as they can be optimized out by the compiler. I
> don't think there's a portable alternative, though, so it may be the
> best we can do. OTOH, the rest of git does not worry about such zero-ing
> anyway, so we could also simply omit it here.

For what it's worth, that's the same as we do for cert_auth (while, as far as I
can see, no attempt is made for http_auth). I tend to think it's better than
nothing. Maybe an in-code comment stating it's not reliable would be in order,
to prevent the passing reader from putting too much trust in it.

> > +   free((void *)curl_proxyuserpwd);
> 
> This cast is necessary because curl_proxyuserpwd is declared const. But
> I do not see anywhere that it needs to be const (we detach a strbuf into
> it). Can we simply change the declaration?

Right.

> For that matter, it is not clear to me why this needs to be a global at
> all. Once we hand the value to curl_easy_setopt, curl keeps its own
> copy.

That's true only for relatively recent curl versions; before 7.17.0, strings
were not copied.

> > @@ -1008,6 +1076,8 @@ static int handle_curl_result(struct slot_results *results)
> > return HTTP_REAUTH;
> > }
> > } else {
> > +   if (results->http_connectcode == 407)
> > +   credential_reject(&proxy_auth);
> 
> Rejecting on a 407 makes sense (though again, can we check
> results->http_code?). But if we get a 407 and we _don't_ have a
> password, shouldn't we then prompt for one, similar to what we do with a
> 401?
> 
> That will require some refactoring around http_request_reauth, though
> (because now we might potentially retry twice: once to get past the
> proxy auth, and once to get past the real site's auth).

I think this would also require changes to post_rpc in remote-curl.c, which
apparently does something similar to http_request_reauth. Probably something
along the lines of adding a HTTP_PROXY_REAUTH return code, plus some refactoring
in order to prevent code duplication between the different code parts handling
(proxy) reauth. :-/

> You prompt unconditionally for the password earlier, but only if the
> proxy URL contains a username. We used to do the same thing for regular
> http, but people got annoyed that they had to specify half the
> credential in the URL. Perhaps it would be less so with proxies (which
> are changed a lot less), so I don't think making this work is an
> absolute requirement.

As far as I

[PATCH] contrib/subtree: remove "push" command from the "todo" file

2015-11-05 Thread Fabio Porcedda
Because the "push" command is already available, remove it from the
"todo" file.

Signed-off-by: Fabio Porcedda 
---
 contrib/subtree/todo | 2 --
 1 file changed, 2 deletions(-)

diff --git a/contrib/subtree/todo b/contrib/subtree/todo
index 7e44b00..0d0e777 100644
--- a/contrib/subtree/todo
+++ b/contrib/subtree/todo
@@ -12,8 +12,6 @@
exactly the right subtree structure, rather than using
subtree merge...)
 
-   add a 'push' subcommand to parallel 'pull'
-   
add a 'log' subcommand to see what's new in a subtree?
 
add to-submodule and from-submodule commands
-- 
2.6.2



race condition when pushing

2015-11-05 Thread Lyle Ziegelmiller

Hi

git push --set-upstream  has some sort of race condition. Sometimes when I 
execute it, it works. Other times, it does not. Below is from my command 
window. I've executed the exact same command (using bash history 
re-execution, so I know I didn't make a typo), repeatedly. Notice the last 
execution results in an error. I am the only person on my machine. This is 
non-deterministic behavior.


lylez@LJZ-DELLPC ~/gittest/local
$ git push --set-upstream origin localbranch1
Branch localbranch1 set up to track remote branch localbranch1 from origin.
Everything up-to-date

lylez@LJZ-DELLPC ~/gittest/local
$ git push --set-upstream origin localbranch1
Branch localbranch1 set up to track remote branch localbranch1 from origin.
Everything up-to-date

lylez@LJZ-DELLPC ~/gittest/local
$ git push --set-upstream origin localbranch1
error: could not commit config file .git/config
Branch localbranch1 set up to track remote branch localbranch1 from origin.
Everything up-to-date

I'm using Git in a Cygwin window on a 32-bit Windows 10 machine. Others have 
experienced this as well: 
http://stackoverflow.com/questions/18761284/git-error-could-not-commit-config-file


lylez@LJZ-DELLPC ~/gittest/local
$ git --version
git version 2.5.1


Regards,

Lyle Ziegelmiller





Re: [PATCH] test: accept death by SIGPIPE as a valid failure mode

2015-11-05 Thread Junio C Hamano
Lars Schneider  writes:

> Oh, I missed this email thread. I am still working on a stable
> Travis-CI integration and I ran into this issue a few times. I
> fixed it in my (not yet published) patch with an additional
> function "test_must_fail_or_sigpipe" that I've used for all tests
> affected by this issue. Modifying the "test_must_fail" function
> seemed too risky for me as I don't understand all possible
> implications. However, if you don't see a problem then this is
> fine with me.

It's not that I don't see a problem at all.  You constructed a good
summary of the issues in three bullet points, that lead me to think
that it is the right approach to tweak the way the tests evaluate
the outcome, but then nothing came out of the discussion, so I sent
out a "how about doing it this way" to make sure this topic will not
be forgotten.  There is nothing more to it, and "how about..." is in
no way final.

There obviously are pros and cons between introducing your new
helper to mark the ones that are allowed to catch SIGPIPE and
changing all occurrences of test_must_fail.  I do not have a strong
opinion yet, but it needs to be discussed and decided.


Re: [PATCH v6 25/25] refs: break out ref conflict checks

2015-11-05 Thread David Turner
On Thu, 2015-11-05 at 05:00 +0100, Michael Haggerty wrote:
> On 11/04/2015 10:01 PM, David Turner wrote:
> > On Tue, 2015-11-03 at 08:40 +0100, Michael Haggerty wrote:
> >> + * extras and skip must be sorted lists of reference names. Either one
> >> + * can be NULL, signifying the empty list.
> >> + */
> > 
> > My version had:
> > 
> > "skip can be NULL; extras cannot."
> > 
> > The first thing that function does is:
> > string_list_find_insert_index(extras, dirname, 0)
> > 
> > And that crashes when extras is null.  So I think my version is correct
> > here.
> 
> We're talking about the function find_descendant_ref(), which was added
> in this patch, right? Because the first thing that function does is
> 
> + if (!extras)
> + return NULL;
> 
> (This guard was in your version, too.) Also, the callsite doesn't
> protect against extras==NULL. So either we're talking about two
> different things here, or I disagree with you.

You're right.  I totally missed that.  But while looking at it, I
noticed that the commit message doesn't look quite right (my fault):

> Create new function verify_no_descendants, to hold one of the ref
> conflict checks used in verify_refname_available. Multiple backends
> will need this function, so move it to the common code.

The function is find_descendant_ref not verify_no_descendants.



Re: [PATCH 2/2] http: use credential API to handle proxy authentication

2015-11-05 Thread Jeff King
On Thu, Nov 05, 2015 at 12:56:54PM +0100, Knut Franke wrote:

> My main takeaway from this, apart from the points you mention below, is that
> it'd be good to have a test case, similar to t/lib-httpd.sh. Since none of the
> existent proxy-related code has an automated test, I think this would be an
> improvement on top of the other patches. I'd need to look into how easy/hard
> this would be to implement.

Yeah, tests would be wonderful. I think the main challenge will be
configuring Apache as a proxy (and failing gracefully when mod_proxy is
not available).

If there's another proxy that is easy to configure for a one-shot test,
that would be fine, too. It's nice if it's something that's commonly
available, though, so more people can actually run the test.

> > It looks like you use this to see the remote side's HTTP 407 code.  In
> > the 2012 series, I think we simply looked for a 407 in the HTTP return
> > code
> 
> I'm not sure why that worked for the author of the old series - possibly curl
> semantics changed at some point.

It's not clear to me that the original _did_ work in all cases. So I'll
trust your experiments now much more than that old thread. :)

> > My understanding is that memset() like this is not sufficient for
> > zero-ing sensitive data, as they can be optimized out by the compiler. I
> > don't think there's a portable alternative, though, so it may be the
> > best we can do. OTOH, the rest of git does not worry about such zero-ing
> > anyway, so we could also simply omit it here.
> 
> For what it's worth, that's the same as we do for cert_auth (while, as far as I
> can see, no attempt is made for http_auth). I tend to think it's better than
> nothing. Maybe an in-code comment stating it's not reliable would be in order,
> to prevent the passing reader from putting too much trust in it.

Rather than just a comment, can we do something like:

  void clear_password(void *buf, size_t len)
  {
/*
 * TODO: This is known to be insufficient, but perhaps better
 * than nothing, and at least portable. We should use a more
 * secure variant on systems that provide it.
 */
memset(buf, 0, len);
  }

That will make it easier to find such sites and improve them later
(adding this function in a separate translation unit might actually be
enough to make it work, as the compiler cannot omit a call to the opaque
clear_password, and clear_password itself does not know the results will
not be used).

> > For that matter, it is not clear to me why this needs to be a global at
> > all. Once we hand the value to curl_easy_setopt, curl keeps its own
> > copy.
> 
> That's true only for relatively recent curl versions; before 7.17.0, strings
> were not copied.

Yeah, I remembered that, but I thought for some reason it was old enough
that we didn't need to worry about it. I have a feeling that there may
be other places where we do not handle it that well.

7.17.0 is from 2007. I wonder if it is time we bumped our minimum
required curl version. Supporting older installations is nice, but at
some point it is not really helping anybody, and that has to be
balanced by the increase in code complexity (and especially we are not
helping those people if there are subtle bugs that nobody else is
exercising).

That's a separate topic from your patch, though (though I would not mind
at all if you wanted to work on it :) ).

> > That will require some refactoring around http_request_reauth, though
> > (because now we might potentially retry twice: once to get past the
> > proxy auth, and once to get past the real site's auth).
> 
> I think this would also require changes to post_rpc in remote-curl.c, which
> apparently does something similar to http_request_reauth. Probably something
> along the lines of adding a HTTP_PROXY_REAUTH return code, plus some refactoring
> in order to prevent code duplication between the different code parts handling
> (proxy) reauth. :-/

Yeah, that would work. I think we could also just loop on HTTP_REAUTH.
The code in handle_curl_result that returns HTTP_REAUTH will only do so
if it looks like we could make progress by trying again.

> > You prompt unconditionally for the password earlier, but only if the
> > proxy URL contains a username. We used to do the same thing for regular
> > http, but people got annoyed that they had to specify half the
> > credential in the URL. Perhaps it would be less so with proxies (which
> > are changed a lot less), so I don't think making this work is an
> > absolute requirement.
> 
> As far as I understand, the issue was around unconditionally prompting for the
> password even if it was listed in ~/.netrc. As far as I can see, curl doesn't
> read ~/.netrc for proxy credentials, so I don't think it would make a
> difference here.

The .netrc thing came up recently-ish, but the HTTP prompting issues are
much older than that. Basically, does:

  git config http.proxy http://example.com:8080

work out of t

Re: [PATCHv3 02/11] run-command: report failure for degraded output just once

2015-11-05 Thread Stefan Beller
On Wed, Nov 4, 2015 at 11:32 PM, Junio C Hamano  wrote:
> Jeff King  writes:
>
>> POSIX implies it is the case in the definition of read[2] in two ways:
>>
>>   1. The O_NONBLOCK behavior for pipes is mentioned only when dealing
>>  with empty pipes.
>>
>>   2. Later, it says:
>>
>>The value returned may be less than nbyte if the number of bytes
>>left in the file is less than nbyte, if the read() request was
>>interrupted by a signal, or if the file is a pipe or FIFO or
>>special file and has fewer than nbyte bytes immediately available
>>for reading.
>>
>>  That is not explicit, but the "immediately" there seems to imply
>>  it.
>
> We were reading the same book, but I was more worried about that
> "may" there; it merely tells the caller of read(2) not to be alarmed
> when the call returned without filling the entire buffer, without
> mandating the implementation of read(2) never to block.
>
> Having said that,...
>
>>> So perhaps the original reasoning of doing nonblock was faulty, you
>>> are saying?

I agree that the original reasoning was faulty. It happened in the first place
because of how I approached the problem (strbuf_read should return immediately
after reading, and to communicate that, we had a non-blocking read and checked
for EAGAIN).

Having read the man pages again, I agree with you that the non-blocking read is
bogus to begin with.

>>
>> Exactly. And therefore a convenient way to deal with the portability
>> issue is to get rid of it. :)
>
> ... I do like the simplification you alluded to in the other
> message.  Not having to worry about the nonblock (at least until it
> is found problematic in the real world) is a very good first step,
> especially because the approach allows us to collectively make
> progress by letting all of us in various platforms build and
> experiment with "something that works".

I'll send a patch to just remove set_nonblocking which should fix the compile
problems on Windows and make it work regardless on all platforms.

After that I continue with the update series.


Re: File owner/group and git

2015-11-05 Thread David Turner
On Wed, 2015-11-04 at 18:38 -0800, Junio C Hamano wrote:
> David Turner  writes:
> 
> > In unpack-trees.c, in verify_uptodate_1, we check ie_match_stat.  This
> > returns OWNER_CHANGED if a file has changed ownership since the index
> > was updated.  Do we actually care about that particular case?  Or really
> > anything other than DATA_CHANGED?
> 
> That's a 10-year old code and there aren't that many people left
> who can answer the original rationale, I am afraid ;-)
> 
> In general, "Do we actually care?" is not the question we ask in
> this area of the code.  "Does it help us to catch real changes, or
> does it change spuriously to make it too unreliable a signal to be
> useful?" is the question that drives the design of this part of the
> system.
> 
> DATA_CHANGED is "we know the contents are different without even
> looking at the data".  If the size is different from the last time
> we hashed the data, the contents must have changed.  The inverse is
> not true (and that is half of the "racy git" issue).
> 
> Other *_CHANGED are finely classified only because originally we
> didn't really know which are useful to treat as notable change
> event, and "changed" variable had sufficient number of bits to hold
> different classification, so that we could pick and choose which
> ones we truly care.  We knew MTIME was useful in the sense that even
> if the size is the same, updated mtime is good enough indication
> that the stuff has changed, even to "make" utility.
> 
> INODE and CTIME are not so stable on some filesystems (e.g. inum may
> not be stable on a network share across remount) and in some
> environments (e.g. some virus scanners touch ctime to mark scanned
> files, cf. 1ce4790b), and would trigger false positives too often to
> be useful.  We always paid attention to them initially, but there
> are configurations to tell Git not to raise them these days.
> 
> OWNER probably falls into a category that is stable enough to be
> useful, as the most likely way for it to change is not by running
> "chown" on the file in-place (which does not change the contents),
> but by running "mv" to drop another file owned by somebody else to
> the original location (which likely does change the contents).  At
> the same time, "mv" a different file into the path would likely
> trigger changes to INODE and MTIME as well, so it cannot be more
> than belt-and-suspenders measure to catch modification.  In that
> sense ignoring OWNER would not hurt too much.
> 
> If it changes spuriously to make it too unreliable a signal to be
> useful, it certainly is OK to introduce a knob to ignore it.  It
> might even make sense to ignore it unconditionally if the false hit
> happens too frequently, but offhand my gut reaction is that there
> may be something wrong in the environment (i.e. system outside Git
> in which Git runs) if owner/group changes spuriously to cause
> issues.

Thanks.

The only case where we saw it was with our watchman code, which lies
about ownership (to save space/time).  We're going to try ignoring
OWNER_CHANGED in our watchman branch, and if that fixes the issue for
our users, we'll stop worrying about it on the theory that Duy's
watchman stuff is the long-term path forward, and it doesn't have this
issue.



[PATCH 1/2] run-command: Remove set_nonblocking

2015-11-05 Thread Stefan Beller
strbuf_read_once can also operate on blocking file descriptors if we are
sure they are ready. The poll (2) command however makes sure this is the
case.

Reading the manual for poll (2), there may be spurious returns indicating
readiness but that is for network sockets only. Pipes should be unaffected.
By having this patch, we rely on the correctness of poll to return
only pipes ready to read.

This fixes compilation in Windows.

Signed-off-by: Stefan Beller 
---
 run-command.c | 13 -
 1 file changed, 13 deletions(-)

diff --git a/run-command.c b/run-command.c
index 0a3c24e..51d078c 100644
--- a/run-command.c
+++ b/run-command.c
@@ -1006,17 +1006,6 @@ static void pp_cleanup(struct parallel_processes *pp)
sigchain_pop_common();
 }
 
-static void set_nonblocking(int fd)
-{
-   int flags = fcntl(fd, F_GETFL);
-   if (flags < 0)
-   warning("Could not get file status flags, "
-   "output will be degraded");
-   else if (fcntl(fd, F_SETFL, flags | O_NONBLOCK))
-   warning("Could not set file status flags, "
-   "output will be degraded");
-}
-
 /* returns
  *  0 if a new task was started.
  *  1 if no new jobs was started (get_next_task ran out of work, non critical
@@ -1052,8 +1041,6 @@ static int pp_start_one(struct parallel_processes *pp)
return code ? -1 : 1;
}
 
-   set_nonblocking(pp->children[i].process.err);
-
pp->nr_processes++;
pp->children[i].in_use = 1;
pp->pfd[i].fd = pp->children[i].process.err;
-- 
2.6.1.247.ge8f2a41.dirty



[PATCH 0/2] Remove non-blocking fds from run-command.

2015-11-05 Thread Stefan Beller
So as far as I understand, all of the discussion participants (Torsten, Jeff,
Junio and me) are convinced we don't need the non-blocking feature. So remove 
it.

I developed it on top of d075d2604c0 (Merge branch 'rs/daemon-plug-child-leak' 
into sb/submodule-parallel-update)
but AFAICT it also applies to sb/submodule-parallel-fetch.

This will fix compilation on Windows without any platform-specific hacks.

Thanks,
Stefan

Stefan Beller (2):
  run-command: Remove set_nonblocking
  strbuf: Correct documentation for strbuf_read_once

 run-command.c | 13 -
 strbuf.h  |  3 +--
 2 files changed, 1 insertion(+), 15 deletions(-)

-- 
2.6.1.247.ge8f2a41.dirty



[PATCH 2/2] strbuf: Correct documentation for strbuf_read_once

2015-11-05 Thread Stefan Beller
There is no need to document O_NONBLOCK: we read just once and return.
In case the read blocks, this works too.

Signed-off-by: Stefan Beller 
---
 strbuf.h | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/strbuf.h b/strbuf.h
index ea69665..7a08da4 100644
--- a/strbuf.h
+++ b/strbuf.h
@@ -367,8 +367,7 @@ extern size_t strbuf_fread(struct strbuf *, size_t, FILE *);
 extern ssize_t strbuf_read(struct strbuf *, int fd, size_t hint);
 
 /**
- * Read from a file descriptor that is marked as O_NONBLOCK without
- * blocking.  Returns the number of new bytes appended to the sb.
+ * Returns the number of new bytes appended to the sb.
  * Negative return value signals there was an error returned from
  * underlying read(2), in which case the caller should check errno.
  * e.g. errno == EAGAIN when the read may have blocked.
-- 
2.6.1.247.ge8f2a41.dirty



Re: [PATCH v4 2/3] Limit the size of the data block passed to SHA1_Update()

2015-11-05 Thread Junio C Hamano
atous...@gmail.com writes:

> +#ifndef git_SHA_CTX
> +
> +#ifdef SHA1_MAX_BLOCK_SIZE
> +#include "compat/sha1-chunked.h"
> +#define git_SHA_CTX  platform_SHA_CTX
> +#define git_SHA1_Initplatform_SHA1_Init
> +#define git_SHA1_Update  git_SHA1_Update_Chunked
> +#define git_SHA1_Final   platform_SHA1_Final
> +#else
>  #define git_SHA_CTX  platform_SHA_CTX
>  #define git_SHA1_Initplatform_SHA1_Init
>  #define git_SHA1_Update  platform_SHA1_Update
>  #define git_SHA1_Final   platform_SHA1_Final
> -
>  #endif
>  
> +#endif 
> +

Adjusting to the proposed change to 1/3, this step would become the
attached patch.  Note that I thought the above would not scale well
so I did it a bit differently.

-- >8 --
From: Atousa Pahlevan Duprat 
Subject: sha1: allow limiting the size of the data passed to SHA1_Update()

Using the previous commit's indirection mechanism for SHA1,
support a chunked implementation of SHA1_Update() that limits the
amount of data in the chunk passed to SHA1_Update().

This is enabled by using the Makefile variable SHA1_MAX_BLOCK_SIZE
to specify chunk size.  When using Apple's CommonCrypto library this
is set to 1GiB (the implementation cannot handle more 4GiB).

Signed-off-by: Atousa Pahlevan Duprat 
Signed-off-by: Junio C Hamano 
---
 Makefile | 13 +
 cache.h  |  6 ++
 compat/apple-common-crypto.h |  4 
 compat/sha1-chunked.c| 19 +++
 compat/sha1-chunked.h|  2 ++
 5 files changed, 44 insertions(+)
 create mode 100644 compat/sha1-chunked.c
 create mode 100644 compat/sha1-chunked.h

diff --git a/Makefile b/Makefile
index 04c2231..6a4ca59 100644
--- a/Makefile
+++ b/Makefile
@@ -141,6 +141,10 @@ all::
 # Define PPC_SHA1 environment variable when running make to make use of
 # a bundled SHA1 routine optimized for PowerPC.
 #
+# Define SHA1_MAX_BLOCK_SIZE to limit the amount of data that will be hashed
+# in one call to the platform's SHA1_Update(). e.g. APPLE_COMMON_CRYPTO
+# wants 'SHA1_MAX_BLOCK_SIZE=1024L*1024L*1024L' defined.
+#
 # Define NEEDS_CRYPTO_WITH_SSL if you need -lcrypto when using -lssl (Darwin).
 #
 # Define NEEDS_SSL_WITH_CRYPTO if you need -lssl when using -lcrypto (Darwin).
@@ -1335,6 +1339,11 @@ ifdef NO_POSIX_GOODIES
BASIC_CFLAGS += -DNO_POSIX_GOODIES
 endif
 
+ifdef APPLE_COMMON_CRYPTO
+   # Apple CommonCrypto requires chunking
+   SHA1_MAX_BLOCK_SIZE = 1024L*1024L*1024L
+endif
+
 ifdef BLK_SHA1
SHA1_HEADER = "block-sha1/sha1.h"
LIB_OBJS += block-sha1/sha1.o
@@ -1353,6 +1362,10 @@ endif
 endif
 endif
 
+ifdef SHA1_MAX_BLOCK_SIZE
+   LIB_OBJS += compat/sha1-chunked.o
+   BASIC_CFLAGS += -DSHA1_MAX_BLOCK_SIZE="$(SHA1_MAX_BLOCK_SIZE)"
+endif
 ifdef NO_PERL_MAKEMAKER
export NO_PERL_MAKEMAKER
 endif
diff --git a/cache.h b/cache.h
index 2f697c4..19c966d 100644
--- a/cache.h
+++ b/cache.h
@@ -30,6 +30,12 @@
 #define git_SHA1_Updateplatform_SHA1_Update
 #define git_SHA1_Final platform_SHA1_Final
 
+#ifdef SHA1_MAX_BLOCK_SIZE
+#include "compat/sha1-chunked.h"
+#undef git_SHA1_Update
+#define git_SHA1_Updategit_SHA1_Update_Chunked
+#endif
+
 #include 
 typedef struct git_zstream {
z_stream z;
diff --git a/compat/apple-common-crypto.h b/compat/apple-common-crypto.h
index c8b9b0e..d3fb264 100644
--- a/compat/apple-common-crypto.h
+++ b/compat/apple-common-crypto.h
@@ -16,6 +16,10 @@
 #undef TYPE_BOOL
 #endif
 
+#ifndef SHA1_MAX_BLOCK_SIZE
+#error Using Apple Common Crypto library requires setting SHA1_MAX_BLOCK_SIZE
+#endif
+
 #ifdef APPLE_LION_OR_NEWER
 #define git_CC_error_check(pattern, err) \
do { \
diff --git a/compat/sha1-chunked.c b/compat/sha1-chunked.c
new file mode 100644
index 000..6adfcfd
--- /dev/null
+++ b/compat/sha1-chunked.c
@@ -0,0 +1,19 @@
+#include "cache.h"
+
+int git_SHA1_Update_Chunked(platform_SHA_CTX *c, const void *data, size_t len)
+{
+   size_t nr;
+   size_t total = 0;
+   const char *cdata = (const char*)data;
+
+   while (len) {
+   nr = len;
+   if (nr > SHA1_MAX_BLOCK_SIZE)
+   nr = SHA1_MAX_BLOCK_SIZE;
+   platform_SHA1_Update(c, cdata, nr);
+   total += nr;
+   cdata += nr;
+   len -= nr;
+   }
+   return total;
+}
diff --git a/compat/sha1-chunked.h b/compat/sha1-chunked.h
new file mode 100644
index 000..7b2df28
--- /dev/null
+++ b/compat/sha1-chunked.h
@@ -0,0 +1,2 @@
+
+int git_SHA1_Update_Chunked(platform_SHA_CTX *c, const void *data, size_t len);
-- 
2.6.2-535-ga9e37b0



Re: [PATCH v4 3/3] Move all the SHA1 implementations into one directory

2015-11-05 Thread Junio C Hamano
atous...@gmail.com writes:

> From: Atousa Pahlevan Duprat 
>
> The various SHA1 implementations were spread around in 3 directories.
> This makes it easier to understand what implementations are
> available at a glance.
>
> Signed-off-by: Atousa Pahlevan Duprat 
> ---

I am not strongly opposed to moving block and ppc (I am not strongly
for the movement, either, though).

I however think the chunked one should not be mixed together with
them--it is not a full SHA-1 hash implementation but belongs to a
different layer of abstraction (and that is the reason why we
introduced a new layer of indirection in 1/3).


Re: [PATCH v4 1/3] Provide another level of abstraction for the SHA1 utilities.

2015-11-05 Thread Junio C Hamano
atous...@gmail.com writes:

> From: Atousa Pahlevan Duprat 
>
> The git source uses git_SHA1_Update() and friends to call
> into the code that computes the hashes.  This can then be
> mapped directly to an implementation that computes the hash,
> such as platform_SHA1_Update(); or as we will do in a subsequent
> patch, it can be mapped to something more complex that will in turn call
> into the platform's SHA implementation.
>
> Signed-off-by: Atousa Pahlevan Duprat 
> ---
>  cache.h | 19 +++
>  1 file changed, 15 insertions(+), 4 deletions(-)
>
> diff --git a/cache.h b/cache.h
> index a9aaa03..a934a2e 100644
> --- a/cache.h
> +++ b/cache.h
> @@ -12,10 +12,21 @@
>  
>  #include SHA1_HEADER
>  #ifndef git_SHA_CTX
> -#define git_SHA_CTX  SHA_CTX
> -#define git_SHA1_InitSHA1_Init
> -#define git_SHA1_Update  SHA1_Update
> -#define git_SHA1_Final   SHA1_Final
> +
> +/* platform's underlying implementation of SHA1, could be OpenSSL,
> +   blk_SHA, Apple CommonCrypto, etc...  */
> +#define platform_SHA_CTX SHA_CTX
> +#define platform_SHA1_Init   SHA1_Init
> +#define platform_SHA1_Update SHA1_Update
> +#define platform_SHA1_Final  SHA1_Final
> +
> +/* git may call platform's underlying implementation of SHA1 directly,
> +   or may call it through a wrapper */
> +#define git_SHA_CTX  platform_SHA_CTX
> +#define git_SHA1_Initplatform_SHA1_Init
> +#define git_SHA1_Update  platform_SHA1_Update
> +#define git_SHA1_Final   platform_SHA1_Final
> +
>  #endif
>  
>  #include 

This is not quite correct, I am afraid.  Our own implementations
still define git_SHA* macros, but they should be considered the
"platform" ones in the new world order with another level of
indirection.

I think the attached is closer to what we want.  The implementations
may give us platform_SHA*() in which case cache.h does not have to
give the fallback mapping from them to the OpenSSL compatible
interface used by OpenSSL and CommonCrypto.  Regardless of the
platform SHA-1 implementations, by default they are the ones used by
the rest of the system via git_SHA*().

And in the second step, git_SHA1_Update() may map to the Chunked
one, whose implementation would use platform_SHA1_Update().

-- >8 ---
From: Atousa Pahlevan Duprat 
Subject: sha1: provide another level of indirection for the SHA-1 functions

The git source uses git_SHA1_Update() and friends to call into the
code that computes the hashes.  Traditionally, we used to map these
directly to underlying implementation of the SHA-1 hash (e.g.
SHA1_Update() from OpenSSL or blk_SHA1_Update() from block-sha1/).

This arrangement however makes it hard to tweak behaviour of the
underlying implementation without fully replacing.  If we want to
introduce a tweaked_SHA1_Update() wrapper to implement the "Update"
in a slightly different way, for example, the implementation of the
wrapper still would want to call into the underlying implementation,
but tweaked_SHA1_Update() cannot call git_SHA1_Update() to get to
the underlying implementation (often but not always SHA1_Update()).

Add another level of indirection that maps platform_SHA1_Update()
and friends to their underlying implementations, and by default make
git_SHA1_Update() and friends map to platform_SHA1_* functions.

Doing it this way will later allow us to map git_SHA1_Update() to
tweaked_SHA1_Update(), and the latter can use platform_SHA1_Update()
in its implementation.

Signed-off-by: Atousa Pahlevan Duprat 
Signed-off-by: Junio C Hamano 
---
 block-sha1/sha1.h |  8 
 cache.h   | 22 +-
 ppc/sha1.h|  8 
 3 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/block-sha1/sha1.h b/block-sha1/sha1.h
index b864df6..4df6747 100644
--- a/block-sha1/sha1.h
+++ b/block-sha1/sha1.h
@@ -16,7 +16,7 @@ void blk_SHA1_Init(blk_SHA_CTX *ctx);
 void blk_SHA1_Update(blk_SHA_CTX *ctx, const void *dataIn, unsigned long len);
 void blk_SHA1_Final(unsigned char hashout[20], blk_SHA_CTX *ctx);
 
-#define git_SHA_CTXblk_SHA_CTX
-#define git_SHA1_Init  blk_SHA1_Init
-#define git_SHA1_Updateblk_SHA1_Update
-#define git_SHA1_Final blk_SHA1_Final
+#define platform_SHA_CTX   blk_SHA_CTX
+#define platform_SHA1_Init blk_SHA1_Init
+#define platform_SHA1_Update   blk_SHA1_Update
+#define platform_SHA1_Finalblk_SHA1_Final
diff --git a/cache.h b/cache.h
index a9aaa03..2f697c4 100644
--- a/cache.h
+++ b/cache.h
@@ -11,13 +11,25 @@
 #include "string-list.h"
 
 #include SHA1_HEADER
-#ifndef git_SHA_CTX
-#define git_SHA_CTXSHA_CTX
-#define git_SHA1_Init  SHA1_Init
-#define git_SHA1_UpdateSHA1_Update
-#define git_SHA1_Final SHA1_Final
+#ifndef platform_SHA_CTX
+/*
+ * platform's underlying implementation of SHA-1; could be OpenSSL,
+ * blk_SHA, Apple CommonCrypto, etc...  Note that including
+ * SHA1_HEADER may have already defined platform_SHA_CTX for our
+ * own implementations like block-sha1 and ppc-sha1, so we list

Re: [PATCH 1/2] run-command: Remove set_nonblocking

2015-11-05 Thread Junio C Hamano
Stefan Beller  writes:

> strbuf_read_once can also operate on blocking file descriptors if we are
> sure they are ready. The poll (2) command however makes sure this is the
> case.
>
> Reading the manual for poll (2), there may be spurious returns indicating
> readiness but that is for network sockets only. Pipes should be unaffected.

Given the presence of "for example" in that bug section, I wouldn't
say "only" or "should be unaffected".

> By having this patch, we rely on the correctness of poll to return
> only pipes ready to read.

We rely on two things.  One is for poll to return only pipes that are 
non-empty.  The other is for read from a non-empty pipe not to block.

>
> This fixes compilation in Windows.
>
> Signed-off-by: Stefan Beller 
> ---

Thanks.  Let's apply these fixes on sb/submodule-parallel-fetch,
merge the result to 'next' and have people play with it.

>  run-command.c | 13 -
>  1 file changed, 13 deletions(-)
>
> diff --git a/run-command.c b/run-command.c
> index 0a3c24e..51d078c 100644
> --- a/run-command.c
> +++ b/run-command.c
> @@ -1006,17 +1006,6 @@ static void pp_cleanup(struct parallel_processes *pp)
>   sigchain_pop_common();
>  }
>  
> -static void set_nonblocking(int fd)
> -{
> - int flags = fcntl(fd, F_GETFL);
> - if (flags < 0)
> - warning("Could not get file status flags, "
> - "output will be degraded");
> - else if (fcntl(fd, F_SETFL, flags | O_NONBLOCK))
> - warning("Could not set file status flags, "
> - "output will be degraded");
> -}
> -
>  /* returns
>   *  0 if a new task was started.
>   *  1 if no new jobs was started (get_next_task ran out of work, non critical
> @@ -1052,8 +1041,6 @@ static int pp_start_one(struct parallel_processes *pp)
>   return code ? -1 : 1;
>   }
>  
> - set_nonblocking(pp->children[i].process.err);
> -
>   pp->nr_processes++;
>   pp->children[i].in_use = 1;
>   pp->pfd[i].fd = pp->children[i].process.err;


Re: [PATCH 1/2] run-command: Remove set_nonblocking

2015-11-05 Thread Stefan Beller
On Thu, Nov 5, 2015 at 10:45 AM, Junio C Hamano  wrote:
> Stefan Beller  writes:
>
>> strbuf_read_once can also operate on blocking file descriptors if we are
>> sure they are ready. The poll (2) command however makes sure this is the
>> case.
>>
>> Reading the manual for poll (2), there may be spurious returns indicating
>> readiness but that is for network sockets only. Pipes should be unaffected.
>
> Given the presence of "for example" in that bug section, I wouldn't
> say "only" or "should be unaffected".

Reading the documentation, we are in agreement that we expect
no spurious returns, no?

>
>> By having this patch, we rely on the correctness of poll to return
>> only pipes ready to read.
>
> We rely on two things.  One is for poll to return only pipes that are
> non-empty.  The other is for read from a non-empty pipe not to block.

That's what I meant by 'the pipe being ready'.

>
>>
>> This fixes compilation in Windows.
>>
>> Signed-off-by: Stefan Beller 
>> ---
>
> Thanks.  Let's apply these fixes on sb/submodule-parallel-fetch,
> merge the result to 'next' and have people play with it.

Maybe the commit message was weakly crafted. Do you want me to resend?

>
>>  run-command.c | 13 -
>>  1 file changed, 13 deletions(-)
>>
>> diff --git a/run-command.c b/run-command.c
>> index 0a3c24e..51d078c 100644
>> --- a/run-command.c
>> +++ b/run-command.c
>> @@ -1006,17 +1006,6 @@ static void pp_cleanup(struct parallel_processes *pp)
>>   sigchain_pop_common();
>>  }
>>
>> -static void set_nonblocking(int fd)
>> -{
>> - int flags = fcntl(fd, F_GETFL);
>> - if (flags < 0)
>> - warning("Could not get file status flags, "
>> - "output will be degraded");
>> - else if (fcntl(fd, F_SETFL, flags | O_NONBLOCK))
>> - warning("Could not set file status flags, "
>> - "output will be degraded");
>> -}
>> -
>>  /* returns
>>   *  0 if a new task was started.
>>   *  1 if no new jobs was started (get_next_task ran out of work, non critical
>> @@ -1052,8 +1041,6 @@ static int pp_start_one(struct parallel_processes *pp)
>>   return code ? -1 : 1;
>>   }
>>
>> - set_nonblocking(pp->children[i].process.err);
>> -
>>   pp->nr_processes++;
>>   pp->children[i].in_use = 1;
>>   pp->pfd[i].fd = pp->children[i].process.err;


Re: [PATCH 1/2] run-command: Remove set_nonblocking

2015-11-05 Thread Junio C Hamano
Stefan Beller  writes:

> On Thu, Nov 5, 2015 at 10:45 AM, Junio C Hamano  wrote:
>> Stefan Beller  writes:
>>
>>> strbuf_read_once can also operate on blocking file descriptors if we are
>>> sure they are ready. The poll (2) command however makes sure this is the
>>> case.
>>>
>>> Reading the manual for poll (2), there may be spurious returns indicating
>>> readiness but that is for network sockets only. Pipes should be unaffected.
>>
>> Given the presence of "for example" in that bug section, I wouldn't
>> say "only" or "should be unaffected".
>
> Reading the documentation we are in agreement, that we expect
> no spurious returns, no?

Given the presence of "for example" in that bug section, I wouldn't
say "only" or "should be unaffected".  I cannot say "we expect no
spurious returns".

>> Thanks.  Let's apply these fixes on sb/submodule-parallel-fetch,
>> merge the result to 'next' and have people play with it.
>
> Maybe the commit message was weakly crafted. Do you want me to resend?

I somehow feel that it is prudent to let this cook just above 'next'
for a few days (not just for the log message but to verify the
strategy and wait for others to come up with even better ideas), but
then I'll be offline starting next week, so I expect that merging
the final version to 'next' will be done by our interim maintainer,
which means we still have time to polish ;-)

Here is what I queued for now.

-- >8 --
From: Stefan Beller 
Date: Thu, 5 Nov 2015 10:17:18 -0800
Subject: [PATCH] run-command: remove set_nonblocking()

strbuf_read_once can also operate on blocking file descriptors if we
are sure they are ready.  And the poll(2) we call before calling
this ensures that this is the case.

Reading the manual for poll(2), there may be spurious returns
indicating readiness but that is for network sockets only and pipes
should be unaffected.

With this change, we rely on

 - poll(2) returns only non-empty pipes; and
 - read(2) on a non-empty pipe does not block.

This should fix compilation on Windows.

Signed-off-by: Stefan Beller 
Signed-off-by: Junio C Hamano 
---
 run-command.c | 13 -
 1 file changed, 13 deletions(-)

diff --git a/run-command.c b/run-command.c
index 1fbd286..07424e9 100644
--- a/run-command.c
+++ b/run-command.c
@@ -996,17 +996,6 @@ static void pp_cleanup(struct parallel_processes *pp)
sigchain_pop_common();
 }
 
-static void set_nonblocking(int fd)
-{
-   int flags = fcntl(fd, F_GETFL);
-   if (flags < 0)
-   warning("Could not get file status flags, "
-   "output will be degraded");
-   else if (fcntl(fd, F_SETFL, flags | O_NONBLOCK))
-   warning("Could not set file status flags, "
-   "output will be degraded");
-}
-
 /* returns
  *  0 if a new task was started.
  *  1 if no new jobs was started (get_next_task ran out of work, non critical
@@ -1042,8 +1031,6 @@ static int pp_start_one(struct parallel_processes *pp)
return code ? -1 : 1;
}
 
-   set_nonblocking(pp->children[i].process.err);
-
pp->nr_processes++;
pp->children[i].in_use = 1;
pp->pfd[i].fd = pp->children[i].process.err;
-- 
2.6.2-539-g1c5cd50



Re: [PATCH 1/2] run-command: Remove set_nonblocking

2015-11-05 Thread Johannes Sixt
Am 05.11.2015 um 19:17 schrieb Stefan Beller:
> strbuf_read_once can also operate on blocking file descriptors if we are
> sure they are ready. The poll (2) command however makes sure this is the
> case.
> 
> Reading the manual for poll (2), there may be spurious returns indicating
> readiness but that is for network sockets only. Pipes should be unaffected.
> By having this patch, we rely on the correctness of poll to return
> only pipes ready to read.
> 
> This fixes compilation in Windows.

It certainly does (but I haven't tested it yet). But parallel processes
will not work because we do not have a sufficiently complete waitpid
emulation yet. (waitpid(-1, ...) is not implemented.)

However, I think that the infrastructure can be simplified even further
to a level that we do not need additional emulation on Windows.

First let me say that I find it very questionable that the callbacks
receive a struct child_process. This is an implementation detail. It is
also an implementation detail that stderr of the children is read and
buffered, and that the child's stdout is redirected to stderr. It
should not be the task of the get_next_task callback to set the members
no_stdin, stdout_to_stderr, and err of struct child_process.

If you move that initialization to pp_start_one, you will notice sooner
rather than later that the readable end of the file descriptor is never
closed!

Which makes me think: Other users of start_command/finish_command work
such that they

1. request a pipe by setting .out = -1
2. start_command
3. read from .out until EOF
4. close .out
5. wait for the process with finish_command

But the parallel_process infrastructure does not follow this pattern.
It

1. requests a pipe by setting .err = -1
2. start_command
3. read from .err
4. wait for the process with waitpid

(and forgets to close .err). EOF is not in the picture (but that is
not essential).

I suggest changing this such that we read from the children until EOF,
mark them as being at their end of life, and then wait for them using
finish_command (assuming that a process that closes stdout and stderr
will die very soon if it is not already dead).

Here is a prototype patch. Feel free to pick it up. It marks a process
whose EOF we have found by setting .err to -1. It's probably better to
extend the meaning of the in_use indicator for this purpose. This seems
to work on Linux with test-run-command with sub-processes that produce
100k output each:

./test-run-command run-command-parallel 5 sh -c "printf \"%010d\n\" 999"

although error handling would require some polishing according to
t0061-run-command.

diff --git a/run-command.c b/run-command.c
index 51d078c..3e42299 100644
--- a/run-command.c
+++ b/run-command.c
@@ -977,7 +977,7 @@ static struct parallel_processes *pp_init(int n,
for (i = 0; i < n; i++) {
strbuf_init(&pp->children[i].err, 0);
child_process_init(&pp->children[i].process);
-   pp->pfd[i].events = POLLIN;
+   pp->pfd[i].events = POLLIN|POLLHUP;
pp->pfd[i].fd = -1;
}
sigchain_push_common(handle_children_on_signal);
@@ -1061,11 +1061,17 @@ static void pp_buffer_stderr(struct parallel_processes *pp, int output_timeout)
/* Buffer output from all pipes. */
for (i = 0; i < pp->max_processes; i++) {
if (pp->children[i].in_use &&
-   pp->pfd[i].revents & POLLIN)
-   if (strbuf_read_once(&pp->children[i].err,
-pp->children[i].process.err, 0) < 0)
+   pp->pfd[i].revents & (POLLIN|POLLHUP)) {
+   int n = strbuf_read_once(&pp->children[i].err,
+pp->children[i].process.err, 0);
+   if (n == 0) {
+   close(pp->children[i].process.err);
+   pp->children[i].process.err = -1;
+   } else if (n < 0) {
if (errno != EAGAIN)
die_errno("read");
+   }
+   }
}
 }
 
@@ -1082,59 +1088,20 @@ static void pp_output(struct parallel_processes *pp)
 static int pp_collect_finished(struct parallel_processes *pp)
 {
int i = 0;
-   pid_t pid;
-   int wait_status, code;
+   int code;
int n = pp->max_processes;
int result = 0;
 
while (pp->nr_processes > 0) {
-   pid = waitpid(-1, &wait_status, WNOHANG);
-   if (pid == 0)
-   break;
-
-   if (pid < 0)
-   die_errno("wait");
-
for (i = 0; i < pp->max_processes; i++)
if (pp->children[i].in_use &&
-   pid == pp->children[i].process.pid)
+   pp->children[i].process.err == -1)
break;
+
  

Re: [PATCH v3 0/4] Improve hideRefs when used with namespaces

2015-11-05 Thread Junio C Hamano
Thanks; will replace what has been queued.


Re: [PATCH 1/2] run-command: Remove set_nonblocking

2015-11-05 Thread Junio C Hamano
Johannes Sixt  writes:

> Am 05.11.2015 um 19:17 schrieb Stefan Beller:
>> strbuf_read_once can also operate on blocking file descriptors if we are
>> sure they are ready. The poll (2) command however makes sure this is the
>> case.
>> 
>> Reading the manual for poll (2), there may be spurious returns indicating
>> readiness but that is for network sockets only. Pipes should be unaffected.
>> By having this patch, we rely on the correctness of poll to return
>> only pipes ready to read.
>> 
>> This fixes compilation in Windows.
>
> It certainly does (but I haven't tested, yet). But parallel processes
> will not work because we do not have a sufficiently complete waitpid
> emulation, yet. (waitpid(-1, ...) is not implemented.)
>
> However, I think that the infrastructure can be simplified even further
> to a level that we do not need additional emulation on Windows.

;-)

This is why I love this list (and in general not rushing any change
too early to 'next').

> Which makes me think: Other users of start_command/finish_command work
> such that they
>
> 1. request a pipe by setting .out = -1
> 2. start_command
> 3. read from .out until EOF
> 4. close .out
> 5. wait for the process with finish_command
>
> But the parallel_process infrastructure does not follow this pattern.
> It
>
> 1. requests a pipe by setting .err = -1
> 2. start_command
> 3. read from .err
> 4. wait for the process with waitpid
>
> (and forgets to close .err). EOF is not in the picture (but that is
> not essential).

Unrelated tangent.  daemon is another one that uses start_command()
but does not use finish_command().

> I suggest to change this such that we read from the children until EOF,
> mark them to be at their end of life, and then wait for them using
> finish_command (assuming that a process that closes stdout and stderr
> will die very soon if it is not already dead).

Hmm, interesting.  This does match the normal "spawn, interact and
wait" cycle for a single process much better.



Odd problems trying to build an orphaned branch

2015-11-05 Thread alan
I am trying to create an orphaned branch that contains the linux-3.12.y
branch from linux-stable. Each time I try a method to make this work I
encounter a blocker that halts my progress.

I expect that at least one of these is a bug, but I am not sure.

Here is what I did. I have read the docs and tried a huge pile of
suggestions. How is this supposed to be done?

I am using git version 2.6.2.402.g2635c2b. It passes all the tests.

I created an orphan branch from 3.12-rc1. I then used git format-patch to
generate patches from 3.12-rc1 to HEAD. (Over 7000 patches.) I use git am
to apply them to the orphan branch. At patch 237 it fails to apply. (It
appears the patch is from a block of code added with a merge commit, but
it is somewhere in the middle of the block.)

Are merge commits supposed to screw up git-format-patch?

I also tried using clone with depth and --single-branch set.  It ignored
the depth setting and gave me the whole branch all the way back to 2.6.x.

All the examples of shallow clones use depth=1. Is it broken for values
bigger than 1 or am I missing something?

I tried using graft and filter-branch. None of the descriptions are very
clear. None of them worked either. Filter-branch died on a commit
somewhere in 2.6 land that had no author. (Which is outside of the commits
I want to keep.)

I tried creating an orphan branch and using cherry-pick
v3.12-rc1..linux-3.12.y. It blew up on the first merge commit it hit. I
tried adding in "-m 1" to try to get it to pick a parent, but then it died
on the first commit because it was not a merge.

Why is this so hard?

All I want to do is take a branch from linux-stable and create a branch
that contains just the commits from where it was branched off of master
until it hits HEAD. That is it. All the scripts that I have seen that
claim to do just what I want break when it hits a merge or a bogus author.
(How that got into linux-stable, I have no idea. The commit is 10 years
old!)

Ideas? Do I need to create a new command? ("cake-cutter". Cut from
commit..commit and make a new branch out of it.)





Re: [PATCH 1/2] run-command: Remove set_nonblocking

2015-11-05 Thread Stefan Beller
On Thu, Nov 5, 2015 at 12:27 PM, Johannes Sixt  wrote:

> diff --git a/run-command.c b/run-command.c
> index 51d078c..3e42299 100644
> --- a/run-command.c
> +++ b/run-command.c
> @@ -977,7 +977,7 @@ static struct parallel_processes *pp_init(int n,
> for (i = 0; i < n; i++) {
> strbuf_init(&pp->children[i].err, 0);
> child_process_init(&pp->children[i].process);
> -   pp->pfd[i].events = POLLIN;
> +   pp->pfd[i].events = POLLIN|POLLHUP;
> pp->pfd[i].fd = -1;
> }
> sigchain_push_common(handle_children_on_signal);
> @@ -1061,11 +1061,17 @@ static void pp_buffer_stderr(struct parallel_processes *pp, int output_timeout)
> /* Buffer output from all pipes. */
> for (i = 0; i < pp->max_processes; i++) {
> if (pp->children[i].in_use &&
> -   pp->pfd[i].revents & POLLIN)
> -   if (strbuf_read_once(&pp->children[i].err,
> -pp->children[i].process.err, 0) < 0)
> +   pp->pfd[i].revents & (POLLIN|POLLHUP)) {
> +   int n = strbuf_read_once(&pp->children[i].err,
> +pp->children[i].process.err, 0);
> +   if (n == 0) {
> +   close(pp->children[i].process.err);
> +   pp->children[i].process.err = -1;

So you set .err to -1 to signal the process has ended here...

> -
> for (i = 0; i < pp->max_processes; i++)
> if (pp->children[i].in_use &&
> -   pid == pp->children[i].process.pid)
> +   pp->children[i].process.err == -1)
> break;

to make a decision here if we want to finish_command on it.


> +   code = finish_command(&pp->children[i].process);

> -   child_process_clear(&pp->children[i].process);

but .err stays -1 here for the next iteration?
We would need to reset it to 0 again.

So .err is
  0 when the slot is not in use
 -1 when the child has finished awaiting termination
 >0 when the child is living a happy life.


Re: [PATCH] contrib/subtree: remove "push" command from the "todo" file

2015-11-05 Thread Eric Sunshine
On Thu, Nov 5, 2015 at 10:26 AM, Fabio Porcedda
 wrote:
> Because the "push" command is already avaiable, remove it from the

s/avaiable/available/

> "todo" file.
>
> Signed-off-by: Fabio Porcedda 
> ---
> diff --git a/contrib/subtree/todo b/contrib/subtree/todo
> @@ -12,8 +12,6 @@
> exactly the right subtree structure, rather than using
> subtree merge...)
>
> -   add a 'push' subcommand to parallel 'pull'
> -
> add a 'log' subcommand to see what's new in a subtree?
>
> add to-submodule and from-submodule commands
> --
> 2.6.2


Re: race condition when pushing

2015-11-05 Thread Eric Sunshine
On Thu, Nov 5, 2015 at 11:11 AM, Lyle Ziegelmiller  wrote:
> git push --set-upstream has some sort of race condition. Sometimes when I
> execute it, it works. Other times, it does not. Below is from my command
> window. I've executed the exact same command (using bash history
> re-execution, so I know I didn't make a typo), repeatedly. Notice the last
> execution results in an error. I am the only person on my machine. This is
> non-deterministic behavior.
>
> lylez@LJZ-DELLPC ~/gittest/local
> $ git push --set-upstream origin localbranch1
> Branch localbranch1 set up to track remote branch localbranch1 from origin.
> Everything up-to-date
>
> lylez@LJZ-DELLPC ~/gittest/local
> $ git push --set-upstream origin localbranch1
> Branch localbranch1 set up to track remote branch localbranch1 from origin.
> Everything up-to-date
>
> lylez@LJZ-DELLPC ~/gittest/local
> $ git push --set-upstream origin localbranch1
> error: could not commit config file .git/config
> Branch localbranch1 set up to track remote branch localbranch1 from origin.
> Everything up-to-date
>
> I'm using Git in a Cygwin window on a 32-bit Windows 10 machine.

If I recall correctly, the typical culprit is a Windows virus scanner
(or even an indexer) locking the file, so git is unable to manipulate
it.




What's cooking in git.git (Nov 2015, #01; Thu, 5)

2015-11-05 Thread Junio C Hamano
Here are the topics that have been cooking.  Commits prefixed with
'-' are only in 'pu' (proposed updates) while commits prefixed with
'+' are in 'next'.

Git 2.6.3 has been tagged, with accumulated fixes and minor updates
that are already in 'master'.  We have about 5 weeks left til -rc0
so hopefully a handful of topics that are not yet in 'next' but have
already been reviewed and polished may be able to be merged to
'next', cook in there for a while and be in 2.7.0 release.  I'll be
offline for a few weeks starting this weekend, but I am confident
that our capable interim maintainer can shepherd these topics
forward with the help from our contributors ;-).

You can find the changes described here in the integration branches
of the repositories listed at

http://git-blame.blogspot.com/p/git-public-repositories.html

--
[Graduated to "master"]

* da/difftool (2015-10-29) 1 commit
  (merged to 'next' on 2015-11-01 at 4e5ab33)
 + difftool: ignore symbolic links in use_wt_file

 The code to prepare the working tree side of temporary directory
 for the "dir-diff" feature forgot that symbolic links need not be
 copied (or symlinked) to the temporary area, as the code already
 special cases and overwrites them.  Besides, it was wrong to try
 computing the object name of the target of symbolic link, which may
 not even exist or may be a directory.


* jc/mailinfo-lib (2015-11-01) 1 commit
  (merged to 'next' on 2015-11-01 at 3ecaa28)
 + mailinfo: fix passing wrong address to git_mailinfo_config

 Hotfix for a topic already in 'master'.


* jk/initialization-fix-to-add-submodule-odb (2015-10-28) 1 commit
  (merged to 'next' on 2015-11-01 at da94b97)
 + add_submodule_odb: initialize alt_odb list earlier

 We peek objects from submodule's object store by linking it to the
 list of alternate object databases, but the code to do so forgot to
 correctly initialize the list.


* js/git-gdb (2015-10-30) 1 commit
  (merged to 'next' on 2015-11-01 at 3d232d5)
 + test: facilitate debugging Git executables in tests with gdb

 Allow easier debugging of a single "git" invocation in our test
 scripts.


* kn/for-each-branch (2015-10-30) 1 commit
  (merged to 'next' on 2015-11-01 at 4249dc9)
 + ref-filter: fallback on alphabetical comparison

 Using the timestamp based criteria in "git branch --sort" did not
 tiebreak branches that point at commits with the same timestamp (or
 the same commit), making the resulting output unstable.


* mk/blame-first-parent (2015-10-30) 3 commits
  (merged to 'next' on 2015-11-01 at 3f87150)
 + blame: allow blame --reverse --first-parent when it makes sense
 + blame: extract find_single_final
 + blame: test to describe use of blame --reverse --first-parent

 "git blame" learnt to take "--first-parent" and "--reverse" at the
 same time when it makes sense.


* rs/daemon-plug-child-leak (2015-11-02) 2 commits
  (merged to 'next' on 2015-11-02 at 64afbb9)
 + daemon: plug memory leak
 + run-command: factor out child_process_clear()
 (this branch is used by sb/submodule-parallel-update.)

 "git daemon" uses "run_command()" without "finish_command()", so it
 needs to release resources itself, which it forgot to do.


* rs/show-branch-argv-array (2015-11-01) 1 commit
  (merged to 'next' on 2015-11-01 at fac4fa6)
 + show-branch: use argv_array for default arguments

 Code simplification.


* rs/wt-status-detached-branch-fix (2015-11-01) 5 commits
  (merged to 'next' on 2015-11-01 at cb23615)
 + wt-status: use skip_prefix() to get rid of magic string length constants
 + wt-status: don't skip a magical number of characters blindly
 + wt-status: avoid building bogus branch name with detached HEAD
 + wt-status: exit early using goto in wt_shortstatus_print_tracking()
 + t7060: add test for status --branch on a detached HEAD

 "git status --branch --short" accessed beyond the constant string
 "HEAD", which has been corrected.

--
[New Topics]

* ad/sha1-update-chunked (2015-11-05) 2 commits
 - sha1: allow limiting the size of the data passed to SHA1_Update()
 - sha1: provide another level of indirection for the SHA-1 functions

 Apple's common crypto implementation of SHA1_Update() does not take
 more than 4GB at a time, and we now have a compile-time workaround
 for it.

 I think this is more or less ready.  I am skeptical about the file
 location reorg ([PATCH 3/3] $gmane/280912) and did not queue it.


* dt/http-range (2015-11-02) 2 commits
  (merged to 'next' on 2015-11-03 at 7c3cc60)
 + http: use off_t to store partial file size
 + http.c: use CURLOPT_RANGE for range requests

 A Range: request can be responded with a full response and when
 asked properly libcurl knows how to strip the result down to the
 requested range.  However, we were hand-crafting a range request
 and it did not kick in.

 Will merge to 'master'.


* vl/grep-configurable-threads (2015-11-01) 1 commit
 - grep: add --threads= option 

[ANNOUNCE] Git v2.6.3

2015-11-05 Thread Junio C Hamano
The latest maintenance release Git v2.6.3 is now available at
the usual places.  This contains bug & regression fixes that
have already been merged to the 'master' front.

The tarballs are found at:

https://www.kernel.org/pub/software/scm/git/

The following public repositories all have a copy of the 'v2.6.3'
tag and the 'maint' branch that the tag points at:

  url = https://kernel.googlesource.com/pub/scm/git/git
  url = git://repo.or.cz/alt-git.git
  url = git://git.sourceforge.jp/gitroot/git-core/git.git
  url = git://git-core.git.sourceforge.net/gitroot/git-core/git-core
  url = https://github.com/gitster/git



Git v2.6.3 Release Notes
========================

Fixes since v2.6.2
------------------

 * The error message from "git blame --contents --reverse" incorrectly
   talked about "--contents --children".

 * "git merge-file" tried to signal how many conflicts it found, which
   obviously would not work well when there are too many of them.

 * The name-hash subsystem that is used to cope with case insensitive
   filesystems keeps track of directories and their on-filesystem
   cases for all the paths in the index by holding a pointer to a
   randomly chosen cache entry that is inside the directory (for its
   ce->ce_name component).  This pointer was not updated even when the
   cache entry was removed from the index, leading to use after free.
   This was fixed by recording the path for each directory instead of
   borrowing cache entries and restructuring the API somewhat.

 * When the "git am" command was reimplemented in C, "git am -3" had a
   small regression where it is aborted in its error handling codepath
   when underlying merge-recursive failed in some ways.

 * The synopsis text and the usage string of subcommands that read
   list of things from the standard input are often shown as if they
   only take input from a file on a filesystem, which was misleading.

 * A couple of commands still showed "[options]" in their usage string
   to note where options should come on their command line, but we
   spell that "[]" in most places these days.

 * The submodule code has been taught to work better with separate
   work trees created via "git worktree add".

 * When "git gc --auto" is backgrounded, its diagnosis message is
   lost.  It now is saved to a file in $GIT_DIR and is shown next time
   the "gc --auto" is run.

 * Work around "git p4" failing when the P4 depot records the contents
   in UTF-16 without UTF-16 BOM.

 * Recent update to "rebase -i" that tries to sanity check the edited
   insn sheet before it uses it has become too picky on Windows where
   CRLF left by the editor is turned into a trailing CR on the line
   read via the "read" built-in command.

 * "git clone --dissociate" runs a big "git repack" process at the
   end, and it helps to close file descriptors that are open on the
   packs and their idx files before doing so on filesystems that
   cannot remove a file that is still open.

 * Correct "git p4 --detect-labels" so that it does not fail to create
   a tag that points at a commit that is also being imported.

 * The internal stripspace() function has been moved to where it
   logically belongs to, i.e. strbuf API, and the command line parser
   of "git stripspace" has been updated to use the parse_options API.

 * Prepare for Git on-disk repository representation to undergo
   backward incompatible changes by introducing a new repository
   format version "1", with an extension mechanism.

 * "git gc" used to barf when a symbolic ref has gone dangling
   (e.g. the branch that used to be your upstream's default when you
   cloned from it is now gone, and you did "fetch --prune").

 * The normalize_ceiling_entry() function does not muck with the end
   of the path it accepts, and the real world callers do rely on that,
   but a test insisted that the function drops a trailing slash.

 * "git gc" is safe to run anytime only because it has the built-in
   grace period to protect young objects.  In order to run with no
   grace period, the user must make sure that the repository is
   quiescent.

 * A recent "filter-branch --msg-filter" broke skipping of the commit
   object header, which is fixed.

 * "git --literal-pathspecs add -u/-A" without any command line
   argument misbehaved ever since Git 2.0.

 * Merging a branch that removes a path and another that changes the
   mode bits on the same path should have conflicted at the path, but
   it didn't and silently favoured the removal.

 * "git imap-send" did not compile well with older version of cURL library.

 * The linkage order of libraries was wrong in places around libcurl.

 * It was not possible to use a repository-lookalike created by "git
   worktree add" as a local source of "git clone".

 * When "git send-email" wanted to talk over Net::SMTP::SSL,
   Net::Cmd::datasend() did not like to be fed too many bytes at the
   same time.

A note from the maintainer

2015-11-05 Thread Junio C Hamano
Welcome to the Git development community.

This message is written by the maintainer and talks about how Git
project is managed, and how you can work with it.

* Mailing list and the community

The development is primarily done on the Git mailing list. Help
requests, feature proposals, bug reports and patches should be sent to
the list address .  You don't have to be
subscribed to send messages.  The convention on the list is to keep
everybody involved on Cc:, so it is unnecessary to say "Please Cc: me,
I am not subscribed".

Before sending patches, please read Documentation/SubmittingPatches
and Documentation/CodingGuidelines to familiarize yourself with the
project convention.

If you sent a patch and you did not hear any response from anybody for
several days, it could be that your patch was totally uninteresting,
but it also is possible that it was simply lost in the noise.  Please
do not hesitate to send a reminder message in such a case.  Messages
getting lost in the noise may be a sign that those who can evaluate
your patch don't have enough mental/time bandwidth to process them
right at the moment, and it often helps to wait until the list traffic
becomes calmer before sending such a reminder.

The list archive is available at a few public sites:

http://news.gmane.org/gmane.comp.version-control.git/
http://marc.theaimsgroup.com/?l=git
http://www.spinics.net/lists/git/

For those who prefer to read it over NNTP:

nntp://news.gmane.org/gmane.comp.version-control.git

When you point at a message in a mailing list archive, using
gmane is often the easiest to follow by readers, like this:

http://thread.gmane.org/gmane.comp.version-control.git/27/focus=217

as it also allows people who subscribe to the mailing list as gmane
newsgroup to "jump to" the article.

Some members of the development community can sometimes be found on
the #git and #git-devel IRC channels on Freenode.  Their logs are
available at:

http://colabti.org/irclogger/irclogger_log/git
http://colabti.org/irclogger/irclogger_log/git-devel

There is a volunteer-run newsletter to serve our community ("Git Rev
News" http://git.github.io/rev_news/rev_news.html).

Git is a member project of software freedom conservancy, a non-profit
organization (https://sfconservancy.org/).  To reach a committee of
liaisons to the conservancy, contact them at .


* Reporting bugs

When you think git does not behave as you expect, please do not stop
your bug report with just "git does not work".  "I used git in this
way, but it did not work" is not much better, neither is "I used git
in this way, and X happened, which is broken".  It often is that git is
correct to cause X to happen in such a case, and it is your expectation
that is broken. People would not know what other result Y you expected
to see instead of X, if you left it unsaid.

Please remember to always state

 - what you wanted to achieve;

 - what you did (the version of git and the command sequence to reproduce
   the behavior);

 - what you saw happen (X above);

 - what you expected to see (Y above); and

 - how the last two are different.

See http://www.chiark.greenend.org.uk/~sgtatham/bugs.html for further
hints.

If you think you found a security-sensitive issue and want to disclose
it to us without announcing it to wider public, please contact us at
our security mailing list .


* Repositories, branches and documentation.

My public git.git repositories are at:

  git://git.kernel.org/pub/scm/git/git.git/
  https://kernel.googlesource.com/pub/scm/git/git
  git://repo.or.cz/alt-git.git/
  https://github.com/git/git/
  git://git.sourceforge.jp/gitroot/git-core/git.git/
  git://git-core.git.sourceforge.net/gitroot/git-core/git-core/

A few web interfaces are found at:

  http://git.kernel.org/cgit/git/git.git
  https://kernel.googlesource.com/pub/scm/git/git
  http://repo.or.cz/w/alt-git.git

Preformatted documentation from the tip of the "master" branch can be
found in:

  git://git.kernel.org/pub/scm/git/git-{htmldocs,manpages}.git/
  git://repo.or.cz/git-{htmldocs,manpages}.git/
  https://github.com/gitster/git-{htmldocs,manpages}.git/

Also GitHub shows the manual pages formatted in HTML (with a
formatting backend different from the one that is used to create the
above) at:

  http://git-scm.com/docs/git

There are four branches in git.git repository that track the source tree
of git: "master", "maint", "next", and "pu".

The "master" branch is meant to contain what are very well tested and
ready to be used in a production setting.  Every now and then, a
"feature release" is cut from the tip of this branch.  They used to be
named with three dotted decimal digits (e.g. "1.8.5"), but recently we
switched the versioning scheme and "feature releases" are named with
three-dotted decimal digits that ends with ".0" (e.g. "1.9.0").

The last such release was 2.6.0 done on Sep 28th, 2015. You can expect
that the tip of the "master" branch is al

Re: Odd problems trying to build an orphaned branch

2015-11-05 Thread Jeff King
On Thu, Nov 05, 2015 at 01:16:54PM -0800, a...@clueserver.org wrote:

> I created an orphan branch from 3.12-rc1. I then used git format-patch to
> generate patches from 3.12-rc1 to HEAD. (Over 7000 patches.) I use git am
> to apply them to the orphan branch. At patch 237 it fails to apply. (It
> appears the patch is from a block of code added with a merge commit, but
> it is somewhere in the middle of the block.)
> 
> Are merge commits supposed to screw up git-format-patch?

Yes. There is no defined format for merge patches, so git-format-patch
cannot show them. What you're trying to do won't work.

If your goal is to have the history at HEAD truncated at 3.12-rc1, you
are probably better off using a graft and having "filter-branch" rewrite
the history based on that. That will preserve merges and the general
shape of history.

> I also tried using clone with depth and --single-branch set.  It ignored
> the depth setting and gave me the whole branch all the way back to 2.6.x.

Was it a local clone? Depth is ignored for those (it _should_ print a
warning). If so, try --no-local to make it act like a "regular" clone.

> I tried using graft and filter-branch. None of the descriptions are very
> clear. None of them worked either. Filter-branch died on a commit
> somewhere in 2.6 land that had no author. (Which is outside of the commits
> I want to keep.)

I suspect you need to graft more than just the commit at v3.12-rc1. For
example, consider this history graph:

  --A--B--C--D---G--H
        \       /
         E-----F

If we imagine that H is the current HEAD, and D is our tag (v3.12-rc1),
then making a cut between D and C will not have any effect on the side
branch that contains E and F. Commits A and B are still reachable
through them.

You can find the complete set of boundary commits like this:

  git log --boundary --format='%m %H' v3.12-rc1..HEAD

and then graft them all like this:

  git log --boundary --format='%m %H' v3.12-rc1..HEAD |
grep ^- | cut -d' ' -f2 >.git/info/grafts

Then you should be able to run "git filter-branch" to rewrite the
history based on that.

I think you can probably get the same effect by running:

  git filter-branch v3.12-rc1..HEAD

Of course that leaves only the problem that filter-branch is
horrendously slow (for the kernel, most of the time goes to populating
the index for each commit; I think filter-branch could probably learn to
skip this step if there is no index or tree filter at work).

> I tried creating an orphan branch and using cherry-pick
> v3.12-rc1..linux-3.12.y. It blew up on the first merge commit it hit. I
> tried adding in "-m 1" to try to get it to pick a parent, but then it died
> on the first commit because it was not a merge.

That won't do what you want. Cherry-pick doesn't preserve merges. When
you pick a merge and choose a mainline, it is effectively saying "treat
that as the only interesting parent" and squashes the result down to a
single non-merge commit.

If you wanted to follow this path (starting at an orphan and moving the
patches over), I think rebase's "--preserve-merges" would be your best
bet. It used to have some corner cases, though, and I don't know if
those were ever fixed. I'd say filter-branch is the most-supported way
to do what you want.

> All I want to do is take a branch from linux-stable and create a branch
> that contains just the commits from where it was branched off of master
> until it hits HEAD. That is it. All the scripts that I have seen that
> claim to do just what I want break when it hits a merge or a bogus author.
> (How that got into linux-stable, I have no idea. The commit is 10 years
> old!)

As an aside, which commit caused the bogus-author problem? Filter-branch
generally tries to preserve or fix problems rather than barfing, exactly
because it is often used to rewrite-out crap. I wonder if there is
something it could be doing better (though again, I think in your case
you are hitting the commit only because of an incomplete cut with your
grafts).

-Peff


Re: Odd problems trying to build an orphaned branch

2015-11-05 Thread Jeff King
On Thu, Nov 05, 2015 at 07:18:32PM -0500, Jeff King wrote:

> Of course that leaves only the problem that filter-branch is
> horrendously slow (for the kernel, most of the time goes to populating
> the index for each commit; I think filter-branch could probably learn to
> skip this step if there is no index or tree filter at work).

Here's a totally untested patch that seems to make a filter-branch like
this on the kernel orders of magnitude faster:

diff --git a/git-filter-branch.sh b/git-filter-branch.sh
index 27c9c54..9df5185 100755
--- a/git-filter-branch.sh
+++ b/git-filter-branch.sh
@@ -306,6 +306,13 @@ then
start_timestamp=$(date '+%s')
 fi
 
+if test -n "$filter_index" || test -n "$filter_tree"
+then
+   need_index=t
+else
+   need_index=
+fi
+
 while read commit parents; do
git_filter_branch__commit_count=$(($git_filter_branch__commit_count+1))
 
@@ -313,7 +320,10 @@ while read commit parents; do
 
case "$filter_subdir" in
"")
-   GIT_ALLOW_NULL_SHA1=1 git read-tree -i -m $commit
+   if test -n "$need_index"
+   then
+   GIT_ALLOW_NULL_SHA1=1 git read-tree -i -m $commit
+   fi
;;
*)
# The commit may not have the subdirectory at all
@@ -387,8 +397,15 @@ while read commit parents; do
} <../commit |
eval "$filter_msg" > ../message ||
die "msg filter failed: $filter_msg"
+
+   if test -n "$need_index"
+   then
+   tree=$(git write-tree)
+   else
+   tree="$commit^{tree}"
+   fi
workdir=$workdir @SHELL_PATH@ -c "$filter_commit" "git commit-tree" \
-   $(git write-tree) $parentstr < ../message > ../map/$commit ||
+   "$tree" $parentstr < ../message > ../map/$commit ||
die "could not write rewritten commit"
 done <../revs
 


[PATCH] In configure.ac, try -lpthread in $LIBS instead of $CFLAGS to make picky linkers happy

2015-11-05 Thread Rainer M. Canavan
Some linkers, namely the one on IRIX, are rather strict concerning the order of
arguments for symbol resolution, i.e. libraries listed before objects or
other libraries on the command line are not considered for symbol resolution.
Therefore, -lpthread can't work if it's put in CFLAGS, because it will not be
considered for resolving pthread_key_create in conftest.o. Use $LIBS instead.
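The routing logic in the patch below can be sketched in isolation (variable contents here are made up for illustration): -l* options are libraries and must end up after the objects on the link line, so they belong in LIBS, while plain compiler flags stay in CFLAGS.

```shell
#!/bin/sh
opt="-lpthread"
CFLAGS="-O2"
LIBS="-lcurl"       # whatever the link line already carried

case "$opt" in
-l*) LIBS="$opt $LIBS" ;;       # library: resolved after conftest.o
*)   CFLAGS="$opt $CFLAGS" ;;   # compiler flag: position-independent
esac

# A strict left-to-right linker can now resolve pthread_key_create:
echo "cc $CFLAGS conftest.o $LIBS"
```

Running it prints `cc -O2 conftest.o -lpthread -lcurl`, i.e. the library lands where a strict linker will actually consider it.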

Signed-off-by: Rainer Canavan 
---
 configure.ac | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/configure.ac b/configure.ac
index fd22d41..1f55009 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1149,7 +1149,12 @@ elif test -z "$PTHREAD_CFLAGS"; then
   # would then trigger compiler warnings on every single file we compile.
   for opt in "" -mt -pthread -lpthread; do
  old_CFLAGS="$CFLAGS"
- CFLAGS="$opt $CFLAGS"
+ old_LIBS="$LIBS"
+ case "$opt" in
+-l*)  LIBS="$opt $LIBS" ;;
+*)CFLAGS="$opt $CFLAGS" ;;
+ esac
+
  AC_MSG_CHECKING([for POSIX Threads with '$opt'])
  AC_LINK_IFELSE([PTHREADTEST_SRC],
[AC_MSG_RESULT([yes])
@@ -1161,6 +1166,7 @@ elif test -z "$PTHREAD_CFLAGS"; then
],
[AC_MSG_RESULT([no])])
   CFLAGS="$old_CFLAGS"
+  LIBS="$old_LIBS"
   done
   if test $threads_found != yes; then
 AC_CHECK_LIB([pthread], [pthread_create],
-- 
2.6.2



[PATCH 7/7] contrib/subtree: Handle '--prefix' argument with a slash appended

2015-11-05 Thread David Greene
From: Techlive Zheng 

'git subtree merge' will fail if the argument of '--prefix' has a slash
appended.
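The one-line fix relies on POSIX "remove smallest matching suffix" parameter expansion: ${1%/} drops a single trailing slash, if present, and leaves the argument untouched otherwise. A quick illustration (the helper name is made up):

```shell
#!/bin/sh
# ${1%/} strips one trailing "/" from the first argument, if any.
strip_slash() {
	printf '%s\n' "${1%/}"
}

strip_slash "subdir/"   # → subdir
strip_slash "subdir"    # → subdir (unchanged)
```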

Signed-off-by: Techlive Zheng 
Signed-off-by: David A. Greene 
---
 contrib/subtree/git-subtree.sh |  2 +-
 contrib/subtree/t/t7900-subtree.sh | 20 
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/contrib/subtree/git-subtree.sh b/contrib/subtree/git-subtree.sh
index 308b777..edf36f8 100755
--- a/contrib/subtree/git-subtree.sh
+++ b/contrib/subtree/git-subtree.sh
@@ -90,7 +90,7 @@ while [ $# -gt 0 ]; do
--annotate) annotate="$1"; shift ;;
--no-annotate) annotate= ;;
-b) branch="$1"; shift ;;
-   -P) prefix="$1"; shift ;;
+   -P) prefix="${1%/}"; shift ;;
-m) message="$1"; shift ;;
--no-prefix) prefix= ;;
--onto) onto="$1"; shift ;;
diff --git a/contrib/subtree/t/t7900-subtree.sh b/contrib/subtree/t/t7900-subtree.sh
index 2683d7d..751aee3 100755
--- a/contrib/subtree/t/t7900-subtree.sh
+++ b/contrib/subtree/t/t7900-subtree.sh
@@ -257,6 +257,26 @@ test_expect_success 'merge the added subproj again, should do nothing' '
)
 '
 
+next_test
+test_expect_success 'merge new subproj history into subdir/ with a slash appended to the argument of --prefix' '
+   test_create_repo "$test_count" &&
+   test_create_repo "$test_count/subproj" &&
+   test_create_commit "$test_count" main1 &&
+   test_create_commit "$test_count/subproj" sub1 &&
+   (
+   cd "$test_count" &&
+   git fetch ./subproj master &&
+   git subtree add --prefix=subdir/ FETCH_HEAD
+   ) &&
+   test_create_commit "$test_count/subproj" sub2 &&
+   (
+   cd "$test_count" &&
+   git fetch ./subproj master &&
+   git subtree merge --prefix=subdir/ FETCH_HEAD &&
+   check_equal "$(last_commit_message)" "Merge commit '\''$(git rev-parse FETCH_HEAD)'\''"
+   )
+'
+
 #
 # Tests for 'git subtree split'
 #
-- 
2.6.1
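The one-line fix above relies on POSIX parameter expansion: `${1%/}` removes the shortest match of the pattern `/` from the end of the value, i.e. at most one trailing slash, and leaves the value untouched if there is none. A minimal sketch of the behavior (the `strip_slash` wrapper is mine, for illustration only):

```shell
strip_slash() {
    # ${1%/} deletes the shortest suffix matching "/" from $1,
    # so at most one trailing slash is removed.
    printf '%s\n' "${1%/}"
}

strip_slash "subdir/"    # prints: subdir
strip_slash "subdir"     # prints: subdir
strip_slash "a/b/"       # prints: a/b
```

Note that only one trailing slash is stripped; an argument like `--prefix=subdir//` would still leave one behind.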



[no subject]

2015-11-05 Thread David Greene
I'm processing some old patches I have lying around.  These clean up
git-subtree's test base and refactor the test code so that each test
is independent of the others.  This greatly aids debugging and
post-mortem analysis.

I have rebased these old patches on master, ensuring that new tests
that have been added in the interim are incorporated into the new test
code.

After using git-subtree in real projects for a couple of years and
exploring similar tools that have been developed, I'm fairly convinced
we should change some current behavior of git-subtree.  I have also run
into the need for some additional features.  I'm now in a position
where I can work on those.

This patch set is a prerequisite for that work.



[PATCH 3/7] contrib/subtree: Add tests for subtree add

2015-11-05 Thread David Greene
From: Techlive Zheng 

Add some tests to check various options to subtree add.  These test
various combinations of --message, --prefix and --squash.

Signed-off-by: Techlive Zheng 
Signed-off-by: David A. Greene 
---
 contrib/subtree/t/t7900-subtree.sh | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/contrib/subtree/t/t7900-subtree.sh b/contrib/subtree/t/t7900-subtree.sh
index 4471786..1fa5991 100755
--- a/contrib/subtree/t/t7900-subtree.sh
+++ b/contrib/subtree/t/t7900-subtree.sh
@@ -127,12 +127,24 @@ test_expect_success 'no merge from non-existent subtree' '
test_must_fail git subtree merge --prefix="sub dir" FETCH_HEAD
 '
 
+test_expect_success 'add subproj as subtree into sub dir/ with --prefix' '
+   git subtree add --prefix="sub dir" sub1 &&
+   check_equal "$(last_commit_message)" "Add '\''sub dir/'\'' from commit '\''$(git rev-parse sub1)'\''" &&
+   undo
+'
+
 test_expect_success 'check if --message works for add' '
git subtree add --prefix="sub dir" --message="Added subproject" sub1 &&
check_equal ''"$(last_commit_message)"'' "Added subproject" &&
undo
 '
 
+test_expect_success 'add subproj as subtree into sub dir/ with --prefix and --message' '
+   git subtree add --prefix="sub dir" --message="Added subproject" sub1 &&
+   check_equal "$(last_commit_message)" "Added subproject" &&
+   undo
+'
+
 test_expect_success 'check if --message works as -m and --prefix as -P' '
	git subtree add -P "sub dir" -m "Added subproject using git subtree" sub1 &&
	check_equal ''"$(last_commit_message)"'' "Added subproject using git subtree" &&
@@ -145,6 +157,13 @@ test_expect_success 'check if --message works with squash too' '
undo
 '
 
+test_expect_success 'add subproj as subtree into sub dir/ with --squash and --prefix and --message' '
+   git subtree add --prefix="sub dir" --message="Added subproject with squash" --squash sub1 &&
+   check_equal "$(last_commit_message)" "Added subproject with squash" &&
+   undo
+'
+
+# Maybe delete
 test_expect_success 'add subproj to mainline' '
git subtree add --prefix="sub dir"/ FETCH_HEAD &&
	check_equal ''"$(last_commit_message)"'' "Add '"'sub dir/'"' from commit '"'"'''"$(git rev-parse sub1)"'''"'"'"
-- 
2.6.1



[PATCH 6/7] contrib/subtree: Make each test self-contained

2015-11-05 Thread David Greene
From: Techlive Zheng 

Each test runs a full repository creation and any subtree actions
needed to perform the test.  Each test starts with a clean slate,
making debugging and post-mortem analysis much easier.

Signed-off-by: Techlive Zheng 
Signed-off-by: David A. Greene 
---
 contrib/subtree/t/t7900-subtree.sh | 1258 
 1 file changed, 840 insertions(+), 418 deletions(-)

diff --git a/contrib/subtree/t/t7900-subtree.sh b/contrib/subtree/t/t7900-subtree.sh
index 6250194..2683d7d 100755
--- a/contrib/subtree/t/t7900-subtree.sh
+++ b/contrib/subtree/t/t7900-subtree.sh
@@ -14,6 +14,15 @@ export TEST_DIRECTORY
 
 . ../../../t/test-lib.sh
 
+subtree_test_create_repo()
+{
+   test_create_repo "$1"
+   (
+   cd $1
+   git config log.date relative
+   )
+}
+
 create()
 {
echo "$1" >"$1"
@@ -61,515 +70,928 @@ join_commits()
echo "$commit $all"
 }
 
+test_create_commit() (
+   repo=$1
+   commit=$2
+   cd "$repo"
+   mkdir -p $(dirname "$commit") \
+   || error "Could not create directory for commit"
+   echo "$commit" >"$commit"
+   git add "$commit" || error "Could not add commit"
+   git commit -m "$commit" || error "Could not commit"
+)
+
 last_commit_message()
 {
git log --pretty=format:%s -1
 }
 
-test_expect_success 'init subproj' '
-   test_create_repo "sub proj"
-'
-
-# To the subproject!
-cd ./"sub proj"
-
-test_expect_success 'add sub1' '
-   create sub1 &&
-   git commit -m "sub1" &&
-   git branch sub1 &&
-   git branch -m master subproj
-'
-
-# Save this hash for testing later.
-
-subdir_hash=$(git rev-parse HEAD)
-
-test_expect_success 'add sub2' '
-   create sub2 &&
-   git commit -m "sub2" &&
-   git branch sub2
-'
-
-test_expect_success 'add sub3' '
-   create sub3 &&
-   git commit -m "sub3" &&
-   git branch sub3
-'
-
-# Back to mainline
-cd ..
-
-test_expect_success 'enable log.date=relative to catch errors' '
-   git config log.date relative
-'
-
-test_expect_success 'add main4' '
-   create main4 &&
-   git commit -m "main4" &&
-   git branch -m master mainline &&
-   git branch subdir
-'
-
-test_expect_success 'fetch subproj history' '
-   git fetch ./"sub proj" sub1 &&
-   git branch sub1 FETCH_HEAD
-'
-
-test_expect_success 'no subtree exists in main tree' '
-   test_must_fail git subtree merge --prefix="sub dir" sub1
-'
+subtree_test_count=0
+next_test() {
+   subtree_test_count=$(($subtree_test_count+1))
+}
 
-test_expect_success 'no pull from non-existant subtree' '
-   test_must_fail git subtree pull --prefix="sub dir" ./"sub proj" sub1
-'
+#
+# Tests for 'git subtree add'
+#
 
+next_test
 test_expect_success 'no merge from non-existent subtree' '
-   test_must_fail git subtree merge --prefix="sub dir" FETCH_HEAD
+   subtree_test_create_repo "$subtree_test_count" &&
+   subtree_test_create_repo "$subtree_test_count/sub proj" &&
+   test_create_commit "$subtree_test_count" main1 &&
+   test_create_commit "$subtree_test_count/sub proj" sub1 &&
+   (
+   cd "$subtree_test_count" &&
+   git fetch ./"sub proj" master &&
+   test_must_fail git subtree merge --prefix="sub dir" FETCH_HEAD
+   )
 '
 
-test_expect_success 'add subproj as subtree into sub dir/ with --prefix' '
-   git subtree add --prefix="sub dir" sub1 &&
-   check_equal "$(last_commit_message)" "Add '\''sub dir/'\'' from commit '\''$(git rev-parse sub1)'\''" &&
-   undo
-'
+next_test
+test_expect_success 'no pull from non-existent subtree' '
+   subtree_test_create_repo "$subtree_test_count" &&
+   subtree_test_create_repo "$subtree_test_count/sub proj" &&
+   test_create_commit "$subtree_test_count" main1 &&
+   test_create_commit "$subtree_test_count/sub proj" sub1 &&
+   (
+   cd "$subtree_test_count" &&
+   git fetch ./"sub proj" master &&
+   test_must_fail git subtree pull --prefix="sub dir" ./"sub proj" master
+   )'
 
-test_expect_success 'check if --message works for add' '
-   git subtree add --prefix="sub dir" --message="Added subproject" sub1 &&
-   check_equal ''"$(last_commit_message)"'' "Added subproject" &&
-   undo
+next_test
+test_expect_success 'add subproj as subtree into sub dir/ with --prefix' '
+   subtree_test_create_repo "$subtree_test_count" &&
+   subtree_test_create_repo "$subtree_test_count/sub proj" &&
+   test_create_commit "$subtree_test_count" main1 &&
+   test_create_commit "$subtree_test_count/sub proj" sub1 &&
+   (
+   cd "$subtree_test_count" &&
+   git fetch ./"sub proj" master &&
+   git subtree add --prefix="sub dir" FETCH_HEAD &&
+   check_equal "$(last_commit_message)" "Add '\''sub dir/'\'' from commit '\''$(git rev-parse FETCH_HEAD)'\''"
+   )
 '
 
+next_tes
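The `next_test`/`subtree_test_count` pair introduced in this patch gives every test its own numbered scratch directory, which is what makes each test self-contained. A minimal sketch of the counter, assuming nothing beyond POSIX shell:

```shell
subtree_test_count=0
next_test() {
    # POSIX arithmetic expansion; no external commands needed.
    subtree_test_count=$(($subtree_test_count+1))
}

# Each test bumps the counter first, then uses it as a directory name,
# so test N leaves its state behind in ./N for post-mortem inspection.
next_test
echo "first test runs in ./$subtree_test_count"
next_test
echo "second test runs in ./$subtree_test_count"
```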

[PATCH 1/7] contrib/subtree: Clean and refactor test code

2015-11-05 Thread David Greene
From: Techlive Zheng 

Mostly preparation for the later test refactoring.  This moves some
common code into helper functions and generally cleans things up to be
more presentable.

Signed-off-by: Techlive Zheng 
Signed-off-by: David A. Greene 
---
 contrib/subtree/t/Makefile |  31 ---
 contrib/subtree/t/t7900-subtree.sh | 103 -
 2 files changed, 79 insertions(+), 55 deletions(-)

diff --git a/contrib/subtree/t/Makefile b/contrib/subtree/t/Makefile
index c864810..276898e 100644
--- a/contrib/subtree/t/Makefile
+++ b/contrib/subtree/t/Makefile
@@ -13,11 +13,23 @@ TAR ?= $(TAR)
 RM ?= rm -f
 PROVE ?= prove
 DEFAULT_TEST_TARGET ?= test
+TEST_LINT ?= test-lint
+
+ifdef TEST_OUTPUT_DIRECTORY
+TEST_RESULTS_DIRECTORY = $(TEST_OUTPUT_DIRECTORY)/test-results
+else
+TEST_RESULTS_DIRECTORY = ../../../t/test-results
+endif
 
 # Shell quote;
 SHELL_PATH_SQ = $(subst ','\'',$(SHELL_PATH))
+PERL_PATH_SQ = $(subst ','\'',$(PERL_PATH))
+TEST_RESULTS_DIRECTORY_SQ = $(subst ','\'',$(TEST_RESULTS_DIRECTORY))
 
-T = $(wildcard t[0-9][0-9][0-9][0-9]-*.sh)
+T = $(sort $(wildcard t[0-9][0-9][0-9][0-9]-*.sh))
+TSVN = $(sort $(wildcard t91[0-9][0-9]-*.sh))
+TGITWEB = $(sort $(wildcard t95[0-9][0-9]-*.sh))
+THELPERS = $(sort $(filter-out $(T),$(wildcard *.sh)))
 
 all: $(DEFAULT_TEST_TARGET)
 
@@ -26,20 +38,22 @@ test: pre-clean $(TEST_LINT)
 
 prove: pre-clean $(TEST_LINT)
	@echo "*** prove ***"; GIT_CONFIG=.git/config $(PROVE) --exec '$(SHELL_PATH_SQ)' $(GIT_PROVE_OPTS) $(T) :: $(GIT_TEST_OPTS)
-   $(MAKE) clean
+   $(MAKE) clean-except-prove-cache
 
 $(T):
	@echo "*** $@ ***"; GIT_CONFIG=.git/config '$(SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
 
 pre-clean:
-   $(RM) -r test-results
+   $(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
 
-clean:
-   $(RM) -r 'trash directory'.* test-results
+clean-except-prove-cache:
+   $(RM) -r 'trash directory'.* '$(TEST_RESULTS_DIRECTORY_SQ)'
$(RM) -r valgrind/bin
+
+clean: clean-except-prove-cache
$(RM) .prove
 
-test-lint: test-lint-duplicates test-lint-executable
+test-lint: test-lint-duplicates test-lint-executable test-lint-shell-syntax
 
 test-lint-duplicates:
@dups=`echo $(T) | tr ' ' '\n' | sed 's/-.*//' | sort | uniq -d` && \
@@ -51,12 +65,15 @@ test-lint-executable:
test -z "$$bad" || { \
echo >&2 "non-executable tests:" $$bad; exit 1; }
 
+test-lint-shell-syntax:
+   @'$(PERL_PATH_SQ)' ../../../t/check-non-portable-shell.pl $(T) $(THELPERS)
+
 aggregate-results-and-cleanup: $(T)
$(MAKE) aggregate-results
$(MAKE) clean
 
 aggregate-results:
-   for f in ../../../t/test-results/t*-*.counts; do \
+   for f in '$(TEST_RESULTS_DIRECTORY_SQ)'/t*-*.counts; do \
echo "$$f"; \
done | '$(SHELL_PATH_SQ)' ../../../t/aggregate-results.sh
 
diff --git a/contrib/subtree/t/t7900-subtree.sh b/contrib/subtree/t/t7900-subtree.sh
index dfbe443..f9dda3d 100755
--- a/contrib/subtree/t/t7900-subtree.sh
+++ b/contrib/subtree/t/t7900-subtree.sh
@@ -5,7 +5,7 @@
 #
 test_description='Basic porcelain support for subtrees
 
-This test verifies the basic operation of the merge, pull, add
+This test verifies the basic operation of the add, pull, merge
 and split subcommands of git subtree.
 '
 
@@ -20,7 +20,6 @@ create()
git add "$1"
 }
 
-
 check_equal()
 {
test_debug 'echo'
@@ -38,6 +37,30 @@ undo()
git reset --hard HEAD~
 }
 
+# Make sure no patch changes more than one file.
+# The original set of commits changed only one file each.
+# A multi-file change would imply that we pruned commits
+# too aggressively.
+join_commits()
+{
+   commit=
+   all=
+   while read x y; do
+   if [ -z "$x" ]; then
+   continue
+   elif [ "$x" = "commit:" ]; then
+   if [ -n "$commit" ]; then
+   echo "$commit $all"
+   all=
+   fi
+   commit="$y"
+   else
+   all="$all $y"
+   fi
+   done
+   echo "$commit $all"
+}
+
 last_commit_message()
 {
git log --pretty=format:%s -1
@@ -123,9 +146,11 @@ test_expect_success 'add subproj to mainline' '
	check_equal ''"$(last_commit_message)"'' "Add '"'sub dir/'"' from commit '"'"'''"$(git rev-parse sub1)"'''"'"'"
 '
 
-# this shouldn't actually do anything, since FETCH_HEAD is already a parent
-test_expect_success 'merge fetched subproj' '
-   git merge -m "merge -s -ours" -s ours FETCH_HEAD
+test_expect_success 'merge the added subproj again, should do nothing' '
+   # this should not actually do anything, since FETCH_HEAD
+   # is already a parent
+   result=$(git merge -s ours -m "merge -s -ours" FETCH_HEAD) &&
+   check_equal "${result}" "Already up-to-date."
 '
 
 test_expect_success 'add main-sub5' '
@@ -167,7 +192,7 @@ test_ex

[PATCH 2/7] contrib/subtree: Add test for missing subtree

2015-11-05 Thread David Greene
From: Techlive Zheng 

Test that a merge from a non-existent subtree fails.

Signed-off-by: Techlive Zheng 
Signed-off-by: David A. Greene 
---
 contrib/subtree/t/t7900-subtree.sh | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/contrib/subtree/t/t7900-subtree.sh b/contrib/subtree/t/t7900-subtree.sh
index f9dda3d..4471786 100755
--- a/contrib/subtree/t/t7900-subtree.sh
+++ b/contrib/subtree/t/t7900-subtree.sh
@@ -123,6 +123,10 @@ test_expect_success 'no pull from non-existant subtree' '
test_must_fail git subtree pull --prefix="sub dir" ./"sub proj" sub1
 '
 
+test_expect_success 'no merge from non-existent subtree' '
+   test_must_fail git subtree merge --prefix="sub dir" FETCH_HEAD
+'
+
 test_expect_success 'check if --message works for add' '
git subtree add --prefix="sub dir" --message="Added subproject" sub1 &&
check_equal ''"$(last_commit_message)"'' "Added subproject" &&
-- 
2.6.1



[PATCH 5/7] contrib/subtree: Add split tests

2015-11-05 Thread David Greene
From: Techlive Zheng 

Add tests to check various options to split.  Check combinations of
--prefix, --message, --annotate, --branch and --rejoin.

Signed-off-by: Techlive Zheng 
Signed-off-by: David A. Greene 
---
 contrib/subtree/t/t7900-subtree.sh | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/contrib/subtree/t/t7900-subtree.sh b/contrib/subtree/t/t7900-subtree.sh
index 7d59a1a..6250194 100755
--- a/contrib/subtree/t/t7900-subtree.sh
+++ b/contrib/subtree/t/t7900-subtree.sh
@@ -250,7 +250,6 @@ test_expect_success 'split requires path given by option --prefix must exist' '
 
 test_expect_success 'check if --message works for split+rejoin' '
	spl1=''"$(git subtree split --annotate='"'*'"' --prefix "sub dir" --onto FETCH_HEAD --message "Split & rejoin" --rejoin)"'' &&
-   git branch spl1 "$spl1" &&
check_equal ''"$(last_commit_message)"'' "Split & rejoin" &&
undo
 '
@@ -282,7 +281,21 @@ test_expect_success 'check split with --branch for an incompatible branch' '
	test_must_fail git subtree split --prefix "sub dir" --onto FETCH_HEAD --branch subdir
 '
 
-test_expect_success 'check split+rejoin' '
+test_expect_success 'split sub dir/ with --rejoin' '
+   spl1=$(git subtree split --prefix="sub dir" --annotate="*") &&
+   git branch spl1 "$spl1" &&
+   git subtree split --prefix="sub dir" --annotate="*" --rejoin &&
+   check_equal "$(last_commit_message)" "Split '\''sub dir/'\'' into commit '\''$spl1'\''" &&
+   undo
+'
+
+test_expect_success 'split sub dir/ with --rejoin and --message' '
+   git subtree split --prefix="sub dir" --message="Split & rejoin" --annotate="*" --rejoin &&
+   check_equal "$(last_commit_message)" "Split & rejoin" &&
+   undo
+'
+
+test_expect_success 'check split+rejoin+onto' '
	spl1=''"$(git subtree split --annotate='"'*'"' --prefix "sub dir" --onto FETCH_HEAD --message "Split & rejoin" --rejoin)"'' &&
undo &&
	git subtree split --annotate='"'*'"' --prefix "sub dir" --onto FETCH_HEAD --rejoin &&
-- 
2.6.1



[PATCH 4/7] contrib/subtree: Add merge tests

2015-11-05 Thread David Greene
From: Techlive Zheng 

Add some tests for various merge operations.  Test combinations of merge
with --message, --prefix and --squash.

Signed-off-by: Techlive Zheng 
Signed-off-by: David A. Greene 
---
 contrib/subtree/t/t7900-subtree.sh | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/contrib/subtree/t/t7900-subtree.sh b/contrib/subtree/t/t7900-subtree.sh
index 1fa5991..7d59a1a 100755
--- a/contrib/subtree/t/t7900-subtree.sh
+++ b/contrib/subtree/t/t7900-subtree.sh
@@ -210,11 +210,22 @@ test_expect_success 'check if --message for merge works with squash too' '
 
 test_expect_success 'merge new subproj history into subdir' '
git subtree merge --prefix="sub dir" FETCH_HEAD &&
-   git branch pre-split &&
	check_equal ''"$(last_commit_message)"'' "Merge commit '"'"'"$(git rev-parse sub2)"'"'"' into mainline" &&
undo
 '
 
+test_expect_success 'merge new subproj history into subdir/ with --prefix and --message' '
+   git subtree merge --prefix="sub dir" --message="Merged changes from subproject" FETCH_HEAD &&
+   check_equal "$(last_commit_message)" "Merged changes from subproject" &&
+   undo
+'
+
+test_expect_success 'merge new subproj history into subdir/ with --squash and --prefix and --message' '
+   git subtree merge --prefix="sub dir" --message="Merged changes from subproject using squash" --squash FETCH_HEAD &&
+   check_equal "$(last_commit_message)" "Merged changes from subproject using squash" &&
+   undo
+'
+
 test_expect_success 'split requires option --prefix' '
echo "You must provide the --prefix option." > expected &&
test_must_fail git subtree split > actual 2>&1 &&
-- 
2.6.1



Re: [PATCH 1/2] run-command: Remove set_nonblocking

2015-11-05 Thread Johannes Sixt

Am 05.11.2015 um 23:20 schrieb Stefan Beller:

On Thu, Nov 5, 2015 at 12:27 PM, Johannes Sixt  wrote:


diff --git a/run-command.c b/run-command.c
index 51d078c..3e42299 100644
--- a/run-command.c
+++ b/run-command.c
@@ -977,7 +977,7 @@ static struct parallel_processes *pp_init(int n,
 for (i = 0; i < n; i++) {
 strbuf_init(&pp->children[i].err, 0);
 child_process_init(&pp->children[i].process);
-   pp->pfd[i].events = POLLIN;
+   pp->pfd[i].events = POLLIN|POLLHUP;
 pp->pfd[i].fd = -1;
 }
 sigchain_push_common(handle_children_on_signal);
@@ -1061,11 +1061,17 @@ static void pp_buffer_stderr(struct parallel_processes *pp, int output_timeout)
 /* Buffer output from all pipes. */
 for (i = 0; i < pp->max_processes; i++) {
 if (pp->children[i].in_use &&
-   pp->pfd[i].revents & POLLIN)
-   if (strbuf_read_once(&pp->children[i].err,
-pp->children[i].process.err, 0) < 0)
+   pp->pfd[i].revents & (POLLIN|POLLHUP)) {
+   int n = strbuf_read_once(&pp->children[i].err,
+pp->children[i].process.err, 0);
+   if (n == 0) {
+   close(pp->children[i].process.err);
+   pp->children[i].process.err = -1;


So you set .err to -1 to signal the process has ended here...


-
 for (i = 0; i < pp->max_processes; i++)
 if (pp->children[i].in_use &&
-   pid == pp->children[i].process.pid)
+   pp->children[i].process.err == -1)
 break;


to make a decision here if we want to finish_command on it.


Correct.


+   code = finish_command(&pp->children[i].process);



-   child_process_clear(&pp->children[i].process);


but .err stays stays -1 here for the next iteration?
We would need to reset it to 0 again.


No. In the next round, we need -1 to request a pipe. get_next_task 
callback sets it to -1 as well (and I think it is wrong that the 
callback does it; pp_start_one should do that).



So .err is
   0 when the slot is not in use
  -1 when the child has finished awaiting termination
  >0 when the child is living a happy life.


But, as I said, .err is not the right place to mark dying processes (it 
was just the simplest way to demonstrate the concept in this patch). 
Better extend .in_use to a tri-state indicator.


-- Hannes



[PATCH] filter-branch: skip index read/write when possible

2015-11-05 Thread Jeff King
On Thu, Nov 05, 2015 at 07:20:48PM -0500, Jeff King wrote:

> Here's a totally untested patch that seems to make a filter-branch like
> this on the kernel orders of magnitude faster:

Testing shows that it is indeed broken. :)

If $filter_subdir is set, it handles the index read itself, but my
earlier patch did not correctly do the write.

This one should work for all cases (unless the user does something
really strange, like expect to manipulate the index inside the
--env-filter or something, but IMHO it is insane for anyone to rely on
that working).

-- >8 --
Subject: filter-branch: skip index read/write when possible

If the user specifies an index filter but not a tree filter,
filter-branch cleverly avoids checking out the tree
entirely. But we don't do the next level of optimization: if
you have no index or tree filter, we do not need to read the
index at all.

This can greatly speed up cases where we are only changing
the commit objects (e.g., cementing a graft into place).
Here are numbers from the newly-added perf test:

  Test                  HEAD^              HEAD
  ---------------------------------------------------------------
  7000.2: noop filter   13.81(4.95+0.83)   5.43(0.42+0.43) -60.7%

Signed-off-by: Jeff King 
---
Those numbers are from git.git. The bigger your tree, the better the
speedup (I didn't run the perf test, because even the span of
HEAD~100..HEAD takes tens of minutes for each trial with the old code.
With the new it's less than 30 seconds).

 git-filter-branch.sh          | 23 +++++++++++++++++++++--
 t/perf/p7000-filter-branch.sh | 19 +++++++++++++++++++
 2 files changed, 40 insertions(+), 2 deletions(-)
 create mode 100755 t/perf/p7000-filter-branch.sh

diff --git a/git-filter-branch.sh b/git-filter-branch.sh
index 27c9c54..d61f9ba 100755
--- a/git-filter-branch.sh
+++ b/git-filter-branch.sh
@@ -306,6 +306,15 @@ then
start_timestamp=$(date '+%s')
 fi
 
+if test -n "$filter_index" ||
+   test -n "$filter_tree" ||
+   test -n "$filter_subdir"
+then
+   need_index=t
+else
+   need_index=
+fi
+
 while read commit parents; do
git_filter_branch__commit_count=$(($git_filter_branch__commit_count+1))
 
@@ -313,7 +322,10 @@ while read commit parents; do
 
case "$filter_subdir" in
"")
-   GIT_ALLOW_NULL_SHA1=1 git read-tree -i -m $commit
+   if test -n "$need_index"
+   then
+   GIT_ALLOW_NULL_SHA1=1 git read-tree -i -m $commit
+   fi
;;
*)
# The commit may not have the subdirectory at all
@@ -387,8 +399,15 @@ while read commit parents; do
} <../commit |
eval "$filter_msg" > ../message ||
die "msg filter failed: $filter_msg"
+
+   if test -n "$need_index"
+   then
+   tree=$(git write-tree)
+   else
+   tree="$commit^{tree}"
+   fi
workdir=$workdir @SHELL_PATH@ -c "$filter_commit" "git commit-tree" \
-   $(git write-tree) $parentstr < ../message > ../map/$commit ||
+   "$tree" $parentstr < ../message > ../map/$commit ||
die "could not write rewritten commit"
 done <../revs
 
diff --git a/t/perf/p7000-filter-branch.sh b/t/perf/p7000-filter-branch.sh
new file mode 100755
index 000..15ee5d1
--- /dev/null
+++ b/t/perf/p7000-filter-branch.sh
@@ -0,0 +1,19 @@
+#!/bin/sh
+
+test_description='performance of filter-branch'
+. ./perf-lib.sh
+
+test_perf_default_repo
+test_checkout_worktree
+
+test_expect_success 'mark bases for tests' '
+   git tag -f tip &&
+   git tag -f base HEAD~100
+'
+
+test_perf 'noop filter' '
+   git checkout --detach tip &&
+   git filter-branch -f base..HEAD
+'
+
+test_done
-- 
2.6.2.711.g30c79de
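The heart of the patch is a single flag computed once before the commit loop: the index is read and written only when some filter actually needs it. A sketch of that decision in isolation (variable names taken from the patch; the `compute_need_index` wrapper is mine, for illustration):

```shell
# need_index is non-empty iff a filter reads or writes the index.
# A plain commit rewrite (msg/env/parent filters only) can then skip
# read-tree and write-tree entirely and reuse $commit^{tree}.
compute_need_index() {
    filter_index=$1 filter_tree=$2 filter_subdir=$3
    if test -n "$filter_index" ||
       test -n "$filter_tree" ||
       test -n "$filter_subdir"
    then
        need_index=t
    else
        need_index=
    fi
}

compute_need_index "" "" ""
echo "noop rewrite: need_index='$need_index'"   # empty: index skipped
compute_need_index "" "some-tree-filter" ""
echo "tree filter:  need_index='$need_index'"   # t: must write-tree
```

This is why the noop-filter case above gets the large speedup: with no index, tree, or subdirectory filter, the per-commit work reduces to rewriting the commit object itself.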
