Re: [PATCH 3/3] send-email: also pick up cc addresses from -by trailers

2018-10-15 Thread Junio C Hamano
Rasmus Villemoes  writes:

>> I wonder if it would make it easier to grok if we made the logic
>> inside out, i.e.
>> 
>>  if ($sc eq $sender) {
>>  ...
>>  } else {
>>  if ($what =~ /^Signed-off-by$/i) {
>>  next if $suppress_cc{'sob'};
>>  } elsif ($what =~ /-by$/i) {
>>  next if $suppress_cc{'misc'};
>>  } elsif ($what =~ /^Cc$/i) {
>>  next if $suppress_cc{'bodycc'};
>>  }
>
> ...yes, that's probably more readable.

OK, unless there are more comments and suggestions for improvement,
can you send in a final version sometime in the not-so-distant future
so that we won't forget?  It may be surprising to existing users that
the command now suddenly adds more addresses the user did not think
would be added, but it would probably be easy enough for them to
work around.  I'll need to prepare a note in the draft release notes
to describe backward (in)compatibility to warn users.


Re: [PATCH v2 06/13] Add a build definition for Azure DevOps

2018-10-15 Thread Junio C Hamano
"Johannes Schindelin via GitGitGadget" 
writes:

> Also, we make use of the shiny new feature we just introduced where the
> test suite can output JUnit-style .xml files. This information is made
> available in a nice UI that allows the viewer to filter by phase and/or
> test number, and to see trends such as: number of (failing) tests, time
> spent running the test suite, etc.
>
> Signed-off-by: Johannes Schindelin 
> ---
>  azure-pipelines.yml   | 319 ++
>  ci/mount-fileshare.sh |  26 
>  2 files changed, 345 insertions(+)
>  create mode 100644 azure-pipelines.yml
>  create mode 100755 ci/mount-fileshare.sh

I wonder if there is a need to keep what is tested by this and
Travis in sync in any way, but most of the logic is not defined in
these "steps" but implemented in ci/*.sh scripts to be shared, so it
would be OK, I guess.

> diff --git a/ci/mount-fileshare.sh b/ci/mount-fileshare.sh
> new file mode 100755
> index 00..5fb5f74b70
> --- /dev/null
> +++ b/ci/mount-fileshare.sh
> @@ -0,0 +1,26 @@
> +#!/bin/sh
> +
> +die () {
> + echo "$*" >&2
> + exit 1
> +}
> +
> +test $# = 4 ||
> +die "Usage: $0'.

> +
> +mkdir -p "$4" || die "Could not create $4"
> +
> +case "$(uname -s)" in
> +Linux)
> +	sudo mount -t cifs -o vers=3.0,username="$2",password="$3",dir_mode=0777,file_mode=0777,serverino "$1" "$4"
> + ;;
> +Darwin)
> + pass="$(echo "$3" | sed -e 's/\//%2F/g' -e 's/+/%2B/g')" &&
> + mount -t smbfs,soft "smb://$2:$pass@${1#//}" "$4"
> + ;;
> +*)
> + die "No support for $(uname -s)"
> + ;;
> +esac ||
> +die "Could not mount $4"
> +

Trailing blank line.

Thanks.


Re: [PATCH v2 01/13] ci: rename the library of common functions

2018-10-15 Thread Junio C Hamano
"Johannes Schindelin via GitGitGadget" 
writes:

> From: Johannes Schindelin 
>
> The name is hard-coded to reflect that we use Travis CI for continuous
> testing.
>
> In the next commits, we will extend this to be able use Azure DevOps,
> too.
>
> So let's adjust the name to make it more generic.
>
> Signed-off-by: Johannes Schindelin 
> ---

With many pieces still shared across these two CI systems, this
looks like an excellent move.

I was wondering if the other extreme of moving ci/* to ci/travis/*,
introducing ci/azure/* and sharing nothing was needed, but being
able to share a meaningful amount of set-up sounds quite good.


Re: [PATCH] submodule helper: convert relative URL to absolute URL if needed

2018-10-15 Thread Junio C Hamano
Jonathan Nieder  writes:

>> git config submodule.active .
>> git submodule update
>> fatal: repository '../plugins/codemirror-editor' does not exist
>> fatal: clone of '../plugins/codemirror-editor' into submodule path 
>> '/tmp/gerrit/plugins/codemirror-editor' failed
>> Failed to clone 'plugins/codemirror-editor'. Retry scheduled
>> [...]
>> fatal: clone of '../plugins/codemirror-editor' into submodule path 
>> '/tmp/gerrit/plugins/codemirror-editor' failed
>> Failed to clone 'plugins/codemirror-editor' a second time, aborting
>> [...]
>
> Thanks.
>
> "git log" and other tools have the ability to rewrap lines and will get
> confused by this transcript.  Can you indent it to unconfuse them?
>
>> Signed-off-by: Stefan Beller 
>
> Please also credit the bug reporter:
>
> Reported-by: Jaewoong Jung 
>
> ...
> nit: inconsistent vertical whitespace (extra blank line?)
> ...
> I think this would be easier to read with all the release commands
> together:
> ...

All good points.


Re: [PATCH 1/1] subtree: add build targets 'man' and 'html'

2018-10-15 Thread Junio C Hamano
Junio C Hamano  writes:

> Christian Hesse  writes:
>
>> Junio C Hamano  on Wed, 2018/10/10 11:26:
>>> As 'contrib' material without real maintenance, I do not care too
>>> deeply, but shouldn't this change be more like this to avoid
>>> duplicating the list of targets?
>>
>> Probably, yes.
>> Do you want to add this yourself or do you want me to send an updated patch
>> or one on top of the last change?
>
> In principle either is fine but keep in mind that I'll likely forget
> if you leave it up to me.

Actually, I take it back.  The original patch is already in 'next',
so an incremental on top of what you sent is the only valid
improvement ;-)



Re: [PATCH 0/9] Offer to run CI/PR builds in Visual Studio Team Services

2018-10-15 Thread Junio C Hamano
Johannes Schindelin  writes:

> AFAIR Junio does not push to github.com/git/git, it is an automatic
> mirror.
>
> GitLab could easily do the same.

It used to be in the early days, but these days git/git and
gitster/git are updated in the same for loop that pushes to various
destinations.  You are correct that GitLab or any other hosting
site could do the same polling and mirroring.  I am just too lazy
to open a new account at yet another hosting site to add it to that
for loop, but I may choose to when I am absolutely bored and have
nothing else to do ;-).



Re: [PATCH 0/9] Offer to run CI/PR builds in Visual Studio Team Services

2018-10-15 Thread Junio C Hamano
Ævar Arnfjörð Bjarmason  writes:

> I have not reviewed this in any detail, but incorporating this in some
> form or other seems like a no-brainer to me.
>
> If we have "free" (from the perspective of the project) CPU being
> offered by various CI setups let's use it.

Somebody else said in a separate subthread that there are people who
want to limit the CI, and if we pick this we may want to retire
Travis.  Naively I'd say the more the merrier (i.e. agreeing with
your above statement), but I may be missing something.


Re: [PATCH 0/4] Bloom filter experiment

2018-10-15 Thread Junio C Hamano
Derrick Stolee  writes:

> 2. The filters are sized according to the number of changes in each
> commit, with a minimum of one 64-bit word.
> ...
> 6. When we compute the Bloom filters, we don't store a filter for
> commits whose first-parent diff has more than 512 paths.

Just being curious, but was 512 taken out of thin air, or is there
some math behind it, e.g. to limit the false-positive rate below a
certain threshold?  With a wide-enough bitset, you could store an
arbitrarily large number of paths with a low enough false-positive
rate, I guess, but is there a point where there are so many paths in
the change that we get diminishing returns and it is not worth having
a filter in the first place?
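
For reference, the textbook false-positive estimate for a Bloom filter
with m bits, k hash functions and n inserted paths is (whether 512 was
derived from something like this is exactly what I am asking):

    p \approx \left(1 - e^{-kn/m}\right)^{k},
    \qquad k_{\mathrm{opt}} = \frac{m}{n}\ln 2
    \;\Rightarrow\; p \approx 0.6185^{\,m/n}

so at, say, 10 bits per path the rate stays around 0.8% regardless of
how many paths there are, as long as the filter keeps growing
proportionally with the number of changed paths.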

In a normal source-code-control context, the set of paths modified
by any single commit ought to be a small subset of the entire paths,
and whole-tree changes ought to be fairly rare.  In a project for
which that assumption does not hold, it might help to have a
negative bloom filter (i.e. "git log -- A" asks "does the commit
modify A?" and the filter would say "we know it does not, because we
threw all the paths that are not touched to the bloom filter"), but
I think that would optimize for a wrong case.



Re: [PATCHv2 0/3] git-p4: improved unshelving - small fixes

2018-10-15 Thread Junio C Hamano
Luke Diamand  writes:

> This is the same as my earlier patch other than fixing the
> documentation to reflect the change to the destination branch,
> as noticed by Junio.

Also you set up when-finished driver for clean-up before running
clone, which I think is a good change, too.

Will replace.  Thanks.

>
> Luke Diamand (3):
>   git-p4: do not fail in verbose mode for missing 'fileSize' key
>   git-p4: unshelve into refs/remotes/p4-unshelved, not
> refs/remotes/p4/unshelved
>   git-p4: fully support unshelving changelists
>
>  Documentation/git-p4.txt | 10 ++---
>  git-p4.py| 90 +++-
>  t/t9832-unshelve.sh  | 75 ++---
>  3 files changed, 117 insertions(+), 58 deletions(-)


Re: [PATCH 2/3] http: add support for disabling SSL revocation checks in cURL

2018-10-15 Thread Junio C Hamano
"Brendan Forster via GitGitGadget"  writes:

> Note: an earlier iteration tried to use the config setting
> http.schannel.checkRevoke, but the http.* config settings can be limited
> to specific URLs via http.<url>.* (which would mistake `schannel` for a
> URL).

Yeah, "http.schannel.anything" would not work, but is this note
relevant here?  As far as the git development community as a whole
is concerned, this is the first iteration of the patch we see and
review.

In any case, you can use "http.<url>.$variable" to say "I want the
http.$variable to be in effect but only when I am talking to <url>".
Does it make sense for this new variable, too?  That is, does it
benefit the users to be able to do something like this?

[http] schannelCheckRevoke = no
[http "https://microsoft.com/;] schannelCheckRevoke = yes

I am guessing that the answer is yes.

I guess the same comment applies to the previous step, but I suspect
that the code structure may not allow us to switch the SSL backend
so late in the game (e.g. "when talking to microsoft, use schannel,
but when talking to github, use openssl").

> +#if LIBCURL_VERSION_NUM >= 0x072c00
> +	curl_easy_setopt(result, CURLOPT_SSL_OPTIONS, CURLSSLOPT_NO_REVOKE);
> +#else
> +	warning("CURLSSLOPT_NO_REVOKE not applied to curl SSL options because\n"
> +		"your curl version is too old (>= 7.44.0)");
> +#endif

That ">=" is hard to grok.  I think you meant it to be pronounced
"requries at least", but that is not a common reading.  People more
commonly pronounce it "is greater than or equal to".
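
Spelling it out in the message would avoid the ambiguity, e.g.
something like this (just a possible rewording, keeping the 7.44.0
figure from your patch):

	#if LIBCURL_VERSION_NUM >= 0x072c00
		curl_easy_setopt(result, CURLOPT_SSL_OPTIONS, CURLSSLOPT_NO_REVOKE);
	#else
		warning("CURLSSLOPT_NO_REVOKE not applied to curl SSL options because\n"
			"your curl version is too old (at least 7.44.0 is required)");
	#endif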

> + }
> +
>   if (http_proactive_auth)
>   init_curl_http_auth(result);


Re: [PATCH v2 00/13] Base SHA-256 implementation

2018-10-15 Thread Junio C Hamano
"brian m. carlson"  writes:

>  t/t0014-hash.sh   |  54 
>  create mode 100755 t/t0014-hash.sh

If I am not mistaken, 0014 is already used by another topic in
flight, and will cause test-lint failure on 'pu'.



Re: [PATCH 3/3] mingw: use domain information for default email

2018-10-15 Thread Junio C Hamano
"Johannes Schindelin via GitGitGadget" 
writes:

> +char *mingw_query_user_email(void)
> +{
> + return get_extended_user_info(NameUserPrincipal);
> +}
> +
> ...
>  
> +#ifndef query_user_email
> +#define query_user_email() NULL
> +#endif

The three patches look sensible to me; will queue.

You may already have audited our use of "struct passwd",
"getpwnam()" and "getpwuid()"--I haven't.  I think we use these only
to learn user's email (to be used as the default ident) and home
directory (to expand "git config --type=path").  If that is really
the case, it may be a worthwhile clean-up to introduce our own API
that offers these two exact functions, have the per-platform
implementation of the API in compat/, and get rid of "struct passwd"
and calls to getpw*() functions out of the core Git proper, to wean
ourselves away from depending on POSIX too much.
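
For illustration, such an API could be as small as this (the names and
the exact split are hypothetical, not anything that exists today):

	/* per-platform implementations would live in compat/ */
	char *platform_default_email(void);   /* NULL if unknown; caller frees */
	char *platform_home_directory(void);  /* NULL if unknown; caller frees */

A POSIX implementation would wrap getpwuid(), while the Windows one
could sit on top of the get_extended_user_info() helper added by this
series.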

But of course that is a separate topic.


Re: [PATCH v2 12/13] README: add a build badge (status of the Azure Pipelines build)

2018-10-15 Thread Eric Sunshine
On Mon, Oct 15, 2018 at 6:12 AM Johannes Schindelin via GitGitGadget
 wrote:
> Just like so many other OSS projects, we now also have a build badge.
>
> Signed-off-by: Johannes Schindelin 
> ---
> diff --git a/README.md b/README.md
> @@ -1,3 +1,5 @@
> +[![Build Status](https:/dev.azure.com/git/git/_apis/build/status/test-git.git)](https://dev.azure.com/git/git/_build/latest?definitionId=2)

The first URL is broken: "https:/..." rather than "https://...".


Re: [PATCH v10 00/21] Convert "git stash" to C builtin

2018-10-15 Thread Junio C Hamano
Thomas Gummerer  writes:

> On 10/15, Paul-Sebastian Ungureanu wrote:
>> Hello,
>> 
>> This is a new iteration of `git stash`, based on the last review.
>> This iteration fixes some code styling issues, bring some changes
>> to `do_push_stash()` and `do_create_stash()` to be closer to API by
>> following Thomas Gummerer's review of last iteration [1]. Also, there
>> were some missing messages [2], which are now included.
>
> Thanks for your work!  I had two more comments (on the patches
> inline).  Once those are addressed, I'd be happy for this to be merged
> to 'next'.

Thank you for your tireless reviews.  They were quite helpful and
reasonable.


Re: [PATCH v2 09/13] git-p4: use `test_atexit` to kill the daemon

2018-10-15 Thread Eric Sunshine
On Mon, Oct 15, 2018 at 6:12 AM Johannes Schindelin via GitGitGadget
 wrote:
> This should be more reliable than the current method, and prepares the
> test suite for a consistent way to clean up before re-running the tests
> with different options.
>
> Signed-off-by: Johannes Schindelin 
> ---
> diff --git a/t/t-basic.sh b/t/t-basic.sh
> @@ -134,6 +134,7 @@ check_sub_test_lib_test_err () {
> +cat >/dev/null <<\DDD
>  test_expect_success 'pretend we have a fully passing test suite' "
> run_sub_test_lib_test full-pass '3 passing tests' <<-\\EOF &&
> for i in 1 2 3
> @@ -820,6 +821,7 @@ test_expect_success 'tests clean up even on failures' "
> > 1..2
> EOF
>  "
> +DDD

Is this "DDD" here-doc leftover debugging goop?


Re: [PATCH v2 00/13] Offer to run CI/PR builds in Azure Pipelines

2018-10-15 Thread Taylor Blau
On Mon, Oct 15, 2018 at 04:55:25PM +0200, Johannes Schindelin wrote:
> Hi Taylor,
>
> On Mon, 15 Oct 2018, Taylor Blau wrote:
>
> > Thanks for putting this together, and offering to build Git on Azure
> > Pipelines. I haven't followed v1 of this series very closely, so please
> > excuse me if my comments have already been addressed, and I missed them
> > in a skim of the last revision.
> >
> > On Mon, Oct 15, 2018 at 03:11:57AM -0700, Johannes Schindelin via 
> > GitGitGadget wrote:
> > > It is also an invaluable tool for contributors who can validate their code
> > > contributions via PRs on GitHub, e.g. to verify that their tests do 
> > > actually
> > > run on macOS (i.e. with the BSD family of Unix tools instead of the GNU
> > > one).
> >
> > Agree.
> >
> > > The one sad part about this is the Windows support. Travis lacks it, and 
> > > we
> > > work around that by using Azure Pipelines (the CI part of Azure DevOps,
> > > formerly known as Visual Studio Team Services) indirectly: one phase in
> > > Travis would trigger a build, wait for its log, and then paste that log.
> >
> > I wonder if Travis' recent announcement [1] affects this at all.
>
> :-)
>
> It did not escape my notice that after years and years of trying to get
> *anybody* at Travis to listen to my offers to help get this started, the
> announcement of Azure Pipelines for OSS seemed to finally do the trick
> (they still don't bother to talk to me, of course).
>
> And to answer your question without a question mark: I do not really think
> that the Travis announcement affects this here patch series: I have a ton
> of good experience with Azure Pipelines, use it in Git for Windows for
> ages, and I am finally able to have it in core Git, too. So I want it, and
> I spent a lot of time getting there, and I think it probably won't hurt
> core Git at all (besides, it seems that at least some of the phases are a
> bit faster on Azure Pipelines than Travis).

I think that there are fair reasons to prefer Azure Pipelines over
Travis. In particular, I am encouraged by the fact that we (1) know that
we won't timeout, and (2) can have a standardized CI interface on Git
and Git for Windows. Certainly, the Windows support on Azure Pipelines
is more developed than that of Travis', so that's another point for
Azure.

> Another really good reason for me to do this is that I can prod the Azure
> Pipelines team directly. And I even get an answer, usually within minutes.
> Which is a lot faster than the Travis team answers my questions, which
> is... not yet? (I tried to get in contact with them in late 2015 or early
> 2016, and I tried again a year later, and then a couple of months later,
> and I have yet to hear back.)

Certainly a good reason. To be clear/fair, I've sent in a number of
support tickets to Travis CI over the years, and have always been
responded to in a short amount of time with helpful answers. I think
that we would really be fine in either case, TBH.

> Also, I am not quite sure about the timeouts on Travis, but at least
> AppVeyor had rather short timeouts: the Windows build (due to our
> extensive use of Unix shell scripting in our test suite) takes 1h40m
> currently, and AppVeyor times out after 20 or 30 minutes. I could imagine
> that Travis times out after the same time, or maybe 60 minutes, which
> would still be too short. On Azure Pipelines, the maximum timeout (which
> can be configured via the azure-pipelines.yml file) is four hours IIRC.
> Plenty enough even for our test suite on Windows.
>
> > To summarize [1], Travis is offering an early version of adding Windows
> > to their list of supported builder operations systems. This brings the
> > list to macOS, Linux, and Windows, which I think satisfies what we would
> > like to regularly build git.git on.
>
> Honestly, I would love to have also FreeBSD and other platforms being
> tested. And with Azure Pipelines, I can make that happen (eventually), by
> adding another pool of VMs (given that I have a free $150/month Azure
> subscription, I'd use Azure VMs, of course). As long as a platform can run
> .NET Core software, it can run Azure Pipelines agents.
>
> With Travis, I don't think I can add private agent pools.

I think that's right.

> > Would we like to abandon Travis as our main CI service for upstream
> > git.git, and build on Azure Pipelines only? If so, I think that this is
> > an OK way to go, but I think that I would be opposed to having more than
> > one build system. (FWIW, we tend to _three_ for Git LFS, and it can be a
> > hassle at times).
>
> This question of abandoning Travis in favor of Azure Pipelines is a bit of
> a hornets' nest, as I really, really only want to bring the goodness of
> Azure Pipelines to git.git, and I am *clearly* biased, as I work at
> Microsoft.
>
> Which is the reason why I did not even hint at it in the cover letter, let
> alone included a patch to make it so.
>
> My patch series is purely about adding support for running CI/PR 

Re: [PATCH 2/4] merge-recursive: increase marker length with depth of recursion

2018-10-15 Thread Junio C Hamano
Elijah Newren  writes:

> Would you like me to edit the commit message to include this more
> difficult case?

Neither.  If the "marker length" change is required in a separate
series that will build on top of the current 4-patch series, I think
dropping this step from the current series and make it a part of the
series that deals with rename/rename would make more sense.


Re: [PATCH v3] gpg-interface.c: detect and reject multiple signatures on commits

2018-10-15 Thread Junio C Hamano
Michał Górny  writes:

>> OK, so the whole thing makes sense to me.
>> 
>> Having said that, if we wanted to short-circuit, I think
>> 
>> for (each line) {
>> for (each sigcheck_gpg_status[]) {
>> if (not the one on line)
>> continue;
>> if (sigc->result != 'U') {
>> if (sigc->key)
>> goto found_dup;
>> sigc->key = make a copy;
>> if (*next && sigc->result != 'E') {
>> if (sigc->signer)
>> goto found_dup;
>> sigc->signer = make a copy;
>> }
>> }
>> break;
>> }
>> }
>> return;
>> 
>> found_dup:
>> sigc->result = 'E';
>> FREE_AND_NULL(sigc->signer);
>> FREE_AND_NULL(sigc->key);
>> return;
>>  
>> would also be fine.
>
> Do I understand correctly that you mean to take advantage that 'seen
> exclusive status' cases match 'seen key' cases?  I think this would be
> a little less readable.


Yes, the above is taking advantage of the fact that the exclusive ones
do give us a key and/or signer, so needing to free and replace is a
sign that we have found a collision between two exclusive status lines.

But that was "whole thing makes sense, but if we wanted to...".  I
do not know if we want to short-circuit upon finding a single
problem, or parse the whole thing to the end.  I guess we could
short-circuit while still using the "seen-exclusive" variable (we
can just do so at the place seen-exclusive is incremented---if it is
already one, then we know we have seen one already and we are
looking at another one).
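
Something like this, reusing the had_exclusive_status counter from
your patch (a sketch only, not tested):

	if (sigcheck_gpg_status[i].flags & GPG_STATUS_EXCLUSIVE) {
		if (had_exclusive_status++) {
			/* second exclusive status line: reject the whole thing */
			sigc->result = 'E';
			FREE_AND_NULL(sigc->signer);
			FREE_AND_NULL(sigc->key);
			return;
		}
	}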

> That said, I was planning on next patch that replaced the "!= 'U'" test
> with explicit flags for whether a particular status includes key
> and UID.  If you'd agree with this direction, I think having this one
> separate as well would make sense.

Yup, it might be a bit over-engineered for this code, but we are
adding the "exclusive" bit to the status[] array already, and I
think it makes sense to also have "does this give us key?" and "does
this tell us signer?" bit there.
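
For example, the table could conceivably grow flags along these lines
(names invented here for illustration, not something the series
defines yet):

	#define GPG_STATUS_EXCLUSIVE	(1<<0)
	#define GPG_STATUS_KEYID	(1<<1)	/* status line carries a key id */
	#define GPG_STATUS_UID		(1<<2)	/* status line carries a signer uid */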

Thanks.


Re: [PATCH 1/1] subtree: add build targets 'man' and 'html'

2018-10-15 Thread Junio C Hamano
Christian Hesse  writes:

> Junio C Hamano  on Wed, 2018/10/10 11:26:
>> As 'contrib' material without real maintenance, I do not care too
>> deeply, but shouldn't this change be more like this to avoid
>> duplicating the list of targets?
>
> Probably, yes.
> Do you want to add this yourself or do you want me to send an updated patch
> or one on top of the last change?

In principle either is fine but keep in mind that I'll likely forget
if you leave it up to me.


Re: [PATCH v2 00/13] Base SHA-256 implementation

2018-10-15 Thread Junio C Hamano
"brian m. carlson"  writes:

> I realize the increase to GIT_MAX_HEXSZ will result in an increase in
> memory usage, but we're going to need to do it at some point, and the
> sooner the code is in the codebase, the sooner people can play around
> with it and test it.

Thanks.


Re: [PATCH v2 13/13] commit-graph: specify OID version for SHA-256

2018-10-15 Thread Junio C Hamano
"brian m. carlson"  writes:

> Since the commit-graph code wants to serialize the hash algorithm into
> the data store, specify a version number for each supported algorithm.
> Note that we don't use the values of the constants themselves, as they
> are internal and could change in the future.
>
> Signed-off-by: brian m. carlson 
> ---
>  commit-graph.c | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/commit-graph.c b/commit-graph.c
> index 7a28fbb03f..e587c21bb6 100644
> --- a/commit-graph.c
> +++ b/commit-graph.c
> @@ -45,7 +45,14 @@ char *get_commit_graph_filename(const char *obj_dir)
>  
>  static uint8_t oid_version(void)
>  {
> - return 1;
> + switch (hash_algo_by_ptr(the_hash_algo)) {
> + case GIT_HASH_SHA1:
> + return 1;
> + case GIT_HASH_SHA256:
> + return 2;
> + default:
> + BUG("unknown hash algorithm");
> + }

Style: align switch/case.


Shouldn't this be just using the GIT_HASH_* constants instead?  Are we
expecting that it would be beneficial to allow more than one "oid
version" even within the same GIT_HASH_*?

>  }
>  
>  static struct commit_graph *alloc_commit_graph(void)


Re: [PATCH v2 03/13] hex: introduce functions to print arbitrary hashes

2018-10-15 Thread Junio C Hamano
"brian m. carlson"  writes:

> diff --git a/cache.h b/cache.h
> index d508f3d4f8..a13d14ce0a 100644
> --- a/cache.h
> +++ b/cache.h
> @@ -1361,9 +1361,9 @@ extern int get_oid_hex(const char *hex, struct object_id *sha1);
>  extern int hex_to_bytes(unsigned char *binary, const char *hex, size_t len);
>  
>  /*
> - * Convert a binary sha1 to its hex equivalent. The `_r` variant is reentrant,
> + * Convert a binary hash to its hex equivalent. The `_r` variant is reentrant,
>   * and writes the NUL-terminated output to the buffer `out`, which must be at
> - * least `GIT_SHA1_HEXSZ + 1` bytes, and returns a pointer to out for
> + * least `GIT_MAX_HEXSZ + 1` bytes, and returns a pointer to out for
>   * convenience.
>   *
>   * The non-`_r` variant returns a static buffer, but uses a ring of 4
> @@ -1371,10 +1371,13 @@ extern int hex_to_bytes(unsigned char *binary, const char *hex, size_t len);
>   *
>   *   printf("%s -> %s", sha1_to_hex(one), sha1_to_hex(two));
>   */
> -extern char *sha1_to_hex_r(char *out, const unsigned char *sha1);
> -extern char *oid_to_hex_r(char *out, const struct object_id *oid);
> -extern char *sha1_to_hex(const unsigned char *sha1); /* static buffer result! */
> -extern char *oid_to_hex(const struct object_id *oid);	/* same static buffer as sha1_to_hex */
> +char *hash_to_hex_algo_r(char *buffer, const unsigned char *hash, int algo);
> +char *sha1_to_hex_r(char *out, const unsigned char *sha1);
> +char *oid_to_hex_r(char *out, const struct object_id *oid);
> +char *hash_to_hex_algo(const unsigned char *hash, int algo); /* static buffer result! */
> +char *sha1_to_hex(const unsigned char *sha1);		/* same static buffer */
> +char *hash_to_hex(const unsigned char *hash);		/* same static buffer */
> +char *oid_to_hex(const struct object_id *oid);		/* same static buffer */

Even though in hex.c I see mixture of *_algo and *_algop helper
functions, I see only "algo" variants above.  Is it our official
stance to use primarily the integer index into the algo array when
specifying the hash, and when a caller into 'multi-hash' API happens
to have other things, it should use functions in 2/13 to convert it
to the canonical "int algo" beforehand?

I am not saying it is bad or good to choose the index into the algo
array as the primary way to specify the algorithm.  I think it is a
good idea to pick one and stick to it, and I wanted to make sure
that the choice we made here is clearly communicated to any future
developer who read this code.

> +char *sha1_to_hex(const unsigned char *sha1)
> +{
> + return hash_to_hex_algo(sha1, GIT_HASH_SHA1);
> +}
> +
> +char *hash_to_hex(const unsigned char *hash)
> +{
> + return hash_to_hex_algo(hash, hash_algo_by_ptr(the_hash_algo));
>  }
>  
>  char *oid_to_hex(const struct object_id *oid)
>  {
> - return sha1_to_hex(oid->hash);
> + return hash_to_hex_algo(oid->hash, hash_algo_by_ptr(the_hash_algo));
>  }

Having said the above, seeing the use of hash_algo_by_ptr() here
makes me suspect if it makes more sense to use the algop as the
primary way to specify which algorithm the caller wants to use.
IOW, making the set of helpers in 02/13 allow querying by name,
format-id, or the integer index and have them all return a pointer
to "const struct git_hash_algo".  Two immediate downsides I can see
are that it exposes the actual structure to the callers (but is it
really a problem?  Outside callers learn hash sizes etc. by accessing
its fields anyway without treating the algo struct as opaque.), and
passing an 8-byte pointer may be more costly than passing a small
integer index that ranges between 0 and 1 at most (assuming that
we'd only use SHA-1 and "the current NewHash" in the code).
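
To make the comparison concrete, the algop-centric variant would look
roughly like this (illustrative signatures only, not what the series
actually declares):

	char *hash_to_hex_algop_r(char *buffer, const unsigned char *hash,
				  const struct git_hash_algo *algop);
	char *hash_to_hex_algop(const unsigned char *hash,
				const struct git_hash_algo *algop); /* static buffer */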




Re: [PATCH] submodule helper: convert relative URL to absolute URL if needed

2018-10-15 Thread Jonathan Nieder
Hi,

Stefan Beller wrote:

> The submodule helper update_clone called by "git submodule update",
> clones submodules if needed. As submodules used to have the URL indicating
> if they were active, the step to resolve relative URLs was done in the
> "submodule init" step. Nowadays submodules can be configured active without
> calling an explicit init, e.g. via configuring submodule.active.
>
> When trying to obtain submodules that are set active this way, we'll
> fallback to the URL found in the .gitmodules, which may be relative to the
> superproject, but we do not resolve it, yet:
> 
> git clone https://gerrit.googlesource.com/gerrit
> cd gerrit && grep url .gitmodules
>   url = ../plugins/codemirror-editor
>   ...
> git config submodule.active .
> git submodule update
> fatal: repository '../plugins/codemirror-editor' does not exist
> fatal: clone of '../plugins/codemirror-editor' into submodule path 
> '/tmp/gerrit/plugins/codemirror-editor' failed
> Failed to clone 'plugins/codemirror-editor'. Retry scheduled
> [...]
> fatal: clone of '../plugins/codemirror-editor' into submodule path 
> '/tmp/gerrit/plugins/codemirror-editor' failed
> Failed to clone 'plugins/codemirror-editor' a second time, aborting
> [...]

Thanks.

"git log" and other tools have the ability to rewrap lines and will get
confused by this transcript.  Can you indent it to unconfuse them?

> Signed-off-by: Stefan Beller 

Please also credit the bug reporter:

Reported-by: Jaewoong Jung 

[...]
> --- a/builtin/submodule--helper.c
> +++ b/builtin/submodule--helper.c
> @@ -584,6 +584,27 @@ static int module_foreach(int argc, const char **argv, const char *prefix)
>   return 0;
>  }
>  
> +

nit: inconsistent vertical whitespace (extra blank line?)

> +static char *compute_submodule_clone_url(const char *rel_url)
> +{
> + char *remoteurl, *relurl;
> + char *remote = get_default_remote();
> + struct strbuf remotesb = STRBUF_INIT;
> +
> +	strbuf_addf(&remotesb, "remote.%s.url", remote);
> +	free(remote);
> +
> +	if (git_config_get_string(remotesb.buf, &remoteurl)) {
> +		warning(_("could not lookup configuration '%s'. Assuming this repository is its own authoritative upstream."), remotesb.buf);
> +		remoteurl = xgetcwd();
> +	}
> +	relurl = relative_url(remoteurl, rel_url, NULL);
> +	strbuf_release(&remotesb);
> + free(remoteurl);
> +
> + return relurl;
> +}

I think this would be easier to read with all the release commands
together:

...

free(remote);
free(remoteurl);
strbuf_release(&remotesb);
return relurl;

[...]
> @@ -634,21 +655,9 @@ static void init_submodule(const char *path, const char *prefix,
[...]
> - relurl = relative_url(remoteurl, url, NULL);
> - strbuf_release(&remotesb);
> - free(remoteurl);
> - free(url);
> - url = relurl;
> + char *to_free = url;
> + url = compute_submodule_clone_url(url);
> + free(to_free);

I still think this would be easier to read with a style that makes
the ownership passing clearer:

char *oldurl = url;
url = compute_submodule_clone_url(oldurl);
free(oldurl);

With whatever subset of the above tweaks makes sense,
Reviewed-by: Jonathan Nieder 

Thanks.


Re: [PATCH 0/9] Offer to run CI/PR builds in Visual Studio Team Services

2018-10-15 Thread Christian Couder
On Mon, Oct 15, 2018 at 8:33 PM Johannes Schindelin
 wrote:
>
> Hi team,
>
> On Mon, 15 Oct 2018, Christian Couder wrote:
>
> > On Mon, Oct 15, 2018 at 5:46 PM Duy Nguyen  wrote:
> > >
> > > On Mon, Oct 15, 2018 at 5:08 PM Ævar Arnfjörð Bjarmason
> > >  wrote:
> > > > As an aside I poked Junio via private mail in late August to see if he'd
> > > > be interested in pushing to gitlab.com/git/git.git too as part of his
> > > > normal push-outs.
> >
> > People at GitLab also have wondered if it would be possible for GitLab
> > to host an official Git mirror, and I think it would be nice indeed if
> > Junio could push to gitlab.com.
>
> AFAIR Junio does not push to github.com/git/git, it is an automatic
> mirror.

On https://git-blame.blogspot.com/p/git-public-repositories.html there is:

"Copies of the source code to Git live in many repositories, and this
is a list of the ones I push into. Some repositories have only a
subset of branches.

With maint, master, next, pu, todo:

[...]
https://github.com/git/git/
"

> GitLab could easily do the same.

Yeah, GitLab could easily create an automatic mirror. I think it would
be a bit better though if the git repo on GitLab could be independent.
It could make things more resilient against outages or security
issues.


[PATCH] submodule helper: convert relative URL to absolute URL if needed

2018-10-15 Thread Stefan Beller
The submodule helper update_clone called by "git submodule update",
clones submodules if needed. As submodules used to have the URL indicating
if they were active, the step to resolve relative URLs was done in the
"submodule init" step. Nowadays submodules can be configured active without
calling an explicit init, e.g. via configuring submodule.active.

When trying to obtain submodules that are set active this way, we'll
fallback to the URL found in the .gitmodules, which may be relative to the
superproject, but we do not resolve it, yet:

git clone https://gerrit.googlesource.com/gerrit
cd gerrit && grep url .gitmodules
url = ../plugins/codemirror-editor
...
git config submodule.active .
git submodule update
fatal: repository '../plugins/codemirror-editor' does not exist
fatal: clone of '../plugins/codemirror-editor' into submodule path '/tmp/gerrit/plugins/codemirror-editor' failed
Failed to clone 'plugins/codemirror-editor'. Retry scheduled
[...]
fatal: clone of '../plugins/codemirror-editor' into submodule path '/tmp/gerrit/plugins/codemirror-editor' failed
Failed to clone 'plugins/codemirror-editor' a second time, aborting
[...]

To resolve the issue, factor out the function that resolves the relative
URLs in "git submodule init" (in the submodule helper in the init_submodule
function) and call it at the appropriate place in the update_clone helper.

Signed-off-by: Stefan Beller 
---

Jonathan wrote:
> Ah, this is moved code. I should have used --color-moved. ;-)

Yes, any cleanup should go on top.

I extended the commit message and made sure not to leak memory.

When rerolling origin/xxx/sb-submodule-recursive-fetch-gets-the-tip-in-pu,
there will be conflicts with this series, but I can work with that.

 builtin/submodule--helper.c | 48 -
 t/t7400-submodule-basic.sh  | 24 +++
 2 files changed, 55 insertions(+), 17 deletions(-)

diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
index f6fb8991f3..03f5e0d03e 100644
--- a/builtin/submodule--helper.c
+++ b/builtin/submodule--helper.c
@@ -584,6 +584,27 @@ static int module_foreach(int argc, const char **argv, const char *prefix)
return 0;
 }
 
+
+static char *compute_submodule_clone_url(const char *rel_url)
+{
+   char *remoteurl, *relurl;
+   char *remote = get_default_remote();
+   struct strbuf remotesb = STRBUF_INIT;
+
+   strbuf_addf(&remotesb, "remote.%s.url", remote);
+   free(remote);
+
+   if (git_config_get_string(remotesb.buf, &remoteurl)) {
+   warning(_("could not lookup configuration '%s'. Assuming this repository is its own authoritative upstream."), remotesb.buf);
+   remoteurl = xgetcwd();
+   }
+   relurl = relative_url(remoteurl, rel_url, NULL);
+   strbuf_release(&remotesb);
+   free(remoteurl);
+
+   return relurl;
+}
+
 struct init_cb {
const char *prefix;
unsigned int flags;
@@ -634,21 +655,9 @@ static void init_submodule(const char *path, const char *prefix,
/* Possibly a url relative to parent */
if (starts_with_dot_dot_slash(url) ||
starts_with_dot_slash(url)) {
-   char *remoteurl, *relurl;
-   char *remote = get_default_remote();
-   struct strbuf remotesb = STRBUF_INIT;
-   strbuf_addf(&remotesb, "remote.%s.url", remote);
-   free(remote);
-
-   if (git_config_get_string(remotesb.buf, &remoteurl)) {
-   warning(_("could not lookup configuration '%s'. Assuming this repository is its own authoritative upstream."), remotesb.buf);
-   remoteurl = xgetcwd();
-   }
-   relurl = relative_url(remoteurl, url, NULL);
-   strbuf_release(&remotesb);
-   free(remoteurl);
-   free(url);
-   url = relurl;
+   char *to_free = url;
+   url = compute_submodule_clone_url(url);
+   free(to_free);
}
 
if (git_config_set_gently(sb.buf, url))
@@ -1562,8 +1571,13 @@ static int prepare_to_clone_next_submodule(const struct cache_entry *ce,
 
    strbuf_reset(&sb);
    strbuf_addf(&sb, "submodule.%s.url", sub->name);
-   if (repo_config_get_string_const(the_repository, sb.buf, &url))
-   url = sub->url;
+   if (repo_config_get_string_const(the_repository, sb.buf, &url)) {
+   if (starts_with_dot_slash(sub->url) ||
+   starts_with_dot_dot_slash(sub->url))
+   url = compute_submodule_clone_url(sub->url);
+   else
+   url = sub->url;
+   }
 
    strbuf_reset(&sb);
    strbuf_addf(&sb, "%s/.git", ce->name);
diff --git a/t/t7400-submodule-basic.sh b/t/t7400-submodule-basic.sh
index 

Re: [PATCH v2 10/13] Add a base implementation of SHA-256 support

2018-10-15 Thread brian m. carlson
On Mon, Oct 15, 2018 at 04:59:12PM +0200, Duy Nguyen wrote:
>  On Mon, Oct 15, 2018 at 4:23 AM brian m. carlson
>  wrote:
> >
> > SHA-1 is weak and we need to transition to a new hash function.  For
> > some time, we have referred to this new function as NewHash.  Recently,
> > we decided to pick SHA-256 as NewHash.
> >
> > Add a basic implementation of SHA-256 based off libtomcrypt, which is in
> > the public domain.  Optimize it and restructure it to meet our coding
> > standards.  Place it in a directory called "sha256" where it and any
> > future implementations can live so as to avoid a proliferation of
> > implementation directories.
> >
> > Wire up SHA-256 in the list of hash algorithms, and add a test that the
> > algorithm works correctly.
> >
> > Note that with this patch, it is still not possible to switch to using
> > SHA-256 in Git.  Additional patches are needed to prepare the code to
> > handle a larger hash algorithm and further test fixes are needed.
> 
> At some point I assume SHA-256 will become functional and be part of a
> git release without all file formats updated to support multiple
> hashes. Should we somehow discourage the user from using it because it
> will break when all file formats are finally updated?

In order to activate SHA-256 in the codebase, currently you need a patch
to force it on.  Otherwise, the code is simply inert and does nothing
(other than in the test-tool).  I've included the patch below so you can
see what it does (or if you want to play around with it).

Without this patch, Git remains fully SHA-1 and can't access any of the
SHA-256 code.  I have some very preliminary patches that do wire up
extensions.objectFormat (branch object-id-part15 [sic]) but I haven't
picked them up in a while.  (I need to finish test fixes first.)

Making the codebase run in SHA-256 mode with a Git that supports both
algorithms but defaults to SHA-1 is currently seriously broken, even
with the object-id-part15 branch.  When those patches go in, they will
be behind a developer flag to prevent wholesale chaos and segfaults.

So in summary, no, I don't think a developer flag is necessary at this
point.

-- >% --
From  Mon Sep 17 00:00:00 2001
From: "brian m. carlson" 
Date: Wed, 25 Jul 2018 23:48:30 +
Subject: [PATCH] Switch default hash algorithm to SHA-256 for testing

Set the default hash algorithm to SHA-256 for testing purposes.

This is a test commit and should not be used in production.

Signed-off-by: brian m. carlson 
---
 repository.c|  2 +-
 setup.c |  2 +-
 t/test-lib-functions.sh |  4 +---
 t/test-lib.sh   | 13 -
 4 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/repository.c b/repository.c
index 5dd1486718..4ce50ea670 100644
--- a/repository.c
+++ b/repository.c
@@ -17,7 +17,7 @@ void initialize_the_repository(void)
the_repo.objects = raw_object_store_new();
the_repo.parsed_objects = parsed_object_pool_new();
 
-   repo_set_hash_algo(&the_repo, GIT_HASH_SHA1);
+   repo_set_hash_algo(&the_repo, GIT_HASH_SHA256);
 }
 
 static void expand_base_dir(char **out, const char *in,
diff --git a/setup.c b/setup.c
index b24c811c1c..2b4cc4e5a4 100644
--- a/setup.c
+++ b/setup.c
@@ -490,7 +490,7 @@ int read_repository_format(struct repository_format *format, const char *path)
memset(format, 0, sizeof(*format));
format->version = -1;
format->is_bare = -1;
-   format->hash_algo = GIT_HASH_SHA1;
+   format->hash_algo = GIT_HASH_SHA256;
string_list_init(&format->unknown_extensions, 1);
git_config_from_file(check_repo_format, path, format);
return format->version;
diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index dfa1bebb46..91cf2b9d40 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -1165,9 +1165,7 @@ test_set_hash () {
 
 # Detect the hash algorithm in use.
 test_detect_hash () {
-   # Currently we only support SHA-1, but in the future this function will
-   # actually detect the algorithm in use.
-   test_hash_algo='sha1'
+   test_hash_algo='sha256'
 }
 
 # Load common hash metadata and common placeholder object IDs for use with
diff --git a/t/test-lib.sh b/t/test-lib.sh
index 3f95bfda60..bb507f2d4f 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -176,18 +176,21 @@ esac
 
 # Convenience
 #
-# A regexp to match 5, 35 and 40 hexdigits
+# A regexp to match 4, 5, 35, 40, and 64 hexdigits
+_x04='[0-9a-f][0-9a-f][0-9a-f][0-9a-f]'
 _x05='[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]'
 _x35="$_x05$_x05$_x05$_x05$_x05$_x05$_x05"
 _x40="$_x35$_x05"
+_x64="$_x40$_x05$_x05$_x05$_x05$_x04"
 
 # Zero SHA-1
_z40=0000000000000000000000000000000000000000
+_z64=0000000000000000000000000000000000000000000000000000000000000000
 
-OID_REGEX="$_x40"
-ZERO_OID=$_z40
-EMPTY_TREE=4b825dc642cb6eb9a060e54bf8d69288fbee4904
-EMPTY_BLOB=e69de29bb2d1d6434b8b29ae775ad8c2e48c5391

Re: [PATCH v10 00/21] Convert "git stash" to C builtin

2018-10-15 Thread Thomas Gummerer
On 10/15, Paul-Sebastian Ungureanu wrote:
> Hello,
> 
> This is a new iteration of `git stash`, based on the last review.
> This iteration fixes some code styling issues, bring some changes
> to `do_push_stash()` and `do_create_stash()` to be closer to API by
> following Thomas Gummerer's review of last iteration [1]. Also, there
> were some missing messages [2], which are now included.

Thanks for your work!  I had two more comments (on the patches
inline).  Once those are addressed, I'd be happy for this to be merged
to 'next'.

Since I applied this locally, and it may help someone else who may
want to have a look at this, the range-diff is below:

 1:  b7224e494e =  1:  89142f99e7 sha1-name.c: add `get_oidf()` which acts like 
`get_oid()`
 2:  63f2e0e6f9 !  2:  2d45985676 strbuf.c: add `strbuf_join_argv()`
@@ -14,19 +14,17 @@
strbuf_setlen(sb, sb->len + sb2->len);
  }
  
-+const char *strbuf_join_argv(struct strbuf *buf,
-+   int argc, const char **argv, char delim)
++void strbuf_join_argv(struct strbuf *buf,
++int argc, const char **argv, char delim)
 +{
 +  if (!argc)
-+  return buf->buf;
++  return;
 +
 +  strbuf_addstr(buf, *argv);
 +  while (--argc) {
 +  strbuf_addch(buf, delim);
 +  strbuf_addstr(buf, *(++argv));
 +  }
-+
-+  return buf->buf;
 +}
 +
  void strbuf_addchars(struct strbuf *sb, int c, size_t n)
@@ -40,12 +38,12 @@
   */
  extern void strbuf_addbuf(struct strbuf *sb, const struct strbuf *sb2);
  
-+
 +/**
-+ *
++ * Join the arguments into a buffer. `delim` is put between every
++ * two arguments.
 + */
-+extern const char *strbuf_join_argv(struct strbuf *buf, int argc,
-+  const char **argv, char delim);
++extern void strbuf_join_argv(struct strbuf *buf, int argc,
++   const char **argv, char delim);
 +
  /**
   * This function can be used to expand a format string containing
 3:  9b9433781b =  3:  63d10ee599 stash: improve option parsing test coverage
 4:  c1d38060c5 !  4:  a6953b57e5 stash: update test cases conform to coding 
guidelines
@@ -1,8 +1,9 @@
 Author: Paul-Sebastian Ungureanu 
 
-stash: update test cases conform to coding guidelines
+t3903: modernize style
 
-Removed whitespaces after redirection operators.
+Remove whitespaces after redirection operators and wrap
+long lines.
 
 Signed-off-by: Paul-Sebastian Ungureanu 

 
 5:  ac7a8267e6 =  5:  9985d8650b stash: rename test cases to be more 
descriptive
 6:  0e6458a280 =  6:  93d7e82b96 stash: add tests for `git stash show` config
13:  6e04c948cf !  7:  e06aca5ff5 stash: mention options in `show` synopsis.
@@ -1,9 +1,9 @@
 Author: Paul-Sebastian Ungureanu 
 
-stash: mention options in `show` synopsis.
+stash: mention options in `show` synopsis
 
-Mention in the usage text and in the documentation, that `show`
-accepts any option known to `git diff`.
+Mention in the documentation, that `show` accepts any
+option known to `git diff`.
 
 Signed-off-by: Paul-Sebastian Ungureanu 

 
@@ -28,25 +28,3 @@
  
Show the changes recorded in the stash entry as a diff between the
stashed contents and the commit back when the stash entry was first
-
- diff --git a/builtin/stash--helper.c b/builtin/stash--helper.c
- --- a/builtin/stash--helper.c
- +++ b/builtin/stash--helper.c
-@@
- 
- static const char * const git_stash_helper_usage[] = {
-   N_("git stash--helper list []"),
--  N_("git stash--helper show []"),
-+  N_("git stash--helper show [] []"),
-   N_("git stash--helper drop [-q|--quiet] []"),
-   N_("git stash--helper ( pop | apply ) [--index] [-q|--quiet] 
[]"),
-   N_("git stash--helper branch  []"),
-@@
- };
- 
- static const char * const git_stash_helper_show_usage[] = {
--  N_("git stash--helper show []"),
-+  N_("git stash--helper show [] []"),
-   NULL
- };
- 
 7:  a2abd1b4bd !  8:  974dbaa492 stash: convert apply to builtin
@@ -121,8 +121,8 @@
 +  get_oidf(>w_tree, "%s:", revision) ||
 +  get_oidf(>b_tree, "%s^1:", revision) ||
 +  get_oidf(>i_tree, "%s^2:", revision)) {
-+  free_stash_info(info);
 +  error(_("'%s' is not a stash-like commit"), revision);
++  free_stash_info(info);
 +  exit(128);
 +  }
 +}
@@ -370,18 +370,20 @@
 +
 +  if (diff_tree_binary(, >w_commit)) {
 +  strbuf_release();
-+  return -1;
++  return error(_("Could not generate diff %s^!."),
++

Re: [PATCH v10 18/21] stash: convert save to builtin

2018-10-15 Thread Thomas Gummerer
On 10/15, Paul-Sebastian Ungureanu wrote:
> The `-m` option is no longer supported as it might not make
> sense to have two ways of passing a message. Even if this is
> a change in behaviour, the documentation remains the same
> because the `-m` parameter was omitted before.

[...]

> + OPT_STRING('m', "message", &stash_msg, "message",
> +N_("stash message")),
> + OPT_END()

We do seem to support a '-m' option here though.  I'm happy not
supporting it, but the commit message seems to say otherwise.  I can't
remember the discussion here, but either the commit message or the
option parsing should be updated here.


Re: [PATCH v10 19/21] stash: convert `stash--helper.c` into `stash.c`

2018-10-15 Thread Thomas Gummerer
On 10/15, Paul-Sebastian Ungureanu wrote:
> The old shell script `git-stash.sh`  was removed and replaced
> entirely by `builtin/stash.c`. In order to do that, `create` and
> `push` were adapted to work without `stash.sh`. For example, before
> this commit, `git stash create` called `git stash--helper create
> --message "$*"`. If it called `git stash--helper create "$@"`, then
> some of these changes wouldn't have been necessary.
> 
> This commit also removes the word `helper` since now stash is
> called directly and not by a shell script.
> 
> Signed-off-by: Paul-Sebastian Ungureanu 
> ---
> @@ -1138,7 +1133,6 @@ static int do_create_stash(struct pathspec ps, char 
> **stash_msg,
>   fprintf_ln(stderr, _("You do not have "
>"the initial commit yet"));
>   ret = -1;
> - *stash_msg = NULL;
>   goto done;
>   } else {
>   head_commit = lookup_commit(the_repository, >b_commit);
> @@ -1146,7 +1140,6 @@ static int do_create_stash(struct pathspec ps, char 
> **stash_msg,
>  
>   if (!check_changes(ps, include_untracked)) {
>   ret = 1;
> - *stash_msg = NULL;
>   goto done;
>   }
>  
> @@ -1167,7 +1160,6 @@ static int do_create_stash(struct pathspec ps, char 
> **stash_msg,
>   fprintf_ln(stderr, _("Cannot save the current "
>"index state"));
>   ret = -1;
> - *stash_msg = NULL;
>   goto done;
>   }
>  
> @@ -1178,14 +1170,12 @@ static int do_create_stash(struct pathspec ps, char 
> **stash_msg,
>   fprintf_ln(stderr, _("Cannot save "
>"the untracked files"));
>   ret = -1;
> - *stash_msg = NULL;
>   goto done;
>   }
>   untracked_commit_option = 1;
>   }
>   if (patch_mode) {
>   ret = stash_patch(info, ps, patch, quiet);
> - *stash_msg = NULL;
>   if (ret < 0) {
>   if (!quiet)
>   fprintf_ln(stderr, _("Cannot save the current "
> @@ -1200,7 +1190,6 @@ static int do_create_stash(struct pathspec ps, char 
> **stash_msg,
>   fprintf_ln(stderr, _("Cannot save the current "
>"worktree state"));
>   ret = -1;
> - *stash_msg = NULL;
>   goto done;
>   }
>   }
> @@ -1210,7 +1199,7 @@ static int do_create_stash(struct pathspec ps, char **stash_msg,
>   else
>   strbuf_addf(&stash_msg_buf, "On %s: %s", branch_name,
>   *stash_msg);
> - *stash_msg = strbuf_detach(&stash_msg_buf, NULL);
> + *stash_msg = xstrdup(stash_msg_buf.buf);
>  
>   /*
>* `parents` will be empty after calling `commit_tree()`, so there is
> @@ -1244,30 +1233,23 @@ static int do_create_stash(struct pathspec ps, char 
> **stash_msg,
>  
>  static int create_stash(int argc, const char **argv, const char *prefix)
>  {
> - int include_untracked = 0;
>   int ret = 0;
>   char *stash_msg = NULL;
>   struct stash_info info;
>   struct pathspec ps;
> - struct option options[] = {
> - OPT_BOOL('u', "include-untracked", &include_untracked,
> -  N_("include untracked files in stash")),
> - OPT_STRING('m', "message", &stash_msg, N_("message"),
> -  N_("stash message")),
> - OPT_END()
> - };
> + struct strbuf stash_msg_buf = STRBUF_INIT;
>  
> - argc = parse_options(argc, argv, prefix, options,
> -  git_stash_helper_create_usage,
> -  0);
> + /* Starting with argv[1], since argv[0] is "create" */
> + strbuf_join_argv(&stash_msg_buf, argc - 1, ++argv, ' ');
> + stash_msg = stash_msg_buf.buf;

stash_msg is just a pointer to stash_msg_buf.buf here..
>  
>   memset(&ps, 0, sizeof(ps));
> - ret = do_create_stash(ps, &stash_msg, include_untracked, 0, &info,
> -   NULL, 0);
> + ret = do_create_stash(ps, &stash_msg, 0, 0, &info, NULL, 0);
>  
>   if (!ret)
>   printf_ln("%s", oid_to_hex(&info.w_commit));
>  
> + strbuf_release(&stash_msg_buf);

We release the strbuf here, which means stash_msg_buf.buf is now
'strbuf_slopbuf', which is a global variable and can't be free'd.  If
stash_msg is not changed within do_create_stash, it is now pointing to
'strbuf_slopbuf', and we try to free that below, which makes git
crash in t3903.44, which breaks bisection.

>   free(stash_msg);
>  
>   /*

I think the following diff fixes memory management, by making
do_push_stash responsible for freeing stash_msg when it's done with
it, while the callers of do_create_stash have to free the parameter
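
A minimal sketch of that ownership shape (not the actual diff;
do_create_stash() would then also have to free the old value whenever
it replaces *stash_msg):

	static int create_stash(int argc, const char **argv, const char *prefix)
	{
		int ret;
		char *stash_msg;
		struct strbuf stash_msg_buf = STRBUF_INIT;
		struct stash_info info;
		struct pathspec ps;

		/* Starting with argv[1], since argv[0] is "create" */
		strbuf_join_argv(&stash_msg_buf, argc - 1, ++argv, ' ');
		/* detach an owned, heap-allocated copy instead of borrowing buf */
		stash_msg = strbuf_detach(&stash_msg_buf, NULL);

		memset(&ps, 0, sizeof(ps));
		ret = do_create_stash(ps, &stash_msg, 0, 0, &info, NULL, 0);

		if (!ret)
			printf_ln("%s", oid_to_hex(&info.w_commit));

		free(stash_msg);	/* always heap memory, never strbuf_slopbuf */
		return ret;
	}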

Re: [PATCH 1/1] subtree: add build targets 'man' and 'html'

2018-10-15 Thread Christian Hesse
Junio C Hamano  on Wed, 2018/10/10 11:26:
> As 'contrib' material without real maintenance, I do not care too
> deeply, but shouldn't this change be more like this to avoid
> duplicating the list of targets?

Probably, yes.
Do you want to add this yourself or do you want me to send an updated patch
or one on top of the last change?
-- 
main(a){char*c=/*Schoene Gruesse */"B?IJj;MEH"
"CX:;",b;for(a/*Best regards my address:*/=0;b=c[a++];)
putchar(b-1/(/*Chriscc -ox -xc - && ./x*/b/42*2-3)*42);}




Re: [PATCH v3] gpg-interface.c: detect and reject multiple signatures on commits

2018-10-15 Thread Michał Górny
On Mon, 2018-10-15 at 12:31 +0900, Junio C Hamano wrote:
> Michał Górny  writes:
> 
> > GnuPG supports creating signatures consisting of multiple signature
> > packets.  If such a signature is verified, it outputs all the status
> > messages for each signature separately.  However, git currently does not
> > account for such scenario and gets terribly confused over getting
> > multiple *SIG statuses.
> > 
> > For example, if a malicious party alters a signed commit and appends
> > a new untrusted signature, git is going to ignore the original bad
> > signature and report untrusted commit instead.  However, %GK and %GS
> > format strings may still expand to the data corresponding
> > to the original signature, potentially tricking the scripts into
> > trusting the malicious commit.
> > 
> > Given that the use of multiple signatures is quite rare, git does not
> > support creating them without jumping through a few hoops, and finally
> > supporting them properly would require extensive API improvement, it
> > seems reasonable to just reject them at the moment.
> > 
> > Signed-off-by: Michał Górny 
> > ---
> >  gpg-interface.c  | 94 +++-
> >  t/t7510-signed-commit.sh | 26 +++
> >  2 files changed, 91 insertions(+), 29 deletions(-)
> > 
> > Changes in v3: reworked the whole loop to iterate over lines rather
> > than scanning the whole buffer, as requested.  Now it also catches
> > duplicate instances of the same status.
> > 
> > diff --git a/gpg-interface.c b/gpg-interface.c
> > index db17d65f8..480aab4ee 100644
> > --- a/gpg-interface.c
> > +++ b/gpg-interface.c
> > @@ -75,48 +75,84 @@ void signature_check_clear(struct signature_check *sigc)
> > FREE_AND_NULL(sigc->key);
> >  }
> >  
> > +/* An exclusive status -- only one of them can appear in output */
> > +#define GPG_STATUS_EXCLUSIVE   (1<<0)
> > +
> >  static struct {
> > char result;
> > const char *check;
> > +   unsigned int flags;
> >  } sigcheck_gpg_status[] = {
> > -   { 'G', "\n[GNUPG:] GOODSIG " },
> > -   { 'B', "\n[GNUPG:] BADSIG " },
> > -   { 'U', "\n[GNUPG:] TRUST_NEVER" },
> > -   { 'U', "\n[GNUPG:] TRUST_UNDEFINED" },
> > -   { 'E', "\n[GNUPG:] ERRSIG "},
> > -   { 'X', "\n[GNUPG:] EXPSIG "},
> > -   { 'Y', "\n[GNUPG:] EXPKEYSIG "},
> > -   { 'R', "\n[GNUPG:] REVKEYSIG "},
> > +   { 'G', "GOODSIG ", GPG_STATUS_EXCLUSIVE },
> > +   { 'B', "BADSIG ", GPG_STATUS_EXCLUSIVE },
> > +   { 'U', "TRUST_NEVER", 0 },
> > +   { 'U', "TRUST_UNDEFINED", 0 },
> > +   { 'E', "ERRSIG ", GPG_STATUS_EXCLUSIVE },
> > +   { 'X', "EXPSIG ", GPG_STATUS_EXCLUSIVE },
> > +   { 'Y', "EXPKEYSIG ", GPG_STATUS_EXCLUSIVE },
> > +   { 'R', "REVKEYSIG ", GPG_STATUS_EXCLUSIVE },
> >  };
> >  
> >  static void parse_gpg_output(struct signature_check *sigc)
> >  {
> > const char *buf = sigc->gpg_status;
> > +   const char *line, *next;
> > int i;
> > -
> > -   /* Iterate over all search strings */
> > -   for (i = 0; i < ARRAY_SIZE(sigcheck_gpg_status); i++) {
> > -   const char *found, *next;
> > -
> > -   if (!skip_prefix(buf, sigcheck_gpg_status[i].check + 1, &found)) {
> > -   found = strstr(buf, sigcheck_gpg_status[i].check);
> > -   if (!found)
> > -   continue;
> > -   found += strlen(sigcheck_gpg_status[i].check);
> > -   }
> > -   sigc->result = sigcheck_gpg_status[i].result;
> > -   /* The trust messages are not followed by key/signer information */
> > -   if (sigc->result != 'U') {
> > -   next = strchrnul(found, ' ');
> > -   sigc->key = xmemdupz(found, next - found);
> > -   /* The ERRSIG message is not followed by signer information */
> > -   if (*next && sigc->result != 'E') {
> > -   found = next + 1;
> > -   next = strchrnul(found, '\n');
> > -   sigc->signer = xmemdupz(found, next - found);
> > +   int had_exclusive_status = 0;
> > +
> > +   /* Iterate over all lines */
> > +   for (line = buf; *line; line = strchrnul(line+1, '\n')) {
> > +   while (*line == '\n')
> > +   line++;
> > +   /* Skip lines that don't start with GNUPG status */
> > +   if (strncmp(line, "[GNUPG:] ", 9))
> > +   continue;
> > +   line += 9;
> 
> You do not want to count to 9 yourself.  Instead
> 
>   if (!skip_prefix(line, "[GNUPG:] ", &line))
>   continue;
> 
> 
> > +   /* Iterate over all search strings */
> > +   for (i = 0; i < ARRAY_SIZE(sigcheck_gpg_status); i++) {
> > +   if (!strncmp(line, sigcheck_gpg_status[i].check,
> > +   strlen(sigcheck_gpg_status[i].check))) {
> > +   line += strlen(sigcheck_gpg_status[i].check);
> 
> Likewise.

Both done.

> 
> > +   
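
Putting the two skip_prefix() suggestions together, the loop might end
up looking roughly like this (a sketch, not the actual v4 patch):

	for (line = buf; *line; line = strchrnul(line + 1, '\n')) {
		while (*line == '\n')
			line++;
		/* Skip lines that do not carry a GnuPG status */
		if (!skip_prefix(line, "[GNUPG:] ", &line))
			continue;
		/* Iterate over all search strings */
		for (i = 0; i < ARRAY_SIZE(sigcheck_gpg_status); i++) {
			if (!skip_prefix(line, sigcheck_gpg_status[i].check, &line))
				continue;
			/* ... handle the matched status as before ... */
			break;
		}
	}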

Re: [PATCH v10 08/21] stash: convert apply to builtin

2018-10-15 Thread Thomas Gummerer
On 10/15, Johannes Schindelin wrote:
> Hi Paul,
> 
> On Mon, 15 Oct 2018, Paul-Sebastian Ungureanu wrote:
> 
> > +static void assert_stash_like(struct stash_info *info, const char 
> > *revision)
> > +{
> > +   if (get_oidf(&info->b_commit, "%s^1", revision) ||
> > +   get_oidf(&info->w_tree, "%s:", revision) ||
> > +   get_oidf(&info->b_tree, "%s^1:", revision) ||
> > +   get_oidf(&info->i_tree, "%s^2:", revision)) {
> > +   error(_("'%s' is not a stash-like commit"), revision);
> > +   free_stash_info(info);
> > +   exit(128);
> 
> Thomas had mentioned earlier that this should probably be a die() (and
> that the `free_stash_info()` should simply be dropped), see
> https://public-inbox.org/git/20180930174848.ge2...@hank.intra.tgummerer.com/

I think the way this is now is fine by me.  Not sure how much we care
about freeing 'info' or not (do we care about leaks when we 'die()'
anyway?), but this is done in the right order now, so we don't print
garbage in the error message anymore, and I'm happy with either this
or replacing all this with 'die()'.  

> > +   }
> > +}
> > +
> > +static int get_stash_info(struct stash_info *info, int argc, const char 
> > **argv)
> > +{
> > +   int ret;
> > +   char *end_of_rev;
> > +   char *expanded_ref;
> > +   const char *revision;
> > +   const char *commit = NULL;
> > +   struct object_id dummy;
> > +   struct strbuf symbolic = STRBUF_INIT;
> > +
> > +   if (argc > 1) {
> > +   int i;
> > +   struct strbuf refs_msg = STRBUF_INIT;
> > +   for (i = 0; i < argc; i++)
> > +   strbuf_addf(&refs_msg, " '%s'", argv[i]);
> 
> Thomas had also mentioned that this should be a `strbuf_join_argv()` call
> now.

Re-reading this we quote the individual args here, which is not
possible with 'strbuf_join_argv()', which I failed to notice when
reading this the other time.  We don't currently quote them, but I
think the quoting may actually be useful.
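Concretely, the two alternatives look like this (a sketch only; it assumes the
strbuf_join_argv() helper added earlier in this series takes a buffer, argc,
argv and a delimiter character):

	/* with per-argument quoting, each arg needs its own strbuf_addf() */
	for (i = 0; i < argc; i++)
		strbuf_addf(&refs_msg, " '%s'", argv[i]);

	/* with strbuf_join_argv() the loop goes away, but so do the quotes */
	strbuf_addch(&refs_msg, ' ');
	strbuf_join_argv(&refs_msg, argc, argv, ' ');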

It would however have been nice if the reason why the suggestion was
rejected had been written down as a reply to my original
review, to avoid misunderstandings like this :)

> Maybe v10 is an accidental re-send of v9?
> 
> Ciao,
> Dscho
> 
> > +
> > +   fprintf_ln(stderr, _("Too many revisions specified:%s"),
> > +  refs_msg.buf);
> > +   strbuf_release(&refs_msg);
> > +
> > +   return -1;
> > +   }
> > +
> > +   if (argc == 1)
> > +   commit = argv[0];
> > +
> > +   strbuf_init(&info->revision, 0);
> > +   if (!commit) {
> > +   if (!ref_exists(ref_stash)) {
> > +   free_stash_info(info);
> > +   fprintf_ln(stderr, _("No stash entries found."));
> > +   return -1;
> > +   }
> > +
> > +   strbuf_addf(&info->revision, "%s@{0}", ref_stash);
> > +   } else if (strspn(commit, "0123456789") == strlen(commit)) {
> > +   strbuf_addf(&info->revision, "%s@{%s}", ref_stash, commit);
> > +   } else {
> > +   strbuf_addstr(&info->revision, commit);
> > +   }
> > +
> > +   revision = info->revision.buf;
> > +
> > +   if (get_oid(revision, &info->w_commit)) {
> > +   error(_("%s is not a valid reference"), revision);
> > +   free_stash_info(info);
> > +   return -1;
> > +   }
> > +
> > +   assert_stash_like(info, revision);
> > +
> > +   info->has_u = !get_oidf(&info->u_tree, "%s^3:", revision);
> > +
> > +   end_of_rev = strchrnul(revision, '@');
> > +   strbuf_add(&symbolic, revision, end_of_rev - revision);
> > +
> > +   ret = dwim_ref(symbolic.buf, symbolic.len, &dummy, &expanded_ref);
> > +   strbuf_release(&symbolic);
> > +   switch (ret) {
> > +   case 0: /* Not found, but valid ref */
> > +   info->is_stash_ref = 0;
> > +   break;
> > +   case 1:
> > +   info->is_stash_ref = !strcmp(expanded_ref, ref_stash);
> > +   break;
> > +   default: /* Invalid or ambiguous */
> > +   free_stash_info(info);
> > +   }
> > +
> > +   free(expanded_ref);
> > +   return !(ret == 0 || ret == 1);
> > +}
> > +
> > +static int reset_tree(struct object_id *i_tree, int update, int reset)
> > +{
> > +   int nr_trees = 1;
> > +   struct unpack_trees_options opts;
> > +   struct tree_desc t[MAX_UNPACK_TREES];
> > +   struct tree *tree;
> > +   struct lock_file lock_file = LOCK_INIT;
> > +
> > +   read_cache_preload(NULL);
> > +   if (refresh_cache(REFRESH_QUIET))
> > +   return -1;
> > +
> > +   hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);
> > +
> > +   memset(&opts, 0, sizeof(opts));
> > +
> > +   tree = parse_tree_indirect(i_tree);
> > +   if (parse_tree(tree))
> > +   return -1;
> > +
> > +   init_tree_desc(t, tree->buffer, tree->size);
> > +
> > +   opts.head_idx = 1;
> > +   opts.src_index = &the_index;
> > +   opts.dst_index = &the_index;
> > +   opts.merge = 1;
> > +   opts.reset = reset;
> > +   opts.update = update;
> > +   opts.fn = oneway_merge;
> > +
> > +   if (unpack_trees(nr_trees, t, &opts))
> > +   return -1;
> > +
> > +   if (write_locked_index(&the_index, &lock_file, 

Re: [PATCH v2 09/13] git-p4: use `test_atexit` to kill the daemon

2018-10-15 Thread Luke Diamand
On Mon, 15 Oct 2018 at 16:02, Johannes Schindelin
 wrote:
>
> Hi Luke,
>
> On Mon, 15 Oct 2018, Luke Diamand wrote:
>
> > On Mon, 15 Oct 2018 at 11:12, Johannes Schindelin via GitGitGadget
> >  wrote:
> > >
> > > From: Johannes Schindelin 
> > >
> > > This should be more reliable than the current method, and prepares the
> > > test suite for a consistent way to clean up before re-running the tests
> > > with different options.
> > >
> >
> > I'm finding that it's leaving p4d processes lying around.
>
> That's a bummer!
>
> > e.g.
> >
> > $ ./t9820-git-p4-editor-handling.sh
> > 
> > $ ./t9820-git-p4-editor-handling.sh
> > 
>
> Since I do not currently have a setup with p4d installed, can you run that
> with `sh -x ...` and see whether this code path is hit?

All you need to do is to put p4 and p4d in your PATH.

https://www.perforce.com/downloads/helix-core-p4d
https://www.perforce.com/downloads/helix-command-line-client-p4

The server is free to use for a small number of users, you don't need
to do anything to make it go.


>
>  test_done () {
> GIT_EXIT_OK=t
>
> +   test -n "$immediate" || test_atexit_handler
> +

+ test -n
+ test_atexit_handler
./t9820-git-p4-editor-handling.sh: 764:
./t9820-git-p4-editor-handling.sh: test_atexit_handler: not found

Is that expected?




> if test -n "$write_junit_xml" && test -n "$junit_xml_path"
> then
>
> > And also
> >
> > $ ./t9800-git-p4-basic.sh
> > 
> > Ctrl-C
>
> Oh, you're right. I think I need to do something in this line:
>
> trap 'exit $?' INT
>
> in t/test-lib.sh, something like
>
> trap 'exit_code=$?; test_atexit_handler; exit $exit_code' INT
>
> would you agree? (And: could you test?)

Not sure.
Send me a patch and I can try it out.

Thanks,
Luke


Re: Git Test Coverage Report (Monday, Oct 15)

2018-10-15 Thread Derrick Stolee

On 10/15/2018 12:24 PM, Derrick Stolee wrote:


Uncovered code in 'jch' (22f2f0f) and not in 'next' (152ad8e)
-

prio-queue.c
2d181390f3 94) return queue->array[queue->nr - 1].data;

(I have a fix to cover this in my private branch for this topic.)



revision.c
4943d28849 2931) return;
4943d28849 2934) return;
4943d28849 2937) c->object.flags |= UNINTERESTING;
4943d28849 2940) return;
4943d28849 2943) mark_parents_uninteresting(c);
4943d28849 2966) return;
4943d28849 2969) return;
4943d28849 2974) return;
4943d28849 3022) init_author_date_slab(&info->author_date);
4943d28849 3023) info->topo_queue.compare = compare_commits_by_author_date;
4943d28849 3024) info->topo_queue.cb_data = &info->author_date;
4943d28849 3025) break;
4943d28849 3038) continue;
4943d28849 3048) record_author_date(&info->author_date, c);
6c04ff3001 3086) if (!revs->ignore_missing_links)
6c04ff3001 3087) die("Failed to traverse parents of commit %s",
4943d28849 3088) oid_to_hex(&commit->object.oid));
4943d28849 3096) continue;
Looks like a number of these lines are important to cover, but are not 
covered by tests that _also_ specify '--topo-order'. I bet I can cover 
more of these by overriding the sort logic to use the new algorithm if 
GIT_TEST_COMMIT_GRAPH is specified. Or, should I create yet another test 
variable to cover these cases?


(Note: I run these coverage reports with a variety of optional test 
variables.)



Uncovered code in 'next' (152ad8e) and not in 'master' (5a0cc8a)

builtin/rev-list.c
7c0fe330d5 builtin/rev-list.c 227) die("unexpected missing %s object 
'%s'",
7c0fe330d5 builtin/rev-list.c 228) type_name(obj->type), 
oid_to_hex(>oid));


commit-graph.c
5cef295f28   67) return 0;
20fd6d5799   79) return 0;
These are two ways to say the commit-graph should not be used, but are 
not covered by tests currently. One is if we say "is the repo shallow?" 
which happens to return when the repo has grafts (but we keep the check 
here in case the way shallows are implemented changes) and the other is 
if the repo is not initialized, but I fixed the test-helpers that had 
not initialized the repo yet.


Uncovered code in 'master' (5a0cc8a) and not in (fe8321ec05)
-
builtin/fsck.c
66ec0390e7 builtin/fsck.c 862) midx_argv[2] = "--object-dir";
66ec0390e7 builtin/fsck.c 863) midx_argv[3] = alt->path;
66ec0390e7 builtin/fsck.c 864) if (run_command(&midx_verify))
66ec0390e7 builtin/fsck.c 865) errors_found |= ERROR_COMMIT_GRAPH;
These are underneath the "for all alternates" loop, and _should_ be 
covered with the coming GIT_TEST_MULTI_PACK_INDEX test variable.


Thanks,
-Stolee


Re: [PATCH] branch: trivial style fix

2018-10-15 Thread Jeff King
On Sat, Oct 06, 2018 at 08:40:33AM +0800, Tao Qingyun wrote:

> >> -  for (i = 0; i < argc; i++, strbuf_reset()) {
> >> +  for (i = 0; i < argc; i++) {
> >>char *target = NULL;
> >>int flags = 0;
> >>  
> >> +  strbuf_reset();
> >
> > This is not "trivial" nor "style fix", though.
> >
> > It is not "trivial" because it also changes the behaviour.  Instead
> > of resetting the strbuf each time after the loop body runs, the new
> > code resets it before the loop body, even for the 0-th iteration
> > where the strbuf hasn't even been used at all.  It is not a "style
> > fix" because we do not have a particular reason to avoid using the
> > comma operator to perform two simple expressions with side effects
> > inside a single expression.
> >
> Thank you and Jeff for your explanation. I think I get the point now.
> 
> The third part of a `for` statement is normally for a step. I think it
> improves readability even if it does nothing in the first iteration.
> 
> And, should I drop this part and resend the patch? I'm a newbie :).

Sorry for the slow reply. For some reason I do not think your message
here made it to the list (but I don't see anything obviously wrong with
it).

Anyway, yes, I think it is worth dropping this hunk and re-sending the
else-if style fix as a separate patch (you may choose to re-send this
hunk as its own patch, too, if you want to argue for its inclusion, but
there is no sense in holding the actual style fix hostage).

-Peff


Re: [PATCH 0/9] Offer to run CI/PR builds in Visual Studio Team Services

2018-10-15 Thread Johannes Schindelin
Hi team,

On Mon, 15 Oct 2018, Christian Couder wrote:

> On Mon, Oct 15, 2018 at 5:46 PM Duy Nguyen  wrote:
> >
> > On Mon, Oct 15, 2018 at 5:08 PM Ævar Arnfjörð Bjarmason
> >  wrote:
> > > As an aside I poked Junio via private mail in late August to see if he'd
> > > be interested in pushing to gitlab.com/git/git.git too as part of his
> > > normal push-outs.
> 
> People at GitLab also have wondered if it would be possible for GitLab
> to host an official Git mirror, and I think it would be nice indeed if
> Junio could push to gitlab.com.

AFAIR Junio does not push to github.com/git/git, it is an automatic
mirror.

GitLab could easily do the same.

Ciao,
Johannes

Re: [PATCH v8 0/7] speed up index load through parallelization

2018-10-15 Thread Ben Peart

fixup! IEOT error messages

Enable localizing new error messages and improve the error message for
invalid IEOT extension sizes.

Signed-off-by: Ben Peart 
---
 read-cache.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/read-cache.c b/read-cache.c
index 7acc2c86f4..f9fa6a7979 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -3480,7 +3480,7 @@ static struct index_entry_offset_table 
*read_ieot_extension(const char *mmap, si

/* validate the version is IEOT_VERSION */
ext_version = get_be32(index);
if (ext_version != IEOT_VERSION) {
-  error("invalid IEOT version %d", ext_version);
+  error(_("invalid IEOT version %d"), ext_version);
   return NULL;
}
index += sizeof(uint32_t);
@@ -3488,7 +3488,7 @@ static struct index_entry_offset_table 
*read_ieot_extension(const char *mmap, si

/* extension size - version bytes / bytes per entry */
nr = (extsize - sizeof(uint32_t)) / (sizeof(uint32_t) + 
sizeof(uint32_t));

if (!nr) {
-  error("invalid number of IEOT entries %d", nr);
+  error(_("invalid IEOT extension size %d"), extsize);
   return NULL;
}
ieot = xmalloc(sizeof(struct index_entry_offset_table)
--
2.18.0.windows.1



On 10/14/2018 8:28 AM, Duy Nguyen wrote:

On Wed, Oct 10, 2018 at 5:59 PM Ben Peart  wrote:

@@ -3460,14 +3479,18 @@ static struct index_entry_offset_table 
*read_ieot_extension(const char *mmap, si

 /* validate the version is IEOT_VERSION */
 ext_version = get_be32(index);
-   if (ext_version != IEOT_VERSION)
+   if (ext_version != IEOT_VERSION) {
+  error("invalid IEOT version %d", ext_version);


Please wrap this string in _() so that it can be translated.


return NULL;
+   }
 index += sizeof(uint32_t);

 /* extension size - version bytes / bytes per entry */
 nr = (extsize - sizeof(uint32_t)) / (sizeof(uint32_t) + 
sizeof(uint32_t));
-   if (!nr)
+   if (!nr) {
+  error("invalid number of IEOT entries %d", nr);


Ditto. And reporting extsize may be more useful than nr, which we know
is zero, but we don't know why it's calculated zero unless we know
extsize.



Re: What's cooking in git.git (Oct 2018, #02; Sat, 13)

2018-10-15 Thread Stefan Beller
On Fri, Oct 12, 2018 at 6:03 PM Junio C Hamano  wrote:
>
> Stefan Beller  writes:
>
> >> * sb/submodule-recursive-fetch-gets-the-tip (2018-10-11) 9 commits
> >>  . builtin/fetch: check for submodule updates for non branch fetches
> >>  . fetch: retry fetching submodules if needed objects were not fetched
> >>  . submodule: fetch in submodules git directory instead of in worktree
> >>  . repository: repo_submodule_init to take a submodule struct
> >>  . submodule.c: do not copy around submodule list
> >>  . submodule.c: move global changed_submodule_names into fetch submodule 
> >> struct
> >>  . submodule.c: sort changed_submodule_names before searching it
> >>  . submodule.c: fix indentation
> >>  . sha1-array: provide oid_array_filter
> >>
> >>  "git fetch --recurse-submodules" may not fetch the necessary commit
> >>  that is bound to the superproject, which is getting corrected.
> >>
> >>  Ejected for now, as it has fallouts in places like t/helper/.
> >
> > This is the first time I hear about that, I'll look into that.
> > The tipmost commit there is also shoddy, I'll redo that.
>
> This is the first time I saw the breakage with this series, but I
> would not be surprised, as this was rerolled recently.  Who knows
> what got changed in this series and in other topics---any new
> interaction can arise and that is a normal part of distributed
> development.

I performed the same merge last week, and the range-diff indicates
that your version of next was further ahead than mine.

The breakage itself comes from

t/helper/test-submodule-nested-repo-config.c: In function
‘cmd__submodule_nested_repo_config’:
t/helper/test-submodule-nested-repo-config.c:20:54: warning: passing
argument 3 of ‘repo_submodule_init’ from incompatible pointer type
[-Wincompatible-pointer-types]
  if (repo_submodule_init(, the_repository, argv[1])) {
  ^~~~
In file included from ./cache.h:17:0,
 from ./submodule-config.h:4,
 from t/helper/test-submodule-nested-repo-config.c:2:
./repository.h:126:5: note: expected ‘const struct submodule *’ but
argument is of type ‘const char *’
 int repo_submodule_init(struct repository *subrepo,
 ^~~

which is introduced by
commit c369da44610acc5e56213b8784d4250ae619fdb9
  (origin/ao/submodule-wo-gitmodules-checked-out)
Author: Antonio Ospite 
Date:   2018-10-05 15:06

t/helper: add test-submodule-nested-repo-config

Add a test tool to exercise config_from_gitmodules(), in particular for
the case of nested submodules.

Add also a test to document that reading the submodules config of nested
submodules does not work yet when the .gitmodules file is not in the
working tree but is still in the index.

This is because the git API does not always make it possible to access the
object store of an arbitrary repository (see get_oid() usage in
config_from_gitmodules()).

When this git limitation gets fixed the aforementioned use case will be
supported too.

Signed-off-by: Antonio Ospite 
Signed-off-by: Junio C Hamano 

My resend will take that into account, building on Antonios series.

Thanks,
Stefan


Re: [PATCH] Typo `dashes ?` -> `dashes?`

2018-10-15 Thread Jeff King
On Sun, Oct 14, 2018 at 07:44:58PM +, Jacques Bodin-Hullin wrote:

> diff --git a/parse-options.c b/parse-options.c
> index 3b874a83a0c89..88512212392ae 100644
> --- a/parse-options.c
> +++ b/parse-options.c
> @@ -352,7 +352,7 @@ static void check_typos(const char *arg, const struct 
> option *options)
>   return;
>  
>   if (starts_with(arg, "no-")) {
> - error ("did you mean `--%s` (with two dashes ?)", arg);
> + error ("did you mean `--%s` (with two dashes?)", arg);

I agree the extra space in the original is stylistically weird, and your
suggestion is an improvement. However, I think this is really a question
"did you mean..." with a parenthetical phrase. Most style guides would
recommend putting the punctuation outside, like:

  error: did you mean `--foo` (with two dashes)?

Speaking of style, the extra space between "error" and "(" does not
match our usual coding style. It might be worth removing while we're
touching these lines.
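Putting both of those together, the line would end up as something like:

	error("did you mean `--%s` (with two dashes)?", arg);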

-Peff


Re: [PATCH] builtin/branch.c: remove useless branch_get

2018-10-15 Thread Jeff King
On Mon, Oct 15, 2018 at 10:08:39PM +0800, Tao Qingyun wrote:

> Signed-off-by: Tao Qingyun 

The commit message should describe the "why" here. I gave some reasoning
nearby in:

  https://public-inbox.org/git/20181015171417.ga1...@sigill.intra.peff.net/

From your initial message, it sounds like this might also be fixing a
bug ("confusing that getting a branch before it has created"). Can you
describe that (and ideally show the fix with a test)?

> diff --git a/builtin/branch.c b/builtin/branch.c
> index c396c41533..2367703034 100644
> --- a/builtin/branch.c
> +++ b/builtin/branch.c
> @@ -809,11 +809,6 @@ int cmd_branch(int argc, const char **argv, const char 
> *prefix)
>   git_config_set_multivar(buf.buf, NULL, NULL, 1);
>   strbuf_release();
>   } else if (argc > 0 && argc <= 2) {
> - struct branch *branch = branch_get(argv[0]);
> -
> - if (!branch)
> - die(_("no such branch '%s'"), argv[0]);
> -

From what I can tell, the patch itself _is_ an improvement. I just think
we need to explain why for the record.

-Peff


Re: [Question] builtin/branch.c

2018-10-15 Thread Jeff King
On Sun, Oct 14, 2018 at 12:19:35PM +0200, Ævar Arnfjörð Bjarmason wrote:

> On Sat, Oct 13, 2018 at 10:12 AM Tao Qingyun  wrote:
> > Hi, I am learning `builtin/branch.c`. I find that it will call `branch_get`
> > before create and [un]set upstream, and die with "no such branch" if failed.
> > but `branch_get` seems never fail, it is a get_or_create. Also, it was
> > confused that getting a branch before it has created.
> >
> > builtin/branch.c #811
> >
> > } else if (argc > 0 && argc <= 2) {
> > struct branch *branch = branch_get(argv[0]);
> >
> > if (!branch)
> > die(_("no such branch '%s'"), argv[0]);
> 
> From my reading of the source you're correct. That !branch case is
> pointless. The only way that function can fail is in the x*() family
> of functions, which'll make the function die instead of returning
> NULL.

It sometimes returns current_branch, which can be NULL (e.g., if you're
on a detached HEAD). Try:

  $ git branch HEAD
  fatal: no such branch 'HEAD'

  $ git branch ''
  fatal: no such branch ''
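For context, the relevant part of branch_get() in remote.c looks roughly like
this (paraphrased from memory, so treat it as a sketch rather than the exact
source):

	struct branch *branch_get(const char *name)
	{
		struct branch *ret;

		read_config();
		if (!name || !*name || !strcmp(name, "HEAD"))
			ret = current_branch; /* may be NULL, e.g. detached HEAD */
		else
			ret = make_branch(name, 0);
		set_merge(ret);
		return ret;
	}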

However, it seems weird that we'd check those cases here (and provide
such lousy messages). And indeed, dropping that and letting us
eventually hit create_branch() gives a much better message:

  $ git branch HEAD
  fatal: 'HEAD' is not a valid branch name.

  $ git branch ''
  fatal: '' is not a valid branch name.

I think we'd want to see that reasoning in the commit message.

-Peff



Re: [PATCH v3 1/2] commit-graph write: add progress output

2018-10-15 Thread SZEDER Gábor
On Mon, Sep 17, 2018 at 03:33:35PM +, Ævar Arnfjörð Bjarmason wrote:

> @@ -560,6 +563,9 @@ static int add_packed_commits(const struct object_id *oid,
>   off_t offset = nth_packed_object_offset(pack, pos);
>   struct object_info oi = OBJECT_INFO_INIT;
>  
> + if (list->progress)
> + display_progress(list->progress, ++list->progress_done);

Note that add_packed_commits() is used as a callback function for
for_each_object_in_pack() (with '--stdin-packs') or
for_each_packed_object() (no options), i.e. this will count the number
of objects, not commits:

  $ git rev-list --all |wc -l
  768524
  $ git rev-list --objects --all |wc -l
  6130295
  # '--count --objects' together didn't work as expected.
  $ time ~/src/git/git commit-graph write
  Finding commits for commit graph: 6130295, done.
  Annotating commits in commit graph: 2305572, done.
  Computing commit graph generation numbers: 100% (768524/768524), done.

(Now I also see the 3x difference in the "Annotating commits" counter
that you mentioned.)

I see two options:

  - Provide a different title for this progress counter, e.g.
"Scanning objects for c-g", or "Processing objects...", or
something else that says "objects" instead of "commits".

  - Move this condition and display_progress() call to the end of the
function, so it will only count commits, not any other objects.
(As far as I understand both for_each_object_in_pack() and
for_each_packed_object() iterate in pack .idx order, i.e. it's
essentially random.  This means that commit objects should be
distributed evenly among other kinds of objects, so we don't have
to worry about the counter stalling for a long stretch of
consecutive non-commit objects.  At least in theory.)
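A sketch of the second option, assuming add_packed_commits() keeps its current
shape where non-commit objects are filtered out right after the type lookup
(the exact surrounding code may differ):

	/* ... look up the object type via packed_object_info() as before ... */

	if (type != OBJ_COMMIT)
		return 0;

	/* moved below the type check: count only commits, not all objects */
	if (list->progress)
		display_progress(list->progress, ++list->progress_done);

	ALLOC_GROW(list->list, list->nr + 1, list->alloc);
	oidcpy(&list->list[list->nr], oid);
	list->nr++;

	return 0;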





Re: [PATCH 15/16] commit-reach: make can_all_from_reach... linear

2018-10-15 Thread René Scharfe
Am 15.10.2018 um 17:31 schrieb Derrick Stolee:
> On 10/14/2018 10:29 AM, René Scharfe wrote:
>> diff --git a/git-compat-util.h b/git-compat-util.h
>> index 5f2e90932f..491230fc57 100644
>> --- a/git-compat-util.h
>> +++ b/git-compat-util.h
>> @@ -1066,6 +1066,21 @@ static inline void sane_qsort(void *base, size_t 
>> nmemb, size_t size,
>>  qsort(base, nmemb, size, compar);
>>   }
>>   
>> +#define DECLARE_SORT(scope, name, elemtype) \
>> +scope void name(elemtype, size_t)
>> +
>> +#define DEFINE_SORT(scope, name, elemtype, one, two)
>> \
>> +static int name##_compare(const elemtype, const elemtype);  \
>> +static int name##_compare_void(const void *a, const void *b)
>> \
>> +{   \
>> +return name##_compare(a, b);\
>> +}   \
>> +scope void name(elemtype base, size_t nmemb)
>> \
>> +{   \
>> +QSORT(base, nmemb, name##_compare_void);\
>> +}   \
>> +static int name##_compare(const elemtype one, const elemtype two)
>> +
> 
> Since you were worried about the "private" name of the compare function, 
> maybe split this macro into two: DEFINE_COMPARE and DEFINE_SORT. Then, 
> if someone wants direct access to the compare function, they could use 
> the DEFINE_COMPARE to ensure the typing is correct, and use QSORT as 
> normal with name##_compare_void.

The pointers are converted to const void * somewhere along the way from
qsort() to compare function.  Splitting the macro would require type
check tricks to make sure the types of the compare function matches the
array to be sorted.  Letting a single macro bake it all into a layer
cake of generated functions is a lot simpler.

> As I think about this, I think this is less of a problem than is worth 
> this split. The commit-slab definitions generate a lot of methods using 
> the "name##" convention, so perhaps we should just trust developers 
> using the macros to look up the macro definition or similar examples. In 
> that sense, including a conversion that consumes the compare function 
> directly can be a signpost for future callers.

Using the generated compare function name directly is a bit awkward; e.g.
in the two example cases it would be sort_by_score_compare() and
sort_packed_ref_records_compare().  Defining the real compare function
the usual way (with a proper name) and having the DEFINE_SORT block call
it is a bit more repetitive, but clean and understandable IMHO.
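For illustration, that variant could look like this for the midx case (field
names are taken from the sorting criteria discussed further down; this is not
actual patch code):

	/* a typed compare function with a proper name, reusable on its own */
	static int cmp_midx_entry(const struct pack_midx_entry *a,
				  const struct pack_midx_entry *b)
	{
		int cmp = oidcmp(&a->oid, &b->oid);

		if (cmp)
			return cmp;
		if (a->pack_mtime > b->pack_mtime)
			return -1;
		if (a->pack_mtime < b->pack_mtime)
			return 1;
		return a->pack_int_id - b->pack_int_id;
	}

	/* ... and a DEFINE_SORT body that merely forwards to it */
	DEFINE_SORT(static, sort_midx, struct pack_midx_entry *, a, b)
	{
		return cmp_midx_entry(a, b);
	}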

We also could just leave complicated cases alone..

> I would say that maybe the times where you need to do something special 
> should be pulled out into their own patches, so we can call attention to 
> them directly.

Right; this patch was just a sketch.

> I like this "sort_by_" convention..
> 
>>   
>>  for (i = 0; i < nr_packs; i++) {
>>  pack_names[i] = pairs[i].pack_name;
>> @@ -455,10 +453,8 @@ struct pack_midx_entry {
>>  uint64_t offset;
>>   };
>>   
>> -static int midx_oid_compare(const void *_a, const void *_b)
>> +DEFINE_SORT(static, sort_midx, struct pack_midx_entry *, a, b)
>>   {
>> -const struct pack_midx_entry *a = (const struct pack_midx_entry *)_a;
>> -const struct pack_midx_entry *b = (const struct pack_midx_entry *)_b;
>>  int cmp = oidcmp(&a->oid, &b->oid);
>>   
>>  if (cmp)
>> @@ -573,7 +569,7 @@ static struct pack_midx_entry *get_sorted_entries(struct 
>> multi_pack_index *m,
>>  }
>>  }
>>   
>> -QSORT(entries_by_fanout, nr_fanout, midx_oid_compare);
>> +sort_midx(entries_by_fanout, nr_fanout);
> 
> ...but it isn't followed here. Perhaps "sort_by_oid"?

That function sorts by oid, pack_mtime, and pack_int_id, but including
all these fields in the name is a bit unwieldy.  Being unspecific by
calling it sort_midx() was the lazy way out.  Mentioning only oid is a
bit misleading.  Perhaps sort_by_oid_etc()?

René



Git Test Coverage Report (Monday, Oct 15)

2018-10-15 Thread Derrick Stolee
In an effort to ensure new code is reasonably covered by the test suite, 
we now have contrib/coverage-diff.sh to combine the gcov output from 
'make coverage-test ; make coverage-report' with the output from 'git 
diff A B' to discover _new_ lines of code that are not covered. This 
report ignores lines including "BUG(".


This report takes the output of these results after running on four 
branches:


        pu: 78bfaf7bc1efe6892fe4dbedf9ed80f9dd48146c
       jch: d67e018e0f57ebbb4fa0354a58a0a6d47e25a948
      next: 152ad8e3369ac77026886a2910e3a407c281df35
    master: 5a0cc8aca797dbd7d2be3b67458ff880ed45cddf
     maint: 04861070400d3ca9d487bd0d736ca818b9ffe371

I ran the test suite on each of these branches on an Ubuntu Linux VM, 
and I'm missing some dependencies (like apache, svn, and perforce) so 
not all tests are run.


I submit this output without comment. I'm taking a look especially at my 
own lines to see where coverage can be improved.


Thanks,

-Stolee

Uncovered code in 'pu' (12227c8) and not in 'jch' (22f2f0f)
---
builtin/blame.c
74e8221b52 builtin/blame.c    924) blame_date_width = sizeof("Thu Oct 19 
16:00");

74e8221b52 builtin/blame.c    925) break;

builtin/branch.c
ba8ba9df26 builtin/branch.c 452) die(_("could not resolve HEAD"));
ba8ba9df26 builtin/branch.c 458) die(_("HEAD (%s) points outside of 
refs/heads/"), refname);


builtin/grep.c
a8ace17bd4 builtin/grep.c  439) grep_read_unlock();

builtin/help.c
e6e76baaf4 builtin/help.c 429) if (!exclude_guides || alias[0] == '!') {
e6e76baaf4 builtin/help.c 430) printf_ln(_("'%s' is aliased to '%s'"), 
cmd, alias);

e6e76baaf4 builtin/help.c 431) free(alias);
e6e76baaf4 builtin/help.c 432) exit(0);
e6e76baaf4 builtin/help.c 441) fprintf_ln(stderr, _("'%s' is aliased to 
'%s'"), cmd, alias);

e6e76baaf4 builtin/help.c 442) count = split_cmdline(alias, &argv);
e6e76baaf4 builtin/help.c 443) if (count < 0)
e6e76baaf4 builtin/help.c 444) die(_("bad alias.%s string: %s"), cmd,
e6e76baaf4 builtin/help.c 446) free(argv);
e6e76baaf4 builtin/help.c 448) return alias;

builtin/rebase--interactive.c
6424061be4 builtin/rebase--interactive.c   23) return 
error_errno(_("could not read '%s'."), todo_file);
6424061be4 builtin/rebase--interactive.c   28) return 
error_errno(_("could not write '%s'"), todo_file);
7ccfac40bc builtin/rebase--interactive.c   43) return 
error_errno(_("could not read '%s'."), todo_file);
7ccfac40bc builtin/rebase--interactive.c   46) 
todo_list_release(&todo_list);
7ccfac40bc builtin/rebase--interactive.c   47) return error(_("unusable 
todo list: '%s'"), todo_file);
9787d17d40 builtin/rebase--interactive.c  294) ret = 
rearrange_squash_in_todo_file();


date.c
74e8221b52  113) die("Timestamp too large for this system: %"PRItime, time);
74e8221b52  216) if (tm->tm_mon == human_tm->tm_mon) {
74e8221b52  217) if (tm->tm_mday > human_tm->tm_mday) {
74e8221b52  219) } else if (tm->tm_mday == human_tm->tm_mday) {
74e8221b52  220) hide.date = hide.wday = 1;
74e8221b52  221) } else if (tm->tm_mday + 5 > human_tm->tm_mday) {
74e8221b52  223) hide.date = 1;
74e8221b52  231) gettimeofday(&now, NULL);
74e8221b52  232) show_date_relative(time, tz, &now, buf);
74e8221b52  233) return;
74e8221b52  246) hide.seconds = 1;
74e8221b52  247) hide.tz |= !hide.date;
74e8221b52  248) hide.wday = hide.time = !hide.year;
74e8221b52  262) strbuf_rtrim(buf);
74e8221b52  287) gettimeofday(&now, NULL);
74e8221b52  290) human_tz = local_time_tzoffset(now.tv_sec, &human_tm);
74e8221b52  886) static int auto_date_style(void)
74e8221b52  888) return (isatty(1) || pager_in_use()) ? DATE_HUMAN : 
DATE_NORMAL;

74e8221b52  909) return DATE_HUMAN;
74e8221b52  911) return auto_date_style();

git.c
a9a60b94cc 322) fprintf_ln(stderr, _("'%s' is aliased to '%s'"),

rebase-interactive.c
b74a37a5a7  26) warning(_("unrecognized setting %s for option "
6424061be4 107) return error_errno(_("could not write '%s''"), todo_file);
6424061be4 110) return error(_("could not copy '%s' to '%s'."), todo_file,
b74a37a5a7 174) goto leave_check;

sequencer.c
b5d6062402 4425) strbuf_insert(buf, todo_list->items[insert].offset_in_buf +
b5d6062402 4437) int sequencer_add_exec_commands(const char *commands)
06d8136126 ) return error_errno(_("could not read '%s'."), todo_file);
b5d6062402 4446) if (todo_list_parse_insn_buffer(todo_list.buf.buf, &todo_list)) {

b5d6062402 4451) todo_list_add_exec_commands(&todo_list, commands);
b5d6062402 4452) res = write_message(todo_list.buf.buf, 
todo_list.buf.len, todo_file, 0);

0cce4a2756 4453) todo_list_release(&todo_list);
b5d6062402 4455) return res;
b74a37a5a7 4515) goto out;
b74a37a5a7 4520) goto out;
b8dac44d10 4660) todo_list_release(_todo);
009173ed7b 4665) todo_list_release(_todo);
009173ed7b 4666) return error_errno(_("could not write '%s'"), todo_file);
9787d17d40 4859) int rearrange_squash_in_todo_file(void)
9787d17d40 4861) const char *todo_file = rebase_path_todo();
9787d17d40 4862) struct todo_list todo_list 

Re: [PATCH 0/9] Offer to run CI/PR builds in Visual Studio Team Services

2018-10-15 Thread Christian Couder
On Mon, Oct 15, 2018 at 5:46 PM Duy Nguyen  wrote:
>
> On Mon, Oct 15, 2018 at 5:08 PM Ævar Arnfjörð Bjarmason
>  wrote:
> > As an aside I poked Junio via private mail in late August to see if he'd
> > be interested in pushing to gitlab.com/git/git.git too as part of his
> > normal push-outs.

People at GitLab also have wondered if it would be possible for GitLab
to host an official Git mirror, and I think it would be nice indeed if
Junio could push to gitlab.com.

> > One neat thing that would buy us is the ability to
> > have a .gitlab-ci.yml in git.git and use their CI implementation.

Yeah, that would be nice for people who prefer to use GitLab CI.

In general I agree that it is a good idea to have support for many CI
tools even if it is not always up-to-date and even if it's not used by
many people.

> That would be great since it allows gitlab forks to have CI support
> from the beginning. But gitlab ci has time execution limits if I
> remember correctly and I'm not sure if we'll use it all up before the
> end of the month (or whatever period that is), or do they offer
> something special to git.git?

I can ask for more time if that would help.


Re: [PATCH v3 1/2] commit-graph write: add progress output

2018-10-15 Thread SZEDER Gábor
On Thu, Oct 11, 2018 at 07:52:21PM +0200, Ævar Arnfjörð Bjarmason wrote:
> 
> On Wed, Oct 10 2018, Ævar Arnfjörð Bjarmason wrote:
> 
> > On Wed, Oct 10 2018, SZEDER Gábor wrote:
> >
> >> On Wed, Oct 10, 2018 at 11:56:45PM +0200, Ævar Arnfjörð Bjarmason wrote:
> >>> On Wed, Oct 10 2018, SZEDER Gábor wrote:
> >>
> >>> >>for (i = 0; i < oids->nr; i++) {
> >>> >> +  display_progress(progress, ++j);
> >>>
> >>> [...]
> >>>
> >>> > This display_progress() call, however, doesn't seem to be necessary.
> >>> > First, it counts all commits for a second time, resulting in the ~2x
> >>> > difference compared to the actual number of commits, and then causing
> >>> > my confusion.  Second, all what this loop is doing is setting a flag
> >>> > in commits that were already looked up and parsed in the above loops.
> >>> > IOW this loop is very fast, and the progress indicator jumps from
> >>> > ~780k right to 1.5M, even on my tiny laptop, so it doesn't need a
> >>> > progress indicator at all.

> Hrm, actually reading this again your initial post says we end up with a
> 2x difference v.s. the number of commits, but it's actually 3x.

Well, it depends on how you create the commit-graph and on the repo as
well, I guess.  I run 'git commit-graph write --reachable' in a repo
created by 'git clone --single-branch ...', and in that case the
difference is only ~2x (the first loop in close_reachable() has as
many iterations as the number of refs).  If the repo were to contain
twice as many refs as commits, then the difference could be as high as
4x.

However, I think I might have noticed another progress counting issue
as well, will get back to it later but first I have to get my numbers
straight.

> The loop
> that has a rather trivial runtime comparatively is the 3x, but the 2x
> loop takes a notable amount of time. So e.g. on git.git:
> 
> $ git rev-list --all | wc -l; ~/g/git/git commit-graph write
> 166678
> Annotating commits in commit graph: 518463, done.
> Computing commit graph generation numbers: 100% (172685/172685), done.


Re: [PATCH 0/9] Offer to run CI/PR builds in Visual Studio Team Services

2018-10-15 Thread Duy Nguyen
On Mon, Oct 15, 2018 at 5:08 PM Ævar Arnfjörð Bjarmason
 wrote:
> As an aside I poked Junio via private mail in late August to see if he'd
> be interested in pushing to gitlab.com/git/git.git too as part of his
> normal push-outs. One neat thing that would buy us is the ability to
> have a .gitlab-ci.yml in git.git and use their CI implementation.

That would be great since it allows gitlab forks to have CI support
from the beginning. But gitlab ci has time execution limits if I
remember correctly and I'm not sure if we'll use it all up before the
end of the month (or whatever period that is), or do they offer
something special to git.git?
-- 
Duy


Re: [PATCH 15/16] commit-reach: make can_all_from_reach... linear

2018-10-15 Thread Derrick Stolee

On 10/14/2018 10:29 AM, René Scharfe wrote:

It still has some repetition, converted code is a bit longer than the
current one, and I don't know how to build a Coccinelle rule that would
do that conversion.

Looked for a possibility to at least leave QSORT call-sites alone by
enhancing that macro, but didn't find any.  Found a few websites
showing off mindblowing macros, though; this one in particular:

https://github.com/pfultz2/Cloak/wiki/C-Preprocessor-tricks,-tips,-and-idioms

Anyway, drove the generative approach a bit further, and came up with
the new DEFINE_SORT below.  I'm unsure about the name; perhaps it should
be called DEFINE_SORT_BY_COMPARE_FUNCTION_BODY, but that's a bit long.
It handles casts and const attributes behind the scenes and avoids
repetition, but looks a bit weird, as it is placed where a function
signature would go.

Apart from that the macro is simple and doesn't use any tricks or
added checks.  It just sets up boilerplate functions to offer type-safe
sorting.

diffcore-rename.c and refs/packed-backend.c receive special treatment in
the patch because their compare functions are used outside of sorting as
well.  I made them take typed pointers nevertheless and used them from
DEFINE_SORT; the wrapper generated by that macro is supposed to be
private.  Given that such reuse is rare, I think we don't need a way
to make it public.

What do y'all think about this direction?

---

[snip]

diff --git a/git-compat-util.h b/git-compat-util.h
index 5f2e90932f..491230fc57 100644
--- a/git-compat-util.h
+++ b/git-compat-util.h
@@ -1066,6 +1066,21 @@ static inline void sane_qsort(void *base, size_t nmemb, 
size_t size,
qsort(base, nmemb, size, compar);
  }
  
+#define DECLARE_SORT(scope, name, elemtype) \

+scope void name(elemtype, size_t)
+
+#define DEFINE_SORT(scope, name, elemtype, one, two)   \
+static int name##_compare(const elemtype, const elemtype); \
+static int name##_compare_void(const void *a, const void *b)   \
+{  \
+   return name##_compare(a, b);\
+}  \
+scope void name(elemtype base, size_t nmemb)   \
+{  \
+   QSORT(base, nmemb, name##_compare_void);\
+}  \
+static int name##_compare(const elemtype one, const elemtype two)
+


Since you were worried about the "private" name of the compare function, 
maybe split this macro into two: DEFINE_COMPARE and DEFINE_SORT. Then, 
if someone wants direct access to the compare function, they could use 
the DEFINE_COMPARE to ensure the typing is correct, and use QSORT as 
normal with name##_compare_void.


As I think about this, I think this is less of a problem than is worth 
this split. The commit-slab definitions generate a lot of methods using 
the "name##" convention, so perhaps we should just trust developers 
using the macros to look up the macro definition or similar examples. In 
that sense, including a conversion that consumes the compare function 
directly can be a signpost for future callers.


I would say that maybe the times where you need to do something special 
should be pulled out into their own patches, so we can call attention to 
them directly.


[snip]

diff --git a/midx.c b/midx.c
index 713d6f9dde..4407db7949 100644
--- a/midx.c
+++ b/midx.c
@@ -419,10 +419,8 @@ struct pack_pair {
char *pack_name;
  };
  
-static int pack_pair_compare(const void *_a, const void *_b)

+DEFINE_SORT(static, sort_by_pack_name, struct pack_pair *, a, b)
  {
-   struct pack_pair *a = (struct pack_pair *)_a;
-   struct pack_pair *b = (struct pack_pair *)_b;
return strcmp(a->pack_name, b->pack_name);
  }
  
@@ -438,7 +436,7 @@ static void sort_packs_by_name(char **pack_names, uint32_t nr_packs, uint32_t *p

pairs[i].pack_name = pack_names[i];
}
  
-	QSORT(pairs, nr_packs, pack_pair_compare);

+   sort_by_pack_name(pairs, nr_packs);


I like this "sort_by_" convention..

  
  	for (i = 0; i < nr_packs; i++) {

pack_names[i] = pairs[i].pack_name;
@@ -455,10 +453,8 @@ struct pack_midx_entry {
uint64_t offset;
  };
  
-static int midx_oid_compare(const void *_a, const void *_b)

+DEFINE_SORT(static, sort_midx, struct pack_midx_entry *, a, b)
  {
-   const struct pack_midx_entry *a = (const struct pack_midx_entry *)_a;
-   const struct pack_midx_entry *b = (const struct pack_midx_entry *)_b;
int cmp = oidcmp(&a->oid, &b->oid);
  
  	if (cmp)

@@ -573,7 +569,7 @@ static struct pack_midx_entry *get_sorted_entries(struct 
multi_pack_index *m,
}
}
  
-		QSORT(entries_by_fanout, nr_fanout, midx_oid_compare);

+  

Re: [BUG] gitignore documentation inconsistent with actual behaviour

2018-10-15 Thread Duy Nguyen
On Mon, Oct 15, 2018 at 12:57 AM Junio C Hamano  wrote:
>
> Duy Nguyen  writes:
>
> >> Our matching function comes from rsync originally, whose manpage says:
> >>
> >> use ’**’ to match anything, including slashes.
> >>
> >> I believe this is accurate as far as the implementation goes.
> >
> > No. "**" semantics is not the same as from rsync. The three cases
> > "**/", "/**/" and "/**" were requested by Junio if I remember
> > correctly. You can search the mail archive for more information.
>
> Perhaps spelling the rules out would be more beneficial for the
> purpose of this thread?  I do not recall what I requested, but let

OK I gave you too much credit. You pointed out semantics problem with
rsync "**" [1] and gently pushed me to implement a safer subset ;-)

[1] https://public-inbox.org/git/7vbogj5sji@alter.siamese.dyndns.org/

> me throw out my guesses (i.e. what I would have wished if I were
> making a request to implement something) to keep the thread alive,
> you can correct me, and people can take it from there to update the
> docs ;-)
>
> A double-asterisk, both of whose ends are adjacent to a
> directory boundary (i.e. the beginning of the pattern, the end
> of the pattern or a slash) macthes 0 or more levels of
> directories.  e.g. **/a/b would match a/b, x/a/b, x/y/a/b, but
> not z-a/b.  a/**/b would match a/b, a/x/b, but not a/z-b or
> a-z-b.
>
> What a double-asterisk that does not sit on a directory boundary,
> e.g. "a**b", "a**/b", "a/**b", or "**a/b", matches, as far as I am
> concerned, is undefined, meaning that (1) I do not care all that
> much what the code actually do when a pattern like that is given as
> long as it does not segfault, and (2) I do not think I would mind
> changing the behaviour as a "bugfix", if their current behaviour
> does not make sense and we can come up with a saner alternative.

I think the document describes more or less what you wrote above in
the indented paragraph (but maybe less clear, patches are of course
welcome). It's the last paragraph that is problematic. It right now
says "invalid" which could be interpreted as "bad pattern, rejected"
but in fact we accept these patterns and treat the "**" as regular "*"s.

There are not many alternatives we could go with, though. Erroring out could
really flood stderr because we match these patterns zillions of
times, and traditionally fnmatch gracefully accepts bad patterns, trying
to make the best out of them. So keeping the undefined "**" cases behaving as
two "*" sounds good enough.

But of course I'm open for saner alternatives if people find any.
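For what it's worth, the documented cases from Junio's paragraph above could be
written down as a quick probe like the one below. This is a hypothetical
snippet, assuming these patterns keep going through wildmatch() with
WM_PATHNAME the way dir.c routes them today; it is not a real test helper in
t/.

	#include "cache.h"
	#include "wildmatch.h"

	static void probe(const char *pattern, const char *path)
	{
		int ret = wildmatch(pattern, path, WM_PATHNAME);
		printf("%s vs %s: %s\n", pattern, path,
		       ret == WM_MATCH ? "match" : "no match");
	}

	int cmd_main(int argc, const char **argv)
	{
		probe("**/a/b", "a/b");     /* expect: match */
		probe("**/a/b", "x/y/a/b"); /* expect: match */
		probe("**/a/b", "z-a/b");   /* expect: no match */
		probe("a/**/b", "a/x/b");   /* expect: match */
		probe("a/**/b", "a/z-b");   /* expect: no match */
		return 0;
	}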
-- 
Duy


Re: [PATCH v2 13/13] commit-graph: specify OID version for SHA-256

2018-10-15 Thread Derrick Stolee

On 10/14/2018 10:19 PM, brian m. carlson wrote:

Since the commit-graph code wants to serialize the hash algorithm into
the data store, specify a version number for each supported algorithm.
Note that we don't use the values of the constants themselves, as they
are internal and could change in the future.

Signed-off-by: brian m. carlson 
---
  commit-graph.c | 9 -
  1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/commit-graph.c b/commit-graph.c
index 7a28fbb03f..e587c21bb6 100644
--- a/commit-graph.c
+++ b/commit-graph.c
@@ -45,7 +45,14 @@ char *get_commit_graph_filename(const char *obj_dir)
  
  static uint8_t oid_version(void)

  {
-   return 1;
+   switch (hash_algo_by_ptr(the_hash_algo)) {
+   case GIT_HASH_SHA1:
+   return 1;
+   case GIT_HASH_SHA256:
+   return 2;
+   default:
+   BUG("unknown hash algorithm");
+   }
  }
  
  static struct commit_graph *alloc_commit_graph(void)

Simple and easy!

Thanks,
-Stolee


Re: [PATCH v2 09/13] commit-graph: convert to using the_hash_algo

2018-10-15 Thread Derrick Stolee

On 10/14/2018 10:18 PM, brian m. carlson wrote:

Instead of using hard-coded constants for object sizes, use
the_hash_algo to look them up.  In addition, use a function call to look
up the object ID version and produce the correct value.
This looks good and I can see already how this new organization will 
help make the hash size more flexible.


Thanks!
-Stolee



Re: [PATCH 0/9] Offer to run CI/PR builds in Visual Studio Team Services

2018-10-15 Thread Ævar Arnfjörð Bjarmason


On Mon, Sep 03 2018, Johannes Schindelin via GitGitGadget wrote:

> As a special treat, this patch series adds the ability to present the
> outcome of Git's test suite as JUnit-style .xml files. This allows the VSTS
> build to present fun diagrams, trends, and makes it a lot easier to drill
> down to test failures than before. See for example
> https://git.visualstudio.com/git/_build/results?buildId=113=ms.vss-test-web.test-result-details
> [https://git.visualstudio.com/git/_build/results?buildId=113=ms.vss-test-web.test-result-details]
> (you can click on the label of the failed test, and then see the detailed
> output in the right pane).
>
> This patch series took way more time than I had originally planned, but I
> think that in particular the advanced display of the test results was worth
> it. Please let me know what you think about this.

I have not reviewed this in any detail, but incorporating this in some
form or other seems like a no-brainer to me.

If we have "free" (from the perspective of the project) CPU being
offered by various CI setups let's use it.

If people don't like it they don't have to look at those CI runs, and
the burden of carrying another CI in git.git is minimal, and actually
this patch series makes it much easier to incorporate other CI's later,
since you took care of moving the Travis-specific bits out.

As an aside I poked Junio via private mail in late August to see if he'd
be interested in pushing to gitlab.com/git/git.git too as part of his
normal push-outs. One neat thing that would buy us is the ability to
have a .gitlab-ci.yml in git.git and use their CI implementation.


Re: [PATCH v2 09/13] git-p4: use `test_atexit` to kill the daemon

2018-10-15 Thread Johannes Schindelin
Hi Luke,

On Mon, 15 Oct 2018, Luke Diamand wrote:

> On Mon, 15 Oct 2018 at 11:12, Johannes Schindelin via GitGitGadget
>  wrote:
> >
> > From: Johannes Schindelin 
> >
> > This should be more reliable than the current method, and prepares the
> > test suite for a consistent way to clean up before re-running the tests
> > with different options.
> >
> 
> I'm finding that it's leaving p4d processes lying around.

That's a bummer!

> e.g.
> 
> $ ./t9820-git-p4-editor-handling.sh
> 
> $ ./t9820-git-p4-editor-handling.sh
> 

Since I do not currently have a setup with p4d installed, can you run that
with `sh -x ...` and see whether this code path is hit?

 test_done () {
GIT_EXIT_OK=t

+   test -n "$immediate" || test_atexit_handler
+
if test -n "$write_junit_xml" && test -n "$junit_xml_path"
then

> And also
> 
> $ ./t9800-git-p4-basic.sh
> 
> Ctrl-C

Oh, you're right. I think I need to do something in this line:

trap 'exit $?' INT

in t/test-lib.sh, something like

trap 'exit_code=$?; test_atexit_handler; exit $exit_code' INT

would you agree? (And: could you test?)

Thanks,
Dscho

> $ ./t9800-git-p4-basic.sh
> 
> 
> $ ps | grep p4d
21392 pts/1    00:00:00 p4d  <
> 
> 
> Luke
> 


Re: [PATCH 2/4] merge-recursive: increase marker length with depth of recursion

2018-10-15 Thread Elijah Newren
On Sun, Oct 14, 2018 at 10:12 PM Junio C Hamano  wrote:
>
> Elijah Newren  writes:
>
> > When using merge.conflictstyle=diff3 to have the "base version" be shown
> > in conflicts, there is the possibility that the base version itself has
> > conflicts in it.  This comes about when there are more than one merge
> > base, and the merging of those merge bases produces a conflict.
> > Since this process applies recursively, it is possible to have conflict
> > markers nested at an arbitrary depth; to be able to differentiate the
> > conflict markers from different nestings, we make them all of different
> > lengths.
>
> I know it is possible that the common ancestor part that is enclosed
> by the outermost makers can have arbitrary conflicts, and they can
> be even recursive conflicts.  But I fail to see why it is a problem.
> Perhaps that is because I am not particularly good at resolving
> merge conflicts, but as long as the common part of the outermost
> merge is identifyable, would that really matter?  What I would do
> while looking at common ancestor part with conflicts (not even a
> recursive one) is just to ignore it, so...
>
> Not that I strongly oppose to incrementing the marker length at
> every level.  I do not think it would hurt, but I just do not see
> how it would help.

Fair enough.  The real motivation for these changes was the
modification to rename/rename(2to1) conflicts (and rename/add
conflicts) to behave like add/add conflicts -- that change means we
can have nested conflict markers even without invoking the recursive
part of the recursive machinery.  So, I needed a way to increase the
length of the conflict markers besides just checking
opts->virtual_ancestor.  Just using a fixed extra indent seemed
problematic, because if I also had to worry about even one
virtual_ancestor, then I was already dealing with the possibility of
triply nested conflict markers and only one of them from a virtual
merge base.  See t6036 in
https://public-inbox.org/git/20181014020537.17991-3-new...@gmail.com/.

However, that series was long enough, so to try to simplify review I
split as much as I could out of it.  That resulted, among other
things, in me submitting this marker nesting change as part of this
series using a more limited rationale.

Would you like me to edit the commit message to include this more
difficult case?


Re: [PATCH v2 10/13] Add a base implementation of SHA-256 support

2018-10-15 Thread Duy Nguyen
 On Mon, Oct 15, 2018 at 4:23 AM brian m. carlson
 wrote:
>
> SHA-1 is weak and we need to transition to a new hash function.  For
> some time, we have referred to this new function as NewHash.  Recently,
> we decided to pick SHA-256 as NewHash.
>
> Add a basic implementation of SHA-256 based off libtomcrypt, which is in
> the public domain.  Optimize it and restructure it to meet our coding
> standards.  Place it in a directory called "sha256" where it and any
> future implementations can live so as to avoid a proliferation of
> implementation directories.
>
> Wire up SHA-256 in the list of hash algorithms, and add a test that the
> algorithm works correctly.
>
> Note that with this patch, it is still not possible to switch to using
> SHA-256 in Git.  Additional patches are needed to prepare the code to
> handle a larger hash algorithm and further test fixes are needed.

At some point I assume SHA-256 will become functional and be part of a
git release without all file formats updated to support multiple
hashes. Should we somehow discourage the user from using it because it
will break when all file formats are finally updated?

The simplest way is to just not register "sha256" in hash_algos unless
some developer flag is set. Or rename sha256 to sha256-experimental or
something to make it more obvious (but then we may need to fix up the
test suite after renaming it back to sha256, not great)
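To make the first idea a bit more concrete, a rough sketch (GIT_TEST_NEWHASH is
an invented flag name, and the real patch might well gate this somewhere else
entirely):

	int hash_algo_by_name(const char *name)
	{
		int i;

		if (!name)
			return GIT_HASH_UNKNOWN;
		for (i = 1; i < GIT_HASH_NALGOS; i++) {
			/* hide sha256 unless the developer flag is set */
			if (i == GIT_HASH_SHA256 &&
			    !git_env_bool("GIT_TEST_NEWHASH", 0))
				continue;
			if (!strcmp(name, hash_algos[i].name))
				return i;
		}
		return GIT_HASH_UNKNOWN;
	}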
-- 
Duy


Re: [PATCH v2 00/13] Offer to run CI/PR builds in Azure Pipelines

2018-10-15 Thread Johannes Schindelin
Hi Taylor,

On Mon, 15 Oct 2018, Taylor Blau wrote:

> Thanks for putting this together, and offering to build Git on Azure
> Pipelines. I haven't followed v1 of this series very closely, so please
> excuse me if my comments have already been addressed, and I missed them
> in a skim of the last revision.
> 
> On Mon, Oct 15, 2018 at 03:11:57AM -0700, Johannes Schindelin via 
> GitGitGadget wrote:
> > It is also an invaluable tool for contributors who can validate their code
> > contributions via PRs on GitHub, e.g. to verify that their tests do actually
> > run on macOS (i.e. with the BSD family of Unix tools instead of the GNU
> > one).
> 
> Agree.
> 
> > The one sad part about this is the Windows support. Travis lacks it, and we
> > work around that by using Azure Pipelines (the CI part of Azure DevOps,
> > formerly known as Visual Studio Team Services) indirectly: one phase in
> > Travis would trigger a build, wait for its log, and then paste that log.
> 
> I wonder if Travis' recent announcement [1] affects this at all.

:-)

It did not escape my notice that after years and years of trying to get
*anybody* at Travis to listen to my offers to help get this started, the
announcement of Azure Pipelines for OSS seemed to finally do the trick
(they still don't bother to talk to me, of course).

And to answer your question without a question mark: I do not really think
that the Travis announcement affects this here patch series: I have a ton
of good experience with Azure Pipelines, use it in Git for Windows for
ages, and I am finally able to have it in core Git, too. So I want it, and
I spent a lot of time getting there, and I think it probably won't hurt
core Git at all (besides, it seems that at least some of the phases are a
bit faster on Azure Pipelines than Travis).

Another really good reason for me to do this is that I can prod the Azure
Pipelines team directly. And I even get an answer, usually within minutes.
Which is a lot faster than the Travis team answers my questions, which
is... not yet? (I tried to get in contact with them in late 2015 or early
2016, and I tried again a year later, and then a couple of months later,
and I have yet to hear back.)

Also, I am not quite sure about the timeouts on Travis, but at least
AppVeyor had rather short timeouts: the Windows build (due to our
extensive use of Unix shell scripting in our test suite) takes 1h40m
currently, and AppVeyor times out after 20 or 30 minutes. I could imagine
that Travis times out after the same time, or maybe 60 minutes, which
would still be too short. On Azure Pipelines, the maximum timeout (which
can be configured via the azure-pipelines.yml file) is four hours IIRC.
Plenty enough even for our test suite on Windows.

> To summarize [1], Travis is offering an early version of adding Windows
> to their list of supported builder operations systems. This brings the
> list to macOS, Linux, and Windows, which I think satisfies what we would
> like to regularly build git.git on.

Honestly, I would love to have also FreeBSD and other platforms being
tested. And with Azure Pipelines, I can make that happen (eventually), by
adding another pool of VMs (given that I have a free $150/month Azure
subscription, I'd use Azure VMs, of course). As long as a platform can run
.NET Core software, it can run Azure Pipelines agents.

With Travis, I don't think I can add private agent pools.

> Would we like to abandon Travis as our main CI service for upstream
> git.git, and build on Azure Pipelines only? If so, I think that this is
> an OK way to go, but I think that I would be opposed to having more than
> one build system. (FWIW, we tend to _three_ for Git LFS, and it can be a
> hassle at times).

This question of abandoning Travis in favor of Azure Pipelines is a bit of
a hornets' nest, as I really, really only want to bring the goodness of
Azure Pipelines to git.git, and I am *clearly* biased, as I work at
Microsoft.

Which is the reason why I did not even hint at it in the cover letter, let
alone included a patch to make it so.

My patch series is purely about adding support for running CI/PR builds of
https://github.com/git/git via Azure Pipelines.

> I see some benefit to sticking with Travis, since we already have a
> build configuration that works there. But, you've done the work to
> "port" that build configuration over to Azure, so perhaps the point is
> moot.

It is not so much a port, as an attempt to generalize our ci/* files.

> > As Git's Windows builds (and tests!) take quite a bit of time, Travis often
> > timed out, or somehow the trigger did not work, and for security reasons
> > (the Windows builds are performed in a private pool of containers), the
> > Windows builds are completely disabled for Pull Requests on GitHub.
> 
> This would be a concession of [1], in my mind: is it possible to run the
> tests on Windows in a time such that Travis will not time out?

To be honest, I spent such a lot of time to get things to work on 

Re: [PATCH 3/3] mingw: use domain information for default email

2018-10-15 Thread Eric Sunshine
On Mon, Oct 15, 2018 at 5:47 AM Johannes Schindelin via GitGitGadget
 wrote:
> When a user is registered in a Windows domain, it is really easy to
> obtain the email address. So let's do that.
> [...]
> Signed-off-by: Johannes Schindelin 
> ---
> diff --git a/compat/mingw.c b/compat/mingw.c
> @@ -1826,6 +1826,11 @@ static char *get_extended_user_info(enum 
> EXTENDED_NAME_FORMAT type)
> +char *mingw_query_user_email(void)
> +{
> +   return get_extended_user_info(NameUserPrincipal);
> +}
> diff --git a/ident.c b/ident.c
> @@ -168,6 +168,9 @@ const char *ident_default_email(void)
> +   } else if ((email = query_user_email()) && email[0]) {
> +   strbuf_addstr(_default_email, email);
> +   free((char *)email);

If query_user_email(), which calls get_extended_user_info(), returns
NULL, then we do nothing (and don't worry about freeing the result).
However, if query_user_email() returns a zero-length allocated string,
then this conditional will leak that string (due to the email[0]
check). From patch 2/3, we see in get_extended_user_info():

+static char *get_extended_user_info(enum EXTENDED_NAME_FORMAT type)
+{
+   if (GetUserNameExW(type, wbuffer, &len)) {
+   char *converted = xmalloc((len *= 3));
+   if (xwcstoutf(converted, wbuffer, len) >= 0)
+   return converted;

that it may possibly return a zero-length string (due to the ">=0").
Do we care about this potential leak (or is GetUserNameExW()
guaranteed never to return such a string)?


Re: [PATCH 0/4] Bloom filter experiment

2018-10-15 Thread Derrick Stolee

On 10/9/2018 3:34 PM, SZEDER Gábor wrote:

To keep the ball rolling, here is my proof of concept in a somewhat
cleaned-up form, with still plenty of rough edges.


Peff, Szeder, and Jonathan,

Thanks for giving me the kick in the pants to finally write a proof of 
concept for my personal take on how this should work. My implementation 
borrows things from both Szeder and Jonathan's series. You can find my 
commits for all of the versions on GitHub (it's a bit too messy to share 
as a patch series right now, I think):


Repo: https://github.com/derrickstolee/git
Branches: bloom/* (includes bloom/stolee, bloom/peff, bloom/szeder, and 
bloom/tan for the respective implementations, and bloom/base as the 
common ancestor)


My implementation uses the following scheme:

1. Bloom filters are computed and stored on a commit-by-commit basis.

2. The filters are sized according to the number of changes in each 
commit, with a minimum of one 64-bit word.


3. The filters are stored in the commit-graph using two new optional 
chunks: one stores a single 32-bit integer for each commit that provides 
the end of its Bloom filter in the second "data" chunk. The data chunk 
also stores the magic constants (hash version, num hash keys, and num 
bits per entry).


4. We fill the Bloom filters as (const char *data, int len) pairs as 
"struct bloom_filter"s in a commit slab.


5. In order to evaluate containment, we need the struct bloom_filter, 
but also struct bloom_settings (stores the magic constants in one 
place), and struct bloom_key (stores the _k_ hash values). This allows 
us to hash a path once and test the same path against many Bloom filters.


6. When we compute the Bloom filters, we don't store a filter for 
commits whose first-parent diff has more than 512 paths.


7. When we compute the commit-graph, we can re-use the pre-existing 
filters without needing to recompute diffs. (Caveat: the current 
implementation will re-compute diffs for the commits with diffs that 
were too large.)


You can build the Bloom filters in my implementation this way:

GIT_TEST_BLOOM_FILTERS=1 ./git commit-graph write --reachable


You can play around with it like this:

   $ GIT_USE_POC_BLOOM_FILTER=$((8*1024*1024*8)) git commit-graph write
   Computing commit graph generation numbers: 100% (52801/52801), done.
   Computing bloom filter: 100% (52801/52801), done.
   # Yeah, I even added progress indicator! :)
   $ GIT_TRACE_BLOOM_FILTER=2 GIT_USE_POC_BLOOM_FILTER=y git rev-list --count 
--full-history HEAD -- t/valgrind/valgrind.sh
   886
   20:40:24.783699 revision.c:486  bloom filter total queries: 66095 
definitely not: 64953 maybe: 1142 false positives: 256 fp ratio: 0.003873
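
A quick way to sanity-check that the filter only skips work and never
changes results is to run the same query with and without it (a sketch
using the environment variable shown above; the path is just an example):

path=t/valgrind/valgrind.sh
# count with the Bloom filter enabled
with=$(GIT_USE_POC_BLOOM_FILTER=y git rev-list --count --full-history HEAD -- "$path")
# count with plain tree-diff pathspec limiting
without=$(git rev-list --count --full-history HEAD -- "$path")
test "$with" = "$without" || echo "mismatch: $with vs $without"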


Jonathan used this same test, so will I. Here is a summary table:

| Implementation | Queries | Maybe | FP # | FP %  |
||-|---|--|---|
| Szeder | 66095   | 1142  | 256  | 0.38% |
| Jonathan   | 66459   | 107   | 89   | 0.16% |
| Stolee | 53025   | 492   | 479  | 0.90% |

(Note that we must have used different starting points, which is why my 
"Queries" is so much smaller.)


The increase in false-positive percentage is expected in my 
implementation. I'm using the correct filter sizes to hit a <1% FP 
ratio. This could be lowered by changing the settings, and the size 
would dynamically grow. For my Git repo (which contains 
git-for-windows/git and microsoft/git) this implementation grows the 
commit-graph file from 5.8 MB to 7.3 MB (1.5 MB total, compared to 
Szeder's 8MB filter). For 105,260 commits, that rounds out to less than 
20 bytes per commit (compared to Jonathan's 256 bytes per commit).


Related stats for my Linux repo: 781,756 commits, commit-graph grows 
from 43.8 to 55.6 MB (~12 MB additional, ~16 bytes per commit).


I haven't done a side-by-side performance test for these 
implementations, but it would be interesting to do so.


Despite writing a lot of code in a short amount of time, there is a lot 
of work to be done before this is submittable:


1. There are three different environment variables right now. It would 
be better to have one GIT_TEST_ variable and rely on existing tracing 
for logs (trace2 values would be good here).


2. We need config values for writing and consuming bloom filters, but 
also to override the default settings.


3. My bloom.c/bloom.h is too coupled to the commit-graph. I want to 
harden that interface so that Bloom filters live as their own thing, 
while still letting the commit-graph load a Bloom filter from the file 
instead of from the slab.


4. Tests, tests, and more tests.

We'll see how much time I have to do this polish, but I think the 
benefit is proven.


Thanks,
-Stolee


Re: [PATCH 2/3] getpwuid(mingw): provide a better default for the user name

2018-10-15 Thread Eric Sunshine
On Mon, Oct 15, 2018 at 5:47 AM Johannes Schindelin via GitGitGadget
 wrote:
> We do have the excellent GetUserInfoEx() function to obtain more
> detailed information of the current user (if the user is part of a
> Windows domain); Let's use it.
> [...]
> Signed-off-by: Johannes Schindelin 
> ---
> diff --git a/compat/mingw.c b/compat/mingw.c
> @@ -1798,6 +1799,33 @@ int mingw_getpagesize(void)
> +static char *get_extended_user_info(enum EXTENDED_NAME_FORMAT type)
> +{
> +   DECLARE_PROC_ADDR(secur32.dll, BOOL, GetUserNameExW,
> +   enum EXTENDED_NAME_FORMAT, LPCWSTR, PULONG);
> +   static wchar_t wbuffer[1024];

Does this need to be static? It's not being returned to the caller.

> +   len = ARRAY_SIZE(wbuffer);
> +   if (GetUserNameExW(type, wbuffer, )) {
> +   char *converted = xmalloc((len *= 3));
> +   if (xwcstoutf(converted, wbuffer, len) >= 0)
> +   return converted;
> +   free(converted);
> +   }

If xwcstoutf() fails, 'converted' is freed; otherwise, the allocated
'converted' is stored in the caller's statically held 'passwd' struct.
Okay.

> +   return NULL;
> +}


Re: [PATCH 1/3] getpwuid(mingw): initialize the structure only once

2018-10-15 Thread Eric Sunshine
On Mon, Oct 15, 2018 at 5:47 AM Johannes Schindelin via GitGitGadget
 wrote:
> Signed-off-by: Johannes Schindelin 
> ---
> diff --git a/compat/mingw.c b/compat/mingw.c
> @@ -1800,16 +1800,27 @@ int mingw_getpagesize(void)
>  struct passwd *getpwuid(int uid)
>  {
> +   static unsigned initialized;
> static char user_name[100];
> -   static struct passwd p;
> +   static struct passwd *p;
>
> +   if (initialized)
> +   return p;
> +
> +   len = sizeof(user_name);
> +   if (!GetUserName(user_name, )) {
> +   initialized = 1;
> return NULL;
> +   }

If GetUserName() fails, that failure is recorded (as "initialized=1"
and 'p' is still NULL), so subsequent invocations just return NULL
without doing any more work. Makes sense.

> +   p = xmalloc(sizeof(*p));
> +   p->pw_name = user_name;
> +   p->pw_gecos = "unknown";
> +   p->pw_dir = NULL;
> +
> +   initialized = 1;
> +   return p;
>  }


Re: [PATCH v2 00/13] Offer to run CI/PR builds in Azure Pipelines

2018-10-15 Thread Taylor Blau
Hi Johannes,

Thanks for putting this together, and offering to build Git on Azure
Pipelines. I haven't followed v1 of this series very closely, so please
excuse me if my comments have already been addressed, and I missed them
in a skim of the last revision.

On Mon, Oct 15, 2018 at 03:11:57AM -0700, Johannes Schindelin via GitGitGadget 
wrote:
> It is also an invaluable tool for contributors who can validate their code
> contributions via PRs on GitHub, e.g. to verify that their tests do actually
> run on macOS (i.e. with the BSD family of Unix tools instead of the GNU
> one).

Agree.

> The one sad part about this is the Windows support. Travis lacks it, and we
> work around that by using Azure Pipelines (the CI part of Azure DevOps,
> formerly known as Visual Studio Team Services) indirectly: one phase in
> Travis would trigger a build, wait for its log, and then paste that log.

I wonder if Travis' recent announcement [1] affects this at all. To
summarize [1], Travis is offering an early version of adding Windows to
their list of supported build operating systems. This brings the list
to macOS, Linux, and Windows, which I think satisfies what we would
like to regularly build git.git on.

Would we like to abandon Travis as our main CI service for upstream
git.git, and build on Azure Pipelines only? If so, I think that this is
an OK way to go, but I think that I would be opposed to having more than
one build system. (FWIW, we tend to _three_ for Git LFS, and it can be a
hassle at times).

I see some benefit to sticking with Travis, since we already have a
build configuration that works there. But, you've done the work to
"port" that build configuration over to Azure, so perhaps the point is
moot.

> As Git's Windows builds (and tests!) take quite a bit of time, Travis often
> timed out, or somehow the trigger did not work, and for security reasons
> (the Windows builds are performed in a private pool of containers), the
> Windows builds are completely disabled for Pull Requests on GitHub.

This would be a concession of [1], in my mind: is it possible to run the
tests on Windows in a time such that Travis will not time out?

> As a special treat, this patch series adds the ability to present the
> outcome of Git's test suite as JUnit-style .xml files. This allows the Azure
> Pipelines build to present fun diagrams, trends, and makes it a lot easier
> to drill down to test failures than before. See for example
> https://dev.azure.com/git/git/_build/results?buildId=113=ms.vss-test-web.test-result-details
> [https://dev.azure.com/git/git/_build/results?buildId=113=ms.vss-test-web.test-result-details]
> (you can click on the label of the failed test, and then see the detailed
> output in the right pane).

That's pretty cool. Travis doesn't support this (to the best of my
knowledge).

Thanks,
Taylor

[1]: https://blog.travis-ci.com/2018-10-11-windows-early-release


[PATCH] builtin/branch.c: remove useless branch_get

2018-10-15 Thread Tao Qingyun
Signed-off-by: Tao Qingyun 
---
 builtin/branch.c | 5 -
 1 file changed, 5 deletions(-)

diff --git a/builtin/branch.c b/builtin/branch.c
index c396c41533..2367703034 100644
--- a/builtin/branch.c
+++ b/builtin/branch.c
@@ -809,11 +809,6 @@ int cmd_branch(int argc, const char **argv, const char 
*prefix)
git_config_set_multivar(buf.buf, NULL, NULL, 1);
strbuf_release();
} else if (argc > 0 && argc <= 2) {
-   struct branch *branch = branch_get(argv[0]);
-
-   if (!branch)
-   die(_("no such branch '%s'"), argv[0]);
-
if (filter.kind != FILTER_REFS_BRANCHES)
die(_("-a and -r options to 'git branch' do not make 
sense with a branch name"));
 
-- 
2.19.0



Re: [PATCH 2/3] http: add support for disabling SSL revocation checks in cURL

2018-10-15 Thread Eric Sunshine
On Mon, Oct 15, 2018 at 6:14 AM Brendan Forster via GitGitGadget
 wrote:
> This config value is only used if http.sslBackend is set to "schannel",
> which forces cURL to use the Windows Certificate Store when validating
> server certificates associated with a remote server.
>
> This is only supported in cURL 7.44 or later.
> [...]
> Signed-off-by: Brendan Forster 
> ---
> diff --git a/http.c b/http.c
> @@ -811,6 +818,16 @@ static CURL *get_curl_handle(void)
> +   if (http_ssl_backend && !strcmp("schannel", http_ssl_backend) &&
> +   !http_schannel_check_revoke) {
> +#if LIBCURL_VERSION_NUM >= 0x072c00
> +   curl_easy_setopt(result, CURLOPT_SSL_OPTIONS, 
> CURLSSLOPT_NO_REVOKE);
> +#else
> +   warning("CURLSSLOPT_NO_REVOKE not applied to curl SSL options 
> because\n"
> +   "your curl version is too old (>= 7.44.0)");

This message is confusing. If your curl is too old, shouldn't the ">=" be a "<"?

> +#endif
> +   }


Re: [PATCH 1/3] http: add support for selecting SSL backends at runtime

2018-10-15 Thread Eric Sunshine
On Mon, Oct 15, 2018 at 6:14 AM Johannes Schindelin via GitGitGadget
 wrote:
> This patch adds the Git side of that feature: by setting http.sslBackend
> to "openssl" or "schannel", Git for Windows can now choose the SSL
> backend at runtime.
> [...]
> Signed-off-by: Johannes Schindelin 
> ---
> diff --git a/http.c b/http.c
> @@ -995,6 +1003,33 @@ void http_init(struct remote *remote, const char *url, 
> int proactive_auth)
> +#if LIBCURL_VERSION_NUM >= 0x073800
> +   if (http_ssl_backend) {
> +   const curl_ssl_backend **backends;
> +   struct strbuf buf = STRBUF_INIT;
> +   int i;
> +
> +   switch (curl_global_sslset(-1, http_ssl_backend, )) {
> +   case CURLSSLSET_UNKNOWN_BACKEND:
> +   strbuf_addf(, _("Unsupported SSL backend '%s'. "
> +   "Supported SSL backends:"),
> +   http_ssl_backend);
> +   for (i = 0; backends[i]; i++)
> +   strbuf_addf(, "\n\t%s", 
> backends[i]->name);
> +   die("%s", buf.buf);

This is the only 'case' arm which utilizes 'strbuf buf', and it
die()s, so no leak despite a lack of strbuf_release(). Okay.

> +   case CURLSSLSET_NO_BACKENDS:
> +   die(_("Could not set SSL backend to '%s': "
> + "cURL was built without SSL backends"),
> +   http_ssl_backend);
> +   case CURLSSLSET_TOO_LATE:
> +   die(_("Could not set SSL backend to '%s': already 
> set"),
> +   http_ssl_backend);
> +   case CURLSSLSET_OK:
> +   break; /* Okay! */
> +   }
> +   }
> +#endif


Re: Git Test Coverage Report (Friday, Oct 12)

2018-10-15 Thread Derrick Stolee

On 10/15/2018 4:09 AM, Johannes Schindelin wrote:

Hi Stolee,

On Fri, 12 Oct 2018, Derrick Stolee wrote:


In an effort to ensure new code is reasonably covered by the test suite, we
now have contrib/coverage-diff.sh to combine the gcov output from 'make
coverage-test ; make coverage-report' with the output from 'git diff A B' to
discover _new_ lines of code that are not covered.

Thanks for doing this. I do notice, though, that there are a few mentions
of BUG() lines, e.g.


0af129b2ed builtin/rebase--interactive2.c 267) BUG("invalid command '%d'",
command);

I do not think that we will ever want to ensure that all of these lines
get code coverage ;-) Maybe it would be possible to exclude those lines
from the analysis?
I'll incorporate that into my build script, but leave it out of 
contrib/coverage-diff.sh in case people really want to see those lines 
when they inspect a single topic.


Thanks,
-Stolee
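
A sketch of that extra filtering step on top of the existing flow (the
coverage-diff.sh invocation below assumes the two-commit usage described
above; the revisions are only placeholders):

make coverage-test
make coverage-report
contrib/coverage-diff.sh v2.19.0 HEAD >new-uncovered.txt
# drop uncovered lines that are only BUG() calls, which tests are never
# expected to reach
grep -v 'BUG(' new-uncovered.txt >new-uncovered-without-bug.txt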


Ignored files being silently overwritten when switching branches

2018-10-15 Thread Per Lundberg
Hi,

Sorry if this question has been asked before; I skimmed through the list 
archives and the FAQ but couldn't immediately find it - please point me 
in the right direction if it has indeed been discussed before.

We were renaming some previously-included configuration files (foo.conf) 
in one of our repos, instead providing a "default" configuration 
(foo.conf.default) that can easily be copied over to foo.conf by 
individual developers. This all works fine, and the *.conf files are now 
added to the .gitignore list.

_However_, when switching back to our previous release branches (which 
include the foo.conf file in the tree), we have noticed that git 
silently overwrites the locally-modified foo.conf file with the upstream 
foo.conf file from that branch. When switching back to master, the 
file's contents are therefore permanently lost, which is a bit unfortunate.

I did a quick repro case here: https://github.com/perlun/git-test, and 
it seems easy to reproduce this behavior using the following steps (also 
documented in that git repo):

$ git init
$ touch foo.txt
$ nano foo.txt
$ git add foo.txt
$ git commit -m 'Add foo.txt'
[master (root-commit) 8ef05cb] Add foo.txt
  1 file changed, 1 insertion(+)
  create mode 100644 foo.txt
$ git checkout -b dev
Switched to a new branch 'dev'
$ git mv foo.txt foo.bar
$ git commit -m "Rename foo.txt -> foo.bar"
[dev 4c55c9b] Rename foo.txt -> foo.bar
  1 file changed, 0 insertions(+), 0 deletions(-)
  rename foo.txt => foo.bar (100%)
$ echo 'my local foo.txt' > foo.txt
$ echo foo.txt > .gitignore
$ git commit -m "Add .gitignore"
[dev 4c16acb] Add .gitignore
  1 file changed, 2 insertions(+)
  create mode 100644 .gitignore
$ git checkout master # This will silently overwrite the local foo.txt

So my question is: is this by design or should this be considered a bug 
in git? Of course, it depends largely on what .gitignore is being used 
for - if we are talking about files which can easily be regenerated 
(build artifacts, node_modules folders etc.) I can totally understand 
the current behavior, but when dealing with more sensitive & important 
content it's a bit inconvenient.


What I would have expected would be for git to complain, with this message:

error: The following untracked working tree files would be overwritten 
by checkout:
foo.txt
Please move or remove them before you switch branches.
Aborting

This is normally the message you get when a _non-ignored_ file is being 
overwritten. But apparently not so when an ignored file is being 
overwritten. If this can be tweaked in the local repo settings somehow, 
please let me know.
--
Best regards,
Per
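
For comparison, the expected refusal does kick in when the overwritten
file is untracked but *not* ignored -- a minimal sketch based on the
repro above (note that foo.txt is never added to .gitignore here):

$ git init demo && cd demo
$ echo original >foo.txt && git add foo.txt && git commit -m 'Add foo.txt'
$ git checkout -b dev
$ git mv foo.txt foo.bar && git commit -m 'Rename foo.txt -> foo.bar'
$ echo 'my local foo.txt' >foo.txt   # untracked and not ignored
$ git checkout master                # refuses: would overwrite foo.txt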


Case when I can not unstage the hunk

2018-10-15 Thread KES
Hi. 
Here is log:

git reset HEAD -p /home/kes/s/public/v2/js/contact_us.js
diff --git a/public/v2/js/contact_us.js b/public/v2/js/contact_us.js
index e05be6d0..d429d291 100644
--- a/public/v2/js/contact_us.js
+++ b/public/v2/js/contact_us.js
@@ -1,7 +1,19 @@
+
+
+function captchaProcess( form ) {
+  var id =  $(form).find( 'textarea[name=g-recaptcha-response]' )
+.attr( 'id' ).match( /\d+$/ );
+  if( id ){ id =  id[0] } else { id = 0 }
+  grecaptcha.reset( Number( id ) );
+  grecaptcha.execute( Number( id ) );
+  // When .execute is done this will trigger callback
+  // which is specified on g-recaptcha div element via data-callback attr
+}
+
+
 // FORM VALIDATION
-var validator;
 $(function(){
-  validator =  $('#contact_us-form').validate({
+  $('#contact_us-form').validate({
 rules: {
   name: {
 required: true,
Unstage this hunk [y,n,q,a,d,j,J,g,/,s,e,?]? s
Split into 3 hunks.
@@ -1 +1,14 @@
+
+
+function captchaProcess( form ) {
+  var id =  $(form).find( 'textarea[name=g-recaptcha-response]' )
+.attr( 'id' ).match( /\d+$/ );
+  if( id ){ id =  id[0] } else { id = 0 }
+  grecaptcha.reset( Number( id ) );
+  grecaptcha.execute( Number( id ) );
+  // When .execute is done this will trigger callback
+  // which is specified on g-recaptcha div element via data-callback attr
+}
+
+
 // FORM VALIDATION
Unstage this hunk [y,n,q,a,d,j,J,g,/,e,?]? n
@@ -1,3 +14,2 @@
 // FORM VALIDATION
-var validator;
 $(function(){
Unstage this hunk [y,n,q,a,d,K,j,J,g,/,e,?]? y
@@ -3,5 +15,5 @@
 $(function(){
-  validator =  $('#contact_us-form').validate({
+  $('#contact_us-form').validate({
 rules: {
   name: {
 required: true,
Unstage this hunk [y,n,q,a,d,K,j,J,g,/,e,?]? y
@@ -47,12 +59,7 @@ $(function(){
   $(element).removeClass("error_inp");
 },
 submitHandler: function(form) {
-  var id =  $(form).find( 'textarea[name=g-recaptcha-response]' )
-.attr( 'id' ).match( /\d+$/ );
-  if( id ){ id =  id[0] } else { id = 0 }
-  grecaptcha.execute( Number( id ) );
-  // When .execute is done this will trigger callback
-  // which is specified on g-recaptcha div element via data-callback attr
+  captchaProcess( form );
 }
   });
 });
Unstage this hunk [y,n,q,a,d,K,g,/,e,?]? n
error: patch failed: public/v2/js/contact_us.js:1
error: public/v2/js/contact_us.js: patch does not apply
diff --git a/public/v2/js/contact_us.js b/public/v2/js/contact_us.js
index e05be6d0..d429d291 100644
--- a/public/v2/js/contact_us.js
+++ b/public/v2/js/contact_us.js
@@ -1,7 +1,6 @@
 // FORM VALIDATION
-var validator;
 $(function(){
-  validator =  $('#contact_us-form').validate({
+  $('#contact_us-form').validate({
 rules: {
   name: {
 required: true,



DETAILS:

git diff -b -w --ignore-blank-lines /home/kes/s/public/v2/js/contact_us.js
diff --git a/public/v2/js/contact_us.js b/public/v2/js/contact_us.js
index d429d291..bd1f4ddd 100644
--- a/public/v2/js/contact_us.js
+++ b/public/v2/js/contact_us.js
@@ -1,10 +1,24 @@
 
+function show_validator_errors( xhr, validator ) {
+  if( xhr.status == 404 ) { return }
+
+  var res =  xhr.responseJSON;
+  // TODO: Display responseText if there is no responseJSON
+  if( !res ) { return };
+
+  var errors =  {};
+  // convert server response into validator format
+  for( var error in res.error.info ) {
+errors[error] =  res.error.info[ error ][ 1 ] || 'Error';
+  }
+  validator.showErrors( errors );
+}
+
 
 function captchaProcess( form ) {
   var id =  $(form).find( 'textarea[name=g-recaptcha-response]' )
 .attr( 'id' ).match( /\d+$/ );
   if( id ){ id =  id[0] } else { id = 0 }
-  grecaptcha.reset( Number( id ) );
   grecaptcha.execute( Number( id ) );
   // When .execute is done this will trigger callback
   // which is specified on g-recaptcha div element via data-callback attr








git diff -b -w --ignore-blank-lines --staged 
/home/kes/s/public/v2/js/contact_us.js
diff --git a/public/v2/js/contact_us.js b/public/v2/js/contact_us.js
index e05be6d0..d429d291 100644
--- a/public/v2/js/contact_us.js
+++ b/public/v2/js/contact_us.js
@@ -1,7 +1,19 @@
+
+
+function captchaProcess( form ) {
+  var id =  $(form).find( 'textarea[name=g-recaptcha-response]' )
+.attr( 'id' ).match( /\d+$/ );
+  if( id ){ id =  id[0] } else { id = 0 }
+  grecaptcha.reset( Number( id ) );
+  grecaptcha.execute( Number( id ) );
+  // When .execute is done this will trigger callback
+  // which is specified on g-recaptcha div element via data-callback attr
+}
+
+
 // FORM VALIDATION
-var validator;
 $(function(){
-  validator =  $('#contact_us-form').validate({
+  $('#contact_us-form').validate({
 rules: {
   name: {
 required: true,
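
When a split hunk fails to apply like this, one workaround (a sketch, not
a diagnosis of the underlying problem) is to unstage the whole file and
then re-stage only the wanted hunks:

$ git reset HEAD -- public/v2/js/contact_us.js
$ git add -p public/v2/js/contact_us.js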






stat displayed differently

2018-10-15 Thread KES
Hi.

At next log:

$ git prb
POST git-upload-pack (948 bytes)
remote: Counting objects: 20, done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 20 (delta 14), reused 0 (delta 0)
Unpacking objects: 100% (20/20), done.
From https://tracker.feel-safe.net/gitdev/main
   9eafff67..cf2226de  296_tos-> origin/296_tos
 = [up to date]297-unification-of-buttons -> 
origin/297-unification-of-buttons
 = [up to date]ToS-component  -> origin/ToS-component
 = [up to date]authorization_pg   -> origin/authorization_pg
 = [up to date]buy_dev-> origin/buy_dev
 = [up to date]content_negotiation_fixes  -> 
origin/content_negotiation_fixes
 = [up to date]dash_v2-> origin/dash_v2
 = [up to date]design_master  -> origin/design_master
 = [up to date]design_master_2-> origin/design_master_2
 = [up to date]fix_order_pg   -> origin/fix_order_pg
 = [up to date]master -> origin/master
 = [up to date]new_design_master  -> origin/new_design_master
 = [up to date]new_error_pg   -> origin/new_error_pg
 = [up to date]text-page-style-> origin/text-page-style
Created autostash: 43c2c613
HEAD is now at 9eafff67 Create validator variable
Changes from 9eafff67776fd53ff61bb58bdd4f5cdd166ef476 to 
cf2226deb3b130348504ae393bede9e371fcb5d1:
 design/dash/decomp/templates/modals/modal_success.html |   2 +-
 lib/Mojolicious/Plugin/Control/Modal.pm| 118 
++---
 public/main/css/modal.css  |  28 +
 templates/main/index.html.ep   |   4 ++
 4 files changed, 92 insertions(+), 60 deletions(-)
 create mode 100644 public/main/css/modal.css
Note: checking out 'cf2226deb3b130348504ae393bede9e371fcb5d1'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b 

HEAD is now at cf2226de Fix modal window for reset password
Rebasing (1/1)
 .../decomp/templates/modals/modal_success.html |   2 +-
 lib/Mojolicious/Plugin/Control/Modal.pm| 118 ++---
 public/main/css/modal.css  |  28 +
 templates/main/index.html.ep   |   4 +
 4 files changed, 92 insertions(+), 60 deletions(-)
Successfully rebased and updated refs/heads/296_tos.
Applied autostash.


We have two stat outputs, but they are displayed differently.


How can I force the stat output to be consistent between the two?

 lib/Mojolicious/Plugin/Control/Modal.pm| 118 
++---
VS 
 lib/Mojolicious/Plugin/Control/Modal.pm| 118 ++---
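
If the difference is just terminal-width detection (a stat written to a
terminal uses its full width, while one produced internally, e.g. by the
rebase, falls back to 80 columns), pinning the width makes the outputs
match -- a sketch, not a confirmed diagnosis:

# give --stat an explicit total width instead of the terminal width
$ git diff --stat=80 9eafff67..cf2226de
# or cap the graph part (the +/- bars) once and for all via config
$ git config diff.statGraphWidth 40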



[PATCHv2 2/3] git-p4: unshelve into refs/remotes/p4-unshelved, not refs/remotes/p4/unshelved

2018-10-15 Thread Luke Diamand
The branch detection code looks for branches under refs/remotes/p4/...
and can end up getting confused if there are unshelved changes in
there as well. This happens in the function p4BranchesInGit().

Instead, put the unshelved changes into refs/remotes/p4-unshelved/.

Signed-off-by: Luke Diamand 
---
 Documentation/git-p4.txt | 6 +++---
 git-p4.py| 3 ++-
 t/t9832-unshelve.sh  | 6 +++---
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/Documentation/git-p4.txt b/Documentation/git-p4.txt
index 41780a5aa9..6c0017e36e 100644
--- a/Documentation/git-p4.txt
+++ b/Documentation/git-p4.txt
@@ -174,7 +174,7 @@ $ git p4 submit --update-shelve 1234 --update-shelve 2345
 Unshelve
 
 Unshelving will take a shelved P4 changelist, and produce the equivalent git 
commit
-in the branch refs/remotes/p4/unshelved/.
+in the branch refs/remotes/p4-unshelved/.
 
 The git commit is created relative to the current origin revision (HEAD by 
default).
 If the shelved changelist's parent revisions differ, git-p4 will refuse to 
unshelve;
@@ -182,13 +182,13 @@ you need to be unshelving onto an equivalent tree.
 
 The origin revision can be changed with the "--origin" option.
 
-If the target branch in refs/remotes/p4/unshelved already exists, the old one 
will
+If the target branch in refs/remotes/p4-unshelved already exists, the old one 
will
 be renamed.
 
 
 $ git p4 sync
 $ git p4 unshelve 12345
-$ git show refs/remotes/p4/unshelved/12345
+$ git show p4-unshelved/12345
 
 $ git p4 unshelve 12345
 
diff --git a/git-p4.py b/git-p4.py
index 5701bad06a..76c18a22e9 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -3956,7 +3956,8 @@ def __init__(self):
 ]
 self.verbose = False
 self.noCommit = False
-self.destbranch = "refs/remotes/p4/unshelved"
+self.destbranch = "refs/remotes/p4-unshelved"
+self.origin = "p4/master"
 
 def renameBranch(self, branch_name):
 """ Rename the existing branch to branch_name.N
diff --git a/t/t9832-unshelve.sh b/t/t9832-unshelve.sh
index 48ec7679b8..c3d15ceea8 100755
--- a/t/t9832-unshelve.sh
+++ b/t/t9832-unshelve.sh
@@ -54,8 +54,8 @@ EOF
cd "$git" &&
change=$(last_shelved_change) &&
git p4 unshelve $change &&
-   git show refs/remotes/p4/unshelved/$change | grep -q "Further 
description" &&
-   git cherry-pick refs/remotes/p4/unshelved/$change &&
+   git show refs/remotes/p4-unshelved/$change | grep -q "Further 
description" &&
+   git cherry-pick refs/remotes/p4-unshelved/$change &&
test_path_is_file file2 &&
test_cmp file1 "$cli"/file1 &&
test_cmp file2 "$cli"/file2 &&
@@ -88,7 +88,7 @@ EOF
cd "$git" &&
change=$(last_shelved_change) &&
git p4 unshelve $change &&
-   git diff refs/remotes/p4/unshelved/$change.0 
refs/remotes/p4/unshelved/$change | grep -q file3
+   git diff refs/remotes/p4-unshelved/$change.0 
refs/remotes/p4-unshelved/$change | grep -q file3
)
 '
 
-- 
2.19.1.331.gae0ed827e6



[PATCHv2 0/3] git-p4: improved unshelving - small fixes

2018-10-15 Thread Luke Diamand
This is the same as my earlier patch other than fixing the
documentation to reflect the change to the destination branch,
as noticed by Junio.

Luke Diamand (3):
  git-p4: do not fail in verbose mode for missing 'fileSize' key
  git-p4: unshelve into refs/remotes/p4-unshelved, not
refs/remotes/p4/unshelved
  git-p4: fully support unshelving changelists

 Documentation/git-p4.txt | 10 ++---
 git-p4.py| 90 +++-
 t/t9832-unshelve.sh  | 75 ++---
 3 files changed, 117 insertions(+), 58 deletions(-)

-- 
2.19.1.331.gae0ed827e6



[PATCHv2 3/3] git-p4: fully support unshelving changelists

2018-10-15 Thread Luke Diamand
The previous git-p4 unshelve support would check for changes
in Perforce to the files being unshelved since the original
shelve, and would complain if any were found.

This was to ensure that the user wouldn't end up with both the
shelved change delta, and some deltas from other changes in their
git commit.

e.g. given fileA:
  the
  quick
  brown
  fox

  change1: s/the/The/   <- p4 shelve this change
  change2: s/fox/Fox/   <- p4 submit this change
  git p4 unshelve 1 <- FAIL

This change teaches the P4Unshelve class to always create a parent
commit which matches the P4 tree (for the files being unshelved) at
the point prior to the P4 shelve being created (which is reported
in the p4 description for a shelved changelist).

That then means git-p4 can always create a git commit matching the
P4 shelve that was originally created, without any extra deltas.

The user might still need to use the --origin option though - there
is no way for git-p4 to work out the versions of all of the other
*unchanged* files in the shelve, since this information is not recorded
by Perforce.

Additionally this fixes handling of shelved 'move' operations.

Signed-off-by: Luke Diamand 
---
 Documentation/git-p4.txt |  4 +-
 git-p4.py| 84 +++-
 t/t9832-unshelve.sh  | 69 ++---
 3 files changed, 106 insertions(+), 51 deletions(-)

diff --git a/Documentation/git-p4.txt b/Documentation/git-p4.txt
index 6c0017e36e..f0a0280954 100644
--- a/Documentation/git-p4.txt
+++ b/Documentation/git-p4.txt
@@ -177,8 +177,8 @@ Unshelving will take a shelved P4 changelist, and produce 
the equivalent git com
 in the branch refs/remotes/p4-unshelved/.
 
 The git commit is created relative to the current origin revision (HEAD by 
default).
-If the shelved changelist's parent revisions differ, git-p4 will refuse to 
unshelve;
-you need to be unshelving onto an equivalent tree.
+A parent commit is created based on the origin, and then the unshelve commit is
+created based on that.
 
 The origin revision can be changed with the "--origin" option.
 
diff --git a/git-p4.py b/git-p4.py
index 76c18a22e9..1998c3e141 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -1306,6 +1306,9 @@ def processContent(self, git_mode, relPath, contents):
 return LargeFileSystem.processContent(self, git_mode, relPath, 
contents)
 
 class Command:
+delete_actions = ( "delete", "move/delete", "purge" )
+add_actions = ( "add", "move/add" )
+
 def __init__(self):
 self.usage = "usage: %prog [options]"
 self.needsGit = True
@@ -2524,7 +2527,6 @@ def map_in_client(self, depot_path):
 return ""
 
 class P4Sync(Command, P4UserMap):
-delete_actions = ( "delete", "move/delete", "purge" )
 
 def __init__(self):
 Command.__init__(self)
@@ -2612,20 +2614,7 @@ def checkpoint(self):
 if self.verbose:
 print("checkpoint finished: " + out)
 
-def cmp_shelved(self, path, filerev, revision):
-""" Determine if a path at revision #filerev is the same as the file
-at revision @revision for a shelved changelist. If they don't 
match,
-unshelving won't be safe (we will get other changes mixed in).
-
-This is comparing the revision that the shelved changelist is 
*based* on, not
-the shelved changelist itself.
-"""
-ret = p4Cmd(["diff2", "{0}#{1}".format(path, filerev), 
"{0}@{1}".format(path, revision)])
-if verbose:
-print("p4 diff2 path %s filerev %s revision %s => %s" % (path, 
filerev, revision, ret))
-return ret["status"] == "identical"
-
-def extractFilesFromCommit(self, commit, shelved=False, shelved_cl = 0, 
origin_revision = 0):
+def extractFilesFromCommit(self, commit, shelved=False, shelved_cl = 0):
 self.cloneExclude = [re.sub(r"\.\.\.$", "", path)
  for path in self.cloneExclude]
 files = []
@@ -2650,17 +2639,6 @@ def extractFilesFromCommit(self, commit, shelved=False, 
shelved_cl = 0, origin_r
 file["type"] = commit["type%s" % fnum]
 if shelved:
 file["shelved_cl"] = int(shelved_cl)
-
-# For shelved changelists, check that the revision of each 
file that the
-# shelve was based on matches the revision that we are using 
for the
-# starting point for git-fast-import (self.initialParent). 
Otherwise
-# the resulting diff will contain deltas from multiple commits.
-
-if file["action"] != "add" and \
-not self.cmp_shelved(path, file["rev"], origin_revision):
-sys.exit("change {0} not based on {1} for {2}, cannot 
unshelve".format(
-commit["change"], self.initialParent, path))
-
 files.append(file)
 fnum = fnum + 1
 return files
@@ -3032,7 +3010,7 @@ 

[PATCHv2 1/3] git-p4: do not fail in verbose mode for missing 'fileSize' key

2018-10-15 Thread Luke Diamand
When deleting or moving a file, P4 sometimes doesn't report the file size.

The code handles this just fine, but the verbose logging crashes. Stop
this from happening.

There was some earlier discussion on the list about this:

https://public-inbox.org/git/xmqq1sqpp1vv@gitster.mtv.corp.google.com/

Signed-off-by: Luke Diamand 
---
 git-p4.py | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/git-p4.py b/git-p4.py
index 7fab255584..5701bad06a 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -2775,7 +2775,10 @@ def streamOneP4File(self, file, contents):
 relPath = self.stripRepoPath(file['depotFile'], self.branchPrefixes)
 relPath = self.encodeWithUTF8(relPath)
 if verbose:
-size = int(self.stream_file['fileSize'])
+if 'fileSize' in self.stream_file:
+size = int(self.stream_file['fileSize'])
+else:
+size = 0 # deleted files don't get a fileSize apparently
 sys.stdout.write('\r%s --> %s (%i MB)\n' % (file['depotFile'], 
relPath, size/1024/1024))
 sys.stdout.flush()
 
-- 
2.19.1.331.gae0ed827e6



Re: [PATCHv1 2/3] git-p4: unshelve into refs/remotes/p4-unshelved, not refs/remotes/p4/unshelved

2018-10-15 Thread Luke Diamand
On Fri, 12 Oct 2018 at 19:19, Luke Diamand  wrote:
>
> On Fri, 12 Oct 2018 at 14:45, Junio C Hamano  wrote:
> >
> > Luke Diamand  writes:
> >
> > > The branch detection code looks for branches under refs/remotes/p4/...
> > > and can end up getting confused if there are unshelved changes in
> > > there as well. This happens in the function p4BranchesInGit().
> > >
> > > Instead, put the unshelved changes into refs/remotes/p4-unshelved/.
> >
> > I am not a p4 user (and not a git-p4 user), so it is a bit hard for
> > me to assess if this is a backward incompatibile change and if so
> > how serious potential breakage to existing users would be.
>
> I don't think it's a particularly serious breakage - it reports the
> branch it unshelves to, so it should be fairly obvious.
>
> However, maybe it would make sense to pull this into a separate commit
> to make it more obvious? I should have thought of that before
> submitting.
>
> >
> > >
> > > -If the target branch in refs/remotes/p4/unshelved already exists, the 
> > > old one will
> > > +If the target branch in refs/remotes/p4-unshelved already exists, the 
> > > old one will
> > >  be renamed.
> > >
> > >  
> > >  $ git p4 sync
> > >  $ git p4 unshelve 12345
> > > -$ git show refs/remotes/p4/unshelved/12345
> > > +$ git show p4/unshelved/12345
> >
> > Isn't this "p4-unshelved/12345" now?
>
> Yes, I think another reason to pull into a separate commit.

D'oh. It's already in a separate commit.
I'll re-roll fixing that documentation.

I think it will be fine to change the branch that the unshelving
happens into - I think it's very unlikely anyone is basing some
automated scripts on this, because of the way that unshelving is used
anyway.

Luke


Re: [PATCH v2 09/13] git-p4: use `test_atexit` to kill the daemon

2018-10-15 Thread Luke Diamand
On Mon, 15 Oct 2018 at 11:12, Johannes Schindelin via GitGitGadget
 wrote:
>
> From: Johannes Schindelin 
>
> This should be more reliable than the current method, and prepares the
> test suite for a consistent way to clean up before re-running the tests
> with different options.
>

I'm finding that it's leaving p4d processes lying around.

e.g.

$ ./t9820-git-p4-editor-handling.sh

$ ./t9820-git-p4-editor-handling.sh


And also

$ ./t9800-git-p4-basic.sh

Ctrl-C

$ ./t9800-git-p4-basic.sh


$ ps | grep p4d
21392 pts/100:00:00 p4d  <


Luke
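
Until the root cause is found, a blunt cleanup for a test machine that
has accumulated stray daemons might be (assumption: no Perforce server
you care about runs on this box):

$ pkill -f p4d || true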


[PATCH 2/3] http: add support for disabling SSL revocation checks in cURL

2018-10-15 Thread Brendan Forster via GitGitGadget
From: Brendan Forster 

This adds support for a new http.schannelCheckRevoke config value.

This config value is only used if http.sslBackend is set to "schannel",
which forces cURL to use the Windows Certificate Store when validating
server certificates associated with a remote server.

This config value should only be set to "false" if you are in an
environment where revocation checks are blocked by the network, with
no alternative options.

This is only supported in cURL 7.44 or later.

Note: an earlier iteration tried to use the config setting
http.schannel.checkRevoke, but the http.* config settings can be limited
to specific URLs via http.<url>.* (which would mistake `schannel` for a
URL).

Helped by Agustín Martín Barbero.

Signed-off-by: Brendan Forster 
Signed-off-by: Johannes Schindelin 
---
 Documentation/config.txt |  8 
 http.c   | 17 +
 2 files changed, 25 insertions(+)

diff --git a/Documentation/config.txt b/Documentation/config.txt
index 7d38f0bf1a..d569ebd49e 100644
--- a/Documentation/config.txt
+++ b/Documentation/config.txt
@@ -1989,6 +1989,14 @@ http.sslBackend::
This option is ignored if cURL lacks support for choosing the SSL
backend at runtime.
 
+http.schannelCheckRevoke::
+   Used to enforce or disable certificate revocation checks in cURL
+   when http.sslBackend is set to "schannel". Defaults to `true` if
+   unset. Only necessary to disable this if Git consistently errors
+   and the message is about checking the revocation status of a
+   certificate. This option is ignored if cURL lacks support for
+   setting the relevant SSL option at runtime.
+
 http.pinnedpubkey::
Public key of the https service. It may either be the filename of
a PEM or DER encoded public key file or a string starting with
diff --git a/http.c b/http.c
index 7fb37a061b..8998056b60 100644
--- a/http.c
+++ b/http.c
@@ -157,6 +157,8 @@ static char *cached_accept_language;
 
 static char *http_ssl_backend;
 
+static int http_schannel_check_revoke = 1;
+
 size_t fread_buffer(char *ptr, size_t eltsize, size_t nmemb, void *buffer_)
 {
size_t size = eltsize * nmemb;
@@ -310,6 +312,11 @@ static int http_options(const char *var, const char 
*value, void *cb)
return 0;
}
 
+   if (!strcmp("http.schannelcheckrevoke", var)) {
+   http_schannel_check_revoke = git_config_bool(var, value);
+   return 0;
+   }
+
if (!strcmp("http.minsessions", var)) {
min_curl_sessions = git_config_int(var, value);
 #ifndef USE_CURL_MULTI
@@ -811,6 +818,16 @@ static CURL *get_curl_handle(void)
}
 #endif
 
+   if (http_ssl_backend && !strcmp("schannel", http_ssl_backend) &&
+   !http_schannel_check_revoke) {
+#if LIBCURL_VERSION_NUM >= 0x072c00
+   curl_easy_setopt(result, CURLOPT_SSL_OPTIONS, 
CURLSSLOPT_NO_REVOKE);
+#else
+   warning("CURLSSLOPT_NO_REVOKE not applied to curl SSL options 
because\n"
+   "your curl version is too old (>= 7.44.0)");
+#endif
+   }
+
if (http_proactive_auth)
init_curl_http_auth(result);
 
-- 
gitgitgadget



[PATCH 3/3] http: when using Secure Channel, ignore sslCAInfo by default

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

As of cURL v7.60.0, the Secure Channel backend can use the certificate
bundle provided via `http.sslCAInfo`, but that would override the
Windows Certificate Store. Since this is not desirable by default, let's
tell Git to not ask cURL to use that bundle by default when the `schannel`
backend was configured via `http.sslBackend`, unless
`http.schannelUseSSLCAInfo` overrides this behavior.

Signed-off-by: Johannes Schindelin 
---
 Documentation/config.txt |  8 
 http.c   | 19 ++-
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/Documentation/config.txt b/Documentation/config.txt
index d569ebd49e..1f6a6a4a6f 100644
--- a/Documentation/config.txt
+++ b/Documentation/config.txt
@@ -1997,6 +1997,14 @@ http.schannelCheckRevoke::
certificate. This option is ignored if cURL lacks support for
setting the relevant SSL option at runtime.
 
+http.schannelUseSSLCAInfo::
+   As of cURL v7.60.0, the Secure Channel backend can use the
+   certificate bundle provided via `http.sslCAInfo`, but that would
+   override the Windows Certificate Store. Since this is not desirable
+   by default, Git will tell cURL not to use that bundle by default
+   when the `schannel` backend was configured via `http.sslBackend`,
+   unless `http.schannelUseSSLCAInfo` overrides this behavior.
+
 http.pinnedpubkey::
Public key of the https service. It may either be the filename of
a PEM or DER encoded public key file or a string starting with
diff --git a/http.c b/http.c
index 8998056b60..a0a8b93785 100644
--- a/http.c
+++ b/http.c
@@ -158,6 +158,12 @@ static char *cached_accept_language;
 static char *http_ssl_backend;
 
 static int http_schannel_check_revoke = 1;
+/*
+ * With the backend being set to `schannel`, setting sslCAinfo would override
+ * the Certificate Store in cURL v7.60.0 and later, which is not what we want
+ * by default.
+ */
+static int http_schannel_use_ssl_cainfo;
 
 size_t fread_buffer(char *ptr, size_t eltsize, size_t nmemb, void *buffer_)
 {
@@ -317,6 +323,11 @@ static int http_options(const char *var, const char 
*value, void *cb)
return 0;
}
 
+   if (!strcmp("http.schannelusesslcainfo", var)) {
+   http_schannel_use_ssl_cainfo = git_config_bool(var, value);
+   return 0;
+   }
+
if (!strcmp("http.minsessions", var)) {
min_curl_sessions = git_config_int(var, value);
 #ifndef USE_CURL_MULTI
@@ -869,7 +880,13 @@ static CURL *get_curl_handle(void)
if (ssl_pinnedkey != NULL)
curl_easy_setopt(result, CURLOPT_PINNEDPUBLICKEY, 
ssl_pinnedkey);
 #endif
-   if (ssl_cainfo != NULL)
+   if (http_ssl_backend && !strcmp("schannel", http_ssl_backend) &&
+   !http_schannel_use_ssl_cainfo) {
+   curl_easy_setopt(result, CURLOPT_CAINFO, NULL);
+#if LIBCURL_VERSION_NUM >= 0x073400
+   curl_easy_setopt(result, CURLOPT_PROXY_CAINFO, NULL);
+#endif
+   } else if (ssl_cainfo != NULL)
curl_easy_setopt(result, CURLOPT_CAINFO, ssl_cainfo);
 
if (curl_low_speed_limit > 0 && curl_low_speed_time > 0) {
-- 
gitgitgadget


[PATCH 1/3] http: add support for selecting SSL backends at runtime

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

As of version 7.56.0, curl supports being compiled with multiple SSL
backends.

This patch adds the Git side of that feature: by setting http.sslBackend
to "openssl" or "schannel", Git for Windows can now choose the SSL
backend at runtime.

This comes in handy on Windows because Secure Channel ("schannel") is
the native solution, accessing the Windows Credential Store, thereby
allowing for enterprise-wide management of certificates. For historical
reasons, Git for Windows needs to support OpenSSL still, as it has
previously been the only supported SSL backend in Git for Windows for
almost a decade.

The patch has been carried in Git for Windows for over a year, and is
considered mature.

Signed-off-by: Johannes Schindelin 
---
 Documentation/config.txt |  5 +
 http.c   | 35 +++
 2 files changed, 40 insertions(+)

diff --git a/Documentation/config.txt b/Documentation/config.txt
index 1546833213..7d38f0bf1a 100644
--- a/Documentation/config.txt
+++ b/Documentation/config.txt
@@ -1984,6 +1984,11 @@ http.sslCAPath::
with when fetching or pushing over HTTPS. Can be overridden
by the `GIT_SSL_CAPATH` environment variable.
 
+http.sslBackend::
+   Name of the SSL backend to use (e.g. "openssl" or "schannel").
+   This option is ignored if cURL lacks support for choosing the SSL
+   backend at runtime.
+
 http.pinnedpubkey::
Public key of the https service. It may either be the filename of
a PEM or DER encoded public key file or a string starting with
diff --git a/http.c b/http.c
index 98ff122585..7fb37a061b 100644
--- a/http.c
+++ b/http.c
@@ -155,6 +155,8 @@ static struct active_request_slot *active_queue_head;
 
 static char *cached_accept_language;
 
+static char *http_ssl_backend;
+
 size_t fread_buffer(char *ptr, size_t eltsize, size_t nmemb, void *buffer_)
 {
size_t size = eltsize * nmemb;
@@ -302,6 +304,12 @@ static int http_options(const char *var, const char 
*value, void *cb)
curl_ssl_try = git_config_bool(var, value);
return 0;
}
+   if (!strcmp("http.sslbackend", var)) {
+   free(http_ssl_backend);
+   http_ssl_backend = xstrdup_or_null(value);
+   return 0;
+   }
+
if (!strcmp("http.minsessions", var)) {
min_curl_sessions = git_config_int(var, value);
 #ifndef USE_CURL_MULTI
@@ -995,6 +1003,33 @@ void http_init(struct remote *remote, const char *url, 
int proactive_auth)
git_config(urlmatch_config_entry, );
free(normalized_url);
 
+#if LIBCURL_VERSION_NUM >= 0x073800
+   if (http_ssl_backend) {
+   const curl_ssl_backend **backends;
+   struct strbuf buf = STRBUF_INIT;
+   int i;
+
+   switch (curl_global_sslset(-1, http_ssl_backend, )) {
+   case CURLSSLSET_UNKNOWN_BACKEND:
+   strbuf_addf(, _("Unsupported SSL backend '%s'. "
+   "Supported SSL backends:"),
+   http_ssl_backend);
+   for (i = 0; backends[i]; i++)
+   strbuf_addf(, "\n\t%s", backends[i]->name);
+   die("%s", buf.buf);
+   case CURLSSLSET_NO_BACKENDS:
+   die(_("Could not set SSL backend to '%s': "
+ "cURL was built without SSL backends"),
+   http_ssl_backend);
+   case CURLSSLSET_TOO_LATE:
+   die(_("Could not set SSL backend to '%s': already set"),
+   http_ssl_backend);
+   case CURLSSLSET_OK:
+   break; /* Okay! */
+   }
+   }
+#endif
+
if (curl_global_init(CURL_GLOBAL_ALL) != CURLE_OK)
die("curl_global_init failed");
 
-- 
gitgitgadget



[PATCH 0/3] Allow choosing the SSL backend cURL uses (plus related patches)

2018-10-15 Thread Johannes Schindelin via GitGitGadget
This topic branch brings support for choosing cURL's SSL backend (a feature
developed in Git for Windows' context) at runtime via http.sslBackend, and
two more patches that are related (and only of interest for Windows users).
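
For reference, all three knobs end up as plain `git config` settings; a
usage sketch with the values documented in the patches below:

# pick the native Windows TLS stack instead of OpenSSL
$ git config --global http.sslBackend schannel
# only if revocation checks are blocked by your network (patch 2/3)
$ git config --global http.schannelCheckRevoke false
# opt back in to using http.sslCAInfo together with schannel (patch 3/3)
$ git config --global http.schannelUseSSLCAInfo true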

Brendan Forster (1):
  http: add support for disabling SSL revocation checks in cURL

Johannes Schindelin (2):
  http: add support for selecting SSL backends at runtime
  http: when using Secure Channel, ignore sslCAInfo by default

 Documentation/config.txt | 21 
 http.c   | 71 +++-
 2 files changed, 91 insertions(+), 1 deletion(-)


base-commit: 5a0cc8aca797dbd7d2be3b67458ff880ed45cddf
Published-As: 
https://github.com/gitgitgadget/git/releases/tags/pr-46%2Fdscho%2Fhttp-ssl-backend-v1
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git 
pr-46/dscho/http-ssl-backend-v1
Pull-Request: https://github.com/gitgitgadget/git/pull/46
-- 
gitgitgadget


[PATCH v2 11/13] tests: record more stderr with --write-junit-xml in case of failure

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

Sometimes, failures in a test case are actually caused by issues in
earlier test cases.

To make it easier to see those issues, let's attach the output from
before the failing test case (i.e. stdout/stderr since the previous
failing test case, or the start of the test script). This will be
visible in the "Attachments" of the details of the failed test.

Signed-off-by: Johannes Schindelin 
---
 t/test-lib.sh | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 8a60e39364..5f62418f9c 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -479,6 +479,9 @@ test_failure_ () {
"$(cat "$GIT_TEST_TEE_OUTPUT_FILE")")"
>"$GIT_TEST_TEE_OUTPUT_FILE"
junit_insert="$junit_insert"
+   junit_insert="$junit_insert$(xml_attr_encode \
+   "$(cat "$GIT_TEST_TEE_OUTPUT_FILE.err")")"
+   >"$GIT_TEST_TEE_OUTPUT_FILE.err"
write_junit_xml_testcase "$1" "  $junit_insert"
fi
test_failure=$(($test_failure + 1))
@@ -763,9 +766,12 @@ test_start_ () {
then
junit_start=$(test-tool date getnanos)
 
-   # truncate output
-   test -z "$GIT_TEST_TEE_OUTPUT_FILE" ||
-   >"$GIT_TEST_TEE_OUTPUT_FILE"
+   # append to future ; truncate output
+   test -z "$GIT_TEST_TEE_OUTPUT_FILE" || {
+   cat "$GIT_TEST_TEE_OUTPUT_FILE" \
+   >>"$GIT_TEST_TEE_OUTPUT_FILE.err"
+   >"$GIT_TEST_TEE_OUTPUT_FILE"
+   }
fi
 }
 
-- 
gitgitgadget



[PATCH v2 12/13] README: add a build badge (status of the Azure Pipelines build)

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

Just like so many other OSS projects, we now also have a build badge.

Signed-off-by: Johannes Schindelin 
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index f920a42fad..bf4780c22d 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,5 @@
+[![Build 
Status](https:/dev.azure.com/git/git/_apis/build/status/test-git.git)](https://dev.azure.com/git/git/_build/latest?definitionId=2)
+
 Git - fast, scalable, distributed revision control system
 =
 
-- 
gitgitgadget



[PATCH v2 03/13] test-date: add a subcommand to measure times in shell scripts

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

In the next commit, we want to teach Git's test suite to optionally
output test results in JUnit-style .xml files. These files contain
information about the time spent. So we need a way to measure time.

While we could use `date +%s` for that, this will give us only seconds,
i.e. very coarse-grained timings.

GNU `date` supports `date +%s.%N` (i.e. nanosecond-precision output),
but there is no equivalent in BSD `date` (read: on macOS, we would not
be able to obtain precise timings).

So let's introduce `test-tool date getnanos`, with an optional start
time, that outputs preciser values.

Granted, it is a bit pointless to try measuring times accurately in
shell scripts, certainly to nanosecond precision. But it is better than
second-granularity.

Signed-off-by: Johannes Schindelin 
---
 t/helper/test-date.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/t/helper/test-date.c b/t/helper/test-date.c
index a0837371ab..792a805374 100644
--- a/t/helper/test-date.c
+++ b/t/helper/test-date.c
@@ -7,6 +7,7 @@ static const char *usage_msg = "\n"
 "  test-tool date parse [date]...\n"
 "  test-tool date approxidate [date]...\n"
 "  test-tool date timestamp [date]...\n"
+"  test-tool date getnanos [start-nanos]\n"
 "  test-tool date is64bit\n"
 "  test-tool date time_t-is64bit\n";
 
@@ -82,6 +83,15 @@ static void parse_approx_timestamp(const char **argv, struct 
timeval *now)
}
 }
 
+static void getnanos(const char **argv, struct timeval *now)
+{
+   double seconds = getnanotime() / 1.0e9;
+
+   if (*argv)
+   seconds -= strtod(*argv, NULL);
+   printf("%lf\n", seconds);
+}
+
 int cmd__date(int argc, const char **argv)
 {
struct timeval now;
@@ -108,6 +118,8 @@ int cmd__date(int argc, const char **argv)
parse_approxidate(argv+1, );
else if (!strcmp(*argv, "timestamp"))
parse_approx_timestamp(argv+1, );
+   else if (!strcmp(*argv, "getnanos"))
+   getnanos(argv+1, );
else if (!strcmp(*argv, "is64bit"))
return sizeof(timestamp_t) == 8 ? 0 : 1;
else if (!strcmp(*argv, "time_t-is64bit"))
-- 
gitgitgadget
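
A quick usage sketch of the new subcommand (run from a test script where
test-tool is already on PATH, or as t/helper/test-tool from the top-level
directory; `do_something_expensive` is just a stand-in):

start=$(test-tool date getnanos)
do_something_expensive
test-tool date getnanos $start   # prints the elapsed time in seconds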



[PATCH v2 10/13] tests: include detailed trace logs with --write-junit-xml upon failure

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

The JUnit XML format lends itself to be presented in a powerful UI,
where you can drill down to the information you are interested in very
quickly.

For test failures, this usually means that you want to see the detailed
trace of the failing tests.

With Travis CI, we passed the `--verbose-log` option to get those
traces. However, that seems excessive, as we do not need/use the logs in
almost all of those cases: only when a test fails do we have a way to
include the trace.

So let's do something different when using Azure DevOps: let's run all
the tests with `--quiet` first, and only if a failure is encountered,
try to trace the commands as they are executed.

Of course, we cannot turn on `--verbose-log` after the fact. So let's
just re-run the test with all the same options, adding `--verbose-log`.
And then munging the output file into the JUnit XML on the fly.

Note: there is an off chance that re-running the test in verbose mode
"fixes" the failures (and this does happen from time to time!). That is
a possibility we should be able to live with. Ideally, we would label
this as "Passed upon rerun", and Azure Pipelines even know about that
outcome, but it is not available when using the JUnit XML format for
now:
https://github.com/Microsoft/azure-pipelines-agent/blob/master/src/Agent.Worker/TestResults/JunitResultReader.cs

Signed-off-by: Johannes Schindelin 
---
 t/test-lib.sh | 34 +-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 6f9c1f5300..8a60e39364 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -85,6 +85,13 @@ done,*)
test "$(cat "$BASE.exit")" = 0
exit
;;
+*' --write-junit-xml '*)
+   # record how to call this script *with* --verbose-log, in case
+   # we encounter a breakage
+   junit_rerun_options_sq="$(printf '%s\n' "$0" --verbose-log -x "$@" |
+   sed -e "s/'/'''/g" -e "s/^/'/" -e "s/\$/'/" |
+   tr '\012' ' ')"
+   ;;
 esac
 
 # For repeatability, reset the environment to known value.
@@ -446,10 +453,31 @@ test_ok_ () {
 test_failure_ () {
if test -n "$write_junit_xml"
then
+   if test -z "$GIT_TEST_TEE_OUTPUT_FILE"
+   then
+   # clean up
+   test_atexit_handler
+
+   # re-run with --verbose-log
+   echo "# Re-running: $junit_rerun_options_sq" >&2
+
+   cd "$TEST_DIRECTORY" &&
+   eval "${TEST_SHELL_PATH}" "$junit_rerun_options_sq" \
+   >/dev/null 2>&1
+   status=$?
+
+   say_color "" "$(test 0 = $status ||
+   echo "not ")ok $test_count - (re-ran with 
trace)"
+   say "1..$test_count"
+   GIT_EXIT_OK=t
+   exit $status
+   fi
+
junit_insert=""
junit_insert="$junit_insert $(xml_attr_encode \
-   "$(printf '%s\n' "$@" | sed 1d)")"
+   "$(cat "$GIT_TEST_TEE_OUTPUT_FILE")")"
+   >"$GIT_TEST_TEE_OUTPUT_FILE"
junit_insert="$junit_insert"
write_junit_xml_testcase "$1" "  $junit_insert"
fi
@@ -734,6 +762,10 @@ test_start_ () {
if test -n "$write_junit_xml"
then
junit_start=$(test-tool date getnanos)
+
+   # truncate output
+   test -z "$GIT_TEST_TEE_OUTPUT_FILE" ||
+   >"$GIT_TEST_TEE_OUTPUT_FILE"
fi
 }
 
-- 
gitgitgadget



[PATCH v2 13/13] travis: fix skipping tagged releases

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

When building a PR, TRAVIS_BRANCH refers to the *target branch*.
Therefore, if a PR targets `master`, and `master` happened to be tagged,
we skipped the build by mistake.

Fix this by using TRAVIS_PULL_REQUEST_BRANCH (i.e. the *source branch*)
when available, falling back to TRAVIS_BRANCH (i.e. for CI builds, also
known as "push builds").

Signed-off-by: Johannes Schindelin 
---
 ci/lib.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ci/lib.sh b/ci/lib.sh
index 584abcd529..e1858ae609 100755
--- a/ci/lib.sh
+++ b/ci/lib.sh
@@ -3,7 +3,7 @@
 if test true = "$TRAVIS"
 then
# We are running within Travis CI
-   CI_BRANCH="$TRAVIS_BRANCH"
+   CI_BRANCH="${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}"
CI_COMMIT="$TRAVIS_COMMIT"
CI_JOB_ID="$TRAVIS_JOB_ID"
CI_JOB_NUMBER="$TRAVIS_JOB_NUMBER"
-- 
gitgitgadget


[PATCH v2 00/13] Offer to run CI/PR builds in Azure Pipelines

2018-10-15 Thread Johannes Schindelin via GitGitGadget
For a long time now, Git's source code has been continuously tested via
Travis CI, see e.g. https://travis-ci.org/git/git/builds/421738884. It has
served us well, and more and more developers actually pay attention and
benefit from the testing this gives us.

It is also an invaluable tool for contributors who can validate their code
contributions via PRs on GitHub, e.g. to verify that their tests do actually
run on macOS (i.e. with the BSD family of Unix tools instead of the GNU
one).

The one sad part about this is the Windows support. Travis lacks it, and we
work around that by using Azure Pipelines (the CI part of Azure DevOps,
formerly known as Visual Studio Team Services) indirectly: one phase in
Travis would trigger a build, wait for its log, and then paste that log.

As Git's Windows builds (and tests!) take quite a bit of time, Travis often
timed out, or somehow the trigger did not work, and for security reasons
(the Windows builds are performed in a private pool of containers), the
Windows builds are completely disabled for Pull Requests on GitHub.

One might ask why we did not use Azure Pipelines directly. There were a
couple of reasons for that:

 * most notably, Azure Pipelines' build logs could not be viewed
   anonymously,
 * while Azure Pipelines had Linux and Windows agents, it lacked macOS
   agents,
 * etc

The main two reasons no longer apply: macOS agents are available now
[https://docs.microsoft.com/en-us/azure/devops/release-notes/2018/jul-10-vsts]
, and there is a limited preview of "public projects"
[https://blogs.msdn.microsoft.com/devops/2018/04/27/vsts-public-projects-limited-preview/]
, i.e. it is possible to configure an Azure Pipelines project so that anybody
can view the logs.

I had secured such a public project for Git for Windows already, and I
recently also got one for Git. For now, the latter is hooked up with my
personal git.git fork on GitHub, but it is my hope that I convince y'all
that these Azure Pipelines builds are a good idea, and then hook it up with 
https://github.com/git/git.

As a special treat, this patch series adds the ability to present the
outcome of Git's test suite as JUnit-style .xml files. This allows the Azure
Pipelines build to present fun diagrams, trends, and makes it a lot easier
to drill down to test failures than before. See for example 
https://dev.azure.com/git/git/_build/results?buildId=113=ms.vss-test-web.test-result-details
(you can click on the label of the failed test, and then see the detailed
output in the right pane).

This patch series took way more time than I had originally planned, but I
think that in particular the advanced display of the test results was worth
it. Please let me know what you think about this.

Changes since v1:

 * Removed a superfluous eval.
 * Added the commit that fixes the Travis PR builds targeting master that 
   just happens to be tagged (see 
   https://travis-ci.org/git/git/jobs/424276413 for an incorrectly-skipped
   build).
 * The commit messages and the cover letter now reflect the name change from
   Visual Studio Team Services to Azure DevOps (and in particular, Azure
   Pipelines for the automated builds).
 * Now we're using test_atexit (which we introduced for that purpose)
   instead of hard-coding kill_p4d and stop_git_daemon.
 * The build should now also succeed for Pull Requests (where secret
   variables are not available, for security reasons, and as a consequence
   the file share cannot be mounted).
 * The shell scripted parts now use proper && chains.

Johannes Schindelin (13):
  ci: rename the library of common functions
  ci/lib.sh: encapsulate Travis-specific things
  test-date: add a subcommand to measure times in shell scripts
  tests: optionally write results as JUnit-style .xml
  ci/lib.sh: add support for Azure Pipelines
  Add a build definition for Azure DevOps
  tests: introduce `test_atexit`
  git-daemon: use `test_atexit` in the tests
  git-p4: use `test_atexit` to kill the daemon
  tests: include detailed trace logs with --write-junit-xml upon failure
  tests: record more stderr with --write-junit-xml in case of failure
  README: add a build badge (status of the Azure Pipelines build)
  travis: fix skipping tagged releases

 README.md  |   2 +
 azure-pipelines.yml| 319 +
 ci/install-dependencies.sh |   5 +-
 ci/{lib-travisci.sh => lib.sh} |  67 -
 ci/mount-fileshare.sh  |  26 ++
 ci/print-test-failures.sh  |   4 +-
 ci/run-build-and-tests.sh  |   2 +-
 ci/run-linux32-docker.sh   |   2 +-
 ci/run-static-analysis.sh  |   2 +-
 ci/run-windows-build.sh|   2 +-
 ci/test-documentation.sh   |   3 +-
 t/.gitignore   |   1 +
 

[PATCH v2 07/13] tests: introduce `test_atexit`

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

When running the p4 daemon or `git daemon`, we want to kill it at the
end of the test script.

So far, we do this "manually".

However, in the next few commits we want to teach the test suite to
optionally re-run scripts with different options; therefore, we need a
consistent way to stop daemons.

Let's introduce `test_atexit`, which is loosely modeled after
`test_when_finished` (but has a broader scope: rather than running the
commands after the current test case, run them when the test script
finishes, and also run them when the `--immediate` option is in effect).
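
Roughly, and with the daemon bits taken from the documentation comment added
below, the difference in scope looks like this:

    test_expect_success 'test git daemon' '
        git daemon &
        daemon_pid=$! &&
        test_atexit "kill $daemon_pid" &&      # runs once, when the script ends
        test_when_finished "rm -f output" &&   # runs right after this test case
        echo ready >output
    '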

Signed-off-by: Johannes Schindelin 
---
 t/t-basic.sh| 18 ++
 t/test-lib-functions.sh | 29 +
 t/test-lib.sh   |  4 
 3 files changed, 51 insertions(+)

diff --git a/t/t-basic.sh b/t/t-basic.sh
index 391f910c6a..8c5faa6ce1 100755
--- a/t/t-basic.sh
+++ b/t/t-basic.sh
@@ -821,6 +821,24 @@ test_expect_success 'tests clean up even on failures' "
EOF
 "
 
+test_expect_success 'test_atexit is run' "
+   test_must_fail run_sub_test_lib_test \
+   atexit-cleanup 'Run atexit commands' -i <<-\\EOF &&
+   test_expect_success 'tests clean up even after a failure' '
+   > ../../clean-atexit &&
+   test_atexit rm ../../clean-atexit &&
+   > ../../also-clean-atexit &&
+   test_atexit rm ../../also-clean-atexit &&
+   > ../../dont-clean-atexit &&
+   (exit 1)
+   '
+   test_done
+   EOF
+   test_path_exists dont-clean-atexit &&
+   test_path_is_missing clean-atexit &&
+   test_path_is_missing also-clean-atexit
+"
+
 test_expect_success 'test_oid setup' '
test_oid_init
 '
diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index 78d8c3783b..d7dd0c1be9 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -891,6 +891,35 @@ test_when_finished () {
} && (exit \"\$eval_ret\"); eval_ret=\$?; $test_cleanup"
 }
 
+# This function can be used to schedule some commands to be run
+# unconditionally at the end of the test script, e.g. to stop a daemon:
+#
+#  test_expect_success 'test git daemon' '
+#  git daemon &
+#  daemon_pid=$! &&
+#  test_atexit "kill $daemon_pid" &&
+#  hello world
+#  '
+
+test_atexit () {
+   # We cannot detect when we are in a subshell in general, but by
+   # doing so on Bash is better than nothing (the test will
+   # silently pass on other shells).
+   test "${BASH_SUBSHELL-0}" = 0 ||
+   error "bug in test script: test_atexit does nothing in a subshell"
+   test_atexit_cleanup="{ $*
+   } && (exit \"\$eval_ret\"); eval_ret=\$?; $test_atexit_cleanup"
+}
+
+test_atexit_handler () {
+   test : != "$test_atexit_cleanup" || return 0
+
+   setup_malloc_check
+   test_eval_ "$test_atexit_cleanup"
+   test_atexit_cleanup=:
+   teardown_malloc_check
+}
+
 # Most tests can use the created repository, but some may need to create more.
 # Usage: test_create_repo 
 test_create_repo () {
diff --git a/t/test-lib.sh b/t/test-lib.sh
index 7ed0013f6d..6f9c1f5300 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -413,6 +413,7 @@ test_external_has_tap=0
 
 die () {
code=$?
+   test_atexit_handler || code=$?
if test -n "$GIT_EXIT_OK"
then
exit $code
@@ -826,9 +827,12 @@ write_junit_xml_testcase () {
junit_have_testcase=t
 }
 
+test_atexit_cleanup=:
 test_done () {
GIT_EXIT_OK=t
 
+   test -n "$immediate" || test_atexit_handler
+
if test -n "$write_junit_xml" && test -n "$junit_xml_path"
then
test -n "$junit_have_testcase" || {
-- 
gitgitgadget



[PATCH v2 06/13] Add a build definition for Azure DevOps

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

This commit adds an azure-pipelines.yml file which is Azure DevOps'
equivalent to Travis CI's .travis.yml.

To make things a bit easier to understand, we refrain from using the
`matrix` feature here because (while it is powerful) it can be a bit
confusing to users who are not familiar with CI setups. Therefore, we
use a separate phase even for similar configurations (such as GCC vs
Clang on Linux, GCC vs Clang on macOS).

Also, we make use of the shiny new feature we just introduced where the
test suite can output JUnit-style .xml files. This information is made
available in a nice UI that allows the viewer to filter by phase and/or
test number, and to see trends such as: number of (failing) tests, time
spent running the test suite, etc.
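
Stripped of the file-share handling, the shell part of each phase boils down
to the same few steps; shown here for a Linux job, where the CC export is
what differs between the clang and gcc variants:

    export CC=clang
    ci/install-dependencies.sh
    ci/run-build-and-tests.sh || {
        ci/print-test-failures.sh
        exit 1
    }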

Signed-off-by: Johannes Schindelin 
---
 azure-pipelines.yml   | 319 ++
 ci/mount-fileshare.sh |  26 
 2 files changed, 345 insertions(+)
 create mode 100644 azure-pipelines.yml
 create mode 100755 ci/mount-fileshare.sh

diff --git a/azure-pipelines.yml b/azure-pipelines.yml
new file mode 100644
index 00..b5749121d2
--- /dev/null
+++ b/azure-pipelines.yml
@@ -0,0 +1,319 @@
+resources:
+- repo: self
+  fetchDepth: 1
+
+phases:
+- phase: linux_clang
+  displayName: linux-clang
+  condition: succeeded()
+  queue:
+name: Hosted Ubuntu 1604
+  steps:
+  - bash: |
+   test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || 
ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache 
gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
+
+   sudo apt-get update &&
+   sudo apt-get -y install git gcc make libssl-dev libcurl4-openssl-dev libexpat-dev tcl tk gettext git-email zlib1g-dev apache2-bin &&
+
+   export CC=clang || exit 1
+
+   ci/install-dependencies.sh
+   ci/run-build-and-tests.sh || {
+   ci/print-test-failures.sh
+   exit 1
+   }
+
+   test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || sudo umount 
"$HOME/test-cache" || exit 1
+displayName: 'ci/run-build-and-tests.sh'
+env:
+  GITFILESHAREPWD: $(gitfileshare.pwd)
+  - task: PublishTestResults@2
+displayName: 'Publish Test Results **/TEST-*.xml'
+inputs:
+  mergeTestResults: true
+  testRunTitle: 'linux-clang'
+  platform: Linux
+  publishRunAttachments: false
+condition: succeededOrFailed()
+
+- phase: linux_gcc
+  displayName: linux-gcc
+  condition: succeeded()
+  queue:
+name: Hosted Ubuntu 1604
+  steps:
+  - bash: |
+   test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || 
ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache 
gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
+
+   sudo apt-get update &&
+   sudo apt-get -y install git gcc make libssl-dev libcurl4-openssl-dev libexpat-dev tcl tk gettext git-email zlib1g-dev apache2-bin || exit 1
+
+   ci/install-dependencies.sh
+   ci/run-build-and-tests.sh || {
+   ci/print-test-failures.sh
+   exit 1
+   }
+
+   test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || sudo umount 
"$HOME/test-cache" || exit 1
+displayName: 'ci/run-build-and-tests.sh'
+env:
+  GITFILESHAREPWD: $(gitfileshare.pwd)
+  - task: PublishTestResults@2
+displayName: 'Publish Test Results **/TEST-*.xml'
+inputs:
+  mergeTestResults: true
+  testRunTitle: 'linux-gcc'
+  platform: Linux
+  publishRunAttachments: false
+condition: succeededOrFailed()
+
+- phase: osx_clang
+  displayName: osx-clang
+  condition: succeeded()
+  queue:
+name: Hosted macOS
+  steps:
+  - bash: |
+   test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || 
ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache 
gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
+
+   export CC=clang
+
+   ci/install-dependencies.sh
+   ci/run-build-and-tests.sh || {
+   ci/print-test-failures.sh
+   exit 1
+   }
+
+   test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || umount 
"$HOME/test-cache" || exit 1
+displayName: 'ci/run-build-and-tests.sh'
+env:
+  GITFILESHAREPWD: $(gitfileshare.pwd)
+  - task: PublishTestResults@2
+displayName: 'Publish Test Results **/TEST-*.xml'
+inputs:
+  mergeTestResults: true
+  testRunTitle: 'osx-clang'
+  platform: macOS
+  publishRunAttachments: false
+condition: succeededOrFailed()
+
+- phase: osx_gcc
+  displayName: osx-gcc
+  condition: succeeded()
+  queue:
+name: Hosted macOS
+  steps:
+  - bash: |
+   test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || 
ci/mount-fileshare.sh //gitfileshare.file.core.windows.net/test-cache 
gitfileshare "$GITFILESHAREPWD" "$HOME/test-cache" || exit 1
+
+   ci/install-dependencies.sh
+   ci/run-build-and-tests.sh || {
+   ci/print-test-failures.sh
+   exit 1
+   }
+
+   test "$GITFILESHAREPWD" = '$(gitfileshare.pwd)' || 

[PATCH v2 05/13] ci/lib.sh: add support for Azure Pipelines

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

This patch introduces a conditional arm that defines some environment
variables and a function that displays the URL given the job id (to
identify previous runs for known-good trees).
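
For example, sourcing the library inside such a build (with the variables
that Azure Pipelines provides) and asking for the current job's URL would
print something along these lines; the values are illustrative only:

    . ci/lib.sh &&
    url_for_job_id "$CI_JOB_ID"
    # e.g. https://dev.azure.com/git/git/_build/results?buildId=113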

Signed-off-by: Johannes Schindelin 
---
 ci/lib.sh | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/ci/lib.sh b/ci/lib.sh
index 8532555b4e..584abcd529 100755
--- a/ci/lib.sh
+++ b/ci/lib.sh
@@ -19,6 +19,29 @@ then
BREW_INSTALL_PACKAGES="git-lfs gettext"
export GIT_PROVE_OPTS="--timer --jobs 3 --state=failed,slow,save"
export GIT_TEST_OPTS="--verbose-log -x --immediate"
+elif test -n "$SYSTEM_TASKDEFINITIONSURI"
+then
+   # We are running in Azure Pipelines
+   CI_BRANCH="$BUILD_SOURCEBRANCH"
+   CI_COMMIT="$BUILD_SOURCEVERSION"
+   CI_JOB_ID="$BUILD_BUILDID"
+   CI_JOB_NUMBER="$BUILD_BUILDNUMBER"
+   CI_OS_NAME="$(echo "$AGENT_OS" | tr A-Z a-z)"
+   test darwin != "$CI_OS_NAME" || CI_OS_NAME=osx
+   CI_REPO_SLUG="$(expr "$BUILD_REPOSITORY_URI" : '.*/\([^/]*/[^/]*\)$')"
+   CC="${CC:-gcc}"
+
+   # use a subdirectory of the cache dir (because the file share is shared
+   # among *all* phases)
+   cache_dir="$HOME/test-cache/$SYSTEM_PHASENAME"
+
+   url_for_job_id () {
+   echo "$SYSTEM_TASKDEFINITIONSURI$SYSTEM_TEAMPROJECT/_build/results?buildId=$1"
+   }
+
+   BREW_INSTALL_PACKAGES=
+   export GIT_PROVE_OPTS="--timer --jobs 10 --state=failed,slow,save"
+   export GIT_TEST_OPTS="--quiet --write-junit-xml"
 fi
 
 skip_branch_tip_with_tag () {
-- 
gitgitgadget



[PATCH v2 09/13] git-p4: use `test_atexit` to kill the daemon

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

This should be more reliable than the current method, and it prepares the
test suite for a consistent way to clean up before re-running the tests
with different options.

Signed-off-by: Johannes Schindelin 
---
 t/lib-git-p4.sh| 10 +-
 t/t-basic.sh   |  2 ++
 t/t9800-git-p4-basic.sh|  4 
 t/t9801-git-p4-branch.sh   |  4 
 t/t9802-git-p4-filetype.sh |  4 
 t/t9803-git-p4-shell-metachars.sh  |  4 
 t/t9804-git-p4-label.sh|  4 
 t/t9805-git-p4-skip-submit-edit.sh |  4 
 t/t9806-git-p4-options.sh  |  5 -
 t/t9807-git-p4-submit.sh   |  4 
 t/t9808-git-p4-chdir.sh|  4 
 t/t9809-git-p4-client-view.sh  |  4 
 t/t9810-git-p4-rcs.sh  |  4 
 t/t9811-git-p4-label-import.sh |  5 -
 t/t9812-git-p4-wildcards.sh|  4 
 t/t9813-git-p4-preserve-users.sh   |  4 
 t/t9814-git-p4-rename.sh   |  4 
 t/t9815-git-p4-submit-fail.sh  |  4 
 t/t9816-git-p4-locked.sh   |  4 
 t/t9817-git-p4-exclude.sh  |  4 
 t/t9818-git-p4-block.sh|  4 
 t/t9819-git-p4-case-folding.sh |  4 
 t/t9820-git-p4-editor-handling.sh  |  4 
 t/t9821-git-p4-path-variations.sh  |  4 
 t/t9822-git-p4-path-encoding.sh|  4 
 t/t9823-git-p4-mock-lfs.sh |  4 
 t/t9824-git-p4-git-lfs.sh  |  4 
 t/t9825-git-p4-handle-utf16-without-bom.sh |  4 
 t/t9826-git-p4-keep-empty-commits.sh   |  4 
 t/t9827-git-p4-change-filetype.sh  |  4 
 t/t9828-git-p4-map-user.sh |  4 
 t/t9829-git-p4-jobs.sh |  4 
 t/t9830-git-p4-symlink-dir.sh  |  4 
 t/t9831-git-p4-triggers.sh |  4 
 t/t9832-unshelve.sh|  4 
 t/t9833-errors.sh  |  5 -
 36 files changed, 3 insertions(+), 148 deletions(-)

diff --git a/t/lib-git-p4.sh b/t/lib-git-p4.sh
index c27599474c..f4f5d7d296 100644
--- a/t/lib-git-p4.sh
+++ b/t/lib-git-p4.sh
@@ -74,15 +74,6 @@ cli="$TRASH_DIRECTORY/cli"
 git="$TRASH_DIRECTORY/git"
 pidfile="$TRASH_DIRECTORY/p4d.pid"
 
-# Sometimes "prove" seems to hang on exit because p4d is still running
-cleanup () {
-   if test -f "$pidfile"
-   then
-   kill -9 $(cat "$pidfile") 2>/dev/null && exit 255
-   fi
-}
-trap cleanup EXIT
-
 # git p4 submit generates a temp file, which will
 # not get cleaned up if the submission fails.  Don't
 # clutter up /tmp on the test machine.
@@ -141,6 +132,7 @@ start_p4d () {
# p4d failed to start
return 1
fi
+   test_atexit kill_p4d
 
# build a p4 user so aut...@example.com has an entry
p4_add_user author
diff --git a/t/t-basic.sh b/t/t-basic.sh
index 8c5faa6ce1..041bd7e3ce 100755
--- a/t/t-basic.sh
+++ b/t/t-basic.sh
@@ -134,6 +134,7 @@ check_sub_test_lib_test_err () {
)
 }
 
+cat >/dev/null <<\DDD
 test_expect_success 'pretend we have a fully passing test suite' "
run_sub_test_lib_test full-pass '3 passing tests' <<-\\EOF &&
for i in 1 2 3
@@ -820,6 +821,7 @@ test_expect_success 'tests clean up even on failures' "
> 1..2
EOF
 "
+DDD
 
 test_expect_success 'test_atexit is run' "
test_must_fail run_sub_test_lib_test \
diff --git a/t/t9800-git-p4-basic.sh b/t/t9800-git-p4-basic.sh
index 729cd25770..5856563068 100755
--- a/t/t9800-git-p4-basic.sh
+++ b/t/t9800-git-p4-basic.sh
@@ -326,8 +326,4 @@ test_expect_success 'submit from worktree' '
)
 '
 
-test_expect_success 'kill p4d' '
-   kill_p4d
-'
-
 test_done
diff --git a/t/t9801-git-p4-branch.sh b/t/t9801-git-p4-branch.sh
index 6a86d6996b..50013132c8 100755
--- a/t/t9801-git-p4-branch.sh
+++ b/t/t9801-git-p4-branch.sh
@@ -610,8 +610,4 @@ test_expect_success 'Update a file in git side and submit to P4 using client vie
)
 '
 
-test_expect_success 'kill p4d' '
-   kill_p4d
-'
-
 test_done
diff --git a/t/t9802-git-p4-filetype.sh b/t/t9802-git-p4-filetype.sh
index 9978352d78..94edebe272 100755
--- a/t/t9802-git-p4-filetype.sh
+++ b/t/t9802-git-p4-filetype.sh
@@ -333,8 +333,4 @@ test_expect_success SYMLINKS 'empty symlink target' '
)
 '
 
-test_expect_success 'kill p4d' '
-   kill_p4d
-'
-
 test_done
diff --git a/t/t9803-git-p4-shell-metachars.sh b/t/t9803-git-p4-shell-metachars.sh
index d5c3675100..2913277013 100755
--- a/t/t9803-git-p4-shell-metachars.sh
+++ b/t/t9803-git-p4-shell-metachars.sh
@@ -105,8 +105,4 @@ test_expect_success 'branch with shell char' '
)
 '
 
-test_expect_success 'kill p4d' '
-   kill_p4d
-'
-
 test_done

[PATCH v2 04/13] tests: optionally write results as JUnit-style .xml

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

This will come in handy when publishing the results of Git's test suite
during an automated Azure DevOps run.
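
Usage is just another test-suite option; judging from the new /out/ entry in
t/.gitignore and the TEST-*.xml pattern used later in the Azure Pipelines
definition, the results end up roughly like this:

    cd t &&
    ./t0001-init.sh --write-junit-xml &&
    ls out/TEST-*.xml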

Signed-off-by: Johannes Schindelin 
---
 t/.gitignore  |  1 +
 t/test-lib.sh | 98 +++
 2 files changed, 99 insertions(+)

diff --git a/t/.gitignore b/t/.gitignore
index 348715f0e4..91cf5772fe 100644
--- a/t/.gitignore
+++ b/t/.gitignore
@@ -2,3 +2,4 @@
 /test-results
 /.prove
 /chainlinttmp
+/out/
diff --git a/t/test-lib.sh b/t/test-lib.sh
index 3f95bfda60..7ed0013f6d 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -288,6 +288,9 @@ do
--verbose-log)
verbose_log=t
shift ;;
+   --write-junit-xml)
+   write_junit_xml=t
+   shift ;;
*)
echo "error: unknown test option '$1'" >&2; exit 1 ;;
esac
@@ -431,11 +434,24 @@ trap 'exit $?' INT
 # the test_expect_* functions instead.
 
 test_ok_ () {
+   if test -n "$write_junit_xml"
+   then
+   write_junit_xml_testcase "$*"
+   fi
test_success=$(($test_success + 1))
say_color "" "ok $test_count - $@"
 }
 
 test_failure_ () {
+   if test -n "$write_junit_xml"
+   then
+   junit_insert=""
+   junit_insert="$junit_insert $(xml_attr_encode \
+   "$(printf '%s\n' "$@" | sed 1d)")"
+   junit_insert="$junit_insert"
+   write_junit_xml_testcase "$1" "  $junit_insert"
+   fi
test_failure=$(($test_failure + 1))
say_color error "not ok $test_count - $1"
shift
@@ -444,11 +460,19 @@ test_failure_ () {
 }
 
 test_known_broken_ok_ () {
+   if test -n "$write_junit_xml"
+   then
+   write_junit_xml_testcase "$* (breakage fixed)"
+   fi
test_fixed=$(($test_fixed+1))
say_color error "ok $test_count - $@ # TODO known breakage vanished"
 }
 
 test_known_broken_failure_ () {
+   if test -n "$write_junit_xml"
+   then
+   write_junit_xml_testcase "$* (known breakage)"
+   fi
test_broken=$(($test_broken+1))
say_color warn "not ok $test_count - $@ # TODO known breakage"
 }
@@ -706,6 +730,10 @@ test_start_ () {
test_count=$(($test_count+1))
maybe_setup_verbose
maybe_setup_valgrind
+   if test -n "$write_junit_xml"
+   then
+   junit_start=$(test-tool date getnanos)
+   fi
 }
 
 test_finish_ () {
@@ -743,6 +771,13 @@ test_skip () {
 
case "$to_skip" in
t)
+   if test -n "$write_junit_xml"
+   then
+   message="$(xml_attr_encode "$skipped_reason")"
+   write_junit_xml_testcase "$1" \
+   "  "
+   fi
+
say_color skip >&3 "skipping test: $@"
say_color skip "ok $test_count # skip $1 ($skipped_reason)"
: true
@@ -758,9 +793,58 @@ test_at_end_hook_ () {
:
 }
 
+write_junit_xml () {
+   case "$1" in
+   --truncate)
+   >"$junit_xml_path"
+   junit_have_testcase=
+   shift
+   ;;
+   esac
+   printf '%s\n' "$@" >>"$junit_xml_path"
+}
+
+xml_attr_encode () {
+   # We do not translate CR to  because BSD sed does not handle
+   # \r in the regex. In practice, the output should not even have any
+   # carriage returns.
+   printf '%s\n' "$@" |
+   sed -e 's/&/\/g' -e "s/'/\/g" -e 's/"/\/g' \
+   -e 's//\/g' \
+   -e 's/  /\/g' -e 's/$/\/' -e '$s/$//' |
+   tr -d '\012\015'
+}
+
+write_junit_xml_testcase () {
+   junit_attrs="name=\"$(xml_attr_encode "$this_test.$test_count $1")\""
+   shift
+   junit_attrs="$junit_attrs classname=\"$this_test\""
+   junit_attrs="$junit_attrs time=\"$(test-tool \
+   date getnanos $junit_start)\""
+   write_junit_xml "$(printf '%s\n' \
+   "" "$@" "")"
+   junit_have_testcase=t
+}
+
 test_done () {
GIT_EXIT_OK=t
 
+   if test -n "$write_junit_xml" && test -n "$junit_xml_path"
+   then
+   test -n "$junit_have_testcase" || {
+   junit_start=$(test-tool date getnanos)
+   write_junit_xml_testcase "all tests skipped"
+   }
+
+   # adjust the overall time
+   junit_time=$(test-tool date getnanos $junit_suite_start)
+   sed "s/]*/& time=\"$junit_time\"/" \
+   <"$junit_xml_path" >"$junit_xml_path.new"
+   mv "$junit_xml_path.new" "$junit_xml_path"
+
+   write_junit_xml "  " ""
+   fi
+
if test -z "$HARNESS_ACTIVE"
then
test_results_dir="$TEST_OUTPUT_DIRECTORY/test-results"
@@ -996,6 +1080,7 @@ then
 else
mkdir -p "$TRASH_DIRECTORY"
 fi
+
 # Use -P to resolve symlinks in our working 

[PATCH v2 01/13] ci: rename the library of common functions

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

The name is hard-coded to reflect that we use Travis CI for continuous
testing.

In the next commits, we will extend this to be able to use Azure DevOps,
too.

So let's adjust the name to make it more generic.

Signed-off-by: Johannes Schindelin 
---
 ci/install-dependencies.sh | 2 +-
 ci/{lib-travisci.sh => lib.sh} | 0
 ci/print-test-failures.sh  | 2 +-
 ci/run-build-and-tests.sh  | 2 +-
 ci/run-linux32-docker.sh   | 2 +-
 ci/run-static-analysis.sh  | 2 +-
 ci/run-windows-build.sh| 2 +-
 ci/test-documentation.sh   | 2 +-
 8 files changed, 7 insertions(+), 7 deletions(-)
 rename ci/{lib-travisci.sh => lib.sh} (100%)

diff --git a/ci/install-dependencies.sh b/ci/install-dependencies.sh
index 75a9fd2475..961064658e 100755
--- a/ci/install-dependencies.sh
+++ b/ci/install-dependencies.sh
@@ -3,7 +3,7 @@
 # Install dependencies required to build and test Git on Linux and macOS
 #
 
-. ${0%/*}/lib-travisci.sh
+. ${0%/*}/lib.sh
 
 P4WHENCE=http://filehost.perforce.com/perforce/r$LINUX_P4_VERSION
 
LFSWHENCE=https://github.com/github/git-lfs/releases/download/v$LINUX_GIT_LFS_VERSION
diff --git a/ci/lib-travisci.sh b/ci/lib.sh
similarity index 100%
rename from ci/lib-travisci.sh
rename to ci/lib.sh
diff --git a/ci/print-test-failures.sh b/ci/print-test-failures.sh
index d55460a212..7aef39a2fd 100755
--- a/ci/print-test-failures.sh
+++ b/ci/print-test-failures.sh
@@ -3,7 +3,7 @@
 # Print output of failing tests
 #
 
-. ${0%/*}/lib-travisci.sh
+. ${0%/*}/lib.sh
 
 # Tracing executed commands would produce too much noise in the loop below.
 set +x
diff --git a/ci/run-build-and-tests.sh b/ci/run-build-and-tests.sh
index 2a5bff4a1c..e28ac2fb9a 100755
--- a/ci/run-build-and-tests.sh
+++ b/ci/run-build-and-tests.sh
@@ -3,7 +3,7 @@
 # Build and test Git
 #
 
-. ${0%/*}/lib-travisci.sh
+. ${0%/*}/lib.sh
 
 ln -s "$cache_dir/.prove" t/.prove
 
diff --git a/ci/run-linux32-docker.sh b/ci/run-linux32-docker.sh
index 21637903ce..751acfcf8a 100755
--- a/ci/run-linux32-docker.sh
+++ b/ci/run-linux32-docker.sh
@@ -3,7 +3,7 @@
 # Download and run Docker image to build and test 32-bit Git
 #
 
-. ${0%/*}/lib-travisci.sh
+. ${0%/*}/lib.sh
 
 docker pull daald/ubuntu32:xenial
 
diff --git a/ci/run-static-analysis.sh b/ci/run-static-analysis.sh
index 5688f261d0..dc189c7456 100755
--- a/ci/run-static-analysis.sh
+++ b/ci/run-static-analysis.sh
@@ -3,7 +3,7 @@
 # Perform various static code analysis checks
 #
 
-. ${0%/*}/lib-travisci.sh
+. ${0%/*}/lib.sh
 
 make --jobs=2 coccicheck
 
diff --git a/ci/run-windows-build.sh b/ci/run-windows-build.sh
index d99a180e52..a73a4eca0a 100755
--- a/ci/run-windows-build.sh
+++ b/ci/run-windows-build.sh
@@ -6,7 +6,7 @@
 # supported) and a commit hash.
 #
 
-. ${0%/*}/lib-travisci.sh
+. ${0%/*}/lib.sh
 
 test $# -ne 2 && echo "Unexpected number of parameters" && exit 1
 test -z "$GFW_CI_TOKEN" && echo "GFW_CI_TOKEN not defined" && exit
diff --git a/ci/test-documentation.sh b/ci/test-documentation.sh
index a20de9ca12..d3cdbac73f 100755
--- a/ci/test-documentation.sh
+++ b/ci/test-documentation.sh
@@ -3,7 +3,7 @@
 # Perform sanity checks on documentation and build it.
 #
 
-. ${0%/*}/lib-travisci.sh
+. ${0%/*}/lib.sh
 
 gem install asciidoctor
 
-- 
gitgitgadget



[PATCH v2 08/13] git-daemon: use `test_atexit` in the tests

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

This makes use of the just-introduced consistent way to specify that a
long-running process needs to be terminated at the end of a test script
run.

Signed-off-by: Johannes Schindelin 
---
 t/interop/i5500-git-daemon.sh | 1 -
 t/lib-git-daemon.sh   | 3 +--
 t/t5570-git-daemon.sh | 1 -
 3 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/t/interop/i5500-git-daemon.sh b/t/interop/i5500-git-daemon.sh
index 1daf69420b..4d22e42f84 100755
--- a/t/interop/i5500-git-daemon.sh
+++ b/t/interop/i5500-git-daemon.sh
@@ -37,5 +37,4 @@ test_expect_success "fetch with $VERSION_B" '
test_cmp expect actual
 '
 
-stop_git_daemon
 test_done
diff --git a/t/lib-git-daemon.sh b/t/lib-git-daemon.sh
index edbea2d986..a896af2284 100644
--- a/t/lib-git-daemon.sh
+++ b/t/lib-git-daemon.sh
@@ -13,7 +13,6 @@
 #
 #  test_expect_success ...
 #
-#  stop_git_daemon
 #  test_done
 
 test_tristate GIT_TEST_GIT_DAEMON
@@ -43,7 +42,7 @@ start_git_daemon() {
 
mkdir -p "$GIT_DAEMON_DOCUMENT_ROOT_PATH"
 
-   trap 'code=$?; stop_git_daemon; (exit $code); die' EXIT
+   test_atexit 'stop_git_daemon'
 
say >&3 "Starting git daemon ..."
mkfifo git_daemon_output
diff --git a/t/t5570-git-daemon.sh b/t/t5570-git-daemon.sh
index 7466aad111..08f95c80a2 100755
--- a/t/t5570-git-daemon.sh
+++ b/t/t5570-git-daemon.sh
@@ -211,5 +211,4 @@ test_expect_success FAKENC 'hostname interpolation works after LF-stripping' '
test_cmp expect actual
 '
 
-stop_git_daemon
 test_done
-- 
gitgitgadget



[PATCH v2 02/13] ci/lib.sh: encapsulate Travis-specific things

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

The upcoming patches will allow building git.git via Azure Pipelines
(i.e. Azure DevOps' Continuous Integration), where variable names and
URLs look a bit different than in Travis CI.
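
The idea is that the individual ci/*.sh scripts only ever consume the
generic CI_* variables and helpers, no matter which service defined them;
a minimal sketch of such a consumer:

    . ${0%/*}/lib.sh
    echo "building commit $CI_COMMIT from branch $CI_BRANCH on $CI_OS_NAME"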

Signed-off-by: Johannes Schindelin 
---
 ci/install-dependencies.sh |  3 ++-
 ci/lib.sh  | 44 +++---
 ci/print-test-failures.sh  |  2 +-
 ci/test-documentation.sh   |  1 +
 4 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/ci/install-dependencies.sh b/ci/install-dependencies.sh
index 961064658e..63fa37f68f 100755
--- a/ci/install-dependencies.sh
+++ b/ci/install-dependencies.sh
@@ -28,7 +28,8 @@ osx-clang|osx-gcc)
brew update --quiet
# Uncomment this if you want to run perf tests:
# brew install gnu-time
-   brew install git-lfs gettext
+   test -z "$BREW_INSTALL_PACKAGES" ||
+   brew install $BREW_INSTALL_PACKAGES
brew link --force gettext
brew install caskroom/cask/perforce
;;
diff --git a/ci/lib.sh b/ci/lib.sh
index 06970f7213..8532555b4e 100755
--- a/ci/lib.sh
+++ b/ci/lib.sh
@@ -1,5 +1,26 @@
 # Library of functions shared by all CI scripts
 
+if test true = "$TRAVIS"
+then
+   # We are running within Travis CI
+   CI_BRANCH="$TRAVIS_BRANCH"
+   CI_COMMIT="$TRAVIS_COMMIT"
+   CI_JOB_ID="$TRAVIS_JOB_ID"
+   CI_JOB_NUMBER="$TRAVIS_JOB_NUMBER"
+   CI_OS_NAME="$TRAVIS_OS_NAME"
+   CI_REPO_SLUG="$TRAVIS_REPO_SLUG"
+
+   cache_dir="$HOME/travis-cache"
+
+   url_for_job_id () {
+   echo "https://travis-ci.org/$CI_REPO_SLUG/jobs/$1;
+   }
+
+   BREW_INSTALL_PACKAGES="git-lfs gettext"
+   export GIT_PROVE_OPTS="--timer --jobs 3 --state=failed,slow,save"
+   export GIT_TEST_OPTS="--verbose-log -x --immediate"
+fi
+
 skip_branch_tip_with_tag () {
# Sometimes, a branch is pushed at the same time the tag that points
# at the same commit as the tip of the branch is pushed, and building
@@ -13,10 +34,10 @@ skip_branch_tip_with_tag () {
# we can skip the build because we won't be skipping a build
# of a tag.
 
-   if TAG=$(git describe --exact-match "$TRAVIS_BRANCH" 2>/dev/null) &&
-   test "$TAG" != "$TRAVIS_BRANCH"
+   if TAG=$(git describe --exact-match "$CI_BRANCH" 2>/dev/null) &&
+   test "$TAG" != "$CI_BRANCH"
then
-   echo "$(tput setaf 2)Tip of $TRAVIS_BRANCH is exactly at 
$TAG$(tput sgr0)"
+   echo "$(tput setaf 2)Tip of $CI_BRANCH is exactly at $TAG$(tput 
sgr0)"
exit 0
fi
 }
@@ -25,7 +46,7 @@ skip_branch_tip_with_tag () {
 # job if we encounter the same tree again and can provide a useful info
 # message.
 save_good_tree () {
-   echo "$(git rev-parse $TRAVIS_COMMIT^{tree}) $TRAVIS_COMMIT 
$TRAVIS_JOB_NUMBER $TRAVIS_JOB_ID" >>"$good_trees_file"
+   echo "$(git rev-parse $CI_COMMIT^{tree}) $CI_COMMIT $CI_JOB_NUMBER 
$CI_JOB_ID" >>"$good_trees_file"
# limit the file size
tail -1000 "$good_trees_file" >"$good_trees_file".tmp
mv "$good_trees_file".tmp "$good_trees_file"
@@ -35,7 +56,7 @@ save_good_tree () {
 # successfully before (e.g. because the branch got rebased, changing only
 # the commit messages).
 skip_good_tree () {
-   if ! good_tree_info="$(grep "^$(git rev-parse $TRAVIS_COMMIT^{tree}) " "$good_trees_file")"
+   if ! good_tree_info="$(grep "^$(git rev-parse $CI_COMMIT^{tree}) " "$good_trees_file")"
then
# Haven't seen this tree yet, or no cached good trees file yet.
# Continue the build job.
@@ -45,18 +66,18 @@ skip_good_tree () {
echo "$good_tree_info" | {
read tree prev_good_commit prev_good_job_number prev_good_job_id
 
-   if test "$TRAVIS_JOB_ID" = "$prev_good_job_id"
+   if test "$CI_JOB_ID" = "$prev_good_job_id"
then
cat <<-EOF
-   $(tput setaf 2)Skipping build job for commit $TRAVIS_COMMIT.$(tput sgr0)
+   $(tput setaf 2)Skipping build job for commit $CI_COMMIT.$(tput sgr0)
This commit has already been built and tested successfully by this build job.
To force a re-build delete the branch's cache and then hit 'Restart job'.
EOF
else
cat <<-EOF
-   $(tput setaf 2)Skipping build job for commit $TRAVIS_COMMIT.$(tput sgr0)
+   $(tput setaf 2)Skipping build job for commit $CI_COMMIT.$(tput sgr0)
This commit's tree has already been built and tested successfully in build job $prev_good_job_number for commit $prev_good_commit.
-   The log of that build job is available at https://travis-ci.org/$TRAVIS_REPO_SLUG/jobs/$prev_good_job_id
+   The log of that build job 

[PATCH 3/3] mingw: use domain information for default email

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

When a user is registered in a Windows domain, it is really easy to
obtain the email address. So let's do that.

Suggested by Lutz Roeder.
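
The user-visible effect, with all values below made up: on a domain account
with no user.email configured, the auto-detected ident now carries the user
principal name instead of a guessed address:

    git var GIT_AUTHOR_IDENT
    # e.g. Jane Doe <jane.doe@corp.example.com> 1539603316 +0200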

Signed-off-by: Johannes Schindelin 
---
 compat/mingw.c| 5 +
 compat/mingw.h| 2 ++
 git-compat-util.h | 4 
 ident.c   | 3 +++
 4 files changed, 14 insertions(+)

diff --git a/compat/mingw.c b/compat/mingw.c
index 623ff5daf5..44264fe3fd 100644
--- a/compat/mingw.c
+++ b/compat/mingw.c
@@ -1826,6 +1826,11 @@ static char *get_extended_user_info(enum 
EXTENDED_NAME_FORMAT type)
return NULL;
 }
 
+char *mingw_query_user_email(void)
+{
+   return get_extended_user_info(NameUserPrincipal);
+}
+
 struct passwd *getpwuid(int uid)
 {
static unsigned initialized;
diff --git a/compat/mingw.h b/compat/mingw.h
index 571019d0bd..f31dcff2be 100644
--- a/compat/mingw.h
+++ b/compat/mingw.h
@@ -424,6 +424,8 @@ static inline void convert_slashes(char *path)
 int mingw_offset_1st_component(const char *path);
 #define offset_1st_component mingw_offset_1st_component
 #define PATH_SEP ';'
+extern char *mingw_query_user_email(void);
+#define query_user_email mingw_query_user_email
 #if !defined(__MINGW64_VERSION_MAJOR) && (!defined(_MSC_VER) || _MSC_VER < 
1800)
 #define PRIuMAX "I64u"
 #define PRId64 "I64d"
diff --git a/git-compat-util.h b/git-compat-util.h
index 5f2e90932f..71779cb0ae 100644
--- a/git-compat-util.h
+++ b/git-compat-util.h
@@ -382,6 +382,10 @@ static inline char *git_find_last_dir_sep(const char *path)
 #define find_last_dir_sep git_find_last_dir_sep
 #endif
 
+#ifndef query_user_email
+#define query_user_email() NULL
+#endif
+
 #if defined(__HP_cc) && (__HP_cc >= 61000)
 #define NORETURN __attribute__((noreturn))
 #define NORETURN_PTR
diff --git a/ident.c b/ident.c
index 327abe557f..33bcf40644 100644
--- a/ident.c
+++ b/ident.c
@@ -168,6 +168,9 @@ const char *ident_default_email(void)
strbuf_addstr(&git_default_email, email);
committer_ident_explicitly_given |= IDENT_MAIL_GIVEN;
author_ident_explicitly_given |= IDENT_MAIL_GIVEN;
+   } else if ((email = query_user_email()) && email[0]) {
+   strbuf_addstr(&git_default_email, email);
+   free((char *)email);
} else
copy_email(xgetpwuid_self(&default_email_is_bogus),
   &git_default_email, &default_email_is_bogus);
-- 
gitgitgadget


[PATCH 2/3] getpwuid(mingw): provide a better default for the user name

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

We do have the excellent GetUserNameEx() function to obtain more
detailed information about the current user (if the user is part of a
Windows domain); let's use it.

Suggested by Lutz Roeder.

To avoid the cost of loading Secur32.dll (even lazily, loading DLLs
takes a non-negligible amount of time), we use the established technique
to load DLLs only when, and if, needed.

Signed-off-by: Johannes Schindelin 
---
 compat/mingw.c | 32 +++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/compat/mingw.c b/compat/mingw.c
index 597781b370..623ff5daf5 100644
--- a/compat/mingw.c
+++ b/compat/mingw.c
@@ -5,6 +5,7 @@
 #include "../strbuf.h"
 #include "../run-command.h"
 #include "../cache.h"
+#include "win32/lazyload.h"
 
 #define HCAST(type, handle) ((type)(intptr_t)handle)
 
@@ -1798,6 +1799,33 @@ int mingw_getpagesize(void)
return si.dwAllocationGranularity;
 }
 
+/* See https://msdn.microsoft.com/en-us/library/windows/desktop/ms724435.aspx */
+enum EXTENDED_NAME_FORMAT {
+   NameDisplay = 3,
+   NameUserPrincipal = 8
+};
+
+static char *get_extended_user_info(enum EXTENDED_NAME_FORMAT type)
+{
+   DECLARE_PROC_ADDR(secur32.dll, BOOL, GetUserNameExW,
+   enum EXTENDED_NAME_FORMAT, LPCWSTR, PULONG);
+   static wchar_t wbuffer[1024];
+   DWORD len;
+
+   if (!INIT_PROC_ADDR(GetUserNameExW))
+   return NULL;
+
+   len = ARRAY_SIZE(wbuffer);
+   if (GetUserNameExW(type, wbuffer, &len)) {
+   char *converted = xmalloc((len *= 3));
+   if (xwcstoutf(converted, wbuffer, len) >= 0)
+   return converted;
+   free(converted);
+   }
+
+   return NULL;
+}
+
 struct passwd *getpwuid(int uid)
 {
static unsigned initialized;
@@ -1816,7 +1844,9 @@ struct passwd *getpwuid(int uid)
 
p = xmalloc(sizeof(*p));
p->pw_name = user_name;
-   p->pw_gecos = "unknown";
+   p->pw_gecos = get_extended_user_info(NameDisplay);
+   if (!p->pw_gecos)
+   p->pw_gecos = "unknown";
p->pw_dir = NULL;
 
initialized = 1;
-- 
gitgitgadget



[PATCH 0/3] Provide a useful default user ident on Windows

2018-10-15 Thread Johannes Schindelin via GitGitGadget
On Linux, we use the gecos information to come up with a sensible user
name/email. On Windows, there is no gecos. But there is something
comparable, and with these patches, we use it.

This has been carried in Git for Windows for three years, and is considered
mature.

Johannes Schindelin (3):
  getpwuid(mingw): initialize the structure only once
  getpwuid(mingw): provide a better default for the user name
  mingw: use domain information for default email

 compat/mingw.c| 60 +--
 compat/mingw.h|  2 ++
 git-compat-util.h |  4 
 ident.c   |  3 +++
 4 files changed, 62 insertions(+), 7 deletions(-)


base-commit: 5a0cc8aca797dbd7d2be3b67458ff880ed45cddf
Published-As: 
https://github.com/gitgitgadget/git/releases/tags/pr-45%2Fdscho%2Fdefault-ident-v1
Fetch-It-Via: git fetch https://github.com/gitgitgadget/git 
pr-45/dscho/default-ident-v1
Pull-Request: https://github.com/gitgitgadget/git/pull/45
-- 
gitgitgadget


[PATCH 1/3] getpwuid(mingw): initialize the structure only once

2018-10-15 Thread Johannes Schindelin via GitGitGadget
From: Johannes Schindelin 

Signed-off-by: Johannes Schindelin 
---
 compat/mingw.c | 25 ++---
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/compat/mingw.c b/compat/mingw.c
index 18caf21969..597781b370 100644
--- a/compat/mingw.c
+++ b/compat/mingw.c
@@ -1800,16 +1800,27 @@ int mingw_getpagesize(void)
 
 struct passwd *getpwuid(int uid)
 {
+   static unsigned initialized;
static char user_name[100];
-   static struct passwd p;
+   static struct passwd *p;
+   DWORD len;
 
-   DWORD len = sizeof(user_name);
-   if (!GetUserName(user_name, &len))
+   if (initialized)
+   return p;
+
+   len = sizeof(user_name);
+   if (!GetUserName(user_name, &len)) {
+   initialized = 1;
return NULL;
-   p.pw_name = user_name;
-   p.pw_gecos = "unknown";
-   p.pw_dir = NULL;
-   return &p;
+   }
+
+   p = xmalloc(sizeof(*p));
+   p->pw_name = user_name;
+   p->pw_gecos = "unknown";
+   p->pw_dir = NULL;
+
+   initialized = 1;
+   return p;
 }
 
 static HANDLE timer_event;
-- 
gitgitgadget



Re: [PATCH v10 08/21] stash: convert apply to builtin

2018-10-15 Thread Johannes Schindelin
Hi Paul,

On Mon, 15 Oct 2018, Paul-Sebastian Ungureanu wrote:

> +static void assert_stash_like(struct stash_info *info, const char *revision)
> +{
> + if (get_oidf(&info->b_commit, "%s^1", revision) ||
> + get_oidf(&info->w_tree, "%s:", revision) ||
> + get_oidf(&info->b_tree, "%s^1:", revision) ||
> + get_oidf(&info->i_tree, "%s^2:", revision)) {
> + error(_("'%s' is not a stash-like commit"), revision);
> + free_stash_info(info);
> + exit(128);

Thomas had mentioned earlier that this should probably be a die() (and
that the `free_stash_info()` should simply be dropped), see
https://public-inbox.org/git/20180930174848.ge2...@hank.intra.tgummerer.com/

> + }
> +}
> +
> +static int get_stash_info(struct stash_info *info, int argc, const char 
> **argv)
> +{
> + int ret;
> + char *end_of_rev;
> + char *expanded_ref;
> + const char *revision;
> + const char *commit = NULL;
> + struct object_id dummy;
> + struct strbuf symbolic = STRBUF_INIT;
> +
> + if (argc > 1) {
> + int i;
> + struct strbuf refs_msg = STRBUF_INIT;
> + for (i = 0; i < argc; i++)
> + strbuf_addf(_msg, " '%s'", argv[i]);

Thomas had also mentioned that this should be a `strbuf_join_argv()` call
now.

Maybe v10 is an accidental re-send of v9?

Ciao,
Dscho

> +
> + fprintf_ln(stderr, _("Too many revisions specified:%s"),
> +refs_msg.buf);
> + strbuf_release(_msg);
> +
> + return -1;
> + }
> +
> + if (argc == 1)
> + commit = argv[0];
> +
> + strbuf_init(>revision, 0);
> + if (!commit) {
> + if (!ref_exists(ref_stash)) {
> + free_stash_info(info);
> + fprintf_ln(stderr, _("No stash entries found."));
> + return -1;
> + }
> +
> + strbuf_addf(>revision, "%s@{0}", ref_stash);
> + } else if (strspn(commit, "0123456789") == strlen(commit)) {
> + strbuf_addf(>revision, "%s@{%s}", ref_stash, commit);
> + } else {
> + strbuf_addstr(>revision, commit);
> + }
> +
> + revision = info->revision.buf;
> +
> + if (get_oid(revision, >w_commit)) {
> + error(_("%s is not a valid reference"), revision);
> + free_stash_info(info);
> + return -1;
> + }
> +
> + assert_stash_like(info, revision);
> +
> + info->has_u = !get_oidf(>u_tree, "%s^3:", revision);
> +
> + end_of_rev = strchrnul(revision, '@');
> + strbuf_add(, revision, end_of_rev - revision);
> +
> + ret = dwim_ref(symbolic.buf, symbolic.len, , _ref);
> + strbuf_release();
> + switch (ret) {
> + case 0: /* Not found, but valid ref */
> + info->is_stash_ref = 0;
> + break;
> + case 1:
> + info->is_stash_ref = !strcmp(expanded_ref, ref_stash);
> + break;
> + default: /* Invalid or ambiguous */
> + free_stash_info(info);
> + }
> +
> + free(expanded_ref);
> + return !(ret == 0 || ret == 1);
> +}
> +
> +static int reset_tree(struct object_id *i_tree, int update, int reset)
> +{
> + int nr_trees = 1;
> + struct unpack_trees_options opts;
> + struct tree_desc t[MAX_UNPACK_TREES];
> + struct tree *tree;
> + struct lock_file lock_file = LOCK_INIT;
> +
> + read_cache_preload(NULL);
> + if (refresh_cache(REFRESH_QUIET))
> + return -1;
> +
> + hold_locked_index(_file, LOCK_DIE_ON_ERROR);
> +
> + memset(, 0, sizeof(opts));
> +
> + tree = parse_tree_indirect(i_tree);
> + if (parse_tree(tree))
> + return -1;
> +
> + init_tree_desc(t, tree->buffer, tree->size);
> +
> + opts.head_idx = 1;
> + opts.src_index = _index;
> + opts.dst_index = _index;
> + opts.merge = 1;
> + opts.reset = reset;
> + opts.update = update;
> + opts.fn = oneway_merge;
> +
> + if (unpack_trees(nr_trees, t, ))
> + return -1;
> +
> + if (write_locked_index(_index, _file, COMMIT_LOCK))
> + return error(_("unable to write new index file"));
> +
> + return 0;
> +}
> +
> +static int diff_tree_binary(struct strbuf *out, struct object_id *w_commit)
> +{
> + struct child_process cp = CHILD_PROCESS_INIT;
> + const char *w_commit_hex = oid_to_hex(w_commit);
> +
> + /*
> +  * Diff-tree would not be very hard to replace with a native function,
> +  * however it should be done together with apply_cached.
> +  */
> + cp.git_cmd = 1;
> + argv_array_pushl(, "diff-tree", "--binary", NULL);
> + argv_array_pushf(, "%s^2^..%s^2", w_commit_hex, w_commit_hex);
> +
> + return pipe_command(, NULL, 0, out, 0, NULL, 0);
> +}
> +
> +static int apply_cached(struct strbuf *out)
> +{
> + struct child_process cp = CHILD_PROCESS_INIT;
> +
> + /*
> +  * Apply currently only reads either from stdin 
