Re: [GSoC][PATCH 5/6 v2] submodule: port submodule subcommand sync from shell to C

2017-06-26 Thread Christian Couder
On Tue, Jun 27, 2017 at 1:11 AM, Prathamesh Chavan  wrote:

> +static char *get_up_path(const char *path)
> +{
> +   int i = count_slashes(path);
> +   int l = strlen(path);

Nit: "l" is quite similar to "i" in many fonts, so maybe use "len"
instead of "l", but see below.

> +   struct strbuf sb = STRBUF_INIT;
> +
> +   while (i--)
> +   strbuf_addstr(&sb, "../");

Nit: a regular "for" loop like the following might be easier to review:

for (i = count_slashes(path); i; i--)
strbuf_addstr(&sb, "../");

> +   /*
> +*Check if 'path' ends with slash or not
> +*for having the same output for dir/sub_dir
> +*and dir/sub_dir/
> +*/
> +   if (!is_dir_sep(path[l - 1]))

As "l" is used only here, maybe we could get rid of this variable
altogether with something like:

  if (!is_dir_sep(path[strlen(path) - 1]))

> +   strbuf_addstr(&sb, "../");
> +
> +   return strbuf_detach(&sb, NULL);
> +}
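
For reference, a minimal sketch of the helper with both nits applied (a
plain "for" loop and no separate length variable); just an illustration of
the suggestions above, not a replacement patch:

static char *get_up_path(const char *path)
{
	int i;
	struct strbuf sb = STRBUF_INIT;

	for (i = count_slashes(path); i; i--)
		strbuf_addstr(&sb, "../");

	/*
	 * Check if 'path' ends with a slash or not, so that
	 * dir/sub_dir and dir/sub_dir/ give the same output.
	 */
	if (!is_dir_sep(path[strlen(path) - 1]))
		strbuf_addstr(&sb, "../");

	return strbuf_detach(&sb, NULL);
}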

> +static void sync_submodule(const struct cache_entry *list_item, void 
> *cb_data)
> +{
> +   struct sync_cb *info = cb_data;
> +   const struct submodule *sub;
> +   char *sub_key, *remote_key;
> +   char *sub_origin_url, *super_config_url, *displaypath;
> +   struct strbuf sb = STRBUF_INIT;
> +   struct child_process cp = CHILD_PROCESS_INIT;
> +
> +   if (!is_submodule_initialized(list_item->name))
> +   return;
> +
> +   sub = submodule_from_path(null_sha1, list_item->name);
> +
> +   if (!sub && !sub->url)

I think it should be "(!sub || !sub->url)".

> +   die(_("no url found for submodule path '%s' in .gitmodules"),
> + list_item->name);


Re: Compile Error v2.13.2 on Solaris SPARC

2017-06-26 Thread Michael Kebe
On the Solaris system here __BYTE_ORDER__ is set to 4321 and _BIG_ENDIAN
is defined, but has no value.

The problem is that the macro check does not short-circuit...

-8<--
#undef _FOO1
#undef _FOO2
#undef _FOO3

#undef _BAR1
#undef _BAR2
#undef _BAR3

#define _FOO3 42

// comment out this line or give it a value to make the preprocessor happy
#define _BAR1

#if (defined(_FOO1) || defined(_FOO2) || defined(_FOO3))

// not short-circuiting... the preprocessor still tries to evaluate (_FOO1 == _BAR1), but _BAR1 has no value...
#if ((defined(_FOO1) && (_FOO1 == _BAR1)) || \
  (defined(_FOO2) && (_FOO2 == _BAR2)) || \
  (defined(_FOO3) && (_FOO3 == _BAR3)) )
#define SHA1DC_BIGENDIAN
#endif

#endif
-8<--
https://gist.github.com/michaelkebe/c963c7478b7b55ad197f0665986870d4

What do you think?
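
One way to sidestep the problem entirely (a rough sketch of the idea only,
not necessarily the fix that will land in sha1collisiondetection; the
MY_BIGENDIAN name is a placeholder) is to never compare against a macro
that may be empty, and to treat the valueless Solaris-style flags purely
as defined/undefined:

-8<--
#undef MY_BIGENDIAN

/* GCC/Clang style: macros that carry numeric values. */
#if defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__)
# if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#  define MY_BIGENDIAN 1
# endif
#endif

/*
 * Solaris <sys/isa_defs.h> style: _BIG_ENDIAN is defined but has no
 * value, so only test defined()-ness, never its value.
 */
#if !defined(MY_BIGENDIAN) && defined(_BIG_ENDIAN) && !defined(_LITTLE_ENDIAN)
# define MY_BIGENDIAN 1
#endif
-8<--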

2017-06-27 7:41 GMT+02:00 Michael Kebe :
> 2017-06-26 22:27 GMT+02:00 Ævar Arnfjörð Bjarmason :
>> Could you (or anyone else for that matter) please test it with:
>>
>> git clone --branch bigend-detect-solaris-again 
>> https://github.com/avar/sha1collisiondetection.git &&
>> cd sha1collisiondetection &&
>> make test
>
> Still no luck.
>
> ~/sha1collisiondetection (bigend-detect-solaris-again *)$ CC=gcc gmake test
> mkdir -p dep_lib && gcc -O2 -Wall -Werror -Wextra -pedantic -std=c90
> -Ilib  -M -MF dep_lib/sha1.d lib/sha1.c
> lib/sha1.c:63:58: error: operator '==' has no right operand
>  #if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
>   ^
>
> Running Solaris on sparc:
> $ uname -a
> SunOS er202 5.11 11.3 sun4v sparc sun4v
>
>
> The isa_defs.h is available here:
> https://gist.github.com/michaelkebe/472720cd684b5b2a504b8eeb24049870
>
>
> Greetings
> Michael


Re: [GSoC][PATCH 2/6 v2] submodule: port subcommand foreach from shell to C

2017-06-26 Thread Christian Couder
On Tue, Jun 27, 2017 at 1:11 AM, Prathamesh Chavan  wrote:
> +
> +   if (!is_submodule_populated_gently(list_item->name, NULL))
> +   goto cleanup;
> +
> +   prepare_submodule_repo_env(&cp.env_array);
> +   /* For the purpose of executing  in the submodule,
> +* separate shell is used for the purpose of running the
> +* child process.
> +*/

As this comment spans over more than one line, it should be like:

/*
 * first line of comment
 * second line of comment
 * more stuff ...
 */

Also please explain WHY a shell is needed, we can see from the code
that we will use a shell.
So it should be something like:

/*
 * Use a shell because ...
 * and ...
 */

> +   cp.use_shell = 1;
> +   cp.dir = list_item->name;
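
A possible wording for such a comment (assuming the reason is simply that
the user-supplied <command> is an arbitrary shell snippet, exactly as it
was treated by the original git-submodule.sh) might be:

	/*
	 * Use a shell because the argument given to "git submodule foreach"
	 * is a free-form shell command rather than an executable plus
	 * arguments, and the shell implementation ran it through a shell too.
	 */
	cp.use_shell = 1;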


Re: Compile Error v2.13.2 on Solaris SPARC

2017-06-26 Thread Michael Kebe
2017-06-26 22:27 GMT+02:00 Ævar Arnfjörð Bjarmason :
> Could you (or anyone else for that matter) please test it with:
>
> git clone --branch bigend-detect-solaris-again 
> https://github.com/avar/sha1collisiondetection.git &&
> cd sha1collisiondetection &&
> make test

Still no luck.

~/sha1collisiondetection (bigend-detect-solaris-again *)$ CC=gcc gmake test
mkdir -p dep_lib && gcc -O2 -Wall -Werror -Wextra -pedantic -std=c90
-Ilib  -M -MF dep_lib/sha1.d lib/sha1.c
lib/sha1.c:63:58: error: operator '==' has no right operand
 #if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
  ^

Running Solaris on sparc:
$ uname -a
SunOS er202 5.11 11.3 sun4v sparc sun4v


The isa_defs.h is available here:
https://gist.github.com/michaelkebe/472720cd684b5b2a504b8eeb24049870


Greetings
Michael


Re: Bug: Cannot kill Nodejs process using ctrl + c

2017-06-26 Thread Gyandeep Singh
Appreciate it. Thanks a lot.
Regards,

Gyandeep Singh
Website: http://gyandeeps.com


On Mon, Jun 26, 2017 at 10:15 PM, Stefan Beller  wrote:
> I misread your email.
>
> So you are not running Git, but only talk about the (Git-)bash that is
> conveniently bundled with Git for Windows?
> For that I recommend https://github.com/git-for-windows/git/issues/new
>
> Johannes Schindelin, the GfW maintainer, is cc'd here as well, but
> AFAICT he prefers GitHub issues.
>
> On Mon, Jun 26, 2017 at 8:08 PM, Gyandeep Singh  wrote:
>> Not sure what you mean by output. But it just goes back to normal like this
>>
>> Gyandeep@Gyandeep MINGW64 ~
>>
>> $
>>
>>
>>
>> It was working fine on the first release of git 2.13. It broke with
>> releases after that.
>>
>> Will try with the --no-pager flag and let you know tomorrow.
>>
>>
>>
>> Thanks
>>
>> Gyandeep
>> Regards,
>>
>> Gyandeep Singh
>> Website: http://gyandeeps.com
>>
>>
>> On Mon, Jun 26, 2017 at 9:55 PM, Stefan Beller  wrote:
>>> Which exact outputs of Git are invoked?
>>>
>>> Does it change when giving slightly different options e.g. --no-pager?


Re: Bug: Cannot kill Nodejs process using ctrl + c

2017-06-26 Thread Stefan Beller
I misread your email.

So you are not running Git, but only talk about the (Git-)bash that is
conveniently bundled with Git for Windows?
For that I recommend https://github.com/git-for-windows/git/issues/new

Johannes Schindelin, the GfW maintainer, is cc'd here as well, but
AFAICT he prefers GitHub issues.

On Mon, Jun 26, 2017 at 8:08 PM, Gyandeep Singh  wrote:
> Not sure what you mean by output. But it just goes back to normal like this
>
> Gyandeep@Gyandeep MINGW64 ~
>
> $
>
>
>
> It was working fine on the first release of git 2.13. It broke with
> releases after that.
>
> Will try with the --no-pager flag and let you know tomorrow.
>
>
>
> Thanks
>
> Gyandeep
> Regards,
>
> Gyandeep Singh
> Website: http://gyandeeps.com
>
>
> On Mon, Jun 26, 2017 at 9:55 PM, Stefan Beller  wrote:
>> Which exact outputs of Git are invoked?
>>
>> Does it change when giving slightly different options e.g. --no-pager?


Re: --color-moved feedback, was Re: [PATCH v2 19/29] packed-backend: new module for handling packed references

2017-06-26 Thread Stefan Beller
On Fri, Jun 23, 2017 at 6:10 PM, Jeff King  wrote:
> On Fri, Jun 23, 2017 at 01:23:52PM -0700, Stefan Beller wrote:
>
>> > In the end, I just did --color-moved=plain,  ...
>> > "yep, this is all a giant moved chunk, so I don't have to look carefully
>> > at it".
>>
>> This is dangerous, as "plain" does not care about permutations.
>> See the 7f5af90798 (diff.c: color moved lines differently, 2017-06-19)
>> for details. You would want at least "zebra", which would show you
>> permutations.
>
> Ah, I see. I think what I'd really want is some way of correlating two
> particular hunks. That's hard to do with color, though. I guess that's
> the "you would need a ton of colors" discussion I saw going on before?

Yes. And the real question is "how many". ;)
And whether you want to optimize for the fewest number of colors, or rather
for "best user experience as defined by me", or something else entirely.

>
> It would depend on how many hunks there are, and how close together they
> are. For instance, your 6cd5757c8 seems like a good candidate, but I
> have to admit with --color-moved=zebra I have no idea what's going on.
> Some things are definitely colored, but I'm not sure what corresponds to
> what.

That is a good one indeed. My take on that:
If you use "zebra" alone, you do not care if it is cyan or yellow.
*Both* indicate a moved new line, and adjacent lines of the same color
indicate that they were adjacent at the source as well.

for reference http://i.imgur.com/hQXNlga.png

Look at the two lines of the first cyan->yellow transition
(closing the function prepare_submodule_repo_env_no_git_dir).

What it tells you is this:

  There is no matching paragraph in the deleted lines that also
  ends with two braces.

The yellow->cyan transition (prepare_submodule_repo_env) tells us:

  We did not have an empty line and then that function signature
  in the deleted lines, so we flip-flop to the other color to tell the user.
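
For anyone wanting to compare the modes being discussed, running both on a
move-heavy commit (for example the 6cd5757c8 mentioned above) is probably
the quickest way to see the difference:

	git show --color-moved=plain 6cd5757c8
	git show --color-moved=zebra 6cd5757c8

"plain" only marks lines as moved and does not care about permutations;
"zebra" alternates colors between adjacent moved blocks, so reordered
blocks stand out.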


>
>> > That feels more dangerous to me, just because the heuristics seem to
>> > find a lot of "moves" of repeated code. Imagine a simple patch like
>> > "switch return 0 to return -1". If the code you're touching went away,
>> > there's a very good chance that another "return 0" popped up somewhere
>> > else in the code base. A conflict is what you want there; silently
>> > changing some other site would be not only wrong, but quite subtle and
>> > hard to find.
>>
>> I agree, that is the danger, but the scale of danger is the size of the moved
>> chunk. A file is usually a very large chunk, such that it is obviously a good
>> idea to fix that. Maybe we'd introduce a threshold, that the fix must not be in
>> range of the block boundaries of say 3 lines.
>> (Then the moved block must be at least 7 lines of code such that a one liner
>> fix in the middle would kick in)
>
> Yes, I'd agree it's really a continuum from "one line" to "whole file".
> I think right now the --color-moved stuff is too close to the former to
> be safe, but pushing it farther towards the middle would remedy that.

Yup, I agree on that. I just wanted to state the principle that this would be
"move detection done right", because "file boundaries" are rather arbitrary
w.r.t. the very valuable content that we deal with. ;)

Thanks,
Stefan

>
> -Peff


Re: Bug: Cannot kill Nodejs process using ctrl + c

2017-06-26 Thread Gyandeep Singh
Not sure what you mean by output. But it just goes back to normal like this

Gyandeep@Gyandeep MINGW64 ~

$



It was working fine on the first release of git 2.13. It broke with
releases after that.

Will try with the --no-pager flag and let you know tomorrow.



Thanks

Gyandeep
Regards,

Gyandeep Singh
Website: http://gyandeeps.com


On Mon, Jun 26, 2017 at 9:55 PM, Stefan Beller  wrote:
> Which exact outputs of Git are invoked?
>
> Does it change when giving slightly different options e.g. --no-pager?


Re: [PATCH 2/3] builtin/fetch: parse recurse-submodules-default at default options parsing

2017-06-26 Thread Stefan Beller
On Fri, Jun 23, 2017 at 5:51 PM, Junio C Hamano  wrote:

   if (recurse_submodules != RECURSE_SUBMODULES_OFF) {
 -         if (recurse_submodules_default) {
 -                 int arg = parse_fetch_recurse_submodules_arg("--recurse-submodules-default",
 -                                                              recurse_submodules_default);
 -                 set_config_fetch_recurse_submodules(arg);
 -         }
 +         if (recurse_submodules_default != RECURSE_SUBMODULES_DEFAULT)
 +                 set_config_fetch_recurse_submodules(recurse_submodules_default);
>>>


>
> I am not talking about the outer "if" condition.

I agree with your analysis; my answer was evasive.
I'll dig into the details of why we do not set the default by default.


Re: What's cooking in git.git (Jun 2017, #06; Thu, 22)

2017-06-26 Thread Stefan Beller
On Fri, Jun 23, 2017 at 6:01 PM, Junio C Hamano  wrote:
> Junio C Hamano  writes:
>
>> But for the purpose of this "moved line coloring",
>> excluding multiple copy destinations of the same thing may be a
>> simpler and more robust solution.  It will not catch "somebody
>> stupidly removed one function and made two private copies", though.
>
> Let me take this one back.  Treating multiple copy destinations and
> multiple copy sources differently is a bad idea.  It is easy for a
> user to give "-R" option to "diff" to turn such a stupid patch into
> "somebody brilliantly consolidated two copies of the same thing into
> a single function", and we want to keep "diff" and "diff -R" output
> symmetric.

Thanks for the pointer with "blame -C". I'll look through the archives
to see if there are good discussions that yield ideas for this.


Re: Bug: Cannot kill Nodejs process using ctrl + c

2017-06-26 Thread Stefan Beller
Which exact outputs of Git are invoked?

Does it change when giving slightly different options e.g. --no-pager?


Re: [PATCH v6 5/6] convert: move multiple file filter error handling to separate function

2017-06-26 Thread Stefan Beller
On Mon, Jun 26, 2017 at 10:56 AM, Junio C Hamano  wrote:
> Not about this patch, but viewing this with
>
> git show -w --color-moved=zebra
>
> gives an interesting result.  The bulk of the part moved are
> re-indented, and the comment string gets zebra stripes, as if the
> line movement detection code does not realize that these came from
> the same place.
>

Thanks for the pointer, I'll include the whitespace-less comparison in tests.

Thanks,
Stefan


Re: [PATCH] pack-bitmap: Don't perform unaligned memory access

2017-06-26 Thread Jonathan Nieder
James Clarke wrote:

> The preceding bitmap entries have a 1-byte XOR-offset and 1-byte flags,
> so their size is not a multiple of 4. Thus the name-hash cache is only
> guaranteed to be 2-byte aligned and so we must use get_be32 rather than
> indexing the array directly.
>
> Signed-off-by: James Clarke 
> ---
>  pack-bitmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Reviewed-by: Jonathan Nieder 

Failing build log:
https://buildd.debian.org/status/fetch.php?pkg=git&arch=sparc64&ver=1%3A2.13.2-2&stamp=1498520310&raw=0

Thanks for tracking down and fixing it.
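
For readers unfamiliar with the alignment issue, a minimal illustration
(a hypothetical helper, not the actual hunk in pack-bitmap.c) of the
difference between direct indexing and get_be32():

	/*
	 * 'hashes' points into the mmap'd .bitmap file.  Because the
	 * preceding entries have a 1-byte XOR-offset and 1-byte flags,
	 * it is only guaranteed to be 2-byte aligned.
	 */
	static uint32_t nth_name_hash(const void *hashes, uint32_t i)
	{
		/*
		 * ((const uint32_t *)hashes)[i] would be an unaligned load,
		 * which traps on strict-alignment targets such as sparc64;
		 * get_be32() reads the big-endian value safely regardless
		 * of alignment.
		 */
		return get_be32((const unsigned char *)hashes + i * sizeof(uint32_t));
	}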


Re: [GSoC] Update: Week 5

2017-06-26 Thread Prathamesh Chavan
On Tue, Jun 20, 2017 at 5:31 AM, Andrew Ardill  wrote:
> On 20 June 2017 at 07:41, Prathamesh Chavan  wrote:
>
>>But as communicating between child_process instances is still an issue,
>>there was no simple way to carry out the porting at the moment.
>>Hence, a hack was used instead. But after
>>discussing it, using the repository-object patch
>>series instead will help to resolve these issues in this situation.
>
> Just wondering, does that mean that your patch series is dependent on
> the repository-object one? I saw some discussion around it recently
> but couldn't see it in the latest whats cooking so maybe I missed what
> has happened to it.

Sorry for such a late reply. In this update, and even in the latest update[1],
the patches aren't dependent on the 'repository-object' series.
But there are certain issues encountered which I aim to resolve
using that series.

>
> Really enjoying your updates, by the way, they are very clear and show
> what looks like great progress!

Thanks a lot for this, and I hope to keep improving it. :)

Thanks,
Prathamesh Chavan

[1]: 
https://public-inbox.org/git/CAME+mvUrr8EA-6jbCZdpB7dMZ5CN3RyY7yoRoUBoiZw=sh6...@mail.gmail.com/


[GSoC][PATCH 4/6 v2] submodule: port submodule subcommand status

2017-06-26 Thread Prathamesh Chavan
The mechanism used for porting submodule subcommand 'status'
is similar to that used for subcommand 'foreach'.
The function cmd_status from git-submodule is ported to three
functions in the builtin submodule--helper, namely: module_status,
for_each_submodule_list and status_submodule.

print_status is also introduced for handling the output of
the subcommand and to reduce the code size.

Mentored-by: Christian Couder 
Mentored-by: Stefan Beller 
Signed-off-by: Prathamesh Chavan 
---
 builtin/submodule--helper.c | 152 
 git-submodule.sh|  49 +-
 2 files changed, 153 insertions(+), 48 deletions(-)

diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
index 86112ac92..a5de7a0fe 100644
--- a/builtin/submodule--helper.c
+++ b/builtin/submodule--helper.c
@@ -566,6 +566,157 @@ static int module_init(int argc, const char **argv, const 
char *prefix)
return 0;
 }
 
+struct status_cb {
+   const char *prefix;
+   unsigned int quiet: 1;
+   unsigned int recursive: 1;
+   unsigned int cached: 1;
+};
+#define STATUS_CB_INIT { NULL, 0, 0, 0 }
+
+static void print_status(struct status_cb *info, char state, const char *path,
+char *sub_sha1, char *displaypath)
+{
+   if (info->quiet)
+   return;
+
+   printf("%c%s %s", state, sub_sha1, displaypath);
+
+   if (state == ' ' || state == '+') {
+   struct argv_array name_rev_args = ARGV_ARRAY_INIT;
+
+   argv_array_pushl(&name_rev_args, "print-name-rev",
+path, sub_sha1, NULL);
+   print_name_rev(name_rev_args.argc, name_rev_args.argv,
+  info->prefix);
+   } else {
+   printf("\n");
+   }
+}
+
+static void status_submodule(const struct cache_entry *list_item, void 
*cb_data)
+{
+   struct status_cb *info = cb_data;
+   char *sub_sha1 = xstrdup(oid_to_hex(&list_item->oid));
+   char *displaypath;
+   struct argv_array diff_files_args = ARGV_ARRAY_INIT;
+
+   if (!submodule_from_path(null_sha1, list_item->name))
+   die(_("no submodule mapping found in .gitmodules for path 
'%s'"),
+ list_item->name);
+
+   displaypath = get_submodule_displaypath(list_item->name, info->prefix);
+
+   if (list_item->ce_flags) {
+   print_status(info, 'U', list_item->name,
+sha1_to_hex(null_sha1), displaypath);
+   goto cleanup;
+   }
+
+   if (!is_submodule_initialized(list_item->name)) {
+   print_status(info, '-', list_item->name, sub_sha1, displaypath);
+   goto cleanup;
+   }
+
+   argv_array_pushl(&diff_files_args, "diff-files",
+"--ignore-submodules=dirty", "--quiet", "--",
+list_item->name, NULL);
+
+   if (!cmd_diff_files(diff_files_args.argc, diff_files_args.argv,
+   info->prefix)) {
+   print_status(info, ' ', list_item->name, sub_sha1, displaypath);
+   } else {
+   if (!info->cached) {
+   struct child_process cp = CHILD_PROCESS_INIT;
+   struct strbuf sb = STRBUF_INIT;
+
+   prepare_submodule_repo_env(&cp.env_array);
+   cp.git_cmd = 1;
+   cp.dir = list_item->name;
+
+   argv_array_pushl(&cp.args, "rev-parse",
+"--verify", "HEAD", NULL);
+
+   if (capture_command(&cp, &sb, 0))
+   die(_("could not run 'git rev-parse --verify"
+ "HEAD' in submodule %s"),
+ list_item->name);
+
+   strbuf_strip_suffix(&sb, "\n");
+   print_status(info, '+', list_item->name, sb.buf,
+displaypath);
+   strbuf_release(&sb);
+   } else {
+   print_status(info, '+', list_item->name, sub_sha1,
+displaypath);
+   }
+   }
+
+   if (info->recursive) {
+   struct child_process cpr = CHILD_PROCESS_INIT;
+
+   cpr.git_cmd = 1;
+   cpr.dir = list_item->name;
+   prepare_submodule_repo_env(&cpr.env_array);
+
+   argv_array_pushl(&cpr.args, "--super-prefix", displaypath,
+"submodule--helper", "status", "--recursive",
+NULL);
+
+   if (info->cached)
+   argv_array_push(&cpr.args, "--cached");
+
+   if (info->quiet)
+   argv_array_push(&cpr.args, "--quiet");
+
+   if (run_command(&cpr))
+   die(_("failed to recurse into s

[GSoC][PATCH 5/6 v2] submodule: port submodule subcommand sync from shell to C

2017-06-26 Thread Prathamesh Chavan
The mechanism used for porting the submodule subcommand 'sync' is
similar to that of 'foreach', where we split the function cmd_sync
from shell into three functions in C, module_sync,
for_each_submodule_list and sync_submodule.

print_default_remote is introduced as a submodule--helper
subcommand for printing the default remote to stdout.

Mentored-by: Christian Couder 
Mentored-by: Stefan Beller 
Signed-off-by: Prathamesh Chavan 
---
 builtin/submodule--helper.c | 178 
 git-submodule.sh|  56 +-
 2 files changed, 179 insertions(+), 55 deletions(-)

diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
index a5de7a0fe..4e3322846 100644
--- a/builtin/submodule--helper.c
+++ b/builtin/submodule--helper.c
@@ -44,6 +44,20 @@ static char *get_default_remote(void)
return ret;
 }
 
+static int print_default_remote(int argc, const char **argv, const char 
*prefix)
+{
+   const char *remote;
+
+   if (argc != 1)
+   die(_("submodule--helper print-default-remote takes no 
arguments"));
+
+   remote = get_default_remote();
+   if (remote)
+   puts(remote);
+
+   return 0;
+}
+
 static int starts_with_dot_slash(const char *str)
 {
return str[0] == '.' && is_dir_sep(str[1]);
@@ -385,6 +399,26 @@ static void module_list_active(struct module_list *list)
*list = active_modules;
 }
 
+static char *get_up_path(const char *path)
+{
+   int i = count_slashes(path);
+   int l = strlen(path);
+   struct strbuf sb = STRBUF_INIT;
+
+   while (i--)
+   strbuf_addstr(&sb, "../");
+
+   /*
+*Check if 'path' ends with slash or not
+*for having the same output for dir/sub_dir
+*and dir/sub_dir/
+*/
+   if (!is_dir_sep(path[l - 1]))
+   strbuf_addstr(&sb, "../");
+
+   return strbuf_detach(&sb, NULL);
+}
+
 static int module_list(int argc, const char **argv, const char *prefix)
 {
int i;
@@ -898,6 +932,148 @@ static int module_foreach(int argc, const char **argv, 
const char *prefix)
return 0;
 }
 
+struct sync_cb {
+   const char *prefix;
+   unsigned int quiet: 1;
+   unsigned int recursive: 1;
+};
+#define SYNC_CB_INIT { NULL, 0, 0 }
+
+static void sync_submodule(const struct cache_entry *list_item, void *cb_data)
+{
+   struct sync_cb *info = cb_data;
+   const struct submodule *sub;
+   char *sub_key, *remote_key;
+   char *sub_origin_url, *super_config_url, *displaypath;
+   struct strbuf sb = STRBUF_INIT;
+   struct child_process cp = CHILD_PROCESS_INIT;
+
+   if (!is_submodule_initialized(list_item->name))
+   return;
+
+   sub = submodule_from_path(null_sha1, list_item->name);
+
+   if (!sub && !sub->url)
+   die(_("no url found for submodule path '%s' in .gitmodules"),
+ list_item->name);
+
+   if (starts_with_dot_dot_slash(sub->url) || 
starts_with_dot_slash(sub->url)) {
+   char *remote_url, *up_path;
+   char *remote = get_default_remote();
+   char *remote_key = xstrfmt("remote.%s.url", remote);
+   free(remote);
+
+   if (git_config_get_string(remote_key, &remote_url))
+   remote_url = xgetcwd();
+   up_path = get_up_path(list_item->name);
+   sub_origin_url = relative_url(remote_url, sub->url, up_path);
+   super_config_url = relative_url(remote_url, sub->url, NULL);
+   free(remote_key);
+   free(up_path);
+   free(remote_url);
+   } else {
+   sub_origin_url = xstrdup(sub->url);
+   super_config_url = xstrdup(sub->url);
+   }
+
+   displaypath = get_submodule_displaypath(list_item->name, info->prefix);
+
+   if (!info->quiet)
+   printf(_("Synchronizing submodule url for '%s'\n"),
+displaypath);
+
+   sub_key = xstrfmt("submodule.%s.url", sub->name);
+   if (git_config_set_gently(sub_key, super_config_url))
+   die(_("failed to register url for submodule path '%s'"),
+ displaypath);
+
+   if (!is_submodule_populated_gently(list_item->name, NULL))
+   goto cleanup;
+
+   prepare_submodule_repo_env(&cp.env_array);
+   cp.git_cmd = 1;
+   cp.dir = list_item->name;
+   argv_array_pushl(&cp.args, "submodule--helper",
+"print-default-remote", NULL);
+   if (capture_command(&cp, &sb, 0))
+   die(_("failed to get the default remote for submodule '%s'"),
+ list_item->name);
+
+   strbuf_strip_suffix(&sb, "\n");
+   remote_key = xstrfmt("remote.%s.url", sb.buf);
+   strbuf_release(&sb);
+
+   child_process_init(&cp);
+   prepare_submodule_repo_env(&cp.env_array);
+   cp.git_cmd = 1;
+   cp.dir = list_item->

[GSoC][PATCH 1/6 v2] submodule--helper: introduce for_each_submodule_list

2017-06-26 Thread Prathamesh Chavan
Introduce the function for_each_submodule_list for use
in the later patches related to porting submodule
subcommands from shell to C.
This new function is also used in the ported submodule subcommand
'init'.

Mentored-by: Christian Couder 
Mentored-by: Stefan Beller 
Signed-off-by: Prathamesh Chavan 
---
This series of patches is based on the 'next' branch. 

Complete build report of this patch series is available at:
https://travis-ci.org/pratham-pc/git/builds/
Branch: patch-series
Build #113

 builtin/submodule--helper.c | 39 +--
 1 file changed, 29 insertions(+), 10 deletions(-)

diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
index 1bfc91bca..c4286aac5 100644
--- a/builtin/submodule--helper.c
+++ b/builtin/submodule--helper.c
@@ -14,6 +14,9 @@
 #include "refs.h"
 #include "connect.h"
 
+typedef void (*submodule_list_func_t)(const struct cache_entry *list_item,
+ void *cb_data);
+
 static char *get_default_remote(void)
 {
char *dest = NULL, *ret;
@@ -352,17 +355,30 @@ static int module_list(int argc, const char **argv, const 
char *prefix)
return 0;
 }
 
-static void init_submodule(const char *path, const char *prefix, int quiet)
+static void for_each_submodule_list(const struct module_list list,
+   submodule_list_func_t fn, void *cb_data)
 {
+   int i;
+   for (i = 0; i < list.nr; i++)
+   fn(list.entries[i], cb_data);
+}
+
+struct init_cb {
+   const char *prefix;
+   unsigned int quiet: 1;
+};
+#define INIT_CB_INIT { NULL, 0 }
+
+static void init_submodule(const struct cache_entry *list_item, void *cb_data)
+{
+   struct init_cb *info = cb_data;
const struct submodule *sub;
struct strbuf sb = STRBUF_INIT;
char *upd = NULL, *url = NULL, *displaypath;
 
-   /* Only loads from .gitmodules, no overlay with .git/config */
-   gitmodules_config();
-   displaypath = get_submodule_displaypath(path, prefix);
+   displaypath = get_submodule_displaypath(list_item->name, info->prefix);
 
-   sub = submodule_from_path(null_sha1, path);
+   sub = submodule_from_path(null_sha1, list_item->name);
 
if (!sub)
die(_("No url found for submodule path '%s' in .gitmodules"),
@@ -374,7 +390,7 @@ static void init_submodule(const char *path, const char 
*prefix, int quiet)
 *
 * Set active flag for the submodule being initialized
 */
-   if (!is_submodule_initialized(path)) {
+   if (!is_submodule_initialized(list_item->name)) {
strbuf_addf(&sb, "submodule.%s.active", sub->name);
git_config_set_gently(sb.buf, "true");
}
@@ -416,7 +432,7 @@ static void init_submodule(const char *path, const char 
*prefix, int quiet)
if (git_config_set_gently(sb.buf, url))
die(_("Failed to register url for submodule path '%s'"),
displaypath);
-   if (!quiet)
+   if (!info->quiet)
fprintf(stderr,
_("Submodule '%s' (%s) registered for path 
'%s'\n"),
sub->name, url, displaypath);
@@ -445,10 +461,10 @@ static void init_submodule(const char *path, const char 
*prefix, int quiet)
 
 static int module_init(int argc, const char **argv, const char *prefix)
 {
+   struct init_cb info = INIT_CB_INIT;
struct pathspec pathspec;
struct module_list list = MODULE_LIST_INIT;
int quiet = 0;
-   int i;
 
struct option module_init_options[] = {
OPT__QUIET(&quiet, N_("Suppress output for initializing a 
submodule")),
@@ -473,8 +489,11 @@ static int module_init(int argc, const char **argv, const 
char *prefix)
if (!argc && git_config_get_value_multi("submodule.active"))
module_list_active(&list);
 
-   for (i = 0; i < list.nr; i++)
-   init_submodule(list.entries[i]->name, prefix, quiet);
+   info.prefix = prefix;
+   info.quiet = !!quiet;
+
+   gitmodules_config();
+   for_each_submodule_list(list, init_submodule, &info);
 
return 0;
 }
-- 
2.13.0



[GSoC][PATCH 2/6 v2] submodule: port subcommand foreach from shell to C

2017-06-26 Thread Prathamesh Chavan
This aims to make git-submodule foreach a builtin. This is the very
first step taken in this direction. Hence, 'foreach' is ported to
submodule--helper, and submodule--helper is called from git-submodule.sh.
The code is split up so that one function obtains the list of
submodules. This function acts as the front-end of the git-submodule foreach
subcommand. It calls the function for_each_submodule_list, which
loops through the list and calls the function fn, which in this case is
runcommand_in_submodule. This third function
takes care of running the command in that submodule, and recursively
performs the same when --recursive is flagged.

The first function, module_foreach, first parses the options present in
argv, and then, with the help of module_list_compute, generates the list of
submodules present in the current working tree.

The second function, for_each_submodule_list, traverses through the
list and calls the function fn (which in the case of the submodule subcommand
foreach is runcommand_in_submodule) for each entry.

The third function, runcommand_in_submodule, generates a submodule struct sub
for $name and its value, and then prepends name=sub->name and other
value assignments to the env argv_array structure of a child_process.
The <command> of submodule-foreach is also pushed to the args argv_array
structure and finally, using run_command, the commands are executed
using a shell.

The third function also takes care of the recursive flag, by creating
a separate child_process structure and prepending "--super-prefix displaypath"
to the args argv_array structure. Other required arguments and the
input <command> of submodule-foreach are also appended to this argv_array.

Function get_initial_wt_prefix_up_path is introduced to take care of
generating the value of the path variable (as it was in the shell script).

Helped-by: Brandon Williams 
Mentored-by: Christian Couder 
Mentored-by: Stefan Beller 
Signed-off-by: Prathamesh Chavan 
---
This patch shows the current status of the foreach port.
Work is still needed on this patch for eliminating the hack
used for generating the path variable.

 builtin/submodule--helper.c | 163 
 git-submodule.sh|  39 +--
 2 files changed, 164 insertions(+), 38 deletions(-)

diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
index c4286aac5..5180659fd 100644
--- a/builtin/submodule--helper.c
+++ b/builtin/submodule--helper.c
@@ -517,6 +517,168 @@ static int module_name(int argc, const char **argv, const 
char *prefix)
return 0;
 }
 
+static char *get_initial_wt_prefix_up_path(char *path)
+{
+   struct strbuf sb = STRBUF_INIT;
+   const char *p = path;
+
+   while (*p) {
+   if (skip_prefix(p, "./", &p))
+   continue;
+   if (!skip_prefix(p, "../", &p))
+   break;
+   strbuf_addstr(&sb, "../");
+   }
+
+   return strbuf_detach(&sb, NULL);
+}
+
+struct cb_foreach {
+   int argc;
+   const char **argv;
+   const char *prefix;
+   unsigned int quiet: 1;
+   unsigned int recursive: 1;
+};
+#define CB_FOREACH_INIT { 0, NULL, NULL, 0, 0 }
+
+static void runcommand_in_submodule(const struct cache_entry *list_item,
+   void *cb_data)
+{
+   struct cb_foreach *info = cb_data;
+   const struct submodule *sub;
+   struct child_process cp = CHILD_PROCESS_INIT;
+   char *displaypath;
+
+   displaypath = get_submodule_displaypath(list_item->name, info->prefix);
+
+   sub = submodule_from_path(null_sha1, list_item->name);
+
+   if (!sub)
+   die(_("No url found for submodule path '%s' in .gitmodules"),
+ displaypath);
+
+   if (!is_submodule_populated_gently(list_item->name, NULL))
+   goto cleanup;
+
+   prepare_submodule_repo_env(&cp.env_array);
+   /* For the purpose of executing  in the submodule,
+* separate shell is used for the purpose of running the
+* child process.
+*/
+   cp.use_shell = 1;
+   cp.dir = list_item->name;
+
+   if (info->argc == 1) {
+
+   /*
+* NEEDSWORK: Here function get_initial_wt_prefix_up_path is
+* used as a hack for evaluating the value of the path
+* variable. The proper way would have been to store and use
+* the prefix of the repository, where first subcommand
+* foreach was executed and then evaluate path as
+* relative_path(list_item->name, prefix) for each submodule.
+*/
+   char *initial_wt_prefix_up_path = 
get_initial_wt_prefix_up_path(displaypath);
+   char *toplevel = xgetcwd();
+   argv_array_pushf(&cp.env_array, "name=%s", sub->name);
+   argv_array_pushf(&cp.env_array, "sm_path=%s%s",
+ 

[GSoC][PATCH 6/6 v2] submodule: port submodule subcommand 'deinit' from shell to C

2017-06-26 Thread Prathamesh Chavan
The same mechanism as in the subcommands ported so far is used
for porting this submodule subcommand as well.
The function cmd_deinit is split up after porting into three
functions: module_deinit, for_each_submodule_list and
deinit_submodule.

Mentored-by: Christian Couder 
Mentored-by: Stefan Beller 
Signed-off-by: Prathamesh Chavan 
---
 builtin/submodule--helper.c | 140 
 git-submodule.sh|  55 +
 2 files changed, 141 insertions(+), 54 deletions(-)

diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
index 4e3322846..17942529b 100644
--- a/builtin/submodule--helper.c
+++ b/builtin/submodule--helper.c
@@ -751,6 +751,145 @@ static int module_status(int argc, const char **argv, 
const char *prefix)
return 0;
 }
 
+struct deinit_cb {
+   const char *prefix;
+   unsigned int quiet: 1;
+   unsigned int force: 1;
+   unsigned int all: 1;
+};
+#define DEINIT_CB_INIT { NULL, 0, 0, 0 }
+
+static void deinit_submodule(const struct cache_entry *list_item,
+void *cb_data)
+{
+   struct deinit_cb *info = cb_data;
+   const struct submodule *sub;
+   char *displaypath = NULL;
+   struct child_process cp_config = CHILD_PROCESS_INIT;
+   struct strbuf sb_config = STRBUF_INIT;
+   char *sm_path = xstrdup(list_item->name);
+   char *sub_git_dir = xstrfmt("%s/.git", sm_path);
+
+   sub = submodule_from_path(null_sha1, sm_path);
+
+   if (!sub->name)
+   goto cleanup;
+
+   displaypath = get_submodule_displaypath(sm_path, info->prefix);
+
+   /* remove the submodule work tree (unless the user already did it) */
+   if (is_directory(sm_path)) {
+   struct child_process cp = CHILD_PROCESS_INIT;
+
+   /* protect submodules containing a .git directory */
+   if (is_git_directory(sub_git_dir))
+   die(_("Submodule work tree '%s' contains a .git "
+ "directory use 'rm -rf' if you really want "
+ "to remove it including all of its history"),
+ displaypath);
+
+   if (!info->force) {
+   struct child_process cp_rm = CHILD_PROCESS_INIT;
+   cp_rm.git_cmd = 1;
+   argv_array_pushl(&cp_rm.args, "rm", "-qn", sm_path,
+NULL);
+
+   /* list_item->name is changed by cmd_rm() below */
+   if (run_command(&cp_rm))
+   die(_("Submodule work tree '%s' contains local "
+ "modifications; use '-f' to discard 
them"),
+ displaypath);
+   }
+
+   cp.use_shell = 1;
+   argv_array_pushl(&cp.args, "rm", "-rf", sm_path, NULL);
+   if (!run_command(&cp)) {
+   if (!info->quiet)
+   printf(_("Cleared directory '%s'\n"),
+displaypath);
+   } else {
+   if (!info->quiet)
+   printf(_("Could not remove submodule work tree 
'%s'\n"),
+displaypath);
+   }
+   }
+
+   if (mkdir(sm_path, 0700))
+   die(_("could not create empty submodule directory %s"),
+ displaypath);
+
+   cp_config.git_cmd = 1;
+   argv_array_pushl(&cp_config.args, "config", "--get-regexp", NULL);
+   argv_array_pushf(&cp_config.args, "submodule.%s\\.", sub->name);
+
+   /* remove the .git/config entries (unless the user already did it) */
+   if (!capture_command(&cp_config, &sb_config, 0) && sb_config.len) {
+   char *sub_key = xstrfmt("submodule.%s", sub->name);
+   /*
+* remove the whole section so we have a clean state when
+* the user later decides to init this submodule again
+*/
+   git_config_rename_section_in_file(NULL, sub_key, NULL);
+   if (!info->quiet)
+   printf(_("Submodule '%s' (%s) unregistered for path 
'%s'\n"),
+sub->name, sub->url, displaypath);
+   free(sub_key);
+   }
+
+cleanup:
+   free(displaypath);
+   free(sub_git_dir);
+   free(sm_path);
+   strbuf_release(&sb_config);
+}
+
+static int module_deinit(int argc, const char **argv, const char *prefix)
+{
+   struct deinit_cb info = DEINIT_CB_INIT;
+   struct pathspec pathspec;
+   struct module_list list = MODULE_LIST_INIT;
+   int quiet = 0;
+   int force = 0;
+   int all = 0;
+
+   struct option module_deinit_options[] = {
+   OPT__QUIET(&quiet, N_("Suppress submodule status output")),
+   OPT__

[GSoC][PATCH 3/6 v2] submodule: port set_name_rev from shell to C

2017-06-26 Thread Prathamesh Chavan
Since later on we want to port the submodule subcommand status, and since
set_name_rev is part of cmd_status, this function is ported first. It
has been ported to the function print_name_rev in C, which calls get_name_rev
to get the revname and, after formatting it, prints it.
In this way, the command `git submodule--helper print-name-rev
"sm_path" "sha1"` sets the value of revname in git-submodule.sh.

The function get_name_rev returns the stdout of the git describe
commands. Since there are four different git-describe commands used for
generating the name rev, four child processes are introduced, each successive
child process running only when the previous one produces no stdout. The order
of these four git-describe commands is kept the same as it was in the function
set_name_rev() in the shell script.

Mentored-by: Christian Couder 
Mentored-by: Stefan Beller 
Signed-off-by: Prathamesh Chavan 
---
 builtin/submodule--helper.c | 69 +
 git-submodule.sh| 16 ++-
 2 files changed, 71 insertions(+), 14 deletions(-)

diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
index 5180659fd..86112ac92 100644
--- a/builtin/submodule--helper.c
+++ b/builtin/submodule--helper.c
@@ -244,6 +244,74 @@ static char *get_submodule_displaypath(const char *path, 
const char *prefix)
}
 }
 
+enum describe_step {
+   step_bare,
+   step_tags,
+   step_contains,
+   step_all_always,
+   step_end
+};
+
+static char *get_name_rev(const char *sub_path, const char* object_id)
+{
+   struct strbuf sb = STRBUF_INIT;
+   enum describe_step cur_step;
+
+   for (cur_step = step_bare; cur_step < step_end; cur_step++) {
+   struct child_process cp = CHILD_PROCESS_INIT;
+   prepare_submodule_repo_env(&cp.env_array);
+   cp.dir = sub_path;
+   cp.git_cmd = 1;
+   cp.no_stderr = 1;
+
+   switch (cur_step) {
+   case step_bare:
+   argv_array_pushl(&cp.args, "describe",
+object_id, NULL);
+   break;
+   case step_tags: 
+   argv_array_pushl(&cp.args, "describe",
+"--tags", object_id, NULL);
+   break;
+   case step_contains:
+   argv_array_pushl(&cp.args, "describe",
+"--contains", object_id,
+NULL);
+   break;
+   case step_all_always:
+   argv_array_pushl(&cp.args, "describe",
+"--all", "--always",
+object_id, NULL);
+   break;
+   default:
+   BUG("unknown describe step '%d'", cur_step);
+   }
+
+   if (!capture_command(&cp, &sb, 0) && sb.len) {
+   strbuf_strip_suffix(&sb, "\n");
+   return strbuf_detach(&sb, NULL);
+   }
+
+   }
+
+   strbuf_release(&sb);
+   return NULL;
+}
+
+static int print_name_rev(int argc, const char **argv, const char *prefix)
+{
+   char *namerev;
+   if (argc != 3)
+   die("print-name-rev only accepts two arguments:  ");
+
+   namerev = get_name_rev(argv[1], argv[2]);
+   if (namerev && namerev[0])
+   printf(" (%s)", namerev);
+   printf("\n");
+
+   return 0;
+}
+
 struct module_list {
const struct cache_entry **entries;
int alloc, nr;
@@ -1405,6 +1473,7 @@ static struct cmd_struct commands[] = {
{"resolve-relative-url", resolve_relative_url, 0},
{"resolve-relative-url-test", resolve_relative_url_test, 0},
{"foreach", module_foreach, SUPPORT_SUPER_PREFIX},
+   {"print-name-rev", print_name_rev, 0},
{"init", module_init, SUPPORT_SUPER_PREFIX},
{"remote-branch", resolve_remote_submodule_branch, 0},
{"push-check", push_check, 0},
diff --git a/git-submodule.sh b/git-submodule.sh
index c88e0ff7e..b28a6ba8e 100755
--- a/git-submodule.sh
+++ b/git-submodule.sh
@@ -722,18 +722,6 @@ cmd_update()
}
 }
 
-set_name_rev () {
-   revname=$( (
-   sanitize_submodule_env
-   cd "$1" && {
-   git describe "$2" 2>/dev/null ||
-   git describe --tags "$2" 2>/dev/null ||
-   git describe --contains "$2" 2>/dev/null ||
-   git describe --all --always "$2"
-   }
-   ) )
-   test -z "$revname" || revname=" ($revname)"
-}
 #
 # Show commit summary for submodules in index or working tr

[GSoC] Update: Week 6

2017-06-26 Thread Prathamesh Chavan
SUMMARY OF MY PROJECT:

Git submodule subcommands are currently implemented by using shell script
'git-submodule.sh'. There are several reasons why we'll prefer not to
use the shell script. My project intends to convert the subcommands into
C code, thus making them builtins. This will increase Git's portability
and hence the efficiency of working with the git-submodule commands.
Link to the complete proposal: [1]

Mentors:
Stefan Beller 
Christian Couder 

UPDATES:

Following are the updates about my ongoing project:

1. sync and status: The patches were discussed with the mentors
   and after that, are being posted along with this update.

2. deinit: The patch is debugged, and is ready to be
   discussed. Not much discussion occurred over this patch
   and hence the patch is the same as its previous version.
   It is also attached with this update.

3. summary: The porting of this submodule subcommand is
   almost completed. Things like improving the function
   names, checking for memory leakage, etc. are still
   left to be taken care of. I'm updating the patch's status
   by sending the patch to the mentors off-list, so that
   an appropriate version is posted here on the list.
   Hence, it wasn't attached to the update.

4. foreach: Not much progress was done with this patch
   in particular as most of the time was used for completing
   the porting of submodule subcommand 'summary'.
   Hence its status remains same as mentioned in the
   previous update, which is reposted below:
   'As stated in the previous update, the subcommand was
   ported without resolving the bug, and simply translating the
   present code, and adding a NEEDSWORK tag to the comment for
   mentioning the reported bug as well.
   But as communicating between child_process instances is still an issue,
   there was no simple way to carry out the porting at the
   moment. Hence, a hack was used instead. But after
   discussing it, using the repository-object patch
   series instead will help to resolve these issues in this situation.'

PLAN FOR WEEK-7 (27 June 2017 to 3 July 2017):

1. foreach: Since the required changes weren't made in the last
   week in regard to this patch, in the next week I aim to
   fulfill them first. I'd like to mention it again here:
   As it was decided to unblock the conversion of
   this submodule subcommand, the original cmd_foreach was
   ported without including the BUG-FIX patch here.
   Hence, for this week I will try to utilize the
   'repository-object' series by Brandon Williams.
   Additionally, I'd like to mention that this update
   still doesn't depend on the 'repository-object' series,
   which I'll be trying to change in the next update.

2. summary: As mentioned earlier, since there is still a little
   work with its porting left, I'll try to finish it and debug
   the ported code as well.

3. deinit: As there is still scope for improvement and discussion,
   I'll also be focusing on improving this patch.

[1]: 
https://docs.google.com/document/d/1krxVLooWl--75Pot3dazhfygR3wCUUWZWzTXtK1L-xU/

Thanks,
Prathamesh Chavan


Re: [PATCH v6 6/6] convert: add "status=delayed" to filter process protocol

2017-06-26 Thread Junio C Hamano
Junio C Hamano  writes:

 +  filter->string = "";
 +  continue;
 +  }
 +
 +  /* In dco->paths we store a list of all delayed paths.
 + The filter just send us a list of available paths.
 + Remove them from the list.
 +  */
 +  filter_string_list(&dco->paths, 0,
 +  &remove_available_paths, &available_paths);
>>> 
>>> We first remove from the outstanding request list (dco->paths) what
>>> are now ready...
>>> 
 +  for_each_string_list_item(path, &available_paths) {
>>> 
>>> ...and go over those paths that are now ready.
>>> 
 +  struct cache_entry* ce = index_file_exists(
 +  state->istate, path->string,
 +  strlen(path->string), 0);
 +  assert(dco->state == CE_RETRY);
 +  errs |= (ce ? checkout_entry(ce, state, NULL) : 
 1);
 +  }
>>> 
>>> But we never checked if the contents of this available_paths list is
>>> a subset of the set of paths we originally asked the external
>>> process to filter.
>>
>> Correct.
>>
>>>  This will allow the process to overwrite any
>>> random path that is not even involved in the checkout.
>>
>> No, not "any random path". Only paths that are part of the checkout.
>
> Isn't it "any path that index_file_exists() returns a CE for".  Did
> you filter out elements in available_paths that weren't part of
> dco->paths?  I thought the filter-string-list you have are for the
> other way around (which is necessary to keep track of work to be
> done, but that filtering does not help rejecting rogue responses at
> all).

That is, something along the lines of this on top of the series.
When going over the list of delayed paths to see if any of them is
answered, in remove_available_paths(), we mark the util field of the
answered one.  Later when we go over the list of answered ones, we
make sure that it was actually matched with something on dco->paths
before calling checkout_entry().

 entry.c | 20 
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/entry.c b/entry.c
index c6d5cc01dc..f2af67e015 100644
--- a/entry.c
+++ b/entry.c
@@ -150,7 +150,12 @@ void enable_delayed_checkout(struct checkout *state)
 static int remove_available_paths(struct string_list_item *item, void *cb_data)
 {
struct string_list *available_paths = cb_data;
-   return !string_list_has_string(available_paths, item->string);
+   struct string_list_item *available;
+
+   available = string_list_lookup(available_paths, item->string);
+   if (available)
+   available->util = (void *)item->string;
+   return !available;
 }
 
 int finish_delayed_checkout(struct checkout *state)
@@ -192,9 +197,16 @@ int finish_delayed_checkout(struct checkout *state)
&remove_available_paths, &available_paths);
 
for_each_string_list_item(path, &available_paths) {
-   struct cache_entry* ce = index_file_exists(
-   state->istate, path->string,
-   strlen(path->string), 0);
+   struct cache_entry* ce;
+
+   if (!path->util) {
+   error("filter gave '%s' which was unasked for",
+ path->string);
+   errs |= 1;
+   continue;
+   }
+   ce = index_file_exists(state->istate, path->string,
+  strlen(path->string), 0);
assert(dco->state == CE_RETRY);
errs |= (ce ? checkout_entry(ce, state, NULL) : 1);
}


Re: [PATCH v6 6/6] convert: add "status=delayed" to filter process protocol

2017-06-26 Thread Junio C Hamano
Lars Schneider  writes:

> Maybe this?
> [...] If Git sends this command, then the
> filter is expected to return a list of pathnames representing blobs 
> that have been delayed earlier and are now available. [...]

OK.

>>> +by a "success" status that is also terminated with a flush packet. If
>>> +no blobs for the delayed paths are available, yet, then the filter is
>>> +expected to block the response until at least one blob becomes
>>> +available.
>> 
>> Ahh, this is better, at least you use "the delayed paths".
>> 
>> What if the result never gets available (e.g. resulted in an error)?
>
> As soon as the filter responds with an empty list, Git stops asking.
> All blobs that Git has not received at this point are considered an
> error.
>
> Should I mention that explicitly?

Otherwise I wouldn't have wondered "what if".

>> I am wondering whose responsibility it will be to deal with a path
>> "list-available" reports that are *not* asked by Git or Git got no
>> "delayed" response.  The list subtraction done by the caller is
>> probably the logical place to do so.
>
> Correct. Git (the caller) will notice that not all "delayed" paths
> are listed by the filter and throw an error at the end.

I am wondering about the other case.  Git didn't ask for a path to
be filtered at all, but the filter sneaked in a path that happens to be
in the index in its response---Git should at least ignore it, and
better yet, diagnose it as an error in the filter process.

>>> +   filter->string = "";
>>> +   continue;
>>> +   }
>>> +
>>> +   /* In dco->paths we store a list of all delayed paths.
>>> +  The filter just send us a list of available paths.
>>> +  Remove them from the list.
>>> +   */
>>> +   filter_string_list(&dco->paths, 0,
>>> +   &remove_available_paths, &available_paths);
>> 
>> We first remove from the outstanding request list (dco->paths) what
>> are now ready...
>> 
>>> +   for_each_string_list_item(path, &available_paths) {
>> 
>> ...and go over those paths that are now ready.
>> 
>>> +   struct cache_entry* ce = index_file_exists(
>>> +   state->istate, path->string,
>>> +   strlen(path->string), 0);
>>> +   assert(dco->state == CE_RETRY);
>>> +   errs |= (ce ? checkout_entry(ce, state, NULL) : 
>>> 1);
>>> +   }
>> 
>> But we never checked if the contents of this available_paths list is
>> a subset of the set of paths we originally asked the external
>> process to filter.
>
> Correct.
>
>>  This will allow the process to overwrite any
>> random path that is not even involved in the checkout.
>
> No, not "any random path". Only paths that are part of the checkout.

Isn't it "any path that index_file_exists() returns a CE for".  Did
you filter out elements in available_paths that weren't part of
dco->paths?  I thought the filter-string-list you have are for the
other way around (which is necessary to keep track of work to be
done, but that filtering does not help rejecting rogue responses at
all).

> There are three cases:
>
> (1) available_path is a path that was delayed before (= happy case!)
> (2) available_path is a path that was not delayed before, 
> but filtered (= no problem, as filtering is an idempotent operation)
> (3) available_path is a path that was neither delayed nor filtered
> before (= if the filter returns the blob as-is then this would
> be no problem. otherwise we would indeed have a screwed checkout)
>
> Case 3 might introduce a problem if the filter is buggy.  

> Would you be OK with this check to catch case 3?

I'd be very suspicious about anything you would do only with the .nr
field, without filtering the other way around.  After all, we may
have asked it for 3 paths to be filtered, and it may have answered
with its own 3 different paths.

> dco_path_count = dco->paths.nr;
> filter_string_list(&dco->paths, 0,
> &remove_available_paths, &available_paths);
>
> if (dco_path_count - dco->paths.nr != available_paths.nr) {
> /* The filter responded with entries that have not
>  * been delay earlier. Do not ask the filter
>  * for available blobs, again, as the filter is
>  * likely buggy. This will generate an error at 
>  * the end as some files are not filtered properly.
>  */
> filter->string = "";
> error(_("The external filter '%s' responded with "
> "available blobs which have not been delayed "
> "earlier."), filter->string);
> continue;
> }
>
>
>>> +   }
>>> +   string_list_remove_empty_items(&dco->filters, 0);
>>> +   }
>>> +   string_list_clear(&dco->filters, 0);
>>> +
>>> +   /* At thi

Re: [PATCH/RFC] commit-template: improve readability of commit template

2017-06-26 Thread Junio C Hamano
Kaartic Sivaraam  writes:

> * Previously the commit template didn't separate the
>   distinct messages shown. This resulted in difficulty
>   in interpreting it's content. Add new lines to separate
>   the distinct parts of the template.
>
> * Previously the warning about usage of explicit paths
>   without any options wasn't clear. Make it more clear
>   so the user gets what it's trying to say.
>

We don't usually make a bullet list in log messages.  Please stick to
plain prose.

"Previously" is superfluous.  Say what it does (e.g. "The commit
template adds optional parts without extra blank lines to its normal
output") in present tense and explain the ramifications of it
(e.g. "I personally find that this makes it harder to find the
optional bit").

> Signed-off-by: Kaartic Sivaraam 
> ---
>  I've tried to improve the message specified in the commit. Hope
>  it works correctly.
>
>  Local test passed.

Perhaps you would want to ensure that this change (if you find it
valuable) will not get broken by other people in the future by
writing a new test that ensures that these extra blank lines are
always there when you think they are needed?

I personally do not find these new blank lines necessary, and
this change wastes vertical screen real estate which is a limited
resource, but that may be just me.  I on the other hand do not think
the result of this patch is overly worse than the status quo, either.



>  builtin/commit.c | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/builtin/commit.c b/builtin/commit.c
> index 8d1cac062..0a5676b76 100644
> --- a/builtin/commit.c
> +++ b/builtin/commit.c
> @@ -841,9 +841,11 @@ static int prepare_to_commit(const char *index_file, 
> const char *prefix,
> "with '%c' will be kept; you may remove them"
> " yourself if you want to.\n"
> "An empty message aborts the commit.\n"), 
> comment_line_char);
> - if (only_include_assumed)
> + if (only_include_assumed) {
> + status_printf_ln(s, GIT_COLOR_NORMAL, "%s", ""); // Add 
> new line for clarity
>   status_printf_ln(s, GIT_COLOR_NORMAL,
>   "%s", only_include_assumed);
> + }

We do not use // comment in most parts of our codebase that are
supposed to be platform neutral (iow, compat/ is exempt).

But more importantly, wouldn't

if (only_include_assumed)
status_printf_ln(s, GIT_COLOR_NORMAL,
-   "%s", only_include_assumed);
+   "\n%s", only_include_assumed);

be sufficient?

> @@ -877,8 +879,7 @@ static int prepare_to_commit(const char *index_file, 
> const char *prefix,
>   (int)(ci.name_end - ci.name_begin), 
> ci.name_begin,
>   (int)(ci.mail_end - ci.mail_begin), 
> ci.mail_begin);
>  
> - if (ident_shown)
> - status_printf_ln(s, GIT_COLOR_NORMAL, "%s", "");
> + status_printf_ln(s, GIT_COLOR_NORMAL, "%s", ""); // Add new 
> line for clarity

This does ensure that an extra blank line appears after the optional
section (either after the "only/include assumed" message, or "writing
for somebody else" message).

If we were to go with this sparser output, I think we also should
give an extra blank line before and after the "HEAD detached from
cafebabe" message you would see:

$ git checkout HEAD^0
$ git commit --allow-empty -o

or "On branch blah" if you are on a branch.  I think your change
adds a blank before, but it does not have a separation before the
"Changes not staged for commit" line.

>   saved_color_setting = s->use_color;
>   s->use_color = 0;
> @@ -1209,7 +1210,7 @@ static int parse_and_validate_options(int argc, const 
> char *argv[],
>   if (argc == 0 && (also || (only && !amend && !allow_empty)))
>   die(_("No paths with --include/--only does not make sense."));
>   if (argc > 0 && !also && !only)
> - only_include_assumed = _("Explicit paths specified without -i 
> or -o; assuming --only paths...");
> + only_include_assumed = _("Explicit paths (<paths>) specified 
> without -i or -o; assuming --only <paths>");

I think "paths (<paths>)" is excessive.  If you are using <paths> to
hint that they refer to "commit -h" or "commit --help" output, then

Explicit <paths> specified without -i or -o; assuming --only <paths>

should be sufficient.

Having said that, to be quite honest, I think this "assuming --only"
message outlived its usefulness.  This was necessary in very early
days of Git because originally "git commit foo" did "git add foo &&
git commit" (i.e. "-i" was the default) and then later when we made
"--only" the new default in order to match everybody else's SCM, we
needed to remind users of older versions of Git tha



Re: [PATCH v6 6/6] convert: add "status=delayed" to filter process protocol

2017-06-26 Thread Lars Schneider

> On 26 Jun 2017, at 21:02, Junio C Hamano  wrote:
> 
> Lars Schneider  writes:
> 
>> Some `clean` / `smudge` filters might require a significant amount of
>> time to process a single blob (e.g. the Git LFS smudge filter might
>> perform network requests). During this process the Git checkout
>> operation is blocked and Git needs to wait until the filter is done to
>> continue with the checkout.
> 
> Good motivation.  I'd say s/might/may/ to stress the fact that this
> is addressing a real-world problem, i.e. some filters incur network
> latencies.

Agreed.


>> Teach the filter process protocol (introduced in edcc858) to accept the
> 
> When referring to an old commit, we recommend this format
> 
>edcc8581 ("convert: add filter..process option", 2016-10-16)
> 
> (with or without dq around the title) which helps readers by telling
> them how old the thing is and what it was about.

Agreed.


> ...
> 
>> +Delay
>> +^
>> +
>> +If the filter supports the "delay" capability, then Git can send the
>> +flag "can-delay" after the filter command and pathname. This flag
>> +denotes that the filter can delay filtering the current blob (e.g. to
>> +compensate network latencies) by responding with no content but with
>> +the status "delayed" and a flush packet.
>> +
>> +packet:  git> command=smudge
>> +packet:  git> pathname=path/testfile.dat
>> +packet:  git> can-delay=1
>> +packet:  git> 
>> +packet:  git> CONTENT
>> +packet:  git> 
>> +packet:  git< status=delayed
>> +packet:  git< 
>> +
>> +
>> +If the filter supports the "delay" capability then it must support the
>> +"list_available_blobs" command. If Git sends this command, then the
>> +filter is expected to return a list of pathnames of blobs that are
>> +available. The list must be terminated with a flush packet followed
> 
> Is it correct to read the above "pathnames of blobs that are
> availble" as "pathnames, among which Git earlier requested to be
> filtered with "can-delay=1", for which the filtered result is
> ready"?  I do not mean to suggest this awkward wording to replace
> yours, but I found yours a bit too fuzzy for first time readers.
> 
> Perhaps my brain has rotten by hearing the "delayed/lazy transfer of
> large blobs", but it went "Huh?" upon seeing "...are available".

Maybe this?
[...] If Git sends this command, then the
filter is expected to return a list of pathnames representing blobs 
that have been delayed earlier and are now available. [...]


>> +by a "success" status that is also terminated with a flush packet. If
>> +no blobs for the delayed paths are available, yet, then the filter is
>> +expected to block the response until at least one blob becomes
>> +available.
> 
> Ahh, this is better, at least you use "the delayed paths".
> 
> What if the result never gets available (e.g. resulted in an error)?

As soon as the filter responds with an empty list, Git stops asking.
All blobs that Git has not received at this point are considered an
error.

Should I mention that explicitly?
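
If it helps, the intended behaviour could be illustrated with a small
sketch (the helper names below are made up for illustration; this is
not the code in the series):

    /*
     * Sketch only: keep asking the filter for available blobs until it
     * answers with an empty list; whatever is still delayed after that
     * counts as an error.  ask_filter_for_available_blobs() and
     * checkout_available() are hypothetical names.
     */
    while (nr_delayed > 0) {
        int nr_available = ask_filter_for_available_blobs(paths);
        if (!nr_available)
            break;                  /* filter says nothing more will arrive */
        nr_delayed -= checkout_available(paths, nr_available);
    }
    if (nr_delayed > 0)
        errs = 1;                   /* blobs we never received */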


> Or is it considered "we _now_ know the result is an error" and such
> a delayed path that failed to retrieve from LFS store "available" at
> that point?

No. "list_available_blobs" only returns blobs that are immediately
available. Errors are not available.


> It may be too late to raise to a series that went to v6, but this
> smells more like "ready" not "available" to me.

You mean you would call it "list_ready_blobs"? I am no native speaker
but I feel "available" sounds better. I also contemplated "retrievable".

I think I understand your reasoning, though. In the case of GitLFS all these 
blobs are "available". Only the ones that GitLFS has downloaded are 
ready, though. However, other filters might delay blobs for non-network
related reasons and then "available" and "ready" would be the same, no?

I would like to keep "available".

> ...

>> diff --git a/cache.h b/cache.h
>> index ae4c45d379..2ec12c4477 100644
>> --- a/cache.h
>> +++ b/cache.h
>> @@ -1551,8 +1552,11 @@ struct checkout {
>> };
>> #define CHECKOUT_INIT { NULL, "" }
>> 
>> +
>> #define TEMPORARY_FILENAME_LENGTH 25
> 
> You probably do not need that new blank line.

Agreed! I have no idea why/how that got in.


> ...
>> diff --git a/convert.c b/convert.c
>> index e55c034d86..6452ab546a 100644
>> --- a/convert.c
>> +++ b/convert.c
>> @@ -533,7 +534,8 @@ static int start_multi_file_filter_fn(struct 
>> subprocess_entry *subprocess)
>>  if (err)
>>  goto done;
>> 
>> -err = packet_writel(process->in, "capability=clean", 
>> "capability=smudge", NULL);
>> +err = packet_writel(process->in,
>> +"capability=clean", "capability=smudge", "capability=delay", 
>> NULL);
>> 
>>  for (;;) {
>>  cap_buf = packet_read_line(process->out, NULL);
>> @@ -549,6 +551,8 @@ static int start_multi_file_filter_fn(stru

Re: Bug: Cannot kill Nodejs process using ctrl + c

2017-06-26 Thread Gyandeep Singh
Environment:

OS: Windows 7
Git: git version 2.13.1.windows.2
NodeJS: 8.1.2

Steps:

1. Create a js file with content

const http = require('http');
const fs = require('fs');
const port = 3000;
const app = http.createServer((req,res) => {
res.writeHead(200);
res.end("hi");
});

app.listen(port);

its a simple server running on 3000 port.

2. Run command "node ./app.js" inside git bash.
3. hit "CTRL + c" (2 times) to kill the process.
4. If you look at Task Manager, you will see a node.js process
still running, or if you try to restart the server it will say port
3000 is already in use.

Notes:

1. This was working in the first version of Git 2.13; it broke after
that version.


Thanks


Regards,

Gyandeep Singh


Re: [PATCH] pack-bitmap: Don't perform unaligned memory access

2017-06-26 Thread Junio C Hamano
James Clarke  writes:

> The preceding bitmap entries have a 1-byte XOR-offset and 1-byte flags,
> so their size is not a multiple of 4. Thus the name-hash cache is only
> guaranteed to be 2-byte aligned and so we must use get_be32 rather than
> indexing the array directly.
>
> Signed-off-by: James Clarke 
> ---
>
> This was noticed thanks to the recent tests added to t5310-pack-bitmaps.sh,
> which were crashing with SIGBUS on Debian sparc64. All tests (excluding those
> marked with known breakage) now pass again.

Thanks.

>
>  pack-bitmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/pack-bitmap.c b/pack-bitmap.c
> index a3ac3dccd..327634cd7 100644
> --- a/pack-bitmap.c
> +++ b/pack-bitmap.c
> @@ -627,7 +627,7 @@ static void show_objects_for_type(
>   sha1 = nth_packed_object_sha1(bitmap_git.pack, 
> entry->nr);
>
>   if (bitmap_git.hashes)
> - hash = ntohl(bitmap_git.hashes[entry->nr]);
> + hash = get_be32(bitmap_git.hashes + entry->nr);
>
>   show_reach(sha1, object_type, 0, hash, bitmap_git.pack, 
> entry->offset);
>   }
> --
> 2.13.2


Re: [RFC PATCH] proposal for refs/tracking/* hierarchy

2017-06-26 Thread Junio C Hamano
Marc Branchaud  writes:

> OTOH, we shouldn't need to do any conflict resolution on these
> "tracking" refs:  The remote side should update the refs
> properly. Nobody should make local changes to these refs.
>
> I guess I'm advocating that git should only allow "tracking" refs to
> be updated by a fetch, or a successful push of a local, non-"tracking"
> ref.

I do not think anybody is imagining new things to happen inside
refs/tracking other than by fetching, just like refs/remotes/ refs
behave like so.

What I was discussing was mostly the step next to the introduction
of tracking/.  Some things are useful merely by existing (e.g. tags
you got from remote, as long as you can easily refer to them, are
useful to you).  Some other things are cumbersome to use until you
manage to consolidate your views with theirs in order to make
collective progress, and they require merging (e.g. notes).


Re: [PATCH v6 6/6] convert: add "status=delayed" to filter process protocol

2017-06-26 Thread Junio C Hamano
Lars Schneider  writes:

> Some `clean` / `smudge` filters might require a significant amount of
> time to process a single blob (e.g. the Git LFS smudge filter might
> perform network requests). During this process the Git checkout
> operation is blocked and Git needs to wait until the filter is done to
> continue with the checkout.

Good motivation.  I'd say s/might/may/ to stress the fact that this
is addressing a real-world problem, i.e. some filters incur network
latencies.

> Teach the filter process protocol (introduced in edcc858) to accept the

When referring to an old commit, we recommend this format

edcc8581 ("convert: add filter..process option", 2016-10-16)

(with or without dq around the title) which helps readers by telling
them how old the thing is and what it was about.

> @@ -512,12 +512,69 @@ the protocol then Git will stop the filter process and 
> restart it
>  with the next file that needs to be processed. Depending on the
>  `filter.<driver>.required` flag Git will interpret that as error.
>  
> -After the filter has processed a blob it is expected to wait for
> -the next "key=value" list containing a command. Git will close
> +After the filter has processed a command it is expected to wait for
> +a "key=value" list containing the next command. Git will close

Good generalization.

> +Delay
> +^
> +
> +If the filter supports the "delay" capability, then Git can send the
> +flag "can-delay" after the filter command and pathname. This flag
> +denotes that the filter can delay filtering the current blob (e.g. to
> +compensate network latencies) by responding with no content but with
> +the status "delayed" and a flush packet.
> +
> +packet:  git> command=smudge
> +packet:  git> pathname=path/testfile.dat
> +packet:  git> can-delay=1
> +packet:  git> 
> +packet:  git> CONTENT
> +packet:  git> 
> +packet:  git< status=delayed
> +packet:  git< 
> +
> +
> +If the filter supports the "delay" capability then it must support the
> +"list_available_blobs" command. If Git sends this command, then the
> +filter is expected to return a list of pathnames of blobs that are
> +available. The list must be terminated with a flush packet followed

Is it correct to read the above "pathnames of blobs that are
availble" as "pathnames, among which Git earlier requested to be
filtered with "can-delay=1", for which the filtered result is
ready"?  I do not mean to suggest this awkward wording to replace
yours, but I found yours a bit too fuzzy for first time readers.

Perhaps my brain has rotten by hearing the "delayed/lazy transfer of
large blobs", but it went "Huh?" upon seeing "...are available".

> +by a "success" status that is also terminated with a flush packet. If
> +no blobs for the delayed paths are available, yet, then the filter is
> +expected to block the response until at least one blob becomes
> +available.

Ahh, this is better, at least you use "the delayed paths".

What if the result never gets available (e.g. resulted in an error)?
Or is it considered "we _now_ know the result is an error" and such
a delayed path that failed to retrieve from LFS store "available" at
that point?

It may be too late to raise to a series that went to v6, but this
smells more like "ready" not "available" to me.

> diff --git a/builtin/checkout.c b/builtin/checkout.c
> index a6b2af39d3..c1a256df8d 100644
> --- a/builtin/checkout.c
> +++ b/builtin/checkout.c
> @@ -376,6 +376,8 @@ static int checkout_paths(const struct checkout_opts 
> *opts,
>   state.force = 1;
>   state.refresh_cache = 1;
>   state.istate = &the_index;
> +
> + enable_delayed_checkout(&state);
>   for (pos = 0; pos < active_nr; pos++) {
>   struct cache_entry *ce = active_cache[pos];
>   if (ce->ce_flags & CE_MATCHED) {
> @@ -390,6 +392,7 @@ static int checkout_paths(const struct checkout_opts 
> *opts,
>   pos = skip_same_name(ce, pos) - 1;
>   }
>   }
> + errs |= finish_delayed_checkout(&state);

OK.


>   if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
>   die(_("unable to write new index file"));

> diff --git a/cache.h b/cache.h
> index ae4c45d379..2ec12c4477 100644
> --- a/cache.h
> +++ b/cache.h
> @@ -1551,8 +1552,11 @@ struct checkout {
>  };
>  #define CHECKOUT_INIT { NULL, "" }
>  
> +
>  #define TEMPORARY_FILENAME_LENGTH 25

You probably do not need that new blank line.

>  extern int checkout_entry(struct cache_entry *ce, const struct checkout 
> *state, char *topath);
> +extern void enable_delayed_checkout(struct checkout *state);
> +extern int finish_delayed_checkout(struct checkout *state);

OK.

> diff --git a/convert.c b/convert.c
> index e55c034d86..6452ab546a 100644
> --- a/convert.c
> +++ b/convert.c
> @@ -533,7 +534,8 @@ static int start_multi_file_filter_fn(struct 
> subprocess_entry *subproce

Re: [PATCH v6 3/6] t0021: write "OUT" only on success

2017-06-26 Thread Lars Schneider

> On 26 Jun 2017, at 19:50, Junio C Hamano  wrote:
> 
> Lars Schneider  writes:
> 
>>> On 26 Jun 2017, at 00:17, Junio C Hamano  wrote:
>>> 
>>> Lars Schneider  writes:
>>> 
 "rot13-filter.pl" used to write "OUT " to the debug log even in case 
 of
 an abort or error. Fix this by writing "OUT " to the debug log only 
 in
 the successful case if output is actually written.
>>> 
>>> Again, use of "Fix this" without clarifying what the problem is.  Is
>>> this change needed because the size may not be known when the new
>>> filter protocol is in use, or something?
>> 
>> How about this?
>> 
>>    "rot13-filter.pl" always writes "OUT <size>" to the debug log at the end
>>of an interaction.
>> 
>>This works without issues for the existing cases "abort", "error", and 
>>"success". In a subsequent patch 'convert: add "status=delayed" to 
>>filter process protocol' we will add a new case "delayed". In that case 
>>we do not send the data right away and it would be wrong/misleading to
>>    the reader if we would write "OUT <size>" to the debug log.
> 
> I still do not quite get "we do not send the data right away" as
> opposed to "at the end of an interaction" makes it wrong/misleading
> to write "OUT <size>"?
> 
>A new response "delayed" that will be introduced in subsequent
>patches accepts the input, without giving the filtered result
>right away, and at that moment, we do not even have the output
>size yet.
> 
> might be a very reasonable rationale for omitting "OUT: " in
> the output when "delayed" request is seen, but that still does not
> explain why it is sensible to drop "OUT: " for error or abort
> case.

Well, "rot13-filter.pl" *does* have the output available right away
even in the delayed case (because of the mock implementation). 
If we print "OUT: " all the time then this would lead to
misleading log messages in this situation.

It's not necessary to drop "OUT: " for abort and error. It
was just the way that required the least number of lines of code.


> Having said all that, I tend to agree with the actual change.  When
> we abort or error, we aren't producing any useful output, so it may
> be sensible _not_ to say "OUT: 0" in the log output from the test
> helper in these cases.  And if the change is explained that way,
> readers would understand why this step is a good thing to have, with
> or without the future "delayed" step in the series.

How about this?

"rot13-filter.pl" always writes "OUT " to the debug log at the end
of a response.

This works without issues for the existing responses "abort", "error", 
and "success". A new response "delayed", that will be introduced in 
subsequent patches, accepts the input without giving the filtered result
right away. Since we actually have the data already available in our mock
filter the debug log output would be wrong/misleading. Therefore, we
do not write "OUT " for "delayed" responses.

To simplify the code we do not write "OUT <size>" for "abort" and 
"error" responses, too, as their size is always zero.

- Lars

Re: Compile Error v2.13.2 on Solaris SPARC

2017-06-26 Thread Ævar Arnfjörð Bjarmason

On Mon, Jun 26 2017, Michael Kebe jotted:

> Still no luck, with one or both patches.

Could you please attach (or pastebin or whatever) your copy of
/usr/include/sys/isa_defs.h? And what Solaris/Illumos/Whatever version
is this?

Maybe this patch works for you:

diff --git a/sha1dc/sha1.c b/sha1dc/sha1.c
index facea1bb56..4f747c3aea 100644
--- a/sha1dc/sha1.c
+++ b/sha1dc/sha1.c
@@ -36,17 +36,19 @@
 #undef SHA1DC_BIGENDIAN
 #endif
 
-#if (defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || defined(__BYTE_ORDER__))
+#if (defined(BYTE_ORDER) || defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || \
+ defined(__BYTE_ORDER__))
 
-#if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
- (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
- (defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __BIG_ENDIAN__)) )
+#if ((defined(BYTE_ORDER) && defined(BIG_ENDIAN) && (BYTE_ORDER == BIG_ENDIAN)) || \
+ (defined(_BYTE_ORDER) && defined(_BIG_ENDIAN) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
+ (defined(__BYTE_ORDER) && defined(__BIG_ENDIAN) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
+ (defined(__BYTE_ORDER__) && defined(__BIG_ENDIAN__) && (__BYTE_ORDER__ == __BIG_ENDIAN__)) )
 #define SHA1DC_BIGENDIAN
 #endif
 
 #else
 
-#if (defined(_BIG_ENDIAN) || defined(__BIG_ENDIAN) || defined(__BIG_ENDIAN__) || \
+#if (defined(BIG_ENDIAN) || defined(_BIG_ENDIAN) || defined(__BIG_ENDIAN) || defined(__BIG_ENDIAN__) || \
  defined(__ARMEB__) || defined(__THUMBEB__) || defined(__AARCH64EB__) || \
  defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || \
  defined(__sparc))


Make sure to run the test suite after that, because it may compile but
not diagnose you as Big Endian if it doesn't work, thus failing horribly
on runtime.
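
Independent of which preprocessor branch fires, a tiny standalone probe
(hypothetical file, not part of the patch) can confirm the runtime byte
order on the box, which is exactly what a misdetection would break:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint32_t probe = 0x01020304;
        const unsigned char *p = (const unsigned char *)&probe;

        /* On big-endian hardware (e.g. SPARC) the first byte is 0x01. */
        printf("runtime byte order: %s-endian\n",
               p[0] == 0x01 ? "big" : "little");
        return 0;
    }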

> Greetings
> Michael
>
> 2017-06-26 14:47 GMT+02:00 Ævar Arnfjörð Bjarmason :
>>
>> On Mon, Jun 26 2017, Michael Kebe jotted:
>>
>>> No luck with the patch.
>>>
>>> Still got:
>>>
>>> CC sha1dc/sha1.o
>>> sha1dc/sha1.c:43:58: error: operator '==' has no right operand
>>>   (defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
>>>   ^
>>> gmake: *** [sha1dc/sha1.o] Error 1
>>
>> Does this patch change anything, with or without the previous patch:
>>
>> diff --git a/git-compat-util.h b/git-compat-util.h
>> index 047172d173..1327aea229 100644
>> --- a/git-compat-util.h
>> +++ b/git-compat-util.h
>> @@ -131,6 +131,14 @@
>>  # else
>>  # define _XOPEN_SOURCE 500
>>  # endif
>> +
>> +/*
>> + * Bring in macros defining _BIG_ENDIAN etc. Should be brought in by
>> + * the likes of stdio.h, but include it here in case it hasn't been
>> + * included already.
>> + */
>> +#include 
>> +
>>  #elif !defined(__APPLE__) && !defined(__FreeBSD__) && !defined(__USLC__) && 
>> \
>>!defined(_M_UNIX) && !defined(__sgi) && !defined(__DragonFly__) && \
>>!defined(__TANDEM) && !defined(__QNX__) && !defined(__MirBSD__) && \
>>
>>>
>>> Greetings
>>> Michael
>>>
>>> 2017-06-26 10:42 GMT+02:00 Ævar Arnfjörð Bjarmason :

 On Mon, Jun 26 2017, Michael Kebe jotted:

> When compiling 2.13.2 on Solaris SPARC I get this error:
>
> CC sha1dc/sha1.o
> sha1dc/sha1.c:41:58: error: operator '==' has no right operand
>  #if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
>   ^
> gmake: *** [sha1dc/sha1.o] Error 1
>
> The define _BIG_ENDIAN is set by Solaris on SPARC systems. So the
> check in line 41 gives this error.
>
> The _BIG_ENDIAN define is used few line below for defining
> SHA1DC_BIGENDIAN. This is needed for Solaris SPARC systems.
> See
> https://github.com/cr-marcstevens/sha1collisiondetection/commit/33a694a9ee1b79c24be45f9eab5ac0e1aeeaf271

 I can see why this would error out. In sys/isa_defs.h on SPARC there's
 just `#define _BIG_ENDIAN`
 (http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/sys/isa_defs.h),
 and (on Linux):

 $ cat /tmp/test.c
 #define _FOO
 #define _BAR 1
 #if (_BAR == _FOO)
 #endif
 $ gcc -E /tmp/test.c
 # 1 "/tmp/test.c"
 # 1 ""
 # 1 ""
 # 31 ""
 # 1 "/usr/include/stdc-predef.h" 1 3 4
 # 32 "" 2
 # 1 "/tmp/test.c"
 /tmp/test.c:3:18: error: operator '==' has no right operand
  #if (_BAR == _FOO)

 What I don't get is how this would have worked for Liam then (see
 20170613020939.gemh3m5z6czgw...@oracle.com). Differences in Solaris
 versions and how their headers look like?

 Does this patch fix it for you?

 diff --git a/sha1dc/sha1.c b/sha1dc/sha1.c
 index facea1bb56..0b75b31b67 100644
 --- a/sha1dc/sha1.c
 +++ b/sha1dc/sha1.c
 @@ -36,9 +36,11 @@
  #undef SHA1DC_BIGENDIAN
  #endif

 -#if (defined(_BYTE_ORDER) || def

Re: Compile Error v2.13.2 on Solaris SPARC

2017-06-26 Thread Liam R. Howlett
* Ævar Arnfjörð Bjarmason  [170626 08:47]:
> 
> On Mon, Jun 26 2017, Michael Kebe jotted:
> 
> > No luck with the patch.
> >
> > Still got:
> >
> > CC sha1dc/sha1.o
> > sha1dc/sha1.c:43:58: error: operator '==' has no right operand
> >   (defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
> >   ^
> > gmake: *** [sha1dc/sha1.o] Error 1
> 
> Does this patch change anything, with or without the previous patch:
> 
> diff --git a/git-compat-util.h b/git-compat-util.h
> index 047172d173..1327aea229 100644
> --- a/git-compat-util.h
> +++ b/git-compat-util.h
> @@ -131,6 +131,14 @@
>  # else
>  # define _XOPEN_SOURCE 500
>  # endif
> +
> +/*
> + * Bring in macros defining _BIG_ENDIAN etc. Should be brought in by
> + * the likes of stdio.h, but include it here in case it hasn't been
> + * included already.
> + */
> +#include 
> +
>  #elif !defined(__APPLE__) && !defined(__FreeBSD__) && !defined(__USLC__) && \
>!defined(_M_UNIX) && !defined(__sgi) && !defined(__DragonFly__) && \
>!defined(__TANDEM) && !defined(__QNX__) && !defined(__MirBSD__) && \
> 

This addition still fails on Solaris for me.  It appears that _BIG_ENDIAN is
defined but with no value on this platform.


> >
> > Greetings
> > Michael
> >
> > 2017-06-26 10:42 GMT+02:00 Ævar Arnfjörð Bjarmason :
> >>
> >> On Mon, Jun 26 2017, Michael Kebe jotted:
> >>
> >>> When compiling 2.13.2 on Solaris SPARC I get this error:
> >>>
> >>> CC sha1dc/sha1.o
> >>> sha1dc/sha1.c:41:58: error: operator '==' has no right operand
> >>>  #if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
> >>>   ^
> >>> gmake: *** [sha1dc/sha1.o] Error 1
> >>>
> >>> The define _BIG_ENDIAN is set by Solaris on SPARC systems. So the
> >>> check in line 41 gives this error.
> >>>
> >>> The _BIG_ENDIAN define is used few line below for defining
> >>> SHA1DC_BIGENDIAN. This is needed for Solaris SPARC systems.
> >>> See
> >>> https://github.com/cr-marcstevens/sha1collisiondetection/commit/33a694a9ee1b79c24be45f9eab5ac0e1aeeaf271
> >>
> >> I can see why this would error out. In sys/isa_defs.h on SPARC there's
> >> just `#define _BIG_ENDIAN`
> >> (http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/sys/isa_defs.h),
> >> and (on Linux):
> >>
> >> $ cat /tmp/test.c
> >> #define _FOO
> >> #define _BAR 1
> >> #if (_BAR == _FOO)
> >> #endif
> >> $ gcc -E /tmp/test.c
> >> # 1 "/tmp/test.c"
> >> # 1 ""
> >> # 1 ""
> >> # 31 ""
> >> # 1 "/usr/include/stdc-predef.h" 1 3 4
> >> # 32 "" 2
> >> # 1 "/tmp/test.c"
> >> /tmp/test.c:3:18: error: operator '==' has no right operand
> >>  #if (_BAR == _FOO)
> >>
> >> What I don't get is how this would have worked for Liam then (see
> >> 20170613020939.gemh3m5z6czgw...@oracle.com). Differences in Solaris
> >> versions and how their headers look like?

I am running Linux and 2.13.2 compiles and works fine for me on SPARC.


If you want to keep the compact layout you have in the #if defined()
portion, you can get away with reversing the logic as follows:

- >8 -
diff --git a/sha1dc/sha1.c b/sha1dc/sha1.c
index facea1bb5..808b520cd 100644
--- a/sha1dc/sha1.c
+++ b/sha1dc/sha1.c
@@ -36,20 +36,20 @@
 #undef SHA1DC_BIGENDIAN
 #endif

-#if (defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || defined(__BYTE_ORDER__))
+#if !(defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || defined(__BYTE_ORDER__))

-#if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
- (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
- (defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __BIG_ENDIAN__)) )
+#if (defined(_BIG_ENDIAN) || defined(__BIG_ENDIAN) || defined(__BIG_ENDIAN__) || \
+ defined(__ARMEB__) || defined(__THUMBEB__) || defined(__AARCH64EB__) || \
+ defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || \
+ defined(__sparc))
 #define SHA1DC_BIGENDIAN
 #endif
 
 #else
+#if ((defined(_BYTE_ORDER) && defined(_BIG_ENDIAN)) || \
+ (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
+ (defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __BIG_ENDIAN__)) )

-#if (defined(_BIG_ENDIAN) || defined(__BIG_ENDIAN) || defined(__BIG_ENDIAN__) || \
- defined(__ARMEB__) || defined(__THUMBEB__) || defined(__AARCH64EB__) || \
- defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || \
- defined(__sparc))
 #define SHA1DC_BIGENDIAN
 #endif



Re: [PATCH] submodule--helper: teach push-check to handle HEAD

2017-06-26 Thread Brandon Williams
On 06/23, Junio C Hamano wrote:
> Brandon Williams  writes:
> 
> > In 06bf4ad1d (push: propagate remote and refspec with
> > --recurse-submodules) push was taught how to propagate a refspec down to
> > submodules when the '--recurse-submodules' flag is given.  The only refspecs
> > that are allowed to be propagated are ones which name a ref which exists
> > in both the superproject and the submodule, with the caveat that 'HEAD'
> > was disallowed.
> >
> > This patch teaches push-check (the submodule helper which determines if
> > a refspec can be propagated to a submodule) to permit propagating 'HEAD'
> > if and only if the superproject and the submodule both have the same
> > named branch checked out and the submodule is not in a detached head
> > state.
> >
> > Signed-off-by: Brandon Williams 
> > ---
> >  builtin/submodule--helper.c| 57 
> > +++---
> >  submodule.c| 18 ++---
> >  t/t5531-deep-submodule-push.sh | 25 +-
> >  3 files changed, 82 insertions(+), 18 deletions(-)
> >
> > diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
> > index 1b4d2b346..fd5020036 100644
> > --- a/builtin/submodule--helper.c
> > +++ b/builtin/submodule--helper.c
> > @@ -1107,24 +1107,41 @@ static int resolve_remote_submodule_branch(int 
> > argc, const char **argv,
> >  static int push_check(int argc, const char **argv, const char *prefix)
> >  {
> > struct remote *remote;
> > +   const char *superproject_head;
> > +   char *head;
> > +   int detached_head = 0;
> > +   struct object_id head_oid;
> >  
> > -   if (argc < 2)
> > -   die("submodule--helper push-check requires at least 1 
> > argument");
> > +   if (argc < 3)
> > +   die("submodule--helper push-check requires at least 2 
> > argument");
> 
> "arguments"?
 
You're right, I'll fix the typo.

> > +
> > +   /*
> > +* superproject's resolved head ref.
> > +* if HEAD then the superproject is in a detached head state, otherwise
> > +* it will be the resolved head ref.
> > +*/
> > +   superproject_head = argv[1];
> 
> The above makes it sound like the caller gives either "HEAD" (when
> detached) or "refs/heads/branch" (when on 'branch') in argv[1] and
> you are stashing it away, but ...
> 
> > +   /* Get the submodule's head ref and determine if it is detached */
> > +   head = resolve_refdup("HEAD", 0, head_oid.hash, NULL);
> > +   if (!head)
> > +   die(_("Failed to resolve HEAD as a valid ref."));
> > +   if (!strcmp(head, "HEAD"))
> > +   detached_head = 1;
> 
> ... the work to see which branch we are on and if we are detached is
> done by this code without consulting argv[1].  I cannot tell what is
> going on.  Is argv[1] assigned to superproject_head a red herring?

The idea is that 'git submodule--helper push-check' is called by a
superproject on every submodule that may be pushed.  So this command is
invoked on the submodule itself.  This change requires knowing what
'HEAD' refers to in the superproject (either detached or a named branch)
so the superproject passes that information to the submodule via
argv[1].  This snippet of code is responsible for finding what 'HEAD'
refers to in the submodule so that later we can compare the
superproject's and submodule's 'HEAD' ref to see if they match the same
named branch.
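In other words the check boils down to something like this (sketch
only, with a hypothetical helper name; the actual push_check() code is
structured differently):

    /*
     * "HEAD" may only be propagated when the submodule is not detached
     * and both sides have the same named branch checked out.  Both
     * arguments are assumed to be fully resolved refs such as
     * "refs/heads/master", or the literal string "HEAD" when detached.
     */
    static int heads_match(const char *superproject_head,
                           const char *submodule_head)
    {
        if (!strcmp(submodule_head, "HEAD"))
            return 0;       /* submodule is detached: refuse */
        if (!strcmp(superproject_head, "HEAD"))
            return 0;       /* superproject is detached: refuse */
        return !strcmp(superproject_head, submodule_head);
    }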

> 
> > /*
> >  * The remote must be configured.
> >  * This is to avoid pushing to the exact same URL as the parent.
> >  */
> > -   remote = pushremote_get(argv[1]);
> > +   remote = pushremote_get(argv[2]);
> > if (!remote || remote->origin == REMOTE_UNCONFIGURED)
> > -   die("remote '%s' not configured", argv[1]);
> > +   die("remote '%s' not configured", argv[2]);
> >  
> > /* Check the refspec */
> > -   if (argc > 2) {
> > -   int i, refspec_nr = argc - 2;
> > +   if (argc > 3) {
> > +   int i, refspec_nr = argc - 3;
> > struct ref *local_refs = get_local_heads();
> > struct refspec *refspec = parse_push_refspec(refspec_nr,
> > -argv + 2);
> > +argv + 3);
> 
> If you have no need for argv[1] (and you don't, as you have stashed
> it away in superproject_head), it may be less damage to the code if
> you shifted argv upfront after grabbing superproject_head.
> 
> > for (i = 0; i < refspec_nr; i++) {
> > struct refspec *rs = refspec + i;
> > @@ -1132,18 +1149,30 @@ static int push_check(int argc, const char **argv, 
> > const char *prefix)
> > if (rs->pattern || rs->matching)
> > continue;
> >  
> > -   /*
> > -* LHS must match a single ref
> > -* NEEDSWORK: add logic to special case 'HEAD' once
> > -* working with submodules in a detached head state
> 

Re: [RFC PATCH] proposal for refs/tracking/* hierarchy

2017-06-26 Thread Marc Branchaud

On 2017-06-23 04:54 PM, Junio C Hamano wrote:

Jacob Keller  writes:


Instead, lets add support for a new refs/tracking/* hierarchy which is
laid out in such a way to avoid this inconsistency. All refs in
"refs/tracking//*" will include the complete ref, such that
dropping the "tracking/" part will give the exact ref name as it
is found in the upstream. Thus, we can track any ref here by simply
fetching it into refs/tracking//*.


I do think this overall is a good goal to aim for wrt "tracking",
i.e.  keeping a pristine copy of what we observed from the outside
world.  In addition to what you listed as "undecided" below,
however, I expect that this will highlight a lot of trouble in
"working together", i.e. reconciling the differences of various
world views and moving the project forward, in two orthogonal axes:

  - Things pointed at by some refs (e.g. notes/, but I think the
".gitmodules equivalent that is not tied to any particular commit
in the superproject" Jonathan Nieder and friends advocate falls
into the same category) do not correspond to the working tree
contents, and merging their contents will always pose challenge
when we need to prepare for conflict resolution.


OTOH, we shouldn't need to do any conflict resolution on these 
"tracking" refs:  The remote side should update the refs properly. 
Nobody should make local changes to these refs.


I guess I'm advocating that git should only allow "tracking" refs to be 
updated by a fetch, or a successful push of a local, non-"tracking" ref.


M.



  - Things pointed at by some other refs (e.g. tags/) do not make
sense to be merged.  You may locally call a commit with a tag
"wip", while your friends may use the same "wip" tag to point at
a different one.  Both are valid, and more importantly, there is
no point even trying to reconcile what the "wip" tag means in the
project globally.

For the former, I expect that merging these non-working tree
contents will all have to require some specific tool that is
tailored for the meaning each hierarchy has, just like "git notes
merge" gives a solution that is very specific to the notes refs (I
do not know how well "notes merge" works in practice, though).

For the latter, having a separate tracking hierarchy is a strict
improvement from "tracking" point of view, but I think "working
together" also needs a well designed set of new look-up rules, and a
new convention.  For example, some tags may not want to be shared
(e.g. "wip" example above) even though they should be easy to access
by those who already have them (e.g. "git log ..wip" should work
without having to say "git log ..refs/local-tags/wip", even if we
decide to implement the "some tags may not want to be shared"
segregation by introducing a new hierarchy).  And also a shared tag
that is copied to "refs/tracking/origin/tags/v1.0" would need a way
better than "git log tracking/origin/tags/v1.0" for this mechanism
to be useful in everyday workflow.

Thanks.



Re: [PATCH v6 5/6] convert: move multiple file filter error handling to separate function

2017-06-26 Thread Junio C Hamano
Lars Schneider  writes:

> Refactoring the filter error handling is useful for the subsequent patch
> 'convert: add "status=delayed" to filter process protocol'.
>
> In addition, replace the parentheses around the empty "if" block with a
> single semicolon to adhere to the Git style guide.
>
> Signed-off-by: Lars Schneider 
> ---
>  convert.c | 47 ++-
>  1 file changed, 26 insertions(+), 21 deletions(-)

The patch looks obviously correct.  Thanks.

> diff --git a/convert.c b/convert.c
> index 9907e3b9ba..e55c034d86 100644
> --- a/convert.c
> +++ b/convert.c
> @@ -565,6 +565,29 @@ static int start_multi_file_filter_fn(struct 
> subprocess_entry *subprocess)
>   return err;
>  }
>  
> +static void handle_filter_error(const struct strbuf *filter_status,
> + struct cmd2process *entry,
> + const unsigned int wanted_capability) {
> + if (!strcmp(filter_status->buf, "error"))
> + ; /* The filter signaled a problem with the file. */
> + else if (!strcmp(filter_status->buf, "abort") && wanted_capability) {
> + /*
> +  * The filter signaled a permanent problem. Don't try to filter
> +  * files with the same command for the lifetime of the current
> +  * Git process.
> +  */
> +  entry->supported_capabilities &= ~wanted_capability;
> + } else {
> + /*
> +  * Something went wrong with the protocol filter.
> +  * Force shutdown and restart if another blob requires 
> filtering.
> +  */
> + error("external filter '%s' failed", entry->subprocess.cmd);
> + subprocess_stop(&subprocess_map, &entry->subprocess);
> + free(entry);
> + }
> +}
> +
>  static int apply_multi_file_filter(const char *path, const char *src, size_t 
> len,
>  int fd, struct strbuf *dst, const char *cmd,
>  const unsigned int wanted_capability)
> @@ -656,28 +679,10 @@ static int apply_multi_file_filter(const char *path, 
> const char *src, size_t len
>  done:
>   sigchain_pop(SIGPIPE);
>  
> - if (err) {
> - if (!strcmp(filter_status.buf, "error")) {
> - /* The filter signaled a problem with the file. */
> - } else if (!strcmp(filter_status.buf, "abort")) {
> - /*
> -  * The filter signaled a permanent problem. Don't try 
> to filter
> -  * files with the same command for the lifetime of the 
> current
> -  * Git process.
> -  */
> -  entry->supported_capabilities &= ~wanted_capability;
> - } else {
> - /*
> -  * Something went wrong with the protocol filter.
> -  * Force shutdown and restart if another blob requires 
> filtering.
> -  */
> - error("external filter '%s' failed", cmd);
> - subprocess_stop(&subprocess_map, &entry->subprocess);
> - free(entry);
> - }
> - } else {
> + if (err)
> + handle_filter_error(&filter_status, entry, wanted_capability);
> + else
>   strbuf_swap(dst, &nbuf);
> - }
>   strbuf_release(&nbuf);
>   return !err;
>  }


Re: [PATCH v6 5/6] convert: move multiple file filter error handling to separate function

2017-06-26 Thread Junio C Hamano
Not about this patch, but viewing this with

git show -w --color-moved=zebra

gives an interesting result.  The bulk of the part moved are
re-indented, and the comment string gets zebra stripes, as if the
line movement detection code does not realize that these came from
the same place.



Re: [PATCH v6 3/6] t0021: write "OUT" only on success

2017-06-26 Thread Junio C Hamano
Lars Schneider  writes:

>> On 26 Jun 2017, at 00:17, Junio C Hamano  wrote:
>> 
>> Lars Schneider  writes:
>> 
>>> "rot13-filter.pl" used to write "OUT " to the debug log even in case 
>>> of
>>> an abort or error. Fix this by writing "OUT " to the debug log only in
>>> the successful case if output is actually written.
>> 
>> Again, use of "Fix this" without clarifying what the problem is.  Is
>> this change needed because the size may not be known when the new
>> filter protocol is in use, or something?
>
> How about this?
>
> "rot13-filter.pl" always writes "OUT " to the debug log at the end
> of an interaction.
>
> This works without issues for the existing cases "abort", "error", and 
> "success". In a subsequent patch 'convert: add "status=delayed" to 
> filter process protocol' we will add a new case "delayed". In that case 
> we do not send the data right away and it would be wrong/misleading to
> the reader if we would write "OUT <size>" to the debug log.

I still do not quite get "we do not send the data right away" as
opposed to "at the end of an interaction" makes it wrong/misleading
to write "OUT "?

A new response "delayed" that will be introduced in subsequent
patches accepts the input, without giving the filtered result
right away, and at that moment, we do not even have the output
size yet.

might be a very reasonable rationale for omitting "OUT: " in
the output when "delayed" request is seen, but that still does not
explain why it is sensible to drop "OUT: " for error or abort
case.

Having said all that, I tend to agree with the actual change.  When
we abort or error, we aren't producing any useful output, so it may
be sensible _not_ to say "OUT: 0" in the log output from the test
helper in these cases.  And if the change is explained that way,
readers would understand why this step is a good thing to have, with
or without the future "delayed" step in the series.

Thanks.


Re: [PATCH v4 6/8] sha1_file: improve sha1_object_info_extended

2017-06-26 Thread Jonathan Tan
On Mon, Jun 26, 2017 at 10:28 AM, Junio C Hamano  wrote:
> Jonathan Tan  writes:
>
>> On Sat, Jun 24, 2017 at 5:45 AM, Jeff King  wrote:
>>> On Mon, Jun 19, 2017 at 06:03:13PM -0700, Jonathan Tan wrote:
>>>
 Subject: [PATCH v4 6/8] sha1_file: improve sha1_object_info_extended
 Improve sha1_object_info_extended() by supporting additional flags. This
 allows has_sha1_file_with_flags() to be modified to use
 sha1_object_info_extended() in a subsequent patch.
>>>
>>> A minor nit, but try to avoid vague words like "improve" in your subject
>>> lines. Something like:
>>>
>>>   sha1_file: teach sha1_object_info_extended more flags
>>>
>>> That's not too specific either, but I think in --oneline output it gives
>>> you a much better clue about what part of the function it touches.
>>>
 ---
  cache.h |  4 
  sha1_file.c | 43 ---
  2 files changed, 28 insertions(+), 19 deletions(-)
>>>
>>> The patch itself looks good.
>>
>> Thanks. I did try, but all my attempts exceeded 50 characters. Maybe
>> "sha1_file: support more flags in object info query" is good enough.
>
> Between the two, I personally find that Peff's is more descriptive,
> so unless there are other changes planned, let me "rebase -i" to
> retitle the commit.
>
> Thanks.

His suggestion does exceed 50 characters, but I see that that's a soft
limit. Either title is fine with me, thanks.


Re: [PATCH v6 1/6] t0021: keep filter log files on comparison

2017-06-26 Thread Junio C Hamano
Lars Schneider  writes:

>> It would become a problem _if_ we want future users of this helper
>> to reuse the same expect (or actual) multiple times and start from
>> an unmodified one.  There may be some other reason why you do not
>> want the comparison to smudge these files.  Please state what that
>> reason is before saying "fix this".
>
> Understood. How about this?
>
> The filter log files are modified on comparison. That might be 
> unexpected by the caller. It would be even undesirable if the caller 
> wants to reuse the original log files.
>
> Address these issues by using temp files for modifications. This is 
> useful for the subsequent patch 'convert: add "status=delayed" to 
> filter process protocol'.

The updated one is much more understandable.  Thanks.

> If this is OK, then do you want me to resend the series or can you fix it
> in place?

In general, I am OK running "rebase -i" to polish the log message
unless there are other changes to the patches planned.



Re: [PATCH v4 6/8] sha1_file: improve sha1_object_info_extended

2017-06-26 Thread Junio C Hamano
Jonathan Tan  writes:

> On Sat, Jun 24, 2017 at 5:45 AM, Jeff King  wrote:
>> On Mon, Jun 19, 2017 at 06:03:13PM -0700, Jonathan Tan wrote:
>>
>>> Subject: [PATCH v4 6/8] sha1_file: improve sha1_object_info_extended
>>> Improve sha1_object_info_extended() by supporting additional flags. This
>>> allows has_sha1_file_with_flags() to be modified to use
>>> sha1_object_info_extended() in a subsequent patch.
>>
>> A minor nit, but try to avoid vague words like "improve" in your subject
>> lines. Something like:
>>
>>   sha1_file: teach sha1_object_info_extended more flags
>>
>> That's not too specific either, but I think in --oneline output it gives
>> you a much better clue about what part of the function it touches.
>>
>>> ---
>>>  cache.h |  4 
>>>  sha1_file.c | 43 ---
>>>  2 files changed, 28 insertions(+), 19 deletions(-)
>>
>> The patch itself looks good.
>
> Thanks. I did try, but all my attempts exceeded 50 characters. Maybe
> "sha1_file: support more flags in object info query" is good enough.

Between the two, I personally find that Peff's is more descriptive,
so unless there are other changes planned, let me "rebase -i" to
retitle the commit.

Thanks.


Re: [PATCH v4 6/8] sha1_file: improve sha1_object_info_extended

2017-06-26 Thread Junio C Hamano
Jeff King  writes:

> On Mon, Jun 19, 2017 at 06:03:13PM -0700, Jonathan Tan wrote:
>
>> Subject: [PATCH v4 6/8] sha1_file: improve sha1_object_info_extended
>> Improve sha1_object_info_extended() by supporting additional flags. This
>> allows has_sha1_file_with_flags() to be modified to use
>> sha1_object_info_extended() in a subsequent patch.
>
> A minor nit, but try to avoid vague words like "improve" in your subject
> lines. Something like:
>
>   sha1_file: teach sha1_object_info_extended more flags
>
> That's not too specific either, but I think in --oneline output it gives
> you a much better clue about what part of the function it touches.

Yeah, thanks for paying attention to the --oneline output.  Mention
of the exact function name tells that it is about a more options on
information gathering, which is a better title.

>> ---
>>  cache.h |  4 
>>  sha1_file.c | 43 ---
>>  2 files changed, 28 insertions(+), 19 deletions(-)
>
> The patch itself looks good.

Yeah, I agree.


[PATCH/RFC] commit-template: improve readability of commit template

2017-06-26 Thread Kaartic Sivaraam
* Previously the commit template didn't separate the
  distinct messages shown. This resulted in difficulty
  in interpreting its content. Add new lines to separate
  the distinct parts of the template.

* Previously the warning about usage of explicit paths
  without any options wasn't clear. Make it more clear
  so the user gets what it's trying to say.

Signed-off-by: Kaartic Sivaraam 
---
 I've tried to improve the message specified in the commit. Hope
 it works correctly.

 Local test passed.

 builtin/commit.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/builtin/commit.c b/builtin/commit.c
index 8d1cac062..0a5676b76 100644
--- a/builtin/commit.c
+++ b/builtin/commit.c
@@ -841,9 +841,11 @@ static int prepare_to_commit(const char *index_file, const char *prefix,
  "with '%c' will be kept; you may remove them"
  " yourself if you want to.\n"
  "An empty message aborts the commit.\n"), comment_line_char);
-   if (only_include_assumed)
+   if (only_include_assumed) {
+   status_printf_ln(s, GIT_COLOR_NORMAL, "%s", ""); // Add new line for clarity
status_printf_ln(s, GIT_COLOR_NORMAL,
"%s", only_include_assumed);
+   }
 
/*
 * These should never fail because they come from our own
@@ -877,8 +879,7 @@ static int prepare_to_commit(const char *index_file, const char *prefix,
(int)(ci.name_end - ci.name_begin), ci.name_begin,
(int)(ci.mail_end - ci.mail_begin), ci.mail_begin);
 
-   if (ident_shown)
-   status_printf_ln(s, GIT_COLOR_NORMAL, "%s", "");
+   status_printf_ln(s, GIT_COLOR_NORMAL, "%s", ""); // Add new line for clarity
 
saved_color_setting = s->use_color;
s->use_color = 0;
@@ -1209,7 +1210,7 @@ static int parse_and_validate_options(int argc, const char *argv[],
if (argc == 0 && (also || (only && !amend && !allow_empty)))
die(_("No paths with --include/--only does not make sense."));
if (argc > 0 && !also && !only)
-   only_include_assumed = _("Explicit paths specified without -i or -o; assuming --only paths...");
+   only_include_assumed = _("Explicit paths (<paths>) specified without -i or -o; assuming --only <paths>");
if (!cleanup_arg || !strcmp(cleanup_arg, "default"))
cleanup_mode = use_editor ? CLEANUP_ALL : CLEANUP_SPACE;
else if (!strcmp(cleanup_arg, "verbatim"))
-- 
2.11.0



Re: What's cooking in git.git (Jun 2017, #07; Sat, 24)

2017-06-26 Thread Jonathan Tan
On Sat, 24 Jun 2017 16:25:13 -0700
Junio C Hamano  wrote:

> * jt/unify-object-info (2017-06-21) 8 commits
>  - sha1_file: refactor has_sha1_file_with_flags
>  - sha1_file: do not access pack if unneeded
>  - sha1_file: improve sha1_object_info_extended
>  - sha1_file: refactor read_object
>  - sha1_file: move delta base cache code up
>  - sha1_file: rename LOOKUP_REPLACE_OBJECT
>  - sha1_file: rename LOOKUP_UNKNOWN_OBJECT
>  - sha1_file: teach packed_object_info about typename
> 
>  Code clean-ups.
> 
>  Looked sensible to me.  Any further comments?
>  cf. <20170624124522.p2dnwdah75e4n...@sigill.intra.peff.net>

In a reply that I just sent out [1], I suggested "sha1_file: support
more flags in object info query" to replace "sha1_file: improve
sha1_object_info_extended" (the 6th patch from the bottom). If you are
OK with that commit message title, and if you prefer not to make the
change locally yourself, I can send out a reroll with the updated commit
message title.

[1] 
https://public-inbox.org/git/cagf8dgj8c0choxzo5cpt56225xgbgrjagy8hdast+0-usfw...@mail.gmail.com/


Re: [PATCH v4 6/8] sha1_file: improve sha1_object_info_extended

2017-06-26 Thread Jonathan Tan
On Sat, Jun 24, 2017 at 5:45 AM, Jeff King  wrote:
> On Mon, Jun 19, 2017 at 06:03:13PM -0700, Jonathan Tan wrote:
>
>> Subject: [PATCH v4 6/8] sha1_file: improve sha1_object_info_extended
>> Improve sha1_object_info_extended() by supporting additional flags. This
>> allows has_sha1_file_with_flags() to be modified to use
>> sha1_object_info_extended() in a subsequent patch.
>
> A minor nit, but try to avoid vague words like "improve" in your subject
> lines. Something like:
>
>   sha1_file: teach sha1_object_info_extended more flags
>
> That's not too specific either, but I think in --oneline output it gives
> you a much better clue about what part of the function it touches.
>
>> ---
>>  cache.h |  4 
>>  sha1_file.c | 43 ---
>>  2 files changed, 28 insertions(+), 19 deletions(-)
>
> The patch itself looks good.

Thanks. I did try, but all my attempts exceeded 50 characters. Maybe
"sha1_file: support more flags in object info query" is good enough.


Re: [PATCH v4 7/8] sha1_file: do not access pack if unneeded

2017-06-26 Thread Jonathan Tan
On Sat, Jun 24, 2017 at 1:39 PM, Jeff King  wrote:
> On Sat, Jun 24, 2017 at 11:41:39AM -0700, Junio C Hamano wrote:
> If we are open to writing anything, then I think it should follow the
> same pointer-to-data pattern that the rest of the struct does. I.e.,
> declare the extra pack-data struct as a pointer, and let callers fill it
> in or not. It's excessive in the sense that it's not a variable-sized
> answer, but it at least makes the interface consistent.
>
> And no callers who read it would be silently broken; the actual data
> type in "struct object_info" would change, so they'd get a noisy compile
> error.

I considered that, but there was some trickiness in streaming.c -
open_istream() would need to establish that pointer even though that
is not its responsibility, or istream_source would need to
heap-allocate some memory then point to it from `oi` (it has to be
heap-allocated since the memory must outlive the function).

Also, it does not solve the "maintenance nightmare" issue that Junio
described (in that in order to optimize the pack read away, we would
need a big "if" statement).

Those issues are probably surmountable, but in the end I settled on
just allowing the caller to pass NULL and having
sha1_object_info_extended() substitute an empty struct when that
happens, as you can see in v5 [2]. That allows most of
sha1_object_info_extended() to not have to handle NULL, but also
allows for the specific optimization (optimizing the pack read away)
that I want.
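
The NULL handling itself is a tiny substitution at the top of the
function; as a toy illustration of the pattern (deliberately not git's
types or function names):

    struct info { int type; unsigned long size; };

    /* Callers that only want a yes/no answer may pass out == NULL. */
    static int lookup(int key, struct info *out)
    {
        struct info blank;

        if (!out)
            out = &blank;   /* absorb the writes below without extra branching */
        out->type = 1;
        out->size = 42;
        return key >= 0 ? 0 : -1;
    }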

[1] https://public-inbox.org/git/xmqq8tklqkbe@gitster.mtv.corp.google.com/
[2] 
https://public-inbox.org/git/ddbbc86204c131c83b3a1ff3b52789be9ed2a5b1.1498091579.git.jonathanta...@google.com/


Re: What's cooking in git.git (Jun 2017, #07; Sat, 24)

2017-06-26 Thread Junio C Hamano
Lars Schneider  writes:

>> On 26 Jun 2017, at 11:44, Ævar Arnfjörð Bjarmason  wrote:
>> 
>> If we're cloning the submodule, which from this output, and AFAIK in
>> general happens with all Travis builds, but correct me if I'm wrong
>> we'll set DC_SHA1_SUBMODULE=auto due to this bit in the Makefile:
>> 
>>ifeq ($(wildcard 
>> sha1collisiondetection/lib/sha1.h),sha1collisiondetection/lib/sha1.h)
>>DC_SHA1_SUBMODULE = auto
>>endif
>> 
>> So if (and I think this is the case) Travis just does a clone with
>> --recurse-submodules then this is already being CI'd.
>
> Do you see some other way to check if this is part of the build?
> Would it make sense to add this info to "git --version --build-options"?
>
> I am not familiar with the SHA1 machinery... but does it work on macOS
> even though we generally use APPLE_COMMON_CRYPTO?

I thought that we allowed "Use apple-common-crypto for the real
openssl thing (like curl and imaps) but use the hash function from
this other thing (like block-sha/)" and was hoping that we can test
sha1dc/ and/or sha1collisiondetection/ with that mechanism.

OTOH, if the binary packaged one on MacOS uses everything from
apple-common-crypto, then not testing with that gives us a larger
coverage gap, so unless we add a _new_ target that uses sha1dc/ on
MacOS, it may not be worth worrying about.



[PATCH] pack-bitmap: Don't perform unaligned memory access

2017-06-26 Thread James Clarke
The preceding bitmap entries have a 1-byte XOR-offset and 1-byte flags,
so their size is not a multiple of 4. Thus the name-hash cache is only
guaranteed to be 2-byte aligned and so we must use get_be32 rather than
indexing the array directly.

Signed-off-by: James Clarke 
---

This was noticed thanks to the recent tests added to t5310-pack-bitmaps.sh,
which were crashing with SIGBUS on Debian sparc64. All tests (excluding those
marked with known breakage) now pass again.

 pack-bitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index a3ac3dccd..327634cd7 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -627,7 +627,7 @@ static void show_objects_for_type(
sha1 = nth_packed_object_sha1(bitmap_git.pack, entry->nr);

if (bitmap_git.hashes)
-   hash = ntohl(bitmap_git.hashes[entry->nr]);
+   hash = get_be32(bitmap_git.hashes + entry->nr);

show_reach(sha1, object_type, 0, hash, bitmap_git.pack, entry->offset);
}
--
2.13.2
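
For readers wondering why this helps: ntohl(bitmap_git.hashes[entry->nr])
first performs an ordinary 32-bit load at hashes + entry->nr, which traps
on SPARC when the address is only 2-byte aligned, whereas a byte-wise
load never cares about alignment.  Roughly, it does something like the
sketch below (git's actual get_be32() in compat/bswap.h may instead take
a fast path on platforms that tolerate unaligned loads):

    #include <stdint.h>

    /* Assemble a big-endian 32-bit value one byte at a time; valid for
     * any alignment of p. */
    static inline uint32_t be32_bytewise(const void *p)
    {
        const unsigned char *b = p;
        return ((uint32_t)b[0] << 24) |
               ((uint32_t)b[1] << 16) |
               ((uint32_t)b[2] << 8) |
               (uint32_t)b[3];
    }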



[ANNOUNCE] Git for Windows 2.13.2

2017-06-26 Thread Johannes Schindelin
Dear Git users,

It is my pleasure to announce that Git for Windows 2.13.2 is available from:

https://git-for-windows.github.io/

Changes since Git for Windows v2.13.1(2) (June 15th 2017)

New Features

  * Comes with Git v2.13.2.
  * Comes with Git Credential Manager v1.10.1.
  * The Git Bash prompt can now be overridden by creating the file
.config\git\git-prompt.sh.
  * Comes with cURL v7.54.1.

Filename | SHA-256
--- | ---
Git-2.13.2-64-bit.exe | 7ac1e1c3b8ed1ee557055047ca03b1562de70c66f8fd1a90393a5405e1f1967b
Git-2.13.2-32-bit.exe | a6f828b701a65e436181e8017e4ae55129b4f680d7e95f445d1e43f26c061cb7
PortableGit-2.13.2-64-bit.7z.exe | 7cdb0234bffdd6dd0cd441da97e87b233d344790e4d957059ff09217fe48765d
PortableGit-2.13.2-32-bit.7z.exe | 125c3402971849f478bcdc6904babfc235fdea4e731e31f9a5339cf0e422685a
MinGit-2.13.2-64-bit.zip | 302a72d72c5c881f8d34183485f0e86721b7a89f2090977f3795ab89670d9c1d
MinGit-2.13.2-32-bit.zip | e7e12f2dec9361cdf496fc0378a891fcc9f6f4ffac60b1b06675e64e0bdbcdac
Git-2.13.2-64-bit.tar.bz2 | cb77390c523d466a01ef72c9678e56429fa8c112a4b75990368f7a6ff6038e9d
Git-2.13.2-32-bit.tar.bz2 | 6682457881341ac2fc581d5bad169beb5c9245c4957fc76254ef2e14806691c6

Ciao,
Johannes


Re: Compile Error v2.13.2 on Solaris SPARC

2017-06-26 Thread Michael Kebe
Still no luck, with one or both patches.

Greetings
Michael

2017-06-26 14:47 GMT+02:00 Ævar Arnfjörð Bjarmason :
>
> On Mon, Jun 26 2017, Michael Kebe jotted:
>
>> No luck with the patch.
>>
>> Still got:
>>
>> CC sha1dc/sha1.o
>> sha1dc/sha1.c:43:58: error: operator '==' has no right operand
>>   (defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
>>   ^
>> gmake: *** [sha1dc/sha1.o] Error 1
>
> Does this patch change anything, with or without the previous patch:
>
> diff --git a/git-compat-util.h b/git-compat-util.h
> index 047172d173..1327aea229 100644
> --- a/git-compat-util.h
> +++ b/git-compat-util.h
> @@ -131,6 +131,14 @@
>  # else
>  # define _XOPEN_SOURCE 500
>  # endif
> +
> +/*
> + * Bring in macros defining _BIG_ENDIAN etc. Should be brought in by
> + * the likes of stdio.h, but include it here in case it hasn't been
> + * included already.
> + */
> +#include 
> +
>  #elif !defined(__APPLE__) && !defined(__FreeBSD__) && !defined(__USLC__) && \
>!defined(_M_UNIX) && !defined(__sgi) && !defined(__DragonFly__) && \
>!defined(__TANDEM) && !defined(__QNX__) && !defined(__MirBSD__) && \
>
>>
>> Greetings
>> Michael
>>
>> 2017-06-26 10:42 GMT+02:00 Ævar Arnfjörð Bjarmason :
>>>
>>> On Mon, Jun 26 2017, Michael Kebe jotted:
>>>
 When compiling 2.13.2 on Solaris SPARC I get this error:

 CC sha1dc/sha1.o
 sha1dc/sha1.c:41:58: error: operator '==' has no right operand
  #if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
   ^
 gmake: *** [sha1dc/sha1.o] Error 1

 The define _BIG_ENDIAN is set by Solaris on SPARC systems. So the
 check in line 41 gives this error.

 The _BIG_ENDIAN define is used few line below for defining
 SHA1DC_BIGENDIAN. This is needed for Solaris SPARC systems.
 See
 https://github.com/cr-marcstevens/sha1collisiondetection/commit/33a694a9ee1b79c24be45f9eab5ac0e1aeeaf271
>>>
>>> I can see why this would error out. In sys/isa_defs.h on SPARC there's
>>> just `#define _BIG_ENDIAN`
>>> (http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/sys/isa_defs.h),
>>> and (on Linux):
>>>
>>> $ cat /tmp/test.c
>>> #define _FOO
>>> #define _BAR 1
>>> #if (_BAR == _FOO)
>>> #endif
>>> $ gcc -E /tmp/test.c
>>> # 1 "/tmp/test.c"
>>> # 1 ""
>>> # 1 ""
>>> # 31 ""
>>> # 1 "/usr/include/stdc-predef.h" 1 3 4
>>> # 32 "" 2
>>> # 1 "/tmp/test.c"
>>> /tmp/test.c:3:18: error: operator '==' has no right operand
>>>  #if (_BAR == _FOO)
>>>
>>> What I don't get is how this would have worked for Liam then (see
>>> 20170613020939.gemh3m5z6czgw...@oracle.com). Differences in Solaris
>>> versions and how their headers look like?
>>>
>>> Does this patch fix it for you?
>>>
>>> diff --git a/sha1dc/sha1.c b/sha1dc/sha1.c
>>> index facea1bb56..0b75b31b67 100644
>>> --- a/sha1dc/sha1.c
>>> +++ b/sha1dc/sha1.c
>>> @@ -36,9 +36,11 @@
>>>  #undef SHA1DC_BIGENDIAN
>>>  #endif
>>>
>>> -#if (defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || 
>>> defined(__BYTE_ORDER__))
>>> +#if (defined(BYTE_ORDER) || defined(_BYTE_ORDER) || defined(__BYTE_ORDER) 
>>> || \
>>> + defined(__BYTE_ORDER__))
>>>
>>> -#if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
>>> +#if ((defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)) || \
>>> + (defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
>>>   (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
>>>   (defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __BIG_ENDIAN__)) )
>>>  #define SHA1DC_BIGENDIAN
>>>
>>> I thought maybe BYTE_ORDER would work after searching the Illumos
>>> sources a bit more:
>>> http://src.illumos.org/source/search?q=BYTE_ORDER&project=illumos-gate
>


Re: Compile Error v2.13.2 on Solaris SPARC

2017-06-26 Thread Ævar Arnfjörð Bjarmason

On Mon, Jun 26 2017, Michael Kebe jotted:

> No luck with the patch.
>
> Still got:
>
> CC sha1dc/sha1.o
> sha1dc/sha1.c:43:58: error: operator '==' has no right operand
>   (defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
>   ^
> gmake: *** [sha1dc/sha1.o] Error 1

Does this patch change anything, with or without the previous patch:

diff --git a/git-compat-util.h b/git-compat-util.h
index 047172d173..1327aea229 100644
--- a/git-compat-util.h
+++ b/git-compat-util.h
@@ -131,6 +131,14 @@
 # else
 # define _XOPEN_SOURCE 500
 # endif
+
+/*
+ * Bring in macros defining _BIG_ENDIAN etc. Should be brought in by
+ * the likes of stdio.h, but include it here in case it hasn't been
+ * included already.
+ */
+#include 
+
 #elif !defined(__APPLE__) && !defined(__FreeBSD__) && !defined(__USLC__) && \
   !defined(_M_UNIX) && !defined(__sgi) && !defined(__DragonFly__) && \
   !defined(__TANDEM) && !defined(__QNX__) && !defined(__MirBSD__) && \

>
> Greetings
> Michael
>
> 2017-06-26 10:42 GMT+02:00 Ævar Arnfjörð Bjarmason :
>>
>> On Mon, Jun 26 2017, Michael Kebe jotted:
>>
>>> When compiling 2.13.2 on Solaris SPARC I get this error:
>>>
>>> CC sha1dc/sha1.o
>>> sha1dc/sha1.c:41:58: error: operator '==' has no right operand
>>>  #if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
>>>   ^
>>> gmake: *** [sha1dc/sha1.o] Error 1
>>>
>>> The define _BIG_ENDIAN is set by Solaris on SPARC systems. So the
>>> check in line 41 gives this error.
>>>
>>> The _BIG_ENDIAN define is used a few lines below for defining
>>> SHA1DC_BIGENDIAN. This is needed for Solaris SPARC systems.
>>> See
>>> https://github.com/cr-marcstevens/sha1collisiondetection/commit/33a694a9ee1b79c24be45f9eab5ac0e1aeeaf271
>>
>> I can see why this would error out. In sys/isa_defs.h on SPARC there's
>> just `#define _BIG_ENDIAN`
>> (http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/sys/isa_defs.h),
>> and (on Linux):
>>
>> $ cat /tmp/test.c
>> #define _FOO
>> #define _BAR 1
>> #if (_BAR == _FOO)
>> #endif
>> $ gcc -E /tmp/test.c
>> # 1 "/tmp/test.c"
>> # 1 ""
>> # 1 ""
>> # 31 ""
>> # 1 "/usr/include/stdc-predef.h" 1 3 4
>> # 32 "" 2
>> # 1 "/tmp/test.c"
>> /tmp/test.c:3:18: error: operator '==' has no right operand
>>  #if (_BAR == _FOO)
>>
>> What I don't get is how this would have worked for Liam then (see
>> 20170613020939.gemh3m5z6czgw...@oracle.com). Differences in Solaris
>> versions and how their headers look like?
>>
>> Does this patch fix it for you?
>>
>> diff --git a/sha1dc/sha1.c b/sha1dc/sha1.c
>> index facea1bb56..0b75b31b67 100644
>> --- a/sha1dc/sha1.c
>> +++ b/sha1dc/sha1.c
>> @@ -36,9 +36,11 @@
>>  #undef SHA1DC_BIGENDIAN
>>  #endif
>>
>> -#if (defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || defined(__BYTE_ORDER__))
>> +#if (defined(BYTE_ORDER) || defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || \
>> + defined(__BYTE_ORDER__))
>>
>> -#if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
>> +#if ((defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)) || \
>> + (defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
>>   (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
>>   (defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __BIG_ENDIAN__)) )
>>  #define SHA1DC_BIGENDIAN
>>
>> I thought maybe BYTE_ORDER would work after searching the Illumos
>> sources a bit more:
>> http://src.illumos.org/source/search?q=BYTE_ORDER&project=illumos-gate



Re: Compile Error v2.13.2 on Solaris SPARC

2017-06-26 Thread Michael Kebe
No luck with the patch.

Still got:

CC sha1dc/sha1.o
sha1dc/sha1.c:43:58: error: operator '==' has no right operand
  (defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
  ^
gmake: *** [sha1dc/sha1.o] Error 1

Greetings
Michael

2017-06-26 10:42 GMT+02:00 Ævar Arnfjörð Bjarmason :
>
> On Mon, Jun 26 2017, Michael Kebe jotted:
>
>> When compiling 2.13.2 on Solaris SPARC I get this error:
>>
>> CC sha1dc/sha1.o
>> sha1dc/sha1.c:41:58: error: operator '==' has no right operand
>>  #if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
>>   ^
>> gmake: *** [sha1dc/sha1.o] Error 1
>>
>> The define _BIG_ENDIAN is set by Solaris on SPARC systems. So the
>> check in line 41 gives this error.
>>
>> The _BIG_ENDIAN define is used a few lines below for defining
>> SHA1DC_BIGENDIAN. This is needed for Solaris SPARC systems.
>> See
>> https://github.com/cr-marcstevens/sha1collisiondetection/commit/33a694a9ee1b79c24be45f9eab5ac0e1aeeaf271
>
> I can see why this would error out. In sys/isa_defs.h on SPARC there's
> just `#define _BIG_ENDIAN`
> (http://src.illumos.org/source/xref/illumos-gate/usr/src/uts/common/sys/isa_defs.h),
> and (on Linux):
>
> $ cat /tmp/test.c
> #define _FOO
> #define _BAR 1
> #if (_BAR == _FOO)
> #endif
> $ gcc -E /tmp/test.c
> # 1 "/tmp/test.c"
> # 1 ""
> # 1 ""
> # 31 ""
> # 1 "/usr/include/stdc-predef.h" 1 3 4
> # 32 "" 2
> # 1 "/tmp/test.c"
> /tmp/test.c:3:18: error: operator '==' has no right operand
>  #if (_BAR == _FOO)
>
> What I don't get is how this would have worked for Liam then (see
> 20170613020939.gemh3m5z6czgw...@oracle.com). Differences in Solaris
> versions and how their headers look like?
>
> Does this patch fix it for you?
>
> diff --git a/sha1dc/sha1.c b/sha1dc/sha1.c
> index facea1bb56..0b75b31b67 100644
> --- a/sha1dc/sha1.c
> +++ b/sha1dc/sha1.c
> @@ -36,9 +36,11 @@
>  #undef SHA1DC_BIGENDIAN
>  #endif
>
> -#if (defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || defined(__BYTE_ORDER__))
> +#if (defined(BYTE_ORDER) || defined(_BYTE_ORDER) || defined(__BYTE_ORDER) || \
> + defined(__BYTE_ORDER__))
>
> -#if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
> +#if ((defined(BYTE_ORDER) && (BYTE_ORDER == BIG_ENDIAN)) || \
> + (defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) ||   \
>   (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || \
>   (defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __BIG_ENDIAN__)) )
>  #define SHA1DC_BIGENDIAN
>
> I thought maybe BYTE_ORDER would work after searching the Illumos
> sources a bit more:
> http://src.illumos.org/source/search?q=BYTE_ORDER&project=illumos-gate


ANNOUNCE: A mailing list for git packaging discussion

2017-06-26 Thread Ævar Arnfjörð Bjarmason
Now that my PCRE v2 patches have landed, I wanted to prod packagers of
git to tell them they could build against it since they probably had it
packaged already.

After E-Mailing Jonathan Nieder about that asking if there was a better
way to contact packagers than hunting down their E-Mails individually:

> More generally, do you know if there's some good way to contact the
> maintainers of various git packages (I was hoping there would be some
> list, even an unofficial giant CC one), or if I'd just need to hunt down
> the contact details for common distros manually.

Sadly I don't know of a git-packagers@ list, or I'd be on it. :)

Hence this list. It'll be a low-volume list used for dev -> packager
announcements & coordination, and perhaps discussion about git packaging
in general (to the extent people feel a need to not discuss that on the
main ML).

If you're interested sign up at:
https://groups.google.com/forum/#!forum/git-packagers

I've already invited a few people I found in
Debian/Ubuntu/Arch/Gentoo/FreeBSD git package changelogs with some quick
Googling, plus those I personally know package Git, but if you are or know
anyone packaging git, please sign up / prod those people.

Thanks!


Re: What's cooking in git.git (Jun 2017, #07; Sat, 24)

2017-06-26 Thread Lars Schneider

> On 26 Jun 2017, at 11:44, Ævar Arnfjörð Bjarmason  wrote:
> 
> 
> On Mon, Jun 26 2017, Lars Schneider jotted:
> 
>>> On 25 Jun 2017, at 01:25, Junio C Hamano  wrote:
>> 
>>> ...
>> 
>>> * ab/sha1dc (2017-06-07) 2 commits
>>> - sha1collisiondetection: automatically enable when submodule is populated
>>> - sha1dc: optionally use sha1collisiondetection as a submodule
>>> 
>>> The "collision-detecting" implementation of SHA-1 hash we borrowed
>>> from is replaced by directly binding the upstream project as our
>>> submodule.
>>> 
>>> Will keep in 'pu'.
>>> cf. 
>>> 
>>> The only nit I may have is that we may possibly want to turn this
>>> on in .travis.yml on MacOS before we move it forward (otherwise
>>> we'd be shipping bundled one and submodule one without doing any
>>> build on that platform)?  Other than that, the topic seems ready to
>>> be merged down.
>> 
>> What do you mean by "turn this on in .travis.yml on MacOS"?
>> The submodule is already cloned on all platforms on Travis:
>> https://travis-ci.org/git/git/jobs/246965294#L25-L27
>> 
>> However, I think DC_SHA1_SUBMODULE (or even DC_SHA1) is not enabled
>> on any platform right now. Should we enable it on all platforms or
>> add a new build job that enables/tests these flags?
> 
> If we're cloning the submodule, which judging from this output (and AFAIK
> in general) happens with all Travis builds, then, correct me if I'm wrong,
> we'll set DC_SHA1_SUBMODULE=auto due to this bit in the Makefile:
> 
>ifeq ($(wildcard sha1collisiondetection/lib/sha1.h),sha1collisiondetection/lib/sha1.h)
>DC_SHA1_SUBMODULE = auto
>endif
> 
> So if (and I think this is the case) Travis just does a clone with
> --recurse-submodules then this is already being CI'd.

Do you see some other way to check if this is part of the build?
Would it make sense to add this info to "git --version --build-options"?
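
For illustration, the output could perhaps carry a line like the sketch
below, assuming the relevant Makefile knobs are also visible as
preprocessor macros (the macro names here are my assumption, not
something the build is guaranteed to define):

	/* hypothetical addition wherever --build-options output is assembled */
	#if defined(DC_SHA1_SUBMODULE)
	printf("sha1: sha1collisiondetection (submodule)\n");
	#elif defined(SHA1_DC)
	printf("sha1: sha1collisiondetection (bundled)\n");
	#elif defined(APPLE_COMMON_CRYPTO)
	printf("sha1: Apple CommonCrypto\n");
	#else
	printf("sha1: other\n");
	#endif

That would at least let us see from a built binary which implementation
was compiled in, without digging through the build output.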

I am not familiar with the SHA1 machinery... but does it work on macOS
even though we generally use APPLE_COMMON_CRYPTO?

- Lars

Re: What's cooking in git.git (Jun 2017, #07; Sat, 24)

2017-06-26 Thread Ævar Arnfjörð Bjarmason

On Mon, Jun 26 2017, Lars Schneider jotted:

>> On 25 Jun 2017, at 01:25, Junio C Hamano  wrote:
>
>> ...
>
>> * ab/sha1dc (2017-06-07) 2 commits
>> - sha1collisiondetection: automatically enable when submodule is populated
>> - sha1dc: optionally use sha1collisiondetection as a submodule
>>
>> The "collision-detecting" implementation of SHA-1 hash we borrowed
>> from is replaced by directly binding the upstream project as our
>> submodule.
>>
>> Will keep in 'pu'.
>> cf. 
>>
>> The only nit I may have is that we may possibly want to turn this
>> on in .travis.yml on MacOS before we move it forward (otherwise
>> we'd be shipping bundled one and submodule one without doing any
>> build on that platform)?  Other than that, the topic seems ready to
>> be merged down.
>
> What do you mean by "turn this on in .travis.yml on MacOS"?
> The submodule is already cloned on all platforms on Travis:
> https://travis-ci.org/git/git/jobs/246965294#L25-L27
>
> However, I think DC_SHA1_SUBMODULE (or even DC_SHA1) is not enabled
> on any platform right now. Should we enable it on all platforms or
> add a new build job that enables/tests these flags?

If we're cloning the submodule, which judging from this output (and AFAIK
in general) happens with all Travis builds, then, correct me if I'm wrong,
we'll set DC_SHA1_SUBMODULE=auto due to this bit in the Makefile:

ifeq ($(wildcard sha1collisiondetection/lib/sha1.h),sha1collisiondetection/lib/sha1.h)
DC_SHA1_SUBMODULE = auto
endif

So if (and I think this is the case) Travis just does a clone with
--recurse-submodules then this is already being CI'd.


Re: Getting first tag per branch for a commit

2017-06-26 Thread Orgad Shaneh
On Mon, Jun 26, 2017 at 12:54 AM, Junio C Hamano  wrote:
> Orgad Shaneh  writes:
>
>> What I'd like to have is a way to tell the first tag per branch (or
>> per merge) that the commit appeared on.
>
>> I think that this can be done by filtering out tags that are connected
>> to already listed tags by first-parent link.
>
> Yes.  When one tag can be reached by another tag, then the former is
> definitely an earlier tag than the latter.
>
> A trivial way to compute it would require O(n^2) invocations of "git
> merge-base --is-ancestor".  Alternatively, I think you can perhaps
> use "git merge-base --independent".

I think this feature needs to be implemented in Git (by a flag to git describe).
O(n^2) is way too much when you have 20,000 tags.

Unfortunately, I don't feel qualified for implementing it myself. Does anyone
else volunteer? :)
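
Just to sketch the shape of it: inside git the quadratic filtering could
be done in one process with the reachability helpers, roughly like the
fragment below (function and variable names are made up for illustration;
this is not code from any posted series):

#include "cache.h"
#include "commit.h"

/*
 * Keep only the "earliest" tags: drop any candidate whose commit has
 * another candidate among its ancestors.
 */
static void drop_later_tags(struct commit **candidate, int nr)
{
	int i, j;

	for (i = 0; i < nr; i++) {
		for (j = 0; j < nr; j++) {
			if (i == j || !candidate[i] || !candidate[j])
				continue;
			/* is candidate[j] an ancestor of candidate[i]? */
			if (in_merge_bases(candidate[j], candidate[i])) {
				candidate[i] = NULL;	/* a later tag, drop it */
				break;
			}
		}
	}
}

Doing it in-process at least avoids spawning 20,000 separate "git
merge-base --is-ancestor" children, even though the walk itself is still
quadratic in the number of candidate tags.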

> Having said that, one thing to keep in mind is that a single "first
> tag" may not exist at all.
>
> Consider this topology:
>
>   o---X---.topic
>  / \   \
>  ---o---o---o---o---N---S---o---   maint
>  \   \   \   \
>   o---o---o---M---o---o---T---o--- master
>
> where a topic branch was forked from the maintenance track, which is
> periodically merged to the master branch.  That topic branch has the
> commit of interest, X, which first gets merged to the master branch
> at merge M, which eventually gets tagged as T (i.e. a new feature
> release).  But (wall-clock-wise) after merge M is made and the
> change is tested in the context of the master branch, but before the
> release T happens, the topic may be merged down to the maintenance
> track at merge N.  Then eventually the tip of the maintenance track
> is tagged as S (i.e. a maintenance release).
>
> Topologically, T and S cannot be compared and they both contain X,
> so the question "what is the first tag on 'master' that has commit
> X?" does not have a single unique answer.  Both S and T are eligible.
>
> You could define various heuristics to tiebreak among these tags.
> You may be tempted to compare timestamps of S and T.  If they were
> equal, then you might want to compare timestamps of M and N.
>
> But you'd need to accept that fundamentally there may not be a
> single "first tag".

I accept that. Anyway, this looks to me like a corner case, and I don't mind
having several tags from the same branch for this case.

- Orgad


Re: [PATCH v6 3/6] t0021: write "OUT" only on success

2017-06-26 Thread Lars Schneider

> On 26 Jun 2017, at 00:17, Junio C Hamano  wrote:
> 
> Lars Schneider  writes:
> 
>> "rot13-filter.pl" used to write "OUT " to the debug log even in case of
>> an abort or error. Fix this by writing "OUT " to the debug log only in
>> the successful case if output is actually written.
> 
> Again, use of "Fix this" without clarifying what the problem is.  Is
> this change needed because the size may not be known when the new
> filter protocol is in use, or something?

How about this?

"rot13-filter.pl" always writes "OUT " to the debug log at the end
of an interaction.

This works without issues for the existing cases "abort", "error", and 
"success". In a subsequent patch 'convert: add "status=delayed" to 
filter process protocol' we will add a new case "delayed". In that case 
we do not send the data right away, and it would be wrong/misleading to
the reader if we wrote "OUT " to the debug log.

Address this issue by writing "OUT " to the debug log only if 
output is actually written in the successful case.

- Lars

Re: [PATCH v3 0/4] Add regression tests for recent rebase -i fixes

2017-06-26 Thread Phillip Wood
On 23/06/17 20:01, Junio C Hamano wrote:
> Junio C Hamano  writes:
> 
>> For 3420, I can wrap the two-liner patch I showed here earlier into
>> a commit on top of the series.  
> 
> So, here is what I'll queue on top before merging the topic down to
> 'master'.

Thanks for creating this fixup, I'll remember to think about
GETTEXT_POISON when I'm writing tests in the future.

> -- >8 --
> Subject: [PATCH] t3420: fix under GETTEXT_POISON build
> 
> Newly added tests to t3420 in this series prepare expected
> human-readable output from "git rebase -i" and then compare the
> actual output with it.  As the output from the command is designed
> to go through i18n/l10n, we need to use test_i18ncmp to tell
> GETTEXT_POISON build that it is OK the output does not match.
> 
> Signed-off-by: Junio C Hamano 
> ---
>  t/t3420-rebase-autostash.sh | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/t/t3420-rebase-autostash.sh b/t/t3420-rebase-autostash.sh
> index 6826c38cbd..e243700660 100755
> --- a/t/t3420-rebase-autostash.sh
> +++ b/t/t3420-rebase-autostash.sh
> @@ -178,7 +178,7 @@ testrebase () {
>   test_when_finished git branch -D rebased-feature-branch &&
>   suffix=${type#\ --} && suffix=${suffix:-am} &&
>   create_expected_success_$suffix &&
> - test_cmp expected actual
> + test_i18ncmp expected actual
>   '
>  
>   test_expect_success "rebase$type: dirty index, non-conflicting rebase" '
> @@ -275,7 +275,7 @@ testrebase () {
>   test_when_finished git branch -D rebased-feature-branch &&
>   suffix=${type#\ --} && suffix=${suffix:-am} &&
>   create_expected_failure_$suffix &&
> - test_cmp expected actual
> + test_i18ncmp expected actual
>   '
>  }
>  
> 



Re: [PATCH v3 0/4] Add regression tests for recent rebase -i fixes

2017-06-26 Thread Phillip Wood
On 23/06/17 19:53, Junio C Hamano wrote:
> Junio C Hamano  writes:
> 
>> 3404 needs a similar fix-up for the series to be able to stand on
>> its own.  Alternatively, at least we need to understand what in 'pu'
>> makes the result of the merge pass---the symptom indicates that this
>> topic cannot be merged to a released version without that unknown
>> other topic in 'pu' merged if we want to keep POISON build passing
>> the tests.
> 
> Ah, no worries.  I think I figured it out.  
> 
> The topic "rebase -i regression fix", which this "regression fix
> tests" builds on, is queued on an older codebase than 0d75bfe6
> ("tests: fix tests broken under GETTEXT_POISON=YesPlease",
> 2017-05-05); it is natural these old test breakages can be seen when
> the topic is tested alone.

Oh, that explains it. I was pretty sure the reflog messages were not
translated, so I couldn't understand why it would fail under
GETTEXT_POISON=YesPlease.

> So we can safely merge this topic down.

That's great, thanks for taking the time to track down the reason for
the test failure

Best Wishes

Phillip




Re: [PATCH v6 1/6] t0021: keep filter log files on comparison

2017-06-26 Thread Lars Schneider

> On 26 Jun 2017, at 00:12, Junio C Hamano  wrote:
> 
> Lars Schneider  writes:
> 
>> The filter log files are modified on comparison. Write the modified log files
>> to temp files for comparison to fix this.
> 
> The phrase "to fix this" implies that it is _wrong_ to modify after
> comparing it, but it is unclear _why_ you think is wrong.  After
> all, the purpose of this comparison helper is to see if these two
> are the same with cruft removed, and once the helper finds the
> answer to that question, the current users of the comparison helper
> do not reuse these files, so from _their_ point of view, there is
> nothing to "fix", is there?
> 
> It would become a problem _if_ we want future users of this helper
> to reuse the same expect (or actual) multiple times and start from
> an unmodified one.  There may be some other reason why you do not
> want the comparison to smudge these files.  Please state what that
> reason is before saying "fix this".

Understood. How about this?

The filter log files are modified on comparison. That might be
unexpected by the caller, and it would even be undesirable if the caller
wants to reuse the original log files.

Address these issues by using temp files for modifications. This is 
useful for the subsequent patch 'convert: add "status=delayed" to 
filter process protocol'.

If this is OK, then do you want me to resend the series or can you fix it
in place?

Thanks,
Lars

Re: What's cooking in git.git (Jun 2017, #07; Sat, 24)

2017-06-26 Thread Lars Schneider

> On 25 Jun 2017, at 01:25, Junio C Hamano  wrote:

> ...

> * ab/sha1dc (2017-06-07) 2 commits
> - sha1collisiondetection: automatically enable when submodule is populated
> - sha1dc: optionally use sha1collisiondetection as a submodule
> 
> The "collision-detecting" implementation of SHA-1 hash we borrowed
> from is replaced by directly binding the upstream project as our
> submodule.
> 
> Will keep in 'pu'.
> cf. 
> 
> The only nit I may have is that we may possibly want to turn this
> on in .travis.yml on MacOS before we move it forward (otherwise
> we'd be shipping bundled one and submodule one without doing any
> build on that platform)?  Other than that, the topic seems ready to
> be merged down.

What do you mean by "turn this on in .travis.yml on MacOS"?
The submodule is already cloned on all platforms on Travis: 
https://travis-ci.org/git/git/jobs/246965294#L25-L27

However, I think DC_SHA1_SUBMODULE (or even DC_SHA1) is not enabled 
on any platform right now. Should we enable it on all platforms or 
add a new build job that enables/tests these flags?

- Lars


Re: [PATCH v4 5/5] stash: implement builtin stash

2017-06-26 Thread Matthieu Moy
Thomas Gummerer  writes:

> After the line
>
> test -n "$seen_non_option" || set "push" "$@"
>
> it's not possible that $# is 0 anymore, so this will never be
> printed.  From a quick look at the history it seems like it wasn't
> possible to trigger that codepath for a while.  If I'm reading things
> correctly 3c2eb80fe3 ("stash: simplify defaulting to "save" and reject
> unknown options", 2009-08-18) seems to have introduced the small
> change in behaviour.

Indeed. That wasn't on purpose, but I seem to have turned this

case $# in
0)
push_stash &&
say "$(gettext "(To restore them type \"git stash apply\")")"
;;

into dead code.

> As I don't think anyone has complained since then, I'd just leave it
> as is, which makes git stash with no options a little less verbose.

I agree it's OK to keep it as-is, but the original logic (give a bit
more advice when "stash push" was DWIM-ed) made sense too, so it can
make sense to re-activate it while porting to C.
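
If someone wants to resurrect that behaviour while porting, it could be
as small as something like the fragment below in the C builtin (the
function and variable names here are placeholders, not taken from the
actual series):

	/*
	 * Illustrative only: when no subcommand was given, DWIM to "push"
	 * and print the extra hint on success, like the old shell code did.
	 */
	if (!argc) {
		int ret = push_stash(0, NULL, prefix);

		if (!ret)
			printf_ln(_("(To restore them type \"git stash apply\")"));
		return ret;
	}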

-- 
Matthieu Moy
http://www-verimag.imag.fr/~moy/


Compile Error v2.13.2 on Solaris SPARC

2017-06-26 Thread Michael Kebe
When compiling 2.13.2 on Solaris SPARC I get this error:

CC sha1dc/sha1.o
sha1dc/sha1.c:41:58: error: operator '==' has no right operand
 #if ((defined(_BYTE_ORDER) && (_BYTE_ORDER == _BIG_ENDIAN)) || \
  ^
gmake: *** [sha1dc/sha1.o] Error 1

The define _BIG_ENDIAN is set by Solaris on SPARC systems. So the
check in line 41 gives this error.

The _BIG_ENDIAN define is used a few lines below for defining
SHA1DC_BIGENDIAN. This is needed for Solaris SPARC systems.
See
https://github.com/cr-marcstevens/sha1collisiondetection/commit/33a694a9ee1b79c24be45f9eab5ac0e1aeeaf271
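
The idea, as far as I can tell, is to stop comparing against a macro
that may be defined without a value. A minimal sketch of such a guard
(not necessarily identical to what the linked upstream commit does)
would be:

	/*
	 * Sketch only: on Solaris/SPARC _BIG_ENDIAN is defined but carries
	 * no value, so test for its presence (and the absence of
	 * _LITTLE_ENDIAN) instead of comparing it against _BYTE_ORDER.
	 */
	#if !defined(SHA1DC_BIGENDIAN) && \
	    defined(_BIG_ENDIAN) && !defined(_LITTLE_ENDIAN)
	#define SHA1DC_BIGENDIAN
	#endif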

Greetings
Michael