Re: [RFC PATCH v1] telemetry design overview (part 1)

2018-06-10 Thread Jeff King
On Sun, Jun 10, 2018 at 12:00:57AM +, brian m. carlson wrote:

> I also have to look at this issue from the interests of what is best for
> the FLOSS community and for users as a whole.  Adding in functionality
> that sends off usage data from a command-line tool, especially one that
> is as widely used as Git is, is not in the interests of users as a
> whole, nor is it common practice in FLOSS tools.

I'm not sure if this last statement is true. For instance, many tools
have crash-reporting functionality, and some contain more advanced
telemetry. E.g.:

  https://wiki.mozilla.org/Telemetry

Personally I do not like this sort of data collection at all, and never
enable it. And I would be highly suspicious of any FLOSS tool that
shipped with it enabled by default. But it seems like at least some
projects _do_ find it useful, and enough people enable it to make it
worth carrying as a feature.

-Peff


Re: [RFC PATCH v1] telemetry design overview (part 1)

2018-06-10 Thread Jeff King
On Sun, Jun 10, 2018 at 12:44:25AM +0200, Johannes Sixt wrote:

> > I agree with Peff: this is something you as a user need to be aware of,
> > and need to make sure you configure your Git just like you want. As long
> > as this is a purely opt-in feature, it is useful and helpful.
> 
> The problem with this feature is not so much that it enables someone to do
> bad things, but that it is specifically targeted at recording *how users use
> Git*.

I think one issue here is that we are not looking at concrete patches.
So for instance, I've seen a claim that Git should never have a way to
turn on tracing all the time. But at GitHub we have found it useful to
have a config option to log the sha1 of every object that is dropped by
git-prune or by "repack -ad". It's helped both as a developer (tracking
down races or bugs in our code) and as an administrator (figuring out
where a corruption was introduced). It needs to be on all the time to be
useful, since the point is to have an audit trail to look at _after_ a
bad thing happens.

That's something we do completely on the server side; I don't think
there are any privacy or "spying" issues there. And I don't think it's a
huge maintenance burden. Inside the existing code, it's literally a
one-line "log this" (the log code itself is a hundred or so lines in its
own file).
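The internal one-line patch Peff describes isn't upstream, but the same kind of audit trail can be approximated externally with `git prune --dry-run`; the log format below is made up for illustration:

```shell
# External audit trail for objects dropped by git-prune -- an
# approximation of the in-prune logging described above, not the
# actual GitHub patch. Log format is invented for this sketch.
cd "$(mktemp -d)"
git init -q repo && cd repo
git -c user.email=you@example.com -c user.name=you \
	commit -q --allow-empty -m init

# Create an unreachable blob, then record what prune would drop.
oid=$(echo orphan | git hash-object -w --stdin)
git prune --dry-run --expire=now |
while read -r dropped type; do
	printf '%s dropped %s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
		"$dropped" "$type"
done >>prune-audit.log
git prune --expire=now

cat prune-audit.log   # one line per pruned object
```

The dry-run pass races with the real prune, which is exactly why logging from inside git-prune itself (as in the patch described) is the more reliable design.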

Now most users probably don't care that much about this use case. And
I'm OK to apply it as a custom patch. But doesn't it seem like that's
something other people hosting Git repos might want? Or that the concept
might extend to other loggable items that _are_ interesting on the
client side?

That's why I think it is worth taking this step-by-step. Let's log more
things. Let's make enabling tracing more flexible. Those are hopefully
uncontentious and universally useful. If you want to draw the line on
"spying", then I think the right place to draw it is when somebody wants
to ship code to actually move those logs out of the user's control.

-Peff


Re: [RFC PATCH v1] telemetry design overview (part 1)

2018-06-10 Thread Jeff King
On Sat, Jun 09, 2018 at 10:05:49PM +0200, Johannes Schindelin wrote:

> > E.g., could we have a flag or environment variable to have the existing
> > traces output JSON? I guess right now they're inherently free-form via
> > trace_printf, so it would involve adding some structured interface
> > calls. Which is more or less what I guess JeffH's proposed feature to
> > look like.
> 
> I think that is a much larger project than what JeffH proposed, and

> would unfortunately put too much of a brake on his project.

I definitely don't want to stall somebody else's momentum with a bunch
of what-if's. But I also don't want to end up down the road with two
nearly-identical systems for tracing information. That's confusing to
users, and to developers who must choose which system to use for any new
tracing information they add.

So I think it's worth at least giving a little thought to how we might
leverage similarities between the trace system and this. Even if we
don't implement it now, it would be nice to have a vague sense of how
they could grow together in the long run.
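For reference, the free-form tracing in question is toggled by environment variables; and as a postscript, later Git releases (roughly 2.22+) did grow exactly this kind of structured sibling, the trace2 facility with a JSON event target -- not something available at the time of this thread:

```shell
cd "$(mktemp -d)"
# Free-form tracing as it exists here: printf-style lines on stderr
# (or to a file/fd named by the variable).
GIT_TRACE=1 git version 2>trace.txt
grep 'trace:' trace.txt

# Later releases added trace2, whose event target emits JSON records.
# Older versions simply ignore the variable, so this line is a no-op
# there (the path must be absolute).
GIT_TRACE2_EVENT="$PWD/events.json" git version >/dev/null
```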

-Peff


[PATCH v2] fetch-pack: don't try to fetch peel values with --all

2018-06-10 Thread Jeff King
On Mon, Jun 11, 2018 at 01:28:23AM -0400, Eric Sunshine wrote:

> On Mon, Jun 11, 2018 at 12:47 AM, Jeff King  wrote:
> > Subject: fetch-pack: don't try to fetch peeled values with --all
> > [...]
> > Original report and test from Kirill Smelkov.
> >
> > Signed-off-by: Kirill Smelkov 
> > Signed-off-by: Jeff King 
> > ---
> > diff --git a/t/t5500-fetch-pack.sh b/t/t5500-fetch-pack.sh
> > @@ -506,30 +506,45 @@ test_expect_success 'test missing ref before existing' '
> > +test_expect_success 'test --all wrt tag to non-commits' '
> > +   blob_sha1=$(echo "hello blob" | git hash-object -t blob -w --stdin) &&
> > +   git tag -a -m "tag -> blob" tag-to-blob $blob_sha1 &&
> > +   tree_sha1=$(printf "100644 blob $blob_sha1\tfile\n" | git mktree) &&
> 
> Perhaps modernize these names to 'blob_oid' and 'tree_oid', or even
> simpler, just 'blob' and 'tree'.

Looking deeper, we do not need these trees and blobs at all. The problem
is really just a tag that peels to an object that is not otherwise a ref
tip, regardless of its type.

So below is a patch that simplifies the test even further (the actual
code change is the same).

> > +   git tag -a -m "tag -> tree" tag-to-tree $tree_sha1 &&
> > +   mkdir fetchall &&
> > +   (
> > +   cd fetchall &&
> > +   git init &&
> > +   git fetch-pack --all .. &&
> 
> Simpler:
> 
> git init fetchall &&
> (
> cd fetchall &&
> git fetch-pack --all .. &&
> 
> Although, I see that this script already has a mix of the two styles
> (simpler and not-so-simple), so...

The nearby tests actually reuse the "client" directory. We can do that,
too, if we simply create new objects for our test, to make sure they
still need fetching. See below (we could also use "git -C" there, but
the subshell matches the other tests).
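For reference, the peeled advertisement at issue is easy to see locally; `git ls-remote` prints an annotated tag's peeled value as a separate `^{}` line:

```shell
# Show how an annotated tag is advertised alongside its peeled value.
cd "$(mktemp -d)"
git init -q peel-demo && cd peel-demo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m base
git -c user.email=a@b -c user.name=a tag -a -m annotated mytag

# The "remote" (here: ourselves) advertises both the tag ref and the
# object it peels to; only the former is a real ref.
git ls-remote . "refs/tags/*"
# refs/tags/mytag and refs/tags/mytag^{} are both listed
```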

-- >8 --
Subject: fetch-pack: don't try to fetch peel values with --all

When "fetch-pack --all" sees a tag-to-blob on the remote, it
tries to fetch both the tag itself ("refs/tags/foo") and the
peeled value that the remote advertises ("refs/tags/foo^{}").
Asking for the object pointed to by the latter can cause
upload-pack to complain with "not our ref", since it does
not mark the peeled objects with the OUR_REF (unless they
were at the tip of some other ref).

Arguably upload-pack _should_ be marking those peeled
objects. But it never has in the past, since clients would
generally just ask for the tag and expect to get the peeled
value along with it. And that's how "git fetch" works, as
well as older versions of "fetch-pack --all".

The problem was introduced by 5f0fc64513 (fetch-pack:
eliminate spurious error messages, 2012-09-09). Before then,
the matching logic was something like:

  if (refname is ill-formed)
 do nothing
  else if (doing --all)
 always consider it matched
  else
 look through list of sought refs for a match

That commit wanted to flip the order of the second two arms
of that conditional. But we ended up with:

  if (refname is ill-formed)
do nothing
  else
look through list of sought refs for a match

  if (--all and no match so far)
always consider it matched

That means that an ill-formed ref will trigger the --all
conditional block, even though we should just be ignoring
it. We can fix that by having a single "else" with all of
the well-formed logic, that checks the sought refs and
"--all" in the correct order.

Reported-by: Kirill Smelkov 
Signed-off-by: Jeff King 
---
 fetch-pack.c  |  8 
 t/t5500-fetch-pack.sh | 10 ++
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/fetch-pack.c b/fetch-pack.c
index a320ce9872..cc7a42fee9 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -657,11 +657,11 @@ static void filter_refs(struct fetch_pack_args *args,
}
i++;
}
-   }
 
-   if (!keep && args->fetch_all &&
-   (!args->deepen || !starts_with(ref->name, "refs/tags/")))
-   keep = 1;
+   if (!keep && args->fetch_all &&
+   (!args->deepen || !starts_with(ref->name, "refs/tags/")))
+   keep = 1;
+   }
 
if (keep) {
*newtail = ref;
diff --git a/t/t5500-fetch-pack.sh b/t/t5500-fetch-pack.sh
index d4f435155f..5726f83ea3 100755
--- a/t/t5500-fetch-pack.sh
+++ b/t/t5500-fetch-pack.sh
@@ -518,6 +518,16 @@ test_expect_success 'test --all, --depth, and explicit tag' '
) >out-adt 2>error-adt
 '
 
+test_expect_success 'test --all with tag to non-tip' '
+   git commit --allow-empty -m non-tip &&
+   git commit --allow-empty -m tip &&
+   git tag -m "annotated" non-tip HEAD^ &&
+   (
+   cd client &&
+   git fetch-pack --all ..
+   )
+'
+
 test_expect_success 'shallow fetch with tags does not break the repo

Re: [PATCH] fetch-pack: don't try to fetch peeled values with --all

2018-06-10 Thread Eric Sunshine
On Mon, Jun 11, 2018 at 12:47 AM, Jeff King  wrote:
> Subject: fetch-pack: don't try to fetch peeled values with --all
> [...]
> Original report and test from Kirill Smelkov.
>
> Signed-off-by: Kirill Smelkov 
> Signed-off-by: Jeff King 
> ---
> diff --git a/t/t5500-fetch-pack.sh b/t/t5500-fetch-pack.sh
> @@ -506,30 +506,45 @@ test_expect_success 'test missing ref before existing' '
> +test_expect_success 'test --all wrt tag to non-commits' '
> +   blob_sha1=$(echo "hello blob" | git hash-object -t blob -w --stdin) &&
> +   git tag -a -m "tag -> blob" tag-to-blob $blob_sha1 &&
> +   tree_sha1=$(printf "100644 blob $blob_sha1\tfile\n" | git mktree) &&

Perhaps modernize these names to 'blob_oid' and 'tree_oid', or even
simpler, just 'blob' and 'tree'.

> +   git tag -a -m "tag -> tree" tag-to-tree $tree_sha1 &&
> +   mkdir fetchall &&
> +   (
> +   cd fetchall &&
> +   git init &&
> +   git fetch-pack --all .. &&

Simpler:

git init fetchall &&
(
cd fetchall &&
git fetch-pack --all .. &&

Although, I see that this script already has a mix of the two styles
(simpler and not-so-simple), so...

> +   git cat-file blob $blob_sha1 >/dev/null &&
> +   git cat-file tree $tree_sha1 >/dev/null
> +   )
> +'


[PATCH] fetch-pack: don't try to fetch peeled values with --all

2018-06-10 Thread Jeff King
On Mon, Jun 11, 2018 at 12:20:16AM -0400, Jeff King wrote:

> Doubly interesting, it looks like this case _used_ to work, but was
> broken by 5f0fc64513 (fetch-pack: eliminate spurious error messages,
> 2012-09-09). Which only changed the fetch-pack side. It moved the
> handling of --all so that it was no longer in the "else" for
> check_refname_format(). I guess the original code was rejecting those
> peeled bits as "not a ref" (which makes sense).
> 
> So that seems like a bug in fetch-pack. But I'm still not convinced that
> upload-pack doesn't also have a bug.

Here's a patch which fixes fetch-pack. I just rolled the test into the
same commit; I hope that's OK.

I'm somewhat on the fence regarding the upload-pack behavior. It would
probably be pretty easy to fix, but since this is how it has always
worked, I'm not sure if it's worth changing (and I think it is
consistent in a sense -- it just means that the peeled tips we advertise
are meant only as information, and not to be explicitly requested).

One other funny thing I noticed about this code. For ill-formed refs, it
checks that they begin with "refs/" and that they fail
check_refname_format(). But I think that means I could advertise
"foobar^{}" and fetch-pack would consider it a possible ref to fetch.
That seems odd. I guess that's perhaps how it handles HEAD, though. I
didn't dig in further.

-- >8 --
Subject: fetch-pack: don't try to fetch peeled values with --all

When "fetch-pack --all" sees a tag-to-blob on the remote, it
tries to fetch both the tag itself ("refs/tags/foo") and the
peeled value that the remote advertises ("refs/tags/foo^{}").
Asking for the object pointed to by the latter can cause
upload-pack to complain with "not our ref", since it does
not mark the peeled objects with the OUR_REF.

Arguably upload-pack _should_ be marking those peeled
objects. But it never has in the past, since clients would
generally just ask for the tag and expect to get the peeled
value along with it. And that's how "git fetch" works, as
well as older versions of "fetch-pack --all".

The problem was introduced by 5f0fc64513 (fetch-pack:
eliminate spurious error messages, 2012-09-09). Before then,
the matching logic was something like:

  if (refname is ill-formed)
 do nothing
  else if (doing --all)
 always consider it matched
  else
 look through list of sought refs for a match

That commit wanted to flip the order of the second two arms
of that conditional. But we ended up with:

  if (refname is ill-formed)
do nothing
  else
look through list of sought refs for a match

  if (--all and no match so far)
always consider it matched

That means that an ill-formed ref will trigger the --all
conditional block, even though we should just be ignoring
it. We can fix that by having a single "else" with all of
the well-formed logic, that checks the sought refs and
"--all" in the correct order.

Original report and test from Kirill Smelkov.

Signed-off-by: Kirill Smelkov 
Signed-off-by: Jeff King 
---
I just stuck with your same test, but thinking about it, I guess this
would be a problem even for a tag-to-commit.

Diff is -U15 to better show the context (in case you are wondering why
it is so big ;) ).

 fetch-pack.c  |  8 
 t/t5500-fetch-pack.sh | 15 +++
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/fetch-pack.c b/fetch-pack.c
index a320ce9872..cc7a42fee9 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -645,35 +645,35 @@ static void filter_refs(struct fetch_pack_args *args,
 
if (starts_with(ref->name, "refs/") &&
check_refname_format(ref->name, 0))
; /* trash */
else {
while (i < nr_sought) {
int cmp = strcmp(ref->name, sought[i]->name);
if (cmp < 0)
break; /* definitely do not have it */
else if (cmp == 0) {
keep = 1; /* definitely have it */
sought[i]->match_status = REF_MATCHED;
}
i++;
}
-   }
 
-   if (!keep && args->fetch_all &&
-   (!args->deepen || !starts_with(ref->name, "refs/tags/")))
-   keep = 1;
+   if (!keep && args->fetch_all &&
+   (!args->deepen || !starts_with(ref->name, "refs/tags/")))
+   keep = 1;
+   }
 
if (keep) {
*newtail = ref;
ref->next = NULL;
newtail = &ref->next;
} else {
ref->next = unmatched;
unmatched = ref;
}
}
 
/* Append unmatched req

Re: [PATCH] fetch-pack: demonstrate --all breakage when remote have tags to non-commit objects

2018-06-10 Thread Jeff King
On Sun, Jun 10, 2018 at 02:32:57PM +, Kirill Smelkov wrote:

> Added test shows remote with two tag objects pointing to a blob and a
> tree. The tag objects themselves are referenced from under regular
> refs/tags/* namespace. If test_expect_failure is changed to
> test_expect_success the test fails:

Interesting case. The problem is actually that upload-pack complains:

>   fatal: git upload-pack: not our ref 038f48ad0beaffbea71d186a05084b79e3870cbf

And that sha1 is the tagged blob:

>   bc4e9e1fa80662b449805b1ac29fc9b1e4c49187	refs/tags/tag-to-blob      # <-- NOTE
>   038f48ad0beaffbea71d186a05084b79e3870cbf	refs/tags/tag-to-blob^{}
>   520db1f5e1afeaa12b1a8d73ce82db72ca036ee1	refs/tags/tag-to-tree      # <-- NOTE
>   7395c100223b7cd760f58ccfa0d3f3d2dd539bb6	refs/tags/tag-to-tree^{}

So it seems like upload-pack is at fault for not marking the object as a
tip when it peels the tag.

> For the reference, porcelain fetch 'refs/*:refs/origin/*' works:

That's because it doesn't actually issue a "want" for the peeled blob
(it doesn't need to, because it's fetching the tag itself). So it
happens to work, but I still think upload-pack is at fault for not
accepting the "want" on the blob it advertised.

Doubly interesting, it looks like this case _used_ to work, but was
broken by 5f0fc64513 (fetch-pack: eliminate spurious error messages,
2012-09-09). Which only changed the fetch-pack side. It moved the
handling of --all so that it was no longer in the "else" for
check_refname_format(). I guess the original code was rejecting those
peeled bits as "not a ref" (which makes sense).

So that seems like a bug in fetch-pack. But I'm still not convinced that
upload-pack doesn't also have a bug.

> +test_expect_failure 'test --all wrt tag to non-commits' '
> + blob_sha1=$(echo "hello blob" | git hash-object -t blob -w --stdin) &&
> + git tag -a -m "tag -> blob" tag-to-blob $blob_sha1 &&
> + tree_sha1=$(echo -e "100644 blob $blob_sha1\tfile" | git mktree) &&

I had to switch this "echo -e" to:

  printf "100644 blob $blob_sha1\tfile\n"

since "-e" is a bash-ism (and my /bin/sh is dash).
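The portable idiom, for reference -- dash's echo treats "-e" as a literal string to print, while printf interprets backslash escapes in its format string everywhere:

```shell
# Under dash/POSIX sh, `echo -e` prints "-e" literally rather than
# enabling escapes, so a mktree input line built that way is corrupt.
# printf handles \t and \n portably:
blob_sha1=0123456789012345678901234567890123456789  # placeholder id
printf '100644 blob %s\tfile\n' "$blob_sha1"
```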

-Peff


Re: [PATCH] checkout files in-place

2018-06-10 Thread brian m. carlson
On Sun, Jun 10, 2018 at 09:44:45PM +0200, Clemens Buchacher wrote:
> diff --git a/Documentation/config.txt b/Documentation/config.txt
> index b6cb997164..17af0fe163 100644
> --- a/Documentation/config.txt
> +++ b/Documentation/config.txt
> @@ -923,6 +923,14 @@ core.sparseCheckout::
>   Enable "sparse checkout" feature. See section "Sparse checkout" in
>   linkgit:git-read-tree[1] for more information.
>  
> +core.checkoutInplace::

Perhaps "core.checkoutInPlace" (capitalized "Place")?

> + Checkout file contents in-place. By default Git checkout removes existing

"Check out".

> + work tree files before it replaces them with different contents. If this
> + option is enabled Git will overwrite the contents of existing files
> + in-place. This is useful on systems where open file handles to a removed

Here and above, you don't need to hyphenate "in place" when used as an
adverb, only when using it as an adjective before the noun (e.g.
"in-place checkout").

> + file prevent creating new files at the same path. Note that Git will not
> + update read/write permissions according to umask.

I'm wondering if it's worth a mention that running out of disk space (or
quota) will cause data to be truncated.

>  static void *read_blob_entry(const struct cache_entry *ce, unsigned long *size)
> @@ -470,8 +475,15 @@ int checkout_entry(struct cache_entry *ce,
>   if (!state->force)
>   return error("%s is a directory", path.buf);
>   remove_subtree(&path);
> - } else if (unlink(path.buf))
> - return error_errno("unable to unlink old '%s'", path.buf);
> + } else if (checkout_inplace) {
> + if (!(st.st_mode & 0200) ||
> + (trust_executable_bit && (st.st_mode & 0100) != (ce->ce_mode & 0100)))
> + if (chmod(path.buf, (ce->ce_mode & 0100) ? 0777 : 0666))
> + return error_errno("unable to change mode of '%s'", path.buf);

So in-place checkout won't work if the mode changes and we're not the
owner of the file.  One place where I could see people wanting to use
this on Unix is shared repositories with BSD group semantics, but that
wouldn't work reliably.

I don't see that as a problem, as that isn't the issue this patch is
trying to solve, but it may end up biting people.
-- 
brian m. carlson: Houston, Texas, US
OpenPGP: https://keybase.io/bk2204




Re: [PATCH 24/30] merge-recursive: Add computation of collisions due to dir rename & merging

2018-06-10 Thread Jeff King
On Sun, Jun 10, 2018 at 12:56:31PM +0200, René Scharfe wrote:

> The value of PATH_MAX is platform-dependent, so it's easy to exceed when
> doing cross-platform development.  It's also not a hard limit on most
> operating systems, not even on Windows.  Further reading:
> 
>https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html
> 
> So using a fixed buffer is not a good idea, and writing to it without
> checking is dangerous.  Here's a fix:

Even on platforms where it _is_ a hard limit, we are quite often dealing
with paths that come from tree objects (so even if the OS would
eventually complain about our path, it is small consolation when we
smash the stack before we get there).
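René's point is easy to demonstrate in a shell: each component is bounded by NAME_MAX, but a process can deepen its working directory one mkdir at a time past PATH_MAX, so a legitimate tree can overflow any fixed PATH_MAX buffer (assumes Linux-ish getconf and no unusually low filesystem depth limit):

```shell
# Build a cwd longer than PATH_MAX via relative operations.
limit=$(getconf PATH_MAX /)
cd "$(mktemp -d)"
while [ "${#PWD}" -le "$limit" ]; do
	mkdir d && cd d
done
echo "cwd is ${#PWD} bytes, PATH_MAX is $limit"
```

Absolute-path syscalls on the result will fail with ENAMETOOLONG, but the path exists and relative operations keep working.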

Your patch looks good to me, and we definitely should address this
before v2.18-final.

> - char temp[PATH_MAX];
> + char *temp = xstrdup(path);
>   char *end;
> - struct dir_rename_entry *entry;
> + struct dir_rename_entry *entry = NULL;;
>  
> - strcpy(temp, path);

I'm sad that this strcpy() wasn't caught in review. IMHO we should avoid
that function altogether, even when we _think_ it can't trigger an
overflow. That's easier to reason about (and makes auditing easier).

It looks like another one has crept in recently, too.

-- >8 --
Subject: [PATCH] blame: prefer xsnprintf to strcpy for colors

Our color buffers are all COLOR_MAXLEN, which fits the
largest possible color. So we can never overflow the buffer
by copying an existing color. However, using strcpy() makes
it harder to audit the code-base for calls that _are_
problems. We should use something like xsnprintf(), which
shows the reader that we expect this never to fail (and
provides a run-time assertion if it does, just in case).

Signed-off-by: Jeff King 
---
Another option would just be color_parse(repeated_meta_color, "cyan").
The run-time cost is slightly higher, but it probably doesn't matter
here, and perhaps it's more readable.

This one is less critical for v2.18.

 builtin/blame.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/builtin/blame.c b/builtin/blame.c
index 4202584f97..45770c5a8c 100644
--- a/builtin/blame.c
+++ b/builtin/blame.c
@@ -1068,7 +1068,9 @@ int cmd_blame(int argc, const char **argv, const char *prefix)
find_alignment(&sb, &output_option);
if (!*repeated_meta_color &&
(output_option & OUTPUT_COLOR_LINE))
-   strcpy(repeated_meta_color, GIT_COLOR_CYAN);
+   xsnprintf(repeated_meta_color,
+ sizeof(repeated_meta_color),
+ "%s", GIT_COLOR_CYAN);
}
if (output_option & OUTPUT_ANNOTATE_COMPAT)
		output_option &= ~(OUTPUT_COLOR_LINE | OUTPUT_SHOW_AGE_WITH_COLOR);
-- 
2.18.0.rc1.446.g4486251e51



[PATCH] checkout files in-place

2018-06-10 Thread Clemens Buchacher
When replacing files with new content during checkout, we do not write
to them in-place. Instead we unlink and re-create the files in order to
let the system figure out ownership and permissions for the new file,
taking umask into account.

It is safe to do this on Linux file systems, even if open file handles
still exist, because unlink only removes the directory reference to the
file. On Windows, however, a file cannot be deleted until all handles to
it are closed. If a file cannot be deleted, its name cannot be reused.

This causes files to be deleted, but not checked out when switching
branches. This is frequently an issue with Qt Creator, which
continuously opens files in the work tree, as reported here:
https://github.com/git-for-windows/git/issues/1653

This change adds the core.checkoutInplace option. If enabled, checkout
will open files for writing the new content in-place. This fixes the
issue, but with this approach the system will not update file
permissions according to umask. Only essential updates of write and
executable permissions are performed.

The in-place checkout is therefore optional. It could be enabled by Git
installers on Windows, where umask is irrelevant.

Signed-off-by: Clemens Buchacher 
---

I wonder if Git should be responsible for updating ownership and file
permissions when modifying existing files during checkout. We could
otherwise remove the unlink completely. Maybe this could even improve
performance in some cases. It made no difference in a short test on
Windows.
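For illustration, the umask behavior the patch trades away is easy to observe in a plain POSIX shell:

```shell
cd "$(mktemp -d)"
umask 077
echo v1 >file              # created as 600 under umask 077
umask 022
echo v2 >file              # in-place rewrite: mode stays 600
ls -l file                 # -rw-------
rm file
echo v2 >file              # unlink + recreate: umask applies, mode 644
ls -l file                 # -rw-r--r--
```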

Regression tests are running. This will take a while.

 Documentation/config.txt|  8 
 cache.h |  2 ++
 config.c|  5 +
 entry.c | 18 +++---
 environment.c   |  1 +
 t/t2031-checkout-inplace.sh | 41 +
 6 files changed, 72 insertions(+), 3 deletions(-)
 create mode 100755 t/t2031-checkout-inplace.sh

diff --git a/Documentation/config.txt b/Documentation/config.txt
index b6cb997164..17af0fe163 100644
--- a/Documentation/config.txt
+++ b/Documentation/config.txt
@@ -923,6 +923,14 @@ core.sparseCheckout::
Enable "sparse checkout" feature. See section "Sparse checkout" in
linkgit:git-read-tree[1] for more information.
 
+core.checkoutInplace::
+   Checkout file contents in-place. By default Git checkout removes existing
+   work tree files before it replaces them with different contents. If this
+   option is enabled Git will overwrite the contents of existing files
+   in-place. This is useful on systems where open file handles to a removed
+   file prevent creating new files at the same path. Note that Git will not
+   update read/write permissions according to umask.
+
 core.abbrev::
Set the length object names are abbreviated to.  If
unspecified or set to "auto", an appropriate value is
diff --git a/cache.h b/cache.h
index 2c640d4c31..c8fccd2a80 100644
--- a/cache.h
+++ b/cache.h
@@ -808,6 +808,7 @@ extern char *git_replace_ref_base;
 extern int fsync_object_files;
 extern int core_preload_index;
 extern int core_apply_sparse_checkout;
+extern int checkout_inplace;
 extern int precomposed_unicode;
 extern int protect_hfs;
 extern int protect_ntfs;
@@ -1530,6 +1531,7 @@ struct checkout {
unsigned force:1,
 quiet:1,
 not_new:1,
+inplace:1,
 refresh_cache:1;
 };
 #define CHECKOUT_INIT { NULL, "" }
diff --git a/config.c b/config.c
index cd2b404b14..4ac2407057 100644
--- a/config.c
+++ b/config.c
@@ -1231,6 +1231,11 @@ static int git_default_core_config(const char *var, const char *value, void *cb)
return 0;
}
 
+   if (!strcmp(var, "core.checkoutinplace")) {
+   checkout_inplace = git_config_bool(var, value);
+   return 0;
+   }
+
if (!strcmp(var, "core.precomposeunicode")) {
precomposed_unicode = git_config_bool(var, value);
return 0;
diff --git a/entry.c b/entry.c
index 31c00816dc..54c98870b9 100644
--- a/entry.c
+++ b/entry.c
@@ -78,8 +78,13 @@ static void remove_subtree(struct strbuf *path)
 
 static int create_file(const char *path, unsigned int mode)
 {
+   int flags;
+   if (checkout_inplace)
+   flags = O_WRONLY | O_CREAT | O_TRUNC;
+   else
+   flags = O_WRONLY | O_CREAT | O_EXCL;
mode = (mode & 0100) ? 0777 : 0666;
-   return open(path, O_WRONLY | O_CREAT | O_EXCL, mode);
+   return open(path, flags, mode);
 }
 
 static void *read_blob_entry(const struct cache_entry *ce, unsigned long *size)
@@ -470,8 +475,15 @@ int checkout_entry(struct cache_entry *ce,
if (!state->force)
return error("%s is a directory", path.buf);
remove_subtree(&path);
-   } else if (unlink(path.buf))
-   return error_errno("unable to unlink old '%s'", path.buf);

Re: [PATCH 2/2] git-rebase: error out when incompatible options passed

2018-06-10 Thread Phillip Wood

On 07/06/18 06:06, Elijah Newren wrote:

git rebase has three different types: am, merge, and interactive, all of
which are implemented in terms of separate scripts.  am builds on git-am,
merge builds on git-merge-recursive, and interactive builds on
git-cherry-pick.  We make use of features in those lower-level commands in
the different rebase types, but those features don't exist in all of the
lower level commands so we have a range of incompatibilities.  Previously,
we just accepted nearly any argument and silently ignored whichever ones
weren't implemented for the type of rebase specified.  Change this so the
incompatibilities are documented, included in the testsuite, and tested
for at runtime with an appropriate error message shown.


I think this is a great improvement; it has always bothered me that
rebase silently ignored the am options when they're given with
interactive ones.


Some exceptions I left out:

   * --merge and --interactive are technically incompatible since they are
 supposed to run different underlying scripts, but with a few small
 changes, --interactive can do everything that --merge can.  In fact,
 I'll shortly be sending another patch to remove git-rebase--merge and
 reimplement it on top of git-rebase--interactive.


Excellent, I've wondered about doing that but never got round to it.
One thing I was slightly concerned about was that someone may be
parsing the output of git-rebase--merge and they'll get a nasty shock
when that output changes as a result of using the sequencer.




   * One could argue that --interactive and --quiet are incompatible since
 --interactive doesn't implement a --quiet mode (perhaps since
 cherry-pick itself does not implement one).  However, the interactive
 mode is more quiet than the other modes in general with progress
 messages, so one could argue that it's already quiet.

Signed-off-by: Elijah Newren 
---
  Documentation/git-rebase.txt   | 15 +--
  git-rebase.sh  | 17 +
  t/t3422-rebase-incompatible-options.sh | 10 +-
  3 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/Documentation/git-rebase.txt b/Documentation/git-rebase.txt
index 0e20a66e73..451252c173 100644
--- a/Documentation/git-rebase.txt
+++ b/Documentation/git-rebase.txt
@@ -243,6 +243,10 @@ leave out at most one of A and B, in which case it defaults to HEAD.
  --keep-empty::
Keep the commits that do not change anything from its
parents in the result.
++
+This uses the `--interactive` machinery internally, and as such,
+anything that is incompatible with --interactive is incompatible
+with this option.
  
  --allow-empty-message::

By default, rebasing commits with an empty message will fail.
@@ -324,6 +328,8 @@ which makes little sense.
and after each change.  When fewer lines of surrounding
context exist they all must match.  By default no context is
ever ignored.
+   Incompatible with the --merge and --interactive options, or
+   anything that implies those options or their machinery.


struct replay_opts has an allow_empty_message member so I'm not sure 
that's true.



  -f::
  --force-rebase::
@@ -355,13 +361,15 @@ default is `--no-fork-point`, otherwise the default is `--fork-point`.
  --whitespace=::
These flag are passed to the 'git apply' program
(see linkgit:git-apply[1]) that applies the patch.
-   Incompatible with the --interactive option.
+   Incompatible with the --merge and --interactive options, or
+   anything that implies those options or their machinery.


I wonder if it is better just to list the incompatible options; it
might be a bit long, but it would be nicer for the user than having to
work out which options imply --interactive.



  --committer-date-is-author-date::
  --ignore-date::
These flags are passed to 'git am' to easily change the dates
of the rebased commits (see linkgit:git-am[1]).
-   Incompatible with the --interactive option.
+   Incompatible with the --merge and --interactive options, or
+   anything that implies those options or their machinery.
  
  --signoff::

Add a Signed-off-by: trailer to all the rebased commits. Note
@@ -400,6 +408,9 @@ The `--rebase-merges` mode is similar in spirit to `--preserve-merges`, but
  in contrast to that option works well in interactive rebases: commits can be
  reordered, inserted and dropped at will.
  +
+This uses the `--interactive` machinery internally, but it can be run
+without an explicit `--interactive`.
++


Without more context it's hard to judge, but I'm not sure this adds
anything useful.



  It is currently only possible to recreate the merge commits using the
  `recursive` merge strategy; Different merge strategies can be used only via
  explicit `exec git merge -s  [...]` commands.
diff --git a/git-rebase.sh b/git-rebase.sh
index 40be59ecc4..f1dbe

Re: [PATCH] git-rebase.sh: handle keep-empty like all other options

2018-06-10 Thread Phillip Wood

Hi Elijah
On 07/06/18 06:07, Elijah Newren wrote:

Signed-off-by: Elijah Newren 
---
  git-rebase.sh | 6 +-
  1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/git-rebase.sh b/git-rebase.sh
index 40be59ecc4..a56b286372 100755
--- a/git-rebase.sh
+++ b/git-rebase.sh
@@ -276,6 +276,7 @@ do
;;
--keep-empty)
keep_empty=yes
+   test -z "$interactive_rebase" && interactive_rebase=implied


I think you need to wait until all the options have been parsed before
setting the implied interactive rebase, in case the user has '--keep-empty'
in an alias and specifies '--no-keep-empty' with some am options on the
command line.
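A minimal sketch of the ordering issue (hypothetical, simplified from git-rebase.sh): record option values inside the parsing loop, and resolve what they imply only after the loop has seen everything.

```shell
# Demo input: "--keep-empty" coming from an alias, countermanded later on
# the command line by "--no-keep-empty".
set -- --keep-empty --no-keep-empty

keep_empty=
interactive_rebase=

for opt in "$@"
do
	case "$opt" in
	--keep-empty) keep_empty=yes ;;
	--no-keep-empty) keep_empty= ;;
	esac
done

# The implication is resolved once, after all options are known; setting it
# inside the loop would have left interactive_rebase=implied despite the
# later override.
if test -n "$keep_empty"
then
	test -z "$interactive_rebase" && interactive_rebase=implied
fi

echo "keep_empty=$keep_empty interactive_rebase=$interactive_rebase"
```

With the demo input above, both variables end up empty, which is the behavior the review asks for.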


Best Wishes

Phillip

;;
--allow-empty-message)
allow_empty_message=--allow-empty-message
@@ -480,11 +481,6 @@ then
test -z "$interactive_rebase" && interactive_rebase=implied
  fi
  
-if test -n "$keep_empty"

-then
-   test -z "$interactive_rebase" && interactive_rebase=implied
-fi
-
  if test -n "$interactive_rebase"
  then
type=interactive





Re: Why is there no force pull?

2018-06-10 Thread Torsten Bögershausen
On Sat, Jun 09, 2018 at 09:01:54PM +0200, Christoph Böhmwalder wrote:
> Hi,
> 
> Since this is a use case that actually comes up quite often in
> day-to-day use, especially among git beginners, I was wondering: is
> there a specific reason why a command like "fetch changes from remote,
> overwriting everything in my current working directory including all
> commits I've made" doesn't exist? Now, I'm quite aware that something
> like
> 
> $ git fetch origin/branch
> $ git reset --hard origin/branch

This is not exactly what you asked for, but I tend not to recommend that
people use "git reset --hard" at all. Either use a "stash" first, just in
case.

Or, in your case:
$ git fetch origin  && git checkout origin/branch

This will put your working tree onto origin/branch.

As a bonus, in case that you have done commits, which are now no longer
visible, "git reflog" is typically able to find them.
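A self-contained sketch of that flow and of the reflog recovery (the repository names, branch name, and user identity below are demo assumptions, not anything from the thread):

```shell
# Build a throwaway upstream and a clone, make a local commit, then move the
# working tree to the remote tip and recover the commit via the reflog.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q upstream && cd upstream
git -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m upstream-work
cd .. && git clone -q upstream clone && cd clone
branch=$(git symbolic-ref --short HEAD)
git -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m local-work

# The recommended flow: no "reset --hard" needed.
git fetch -q origin
git checkout -q "origin/$branch"   # working tree now matches the remote tip

# The local commit is off any branch now, but the reflog still knows it:
git reflog | grep local-work
git branch rescue "$(git rev-parse 'HEAD@{1}')"   # re-attach it to a branch
git log -1 --format=%s rescue                     # -> local-work
```

The key point is the last two commands: `HEAD@{1}` names where HEAD was before the checkout, so the "lost" commit gets a branch again.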


Re: [PATCH v7 2/2] http-backend: respect CONTENT_LENGTH for receive-pack

2018-06-10 Thread Max Kirillov
On Mon, Jun 04, 2018 at 12:44:09AM -0400, Jeff King wrote:
> On Sun, Jun 03, 2018 at 12:27:49AM +0300, Max Kirillov wrote:
>> +env \
>> +CONTENT_TYPE=application/x-git-upload-pack-request \
>> +QUERY_STRING=/repo.git/git-upload-pack \
>> +PATH_TRANSLATED="$PWD"/.git/git-upload-pack \
>> +GIT_HTTP_EXPORT_ALL=TRUE \
>> +REQUEST_METHOD=POST \
>> +CONTENT_LENGTH="$NOT_FIT_IN_SSIZE" \
>> +git http-backend /dev/null 2>err &&
>> +grep -q "fatal:.*CONTENT_LENGTH" err
> 
> I'm not sure if these messages should be marked for translation. If so,
> you'd want test_i18ngrep here.

Message localization does not seem to be used in
http-backend at all. That makes sense: server-side software
probably does not know who the user on the other side is, or
whether the message reaches the user at all. So I think the
message should not be translated.


[PATCH v8 2/3] http-backend: respect CONTENT_LENGTH as specified by rfc3875

2018-06-10 Thread Max Kirillov
http-backend reads the whole input until EOF. However, RFC 3875 specifies
that a script must read only as many bytes as specified by the CONTENT_LENGTH
environment variable. A web server may exercise the specification by not closing
the script's standard input after writing the content. In that case http-backend
would hang waiting for the input. The issue is known to happen with
IIS/Windows, for example.

Make http-backend read only CONTENT_LENGTH bytes, if it's defined, rather than
the whole input until EOF. If the variable is not defined, keep older behavior
of reading until EOF because it is used to support chunked transfer-encoding.

This commit only fixes buffered input, which reads the whole body before
processing it. Non-buffered input is going to be fixed in a subsequent commit.

Signed-off-by: Florian Manschwetus 
[mk: fixed trivial build failures and polished style issues]
Helped-by: Junio C Hamano 
Signed-off-by: Max Kirillov 
---
 config.c   |  2 +-
 config.h   |  1 +
 http-backend.c | 54 +++---
 3 files changed, 49 insertions(+), 8 deletions(-)

diff --git a/config.c b/config.c
index c698988f5e..4148a3529d 100644
--- a/config.c
+++ b/config.c
@@ -853,7 +853,7 @@ int git_parse_ulong(const char *value, unsigned long *ret)
return 1;
 }
 
-static int git_parse_ssize_t(const char *value, ssize_t *ret)
+int git_parse_ssize_t(const char *value, ssize_t *ret)
 {
intmax_t tmp;
if (!git_parse_signed(value, &tmp, 
maximum_signed_value_of_type(ssize_t)))
diff --git a/config.h b/config.h
index ef70a9cac1..c143a1b634 100644
--- a/config.h
+++ b/config.h
@@ -48,6 +48,7 @@ extern void git_config(config_fn_t fn, void *);
 extern int config_with_options(config_fn_t fn, void *,
   struct git_config_source *config_source,
   const struct config_options *opts);
+extern int git_parse_ssize_t(const char *, ssize_t *);
 extern int git_parse_ulong(const char *, unsigned long *);
 extern int git_parse_maybe_bool(const char *);
 extern int git_config_int(const char *, const char *);
diff --git a/http-backend.c b/http-backend.c
index 206dc28e07..0c9e9be2b7 100644
--- a/http-backend.c
+++ b/http-backend.c
@@ -289,7 +289,7 @@ static void write_to_child(int out, const unsigned char 
*buf, ssize_t len, const
  * hit max_request_buffer we die (we'd rather reject a
  * maliciously large request than chew up infinite memory).
  */
-static ssize_t read_request(int fd, unsigned char **out)
+static ssize_t read_request_eof(int fd, unsigned char **out)
 {
size_t len = 0, alloc = 8192;
unsigned char *buf = xmalloc(alloc);
@@ -326,7 +326,46 @@ static ssize_t read_request(int fd, unsigned char **out)
}
 }
 
-static void inflate_request(const char *prog_name, int out, int buffer_input)
+static ssize_t read_request_fixed_len(int fd, ssize_t req_len, unsigned char 
**out)
+{
+   unsigned char *buf = NULL;
+   ssize_t cnt = 0;
+
+   if (max_request_buffer < req_len) {
+   die("request was larger than our maximum size (%lu): "
+   "%" PRIuMAX "; try setting GIT_HTTP_MAX_REQUEST_BUFFER",
+   max_request_buffer, (uintmax_t)req_len);
+   }
+
+   buf = xmalloc(req_len);
+   cnt = read_in_full(fd, buf, req_len);
+   if (cnt < 0) {
+   free(buf);
+   return -1;
+   }
+   *out = buf;
+   return cnt;
+}
+
+static ssize_t get_content_length(void)
+{
+   ssize_t val = -1;
+   const char *str = getenv("CONTENT_LENGTH");
+
+   if (str && !git_parse_ssize_t(str, &val))
+   die("failed to parse CONTENT_LENGTH: %s", str);
+   return val;
+}
+
+static ssize_t read_request(int fd, unsigned char **out, ssize_t req_len)
+{
+   if (req_len < 0)
+   return read_request_eof(fd, out);
+   else
+   return read_request_fixed_len(fd, req_len, out);
+}
+
+static void inflate_request(const char *prog_name, int out, int buffer_input, 
ssize_t req_len)
 {
git_zstream stream;
unsigned char *full_request = NULL;
@@ -344,7 +383,7 @@ static void inflate_request(const char *prog_name, int out, 
int buffer_input)
if (full_request)
n = 0; /* nothing left to read */
else
-   n = read_request(0, &full_request);
+   n = read_request(0, &full_request, req_len);
stream.next_in = full_request;
} else {
n = xread(0, in_buf, sizeof(in_buf));
@@ -380,10 +419,10 @@ static void inflate_request(const char *prog_name, int 
out, int buffer_input)
free(full_request);
 }
 
-static void copy_request(const char *prog_name, int out)
+static void copy_request(const char *prog_name, int out, ssize_t req_len)
 {
unsigned char *buf;
-   ssize_t n = read_request(0, 

Re: [PATCH v7 2/2] http-backend: respect CONTENT_LENGTH for receive-pack

2018-06-10 Thread Max Kirillov
On Tue, Jun 05, 2018 at 01:18:08AM +0300, Max Kirillov wrote:
> On Mon, Jun 04, 2018 at 12:44:09AM -0400, Jeff King wrote:
>> Since this is slightly less efficient, and because it only matters if
>> the web server does not already close the pipe, should this have a
>> run-time configuration knob, even if it defaults to
>> safe-but-slightly-slower?
> 
> Personally, I of course don't want this. Also, I don't think
> the difference is much noticeable. But you can never be sure
> without trying. I'll try to measure some numbers.

It seems to be challenging to see any effect on my system,
at least not with any real operation, because changing
references needs IO and index-pack needs CPU. I'll try
it some more.

>> We should probably say something more generic like:
>> 
>>   die_errno("unable to write to '%s'");
>> 
>> or similar.
> 
> Actually, it is already the 3rd occurrence of the same error in this file.
> Maybe it deserves some refactoring. I will change the message also.

Extracted the writing and refactoring to a single function,
also fixed the message.

>>> +cat >fetch_body <<EOF &&
>>> +0032want $hash_head
>>> +0032have $hash_prev
>>> +0009done
>>> +EOF
>> 
>> This depends on the size of the hash. That's always 40 for now, but is
>> something that may change soon.
>> 
>> We already have a packetize() helper; could we use it here?

> Could you point me to it? I cannot find it.

Sorry, misread it as packetSize. Found and used.
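For reference, a minimal sketch of the pkt-line framing that helper provides (git's test suite ships a fuller packetize(); the shape below is just enough to show why the hard-coded "0032" depends on the hash length):

```shell
# pkt-line framing: the leading 4 hex digits give the total line length,
# counting the 4-digit header itself and the trailing newline.
packetize () {
	payload="$1"
	printf '%04x%s\n' $((4 + ${#payload} + 1)) "$payload"
}

# "want " plus a 40-hex sha1 is 45 bytes; 45 + 4 + 1 = 50 = 0x32. A longer
# hash changes the prefix, which is why hard-coding "0032" is fragile.
packetize "want 0000000000000000000000000000000000000000"
# -> 0032want 0000000000000000000000000000000000000000
packetize "done"
# -> 0009done
```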

>> Also, do we need to protect ourselves against other signals being
>> delivered? E.g., if I resize my xterm and this process gets SIGWINCH, is
>> it going to erroneously end the sleep and say "nope, no exited signal"?

> I'll check, but what could I do? Should I add blocking other
> signals there?

In my Linux I don't see the signal. Other than that, there seem to
be not that many ignored signals. Anyway, I don't see what
could be done about it.


[PATCH v8 1/3] http-backend: cleanup writing to child process

2018-06-10 Thread Max Kirillov
As explained in [1], we should not assume the reason why the writing has
failed; even if the reason is that the child has exited, we do not know
why it has done so. So instead just say that the writing has failed.

[1] https://public-inbox.org/git/20180604044408.gd14...@sigill.intra.peff.net/

Signed-off-by: Max Kirillov 
---
 http-backend.c | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/http-backend.c b/http-backend.c
index 88d2a9bc40..206dc28e07 100644
--- a/http-backend.c
+++ b/http-backend.c
@@ -278,6 +278,12 @@ static struct rpc_service *select_service(struct strbuf 
*hdr, const char *name)
return svc;
 }
 
+static void write_to_child(int out, const unsigned char *buf, ssize_t len, 
const char *prog_name)
+{
+   if (write_in_full(out, buf, len) < 0)
+   die("unable to write to '%s'", prog_name);
+}
+
 /*
  * This is basically strbuf_read(), except that if we
  * hit max_request_buffer we die (we'd rather reject a
@@ -360,9 +366,8 @@ static void inflate_request(const char *prog_name, int out, 
int buffer_input)
die("zlib error inflating request, result %d", 
ret);
 
n = stream.total_out - cnt;
-   if (write_in_full(out, out_buf, n) < 0)
-   die("%s aborted reading request", prog_name);
-   cnt += n;
+   write_to_child(out, out_buf, stream.total_out - cnt, 
prog_name);
+   cnt = stream.total_out;
 
if (ret == Z_STREAM_END)
goto done;
@@ -381,8 +386,7 @@ static void copy_request(const char *prog_name, int out)
ssize_t n = read_request(0, &buf);
if (n < 0)
die_errno("error reading request body");
-   if (write_in_full(out, buf, n) < 0)
-   die("%s aborted reading request", prog_name);
+   write_to_child(out, buf, n, prog_name);
close(out);
free(buf);
 }
-- 
2.17.0.1185.g782057d875



[PATCH v8 0/3] http-backend: respect CONTENT_LENGTH as specified by rfc3875

2018-06-10 Thread Max Kirillov
Did the fixes proposed for v7

Max Kirillov (3):
  http-backend: cleanup writing to child process
  http-backend: respect CONTENT_LENGTH as specified by rfc3875
  http-backend: respect CONTENT_LENGTH for receive-pack

 config.c   |   2 +-
 config.h   |   1 +
 help.c |   1 +
 http-backend.c | 100 +--
 t/t5562-http-backend-content-length.sh | 169 +
 t/t5562/invoke-with-content-length.pl  |  37 ++
 6 files changed, 295 insertions(+), 15 deletions(-)
 create mode 100755 t/t5562-http-backend-content-length.sh
 create mode 100755 t/t5562/invoke-with-content-length.pl

-- 
2.17.0.1185.g782057d875



[PATCH v8 3/3] http-backend: respect CONTENT_LENGTH for receive-pack

2018-06-10 Thread Max Kirillov
Push passes the data on to other commands, as described in
https://public-inbox.org/git/20171129032214.gb32...@sigill.intra.peff.net/

As it gets complicated to correctly track the data length, instead transfer
the data through the parent process and cut the pipe when the specified
length is reached. Do this only when CONTENT_LENGTH is set; otherwise pass
the input directly to the forked commands.

Add tests for cases:

* CONTENT_LENGTH is set, script's stdin has more data, with all combinations
  of variations: fetch or push, plain or compressed body, correct or truncated
  input.

* CONTENT_LENGTH is specified to a value which does not fit into ssize_t.

Helped-by: Junio C Hamano 
Signed-off-by: Max Kirillov 
---
 help.c |   1 +
 http-backend.c |  32 -
 t/t5562-http-backend-content-length.sh | 169 +
 t/t5562/invoke-with-content-length.pl  |  37 ++
 4 files changed, 237 insertions(+), 2 deletions(-)
 create mode 100755 t/t5562-http-backend-content-length.sh
 create mode 100755 t/t5562/invoke-with-content-length.pl

diff --git a/help.c b/help.c
index 60071a9bea..42600ca026 100644
--- a/help.c
+++ b/help.c
@@ -419,6 +419,7 @@ int cmd_version(int argc, const char **argv, const char 
*prefix)
else
printf("no commit associated with this build\n");
printf("sizeof-long: %d\n", (int)sizeof(long));
+   printf("sizeof-size_t: %d\n", (int)sizeof(size_t));
/* NEEDSWORK: also save and output GIT-BUILD_OPTIONS? */
}
return 0;
diff --git a/http-backend.c b/http-backend.c
index 0c9e9be2b7..28c07e7c2a 100644
--- a/http-backend.c
+++ b/http-backend.c
@@ -372,6 +372,8 @@ static void inflate_request(const char *prog_name, int out, 
int buffer_input, ss
unsigned char in_buf[8192];
unsigned char out_buf[8192];
unsigned long cnt = 0;
+   int req_len_defined = req_len >= 0;
+   size_t req_remaining_len = req_len;
 
memset(&stream, 0, sizeof(stream));
git_inflate_init_gzip_only(&stream);
@@ -386,8 +388,15 @@ static void inflate_request(const char *prog_name, int 
out, int buffer_input, ss
n = read_request(0, &full_request, req_len);
stream.next_in = full_request;
} else {
-   n = xread(0, in_buf, sizeof(in_buf));
+   ssize_t buffer_len;
+   if (req_len_defined && req_remaining_len <= 
sizeof(in_buf))
+   buffer_len = req_remaining_len;
+   else
+   buffer_len = sizeof(in_buf);
+   n = xread(0, in_buf, buffer_len);
stream.next_in = in_buf;
+   if (req_len_defined && n > 0)
+   req_remaining_len -= n;
}
 
if (n <= 0)
@@ -430,6 +439,23 @@ static void copy_request(const char *prog_name, int out, 
ssize_t req_len)
free(buf);
 }
 
+static void pipe_fixed_length(const char *prog_name, int out, size_t req_len)
+{
+   unsigned char buf[8192];
+   size_t remaining_len = req_len;
+
+   while (remaining_len > 0) {
+   size_t chunk_length = remaining_len > sizeof(buf) ? sizeof(buf) 
: remaining_len;
+   ssize_t n = xread(0, buf, chunk_length);
+   if (n < 0)
+   die_errno("Reading request failed");
+   write_to_child(out, buf, n, prog_name);
+   remaining_len -= n;
+   }
+
+   close(out);
+}
+
 static void run_service(const char **argv, int buffer_input)
 {
const char *encoding = getenv("HTTP_CONTENT_ENCODING");
@@ -456,7 +482,7 @@ static void run_service(const char **argv, int buffer_input)
 "GIT_COMMITTER_EMAIL=%s@http.%s", user, host);
 
cld.argv = argv;
-   if (buffer_input || gzipped_request)
+   if (buffer_input || gzipped_request || req_len >= 0)
cld.in = -1;
cld.git_cmd = 1;
if (start_command(&cld))
@@ -467,6 +493,8 @@ static void run_service(const char **argv, int buffer_input)
inflate_request(argv[0], cld.in, buffer_input, req_len);
else if (buffer_input)
copy_request(argv[0], cld.in, req_len);
+   else if (req_len >= 0)
+   pipe_fixed_length(argv[0], cld.in, req_len);
else
close(0);
 
diff --git a/t/t5562-http-backend-content-length.sh 
b/t/t5562-http-backend-content-length.sh
new file mode 100755
index 00..8040d80e04
--- /dev/null
+++ b/t/t5562-http-backend-content-length.sh
@@ -0,0 +1,169 @@
+#!/bin/sh
+
+test_description='test git-http-backend respects CONTENT_LENGTH'
+. ./test-lib.sh
+
+test_lazy_prereq GZIP 'gzip --version'
+
+verify_http_result() {
+   # sometimes there is a fatal error but the result is 

[PATCH] fetch-pack: demonstrate --all breakage when remote have tags to non-commit objects

2018-06-10 Thread Kirill Smelkov
The added test shows a remote with two tag objects pointing to a blob and a
tree. The tag objects themselves are referenced from under the regular
refs/tags/* namespace. If test_expect_failure is changed to
test_expect_success, the test fails:

Initialized empty Git repository in 
/home/kirr/src/tools/git/git/t/trash directory.t5500-fetch-pack/fetchall/.git/
fatal: git upload-pack: not our ref 
038f48ad0beaffbea71d186a05084b79e3870cbf
fatal: The remote end hung up unexpectedly
not ok 56 - test --all wrt tag to non-commits
#
#   blob_sha1=$(echo "hello blob" | git hash-object -t blob 
-w --stdin) &&
#   git tag -a -m "tag -> blob" tag-to-blob $blob_sha1 &&
#   tree_sha1=$(echo -e "100644 blob $blob_sha1\tfile" | 
git mktree) &&
#   git tag -a -m "tag -> tree" tag-to-tree $tree_sha1 &&
#   mkdir fetchall &&
#   (
#   cd fetchall &&
#   git init &&
#   git fetch-pack --all .. &&
#   git cat-file blob $blob_sha1 >/dev/null &&
#   git cat-file tree $tree_sha1 >/dev/null
#   )

and manual investigation from under "trash 
directory.t5500-fetch-pack/fetchall/" shows:

.../git/t/trash directory.t5500-fetch-pack/fetchall$ git ls-remote ..
440858748ae905d48259d4fb67a12a7aa1520cf7HEAD
f85e353c1b377970afbb804118d9135948598eearefs/heads/A
440858748ae905d48259d4fb67a12a7aa1520cf7refs/heads/B
7f3cb539fbce926dd99233cfc9b6966f1d69747erefs/heads/C
b3401427a9637a35f6a203d635e5677e76ad409drefs/heads/D
4928b093c801d36be5cdb3ed3ab572fa0d4c93bfrefs/heads/E
c1375be492d3716839043d7f7e9a593f8e80c668refs/heads/F
f85e353c1b377970afbb804118d9135948598eearefs/tags/A
440858748ae905d48259d4fb67a12a7aa1520cf7refs/tags/B
7f3cb539fbce926dd99233cfc9b6966f1d69747erefs/tags/C
b3401427a9637a35f6a203d635e5677e76ad409drefs/tags/D
4928b093c801d36be5cdb3ed3ab572fa0d4c93bfrefs/tags/E
c1375be492d3716839043d7f7e9a593f8e80c668refs/tags/F
27f494dfb7e67d2f9cd2282404adf1d97581aa34refs/tags/OLDTAG
10e1d7b51cacf2f0478498681177f0e6f1e8392drefs/tags/TAGA1
f85e353c1b377970afbb804118d9135948598eearefs/tags/TAGA1^{}
f85e353c1b377970afbb804118d9135948598eearefs/tags/TAGA2
a540a4ddd2b16a9fe66e9539d5ec103c68052eaarefs/tags/TAGB1
9ca64d8fd8038b086badca1d11ccd8bbcfdeace1refs/tags/TAGB1^{}
9ca64d8fd8038b086badca1d11ccd8bbcfdeace1refs/tags/TAGB2
bc4e9e1fa80662b449805b1ac29fc9b1e4c49187refs/tags/tag-to-blob   
# <-- NOTE
038f48ad0beaffbea71d186a05084b79e3870cbfrefs/tags/tag-to-blob^{}
520db1f5e1afeaa12b1a8d73ce82db72ca036ee1refs/tags/tag-to-tree   
# <-- NOTE
7395c100223b7cd760f58ccfa0d3f3d2dd539bb6refs/tags/tag-to-tree^{}

.../git/t/trash directory.t5500-fetch-pack/fetchall$ git fetch-pack 
--all ..
fatal: A git upload-pack: not our ref 
038f48ad0beaffbea71d186a05084b79e3870cbf
fatal: The remote end hung up unexpectedly

however the remote has all referenced objects and is generally ok:

.../git/t/trash directory.t5500-fetch-pack/fetchall$ cd ..

.../git/t/trash directory.t5500-fetch-pack$ git show tag-to-blob
tag tag-to-blob
Tagger: C O Mitter 
Date:   Thu Apr 7 16:36:13 2005 -0700

tag -> blob
hello blob

.../git/t/trash directory.t5500-fetch-pack$ git show tag-to-tree
tag tag-to-tree
Tagger: C O Mitter 
Date:   Thu Apr 7 16:36:13 2005 -0700

tag -> tree

tree tag-to-tree

file

.../git/t/trash directory.t5500-fetch-pack$ git fsck
Checking object directories: 100% (256/256), done.

For reference, porcelain fetch 'refs/*:refs/origin/*' works:

.../git/t/trash directory.t5500-fetch-pack/fetchall$ git fetch .. 
'refs/*:refs/origin/*'
remote: Enumerating objects: 259, done.
remote: Counting objects: 100% (259/259), done.
remote: Compressing objects: 100% (89/89), done.
remote: Total 259 (delta 1), reused 0 (delta 0)
Receiving objects: 100% (259/259), 21.64 KiB | 1.97 MiB/s, done.
Resolving deltas: 100% (1/1), done.
From ..
 * [new branch]  A   -> refs/origin/heads/A
 * [new branch]  B   -> refs/origin/heads/B
 * [new branch]  C   -> refs/origin/heads/C
 * [new branch]  D   -> refs/origin/heads/D
 * [new branch]  E   -> refs/origin/heads/E

Good Morning

2018-06-10 Thread Cheng Chan



"Hello,
Please let me know if you can be part of our supply.Get back to me for  
more info if interested:chengsc...@gmail.com

From cheng chan",



Re: can not send mail

2018-06-10 Thread Ævar Arnfjörð Bjarmason
On Sun, Jun 10, 2018 at 2:04 AM, 陶青云  wrote:
> Sorry to intrude, but I can't send a patch to the maillist using
> qq.com and 163.com SMTP server.

Do you have a git with this patch: http://github.com/git/git/commit/5453b83bdf ?

It worked for 赵小强 on 163.com, maybe it'll work for you too.


Re: [PATCH 24/30] merge-recursive: Add computation of collisions due to dir rename & merging

2018-06-10 Thread René Scharfe
Am 10.06.2018 um 12:56 schrieb René Scharfe:
> Am 10.11.2017 um 20:05 schrieb Elijah Newren:
>> +static struct dir_rename_entry *check_dir_renamed(const char *path,
>> +  struct hashmap *dir_renames) {
>> +char temp[PATH_MAX];
>> +char *end;
>> +struct dir_rename_entry *entry;
>> +
>> +strcpy(temp, path);
>> +while ((end = strrchr(temp, '/'))) {
>> +*end = '\0';
>> +entry = dir_rename_find_entry(dir_renames, temp);
>> +if (entry)
>> +return entry;
>> +}
>> +return NULL;
>> +}
> 
> The value of PATH_MAX is platform-dependent, so it's easy to exceed when
> doing cross-platform development.  It's also not a hard limit on most
> operating systems, not even on Windows.  Further reading:
> 
> https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html
> 
> So using a fixed buffer is not a good idea, and writing to it without
> checking is dangerous.  Here's a fix:

Argh, I meant to reply to v10 of that patch, i.e. this:

   https://public-inbox.org/git/20180419175823.7946-21-new...@gmail.com/

The cited code wasn't changed and is in current master, though, so both
that part and my patch are still relevant.

René


Re: [PATCH 24/30] merge-recursive: Add computation of collisions due to dir rename & merging

2018-06-10 Thread René Scharfe
Am 10.11.2017 um 20:05 schrieb Elijah Newren:
> +static struct dir_rename_entry *check_dir_renamed(const char *path,
> +   struct hashmap *dir_renames) {
> + char temp[PATH_MAX];
> + char *end;
> + struct dir_rename_entry *entry;
> +
> + strcpy(temp, path);
> + while ((end = strrchr(temp, '/'))) {
> + *end = '\0';
> + entry = dir_rename_find_entry(dir_renames, temp);
> + if (entry)
> + return entry;
> + }
> + return NULL;
> +}

The value of PATH_MAX is platform-dependent, so it's easy to exceed when
doing cross-platform development.  It's also not a hard limit on most
operating systems, not even on Windows.  Further reading:

   https://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html

So using a fixed buffer is not a good idea, and writing to it without
checking is dangerous.  Here's a fix:

-- >8 --
Subject: [PATCH] merge-recursive: use xstrdup() instead of fixed buffer

Paths can be longer than PATH_MAX.  Avoid a buffer overrun in
check_dir_renamed() by using xstrdup() to make a private copy safely.

Signed-off-by: Rene Scharfe 
---
 merge-recursive.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/merge-recursive.c b/merge-recursive.c
index ac27abbd4c..db708176c5 100644
--- a/merge-recursive.c
+++ b/merge-recursive.c
@@ -2211,18 +2211,18 @@ static struct hashmap *get_directory_renames(struct 
diff_queue_struct *pairs,
 static struct dir_rename_entry *check_dir_renamed(const char *path,
  struct hashmap *dir_renames)
 {
-   char temp[PATH_MAX];
+   char *temp = xstrdup(path);
char *end;
-   struct dir_rename_entry *entry;
+   struct dir_rename_entry *entry = NULL;
 
-   strcpy(temp, path);
while ((end = strrchr(temp, '/'))) {
*end = '\0';
entry = dir_rename_find_entry(dir_renames, temp);
if (entry)
-   return entry;
+   break;
}
-   return NULL;
+   free(temp);
+   return entry;
 }
 
 static void compute_collisions(struct hashmap *collisions,
-- 
2.17.1


pôžička (Loan)

2018-06-10 Thread Funding Trusts Finance
Greetings,

Money that is available to people. Get the money/loan you need from Funding 
Trusts Finance. We are private lenders/investors and we offer personal loans, 
start-up loans, educational/agricultural loans, real-estate/construction loans, 
property loans, loans without loans, mobile home loans, hard-money loans, 
secured/unsecured loans, loans, Jv capital/Rehab loans, Equity/refinancing 
loans etc. for interested parties from any country. We offer loans to 
national/international individuals at a low interest rate of 3%. Have you been 
refused by banks or other lenders? Scotwest Credit Union Limited is here to 
help you achieve your goal. If you need a loan of any kind, contact us via the 
e-mail address below and we are here to help you get what you need: 
i...@fundingtrustsfinance.com

Your full names:
Address:
Phone number:
Loan amount required:
Duration:
Occupation:
Monthly income level:
Gender:
Date of birth:
State:
Country:
Postal code:
Purpose:

Once you provide this information, we can give you the loan; repayments are 
made on a monthly basis.

We await your prompt response.

Thank you.
Best regards,
Ronny Hens,
E-mail: i...@fundingtrustsfinance.com
WEBSITE: www.fundingtrustfinance.com


Re: [RFC PATCH 0/2] contrib/credential/netrc Makefile & test improvements

2018-06-10 Thread Ævar Arnfjörð Bjarmason


On Sun, Jun 10 2018, Todd Zullinger wrote:

>> I added 'use autodie;' without realizing it had external dependencies.
>> According to the documentation
>> http://perldoc.perl.org/autodie.html
>> it's a pragma since perl 5.10.1
>> Removing 'use autodie;' should be fine: it's not critical.
>
> I should clarify that part of why autodie isn't in my build
> environment is that the Fedora and RHEL7+ perl packages
> split out many modules which are shipped as part of the core
> perl tarball.  So while all the platforms I care about have
> perl >= 5.10.1, the Fedora and newer RHEL systems have the
> autodie module in a separate package.
>
> That said, the INSTALL docs still suggest that we only
> require perl >= 5.8, so if autodie is easily removed, that
> would probably be a good plan.

The intent of those docs was and still is to say "5.8 and the modules it
ships with".

This was discussed when 2.17.0 was released with my changes to make us
unconditionally use Digest::MD5:
https://public-inbox.org/git/87fu50e0ht@evledraar.gmail.com/

As noted there that's not some dogmatic "RedHat altered perl so we don't
care about them", but rather that in practice this doesn't impact users
negatively, so I think it's fine.

> Ævar brought up bumping the minimum supported perl to 5.10.0
> last December, in <20171223174400.26668-1-ava...@gmail.com>
> (https://public-inbox.org/git/20171223174400.26668-1-ava...@gmail.com/).
> The general consensus seemed positive, but I don't think
> it's happened.  Even so, that was 5.10.0, not the 5.10.1
> which added autodie.

Right, this doesn't apply to autodie. Looking at
https://www.cpan.org/ports/binaries.html there were a lot of releases
that had 5.10.0, not *.1.

Also, git-credential-netrc is in contrib; I don't know if that warrants
treating it differently. I don't use it myself, and don't know how much
it's "not really contrib" in practice (e.g. like the bash completion,
which is installed everywhere...).

But yeah, skimming the code it would be easy to remove the dependency.