Re: Retrieving a file in git that was deleted and committed

2018-12-07 Thread Jeff King
On Fri, Dec 07, 2018 at 01:50:57PM -0800, biswaranjan panda wrote:

> Thanks Jeff and Bryan! However, I am curious: if there were a way
> to tell git blame to skip a commit (the one which added the file again
> and maybe the one which deleted it originally) while it walks back
> through history, then it should just get back the
> entire history, right?

Not easily. ;)

You can feed a set of revisions to git-blame with the "-S" option, but I
don't offhand know how it handles diffs (I think it would have to still
diff each commit against its parent, since history is non-linear, and a
list is inherently linear). You might want to experiment with that.

Other than that, you can play with git-replace to produce a fake
history, as if the deletion never happened. But note that will affect
all commands, not just one particular blame. It might be a neat way to
play with blame, but I doubt I'd leave the replacement in place in the
long term.
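
For example (a sketch with hypothetical commit names -- say $READD is
the commit that re-added the file, and $DELETE the one that deleted
it):

  # pretend the re-adding commit's parent is the last commit that
  # still had the file, so blame can walk past the gap
  git replace --graft $READD $DELETE^
  git blame path/to/file

  # and drop the replacement when you're done playing
  git replace -d $READD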

-Peff


Re: Difficulty with parsing colorized diff output

2018-12-07 Thread Jeff King
On Fri, Dec 07, 2018 at 07:09:58PM -0500, George King wrote:

> My goal is to detect SGR color sequences, e.g. '\x1b[32m', that exist
> in the source text, and have my highlighter print escaped
> representations of those. For example, I have checked in files that
> are expected test outputs for tools that emit color codes, and diffs
> of those get very confusing.
> 
> Figuring out which color codes are from the source text and which were
> added by git is proving very difficult. The obvious solution is to
> turn git diff coloring off, but as far as I can see this also turns
> off all coloring for logs, which is undesirable.

Another gotcha that I don't think you've hit yet: whitespace
highlighting can color arbitrary parts of the line.

Try this, for example:

  printf 'foo\n' >old
  printf '\t \tfoo\n' >new
  git diff --no-index old new

So I'm not sure what you want to do can ever be completely robust. That
said, I'm not opposed to making Git's color output more regular. It
seems like a reasonable cleanup on its own.

(For the Git project itself, we long ago realized that putting raw color
codes into the source is a big pain when working with diffs, and we
instead use tools like t/test-lib-functions.sh's test_decode_color).

> * Context lines do not begin with reset code, but do end with a reset
> code. It would be preferable in my opinion if they had both (like
> every other line), or none at all.

Context lines do have both. It's just that the default color for context
lines is empty. ;)
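
(You can see that by giving them a real color; e.g., something like:

  git -c color.diff.context=blue diff --color=always | cat -v

should show context lines with both an opening color and a closing
reset.)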

But yes, I think it might be reasonable to recognize when an opening
color is empty, and turn the closing reset into a noop in that case (or
I guess you are also advocating the opposite: turn an empty color into a
noop \x1b[m or similar).

I think most of the coloring, including context lines, is coming from
diff.c:emit_diff_symbol_from_struct(). Instead of unconditionally
calling:

  context = diff_get_color_opt(o, DIFF_CONTEXT);
  reset = diff_get_color_opt(o, DIFF_RESET);

I suspect we could have a helper like this:

  static const char *diff_get_reset(const char *color)
  {
if (!color || !*color)
return "";
return diff_colors[DIFF_RESET];
  }
  ...
  context = diff_get_color_opt(o, DIFF_CONTEXT);
  reset = diff_get_reset(context);

> * Added lines have excess codes after the plus sign. The entire prefix
> is, `\x1b[32m+\x1b[m\x1b[32m` translating to GREEN PLUS RESET GREEN.
> Emitting codes after the plus sign makes the parsing more complex and
> idiosyncratic.

Yeah, I've run into this one, too (when working on diff-highlight, which
similarly tries to pass-through Git's existing colors).

It's unfortunately not quite trivial, due to some interactions with
whitespace coloring. See this old patch:

  https://public-inbox.org/git/20101109220136.ga5...@sigill.intra.peff.net/

and the followup:

  https://public-inbox.org/git/20101118064050.ga12...@sigill.intra.peff.net/

> * Add a config feature to turn on log coloring while leaving diff coloring 
> off.

Yes, the fact that --pretty log coloring is tied to diffs is mostly a
historical artifact. I think it would be OK to introduce a color.log
config option that defaults to the value of color.diff if unset. That
would be backwards compatible, but allow you to set them independently
if you wanted to.
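
I.e., something like this (keeping in mind that color.log is
hypothetical and does not exist yet):

  git config color.diff false   # no colors on diff output
  git config color.log true     # but keep coloring --pretty log output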

-Peff


Re: [PATCH v2 2/3] commit-graph: fix buffer read-overflow

2018-12-07 Thread Jeff King
On Thu, Dec 06, 2018 at 12:20:54PM -0800, Josh Steadmon wrote:

> diff --git a/commit-graph.c b/commit-graph.c
> index 07dd410f3c..224a5f161e 100644
> --- a/commit-graph.c
> +++ b/commit-graph.c
> @@ -165,10 +165,20 @@ struct commit_graph *parse_commit_graph(void *graph_map, int fd,
>   last_chunk_offset = 8;
>   chunk_lookup = data + 8;
>   for (i = 0; i < graph->num_chunks; i++) {
> - uint32_t chunk_id = get_be32(chunk_lookup + 0);
> - uint64_t chunk_offset = get_be64(chunk_lookup + 4);
> + uint32_t chunk_id;
> + uint64_t chunk_offset;
>   int chunk_repeated = 0;
>  
> + if (chunk_lookup + GRAPH_CHUNKLOOKUP_WIDTH >
> + data + graph_size) {
> + error(_("chunk lookup table entry missing; graph file 
> may be incomplete"));
> + free(graph);
> + return NULL;
> + }

Is it possible to overflow the addition here? E.g., if I'm on a 32-bit
system and the truncated chunk appears right at the 4GB limit, in which
case we wrap back around? I guess that's pretty implausible, since it
would mean that the mmap is bumping up against the end of the address
space. I didn't check, but I wouldn't be surprised if sane operating
systems avoid allocating those addresses.

But I think you could write this as:

  if (data + graph_size - chunk_lookup < GRAPH_CHUNKLOOKUP_WIDTH)

to avoid overflow (we know that "data + graph_size" is sane because
that's our mmap, and chunk_lookup is somewhere between "data" and "data
+ graph_size", so the result is between 0 and graph_size).

I dunno. I think I've convinced myself it's a non-issue here, but it may
be good to get in the habit of writing these sorts of offset checks in
an overflow-proof order.

-Peff


Re: [PATCH on sb/more-repo-in-api] revision: use commit graph in get_reference()

2018-12-07 Thread Jeff King
On Thu, Dec 06, 2018 at 03:54:46PM -0800, Jonathan Tan wrote:

> This makes sense - I thought I shouldn't mention the commit graph in the
> code since it seems like a layering violation, but I felt the need to
> mention commit graph in a comment, so maybe the need to mention commit
> graph in the code is there too. Subsequently, maybe the lookup-for-type
> could be replaced by a lookup-in-commit-graph (maybe by using
> parse_commit_in_graph() directly), which should be at least slightly
> faster.

That makes more sense to me. If we don't have a commit graph at all,
it's a quick noop. If we do, we might binary search in the list of
commits for a non-commit. But that's strictly faster than finding the
object's type (which involves a binary search of a larger list, followed
by actually accessing the type info).

> > In general, it would be nice if we had a more incremental API
> > for accessing objects: open, get metadata, then read the data. That
> > would make these kinds of optimizations "free".
> 
> Would this be assuming that to read the data, you would (1) first need to
> read the metadata, and (2) there would be no redundancy in reading the
> two? It seems to me that for loose objects, you would want to perform
> all your reads at once, since any read requires opening the file, and
> for commit graphs, you just want to read what you want, since the
> metadata and the data are in separate places.

By metadata here, I don't mean the commit-graph data, but just the
object type and size. So I'm imagining an interface more like:

  - object_open() locates the object, and stores either the pack
file/offset or a descriptor to a loose path in an opaque handle
struct

  - object_size() and object_type() on that handle would do what you
expect. For loose objects, these would parse the header (the
equivalent of unpack_sha1_header()). For packed ones, they'd use the
object header in the pack (and chase down the delta bits as needed).

  - object_contents() would return the full content

  - object_read() could sequentially read a subset of the file (this
could replace the streaming interface we currently have)

We have most of the low-level bits for this already, if you poke into
what object_info_extended() is doing. We just don't have them packaged
in an interface which can persist across multiple calls.

With an interface like that, parse_object()'s large-blob check could be
something like the patch below.

But your case here is a bit more interesting. If we have a commit graph,
then we can avoid opening (or even finding!) the on-disk object at all.
So I actually think it makes sense to just check the commit-graph first,
as discussed above.

---
diff --git a/object.c b/object.c
index e54160550c..afce58c0bc 100644
--- a/object.c
+++ b/object.c
@@ -254,23 +254,31 @@ struct object *parse_object(struct repository *r, const struct object_id *oid)
const struct object_id *repl = lookup_replace_object(r, oid);
void *buffer;
struct object *obj;
+   struct object_handle oh;
 
obj = lookup_object(r, oid->hash);
if (obj && obj->parsed)
return obj;
 
-   if ((obj && obj->type == OBJ_BLOB && has_object_file(oid)) ||
-   (!obj && has_object_file(oid) &&
-oid_object_info(r, oid, NULL) == OBJ_BLOB)) {
-   if (check_object_signature(repl, NULL, 0, NULL) < 0) {
+   if (object_open(&oh, oid) < 0)
+   return NULL; /* missing object */
+
+   if (object_type(&oh) == OBJ_BLOB) {
+   /* this will call object_read() on 4k chunks */
+   if (check_object_signature_stream(&oh, oid)) {
error(_("sha1 mismatch %s"), oid_to_hex(oid));
return NULL;
}
+   object_close(&oh); /* we don't care about contents */
parse_blob_buffer(lookup_blob(r, oid), NULL, 0);
return lookup_object(r, oid->hash);
}
 
-   buffer = read_object_file(oid, &type, &size);
+   type = object_type(&oh);
+   size = object_size(&oh);
+   buffer = object_contents(&oh);
+   object_close(&oh);
+
if (buffer) {
if (check_object_signature(repl, buffer, size, type_name(type)) < 0) {
free(buffer);


Re: Retrieving a file in git that was deleted and committed

2018-12-06 Thread Jeff King
On Thu, Dec 06, 2018 at 11:07:00PM -0800, biswaranjan panda wrote:

> Thanks! Strangely git log --follow does work.

I suspect it would work even without --follow. When you limit a log
traversal with a pathspec, like:

  git log foo

that is not about following some continuous stream of content, but
rather just applying that pathspec to the diff of each commit, and
pruning ones where it did not change. So even if there are gaps where
the file did not exist, we continue to apply the pathspec to the older
commits.
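
You can see it with a throwaway repo:

  git init -q repo && cd repo
  echo one >file && git add file && git commit -qm add
  git rm -q file && git commit -qm delete
  echo two >file && git add file && git commit -qm re-add
  git log --oneline -- file   # shows all three commits despite the gap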

Tools like git-blame will _not_ work, though, as they really are trying
to track the content as they walk back through history. And once all of
the content appears from nowhere in your new commit, that looks like a
dead end.

In theory there could be some machine-readable annotation in the commit
object (or in a note created after the fact) to say "even though 'foo'
is a new file here, it came from $commit:foo".  And then git-blame could
keep following the content there. But such a feature does not yet exist.

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-06 Thread Jeff King
On Fri, Dec 07, 2018 at 12:10:05AM +0100, SZEDER Gábor wrote:

> On Wed, Dec 05, 2018 at 04:56:21PM -0500, Jeff King wrote:
> > Could we just kill them all?
> > 
> > I guess it's a little tricky, because $! is going to give us the pid of
> > each subshell. We actually want to kill the test sub-process. That takes
> > a few contortions, but the output is nice (every sub-job immediately
> > says "ABORTED ...", and the final process does not exit until the whole
> > tree is done):
> 
> It's trickier than that:
> 
>   - Nowadays our test lib translates SIGINT to exit, but not TERM (or
> HUP, in case a user decides to close the terminal window), which
> means that if the test runs any daemons in the background, then
> those won't be killed when the stress test is aborted.
> 
> This is easy enough to address, and I've been meaning to do this
> in an other patch series anyway.

Yeah, trapping on more signals seems reasonable (or just propagating INT
instead of TERM). I'm more worried about this one:

>   - With the (implied) '--verbose-log' the process killed in the
> background subshell's SIGTERM handler is not the actual test
> process, because '--verbose-log' runs the test again in a piped
> subshell to save its output:

Hmm, yeah. The patch I sent earlier already propagates through the
subshell, so this is really just another layer there. I.e., would it be
reasonable when we are using --verbose-log (or --tee) for the parent
process to propagate signals down to the child it spawned?

That would fix this case, and it would make things more predictable in
general (e.g., a user who manually runs kill).

It does feel like a lot of boilerplate to be propagating these signals
manually. I think the "right" Unix-y solution here is process groups:
each sub-job should get its own group, and then the top-level runner
script can kill the whole group. We may run into portability issues,
though (setsid is in util-linux, but I don't know if there's an
equivalent elsewhere; a small perl or C helper could do it, but I have
no idea what that would mean on Windows).
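
Just to sketch the process-group idea (assuming the util-linux setsid;
the script name is only an example):

  # start each job as its own session/process-group leader
  setsid ./t0000-basic.sh >/dev/null 2>&1 &
  job_pids="$job_pids $!"

  # on abort, signal each whole group (note the negated pid)
  for pid in $job_pids
  do
          kill -TERM -- -$pid 2>/dev/null
  done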

>   (GIT_TEST_TEE_STARTED=done ${TEST_SHELL_PATH} "$0" "$@" 2>&1;
>    echo $? >"$TEST_RESULTS_BASE.exit") | tee -a "$GIT_TEST_TEE_OUTPUT_FILE"
> 
> That 'kill' kills the "outer" shell process.
> And though I get "ABORTED..." immediately and I get back my
> prompt right away, the commands involved in the above construct
> are still running in the background:
> 
>   $ ./t3903-stash.sh --stress=1
>   ^CABORTED  0.0
>   $ ps a |grep t3903
>   1280 pts/17   S  0:00 /bin/sh ./t3903-stash.sh --stress=1
>   1281 pts/17   S  0:00 tee -a <...>/test-results/t3903-stash.stress-0.out
>   1282 pts/17   S  0:00 /bin/sh ./t3903-stash.sh --stress=1
>   4173 pts/17   S+ 0:00 grep t3903
> 
> I see this behavior with all shells I tried.
> I haven't yet started to think it through why this happens :)

I think you get ABORTED because we are just watching for the
parent process to exit (and reporting its status). The fact that the
subshell (and the tee) are still running is not important. That all
makes sense to me.

> Not implying '--verbose-log' but redirecting the test script's
> output, like you did in your 'stress' script, seems to work in
> dash, ksh, and ksh93, but not in Bash v4.3 or later, where, for
> whatever reason, the subshell gets SIGINT before the SIGTERM would
> arrive.
> While we could easily handle SIGINT in the subshell with the same
> signal handler as SIGTERM, it bothers me that something
> fundamental is wrong here.

Yeah, I don't understand the SIGINT behavior you're seeing with bash. I
can't replicate it with bash 4.4.23 (from Debian unstable).

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-06 Thread Jeff King
On Thu, Dec 06, 2018 at 11:56:01PM +0100, SZEDER Gábor wrote:

> > +test_expect_success 'roll those dice' '
> > +   case "$(openssl rand -base64 1)" in
> > +   z*)
> > +   return 1
> > +   esac
> > +'
> 
> Wasteful :)
> 
>   test $(($$ % 42)) -ne 0

Oh, indeed, that is much more clever. :)

-Peff


[ANNOUNCE] Git Contributor Summit Registration, Jan 31, 2019, Brussels

2018-12-06 Thread Jeff King
Registration is now open for the Contributor Summit at Git Merge. To
recap from my earlier announcement[1]:

  When: Thursday, January 31, 2019. 10am-5pm.
  Where: The Egg[2], Brussels, Belgium
  What: Round-table discussion about Git
  Who: All contributors to Git or related projects in the Git ecosystem
   are invited; if you're not sure if you qualify, please ask!

Registering for the contributor summit requires a special code. Please
email me off-list to get one.

As with past years, the code will unlock a ticket for both the contrib
summit _and_ the main conference (on Friday, Feb 1st). So don't register
separately for the main conference. Also, as in past years, there are
two variants: a free ticket, and a €99 one. You are free to use either;
if you choose to pay, the money goes entirely to the Software Freedom
Conservancy.

If you'd like to come but need financial assistance with travel costs,
please reach out to the Git PLC at g...@sfconservancy.org. And please do
so soon (let's say by Dec 13th), so we can make decisions and people can
book their travel.

-Peff

[1] https://public-inbox.org/git/20181109104202.ga8...@sigill.intra.peff.net/

[2] This is the same venue as 2017: https://goo.gl/maps/E36qCGJhK8J2


Re: git, monorepos, and access control

2018-12-06 Thread Jeff King
On Thu, Dec 06, 2018 at 10:17:24AM +0100, Ævar Arnfjörð Bjarmason wrote:

> > The other major user of that feature I can think of is LFS. There Git
> > ends up diffing the LFS pointers, not the big files. Which arguably is
> > the wrong thing (you'd prefer to see the actual file contents diffed),
> > but I think nobody cares in practice because large files generally don't
> > have readable diffs anyway.
> 
> I don't use this either, but I can imagine people who use binary files
> via clean/smudge would be well served by dumping out textual metadata of
> the file for diffing instead of showing nothing.
> 
> E.g. for a video file I might imagine having lines like:
> 
> duration-seconds: 123
> camera-model: Shiny Thingamabob
> 
> Then when you check in a new file your "git diff" will show (using
> normal diff view) that:
> 
>- duration-seconds: 123
>+ duration-seconds: 321
> camera-model: Shiny Thingamabob

I think that's orthogonal to clean/smudge, though. Neither the in-repo
nor on-disk formats are going to show that kind of output. For that
you'd want a separate textconv filter (and fun fact: showing exif data
was actually the original use case for which I wrote textconv).

If you are using something like LFS, using textconv on top is a little
trickier, because we'd always feed the filter the LFS pointer file, not
the actual data contents. Doing the "reversal" that Junio suggested
would fix that. Or with the code as it is, you can simply define your
filter to convert the LFS pointer data into the real content. I don't
really use LFS, but it looks like:

  [diff "mp4"]
  textconv = git lfs smudge | extract-metadata

would probably work.

-Peff


Re: git, monorepos, and access control

2018-12-05 Thread Jeff King
On Wed, Dec 05, 2018 at 11:42:09PM +, Coiner, John wrote:

> > For instance, Git is very eager to try to find delta-compression
> > opportunities between objects, even if they don't have any relationship
> > within the tree structure. So imagine I want to know the contents of
> > tree X. I push up a tree Y similar to X, then fetch it back, falsely
> > claiming to have X but not Y. If the server generates a delta, that may
> > reveal information about X (which you can then iterate to send Y', and
> > so on, treating the server as an oracle until you've guessed the content
> > of X).
> Another good point. I wouldn't have thought of either of these attacks. 
> You're scaring me (appropriately) about the risks of adding security to 
> a previously-unsecured interface. Let me push on the smudge/clean 
> approach and maybe that will bear fruit.

If you do look into that approach, check out how git-lfs works. In fact,
you might even be able to build around lfs itself. It's already putting
placeholder objects into the repository, and then faulting them in from
external storage. All you would need to do is lock down access to that
external storage, which is typically accessed via http.

(That all assumes you're OK with sharing the actual filenames with
everybody, and just restricting access to the blob contents. There's no
way to clean/smudge a whole subtree. For that you'd have to use
submodules).

-Peff


Re: git, monorepos, and access control

2018-12-05 Thread Jeff King
On Thu, Dec 06, 2018 at 10:08:57AM +0900, Junio C Hamano wrote:

> Jeff King  writes:
> 
> > In my opinion this feature is so contrary to Git's general assumptions
> > that it's likely to create a ton of information leaks of the supposedly
> > protected data.
> > ...
> 
> Yup, with s/implemented/designed/, I agree all you said here
> (snipped).

Heh, yeah, I actually scratched my head over what word to use. I think
Git _could_ be written in a way that is both compatible with existing
repositories (i.e., is still recognizably Git) and is careful about
object access control. But either way, what we have now is not close to
that.

> > Sorry I don't have a more positive response. What you want to do is
> > perfectly reasonable, but I just think it's a mismatch with how Git
> > works (and because of the security impact, one missed corner case
> > renders the whole thing useless).
> 
> Yup, again.
> 
> Storing source files encrypted and decrypting with smudge filter
> upon checkout (and those without the access won't get keys and will
> likely use sparse checkout to exclude these privileged sources)
> is probably the only workaround that does not involve submodules.
> Viewing "diff" and "log -p" would still be a challenge, which
> probably could use the same filter as smudge for textconv.

I suspect there are going to be some funny corner cases there. I use:

  [diff "gpg"]
  textconv = gpg -qd --no-tty

which works pretty well, but it's for files which are _never_ decrypted
by Git. So they're encrypted in the working tree too, and I don't use
clean/smudge filters.

If the files are already decrypted in the working tree, then running
them through gpg again would be the wrong thing. I guess for a diff
against the working tree, we would always do a "clean" operation to
produce the encrypted text, and then decrypt the result using textconv.
Which would work, but is rather slow.

> I wonder (and this is the primary reason why I am responding to you)
> if it is common enough wish to use the same filter for smudge and
> textconv?  So far, our stance (which can be judged from the way the
> clean/smudge filters are named) has been that the in-repo
> representation is the canonical, and the representation used in the
> checkout is ephemeral, and that is why we run "diff", "grep",
> etc. over the in-repo representation, but the "encrypted in repo,
> decrypted in checkout" abuse would be helped by an option to do the
> reverse---find changes and look for substrings in the representation
> used in the checkout.  I am not sure if there are other use cases
> that are helped by such an option.

Hmm. Yeah, I agree with your line of reasoning here. I'm not sure how
common it is. This is the first I can recall it. And personally, I have
never really used clean/smudge filters myself, beyond some toy
experiments.

The other major user of that feature I can think of is LFS. There Git
ends up diffing the LFS pointers, not the big files. Which arguably is
the wrong thing (you'd prefer to see the actual file contents diffed),
but I think nobody cares in practice because large files generally don't
have readable diffs anyway.

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-05 Thread Jeff King
On Thu, Dec 06, 2018 at 09:22:23AM +0900, Junio C Hamano wrote:

> > So the right number of waits is either "1" or "2". Looping means we do
> > too many (which is mostly a harmless noop) or too few (in the off chance
> > that you have only a single job ;) ). So it works out in practice.
> 
> Well, if you time your ^C perfectly, no number of waits is right, I
> am afraid.  You spawn N processes and while looping N times waiting
> for them, you can ^C out of wait before these N processes all die,
> no?

Each "wait" will try to collect all processes, but may be interrupted by
a signal. So the correct number is actually "1 plus the number of times
the user hits ^C". I had assumed the user was just hitting it once,
though putting the wait into the trap means we do that "1 plus" thing
anyway.

I could also see an argument that subsequent ^C's should exit
immediately, but I think we are getting well into the realm of
over-engineering.

-Peff


Re: [PATCH 2/3] test-lib-functions: introduce the 'test_set_port' helper function

2018-12-05 Thread Jeff King
On Wed, Dec 05, 2018 at 01:20:35PM +0100, SZEDER Gábor wrote:

> On Wed, Dec 05, 2018 at 12:17:09AM -0500, Jeff King wrote:
> > On Tue, Dec 04, 2018 at 05:34:56PM +0100, SZEDER Gábor wrote:
> > 
> > > Several test scripts run daemons like 'git-daemon' or Apache, and
> > > communicate with them through TCP sockets.  To have unique ports where
> > > these daemons are accessible, the ports are usually the number of the
> > > corresponding test scripts, unless the user overrides them via
> > > environment variables, and thus all those tests and test libs contain
> > > more or less the same bit of one-liner boilerplate code to find out
> > > the port.  The last patch in this series will make this a bit more
> > > complicated.
> > > 
> > > Factor out finding the port for a daemon into the common helper
> > > function 'test_set_port' to avoid repeating ourselves.
> > 
> > OK. Looks like this should keep the same port numbers for all of the
> > existing tests, which I think is good.
> 
> Not for all existing tests, though: t0410 and the 'git p4' tests do
> get different ports.

Yeah, I should have said "most". The important thing is that they remain
static and predictable for normal non-stress runs, which they do.

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-05 Thread Jeff King
On Wed, Dec 05, 2018 at 03:01:06PM +0100, SZEDER Gábor wrote:

> > >   - Make '--stress' imply '--verbose-log' and discard the test's
> > > standard ouput and error; dumping the output of several parallel
> > > tests to the terminal would create a big ugly mess.
> > 
> > Makes sense. My script just redirects the output to a file, but it
> > predates --verbose-log.
> > 
> > My script always runs with "-x". I guess it's not that hard to add it if
> > you want, but it is annoying to finally get a failure after several
> > minutes only to realize that your are missing some important
> > information. ;)
> > 
> > Ditto for "-i", which leaves more readable output (you know the
> > interesting part is at the end of the log), and leaves a trash directory
> > state that is more likely to be useful for inspecting.
> 
> I wanted to imply only the bare minimum of options that are necessary
> to prevent the tests from stomping on each other.  Other than that I
> agree and wouldn't run '--stress' without '-x -i' myself, and at one
> point considered recommending '-x -i' in the description of
> '--stress'.

Yeah, it probably is the right thing to make the options as orthogonal
as possible, and let the user decide what they want. I was just
wondering if I could replace my "stress" script with this. But I think
I'd still want it to remember to use a sane set of options (including
"--root" in my case).

> > > 'wait' for all parallel jobs before exiting (either because a failure
> > > was found or because the user lost patience and aborted the stress
> > > test), allowing the still running tests to finish.  Otherwise the "OK
> > > X.Y" progress output from the last iteration would likely arrive after
> > > the user got back the shell prompt, interfering with typing in the
> > > next command.  OTOH, this waiting might induce a considerable delay
> > > between hitting ctrl-C and the test actually exiting; I'm not sure
> > > this is the right tradeoff.
> > 
> > If there is a delay, it's actually quite annoying to _not_ wait; you
> > start doing something in the terminal, and then a bunch of extra test
> > statuses print at a random time.
> 
> At least the INT/TERM/etc. handler should say something like "Waiting
> for background jobs to finish", to let the user know why it doesn't
> exit right away.

That seems reasonable. There's also no point in waiting for them to
finish if we know we're aborting anyway. Could we just kill them all?

I guess it's a little tricky, because $! is going to give us the pid of
each subshell. We actually want to kill the test sub-process. That takes
a few contortions, but the output is nice (every sub-job immediately
says "ABORTED ...", and the final process does not exit until the whole
tree is done):

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 9b7f687396..357dead3ff 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -98,8 +98,9 @@ done,*)
mkdir -p "$(dirname "$TEST_RESULTS_BASE")"
stressfail="$TEST_RESULTS_BASE.stress-failed"
rm -f "$stressfail"
-   trap 'echo aborted >"$stressfail"' TERM INT HUP
+   trap 'echo aborted >"$stressfail"; kill $job_pids 2>/dev/null; wait' TERM INT HUP
 
+   job_pids=
job_nr=0
while test $job_nr -lt "$job_count"
do
@@ -108,10 +109,13 @@ done,*)
GIT_TEST_STRESS_JOB_NR=$job_nr
export GIT_TEST_STRESS_STARTED GIT_TEST_STRESS_JOB_NR
 
+   trap 'kill $test_pid 2>/dev/null' TERM
+
cnt=0
while ! test -e "$stressfail"
do
-   if $TEST_SHELL_PATH "$0" "$@" >/dev/null 2>&1
+   $TEST_SHELL_PATH "$0" "$@" >/dev/null 2>&1 & 
test_pid=$!
+   if wait
then
printf >&2 "OK   %2d.%d\n" 
$GIT_TEST_STRESS_JOB_NR $cnt
elif test -f "$stressfail" &&
@@ -124,16 +128,11 @@ done,*)
fi
cnt=$(($cnt + 1))
done
-   ) &
+   ) & job_pids="$job_pids $!"
job_nr=$(($job_nr + 1))
done
 
-   job_nr=0
-   while test $job_nr -lt "$job_count"
-   do
-   wait
-   job_nr=$(($job_nr + 1))
-   done
+   wait
 
if test -f "$stressfail" && test "$(cat "$stressfail")" != "aborted"
then

> > > + elif test -f "$stressfail" &&
> > > +  test "$(cat "$stressfail")" = "aborted"
> > > + then
> > > + printf >&2 "ABORTED %2d.%d\n" 
> > > $GIT_TEST_STRESS_JOB_NR $cnt
> > > + else
> > > + printf >&2 "FAIL %2d.%d\n" 
> > > $GIT_TEST_STRESS_JOB_NR $cnt
> > > +  

Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-05 Thread Jeff King
On Wed, Dec 05, 2018 at 11:34:54AM +0100, SZEDER Gábor wrote:

> 
> Just a quick reply to this one point for now:
> 
> On Wed, Dec 05, 2018 at 12:44:09AM -0500, Jeff King wrote:
> > On Tue, Dec 04, 2018 at 05:34:57PM +0100, SZEDER Gábor wrote:
> > > + job_nr=0
> > > + while test $job_nr -lt "$job_count"
> > > + do
> > > + wait
> > > + job_nr=$(($job_nr + 1))
> > > + done
> > 
> > Do we need to loop? Calling "wait" with no arguments should wait for all
> > children.
> 
> It should, but in dash, ksh, ksh93, Bash v4.2 or older it doesn't seem
> to do so, at least not on my machine, and I get back my shell prompt
> right after hitting ctrl-C or the first failed test, and then get the
> progress output from the remaining jobs later.

That's weird. I run my stress script under dash with a single "wait",
and it handles the failing case just fine. And switching to a single
"wait" in your script works for me.

But the ^C case is interesting. Try running your script under "sh -x"
and hitting ^C. The signal interrupts the first wait. In my script (or
yours modified to use a single wait), we then proceed immediately to the
"exit", and get output from the subshells after we've exited.

But in your script with the loop, the first wait is interrupted, and
then the _second_ wait (in the second loop iteration) picks up all of
the exiting processes. The subsequent waits are all noops that return
immediately, because there are no processes left.

So the right number of waits is either "1" or "2". Looping means we do
too many (which is mostly a harmless noop) or too few (in the off chance
that you have only a single job ;) ). So it works out in practice.

But I think a more clear way to do it is to simply wait in the signal
trap, to cover the case when our original wait was aborted.

I.e., something like this:

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 9b7f687396..5e0c41626f 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -98,7 +98,7 @@ done,*)
mkdir -p "$(dirname "$TEST_RESULTS_BASE")"
stressfail="$TEST_RESULTS_BASE.stress-failed"
rm -f "$stressfail"
-   trap 'echo aborted >"$stressfail"' TERM INT HUP
+   trap 'echo aborted >"$stressfail"; wait' TERM INT HUP
 
job_nr=0
while test $job_nr -lt "$job_count"
@@ -128,12 +128,7 @@ done,*)
job_nr=$(($job_nr + 1))
done
 
-   job_nr=0
-   while test $job_nr -lt "$job_count"
-   do
-   wait
-   job_nr=$(($job_nr + 1))
-   done
+   wait
 
if test -f "$stressfail" && test "$(cat "$stressfail")" != "aborted"
then

which behaves well for me in all cases.

> Bash 4.3 or later are strange: I get back the shell prompt immediately
> after ctrl-C as well, so it doesn't appear to be waiting for all
> remaining jobs to finish either, but! I don't get any of the progress
> output from those jobs to mess up my next command.

Interesting. My bash 4.4 seems to behave the same as dash. It almost
sounds like the SIGINT is getting passed to the subshells for you.
Probably not really worth digging into, though.

In case anybody is playing with this and needs to simulate a stress
failure, here's what I used:

diff --git a/t/t0000-basic.sh b/t/t0000-basic.sh
index b6566003dd..a370cd9977 100755
--- a/t/t0000-basic.sh
+++ b/t/t0000-basic.sh
@@ -1171,4 +1171,11 @@ test_expect_success 'very long name in the index handled sanely' '
test $len = 4098
 '
 
+test_expect_success 'roll those dice' '
+   case "$(openssl rand -base64 1)" in
+   z*)
+   return 1
+   esac
+'
+
 test_done


Re: git, monorepos, and access control

2018-12-05 Thread Jeff King
On Wed, Dec 05, 2018 at 08:13:16PM +, Coiner, John wrote:

> One obstacle to moving AMD to git/VFSForGit is the lack of access 
> control support in git. AMD has a lot of data whose distribution must be 
> limited. Sometimes it's a legal requirement, eg. CPU core designs are 
> covered by US export control laws and not all employees may see them. 
> Sometimes it's a contractual obligation, as when a third party shares 
> data with us and we agree only to share this data with certain 
> employees. Any hypothetical AMD monorepo should be able to securely deny 
> read access in certain subtrees to users without required permissions.
> 
> Has anyone looked at adding access control to git, at a per-directory 
> granularity? Is this a feature that the git community would possibly 
> welcome?

In my opinion this feature is so contrary to Git's general assumptions
that it's likely to create a ton of information leaks of the supposedly
protected data.

For instance, Git is very eager to try to find delta-compression
opportunities between objects, even if they don't have any relationship
within the tree structure. So imagine I want to know the contents of
tree X. I push up a tree Y similar to X, then fetch it back, falsely
claiming to have X but not Y. If the server generates a delta, that may
reveal information about X (which you can then iterate to send Y', and
so on, treating the server as an oracle until you've guessed the content
of X).

You could work around that by teaching the server to refuse to use "X"
in any way when the client does not have the right permissions. But:

  - that's just one example; there are probably a number of such leaks

  - you're fighting an uphill battle against the way Git is implemented;
there's no single point to enforce this kind of segmentation

The model that fits more naturally with how Git is implemented would be
to use submodules. There you leak the hash of the commit from the
private submodule, but that's probably obscure enough (and if you're
really worried, you can add a random nonce to the commit messages in the
submodule to make their hashes unguessable).
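
(For the nonce, even something as simple as this sketch at commit time
would do:

  git commit -m "update" -m "nonce: $(openssl rand -hex 16)"

since the extra paragraph in the message is enough to randomize the
commit hash.)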

> I would welcome your feedback on whether this idea makes technical 
> sense, and whether the feature could ever be a fit for git.

Sorry I don't have a more positive response. What you want to do is
perfectly reasonable, but I just think it's a mismatch with how Git
works (and because of the security impact, one missed corner case
renders the whole thing useless).

-Peff


Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-05 Thread Jeff King
On Wed, Dec 05, 2018 at 07:41:44PM +0100, René Scharfe wrote:

> > If we start to convert those, there's a
> > little bit of a rabbit hole, but it's actually not too bad.
> 
> You don't need to crawl in just for quick_has_loose(), but eventually
> everything has to be converted.  It seems a bit much for one patch, but
> perhaps that's just my ever-decreasing attention span speaking.

Yeah, my normal process here is to dig to the bottom of the rabbit hole,
and then break it into actual patches. I just shared the middle state
here. ;)

I suspect the http bits could be split off into their own thing. The
bits in sha1-file.c I'd plan to mostly do all together, as they are
a family of related functions.

Mostly I wasn't sure how to wrap this up with the other changes. It's
obviously pretty invasive, and I don't want it to get in the way of
actual functional changes we've been discussing.

> > @@ -879,15 +879,15 @@ int git_open_cloexec(const char *name, int flags)
> >   * Note that it may point to static storage and is only valid until another
> >   * call to stat_sha1_file().
> >   */
> > -static int stat_sha1_file(struct repository *r, const unsigned char *sha1,
> > - struct stat *st, const char **path)
> > +static int stat_loose_object(struct repository *r, const struct object_id *oid,
> > +struct stat *st, const char **path)
> 
> Hmm, read_sha1_file() was renamed to read_object_file() earlier this
> year, and I'd have expected this to become stat_object_file().  Names..

read_object_file() is about reading an object from _any_ source. These
are specifically about loose objects, and I think that distinction is
important (both here and for map_loose_object, etc).

I'd actually argue that read_object_file() should just be read_object(),
but that already exists. Sigh. (I think it's fixable, but obviously
orthogonal to this topic).

> Anyway, the comment above and one a few lines below should be updated
> as well.

Thanks, fixed.

> >  static int open_sha1_file(struct repository *r,
> > - const unsigned char *sha1, const char **path)
> > + const struct object_id *oid, const char **path)
> 
> That function should lose the "sha1" in its name as well.

Yep, fixed.

> > -static void *unpack_sha1_rest(git_zstream *stream, void *buffer, unsigned long size, const unsigned char *sha1)
> > +static void *unpack_loose_rest(git_zstream *stream,
> > +  void *buffer, unsigned long size,
> > +  const struct object_id *oid)
> 
> Hmm, both old and new name here look weird to me at this point.

It makes more sense in the pairing of unpack_sha1_header() and
unpack_sha1_rest(). Maybe "body" would be better than "rest".

At any rate, it probably makes sense to rename them together (but I
didn't touch the "header" one here). Maybe the name changes should come
as a separate patch. I was mostly changing them here because I was
changing the signatures anyway, and had to touch all of the callers.

-Peff


Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-05 Thread Jeff King
On Wed, Dec 05, 2018 at 01:51:36AM -0500, Jeff King wrote:

> > This
> > function is easily converted to struct object_id, though, as its single
> > caller can pass one on -- this makes the copy unnecessary.
> 
> If you mean modifying sha1_loose_object_info() to take an oid, then
> sure, I agree that is a good step forward (and that is exactly the "punt
> until" moment I meant).

So the simple thing is to do that, and then have it pass oid->hash to
the other functions it uses. If we start to convert those, there's a
little bit of a rabbit hole, but it's actually not too bad.

Most of the spill-over is into the dumb-http code. Note that it actually
uses sha1 itself! That probably needs to be the_hash_algo (though I'm
not even sure how we'd negotiate the algorithm across a dumb fetch). At
any rate, I don't think this patch makes anything _worse_ in that
respect.

diff --git a/http-push.c b/http-push.c
index cd48590912..0141b0ad53 100644
--- a/http-push.c
+++ b/http-push.c
@@ -255,7 +255,7 @@ static void start_fetch_loose(struct transfer_request *request)
struct active_request_slot *slot;
struct http_object_request *obj_req;
 
-   obj_req = new_http_object_request(repo->url, request->obj->oid.hash);
+   obj_req = new_http_object_request(repo->url, &request->obj->oid);
if (obj_req == NULL) {
request->state = ABORTED;
return;
diff --git a/http-walker.c b/http-walker.c
index 0a392c85b6..29b59e2fe0 100644
--- a/http-walker.c
+++ b/http-walker.c
@@ -58,7 +58,7 @@ static void start_object_request(struct walker *walker,
struct active_request_slot *slot;
struct http_object_request *req;
 
-   req = new_http_object_request(obj_req->repo->base, obj_req->oid.hash);
+   req = new_http_object_request(obj_req->repo->base, &obj_req->oid);
if (req == NULL) {
obj_req->state = ABORTED;
return;
@@ -543,11 +543,11 @@ static int fetch_object(struct walker *walker, unsigned char *sha1)
} else if (req->zret != Z_STREAM_END) {
walker->corrupt_object_found++;
ret = error("File %s (%s) corrupt", hex, req->url);
-   } else if (!hasheq(obj_req->oid.hash, req->real_sha1)) {
+   } else if (!oideq(&obj_req->oid, &req->real_oid)) {
ret = error("File %s has bad hash", hex);
} else if (req->rename < 0) {
struct strbuf buf = STRBUF_INIT;
-   loose_object_path(the_repository, &buf, req->sha1);
+   loose_object_path(the_repository, &buf, &req->oid);
ret = error("unable to write sha1 filename %s", buf.buf);
strbuf_release(&buf);
}
diff --git a/http.c b/http.c
index 7cfa7a16e0..e95b5b9be0 100644
--- a/http.c
+++ b/http.c
@@ -2298,9 +2298,9 @@ static size_t fwrite_sha1_file(char *ptr, size_t eltsize, size_t nmemb,
 }
 
 struct http_object_request *new_http_object_request(const char *base_url,
-   unsigned char *sha1)
+   const struct object_id *oid)
 {
-   char *hex = sha1_to_hex(sha1);
+   char *hex = oid_to_hex(oid);
struct strbuf filename = STRBUF_INIT;
struct strbuf prevfile = STRBUF_INIT;
int prevlocal;
@@ -2311,10 +2311,10 @@ struct http_object_request *new_http_object_request(const char *base_url,
 
freq = xcalloc(1, sizeof(*freq));
strbuf_init(&freq->tmpfile, 0);
-   hashcpy(freq->sha1, sha1);
+   oidcpy(&freq->oid, oid);
freq->localfile = -1;
 
-   loose_object_path(the_repository, &filename, sha1);
+   loose_object_path(the_repository, &filename, oid);
strbuf_addf(&freq->tmpfile, "%s.temp", filename.buf);
 
strbuf_addf(&prevfile, "%s.prev", filename.buf);
@@ -2456,16 +2456,16 @@ int finish_http_object_request(struct http_object_request *freq)
}
 
git_inflate_end(&freq->stream);
-   git_SHA1_Final(freq->real_sha1, &freq->c);
+   git_SHA1_Final(freq->real_oid.hash, &freq->c);
if (freq->zret != Z_STREAM_END) {
unlink_or_warn(freq->tmpfile.buf);
return -1;
}
-   if (!hasheq(freq->sha1, freq->real_sha1)) {
+   if (!oideq(&freq->oid, &freq->real_oid)) {
unlink_or_warn(freq->tmpfile.buf);
return -1;
}
-   loose_object_path(the_repository, &filename, freq->sha1);
+   loose_object_path(the_repository, &filename, &freq->oid);
freq->rename = finalize_object_file(freq->tmpfile.buf, filename.buf);
strbuf_release(&filename);
 
diff --git a/http.h b/http.h
index d305ca1dc7..66c52b2e1e 100644
--- a/http.h
+++ b/http.h
@@ -224,8 +224,8 @@ struct http_object_request {
CURLcode curl_result;
char errorstr[CURL_ERROR_SIZE];
long http_code;
-   unsigned char sha1[20];
-   u

Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-04 Thread Jeff King
On Wed, Dec 05, 2018 at 07:02:17AM +0100, René Scharfe wrote:

> > I actually wrote it that way initially, but doing the hashcpy() in the
> > caller is a bit more awkward. My thought was to punt on that until the
> > rest of the surrounding code starts handling oids.
> 
> The level of awkwardness is the same to me, but sha1_loose_object_info()
> is much longer already, so it might feel worse to add it there. 

Right, what I meant was that in sha1_loose_object_info():

  if (special_case)
handle_special_case();

is easier to follow than a block setting up the special case and then
calling the function.

> This
> function is easily converted to struct object_id, though, as its single
> caller can pass one on -- this makes the copy unnecessary.

If you mean modifying sha1_loose_object_info() to take an oid, then
sure, I agree that is a good step forward (and that is exactly the "punt
until" moment I meant).

> > This patch looks sane. How do you want to handle it? I'd assumed your
> > earlier one would be for applying on top, but this one doesn't have a
> > commit message. Did you want me to squash down the individual hunks?
> 
> I'm waiting for the first one (object-store: use one oid_array per
> subdirectory for loose cache) to be accepted, as it aims to solve a
> user-visible performance regression, i.e. that's where the meat is.
> I'm particularly interested in performance numbers from Ævar for it.
> 
> I can send the last one properly later, and add patches for converting
> quick_has_loose() to struct object_id.  Those are just cosmetic..

Great, thanks. I just wanted to make sure these improvements weren't
going to slip through the cracks.

-Peff


Re: [PATCH 2/3] sha1-file: emit error if an alternate looks like a repository

2018-12-04 Thread Jeff King
On Wed, Dec 05, 2018 at 12:35:08PM +0900, Junio C Hamano wrote:

> Ævar Arnfjörð Bjarmason   writes:
> 
> > Since 26125f6b9b ("detect broken alternates.", 2006-02-22) we've
> > emitted an error if the alternates directory doesn't exist, but not
> > for the common misstep of adding a path to another git repository as
> > an alternate, as opposed to its "objects" directory.
> >
> > Let's check for this, i.e. whether X/objects or X/.git/objects exists
> > if the user supplies X and print an error (which as a commit leading
> > up to this one shows doesn't change the exit code, just "warns").
> 
> I agree that "Let's check for this" is a good idea, but do not
> necessarily agree with "i.e.".  Don't we have a helper that takes
> the path to an existing directory and answers "Yup, it does look
> like a Git repository"?  Using that is a lot more in line with what
> you claimed to do in the title for this patch.

Hmm. Yeah, one case this does not handle is when ".git" is a git-file
pointing elsewhere, which should trigger the condition, too.

I think we can afford to be a bit loose with this check if it's just
generating a warning for a case that would otherwise not work (and the
worst is that we might fail to correctly diagnose a broken setup). But
that I think points to another issue: this kicks in even if the path is
otherwise usable.

So if I had, say, a git repository whose worktree was full of objects and
packfiles, it currently works for me to point to that as an alternate.
But after this patch, we'd complain "wait, this looks like a git repo!".

So I'd much rather see the logic check first for something usable, and
only when we fail to find it, start doing a loose diagnosis. Something
like:

  if !is_directory($path)
complain that it does not exist, as now
  else if !is_directory($path/pack)
/*
 * it doesn't look like an object directory; technically it
 * _could_ just have loose objects, and maybe we ought to check
 * for directories matching [0-9a-f]{2}, though it seems
 * somewhat unlikely these days.
 */
if is_directory($path/objects) || exists($path/.git)
complain that it looks like a git dir
else
complain that it doesn't look like an object dir
  fi

Hmm. I managed to write a gross mix of C and shell for my pseudocode,
but hopefully you can read it. ;)

> I haven't read 3/3 yet, but as I said, I suspect it is reasonable to
> DWIM and use the object store associated with the directory we found
> to be a repository.

Yeah, I'm tempted by that, too, though I worry about corner cases. How
much effort should we put into discovery? I guess the rules from
enter_repo() would make sense, though the logic in that function would
need some refactoring to be reused elsewhere.

-Peff


Re: [PATCH 3/3] sha1-file: change alternate "error:" message to "warning:"

2018-12-04 Thread Jeff King
On Wed, Dec 05, 2018 at 12:37:58PM +0900, Junio C Hamano wrote:

> Ævar Arnfjörð Bjarmason   writes:
> 
> > Change the "error" message emitted by alt_odb_usable() to be a
> > "warning" instead. As noted in commits leading up to this one this has
> > never impacted the exit code ever since the check was initially added
> > in 26125f6b9b ("detect broken alternates.", 2006-02-22).
> >
> > It's confusing to emit an "error" when e.g. "git fsck" will exit with
> > 0, so let's emit a "warning:" instead.
> 
> OK, that sounds sensible.  An alternative that may be more sensible
> is to actually diagnose this as an error, but the purpose of this
> message is to help users diagnose a possible misconfiguration and
> keeping the repository "working" with the remaining set of object
> stores by leaving it as a mere warning, like this patch does, is
> probably a better approach.

Yeah, I think it's better to keep it as a warning. It's actually
reasonably likely to be benign (e.g., you did a "git repack -ad && rm
/path/to/alternate" to remove the dependency, but forgot to clean up the
alternates). And when it _is_ a problem, the object-reading code paths
will definitely let you know.

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 07:11:08PM +0100, Ævar Arnfjörð Bjarmason wrote:

> It's a frequent annoyance of mine in the test suite that I'm
> e.g. running t*.sh with some parallel "prove" in one screen, and then I
> run tABCD*.sh manually, and get unlucky because they use the same trash
> dir, and both tests go boom.
> 
> You can fix that with --root, which is much of what this patch does. My
> one-liner for doing --stress has been something like:
> 
> perl -E 'say ++$_ while 1' | parallel --jobs=100% --halt-on-error soon,fail=1 './t0000-basic.sh --root=trash-{} -v'
> 
> But it would be great if I didn't have to worry about that and could
> just run two concurrent:
> 
> ./t0000-basic.sh
> 
> So I think we could just set some env variable where instead of having
> the predictable trash directory we have a $TRASHDIR.$N as this patch
> does, except we pick $N by locking some test-runs/tABCD.Nth file with
> flock() during the run.

That actually sounds kind of annoying when doing a single run, since now
you have this extra ".N". I guess it would at least be predictable, and
I tend to tab-complete the trash dirs anyway.

I accomplish the same thing by doing my "big" runs using --root
specified in config.mak (which points to a RAM disk -- if you're not
using one to run the tests, you really should look into it, as it's way
faster). And then for one-offs, investigating failures, etc., I do "cd t
&& ./t0000-basic.sh -v -i", which naturally runs in the local directory.
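
The config.mak bit boils down to something like (the path is just an
illustration):

  # t/Makefile picks this up; point --root at a RAM disk
  GIT_TEST_OPTS = --root=/dev/shm/git-tests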

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 06:04:14PM +0100, Ævar Arnfjörð Bjarmason wrote:

> 
> On Tue, Dec 04 2018, SZEDER Gábor wrote:
> 
> > The number of parallel invocations is determined by, in order of
> > precedence: the number specified as '--stress=', or the value of
> > the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
> > available processors in '/proc/cpuinfo', or 8.
> 
> With this series we have at least 3 ways to get this number. First
> online_cpus() in the C code, then Peff's recent `getconf
> _NPROCESSORS_ONLN` in doc-diff, and now this /proc/cpuinfo parsing.

To be fair, I only added the "getconf" thing because somebody complained
that I was parsing /proc/cpuinfo, and suggested it instead. ;)

I don't think it's especially portable, but it should work on Linux and
modern BSD/macOS, which may be enough (unlike doc-diff, I'd be a little
more inclined to ask somebody on a random platform to stress test if
they're hitting a bug).

> Perhaps it makes sense to split online_cpus() into a helper to use from
> the shellscripts instead?

I think somebody proposed that in a recent thread for other reasons,
too. I'd be OK with that, but probably just using getconf is reasonable
in the meantime.
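
I.e., something like this in the script, matching the "twice the number
of processors, or 8" fallback above:

  nproc=$(getconf _NPROCESSORS_ONLN 2>/dev/null)
  job_count=$((2 * ${nproc:-4}))    # i.e., 8 if we cannot tell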

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 05:34:57PM +0100, SZEDER Gábor wrote:

> To prevent the several parallel invocations of the same test from
> interfering with each other:
> 
>   - Include the parallel job's number in the name of the trash
> directory and the various output files under 't/test-results/' as
> a '.stress-' suffix.

Makes sense.

>   - Add the parallel job's number to the port number specified by the
> user or to the test number, so even tests involving daemons
> listening on a TCP socket can be stressed.

In my script I just use an arbitrary sequence of ports. That means two
stress runs may stomp on each other, but stress runs tend not to
interfere with regular runs (whereas with your scheme, a stress run of
t5561 is going to stomp on t5562). It probably doesn't matter much
either way, as I tend not to do too much with the machine during a
stress run.

>   - Make '--stress' imply '--verbose-log' and discard the test's
> standard output and error; dumping the output of several parallel
> tests to the terminal would create a big ugly mess.

Makes sense. My script just redirects the output to a file, but it
predates --verbose-log.

My script always runs with "-x". I guess it's not that hard to add it if
you want, but it is annoying to finally get a failure after several
minutes only to realize that you are missing some important
information. ;)

Ditto for "-i", which leaves more readable output (you know the
interesting part is at the end of the log), and leaves a trash directory
state that is more likely to be useful for inspecting.

My script also implies "--root". That's not strictly necessary, though I
suspect it is much more likely to catch races when it's run on a ram
disk (simply because it leaves the CPU as the main bottleneck).

> 'wait' for all parallel jobs before exiting (either because a failure
> was found or because the user lost patience and aborted the stress
> test), allowing the still running tests to finish.  Otherwise the "OK
> X.Y" progress output from the last iteration would likely arrive after
> the user got back the shell prompt, interfering with typing in the
> next command.  OTOH, this waiting might induce a considerable delay
> between hitting ctrl-C and the test actually exiting; I'm not sure
> this is the right tradeoff.

If there is a delay, it's actually quite annoying to _not_ wait; you
start doing something in the terminal, and then a bunch of extra test
statuses print at a random time.

> + job_nr=0
> + while test $job_nr -lt "$job_count"
> + do
> + (
> + GIT_TEST_STRESS_STARTED=done
> + GIT_TEST_STRESS_JOB_NR=$job_nr
> + export GIT_TEST_STRESS_STARTED GIT_TEST_STRESS_JOB_NR
> +
> + cnt=0
> + while ! test -e "$stressfail"
> + do
> + if $TEST_SHELL_PATH "$0" "$@" >/dev/null 2>&1
> + then
> + printf >&2 "OK   %2d.%d\n" 
> $GIT_TEST_STRESS_JOB_NR $cnt

OK, this adds a counter for each job number (compared to my script).
Seems helpful.

> + elif test -f "$stressfail" &&
> +  test "$(cat "$stressfail")" = "aborted"
> + then
> + printf >&2 "ABORTED %2d.%d\n" 
> $GIT_TEST_STRESS_JOB_NR $cnt
> + else
> + printf >&2 "FAIL %2d.%d\n" 
> $GIT_TEST_STRESS_JOB_NR $cnt
> + echo $GIT_TEST_STRESS_JOB_NR 
> >>"$stressfail"
> + fi

Hmm. What happens if we see a failure _and_ there's an abort? That's
actually pretty plausible if you see a failure whiz by, and you want to
abort the remaining scripts because they take a long time to run.

If the abort happens first, then the goal is to assume any errors are
due to the abort. But there's a race between the individual jobs reading
$stressfail and the parent signal handler writing it. So you may end up
with "aborted\n5" or similar (after which any other jobs would fail to
match "aborted" and declare themselves failures, as well).  I think my
script probably also has this race, too (it doesn't check the abort case
explicitly, but it would race in checking "test -e fail").

If the fail happens first (which I think is the more likely case), then
the abort handler will overwrite the file with "aborted", and we'll
forget that there was a real failure. This works in my script because of
the symlinking (if a real failure symlinked $fail to a directory, then
the attempt to write "aborted" will just be a noop).

> + job_nr=0
> + while test $job_nr -lt "$job_count"
> + do
> + wait
> + job_nr=$(($job_nr + 1))
> + done

Do we need to loop? Calling "wait" with no arguments should wait for all
children.
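
For illustration, a minimal sketch of the no-loop form
(do_one_stress_job is a made-up stand-in for the job subshell above):

  job_nr=0
  while test $job_nr -lt "$job_count"
  do
	do_one_stress_job $job_nr &
	job_nr=$(($job_nr + 1))
  done
  wait	# no arguments: blocks until every background child has exited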


Re: [PATCH 2/3] test-lib-functions: introduce the 'test_set_port' helper function

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 05:34:56PM +0100, SZEDER Gábor wrote:

> Several test scripts run daemons like 'git-daemon' or Apache, and
> communicate with them through TCP sockets.  To have unique ports where
> these daemons are accessible, the ports are usually the number of the
> corresponding test scripts, unless the user overrides them via
> environment variables, and thus all those tests and test libs contain
> more or less the same bit of one-liner boilerplate code to find out
> the port.  The last patch in this series will make this a bit more
> complicated.
> 
> Factor out finding the port for a daemon into the common helper
> function 'test_set_port' to avoid repeating ourselves.

OK. Looks like this should keep the same port numbers for all of the
existing tests, which I think is good. As nice as choosing random high
port numbers might be for some cases, it can also be confusing when
there are random conflicts.

I've also run into non-random conflicts, but at least once you figure
them out they're predictable (the most confusing I've seen is that adb,
the android debugger, sometimes but not always leaves a daemon hanging
around on port 5561).

> Take special care of test scripts with "low" numbers:
> 
>   - Test numbers below 1024 would result in a port that's only usable
> as root, so set their port to '10000 + test-nr' to make sure it
> doesn't interfere with other tests in the test suite.  This makes
> the hardcoded port number in 't0410-partial-clone.sh' unnecessary,
> remove it.

This part is what made me think a bit more about conflicting with
dynamically allocated ports. Arguably the http parts of t0410 ought to
be in a much higher-numbered script, but I don't know that we've held
very well over the years to the idea that scripts should only depend on
the bits from lower-numbered scripts.
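
For readers following along, the shape of the helper is roughly this (a
sketch of my understanding, not the actual patch; it assumes test-lib's
$this_test and glosses over details like leading zeros in the test
number):

  test_set_port () {
	var=$1
	eval port=\"\${$var}\"
	if test -z "$port"
	then
		# derive the port from the script's number,
		# e.g. t5561-http-backend.sh => 5561
		port=${this_test#t}
		if test "$port" -lt 1024
		then
			port=$((10000 + $port))
		fi
	fi
	eval $var=$port
  }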

> ---
>  t/lib-git-daemon.sh  |  2 +-
>  t/lib-git-p4.sh  |  9 +
>  t/lib-git-svn.sh |  2 +-
>  t/lib-httpd.sh   |  2 +-
>  t/t0410-partial-clone.sh |  1 -
>  t/t5512-ls-remote.sh |  2 +-
>  t/test-lib-functions.sh  | 36 
>  7 files changed, 41 insertions(+), 13 deletions(-)

The patch itself looks good to me.

> + eval port=\"\${$var}\"
> + case "$port" in

The quotes in the eval'd variable assignment aren't necessary. Usually I
don't mind them in a simple:

  FOO="$bar"

even though they're redundant (and there were a few instances in the
prior patch, I think). But here the escaping makes it harder to read,
compared to:

  eval port=\$$var

It might be worth simplifying, but I don't feel strongly about it (we'd
run into problems if $var contained spaces with either variant, but I
don't think that's worth caring about).
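
For anyone puzzling over the expansion, a quick demo at the prompt:

  $ var=GIT_DAEMON_PORT
  $ GIT_DAEMON_PORT=5570
  $ eval port=\$$var	# expands to: port=$GIT_DAEMON_PORT
  $ echo $port
  5570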

-Peff


Re: [PATCH 1/3] test-lib: consolidate naming of test-results paths

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 05:34:55PM +0100, SZEDER Gábor wrote:

> There are two places where we strip off any leading path components
> and the '.sh' suffix from the test script's pathname, and there are
> two places where we construct the filename of test output files in
> 't/test-results/'.  The last patch in this series will add even more.
> 
> Factor these out into helper variables to avoid repeating ourselves.

Makes sense.

> +TEST_NAME="$(basename "$0" .sh)"
> +TEST_RESULTS_BASE="$TEST_OUTPUT_DIRECTORY/test-results/$TEST_NAME"

Hmm, since we are building up this BASE variable anyway, why not:

  TEST_RESULTS_DIR=$TEST_OUTPUT_DIRECTORY/test-results
  TEST_RESULTS_BASE=$TEST_RESULTS_DIR/$TEST_NAME

? That saves having to run `dirname` on it later.

I guess one could argue that saying "the directory name of the file I'm
writing" is more readable. I just generally try to avoid extra
manipulation of the strings when possible (especially in shell).
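
That is, with only TEST_RESULTS_BASE available, a caller that wants the
directory is stuck doing something like:

  TEST_RESULTS_DIR=$(dirname "$TEST_RESULTS_BASE")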

-Peff


Re: [PATCH on sb/more-repo-in-api] revision: use commit graph in get_reference()

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 02:42:38PM -0800, Jonathan Tan wrote:

> diff --git a/revision.c b/revision.c
> index b5108b75ab..e7da2c57ab 100644
> --- a/revision.c
> +++ b/revision.c
> @@ -212,7 +212,20 @@ static struct object *get_reference(struct rev_info 
> *revs, const char *name,
>  {
>   struct object *object;
>  
> - object = parse_object(revs->repo, oid);
> + /*
> +  * If the repository has commit graphs, repo_parse_commit() avoids
> +  * reading the object buffer, so use it whenever possible.
> +  */
> + if (oid_object_info(revs->repo, oid, NULL) == OBJ_COMMIT) {
> + struct commit *c = lookup_commit(revs->repo, oid);
> + if (!repo_parse_commit(revs->repo, c))
> + object = (struct object *) c;
> + else
> + object = NULL;
> + } else {
> + object = parse_object(revs->repo, oid);
> + }

This seems like a reasonable thing to do, but I have sort of a
meta-comment. In several places we've started doing this kind of "if
it's this type of object, do X, otherwise do Y" optimization (e.g.,
handling large blobs for streaming).

And in the many cases we end up doubling the effort to do object
lookups: here we do one lookup to get the type, and then if it's not a
commit (or if we don't have a commit graph) we end up parsing it anyway.

I wonder if we could do better. In this instance, it might make sense
to first see if we actually have a commit graph available (it might not
have this object, of course, but at least we'd expect it to have most
commits). In general, it would be nice if we had a more incremental API
for accessing objects: open, get metadata, then read the data. That
would make these kinds of optimizations "free".
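
As a sketch of that first idea (have_commit_graph() is a stand-in I'm
inventing here; the real check would have to ask the commit-graph
machinery whether a graph is loaded for the repository):

  if (have_commit_graph(revs->repo) &&
      oid_object_info(revs->repo, oid, NULL) == OBJ_COMMIT) {
	/* the cheap type lookup pays off: parse via the graph */
	struct commit *c = lookup_commit(revs->repo, oid);
	if (!repo_parse_commit(revs->repo, c))
		object = (struct object *)c;
	else
		object = NULL;
  } else {
	/* no graph: skip the extra lookup and just parse */
	object = parse_object(revs->repo, oid);
  }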

I don't have numbers for how much the extra lookups cost. The lookups
are probably dwarfed by parse_object() in general, so even if we save
only a few full object loads, it may be a win. It just seems a shame
that we may be making the "slow" paths (when our type-specific check
doesn't match) even slower.

-Peff


Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 10:45:13PM +0100, René Scharfe wrote:

> > The comment in the context there is warning callers to remember to load
> > the cache first. Now that we have individual caches, might it make sense
> > to change the interface a bit, and make these members private. I.e.,
> > something like:
> > 
> >   struct oid_array *odb_loose_cache(struct object_directory *odb,
> > int subdir_nr) 
> >   {
> > if (!loose_objects_subdir_seen[subdir_nr])
> > odb_load_loose_cache(odb, subdir_nr); /* or just inline it here 
> > */
> > 
> > return &odb->loose_objects_cache[subdir_nr];
> >   }
> 
> Sure.  And it should take an object_id pointer instead of a subdir_nr --
> less duplication, nicer interface (patch below).

I had considered that initially, but it's a little less flexible if a
caller doesn't actually have an oid. Though both of the proposed callers
do, so it's probably OK to worry about that if and when it ever comes up
(the most plausible thing in my mind is doing some abbrev-like analysis
without looking to abbreviate a _particular_ oid).

> It would be nice if it could return a const pointer to discourage
> messing up the cache, but that's not compatible with oid_array_lookup().

Yeah.

> And quick_has_loose() should be converted to object_id as well -- adding
> a function that takes a SHA-1 is a regression. :D

I actually wrote it that way initially, but doing the hashcpy() in the
caller is a bit more awkward. My thought was to punt on that until the
rest of the surrounding code starts handling oids.

> ---
>  object-store.h |  8 
>  sha1-file.c| 19 ---
>  sha1-name.c|  4 +---
>  3 files changed, 13 insertions(+), 18 deletions(-)

This patch looks sane. How do you want to handle it? I'd assumed your
earlier one would be for applying on top, but this one doesn't have a
commit message. Did you want me to squash down the individual hunks?

-Peff


Re: [ANNOUNCE] Git v2.20.0-rc2

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 06:48:22PM -0800, Stefan Beller wrote:

> > Perhaps we should note this more prominently, and since Brandon isn't at
> > Google anymore can some of you working there edit this post? It's the
> > first Google result for "git protocol v2", so it's going to be quite
> > confusing for people if after 2.20 the instructions in it no longer
> > work.
> 
> Thanks for pointing this out, we missed the implications when that patch was
> discussed. Looking into it.

I think if it just does "ls-remote --heads", that would still prove the
point (it would omit all of the tags).

-Peff


Re: [PATCH] t/lib-git-daemon: fix signal checking

2018-12-03 Thread Jeff King
On Mon, Nov 26, 2018 at 09:03:37PM +0100, SZEDER Gábor wrote:

> Test scripts checking 'git daemon' stop the daemon with a TERM signal,
> and the 'stop_git_daemon' helper checks the daemon's exit status to
> make sure that it indeed died because of that signal.
> 
> This check is bogus since 03c39b3458 (t/lib-git-daemon: use
> test_match_signal, 2016-06-24), for two reasons:
> 
>   - Right after killing 'git daemon', 'stop_git_daemon' saves its exit
> status in a variable, but since 03c39b3458 the condition checking
> the exit status looks at '$?', which at this point is not the exit
> status of 'git daemon', but that of the variable assignment, i.e.
> it's always 0.
> 
>   - The unexpected exit status should abort the whole test script with
> 'error', but it doesn't, because 03c39b3458 forgot to negate
> 'test_match_signal's exit status in the condition.
> 
> This patch fixes both issues.

Oof. Who says two wrongs don't make a right? :)

Thanks for catching this, and the patch looks obviously correct.

I peeked at the other test_match_signal conversions from that era, and
they all look sane.

-Peff


Re: sharedrepository=group not working

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 08:19:12PM -0800, Jamie Zawinski wrote:

> On Dec 3, 2018, at 8:09 PM, Jeff King  wrote:
> > 
> > but it works fine. Might there be some effective-uid trickiness with the
> > way the server side of git is invoked? Or is this a network mount where
> > the filesystem uid might not match the process uid?
> 
> Huh. They're on the same ext4 fs (it's an AWS EBS sc1 volume, but I
> think that still counts as "not a network mount" as far as Linux is
> concerned.)

Yeah, I think we can discount any oddness there.

> The way I was seeing this fail was a CGI invoking "git push", as user
> "httpd" (and I verified that when the cgi was invoked, "groups"
> reported that "httpd" was a member of group "cvs") but when I tried to
> reproduce the error with "sudo -u apache git push" it didn't fail. So
> possibly something hinky is going on with group permissions when httpd
> invokes git, but I did verify that whoami, groups and pwd were as
> expected, so I couldn't tell what that might be... (Oh, I didn't check
> what umask was, but it should have been 022...)

Hrm. I don't think group permissions would even matter. We asked to
mkdir() with 0700 anyway, so we know they'd be zero.

But a funny umask does seem like a likely candidate for causing the
problem. We asked for 0700, but if there were bits set in umask (say,
0200 or something), that would restrict that further. And it would
explain what you're seeing (inability to write into a directory we
just created), and it might have worked with previous versions (which
was less strict on the group permissions).

I don't suppose this is leaving those incoming-* directories sitting
around so we can inspect their permissions (it's supposed to clean them
up, so I doubt it). If you're up for it, it might be interesting to
patch Git to inspect the umask and "ls -l" the objects/ directory at the
problematic moment. The interesting point is when we call into
tmp-objdir.c:setup_tmp_objdir().

-Peff


Re: sharedrepository=group not working

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 07:27:13PM -0800, Jamie Zawinski wrote:

> I think sharedrepository=group stopped working some time between
> 2.10.5 (works) and 2.12.4 (does not). 2.19.2 also does not.

Hmm. Given the time-frame and the fact that your strace shows problems
writing into the objects/incoming-* directory, it's likely caused by
722ff7f876 (receive-pack: quarantine objects until pre-receive accepts,
2016-10-03).

The big change there is that instead of writing directly into objects/,
we create a temporary objects/incoming-* directory, write there, and
then migrate the objects over after we determine they're sane.

So in your strace we see the temp directory get created:

>  mkdir("./objects/incoming-U5EN8D", 0700 
>  <... mkdir resumed> )   = 0

The permissions are tighter than we ultimately want, but that's OK.
This tempdir is just for this process (and its children) to look at, and
then we'd eventually migrate the files out.

I could definitely imagine there being a bug in which we don't then
properly loosen permissions when we move things out of the tempdir, but
we don't even get that far. We fail immediately:

>  mkdir("./objects/incoming-U5EN8D/pack", 0777) = -1 EACCES (Permission denied)

That seems strange. The outer directory is only 0700, but the user
permissions should be sufficient. Even with the g+s bit set, it should
still be owned by the same user, shouldn't it?

I tried reproducing your state like this:

  git init --bare dst.git
  git -C dst.git config core.sharedrepository group
  chgrp -R somegroup dst.git
  find dst.git -type f | xargs chmod g+rw
  find dst.git -type d | xargs chmod g+srw

  # push works from original user
  git clone dst.git client
  (
cd client &&
git commit --allow-empty -m foo
git push
  )

  # push works from alternate user
  sudo su anotheruser sh -c '
git clone dst.git /tmp/other &&
cd /tmp/other &&
git commit --allow-empty -m foo &&
git push --receive-pack="strace -e mkdir git-receive-pack"
  '

but it works fine. Might there be some effective-uid trickiness with the
way the server side of git is invoked? Or is this a network mount where
the filesystem uid might not match the process uid?

-Peff


Re: easy way to demonstrate length of colliding SHA-1 prefixes?

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 02:30:44PM -0800, Matthew DeVore wrote:

> Here is a one-liner to do it. It is Perl line noise, so it's not very cute,
> though that is subjective. The output shown below is for the Git project
> (not Linux) repository as I've currently synced it:
> 
> $ git rev-list --objects HEAD | sort | perl -anE 'BEGIN { $prev = ""; $long
> = "" } $n = $F[0]; for my $i (reverse 1..40) {last if $i < length($long); if
> (substr($prev, 0, $i) eq substr($n, 0, $i)) {$long = substr($prev, 0, $i);
> last} } $prev = $n; END {say $long}'

Ooh, object-collision golf.

Try:

  git cat-file --batch-all-objects --batch-check='%(objectname)'

instead of "rev-list | sort". It's _much_ faster, because it doesn't
have to actually open the objects and walk the graph.

Some versions of uniq have "-w" (including GNU, but it's definitely not
in POSIX), which lets you do:

  git cat-file --batch-all-objects --batch-check='%(objectname)' |
  uniq -cdw 7

to list all collisions of length 7 (it will show just the first item
from each group, but you can use -D to see them all).

> > You'll always need to list them all. It's inherently an operation where
> > for each SHA-1 you need to search for other ones with that prefix up to
> > a given length.
> > 
> > Perhaps you've missed that you can use --abbrev=N for this, and just
> > grep for things that are longer than that N, e.g. for linux.git:
> > 
> >  git log --oneline --abbrev=10 --pretty=format:%h |
> >  grep -E -v '^.{10}$' |
> >  perl -pe 's/^(.{10}).*/$1/'
> 
> I think the goal was to search all object hashes, not just commits. And git
> rev-list --objects will do that.

You can add "-t --raw" to see the abbreviated tree and blob names,
though it gets tricky around handling merges.

-Peff


Re: [PATCH v2] revisions.c: put promisor option in specialized struct

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 02:10:19PM -0800, Matthew DeVore wrote:

> Put the allow_exclude_promisor_objects flag in setup_revision_opt. When
> it was in rev_info, it was unclear when it was used, since rev_info is
> passed to functions that don't use the flag. This resulted in
> unnecessary setting of the flag in prune.c, so fix that as well.
> 
> Signed-off-by: Matthew DeVore 
> ---
>  builtin/pack-objects.c |  6 --
>  builtin/prune.c|  1 -
>  builtin/rev-list.c |  6 --
>  revision.c | 10 ++
>  revision.h |  4 ++--
>  5 files changed, 16 insertions(+), 11 deletions(-)

Thanks, this version looks good to me.

One style nit that I don't think is worth a re-roll, but that Junio
might want to tweak while applying:

> diff --git a/revision.c b/revision.c
> index 13e0519c02..f6b32e6a42 100644
> --- a/revision.c
> +++ b/revision.c
> @@ -1791,7 +1791,8 @@ static void add_message_grep(struct rev_info *revs, 
> const char *pattern)
>  }
>  
>  static int handle_revision_opt(struct rev_info *revs, int argc, const char 
> **argv,
> -int *unkc, const char **unkv)
> +int *unkc, const char **unkv,
> +const struct setup_revision_opt* opt)

We keep the "*" with the variable name, not the type.

-Peff


Re: Confusing inconsistent option syntax

2018-12-03 Thread Jeff King
On Sun, Dec 02, 2018 at 09:07:47PM +1100, Robert White wrote:

> `git log --pretty short` gives the error message "ambiguous argument
> 'short'". To get the expected result, you need to use `git log
> --pretty=short`. However, `git log --since yesterday` and `git log
> --since=yesterday` both work as expected.
> 
> When is an = needed? What is the reason for these inconsistencies?

As Duy mentioned, this has to do with optional arguments for some flags.
This is discussed in more detail in "git help cli". Notably (and as
advised in that manpage), you should always use the "stuck" form (with
the "=") in scripts, in case a flag grows an optional value later.

-Peff


Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-03 Thread Jeff King
On Sun, Dec 02, 2018 at 11:52:50AM +0100, René Scharfe wrote:

> > And for mu.git, a ~20k object repo:
> > 
> > Test                                             origin/master     peff/jk/loose-cache       avar/check-collisions-config
> > ---------------------------------------------------------------------------------------------------------------------------
> > 0008.2: index-pack with 256*1 loose objects      0.59(0.91+0.06)   0.58(0.93+0.03) -1.7%     0.57(0.89+0.04) -3.4%
> > 0008.3: index-pack with 256*10 loose objects     0.59(0.91+0.07)   0.59(0.92+0.03) +0.0%     0.57(0.89+0.03) -3.4%
> > 0008.4: index-pack with 256*100 loose objects    0.59(0.91+0.05)   0.81(1.13+0.04) +37.3%    0.58(0.91+0.04) -1.7%
> > 0008.5: index-pack with 256*250 loose objects    0.59(0.91+0.05)   1.23(1.51+0.08) +108.5%   0.58(0.91+0.04) -1.7%
> > 0008.6: index-pack with 256*500 loose objects    0.59(0.90+0.06)   1.96(2.20+0.12) +232.2%   0.58(0.91+0.04) -1.7%
> > 0008.7: index-pack with 256*750 loose objects    0.59(0.92+0.05)   2.72(2.92+0.17) +361.0%   0.58(0.90+0.04) -1.7%
> > 0008.8: index-pack with 256*1000 loose objects   0.59(0.90+0.06)   3.50(3.67+0.21) +493.2%   0.57(0.90+0.04) -3.4%
> 
> OK, here's another theory: The cache scales badly with increasing
> numbers of loose objects because it sorts the array 256 times as it is
> filled.  Loading it fully and sorting once would help, as would using
> one array per subdirectory.

Yeah, that makes sense. This was actually how I had planned to do it
originally, but then I ended up just reusing the existing single-array
approach from the abbrev code.

I hadn't actually thought about the repeated sortings (but that
definitely makes sense that they would hurt in these pathological
cases), but more just that we get a 256x reduction in N for our binary
search (in fact we already do this first-byte lookup-table trick for
pack index lookups).

Your patch looks good to me. We may want to do one thing on top:

> diff --git a/object-store.h b/object-store.h
> index 8dceed0f31..ee67a50980 100644
> --- a/object-store.h
> +++ b/object-store.h
> @@ -20,7 +20,7 @@ struct object_directory {
>* Be sure to call odb_load_loose_cache() before using.
>*/
>   char loose_objects_subdir_seen[256];
> - struct oid_array loose_objects_cache;
> + struct oid_array loose_objects_cache[256];

The comment in the context there is warning callers to remember to load
the cache first. Now that we have individual caches, might it make sense
to change the interface a bit, and make these members private. I.e.,
something like:

  struct oid_array *odb_loose_cache(struct object_directory *odb,
int subdir_nr) 
  {
if (!loose_objects_subdir_seen[subdir_nr])
odb_load_loose_cache(odb, subdir_nr); /* or just inline it here 
*/

return &odb->loose_objects_cache[subdir_nr];
  }

That's harder to get wrong, and this:

> diff --git a/sha1-file.c b/sha1-file.c
> index 05f63dfd4e..d2f5e65865 100644
> --- a/sha1-file.c
> +++ b/sha1-file.c
> @@ -933,7 +933,8 @@ static int quick_has_loose(struct repository *r,
>   prepare_alt_odb(r);
>   for (odb = r->objects->odb; odb; odb = odb->next) {
>   odb_load_loose_cache(odb, subdir_nr);
> - if (oid_array_lookup(&odb->loose_objects_cache, &oid) >= 0)
> + if (oid_array_lookup(&odb->loose_objects_cache[subdir_nr],
> +  &oid) >= 0)
>   return 1;
>   }

becomes:

  struct oid_array *cache = odb_loose_cache(odb, subdir_nr);
  if (oid_array_lookup(cache, &oid) >= 0)
	return 1;

(An even simpler interface would be a single function that computes
subdir_nr and does the lookup itself, but that would not be enough for
find_short_object_filename()).
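
I.e., something along these lines (a sketch on top of the proposed
helper; it would cover the quick_has_loose() caller, but as noted, not
the abbreviation code):

  static int odb_has_loose(struct object_directory *odb,
			   const struct object_id *oid)
  {
	/* the first byte picks the subdir, and thus the per-subdir cache */
	struct oid_array *cache = odb_loose_cache(odb, oid->hash[0]);
	return oid_array_lookup(cache, oid) >= 0;
  }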

-Peff


Re: [PATCH] rebase -i: introduce the 'test' command

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 06:53:22PM +0100, Duy Nguyen wrote:

> On Sat, Dec 01, 2018 at 03:02:09PM -0500, Jeff King wrote:
> > I sometimes add "x false" to the top of the todo list to stop and create
> > new commits before the first one.
> 
> And here I've been doing the same by "edit" the first commit, add a
> new commit then reorder them in the second interactive rebase :P
> 
> This made me look at git-rebase.txt to really learn about interactive
> rebase. I think the interactive rebase section could use some
> improvements. Its style looks.. umm.. more story telling than a
> reference. Perhaps something like this to at least highlight the
> commands.

Yeah, I think the typographic change is an improvement that doesn't
really have a downside.

As Dscho mentioned, sometimes the story style is what you want, so I
don't think we'd want to simply rearrange it. It may be useful to
present the material twice, though: once as a table/list for reference,
and then once as a story working through an example.

-Peff


Re: [PATCH] rebase -i: introduce the 'test' command

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 08:01:44PM +0100, Johannes Schindelin wrote:

> > In this sort of situation, I often whish to be able to do nested rebases.
> > Even more because it happen relatively often that I forget that I'm
> > working in a rebase and not on the head, and then it's quite natural
> > to me to type things like 'git rebase -i @^^^' while already rebasing.
> > But I suppose this has already been discussed.
> 
> Varieties of this have been discussed, but no, not nested rebases.
> 
> The closest we thought about was re-scheduling the latest  commits,
> which is now harder because of the `--rebase-merges` mode.
> 
> But I think it would be doable. Your idea of a "nested" rebase actually
> opens that door quite nicely. It would not *really* be a nested rebase,
> and it would still only be possible in interactive mode, but I could
> totally see
> 
>   git rebase --nested -i HEAD~3
> 
> to generate and prepend the following lines to the `git-rebase-todo` file:
> 
>   reset abcdef01 # This is HEAD~3
>   pick abcdef02 # This is HEAD~2
>   pick abcdef03 # This is HEAD~
>   pick abcdef04 # This is HEAD
> 
> (assuming that the latest 3 commits were non-merge commits; it would look
> quite a bit more complicated in other situations.)

Yeah, I would probably use that if it existed.

It would be nicer to have real nested sequencer operations, I think, for
other situations. E.g., cherry-picking a sequence of commits while
you're in the middle of a rebase.

But I suspect getting that right would be _loads_ more work, and
probably would involve some funky UI corner cases to handle the stack of
operations (so truly aborting a rebase may mean an arbitrary number of
"rebase --abort" calls to pop the stack). Your suggestion is probably a
reasonable trick in the meantime.

-Peff


Re: [PATCH] rebase -i: introduce the 'test' command

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 02:31:37PM +, Phillip Wood wrote:

> > How would I move past the test that fails to continue? I guess "git
> > rebase --edit-todo" and then manually remove it (and any other remaining
> > test lines)?
> 
> Perhaps we could teach git rebase --skip to skip a rescheduled command, it
> could be useful if people want to skip rescheduled picks as well (though I
> don't think I've ever had that happen in the wild). I can see myself turning
> on the rescheduling config setting but occasionally wanting to be able to
> skip over the rescheduled exec command.

Yeah, I agree that would give a nice user experience.

-Peff


Re: [PATCH] revisions.c: put promisor option in specialized struct

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 11:23:56AM -0800, Matthew DeVore wrote:

> Put the allow_exclude_promisor_objects flag in setup_revision_opt. When
> it was in rev_info, it was unclear when it was used, since rev_info is
> passed to functions that don't use the flag. This resulted in
> unnecessary setting of the flag in prune.c, so fix that as well.
> 
> Signed-off-by: Matthew DeVore 
> ---
>  builtin/pack-objects.c |  7 +--
>  builtin/prune.c|  1 -
>  builtin/rev-list.c |  6 --
>  revision.c | 10 ++
>  revision.h |  4 ++--
>  5 files changed, 17 insertions(+), 11 deletions(-)

Thanks, this mostly looks good to me (with or without tweaking the
initializers as discussed in the other part of the thread).

One thing I noticed:

> @@ -297,7 +296,8 @@ struct setup_revision_opt {
>   const char *def;
>   void (*tweak)(struct rev_info *, struct setup_revision_opt *);
>   const char *submodule;  /* TODO: drop this and use rev_info->repo */
> - int assume_dashdash;
> + int assume_dashdash : 1;
> + int allow_exclude_promisor_objects : 1;
>   unsigned revarg_opt;
>  };

I don't know that we need to penny-pinch bytes in this struct, but in
general it shouldn't hurt either way. However, a signed bit-field with 1
bit is funny. I'm not even sure what the standard has to say, but in
two's complement that would store "-1" and "0" (gcc -Wpedantic also
complains about overflow in assigning "1" to it).

So this probably ought to be "unsigned".
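
A tiny standalone illustration (what the signed member yields is
implementation-defined, but -1 is what the usual two's-complement
platforms give you):

  #include <stdio.h>

  struct flags {
	int s : 1;	/* can represent only -1 and 0 */
	unsigned u : 1;	/* can represent 0 and 1 */
  };

  int main(void)
  {
	struct flags f = { 0, 0 };
	f.s = 1;	/* gcc -Wpedantic: overflow in conversion */
	f.u = 1;
	printf("%d %u\n", f.s, f.u);	/* typically prints: -1 1 */
	return 0;
  }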

-Peff


Re: [RFC 2/2] exclude-promisor-objects: declare when option is allowed

2018-12-03 Thread Jeff King
On Mon, Dec 03, 2018 at 11:10:49AM -0800, Matthew DeVore wrote:

> > > + memset(&s_r_opt, 0, sizeof(s_r_opt));
> > > + s_r_opt.allow_exclude_promisor_objects = 1;
> > > + setup_revisions(ac, av, &revs, &s_r_opt);
> > 
> > I wonder if a static initializer for setup_revision_opt is worth it. It
> > would remove the need for this memset. Probably not a big deal either
> > way, though.
> I think you mean something like this:
> 
> static struct setup_revision_opt s_r_opt = {NULL, NULL, NULL, 0, 1, 0};
> 
> This is a bit cryptic (I have to read the struct declaration in order to
> know what is being set to 1) and if the struct ever gets a new field before
> allow_exclude_promisor_objects, this initializer has to be updated.

I agree that's pretty awful.  I meant something like this:

  struct setup_revision_opt s_r_opt = { NULL };
  ...

  s_r_opt.allow_exclude_promisor_objects = 1;
  setup_revisions(...);

It's functionally equivalent to the memset(), but you don't have to
wonder about whether we peek at the uninitialized state in between.

That said, our C99 designated initializer weather-balloons haven't
gotten any complaints yet. So I think you could actually do:

  struct setup_revision_opt s_r_opt = {
.allow_exclude_promisor_objects = 1,
  };
  ...
  setup_revisions(...);

which is pretty nice.

-Peff


Re: [PATCH] rebase -i: introduce the 'test' command

2018-12-01 Thread Jeff King
On Sat, Dec 01, 2018 at 09:28:47PM -0500, Eric Sunshine wrote:

> On Sat, Dec 1, 2018 at 3:02 PM Jeff King  wrote:
> > On Thu, Nov 29, 2018 at 09:32:48AM +0100, Johannes Schindelin wrote:
> > > In reality, I think that it would even make sense to change the default to
> > > reschedule failed `exec` commands. Which is why I suggested to also add a
> > > config option.
> >
> > I sometimes add "x false" to the top of the todo list to stop and create
> > new commits before the first one. That would be awkward if I could never
> > get past that line. However, I think elsewhere a "pause" line has been
> > discussed, which would serve the same purpose.
> 
> Are you thinking of the "break" command (not "pause") which Dscho
> already added[1]?
> 
> [1]: 71f82465b1 (rebase -i: introduce the 'break' command, 2018-10-12)

Yes, thanks (as you can see, I haven't actually used it yet ;) ).

-Peff


Re: [PATCH] rebase -i: introduce the 'test' command

2018-12-01 Thread Jeff King
On Thu, Nov 29, 2018 at 09:32:48AM +0100, Johannes Schindelin wrote:

> > > Would it not make more sense to add a command-line option (and a config
> > > setting) to re-schedule failed `exec` commands? Like so:
> > 
> > Your proposition would do in most cases, however it is not possible to
> > make a distinction between reschedulable and non-reschedulable commands.
> 
> True. But I don't think that's so terrible.
> 
> What I think is something to avoid is two commands that do something very,
> very similar, but with two very, very different names.
> 
> In reality, I think that it would even make sense to change the default to
> reschedule failed `exec` commands. Which is why I suggested to also add a
> config option.

I sometimes add "x false" to the top of the todo list to stop and create
new commits before the first one. That would be awkward if I could never
get past that line. However, I think elsewhere a "pause" line has been
discussed, which would serve the same purpose.
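
That is, a todo list that starts like this (object names made up):

  x false	# halt here; commit freely, then "git rebase --continue"
  pick 1234abc first commit of the branch
  pick 5678def second commit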

I wonder how often this kind of "yes, I know it fails, but keep going
anyway" situation would come up. And what the interface is like for
getting past it. E.g., what if you fixed a bunch of stuff but your tests
still fail? You may not want to abandon the changes you've made, but you
need to "rebase --continue" to move forward. I encounter this often when
the correct fix is actually in an earlier commit than the one that
yields the test failure. You can't rewind an interactive rebase, so I
complete and restart it, adding an "e"dit at the earlier commit.

How would I move past the test that fails to continue? I guess "git
rebase --edit-todo" and then manually remove it (and any other remaining
test lines)?

That's not too bad, but I wonder if people would find it more awkward
than the current way (which is to just "rebase --continue" until you get
to the end).

I dunno. I am not sure if I am for or against changing the default, so
these are just some musings. :)

-Peff


Re: [PATCH] t5562: skip if NO_CURL is enabled

2018-12-01 Thread Jeff King
On Wed, Nov 28, 2018 at 03:56:29PM +0100, Ævar Arnfjörð Bjarmason wrote:

> 
> On Thu, Nov 22 2018, Jeff King wrote:
> 
> > On Thu, Nov 22, 2018 at 02:17:01AM -0800, Carlo Arenas wrote:
> >> PS. upstreaming the PERL_PATH fix is likely to be good to do soonish
> >> as I presume at least all BSD might be affected, let me know if you
> >> would rather me do that instead as I suspect we might be deadlocked
> >> otherwise ;)
> >
> > Yeah, the $PERL_PATH thing is totally orthogonal, and should graduate
> > separately.
> 
> On the subject of orthagonal things: This test fails on AIX with /bin/sh
> (but not /bin/bash) due to some interaction of ssize_b100dots and the
> build_option function. On that system:
> 
> $ ./git version --build-options
> git version 2.20.0-rc1
> cpu: 00FA74164C00
> no commit associated with this build
> sizeof-long: 4
> sizeof-size_t: 4
> 
> But it somehow ends up in the 'die' condition in that case statement. I
> dug around briefly but couldn't find the cause, probably some limitation
> in the shell constructs it supports. Just leaving a note about this...

That's weird. The functions involved are pretty vanilla. I'd suspect
something funny with the sed invocation:

  build_option () {
git version --build-options |
sed -ne "s/^$1: //p"
  }

but that's the one thing that shouldn't be dependent on the shell in
use.

Can you manually replicate the shell commands to see where it goes
wrong?
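
I.e., something like this, using the sizeof-size_t field the test looks
at:

  git version --build-options
  git version --build-options | sed -ne "s/^sizeof-size_t: //p"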

-Peff


Re: [PATCH] t5562: skip if NO_CURL is enabled

2018-12-01 Thread Jeff King
On Wed, Nov 28, 2018 at 02:27:08PM +0100, SZEDER Gábor wrote:

> > Curiously, the act.err file also has 54 NUL bytes before the "fatal:"
> > message.
> 
> I think those NUL bytes come from the file system.
> 
> The contents of 'act.err' from the previous test ('fetch gzipped
> empty') is usually:
> 
>   fatal: request ended in the middle of the gzip stream
>   fatal: the remote end hung up unexpectedly
> 
> Notice that the length of the first line is 54 bytes (including the
> trailing newline).  So I suspect that the following is happening:
> 
>   - http-backend in the previous test writes the first line,
>   - that test finishes and this one starts,
>   - this test truncates 'act.err',
>   - and then the still-running http-backend from the previous test
> finally writes the second line.
> 
> So at this point 'act.err' is empty, but the offset of the fd of the
> redirection still open from the previous test is at 54, so the file
> system fills those bytes with NULs.

Right, good thinking. Thanks for the explanation!
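
For anyone who wants to see the effect in isolation, here is a minimal
reproduction I'd expect to work (not the actual test, just the
fd-offset mechanics):

  {
	echo "fatal: request ended in the middle of the gzip stream"
	sleep 2
	echo "fatal: the remote end hung up unexpectedly"
  } >act.err &

  sleep 1
  : >act.err	# the "next test" truncates the file...
  wait		# ...but the old process writes at its saved offset
  od -c act.err	# 54 NUL bytes, then the second fatal line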

-Peff


Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-01 Thread Jeff King
On Tue, Nov 27, 2018 at 09:48:57PM +0100, René Scharfe wrote:

> > +static int quick_has_loose(struct repository *r,
> > +  const unsigned char *sha1)
> > +{
> > +   int subdir_nr = sha1[0];
> > +   struct object_id oid;
> > +   struct object_directory *odb;
> > +
> > +   hashcpy(oid.hash, sha1);
> > +
> > +   prepare_alt_odb(r);
> > +   for (odb = r->objects->odb; odb; odb = odb->next) {
> > +   odb_load_loose_cache(odb, subdir_nr);
> 
> Is this thread-safe?  What happens if e.g. one index-pack thread resizes
> the array while another one sorts it?

No, but neither is any of the object_info / has_object_file path, which
may use static function-local buffers, or (before my series) alt scratch
bufs, or even call reprepare_packed_git().

In the long run, I think the solution is probably going to be pushing
some mutexes into the right places, and putting one around the cache
fill is an obvious place.

> Loading the cache explicitly up-front would avoid that, and improves
> performance a bit in my (very limited) tests on an SSD.  Demo patch for
> next at the bottom.  How does it do against your test cases?

It's going to do badly on corner cases where we don't need to load every
object subdirectory, and one or more of them are big. I.e., if I look up
"1234abcd", the current code only needs to fault in $GIT_DIR/objects/12.
Pre-loading means we'd hit them all. Even without a lot of objects, on
NFS that's 256 latencies instead of 1.

-Peff


Re: [RFC 2/2] exclude-promisor-objects: declare when option is allowed

2018-12-01 Thread Jeff King
On Fri, Nov 30, 2018 at 05:32:47PM -0800, Matthew DeVore wrote:

> > Speaking of which, would this flag work better as a field in
> > setup_revision_opt, which is passed to setup_revisions()? The intent
> > seem to be to influence how we parse command-line arguments, and that's
> > where other similar flags are (e.g., assume_dashdash).
> 
> Good idea. This would solve the problem of mistakenly believing the flag
> matters when it doesn't, since it is in the struct which is used as the
> arguments of the exact function that cares about it. Here's a new patch -
> I'm tweaking e-mail client settings so hopefully this makes it to the list
> without mangling - if not I'll resend it with `git-send-email` later.
> 
> From 941c89fe1e226ed4d210ce35d0d906526b8277ed Mon Sep 17 00:00:00 2001
> From: Matthew DeVore 
> Date: Fri, 30 Nov 2018 16:43:32 -0800
> Subject: [PATCH] revisions.c: put promisor option in specialized struct
> 
> Put the allow_exclude_promisor_objects flag in setup_revision_opt. When
> it was in rev_info, it was unclear when it was used, since rev_info is
> passed to functions that don't use the flag. This resulted in
> unnecessary setting of the flag in prune.c, so fix that as well.

Thanks, this looks pretty reasonable overall. Two comments:

>   repo_init_revisions(the_repository, &revs, NULL);
>   save_commit_buffer = 0;
> - revs.allow_exclude_promisor_objects_opt = 1;
> - setup_revisions(ac, av, &revs, NULL);
> +
> + memset(&s_r_opt, 0, sizeof(s_r_opt));
> + s_r_opt.allow_exclude_promisor_objects = 1;
> + setup_revisions(ac, av, &revs, &s_r_opt);

I wonder if a static initializer for setup_revision_opt is worth it. It
would remove the need for this memset. Probably not a big deal either
way, though.

>  static int handle_revision_opt(struct rev_info *revs, int argc, const char
> **argv,
> -int *unkc, const char **unkv)
> +int *unkc, const char **unkv,
> +int allow_exclude_promisor_objects)

Why not pass in the whole setup_revision_opt struct? We don't need
anything else from it yet, but it seems like the point of that struct is
to pass around preferences like this.

-Peff


Re: [PATCH] Do not fail test if '.' is part of $PATH

2018-12-01 Thread Jeff King
On Sat, Dec 01, 2018 at 06:07:57PM +0100, H.Merijn Brand wrote:

> When $PATH contains the current directory as .:PATH, PATH:., PATH:.:PATH,
> or (maybe worse) as :PATH, PATH:, or PATH::PATH - as an empty entry is
> identical to having dot in $PATH - this test used to fail

Good catch. The test cares about Git not accidentally adding "." to the
PATH, but we can't check that if it is already there.

> This patch was tested with PATH=$PATH, PATH=.:$PATH, PATH=$PATH:.,
> PATH=$PATH:.:/bin, PATH=:$PATH, PATH=$PATH:, and PATH=$PATH::/bin
> [...]
> +test_lazy_prereq DOT_IN_PATH '
> +   case ":$PATH:" in
> +   *:.:*|*::*) true  ;;
> +   *)  false ;;
> +   esac
> +'

Since the test is ultimately checking "can we run should-not-run from
the current directory", might it be simpler to actually try that as the
precondition? I.e., something like:

  test_expect_success 'create program in current directory' '
write_script should-not-run <<-\EOF &&
echo yikes
EOF
  '

  test_lazy_prereq DOT_IN_PATH '
should-not-run
  '

  test_expect_success !DOT_IN_PATH 'run_command is restricted to PATH' '
test_must_fail test-tool run-command run-command should-not-run
  '

?

That's more lines, but we don't have to peek into the details of how
$PATH works.

-Peff


Re: What's cooking in git.git (Nov 2018, #06; Wed, 21)

2018-11-26 Thread Jeff King
On Sun, Nov 25, 2018 at 11:02:05AM +0900, Junio C Hamano wrote:

> Jeff King  writes:
> 
> > I do also think in the long run we should be fixing the "unreachable
> > always become loose" issues.
> 
> I think I've seen an idea of collecting them into a garbage pack
> floated for at least a few times here.  What are the downsides?  We
> no longer will know when these unreachable ones were last accessed
> individually so we need to come up with a different policy around
> their expiration?  As the common traits among objects in such a
> garbage pack (iow the way we discover the objects that need to be
> placed in there) does not involve path information and we lose the
> ability to compress them well?

Yes, the main issue is handling the expiration/mtime.

We may lose some input to the delta heuristics, but:

  - the current alternative is non-delta loose objects (so just shoving
those in a pack is no worse for disk space, and probably better
because of less inode/file overhead)

  - if they were already packed we can often just retain the existing
deltas (and we get this basically for free with the existing code)

  - we could still walk unreachable bits of the graph, starting at
dangling commits, to find the path information (we do this to some
degree already to avoid dropping objects depended on by "unreachable
but recent" objects, but I don't know how systematic we are about
making sure to walk down from root trees first)

The most thorough discussion I know of in this direction is the one
around:

  
https://public-inbox.org/git/20170610080626.sjujpmgkli4mu...@sigill.intra.peff.net/

-Peff


Re: t5570 shaky for anyone ?

2018-11-26 Thread Jeff King
On Sun, Nov 25, 2018 at 11:22:47PM +0100, SZEDER Gábor wrote:

> > The following fixes it, but I am not sure if this is the ideal
> > solution.
> 
> Currently this is the only test that looks at 'daemon.log', but if we
> ever going to add another one, then that will be prone to the same
> issue.
> 
> I wonder whether it's really that useful to have the daemon log in the
> test script's output...  if the log was sent directly to daemon log,
> then the window for this race would be smaller, but still not
> completely closed.
> 
> Anyway, I added Peff to Cc, since he added that whole fifo-shell-loop
> thing.

Yeah, see my comments on the other part of the thread. If we did write
directly to a file, I think that would be enough here because git-daemon
writes this entry before running the sub-process. So by the time
ls-remote finishes, we know that it talked to upload-pack, and we know
that before upload-pack was run, git-daemon wrote the log entry. That
assumes git-daemon doesn't buffer its logs (but if it does, we should
probably fix that).

-Peff


Re: t5570 shaky for anyone ?

2018-11-26 Thread Jeff King
On Sun, Nov 25, 2018 at 10:01:38PM +, Thomas Gummerer wrote:

> On 11/25, Torsten Bögershausen wrote:
> > After running the  "Git 2.20-rc1" testsuite here on a raspi,
> > the only TC that failed was t5570.
> > When the "grep" was run on daemon.log, the file was empty (?).
> > When inspecting it later, it was filled, and grep would have found
> > the "extended.attribute" it was looking for.
> 
> I believe this has been reported before in
> https://public-inbox.org/git/1522783990.964448.1325338528.0d49c...@webmail.messagingengine.com/,
> but it seems like the thread never ended with actually fixing it.
> Reading the first reply Peff seemed to be fine with just removing the
> test completely, which would be the easiest solution ;)  Adding him to
> Cc: here.

Yes, I don't think there is a way to make this race-proof without
somehow convincing "cat" to flush (and let us know when it has). Which
really implies killing the daemon, and wait()ing on cat to process the
EOF and exit.  And that makes the tests a lot more expensive if we have
to start the daemon for each snippet.

So I'm still fine with just dropping this test.

> > diff --git a/t/t5570-git-daemon.sh b/t/t5570-git-daemon.sh
> > index 7466aad111..e259fee0ed 100755
> > --- a/t/t5570-git-daemon.sh
> > +++ b/t/t5570-git-daemon.sh
> > @@ -192,6 +192,7 @@ test_expect_success 'daemon log records all attributes' 
> > '
> > GIT_OVERRIDE_VIRTUAL_HOST=localhost \
> > git -c protocol.version=1 \
> > ls-remote "$GIT_DAEMON_URL/interp.git" &&
> > +   sleep 1 &&
> > grep -i extended.attribute daemon.log | cut -d" " -f2- >actual &&
> > test_cmp expect actual
> >  '
> > 
> > A slightly better approach may be to use a "sleep on demand":
> > 
> > +   ( grep -i -q extended.attribute daemon.log || sleep 1 ) &&

That doesn't really fix it, but just broadens the race window. I dunno.
Maybe that is enough in practice. We could do something like:

  repeat_with_timeout () {
	local i=0
	while test $i -lt 10
	do
		"$@" && return 0
		sleep 1
		i=$(($i + 1))
	done
	# no success even after 10 seconds
	return 1
  }

  repeat_with_timeout grep -i extended.attribute daemon.log

to make the pattern a bit more obvious (and make it easy to extend the
window arbitrarily; surely 10s is enough?).

-Peff


Re: [PATCH] t5562: do not reuse output files

2018-11-24 Thread Jeff King
On Sat, Nov 24, 2018 at 04:47:19PM +0900, Junio C Hamano wrote:

> I do agree that forcing the parent to wait, like you described in
> the comment, would be far more preferable, [...]

Stray processes can sometimes have funny effects on an outer test
harness, too. E.g., I think I've seen hangs running t5562 under prove,
because some process is holding open a pipe descriptor. This would
probably fix that, too.

> but until that happens,[...]

But if we can't do that immediately for some reason, I do agree with
everything else you said here. ;)

-Peff


Re: [PATCH] t5562: fix perl path

2018-11-24 Thread Jeff King
On Fri, Nov 23, 2018 at 01:38:21AM +0200, Max Kirillov wrote:

> From: Jeff King 
> 
> Some systems do not have perl installed to /usr/bin. Use the variable
> from the build settings, and call perl directly rather than via shebang.
> 
> Signed-off-by: Max Kirillov 
> ---
> Submitting. Could you sign-off? Also removed shebang from the script as it is 
> not needed

Yep:

  Signed-off-by: Jeff King 

As Carlos mentioned, I think you could leave the shebang as
documentation, but I'm OK either way.

-Peff


Re: What's cooking in git.git (Nov 2018, #06; Wed, 21)

2018-11-24 Thread Jeff King
On Thu, Nov 22, 2018 at 07:36:54PM +0100, Ævar Arnfjörð Bjarmason wrote:

> > Yeah, my intent had been to circle back around to this, but I just
> > hadn't gotten to it. I'm still pondering a config option or similar,
> > though I remain unconvinced that the cases in which you've shown it
> > being slow are actually realistic or worth worrying about
> 
> FWIW those "used to be 2ms are now 20-40ms" pushes on ext4 are
> representative of the actual prod setup I'm mainly targeting. Now, I
> don't run on ext4, and this patch helps where I do, but it seems
> plausible that it matters to someone who's counting on that performance.

Those are actually the more interesting cases, I think. The ones I'm
most skeptical of are the multi-second delays due to having 250K+
objects.

I do also think in the long run we should be fixing the "unreachable
always become loose" issues.

> Buh yeah, it's certainly obscure. I don't blame you if you don't want to
> hack on it, and not ejecting this out before 2.20 isn't going to break
> anything for me. But do you mind if I make it configurable as part of my
> post-2.20 "disable collisions?"

No, not at all.

> >  (and certainly having an obscure config option is not enough to help
> > most people). If we could have it kick in heuristically, that would be
> > better.
> 
> Aside from this specific scenario. I'd really prefer if we avoid having
> heuristic performance optimizations at all costs.
> 
> Database servers tend to do that sort of thing with their query planner,
> and it results in cases where your entire I/O profile changes overnight
> because you're now on the wrong side of some if/else heuristic about
> whather to use some index or not.

Yes, I agree that's a downside. I just think if we make everybody's
performance 10% slower, they're not really going to notice and seek out
a config option to flip to fix it.

> > However, note that the cache-load for finding abbreviations _must_ have
> > the complete list. And has been loading it for some time. So if you run
> > "git-fetch", for example, you've already been running this code for
> > months (and using the cache in more places is now a free speedup).
> 
> This is reminding me that I need to get around to re-submitting my
> core.validateAbbrev series, which addresses this part of the problem:
> https://public-inbox.org/git/20180608224136.20220-21-ava...@gmail.com/

Yeah, I still agree that is a reasonable thing to do.

> > At the very least, we'd want this patch on top, too. I also think René's
> > suggestion use access() is worth pursuing (though to some degree is
> > orthogonal to the cache).
> 
> I haven't had time to test that, and wasn't prioritizing it since I
> figured this was post-2.20. My hunch is it doesn't matter much if at all
> on NFS. The roundtrip time is what matters; whether that roundtrip is
> fstat() or access() probably does not.

Yes, that line of reasoning makes sense to me (but I think an easy 2-3%
speedup on local filesystems may be worth it).

-Peff


Re: What's cooking in git.git (Nov 2018, #06; Wed, 21)

2018-11-24 Thread Jeff King
On Sat, Nov 24, 2018 at 11:11:36AM +0900, Junio C Hamano wrote:

> > However, note that the cache-load for finding abbreviations _must_ have
> > the complete list. And has been loading it for some time. So if you run
> > "git-fetch", for example, you've already been running this code for
> > months (and using the cache in more places is now a free speedup).
> >
> > At the very least, we'd want this patch on top, too. I also think René's
> > suggestion use access() is worth pursuing (though to some degree is
> > orthogonal to the cache).
> 
> OK, will queue, but I wonder where 66f0 comes from before silently
> doing s/66f04152be/3a2e08245c/ myself.

Whoops, I thought I made double-sure to pull the commit id from
origin/jk/loose-object-cache instead of my original local commits, but
obviously I didn't. Thanks for catching it.

-Peff


Re: What's cooking in git.git (Nov 2018, #06; Wed, 21)

2018-11-22 Thread Jeff King
On Wed, Nov 21, 2018 at 11:48:14AM +0100, Ævar Arnfjörð Bjarmason wrote:

> 
> On Wed, Nov 21 2018, Junio C Hamano wrote:
> 
> > * jk/loose-object-cache (2018-11-13) 9 commits
> >   (merged to 'next' on 2018-11-18 at 276691a21b)
> >  + fetch-pack: drop custom loose object cache
> >  + sha1-file: use loose object cache for quick existence check
> >  + object-store: provide helpers for loose_objects_cache
> >  + sha1-file: use an object_directory for the main object dir
> >  + handle alternates paths the same as the main object dir
> >  + sha1_file_name(): overwrite buffer instead of appending
> >  + rename "alternate_object_database" to "object_directory"
> >  + submodule--helper: prefer strip_suffix() to ends_with()
> >  + fsck: do not reuse child_process structs
> >
> >  Code clean-up with optimization for the codepath that checks
> >  (non-)existence of loose objects.
> >
> >  Will cook in 'next'.
> 
> I think as noted in
> https://public-inbox.org/git/e5148b8c-9a3a-5d2e-ac8c-3e536c0f2...@web.de/
> that we should hold off the [89]/9 of this series due to the performance
> regressions this introduces in some cases (while fixing other cases).
> 
> I hadn't had time to follow up on that, and figured it could wait until
> post-2.20 for a re-roll.

Yeah, my intent had been to circle back around to this, but I just
hadn't gotten to it. I'm still pondering a config option or similar,
though I remain unconvinced that the cases in which you've showed it
being slow are actually realistic or worth worrying about (and certainly
having an obscure config option is not enough to help most people). If
we could have it kick in heuristically, that would be better.

However, note that the cache-load for finding abbreviations _must_ have
the complete list. And has been loading it for some time. So if you run
"git-fetch", for example, you've already been running this code for
months (and using the cache in more places is now a free speedup).

At the very least, we'd want this patch on top, too. I also think René's
suggestion use access() is worth pursuing (though to some degree is
orthogonal to the cache).

-- >8 --
Subject: [PATCH] odb_load_loose_cache: fix strbuf leak

Commit 66f04152be (object-store: provide helpers for
loose_objects_cache, 2018-11-12) moved the cache-loading code from
find_short_object_filename(), but forgot the line that releases the path
strbuf.

Reported-by: René Scharfe 
Signed-off-by: Jeff King 
---
 sha1-file.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sha1-file.c b/sha1-file.c
index 5894e48ea4..5a272f70de 100644
--- a/sha1-file.c
+++ b/sha1-file.c
@@ -2169,6 +2169,7 @@ void odb_load_loose_cache(struct object_directory *odb, 
int subdir_nr)
NULL, NULL,
&odb->loose_objects_cache);
odb->loose_objects_subdir_seen[subdir_nr] = 1;
+   strbuf_release(&buf);
 }
 
 static int check_stream_sha1(git_zstream *stream,
-- 
2.20.0.rc1.703.g93fba25b62



Re: [PATCH 6/9] sha1-file: use an object_directory for the main object dir

2018-11-22 Thread Jeff King
On Mon, Nov 12, 2018 at 11:04:52AM -0800, Stefan Beller wrote:

> >   for (odb = the_repository->objects->odb; odb; odb = odb->next) {
> > if (odb->local)
> > return odb->path;
> >   }
> >   return NULL; /* yikes? */
> >
> > ? That feels like it's making things more complicated, not less.
> 
> It depends if the caller cares about the local flag.
> 
> I'd think we can have more than one local, eventually?
> Just think of the partial clone stuff that may have a local
> set of promised stuff and another set of actual objects,
> which may be stored in different local odbs.

Yeah, but I think the definition of "local" gets very tricky there, and
we'll have to think about what it means. So I'd actually prefer to punt
on doing anything too clever at this point.

> If the caller cares about the distinction, they would need
> to write out this loop as above themselves.
> If they don't care, we could migrate them to not
> use this function, so we can get rid of it?

Yes, I do think in the long run we'd want to get rid of most calls to
get_object_directory(). Not only because it uses the_repository, but
because most callers should be asking for a specific action: I want to
write an object, or I want to read an object.

-Peff


Re: how does "clone --filter=sparse:path" work?

2018-11-22 Thread Jeff King
On Thu, Nov 08, 2018 at 01:57:52PM -0500, Jeff Hostetler wrote:

> > Should we simply be disallowing sparse:path filters over upload-pack?
> 
> The option to allow an absolute path over the wire probably needs more
> thought as you suggest.
> 
> Having it in the traverse code was useful for local testing in the
> client.
> 
> But mainly I was thinking of a use case on the client of the form:
> 
> git rev-list
> --objects
> --filter=sparse:path=.git/sparse-checkout
> --missing=print
> 
> 
> and get a list of the blobs that you don't have and would need before
> you could checkout  using the current sparse-checkout definition.
> You could then have a pre-checkout hook that would bulk
> fetch them before starting the actual checkout, since that would be
> more efficient than demand-loading blobs individually during the
> checkout.  There's more work to do in this area, but that was the idea.
> 
> But back to your point, yes, I think we should restrict this over the
> wire.

Thanks for your thorough response, and sorry for the slow reply. I had
meant to reply with a patch adding in the restriction, but I haven't
quite gotten to it. :)

It's still on my todo list, but I'm going to be offline for a bit for
vacation, and I didn't want to leave this totally hanging without a
response.
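
For reference, the client-side half of Jeff Hostetler's idea might look
something like this sketch (the HEAD argument and the sed extraction
are my additions, not part of his proposal):

  git rev-list --objects --filter=sparse:path=.git/sparse-checkout \
	--missing=print HEAD |
  sed -n 's/^?//p' >blobs-to-fetch	# missing objects are "?"-prefixed

A pre-checkout hook could then feed that list to a single bulk fetch
instead of faulting in blobs one by one.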

-Peff


Re: insteadOf and git-request-pull output

2018-11-22 Thread Jeff King
On Sat, Nov 17, 2018 at 11:07:32PM +0900, Junio C Hamano wrote:

> Jeff King  writes:
> 
> > I suspect it would be less confusing if the rewrite were inverted, like:
> >
> >   [url "gh:"]
> >   rewriteTo = https://github.com
> >   rewritePrivate
> >
> >   [url "git://github.com"]
> >   rewriteTo = https://github.com
> >
> > where the mapping of sections to rewrite rules must be one-to-one, not
> > one-to-many (and you can see that the flip side is that we have to
> > repeat ourselves).
> >
> > I hate to introduce two ways of doing the same thing, but maybe it is
> > simple enough to explain that url.X.insteadOf=Y is identical to
> > url.Y.rewriteTo=X. I do think people have gotten confused by the
> > ordering of insteadOf over the years, so this would let them specify it
> > in whichever way makes the most sense to them.
> 
> I do admit that the current "insteadOf" was too cute a way to
> configure this feature and I often have to read it aloud twice
> before convincing myself I got X and Y right.  It would have been
> cleaner if the definition were in the opposite direction, exactly
> because you can rewrite a single thing into just one way, but we do
> want to support that many things mapping to the same, and the
> approach to use "url.Y.rewriteTo=X" does express it better.

In case anybody is interested in picking this up, the code is actually
quite simple (the underlying data structure remains the same; we're just
inverting the order in the config). It would need documentation and
tests, and probably a matching pushRewriteTo.

diff --git a/remote.c b/remote.c
index 28c7260ed1..26b1a7b119 100644
--- a/remote.c
+++ b/remote.c
@@ -345,6 +345,11 @@ static int handle_config(const char *key, const char *value, void *cb)
 				return config_error_nonbool(key);
 			rewrite = make_rewrite(&rewrites_push, name, namelen);
 			add_instead_of(rewrite, xstrdup(value));
+		} else if (!strcmp(subkey, "rewriteto")) {
+			if (!value)
+				return config_error_nonbool(key);
+			rewrite = make_rewrite(&rewrites, value, strlen(value));
+			add_instead_of(rewrite, xmemdupz(name, namelen));
 		}
 	}
 

However, I did notice the cleanup below as part of writing that. It may
be worth applying independently.

-- >8 --
Subject: [PATCH] remote: check config validity before creating rewrite struct

When parsing url.foo.insteadOf, we call make_rewrite() and only then
check to make sure the config value is a string (and return an error if
it isn't). This isn't quite a leak, because the struct we allocate is
part of a global array, but it does leave a funny half-finished struct.

In practice, it doesn't make much difference because we exit soon after
due to the config error anyway. But let's flip the order here to avoid
leaving a trap for somebody in the future.

Signed-off-by: Jeff King <peff@peff.net>
---
 remote.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/remote.c b/remote.c
index b850f2feb3..28c7260ed1 100644
--- a/remote.c
+++ b/remote.c
@@ -336,14 +336,14 @@ static int handle_config(const char *key, const char *value, void *cb)
 		if (!name)
 			return 0;
 		if (!strcmp(subkey, "insteadof")) {
-			rewrite = make_rewrite(&rewrites, name, namelen);
 			if (!value)
 				return config_error_nonbool(key);
+			rewrite = make_rewrite(&rewrites, name, namelen);
 			add_instead_of(rewrite, xstrdup(value));
 		} else if (!strcmp(subkey, "pushinsteadof")) {
-			rewrite = make_rewrite(&rewrites_push, name, namelen);
 			if (!value)
 				return config_error_nonbool(key);
+			rewrite = make_rewrite(&rewrites_push, name, namelen);
 			add_instead_of(rewrite, xstrdup(value));
 		}
 	}
-- 
2.20.0.rc1.703.g93fba25b62



Re: [PATCH 1/3] pack-objects: fix tree_depth and layer invariants

2018-11-22 Thread Jeff King
On Tue, Nov 20, 2018 at 05:37:18PM +0100, Duy Nguyen wrote:

> > But in (b), we use the number of stored objects, _not_ the allocated
> > size of the objects array. So we can run into a situation like this:
> >
> >   1. packlist_alloc() needs to store the Nth object, so it grows the
> >  objects array to M, where M > N.
> >
> >   2. oe_set_tree_depth() wants to store a depth, so it allocates an
> >  array of length N. Now we've violated our invariant.
> >
> >   3. packlist_alloc() needs to store the N+1th object. But it _doesn't_
> >  grow the objects array, since N <= M still holds. We try to assign
> >  to tree_depth[N+1], which is out of bounds.
> 
> Do you think if this splitting data to packing_data is too fragile
> that we should just scrap the whole thing and move all data back to
> object_entry[]? We would use more memory of course but higher memory
> usage is still better than more bugs (if these are likely to show up
> again).

Certainly that thought crossed my mind while working on these patches. :)

Especially given the difficulties it introduced into the recent
bitmap-reuse topic, and the size fixes we had to deal with in v2.19.

Overall, though, I dunno. This fix, while subtle, turned out not to be
too complicated. And the memory savings are real. I consider 100M
objects to be on the large size of feasible for stock Git these days,
and I think we are talking about on the order of 4GB memory savings
there. You need a big machine to handle a repository of that size, but
4GB is still appreciable.

So I guess at this point, with all (known) bugs fixed, we should stick
with it for now. If it becomes a problem for development of a future
feature, then we can re-evaluate then.

-Peff


Re: [PATCH 0/3] delta-island fixes

2018-11-22 Thread Jeff King
On Tue, Nov 20, 2018 at 10:06:57AM -0500, Derrick Stolee wrote:

> On 11/20/2018 4:44 AM, Jeff King wrote:
> > In cases like this I think it's often a good idea to have a perf test.
> > Those are expensive anyway, and we'd have the double benefit of
> > exercising the code and showing off the performance improvement. But the
> > delta-island code only makes sense on a very specialized repo: one where
> > you have multiple related but diverging histories.  You could simulate
> > that by picking two branches in a repository, but the effect is going to
> > be miniscule.
> 
> The changes in this series look correct. Too bad it is difficult to test.
> 
> Perhaps we should add a performance test for the --delta-islands check that
> would trigger the failure (and maybe a clone afterwards). There are lots of
> freely available forks of git.git that present interesting fork structure.
> Here are three that would suffice to make this interesting:
> 
>     https://github.com/git/git
>     https://github.com/git-for-windows/git
>     https://github.com/microsoft/git
> 
> Of course, running it on a specific repo is up to the person running the
> perf suite.

I hadn't really considered the possibility of reconstructing a fork
network repository from public repos. It probably would be possible to
include a script which does so, although:

  - I suspect it's going to be pretty expensive. We can use --reference
to reduce the size of subsequent clones, but just the repacks you
have to do to assemble the final shared repo can get pretty
expensive.

  - That's 3 forks. Our real-world case has over 4000. I'm not sure of
the size of the effects for smaller cases.

So in short, I think it's an interesting avenue to explore, but it might
turn out to be a dead-end. I'm not going to prioritize it right now, but
if somebody wants to play with it, I'd be happy to look over the
results.
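
The mechanical part might look something like this (untested, and the
refspecs, island regex, and repo list are all just illustrative):

  git clone --bare https://github.com/git/git shared.git &&
  git -C shared.git fetch https://github.com/git-for-windows/git \
	"+refs/heads/*:refs/remotes/gfw/*" &&
  git -C shared.git fetch https://github.com/microsoft/git \
	"+refs/heads/*:refs/remotes/ms/*" &&
  git -C shared.git -c "pack.island=refs/remotes/([^/]+)/" repack -adi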

-Peff


Re: Git for Windows v2.20.0-rc0, was Re: [ANNOUNCE] Git v2.20.0-rc0

2018-11-22 Thread Jeff King
On Wed, Nov 21, 2018 at 11:28:28AM -0800, Bryan Turner wrote:

> But that test code exists because Bitbucket Server provides a Java API
> [1][2] which allows third-party developers to easily build arbitrary
> Git commands to invoke for their own functionality. Setting
> `GitBranchCreateBuilder.reflog(true)` will trigger adding "-l" to the
> assembled "git branch" command. I've changed the code now so that it
> will use "--create-reflog" instead; however, because many of the
> Bitbucket Server add-ons on Marketplace [3], whether free or paid, are
> not open source, and because there are a significant number of
> in-house plugins that are not listed there, it's difficult to know how
> many might be affected. If I were to hazard a guess it would be
> _none_, but I've been surprised before. The end result is that the net
> impact is hard to predict--especially because Git on the server would
> need to be upgraded to 2.20.

Yeah, that is slightly worrisome. In some sense we can guess that
calling ".reflog(true)" is in the same league as our assumption of
"probably nobody is using -l in the first place", but it's one more
place that might have encouraged people into using it, even if it's a
noop.

> (In case you're curious why we used shorthand options, it's because of
> our Windows support. While "git branch" commands rarely, if ever, get
> very long, as a general rule we use shorthand options where they exist
> to keep our command lines shorter, to allow passing more options
> without hitting the hard limit (generally 32K) imposed by
> Windows--something we _have_ had issues with on other commands. For
> commands like "git diff", where it's not possible to pass in paths via
> stdin, every character matters.)

I follow your reasoning, though that is not the engineering decision I
would have made. The long-names tend to be more robust, and while saving
a few bytes might make a case work that would not otherwise, it is
really only delaying the inevitable. But hey, it's not my product. ;)

We probably should support --stdin in more places to cover situations
like this. Patches, welcome.
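
Some of the plumbing does read paths from stdin already; e.g.
(illustrative):

  # no argv involved, so the ~32K command-line cap never comes into play
  git ls-files -z --modified | git update-index -z --stdin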

Also, I have noticed that performance with a large number of pathspecs
tends to be pretty poor, as we search them linearly. I wonder if you've
run into that, too (or maybe, coming from a Java world, you simply have
a small number of extremely long path names ;) ). A while ago I
experimented with putting non-wildcard pathspecs into a trie structure
that we could traverse while walking the actual trees to diff. I never
really polished it because having a huge number of pathspecs didn't seem
like a common use case.

> To try and protect against the unexpected, we have a Supported
> Platforms [4] page which lists Git versions that we've _explicitly
> tested_ with Bitbucket Server. 2.20 won't be marked tested until a
> future release, so the majority of installs will not use it with older
> versions of the system--but there's always that subset who ignore the
> documentation. (Since we do text parsing on Git output, subtle
> breakages do happen from time to time.)
> 
> I would say it's _unlikely_ that we'll hear of any installations where
> all the conditions are met for this to come up:
> - Git 2.20
> - Bitbucket Server (without fixes)
> - Third-party add-on using `reflog(true)`

Thanks for laying this out (and again, thanks for testing and reporting
this before the release!).

I'm comfortable with continuing with the change in v2.20 at this point,
but I'm also totally fine with bumping it for another release. Your case
is about Bitbucket, but there may well be other scripts in the wild.

> It's really just that a) I was caught off guard by the change (my own
> fault for not reading the 2.19 announcement more carefully) and b)
> it's impossible for me to say with _certainty_ that it won't be an
> issue. I'd imagine that latter point is true of the change in general,
> though (it's not really possible to know what scripts it might break,
> and that's going to be true regardless of when the change actually
> gets released), and I'd agree that that shouldn't hold Git back from
> making useful improvements.

The advantage of bumping is that if you switch Bitbucket to
"--create-reflog" _now_, it's more likely to be deployed by the time the
Git change comes.

In theory that also helps people who may not have upgraded to v2.19 yet,
but I suspect in many cases people don't notice the warning at all
(because it's buried in some script output) and will only notice once
the breaking change ships. Then the deprecation period only gives us a
larger ability to say "I told you so...", but that is small satisfaction
to the person whose script was broken.

-Peff


Re: [PATCH v1 1/2] log -G: Ignore binary files

2018-11-22 Thread Jeff King
On Thu, Nov 22, 2018 at 11:16:38AM +0100, Ævar Arnfjörð Bjarmason wrote:

> > +test_expect_success 'log -G looks into binary files with textconv filter' '
> > +   rm -rf .git &&
> > +   git init &&
> > +   echo "* diff=bin" > .gitattributes &&
> > +   printf "a\0b" >data.bin &&
> > +   git add data.bin &&
> > +   git commit -m "message" &&
> > +   git -c diff.bin.textconv=cat log -G a >actual &&
> > +   git log >expected &&
> > +   test_cmp actual expected
> > +'
> > +
> >  test_done
> 
> This patch seems like the wrong direction to me. In particular the
> assertion that "the concept of differences only makes sense for text
> files". That's just not true. This patch breaks this:

But "-G" is defined as "look for differences whose patch text contains
added/removed lines that match <regex>". We don't have patch text here,
let alone added/removed lines.

For binary files, "-Sfoo" is better defined. I think we _could_ define
"search for  in the added/removed bytes of a binary file".  But
I don't think that's what the current code does (it really does a line
diff on a binary file, which is likely to put tons of unchanged crap
into the "added and removed" lines, because the line divisions aren't
meaningful in the first place).
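
To make the distinction concrete (pattern and path illustrative):

  git log -Sfoo -- file.c  # commits changing the *number* of "foo"s
  git log -Gfoo -- file.c  # commits whose patch text matches /foo/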

> (
> rm -rf /tmp/g-test &&
> git init /tmp/g-test &&
> cd /tmp/g-test &&
> for i in {1..10}; do
> echo "Always matching thensome 5" >file &&
> printf "a thensome %d binary \0" $i >>file &&
> git add file &&
> git commit -m"Bump $i"
> done &&
> git log -Gthensome.*5
> )
> 
> Right now this will emit 3/10 patches, and the right ones! I.e. "Bump
> [156]". The 1st one because it introduces the "Always matching thensome
> 5". Then 5/6 because the add/remove the string "a thensome 5 binary",
> respectively. Which matches /thensome.*5/.

Right, this will sometimes do the right thing. But it will also often do
the wrong thing. It's also very expensive (we specifically avoid feeding
large binary files to xdiff, but I think "-G" will happily do so -- I
didn't double check, though).

-Peff


Re: [PATCH v1 1/2] log -G: Ignore binary files

2018-11-22 Thread Jeff King
On Wed, Nov 21, 2018 at 09:52:27PM +0100, Thomas Braun wrote:

> diff --git a/diffcore-pickaxe.c b/diffcore-pickaxe.c
> index 69fc55ea1e..8c2558b07d 100644
> --- a/diffcore-pickaxe.c
> +++ b/diffcore-pickaxe.c
> @@ -144,6 +144,11 @@ static int pickaxe_match(struct diff_filepair *p, struct 
> diff_options *o,
>   textconv_two = get_textconv(o->repo->index, p->two);
>   }
>  
> +	if ((o->pickaxe_opts & DIFF_PICKAXE_KIND_G) &&
> +	    ((!textconv_one && diff_filespec_is_binary(o->repo, p->one)) ||
> +	     (!textconv_two && diff_filespec_is_binary(o->repo, p->two))))
> +		return 0;

If the user passes "-a" to treat binary files as text, we should
probably skip the binary check. I think we'd need to check
"o->flags.text" here.

> diff --git a/t/t4209-log-pickaxe.sh b/t/t4209-log-pickaxe.sh
> index 844df760f7..42cc8afd8b 100755
> --- a/t/t4209-log-pickaxe.sh
> +++ b/t/t4209-log-pickaxe.sh
> @@ -106,4 +106,26 @@ test_expect_success 'log -S --no-textconv (missing 
> textconv tool)' '
> [...]
> +test_expect_success 'log -G ignores binary files' '
> [...]
> +test_expect_success 'log -G looks into binary files with textconv filter' '

And likewise add a test here similar to the textconv one.
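
Perhaps something like this (untested, and it leans on the "data.bin"
setup from the test quoted above):

  test_expect_success 'log -G looks into binary files with -a' '
	git log -a -Ga >actual &&
	git log >expected &&
	test_cmp expected actual
  '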

-Peff


Re: [PATCH] t5562: skip if NO_CURL is enabled

2018-11-22 Thread Jeff King
On Thu, Nov 22, 2018 at 02:17:01AM -0800, Carlo Arenas wrote:

> Peff, could you elaborate on your "load testing" setup? which could
> give us any hints
> on what to look for?, FWIW I hadn't been able to reproduce the problem 
> anywhere
> else (and not for a lack of trying)

The script I use is at:

  https://github.com/peff/git/blob/meta/stress

which you invoke like "/path/to/stress t5562" from the top-level of a
git.git checkout.  It basically just runs a loop of twice as many
simultaneous invocations of the test script as you have CPUs, and waits
for one to fail. The load created by all of the runs tends to flush out
timing effects after a while.

It fails for me on t5562 within 30 seconds or so (but note that in this
particular case it sometimes takes a while to produce the final output
because invoke-with-content-length misses the expected SIGCLD and sleeps
the full 60 seconds).

You'll probably need to tweak the variables at the top of the script for
your system.
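
For anyone who wants the flavor of it without the script, a
stripped-down version of the same idea looks like this (untested
sketch; "nproc" and the paths are assumptions, not the real script):

  # run 2*CPUs copies of one test script in parallel loops; each job
  # stops and reports once its copy of the test fails
  n=$(( $(nproc) * 2 ))
  for i in $(seq "$n")
  do
	(
		while ./t5562-http-backend-content-length.sh \
			--root="/tmp/stress-$i" >/dev/null 2>&1
		do
			: # keep looping while it passes
		done
		echo "job $i saw a failure"
	) &
  done
  wait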

> PS. upstreaming the PERL_PATH fix is likely to be good to do soonish
> as I presume at least all BSD might be affected, let me know if you
> would rather me do that instead as I suspect we might be deadlocked
> otherwise ;)

Yeah, the $PERL_PATH thing is totally orthogonal, and should graduate
separately.

-Peff


Re: [RFC/PATCH 0/5] stop installing old libexec aliases like "git-init"

2018-11-22 Thread Jeff King
On Thu, Nov 22, 2018 at 01:48:53PM +0100, Johannes Schindelin wrote:

> > So it's maybe do-able, but not quite as trivial as one might hope.
> 
> A trivial alternative would be to recommend adding a man page for
> 3rd-party git-<command>s.
> 
> In other words, as soon as `git-sizer` is accompanied by `git-sizer.1` in
> one of the appropriate locations, it is set.

Yes, it would be nice if everything did ship with a manpage.
Unfortunately that complicates installation, where the installation for
many such tools is "save this file to your $PATH".

Tools like git-sizer may be getting big and popular enough to merit the
extra infrastructure, but I think there are many smaller scripts which
don't.

> FWIW I do have a couple of scripts I use that I install into
> `$HOME/bin/git-<command>`. Although, granted, I essentially switched to
> aliases for most of them, where the aliases still call a script that is
> checked out in some folder in my home directory. The reason: this works in
> more circumstances, as I do not have to add `$HOME/bin` to the `PATH`,
> say, in PowerShell.
> 
> So YMMV with git-<command>s. My rule of thumb is: if I want to use this
> myself only, I'll make it an alias. If I want to ship it (e.g. with Git
> for Windows), I'll make it a git-<command>.

I have a handful of personal git-* scripts: mostly ones that are big
enough to be unwieldy as an alias. But then, $PATH management is pretty
straightforward on my platforms, so it's easier to drop a script there
than to point an alias to a non-git-* script.

-Peff


Re: Document change in format of raw diff output format

2018-11-22 Thread Jeff King
On Thu, Nov 22, 2018 at 11:58:36AM +0100, Greg Hurrell wrote:

> I was troubleshooting some breakage in some code that consumes the
> output of `git log --raw` and looking on two machines with different
> versions of Git just now I discovered the output format has changed
> somewhere between v2.14.5:
> 
> :000000 100644 000000000... 9773b7718... A  content/snippets/1157.md
> 
> and v2.19.0:
> 
> :000000 100644 000000000 9773b7718 A	content/snippets/1157.md
> 
> A quick search turns up some patches related to the
> GIT_PRINT_SHA1_ELLIPSIS env variable, which can be used to force the
> old output format, and which landed in v2.16.0, I think.

Yes. The actual commit that flipped the default is 7cb6ac1e4b (diff:
diff_aligned_abbrev: remove ellipsis after abbreviated SHA-1 value,
2017-12-03). There's more discussion of the possibility of breakage in
this subthread:

  https://public-inbox.org/git/83D263E58ABD46188756D41FE311E469@PhilipOakley/

> Does it sound right that we should update the documentation in
> diff-format.txt to show what the new output format is? The examples
> all show the old output format, which isn't produced by default any
> more.

Yes, we should definitely update the documentation to show the modern
format. I think that was just an oversight in the original series.

> diff --git a/Documentation/diff-format.txt b/Documentation/diff-format.txt
> index 706916c94c..33776459d0 100644
> --- a/Documentation/diff-format.txt
> +++ b/Documentation/diff-format.txt
> @@ -26,12 +26,12 @@ line per changed file.
>  An output line is formatted this way:
> 
>  
> -in-place edit  :100644 100644 bcd1234... 0123456... M file0
> -copy-edit      :100644 100644 abcd123... 1234567... C68 file1 file2
> -rename-edit    :100644 100644 abcd123... 1234567... R86 file1 file3
> -create         :000000 100644 0000000... 1234567... A file4
> -delete         :100644 000000 1234567... 0000000... D file5
> -unmerged       :000000 000000 0000000... 0000000... U file6
> +in-place edit  :100644 100644 bcd123456 012345678 M file0
> +copy-edit      :100644 100644 abcd12345 123456789 C68 file1 file2
> +rename-edit    :100644 100644 abcd12345 123456789 R86 file1 file3
> +create         :000000 100644 000000000 123456789 A file4
> +delete         :100644 000000 123456789 000000000 D file5
> +unmerged       :000000 000000 000000000 000000000 U file6
>  

Yeah, this looks like an improvement.

I think in general that we'd continue to show 7 characters now, just
without the extra dots (though it's auto-scaled based on the number of
objects in the repo these days, so it's not even really a constant).

>  That is, from the left to the right:
> @@ -75,7 +75,7 @@ and it is out of sync with the index.
>  Example:
> 
>  
> -:100644 100644 5be4a4.. 000000.. M file.c
> +:100644 100644 5be4a4abc 000000000 M file.c
>  

I'm not even sure what this original was trying to show. I don't think
we ever produced that many dots. :)

Thanks for noticing.

-Peff

PS As you noticed, we don't promise that "git log" output will
   never change between versions. For machine-consumption you probably
   want to use plumbing like "git rev-list | git diff-tree --stdin",
   which produces unabbreviated hashes.


Re: [PATCH 1/5] eoie: default to not writing EOIE section

2018-11-21 Thread Jeff King
On Tue, Nov 20, 2018 at 02:21:51PM +0100, SZEDER Gábor wrote:

> On Tue, Nov 20, 2018 at 08:06:16AM -0500, Ben Peart wrote:
> > >diff --git a/read-cache.c b/read-cache.c
> > >index 4ca81286c0..1e9c772603 100644
> > >--- a/read-cache.c
> > >+++ b/read-cache.c
> > >@@ -2689,6 +2689,15 @@ void update_index_if_able(struct index_state *istate, struct lock_file *lockfile)
> > >   rollback_lock_file(lockfile);
> > >  }
> > >+static int record_eoie(void)
> > >+{
> > >+  int val;
> > 
> > I believe you are going to want to initialize val to 0 here as it is on the
> > stack so is not guaranteed to be zero.
> 
> The git_config_get_bool() call below will initialize it anyway.

Yes, there are two ways to write this. With a conditional to initialize
and return or to return the default, as we have here:

> > >+  if (!git_config_get_bool("index.recordendofindexentries", &val))
> > >+  return val;
> > >+  return 0;

Or initialize the default ahead of time, and rely on the function not to
modify it when the entry is missing:

  int val = 0;
  git_config_get_bool("index.whatever", );
  return val;

I think either is perfectly fine, but since I also had to look at it
twice to make sure it was doing the right thing, I figured it is worth
mentioning as a possible style/convention thing we may want to decide
on.

-Peff


Re: [RFC 2/2] exclude-promisor-objects: declare when option is allowed

2018-11-21 Thread Jeff King
On Mon, Oct 22, 2018 at 06:13:42PM -0700, Matthew DeVore wrote:

> diff --git a/builtin/prune.c b/builtin/prune.c
> index 41230f8215..11284d0bf3 100644
> --- a/builtin/prune.c
> +++ b/builtin/prune.c
> @@ -120,6 +120,7 @@ int cmd_prune(int argc, const char **argv, const char *prefix)
>   save_commit_buffer = 0;
>   read_replace_refs = 0;
>   ref_paranoia = 1;
> + revs.allow_exclude_promisor_objects_opt = 1;
> repo_init_revisions(the_repository, &revs, prefix);
>  
>   argc = parse_options(argc, argv, prefix, options, prune_usage, 0);

I think this line is in the wrong place. The very first thing
repo_init_revisions() will do is memset() the revs struct to all-zeroes,
so it cannot possibly be doing anything.

Normally it would need to go after init_revisions() but before
setup_revisions(), but we don't seem to call the latter at all in
builtin/prune.c. Which makes sense, because you cannot pass options to
influence the reachability traversal. So I don't think we need to care
about this flag at all here.

Speaking of which, would this flag work better as a field in
setup_revision_opt, which is passed to setup_revisions()? The intent
seem to be to influence how we parse command-line arguments, and that's
where other similar flags are (e.g., assume_dashdash).

-Peff


Re: Git for Windows v2.20.0-rc0, was Re: [ANNOUNCE] Git v2.20.0-rc0

2018-11-21 Thread Jeff King
On Tue, Nov 20, 2018 at 03:17:07PM -0800, Bryan Turner wrote:

> I've run 2.20.0-rc0 through the test matrix for Bitbucket Server on
> both Linux and Windows, and the only failures were related to this
> change:
> 
> * "git branch -l " used to be a way to ask a reflog to be
>created while creating a new branch, but that is no longer the
>case.  It is a short-hand for "git branch --list " now.
> 
> Since this is an intentional change I suspect there's nothing to do
> for it but patch Bitbucket Server and move on, but I'll confess it's a
> little frustrating that the option was deprecated in 2.19 and then
> immediately removed in the next minor release. Such a
> backwards-incompatible change seems far more apt for a major release,
> a perspective that's reinforced by having the change follow such a
> brief deprecation period--2.19.0 was only tagged September 10th (in my
> timezone), so it's been less than 3 months. (Looking at the git branch
> documentation for 2.18.0 [1] shows nothing about this deprecation; the
> messaging first appears in 2.19.0 [2]. To be honest, I didn't even
> realize it was deprecated until now, when it's gone--our test suite is
> automated, so the deprecation warning was not visible.)

We did go a bit faster than usual, under the assumption that nobody's
really using "-l". It has been the default since 2006.

Can you tell us a little more about your invocation?

We still have time to avoid the change for this release. And this early
testing of master and release candidates is wonderful exactly to get
this kind of feedback before it's too late.

-Peff


Re: [PATCH v2 1/2] ref-filter: add worktree atom

2018-11-21 Thread Jeff King
On Wed, Nov 21, 2018 at 03:05:04PM +0100, Nickolai Belakovski wrote:

> I think if we move to making this atom just store worktree path, that
> needs to be implemented as a hashmap of refname->wtpath, which would
> also solve this string_list issue, correct? Just making sure I'm not
> missing something before I submit another patch.

string_list has a "util" field, so you actually _can_ use it to create
a mapping. I do think a hashmap is a little more obvious.

OTOH, the hashmap API is a little tricky; if we are going to add a
"strmap" API soon, it may be simpler to just use a string_list now and
convert to strmap when it is available.

-Peff


Re: git grep prints colon instead of slash

2018-11-21 Thread Jeff King
On Wed, Nov 21, 2018 at 02:54:34PM +0100, Marc Gonzalez wrote:

> If I specify the branch to explore, git grep prints a colon instead of
> a slash in the path:
> 
> $ git grep arm,coresight-tmc master:Documentation/devicetree
> master:Documentation/devicetree:bindings/arm/coresight.txt:	"arm,coresight-tmc", "arm,primecell";
> master:Documentation/devicetree:bindings/arm/coresight.txt:	compatible = "arm,coresight-tmc", "arm,primecell";
>                                ^
>                              HERE
> 
> There is no such issue when the branch is not specified:
> 
> $ git grep arm,coresight-tmc Documentation/devicetree
> Documentation/devicetree/bindings/arm/coresight.txt:	"arm,coresight-tmc", "arm,primecell";
> Documentation/devicetree/bindings/arm/coresight.txt:	compatible = "arm,coresight-tmc", "arm,primecell";
>                         ^
>                      NO ISSUE
> 
> 
> Is this expected behavior?
> The spurious colon prevents one from simply copy/pasting the output.

There's lots of discussion in this thread from last year:

  https://public-inbox.org/git/20170119150347.3484-1-stefa...@redhat.com/

Notably:

  git grep arm,coresight-tmc master -- Documentation/devicetree

will do what you want.

-Peff


Re: pathspec: problems with too long command line

2018-11-21 Thread Jeff King
On Wed, Nov 21, 2018 at 10:23:34AM +0100, Marc Strapetz wrote:

> From our GUI client we are invoking git operations on a possibly large set
> of files. This may result in pathspecs which are exceeding the maximum
> command line length, especially on Windows [1] and OSX [2]. To workaround
> this problem we are currently splitting up such operations by invoking
> multiple git commands. This works well for some commands (like add), but
> doesn't work well for others (like commit).
> 
> A possible solution could be to add another patchspec magic word which will
> read paths from a file instead of command line. A similar approach can be
> found in Mercurial with its "listfile:" pattern [3].
> 
> Does that sound reasonable? If so, we should be able to provide a
> corresponding patch.

Quite a few commands take --stdin, which can be used to send pathspecs
(and often other stuff) without size limits. I don't think either
"commit" or "add" does, but that might be another route.

I'm slightly nervous at a pathspec that starts reading arbitrary files,
because I suspect there may be interesting ways to abuse it for services
which expose Git. E.g., if I have a web service which can show the
history of a file, I might take a $file parameter from the client and
run "git rev-list -- $file" (handling shell quoting, of course). That's
OK now, but with the proposed pathspec magic, a malicious user could ask
for ":(from-file=/etc/passwd)" or whatever.

I dunno. Maybe that is overly paranoid, and certainly servers like that
are a subset of users. And perhaps such servers should be specifying
GIT_LITERAL_PATHSPECS=1 anyway.
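
For the hypothetical web service above, that would look something
like:

  # neutralize all pathspec magic for an untrusted $file argument
  GIT_LITERAL_PATHSPECS=1 git rev-list HEAD -- "$file"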

-Peff


Re: Git Test Coverage Report (v2.20.0-rc0)

2018-11-20 Thread Jeff King
On Tue, Nov 20, 2018 at 07:17:46AM -0500, Derrick Stolee wrote:

> > And I think that's a pattern with the delta-island code. What we really
> > care about most is that if we throw a real fork-network repository at
> > it, it produces faster clones with fewer un-reusable deltas. So I think
> > a much more interesting approach here would be perf tests. But:
> > 
> >- we'd want to count those as coverage, and that likely makes your
> >  coverage tests prohibitively expensive
> > 
> >- it requires a specialized repo to demonstrate, which most people
> >  aren't going to have handy
> 
> Do you have regularly-running tests that check this in your infrastructure?
> As long as someone would notice if this code starts failing, that would be
> enough.

We do integration tests, and this feature gets exercised quite a bit in
production at GitHub. But none of that caught the breakage I fixed this
morning for the simple fact that we don't keep up with upstream master
in real-time. We're running the delta-island code I wrote years ago, and
the bug was introduced by the upstreaming. Probably in a month or three
I'll merge v2.20 into our fork, and we would have noticed then.

I've had dreams of following upstream more closely, but there are two
blockers:

  - we have enough custom bits that the merges are time-consuming (and
introduce the possibility of funny interactions or regressions). I'd
like to reduce our overall diff from upstream, but it's slow going
and time is finite. (And ironically, it's made harder in some ways
by the lag, because many of our topics are backports of things I
send upstream).

  - deploying the ".0" release from master can be scary. For instance, I
was really happy to have a fix for 9ac3f0e5b3 (pack-objects: fix
performance issues on packing large deltas, 2018-07-22) before
putting the bug into production.

> > Yeah, this triggers if your regex has more than one capture group (and
> > likewise, we almost certainly don't run the "you have too many groups"
> > warning).
> 
> Did you know that regexes are notoriously under-tested [1]? When looking at
> this code, I didn't even know regexes were involved (but I didn't look
> enough at the context).
> 
> [1] https://dl.acm.org/citation.cfm?id=3236072

Heh. Well, fortunately we are compiling regexes from the user's config,
so it's not Git's fault if the regexes are bad. ;)

(Of course on our production servers I'm on the hook for both sides!)

-Peff


Re: [PATCH] test-lib-functions: make 'test_cmp_rev' more informative on failure

2018-11-20 Thread Jeff King
On Tue, Nov 20, 2018 at 12:25:38PM +0100, SZEDER Gábor wrote:

> > but we do not
> > usually bother to do so for our helper functions (like test_cmp).
> 
> test_i18ngrep() does since your 03aa3783f2 (t: send verbose
> test-helper output to fd 4, 2018-02-22).

Oh, indeed. Usually I find myself forgetting about patches I worked on
from 5 years ago. Forgetting about one from 9 months ago is a new low.
:)

> > I don't think it would ever hurt,
> > though. Should we be doing that for all the others, too?
> 
> I think we should, and you agreed:
> 
> https://public-inbox.org/git/20180303065648.ga17...@sigill.intra.peff.net/
> 
> but then went travelling, and then the whole thing got forgotten.

Stupid vacation. :)

OK, yes, I reaffirm my agreement that this is the right direction, then.
Sorry for my lack of memory.

-Peff


Re: Git Test Coverage Report (v2.20.0-rc0)

2018-11-20 Thread Jeff King
On Mon, Nov 19, 2018 at 10:40:53AM -0500, Derrick Stolee wrote:

> > 28b8a73080 builtin/pack-objects.c 2793) depth++;
> > 108f530385 builtin/pack-objects.c 2797) oe_set_tree_depth(&to_pack, ent, depth);
> 
> This 'depth' variable is incremented as part of a for loop in this patch:
> 
>  static void show_object(struct object *obj, const char *name, void *data)
> @@ -2686,6 +2706,19 @@ static void show_object(struct object *obj, const char *name, void *data)
>     add_preferred_base_object(name);
>     add_object_entry(&obj->oid, obj->type, name, 0);
>     obj->flags |= OBJECT_ADDED;
> +
> +   if (use_delta_islands) {
> +   const char *p;
> +   unsigned depth = 0;
> +   struct object_entry *ent;
> +
> +   for (p = strchr(name, '/'); p; p = strchr(p + 1, '/'))
> +   depth++;
> +
> +   ent = packlist_find(&to_pack, obj->oid.hash, NULL);
> +   if (ent && depth > ent->tree_depth)
> +   ent->tree_depth = depth;
> +   }
>  }
> 
> And that 'ent->tree_depth = depth;' line is replaced with the
> oe_set_tree_depth() call in the report.
> 
> Since depth is never incremented, we are not covering this block. Is it
> possible to test?

This should be covered by the fix in:

  https://public-inbox.org/git/20181120095053.gc22...@sigill.intra.peff.net/

because now entries at the top-level are depth "1". The "depth++" line
is still never executed in our test suite. I'm not sure how much that
matters.

> > delta-islands.c
> > c8d521faf7  53) memcpy(b, old, size);
> 
> This memcpy happens when the 'old' island_bitmap is passed to
> island_bitmap_new(), but...
> 
> > c8d521faf7 187) b->refcount--;
> > c8d521faf7 188) b = kh_value(island_marks, pos) = island_bitmap_new(b);
> 
> This block has the only non-NULL caller to island_bitmap_new().

This is another case where it triggers a lot for a reasonably-sized
repo, but it's hard to construct a small case. This code implements a
copy-on-write of the bitmap, which means the same objects have to be
accessible from two different paths through the reachability graph, each
with different island marks. And then a test would I guess make sure
that the correct subsets of objects never become deltas, which gets
complicated.

And I think that's a pattern with the delta-island code. What we really
care about most is that if we throw a real fork-network repository at
it, it produces faster clones with fewer un-reusable deltas. So I think
a much more interesting approach here would be perf tests. But:

  - we'd want to count those as coverage, and that likely makes your
coverage tests prohibitively expensive

  - it requires a specialized repo to demonstrate, which most people
aren't going to have handy

> > c8d521faf7 212) obj = ((struct tag *)obj)->tagged;
> > c8d521faf7 213) if (obj) {
> > c8d521faf7 214) parse_object(the_repository, &obj->oid);
> > c8d521faf7 215) marks = create_or_get_island_marks(obj);
> > c8d521faf7 216) island_bitmap_set(marks, island_counter);
> 
> It appears that this block would happen if we placed a tag in the delta
> island.

Yep. Again, exercised by real repositories.

I'm not sure how far we want to go in the blind pursuit of coverage.
Certainly we could add a tag to the repo in t5320, and this code would
get executed. But verifying that it's doing the right thing is much
harder (and is more easily done with a perf test).

> > c8d521faf7 397) strbuf_addch(_name, '-');
> 
> This block is inside the following patch:
> [...]

Yeah, this triggers if your regex has more than one capture group (and
likewise, we almost certainly don't run the "you have too many groups"
warning).

> > c8d521faf7 433) continue;
> > c8d521faf7 436) list[dst] = list[src];
> 
> These blocks are inside the following nested loop in deduplicate_islands():
> 
> +   for (ref = 0; ref + 1 < island_count; ref++) {
> +   for (src = ref + 1, dst = src; src < island_count; src++) {
> +   if (list[ref]->hash == list[src]->hash)
> +   continue;
> +
> +   if (src != dst)
> +   list[dst] = list[src];
> +
> +   dst++;
> +   }
> +   island_count = dst;
> +   }
> 
> This means that our "deduplication" logic is never actually doing anything
> meaningful.

Sorry, I don't even remember what this code is trying to do. The island
code is 5+ years old, and just recently ported to upstream Git by
Christian. And that's perhaps part of my laziness in the above tests; it
would be a significant effort to re-figure out all these corner cases.
It's a big part of why I hadn't been sending the patches upstream
myself.

-Peff


Re: [PATCH] tests: send "bug in the test script" errors to the script's stderr

2018-11-20 Thread Jeff King
On Tue, Nov 20, 2018 at 05:45:28AM -0500, Jeff King wrote:

> On Tue, Nov 20, 2018 at 12:34:04AM +0100, SZEDER Gábor wrote:
> 
> > > I do notice that many of the existing "FATAL:" errors use descriptor 5,
> > > which goes to stdout. I'm not sure if those should actually be going to
> > > stderr (or if there's some TAP significance to those lines).
> > 
> > I chose to send these messages to standard error, because they are,
> > well, errors.  TAP only cares about stdout, but by aborting the test
> > script in BUG(), error() or die() we are already violating TAP anyway,
> > so I don't think it matters whether we send "bug in test script" or
> > "FATAL" errors to stdout or stderr.
> 
> Yeah, I agree it probably doesn't matter. I was mostly suggesting to
> make the existing ">&5" into ">&7" for consistency. But I don't think
> that needs to block your patch.

By the way, I did check to see whether this might help the situation
where running under "prove" we see only "Dubious..." and not the actual
error() message produced by the test script. But no, it eats both stdout
and stderr. You can sneak a line through by prepending it with "#", but
only if "prove -o" is used (and even then, it's hard to associate it
with a particular test when you're running many in parallel).

So I guess the status quo is not too bad: prove says "dubious", and then
you re-run the test with "./t1234-foo.sh -v -i" and you get to see the
error.

-Peff


Re: [PATCH] tests: send "bug in the test script" errors to the script's stderr

2018-11-20 Thread Jeff King
On Tue, Nov 20, 2018 at 12:34:04AM +0100, SZEDER Gábor wrote:

> > I do notice that many of the existing "FATAL:" errors use descriptor 5,
> > which goes to stdout. I'm not sure if those should actually be going to
> > stderr (or if there's some TAP significance to those lines).
> 
> I chose to send these messages to standard error, because they are,
> well, errors.  TAP only cares about stdout, but by aborting the test
> script in BUG(), error() or die() we are already violating TAP anyway,
> so I don't think it matters whether we send "bug in test script" or
> "FATAL" errors to stdout or stderr.

Yeah, I agree it probably doesn't matter. I was mostly suggesting to
make the existing ">&5" into ">&7" for consistency. But I don't think
that needs to block your patch.

> BTW, TAP understands "Bail out!" as bail out from the _entire_ test
> suite, but that's not what we want here, I'd think.
> 
> https://testanything.org/tap-specification.html#bail-out

Yes, I added the only existing "Bail out!" to test-lib.sh. :)

I agree that's not what we want here. I actually think the current
behavior (to exit non-zero) does what we want. A TAP harness will
realize that's an error, but not block other scripts.
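
In TAP terms, the difference is roughly (illustrative):

  exit 1                      # this one script is marked as failed
  echo "Bail out! fatal..."   # the harness aborts the entire run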

-Peff


[PATCH 3/3] pack-objects: fix off-by-one in delta-island tree-depth computation

2018-11-20 Thread Jeff King
When delta-islands are in use, we need to record the deepest path at
which we find each tree and blob. Our loop to do so counts slashes, so
"foo" is depth 0, "foo/bar" is depth 1, and so on.

However, this neglects root trees, which are represented by the empty
string. Those also have depth 0, but are at a layer above "foo". Thus,
"foo" should be 1, "foo/bar" at 2, and so on. We use this depth to
topo-sort the trees in resolve_tree_islands(). As a result, we may fail
to visit a root tree before the sub-trees it contains, and therefore not
correctly pass down the island marks.

That in turn could lead to missing some delta opportunities (objects are
in the same island, but we didn't realize it) or creating unwanted
cross-island deltas (one object is in an island another isn't, but we
don't realize). In practice, it seems to have only a small effect.  Some
experiments on the real-world git/git fork network at GitHub showed an
improvement of only 0.14% in the resulting clone size.

Signed-off-by: Jeff King <peff@peff.net>
---
 builtin/pack-objects.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index e7ea206c08..411aefd687 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -2786,9 +2786,11 @@ static void show_object(struct object *obj, const char *name, void *data)
 
if (use_delta_islands) {
const char *p;
-   unsigned depth = 0;
+   unsigned depth;
struct object_entry *ent;
 
+   /* the empty string is a root tree, which is depth 0 */
+   depth = *name ? 1 : 0;
for (p = strchr(name, '/'); p; p = strchr(p + 1, '/'))
depth++;
 
-- 
2.20.0.rc0.715.gf6b01ab3e1


[PATCH 2/3] pack-objects: zero-initialize tree_depth/layer arrays

2018-11-20 Thread Jeff King
Commit 108f530385 (pack-objects: move tree_depth into 'struct
packing_data', 2018-08-16) started maintaining a tree_depth array that
matches the "objects" array. We extend the array when:

  1. The objects array is extended, in which case we use realloc to
 extend the tree_depth array.

  2. A caller asks to store a tree_depth for object N, and this is the
 first such request; we create the array from scratch and store the
 value for N.

In the latter case, though, we use regular xmalloc(), and the depth
values for any objects besides N are undefined. This happens to not
trigger a bug with the current code, but the reasons are quite subtle:

 - we never ask about the depth for any object with index i < N. This is
   because we store the depth immediately for all trees and blobs. So
   any such "i" must be a non-tree, and therefore we will never need to
   care about its depth (in fact, we really only care about the depth of
   trees).

 - there are no objects at this point with index i > N, because we
   always fill in the depth for a tree immediately after its object
   entry is created (we may still allocate uninitialized depth entries,
   but they'll be initialized by packlist_alloc() when it initializes
   the entry in the "objects" array).

So it works, but only by chance. To be defensive, let's zero the array,
which matches the "unset" values which would be handed out by
oe_tree_depth() already.

Signed-off-by: Jeff King <peff@peff.net>
---
 git-compat-util.h | 1 +
 pack-objects.h| 4 ++--
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/git-compat-util.h b/git-compat-util.h
index f16058182f..09b0102cae 100644
--- a/git-compat-util.h
+++ b/git-compat-util.h
@@ -861,6 +861,7 @@ extern FILE *fopen_or_warn(const char *path, const char *mode);
 #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)
 
 #define ALLOC_ARRAY(x, alloc) (x) = xmalloc(st_mult(sizeof(*(x)), (alloc)))
+#define CALLOC_ARRAY(x, alloc) (x) = xcalloc((alloc), sizeof(*(x)));
#define REALLOC_ARRAY(x, alloc) (x) = xrealloc((x), st_mult(sizeof(*(x)), (alloc)))
 
#define COPY_ARRAY(dst, src, n) copy_array((dst), (src), (n), sizeof(*(dst)) + \
diff --git a/pack-objects.h b/pack-objects.h
index f31ac1c81c..dc869f26c2 100644
--- a/pack-objects.h
+++ b/pack-objects.h
@@ -412,7 +412,7 @@ static inline void oe_set_tree_depth(struct packing_data *pack,
 unsigned int tree_depth)
 {
if (!pack->tree_depth)
-   ALLOC_ARRAY(pack->tree_depth, pack->nr_alloc);
+   CALLOC_ARRAY(pack->tree_depth, pack->nr_alloc);
pack->tree_depth[e - pack->objects] = tree_depth;
 }
 
@@ -429,7 +429,7 @@ static inline void oe_set_layer(struct packing_data *pack,
unsigned char layer)
 {
if (!pack->layer)
-   ALLOC_ARRAY(pack->layer, pack->nr_alloc);
+   CALLOC_ARRAY(pack->layer, pack->nr_alloc);
pack->layer[e - pack->objects] = layer;
 }
 
-- 
2.20.0.rc0.715.gf6b01ab3e1



[PATCH 1/3] pack-objects: fix tree_depth and layer invariants

2018-11-20 Thread Jeff King
Commit 108f530385 (pack-objects: move tree_depth into 'struct
packing_data', 2018-08-16) dynamically manages a tree_depth array in
packing_data that maintains one of these invariants:

  1. tree_depth is NULL (i.e., the requested options don't require us to
 track tree depths)

  2. tree_depth is non-NULL and has as many entries as the "objects"
 array

We maintain (2) by:

  a. When the objects array grows, grow tree_depth to the same size
 (unless it's NULL, in which case we can leave it).

  b. When a caller asks to set a depth via oe_set_tree_depth(), if
 tree_depth is NULL we allocate it.

But in (b), we use the number of stored objects, _not_ the allocated
size of the objects array. So we can run into a situation like this:

  1. packlist_alloc() needs to store the Nth object, so it grows the
 objects array to M, where M > N.

  2. oe_set_tree_depth() wants to store a depth, so it allocates an
 array of length N. Now we've violated our invariant.

  3. packlist_alloc() needs to store the N+1th object. But it _doesn't_
 grow the objects array, since N <= M still holds. We try to assign
 to tree_depth[N+1], which is out of bounds.

That doesn't happen in our test scripts, because the repositories they
use are so small, but it's easy to trigger by running:

  echo HEAD | git pack-objects --revs --delta-islands --stdout >/dev/null

in any reasonably-sized repo (like git.git).

We can fix it by always growing the array to match pack->nr_alloc, not
pack->nr_objects. Likewise for the "layer" array from fe0ac2fb7f
(pack-objects: move 'layer' into 'struct packing_data', 2018-08-16),
which has the same bug.

Signed-off-by: Jeff King <peff@peff.net>
---
 pack-objects.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pack-objects.h b/pack-objects.h
index feb6a6a05e..f31ac1c81c 100644
--- a/pack-objects.h
+++ b/pack-objects.h
@@ -412,7 +412,7 @@ static inline void oe_set_tree_depth(struct packing_data *pack,
 unsigned int tree_depth)
 {
if (!pack->tree_depth)
-   ALLOC_ARRAY(pack->tree_depth, pack->nr_objects);
+   ALLOC_ARRAY(pack->tree_depth, pack->nr_alloc);
pack->tree_depth[e - pack->objects] = tree_depth;
 }
 
@@ -429,7 +429,7 @@ static inline void oe_set_layer(struct packing_data *pack,
unsigned char layer)
 {
if (!pack->layer)
-   ALLOC_ARRAY(pack->layer, pack->nr_objects);
+   ALLOC_ARRAY(pack->layer, pack->nr_alloc);
pack->layer[e - pack->objects] = layer;
 }
 
-- 
2.20.0.rc0.715.gf6b01ab3e1



[PATCH 0/3] delta-island fixes

2018-11-20 Thread Jeff King
This fixes a few bugs in the cc/delta-islands topic: one major, and two
minor.

Sadly, the major one was not caught by our test suite, and I'm not sure
how to remedy that. The problem is a memory management one, and happens
only when you have a reasonably large number of objects. So it triggers
readily when run on a real repository, but not on the toy one in t5320.
Creating a much larger repository there would make the test suite more
expensive.

In cases like this I think it's often a good idea to have a perf test.
Those are expensive anyway, and we'd have the double benefit of
exercising the code and showing off the performance improvement. But the
delta-island code only makes sense on a very specialized repo: one where
you have multiple related but diverging histories.  You could simulate
that by picking two branches in a repository, but the effect is going to
be miniscule.

Anyway, here are the fixes without tests. We should at least apply these
before v2.20 ships with the bugs.

  [1/3]: pack-objects: fix tree_depth and layer invariants
  [2/3]: pack-objects: zero-initialize tree_depth/layer arrays
  [3/3]: pack-objects: fix off-by-one in delta-island tree-depth computation

 builtin/pack-objects.c | 4 +++-
 git-compat-util.h  | 1 +
 pack-objects.h | 4 ++--
 3 files changed, 6 insertions(+), 3 deletions(-)



Re: [PATCH] t5562: skip if NO_CURL is enabled

2018-11-20 Thread Jeff King
On Mon, Nov 19, 2018 at 11:36:08AM -0800, Carlo Arenas wrote:

> tests 3-8 seem to fail because perl is hardcoded to /usr/bin/perl in
> t5562/invoke-with-content-length.pl, while I seem to be getting some
> sporadic errors in 9 with the following output :
> 
> ++ env CONTENT_TYPE=application/x-git-receive-pack-request
> QUERY_STRING=/repo.git/git-receive-pack
> 'PATH_TRANSLATED=/home/carenas/src/git/t/trash
> directory.t5562-http-backend-content-length/.git/git-receive-pack'
> GIT_HTTP_EXPORT_ALL=TRUE REQUEST_METHOD=POST
> /home/carenas/src/git/t/t5562/invoke-with-content-length.pl push_body
> git http-backend
> ++ verify_http_result '200 OK'
> ++ grep fatal: act.err
> Binary file act.err matches
> ++ return 1
> error: last command exited with $?=1
> not ok 9 - push plain
> 
> and the following output in act.err (with a 200 in act)
> 
> fatal: the remote end hung up unexpectedly

This bit me today, too, and I can reproduce it by running under my
stress-testing script.

Curiously, the act.err file also has 54 NUL bytes before the "fatal:"
message. I tried adding an "strace" to see who was producing that
output, but I can't seem to get it to fail when running under strace
(presumably because the timing is quite different, and this likely some
kind of pipe race).

-Peff


Re: [PATCH] t5562: skip if NO_CURL is enabled

2018-11-19 Thread Jeff King
On Mon, Nov 19, 2018 at 11:26:03PM +0200, Max Kirillov wrote:

> On Mon, Nov 19, 2018 at 11:36:08AM -0800, Carlo Arenas wrote:
> > NO_CURL reflects the build setting (for http support); CURL checks for
> > the curl binary, but as Ævar points out the requirements must be from
> > somewhere else since a NO_CURL=1 build (tested in macOS) still passes
> > the test, but not in NetBSD.
> > 
> > tests 3-8 seem to fail because perl is hardcoded to /usr/bin/perl in
> > t5562/invoke-with-content-length.pl,
> 
> I see.
> 
> In other perl files I can see either '#!/usr/bin/perl' or
> '#!/usr/bin/env perl'. The second one should be more
> portable. Does the latter work on NetBSD?
> 
> To all: what is supposed to be done about it?

You should swap this out for $PERL_PATH. You can use write_script() to
help if you're copying the script around anyway. Though it looks like
you just run it from the one function. So maybe just:

diff --git a/t/t5562-http-backend-content-length.sh b/t/t5562-http-backend-content-length.sh
index b24d8b05a4..90d890d02f 100755
--- a/t/t5562-http-backend-content-length.sh
+++ b/t/t5562-http-backend-content-length.sh
@@ -31,6 +31,7 @@ test_http_env() {
PATH_TRANSLATED="$PWD/.git/git-$handler_type-pack" \
GIT_HTTP_EXPORT_ALL=TRUE \
REQUEST_METHOD=POST \
+   "$PERL_PATH" \
"$TEST_DIRECTORY"/t5562/invoke-with-content-length.pl \
"$request_body" git http-backend >act.out 2>act.err
 }

(note that it's normally OK to just run "perl", because we use a
shell-function wrapper that respects $PERL_PATH, but here we're actually
passing it to "env").

You could also lose the executable bit on the script at that point. It
doesn't matter much, but it would catch an erroneous call relying on the
shebang line.
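
If you did want the write_script() route, it could be as little as
(untested):

  # regenerate the helper with the configured perl; write_script reads
  # the body on stdin and prepends the interpreter line (the old "#!"
  # line just becomes a comment)
  write_script invoke-with-content-length.pl "$PERL_PATH" \
	<"$TEST_DIRECTORY"/t5562/invoke-with-content-length.pl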

-Peff


Re: [PATCH] test-lib-functions: make 'test_cmp_rev' more informative on failure

2018-11-19 Thread Jeff King
On Mon, Nov 19, 2018 at 02:28:18PM +0100, SZEDER Gábor wrote:

> The 'test_cmp_rev' helper is merely a wrapper around 'test_cmp'
> checking the output of two 'git rev-parse' commands, which means that
> its output on failure is not particularly informative, as it's
> basically two OIDs with a bit of extra clutter of the diff header, but
> without any indication of which two revisions have caused the failure:
> 
>   --- expect.rev  2018-11-17 14:02:11.569747033 +0000
>   +++ actual.rev  2018-11-17 14:02:11.569747033 +0000
>   @@ -1 +1 @@
>   -d79ce1670bdcb76e6d1da2ae095e890ccb326ae9
>   +139b20d8e6c5b496de61f033f642d0e3dbff528d
> 
> It also pollutes the test repo with these two intermediate files,
> though that doesn't seem to cause any complications in our current
> tests (meaning that I couldn't find any tests that have to work around
> the presence of these files by explicitly removing or ignoring them).
> 
> Enhance 'test_cmp_rev' to provide a more useful output on failure with
> less clutter:
> 
>   error: two revisions point to different objects:
> 'HEAD^': d79ce1670bdcb76e6d1da2ae095e890ccb326ae9
> 'extra': 139b20d8e6c5b496de61f033f642d0e3dbff528d
> 
> Doing so is more convenient when storing the OIDs outputted by 'git
> rev-parse' in a local variable each, which, as a bonus, won't pollute
> the repository with intermediate files.
> 
> While at it, also ensure that 'test_cmp_rev' is invoked with the right
> number of parameters, namely two.

This is an improvement, in my opinion (and I agree that using your new
BUG for this last part would be better still). It also saves a process
in the common case.

One question:

> + else
> + local r1 r2
> + r1=$(git rev-parse --verify "$1") &&
> + r2=$(git rev-parse --verify "$2") &&
> + if test "$r1" != "$r2"
> + then
> + cat >&4 <<-EOF
> + error: two revisions point to different objects:
> +   '$1': $r1
> +   '$2': $r2
> + EOF
> + return 1
> + fi

Why does this cat go to descriptor 4? I get why you'd want it to (it's
meant for the user's eyes, and that's where 4 goes), but we do not
usually bother to do so for our helper functions (like test_cmp).

I don't think it matters usually in practice, because nobody tries to
capture the stderr of test_cmp, etc. I don't think it would ever hurt,
though. Should we be doing that for all the others, too?

-Peff


Re: [PATCH] tests: send "bug in the test script" errors to the script's stderr

2018-11-19 Thread Jeff King
On Mon, Nov 19, 2018 at 02:13:26PM +0100, SZEDER Gábor wrote:

> Send these "bug in the test script" error messages directly to the
> test scripts standard error and thus to the terminal, so those bugs
> will be much harder to overlook.  Instead of updating all ~20 such
> 'error' calls with a redirection, let's add a BUG() function to
> 'test-lib.sh', wrapping an 'error' call with the proper redirection
> and also including the common prefix of those error messages, and
> convert all those call sites [4] to use this new BUG() function
> instead.

Yes, I think this is an improvement.

> +BUG () {
> + error >&7 "bug in the test script: $*"
> +}

I naively expected this to go to >&4, but of course that is the
conditional "stderr or /dev/null, depending on --verbose" descriptor. I
have a feeling that we could get rid of descriptors 5 and 7 in favor of
3 and 4, if we did the conditional redirection when running each test,
instead of ahead of time.

But unless we are running out of descriptors, it's not worth the effort
(it's debatable whether we are; 9be795fbce (t5615: avoid re-using
descriptor 4, 2017-12-08) made me nervous, but it's more about the
special-ness of BASH_XTRACEFD than anything).

Anyway, that's all a tangent to your patch.

I do notice that many of the existing "FATAL:" errors use descriptor 5,
which goes to stdout. I'm not sure if those should actually be going to
stderr (or if there's some TAP significance to those lines).

-Peff


Re: Git Test Coverage Report (v2.20.0-rc0)

2018-11-19 Thread Jeff King
On Mon, Nov 19, 2018 at 11:21:38AM -0500, Jeff King wrote:

> > +   if (use_delta_islands) {
> > +   const char *p;
> > +   unsigned depth = 0;
> > +   struct object_entry *ent;
> > +
> > +   for (p = strchr(name, '/'); p; p = strchr(p + 1, '/'))
> > +   depth++;
> > +
> > +   ent = packlist_find(&to_pack, obj->oid.hash, NULL);
> > +   if (ent && depth > ent->tree_depth)
> > +   ent->tree_depth = depth;
> > +   }
> >  }
> > 
> > And that 'ent->tree_depth = depth;' line is replaced with the
> > oe_set_tree_depth() call in the report.
> > 
> > Since depth is never incremented, we are not covering this block. Is it
> > possible to test?
> 
> Looks like t5320 only has single-level trees. We probably just need to
> add a deeper tree. A more interesting case is when an object really is
> found at multiple depths, but constructing a case that cared about that
> would be quite complicated.
> 
> That said, there is much bigger problem with this code, which is that
> 108f530385 (pack-objects: move tree_depth into 'struct packing_data',
> 2018-08-16) is totally broken. It works on the trivial repository in the
> test, but try this (especially under valgrind or ASan) on a real
> repository:
> 
>   git repack -adi
> 
> which will crash immediately.  It doesn't correctly maintain the
> invariant that if tree_depth is not NULL, it is the same size as
> the main object array.
> 
> I'll see if I can come up with a fix.

Just a quick update to prevent anyone else looking at it: I have a fix
for this (and another related issue with that commit).

There's an edge case in that depth computation that I think is
unhandled, as well. I _think_ it doesn't trigger very often, but I'm
running some experiments to verify it. That's S-L-O-W, so I probably
won't have results until tomorrow.

-Peff


Re: Git Test Coverage Report (v2.20.0-rc0)

2018-11-19 Thread Jeff King
On Mon, Nov 19, 2018 at 10:40:53AM -0500, Derrick Stolee wrote:

> > 2fa233a554 builtin/pack-objects.c 1512) hashcpy(base_oid.hash, base_sha1);
> > 2fa233a554 builtin/pack-objects.c 1513) if (!in_same_island(&delta->idx.oid, &base_oid))
> > 2fa233a554 builtin/pack-objects.c 1514) return 0;
> 
> These lines are inside a block for the following if statement:
> 
> +   /*
> +    * Otherwise, reachability bitmaps may tell us if the receiver has
> it,
> +    * even if it was buried too deep in history to make it into the
> +    * packing list.
> +    */
> +   if (thin && bitmap_has_sha1_in_uninteresting(bitmap_git, base_sha1))
> {
> 
> Peff: is this difficult to test?

A bit.

The two features (thin-packs and delta-islands) would not generally be
used together (one is for serving fetches, the other is for optimizing
on-disk packs to serve fetches). Something like:

 echo HEAD^ | git pack-objects --revs --thin --delta-islands

would probably exercise it (assuming there's a delta in HEAD^ against
something in HEAD), but you'd need to construct a very specific scenario
if you wanted to do any kind of checks on the output.
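
A sketch of such an invocation (untested, and the repository would need
a delta in HEAD^ against something in HEAD for the island check to
matter):

  printf '%s\n' HEAD ^HEAD^ |
  git pack-objects --revs --thin --delta-islands --stdout >thin.pack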

> > 28b8a73080 builtin/pack-objects.c 2793) depth++;
> > 108f530385 builtin/pack-objects.c 2797) oe_set_tree_depth(_pack, ent,
> > depth);
> 
> This 'depth' variable is incremented as part of a for loop in this patch:
> 
>  static void show_object(struct object *obj, const char *name, void *data)
> @@ -2686,6 +2706,19 @@ static void show_object(struct object *obj, const char *name, void *data)
>     add_preferred_base_object(name);
>     add_object_entry(&obj->oid, obj->type, name, 0);
>     obj->flags |= OBJECT_ADDED;
> +
> +   if (use_delta_islands) {
> +   const char *p;
> +   unsigned depth = 0;
> +   struct object_entry *ent;
> +
> +   for (p = strchr(name, '/'); p; p = strchr(p + 1, '/'))
> +   depth++;
> +
> +   ent = packlist_find(&to_pack, obj->oid.hash, NULL);
> +   if (ent && depth > ent->tree_depth)
> +   ent->tree_depth = depth;
> +   }
>  }
> 
> And that 'ent->tree_depth = depth;' line is replaced with the
> oe_set_tree_depth() call in the report.
> 
> Since depth is never incremented, we are not covering this block. Is it
> possible to test?

Looks like t5320 only has single-level trees. We probably just need to
add a deeper tree. A more interesting case is when an object really is
found at multiple depths, but constructing a case that cared about that
would be quite complicated.
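
(A minimal sketch of what that addition might look like, assuming the
usual test-lib helpers:

  mkdir -p dir/subdir &&
  test_commit deep dir/subdir/file

which records the blob under a name with tree depth 2.)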

That said, there is a much bigger problem with this code, which is that
108f530385 (pack-objects: move tree_depth into 'struct packing_data',
2018-08-16) is totally broken. It works on the trivial repository in the
test, but try this (especially under valgrind or ASan) on a real
repository:

  git repack -adi

which will crash immediately.  It doesn't correctly maintain the
invariant that if tree_depth is not NULL, it is the same size as
the main object array.

I'll see if I can come up with a fix.

-Peff


Re: [PATCH 0/5] Make :(attr) pathspec work with "git log"

2018-11-19 Thread Jeff King
On Sun, Nov 18, 2018 at 08:51:52PM +0100, Ævar Arnfjörð Bjarmason wrote:

> > But this also reveals an interesting thing: even though we walk on a
> > tree, we check attributes from _worktree_ (and optionally fall back to
> > the index). This is how attributes are implemented since forever. I
> > think this is not a big deal if we communicate clearly with the user.
> > But otherwise, this series can be scraped, as reading attributes from
> > a specific tree could be a lot of work.
> 
> I'm very happy to see this implemented, and I think the behavior
> described here is the right way to go. E.g. in git.git we have diff=perl
> entries in .gitattributes. It would suck if:
> 
> git log ':(attr:diff=perl)'
> 
> Would only list commits as far as 20460635a8 (".gitattributes: use the
> "perl" differ for Perl", 2018-04-26), since that's when we stop having
> that attribute. Ditto for wanting to run "grep" on e.g. perl files in
> 2.12.0.
> 
> I have also run into cases where I want to use a .gitattributes file
> from a specific commit. E.g. when writing pre-receive hooks where I've
> wanted the .gitattributes of the commit being pushed to configure
> something about it. But as you note this isn't supported at all.
> 
> But a concern is whether we should be making :(attr:*) behave like this
> for now. Are we going to regret it later? I don't think so, I think
> wanting to use the current working tree's / index's is the most sane
> default, and if we get the ability to read it from revisions as we
> e.g. walk the log it would make most sense to just call that
> :(treeattr:*) or something like that.

I think that ship already sailed with the fact that "git log -p" will
show diffs using the worktree attrs. I agree that it would sometimes be
nice to specify attributes from a particular tree, but at this point the
default probably needs to remain as it is.
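
For example, even today something like this (hypothetical file name):

  echo "*.c diff=cpp" >>.gitattributes
  git log -p -- foo.c

uses the worktree's "cpp" diff driver for every commit shown, whether or
not the attribute existed back then.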

-Peff


Re: how often do you check in and with what tool(s)?

2018-11-19 Thread Jeff King
On Tue, Nov 13, 2018 at 06:04:15PM -0500, _g e r r y _ _l o w r y _ wrote:

> Examples:
> 
> {me, extreme often, Windows}  very granular, with a brief yet appropriate 
> comment [like narrating a story] per commit - i change a few
> lines of code,
> Alt+Tab to Git Bash, Git Add/Commit,
> Alt+Tab back to some coding tool (example LINQPad).
> [generally, i check in one source file before moving to the next source file]
> 
> 
> {not me, very extreme seldom} in some project, not at all granular, in 
> batches such as 50 of 75 files that have been modified, all
> are checked in with a single detailed comment as to the overall purpose of 
> the batched changes.
> 
> 
> QUESTION:  how often do you check in and with what tool(s)?

I think the more important thing is not how often you commit, but that
your final product of commits is reasonably split to communicate a
logical flow to the proposed changes. That organization helps reviewers
understand what's going on, but it also helps me convince myself that
the direction is good (and that each individual change is necessary and
correct). I get there through a combination of:

  - breaking the problem down ahead of time into logical steps, then
committing after each step is done. E.g., if I'm writing a feature
in foo.c that's going to need infrastructure from bar.c, it's pretty
clear the patch series is going to look something like:

  - refactor bar.c into reusable bits
  - start using reusable bits in foo.c

Take as an example the recent object-cache series I did in:

  https://public-inbox.org/git/20181112144627.ga2...@sigill.intra.peff.net/

I knew I needed to make the existing object-cache code from the
alternates struct available more universally, so I did that first
and then finally in patch 8 I could make use of it in the new spot.

  - committing when you discover a new logical breakpoint. This is
mostly intuition, but is often obvious. Say you're refactoring bar.c
into reusable bits, and you realize there are three distinct bits of
factoring.

Going back to that real-world example above, I found there were a
bunch of discrete steps: renaming the struct (patch 3), refactoring
path creation (patch 5), unifying the alt/non-alt cases (patch 6),
and then providing the cache helpers (patch 7).

Those mostly became clear as I was doing the work.

  - similarly, you might stumble over unrelated or tangential issues. In
that same series, I noticed an existing bug, whose fix became patch
1. That was definitely _not_ where I started, but as soon as I saw
that bug, I probably did a "git add -p && git commit" to file it
away (even with a lousy commit message, and then later I went back
and polished it with "rebase -i").

  - if you're lucky that all happens linearly. But most of the time it
doesn't. It's usually more like a rabbit hole, where you know you're
trying to get to point X, and trying to get there reveals all the
other problems. So at any given time during a series like that, my
working tree is a mess of related half-finished changes. I'll
periodically break that down into rough patches with "add -p" and
commit.

Those intermediate results often have minor compilation problems
(because of ordering issues, or maybe the whole state I had when I
broke it down didn't work yet, but I could start to see the chunks).
So then I take a pass with "rebase -i" ("rebase -x 'make test'" is
especially helpful here) and get something sensible.
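
Concretely, that final pass is usually just something like (assuming
the series is built on origin/master):

  git rebase -i -x 'make && make test' origin/master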

So in response to your original question, I commit as often as once a
minute, or as rarely as a big chunk at the end of a day. :)

But in the latter case, I'm almost always going back to convert it into
a series of commits that each represent probably no more than a half
hour of work (and that could be read in much less time).

-Peff


Re: [PATCH] msvc: Directly use MS version (_stricmp) of strcasecmp

2018-11-18 Thread Jeff King
On Sun, Nov 18, 2018 at 10:02:02PM +0100, Sven Strickroth wrote:

> This also removes an implicit conversion from size_t (unsigned) to int 
> (signed).
> 
> _stricmp as well as _strnicmp are both available since VS2012.

Once upon a time we had problems with taking a function pointer of
strcasecmp (to use as a comparator with string_list), so I wondered if
that might be part of why it's defined the way it is.

But the current definition is already inline:

> -
> -static __inline int strcasecmp (const char *s1, const char *s2)
> -{
> -   int size1 = strlen(s1);
> -   int sisz2 = strlen(s2);
> -   return _strnicmp(s1, s2, sisz2 > size1 ? sisz2 : size1);
> -}
> +#define strcasecmp   _stricmp

And it seems we worked around this in de2f95ebed (mailmap: work around
implementations with pure inline strcasecmp, 2013-09-12). So I don't
think there is any blocker there.

(Though of course I have no idea on other portability questions around
_stricmp(); I'll leave that for Windows folks).

-Peff


Re: insteadOf and git-request-pull output

2018-11-17 Thread Jeff King
On Sat, Nov 17, 2018 at 04:46:22PM +0900, Junio C Hamano wrote:

> "brian m. carlson"  writes:
> 
> >> $ git request-pull HEAD^ git://foo.example.com/example | grep example
> >>   ssh://bar.example.com/example
> >> 
> >> I think that if we use the "principle of least surprise," insteadOf
> >> rules shouldn't be applied for git-request-pull URLs.
> >
> > I'd like to point out a different use that may change your view.  I have
> > an insteadOf alias, gh:, that points to GitHub.  Performing the rewrite
> > is definitely the right thing to do, since other users may not have my
> > alias available.
> >
> > I agree that in your case, a rewrite seems less appropriate, but I think
> > we should only skip the rewrite if the value already matches a valid
> > URL.
> 
> It would be tricky to define what a valid URL is, though.  Do we
> need some way to say "this is a private URL that should not be
> given preference when composing a request-pull message"?  E.g.
> 
> [url "git://git.dev/"]
> insteadOf = https://git.dev/
> 
> [url "https://github.com/;]
> insteadOf = gh:
>   private
> 
> The former does not mark https://git.dev/ a private one, so a
> "request-pull https://git.dev/$thing; would show the original
> "https://git.dev/$thing; without rewriting.  The latter marks gh: a
> private one so "request-pull gh:$thing" would be rewritten before
> exposed to the public as "https://github.com/$thing;
> 
> Or something like that?

One funny thing about this is that the "private" config key is
url.https://github.com.private. But that's the public URL!

It makes sense if you think of it as "this rewrite is private". And that
would probably serve for most people's needs, though it gets funny when
you have multiple conversions:

  [url "https://github.com/;]
  insteadOf = gh:
  insteadOf = git://github.com

you may want to share that you are rewriting one of those, but not the
other.

I suspect it would be less confusing if the rewrite were inverted, like:

  [url "gh:"]
  rewriteTo = https://github.com
  rewritePrivate

  [url "git://github.com"]
  rewriteTo = https://github.com

where the mapping of sections to rewrite rules must be one-to-one, not
one-to-many (and you can see that the flip side is that we have to
repeat ourselves).

I hate to introduce two ways of doing the same thing, but maybe it is
simple enough to explain that url.X.insteadOf=Y is identical to
url.Y.rewriteTo=X. I do think people have gotten confused by the
ordering of insteadOf over the years, so this would let them specify it
in whichever way makes the most sense to them.
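
E.g., these two would be equivalent (the second is of course
hypothetical, since rewriteTo does not exist today):

  git config url."https://github.com/".insteadOf gh:
  git config url."gh:".rewriteTo "https://github.com/"

Either way, "git clone gh:user/repo" goes to
https://github.com/user/repo.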

-Peff


Re: [RFC/PATCH 0/5] stop installing old libexec aliases like "git-init"

2018-11-16 Thread Jeff King
On Fri, Nov 16, 2018 at 08:22:11PM +0100, Ævar Arnfjörð Bjarmason wrote:

> So maybe we should just document this interface better. It seems one
> implicit dependency is that we expect a manpage for the tool to exist
> for --help.

Yeah, I think this really the only problematic assumption. The rest of
"-c", "--git-dir", etc, are just manipulating the environment, and that
gets passed through to sub-invocations of Git (so if I have a script
which calls git-config, it will pick up the "-c" config).
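
(E.g., with a hypothetical "git-frotz" script that runs "git config
foo.bar" internally:

  git -c foo.bar=baz frotz

the inner git-config call still sees foo.bar, because the value travels
in the GIT_CONFIG_PARAMETERS environment variable.)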

It would be nice if there was a way to ask "is there a manpage?", and
fallback to running "git-cmd --help". But:

  1. I don't think there is a portable way to query that via man. And
 anyway, depending on platform and config, it may be opening a browser
 to show HTML documentation (or I think even texinfo?).

  2. We can just ask whether "man git-sizer" (or whatever help display
 command) returned a non-zero exit code, and fall back to "git-sizer
 --help". But there's an infinite-loop possibility here: running
 "git-sizer --help" does what we want. But if "man git-log" failed,
 we'd run "git-log --help", which in turn runs "git help log", which
 runs "man git-log", and so on.

 You can break that loop with an environment variable for "I already
 tried to show the manpage", which would I guess convert "--help" to
 "-h".

So it's maybe do-able, but not quite as trivial as one might hope.
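
A rough sketch of that fallback, with a made-up GIT_HELP_TRIED_MAN
variable breaking the loop:

  if ! man "git-$cmd" 2>/dev/null
  then
          if test -n "$GIT_HELP_TRIED_MAN"
          then
                  exec "git-$cmd" -h   # already looped once; degrade
          fi
          GIT_HELP_TRIED_MAN=1
          export GIT_HELP_TRIED_MAN
          exec "git-$cmd" --help
  fi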

> But I don't remember the details, and can't reproduce it now with:
> 
> $ cat ~/bin/git-sigint 
> #!/usr/bin/env perl
> $SIG{INT} = sub { warn localtime . " " . $$ };
> sleep 1 while 1;
> $ git sigint # or git-sigint
> [... behaves the same either way ...]
> 
> So maybe it was something else, or I'm misremembering...

I think that generally works because hitting ^C is going to send SIGINT
to the whole process group. A more interesting case is:

  git sigint &
  kill -INT $!

There $! is a parent "git" process that is just waiting on git-sigint to
die. But that works, too, because the parent relays common signals due
to 10c6cddd92 (dashed externals: kill children on exit, 2012-01-08). So
you might have been remembering issues prior to that commit (or uncommon
signals; these come from sigchain_push_common).

-Peff


[PATCH] bundle: dup() output descriptor closer to point-of-use

2018-11-16 Thread Jeff King
On Thu, Nov 15, 2018 at 09:01:07PM +0100, Johannes Schindelin wrote:

> > It seems like we should be checking that the stale lockfile isn't left,
> > which is the real problem (the warning is annoying, but is a symptom of
> > the same thing). I.e., something like:
> > 
> >   test_must_fail git bundle create foobar.bundle master..master &&
> >   test_path_is_missing foobar.bundle.lock
> > 
> > That would already pass on non-Windows platforms, but that's OK. It will
> > never give a false failure.
> > 
> > If you don't mind, can you confirm that the test above fails without
> > either of the two patches under discussion?
> 
> This test succeeds with your patch as well as with Gaël's, and fails when
> neither patch is applied. So you're right, it is the much better test.

Thanks for checking!

> > > Do you want to integrate this test into your patch and run with it, or
> > > do you want me to shepherd your patch?
> > 
> > I'll wrap it up with a commit message and a test.

Actually, I realized there's an even simpler way to do this. Here it is.

-- >8 --
Subject: [PATCH] bundle: dup() output descriptor closer to point-of-use

When writing a bundle to a file, the bundle code actually creates
"your.bundle.lock" using our lockfile interface. We feed that output
descriptor to a child git-pack-objects via run-command, which has the
quirk that it closes the output descriptor in the parent.

To avoid confusing the lockfile code (which still thinks the descriptor
is valid), we dup() it, and operate on the duplicate.

However, this has a confusing side effect: after the dup() but before we
call pack-objects, we have _two_ descriptors open to the lockfile. If we
call die() during that time, the lockfile code will try to clean up the
partially-written file. It knows to close() the file before unlinking,
since on some platforms (i.e., Windows) the open file would block the
deletion. But it doesn't know about the duplicate descriptor. On
Windows, triggering an error at the right part of the code will result
in the cleanup failing and the lockfile being left in the filesystem.

We can solve this by moving the dup() much closer to start_command(),
shrinking the window in which we have the second descriptor open. It's
easy to place this in such a way that no die() is possible. We could
still die due to a signal in the exact wrong moment, but we already
tolerate races there (e.g., a signal could come before we manage to put
the file on the cleanup list in the first place).

As a bonus, this shields create_bundle() itself from the duplicate-fd
trick, and we can simplify its error handling (note that the lock
rollback now happens unconditionally, but that's OK; it's a noop if we
didn't open the lock in the first place).

The included test uses an empty bundle to cause a failure at the right
spot in the code, because that's easy to trigger (the other likely
errors are write() problems like ENOSPC).  Note that it would already
pass on non-Windows systems (because they are happy to unlink an
already-open file).

Based-on-a-patch-by: Gaël Lhez 
Signed-off-by: Jeff King 
---
 bundle.c                | 39 ++++++++++++++++++---------------------
 t/t5607-clone-bundle.sh |  6 ++++++
 2 files changed, 24 insertions(+), 21 deletions(-)

diff --git a/bundle.c b/bundle.c
index 1ef584b93b..6b0f6d8f10 100644
--- a/bundle.c
+++ b/bundle.c
@@ -243,7 +243,7 @@ static int is_tag_in_date_range(struct object *tag, struct rev_info *revs)
 }
 
 
-/* Write the pack data to bundle_fd, then close it if it is > 1. */
+/* Write the pack data to bundle_fd */
 static int write_pack_data(int bundle_fd, struct rev_info *revs)
 {
struct child_process pack_objects = CHILD_PROCESS_INIT;
@@ -256,6 +256,20 @@ static int write_pack_data(int bundle_fd, struct rev_info *revs)
pack_objects.in = -1;
pack_objects.out = bundle_fd;
pack_objects.git_cmd = 1;
+
+   /*
+* start_command() will close our descriptor if it's >1. Duplicate it
+* to avoid surprising the caller.
+*/
+   if (pack_objects.out > 1) {
+   pack_objects.out = dup(pack_objects.out);
+   if (pack_objects.out < 0) {
+   error_errno(_("unable to dup bundle descriptor"));
+   child_process_clear(&pack_objects);
+   return -1;
+   }
+   }
+
if (start_command(&pack_objects))
return error(_("Could not spawn pack-objects"));
 
@@ -421,21 +435,10 @@ int create_bundle(struct bundle_header *header, const char *path,
bundle_to_stdout = !strcmp(path, "-");
if (bundle_to_stdout)
bundle_fd = 1;
-   else {
+   else
bundle_fd = hold_lock_file_for_update(&lock, path,
  

Re: [PATCH] ref-filter: don't look for objects when outside of a repository

2018-11-16 Thread Jeff King
On Fri, Nov 16, 2018 at 02:09:07PM +0900, Junio C Hamano wrote:

> >> I see problems in both directions:
> >> 
> >>  - sorting by "objectname" works now, but it's marked with SOURCE_OBJ,
> >>and would be forbidden with your patch.  I'm actually not sure if
> >>SOURCE_OBJ is accurate; we shouldn't need to access the object to
> >>show it (and we are probably wasting effort loading the full contents
> >>for tools like for-each-ref).
> >> 
> >>However, that's not the full story. For objectname:short, it _does_ call
> >>find_unique_abbrev(). So we expect to have an object directory.
> >
> > Oops, I'm apparently bad at reading. It is in fact SOURCE_OTHER, which
> > makes sense (outside of this whole "--sort outside a repo thing").
> >
> > But we'd ideally distinguish between "objectname" (which should be OK
> > outside a repo) and "objectname:short" (which currently segfaults).
> 
> Arguably, use of ref-filter machinery in ls-remote, whether it is
> given from inside or outside a repo, was a mistake in 1fb20dfd
> ("ls-remote: create '--sort' option", 2018-04-09), as the whole
> point of "ls-remote" is to peek the list of refs and it is perfectly
> normal that the objects listed are not available.

I think it's conceptually reasonable to use the ref-filter machinery.
It's just that it was underprepared to handle this out-of-repo case. I
think we're not too far off, though.

> "ls-remote --sort=authorname" that is run in a repository may not
> segfault on a ref that points at a yet-to-be-fetched commit, but it
> cannot be doing anything sensible.  Is it still better to silently
> produce a nonsense result than refusing to --sort no matter what the
> sort keys are, whether we are inside or outside a repository?

I don't think we produce silent nonsense in the current code (or after
any of the discussed solutions), either in a repo or out. We say "fatal:
missing object ..." inside a repo if the request cannot be fulfilled.
That's not incredibly illuminating, perhaps, but it means we fulfill
whatever we _can_ on behalf of the user's request, and bail otherwise.

If you are arguing that even in a repo we should reject "authorname"
early (just as we would outside of a repo), I could buy that.
Technically we can make it work sometimes (if we happen to have fetched
everything the other side has), but behaving consistently (and with a
decent error message) may trump that.
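
For reference, the two flavors under discussion, run outside of any
repository:

  git ls-remote --sort=objectname https://github.com/git/git
  git ls-remote --sort=authorname https://github.com/git/git

The first can be satisfied from the ref advertisement alone; the second
needs commit objects we may not have.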

-Peff


[PATCH 3/3] remote-curl: die on server-side errors

2018-11-16 Thread Jeff King
From: Josh Steadmon 

When a smart HTTP server sends an error message via pkt-line,
remote-curl will fail to detect the error (which usually results in
incorrectly falling back to dumb-HTTP mode).

This patch adds a check in check_smart_http() for server-side error
messages, as well as a test case for this issue.

Signed-off-by: Josh Steadmon 
Signed-off-by: Jeff King 
---
 remote-curl.c   | 3 +++
 t/lib-httpd.sh  | 1 +
 t/lib-httpd/apache.conf | 4 ++++
 t/lib-httpd/error-smart-http.sh | 3 +++
 t/t5551-http-fetch-smart.sh | 5 +++++
 5 files changed, 16 insertions(+)
 create mode 100644 t/lib-httpd/error-smart-http.sh

diff --git a/remote-curl.c b/remote-curl.c
index 3c9c4a07c3..b1309f2bdc 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -382,6 +382,9 @@ static void check_smart_http(struct discovery *d, const char *service,
 */
d->proto_git = 1;
 
+   } else if (skip_prefix(line, "ERR ", )) {
+   die(_("remote error: %s"), p);
+
} else {
die("invalid server response; got '%s'", line);
}
diff --git a/t/lib-httpd.sh b/t/lib-httpd.sh
index a8729f8232..4e946623bb 100644
--- a/t/lib-httpd.sh
+++ b/t/lib-httpd.sh
@@ -131,6 +131,7 @@ prepare_httpd() {
mkdir -p "$HTTPD_DOCUMENT_ROOT_PATH"
cp "$TEST_PATH"/passwd "$HTTPD_ROOT_PATH"
install_script broken-smart-http.sh
+   install_script error-smart-http.sh
install_script error.sh
install_script apply-one-time-sed.sh
 
diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf
index 581c010d8f..6de2bc775c 100644
--- a/t/lib-httpd/apache.conf
+++ b/t/lib-httpd/apache.conf
@@ -117,6 +117,7 @@ Alias /auth/dumb/ www/auth/dumb/
 
 ScriptAliasMatch /smart_*[^/]*/(.*) ${GIT_EXEC_PATH}/git-http-backend/$1
 ScriptAlias /broken_smart/ broken-smart-http.sh/
+ScriptAlias /error_smart/ error-smart-http.sh/
 ScriptAlias /error/ error.sh/
 ScriptAliasMatch /one_time_sed/(.*) apply-one-time-sed.sh/$1
 
@@ -125,6 +126,9 @@ ScriptAliasMatch /one_time_sed/(.*) apply-one-time-sed.sh/$1
 <Files broken-smart-http.sh>
	Options ExecCGI
 </Files>
+<Files error-smart-http.sh>
+	Options ExecCGI
+</Files>
 <Files error.sh>
	Options ExecCGI
 </Files>
diff --git a/t/lib-httpd/error-smart-http.sh b/t/lib-httpd/error-smart-http.sh
new file mode 100644
index 0000000000..e65d447fc4
--- /dev/null
+++ b/t/lib-httpd/error-smart-http.sh
@@ -0,0 +1,3 @@
+echo "Content-Type: application/x-git-upload-pack-advertisement"
+echo
+printf "%s" "0019ERR server-side error"
diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh
index 8630b0cc39..ba83e567e5 100755
--- a/t/t5551-http-fetch-smart.sh
+++ b/t/t5551-http-fetch-smart.sh
@@ -429,5 +429,10 @@ test_expect_success 'GIT_TRACE_CURL_NO_DATA prevents data from being traced' '
! grep "=> Send data" err
 '
 
+test_expect_success 'server-side error detected' '
+   test_must_fail git clone $HTTPD_URL/error_smart/repo.git 2>actual &&
+   grep "server-side error" actual
+'
+
 stop_httpd
 test_done
-- 
2.19.1.1636.gc7a073d580


Re: [PATCH v3 00/11] fast export and import fixes and features

2018-11-16 Thread Jeff King
On Thu, Nov 15, 2018 at 11:59:45PM -0800, Elijah Newren wrote:

> This is a series of small fixes and features for fast-export and
> fast-import, mostly on the fast-export side.
> 
> Changes since v2 (full range-diff below):
>   * Dropped the final patch; going to try to use Peff's suggestion of
> rev-list and diff-tree to get what I need instead
>   * Inserted a new patch at the beginning to convert pre-existing sha1
> stuff to oid (rename sha1_to_hex() -> oid_to_hex(), rename
> anonymize_sha1() to anonymize_oid(), etc.)
>   * Modified other patches in the series to add calls to oid_to_hex() rather
> than sha1_to_hex()

Thanks, these changes all look good to me. I have no more nits to pick.
:)

-Peff


[PATCH 2/3] remote-curl: tighten "version 2" check for smart-http

2018-11-16 Thread Jeff King
In a v2 smart-http conversation, the server should reply to our initial
request with a pkt-line saying "version 2" (this is the start of the
"capabilities advertisement"). We check for the string using
starts_with(), but that's overly permissive (it would match "version
20", for example).

Let's tighten this check to use strcmp(). Note that we don't need to
worry about a trailing newline here, because the pkt-line code will have
chomped it for us already.

Signed-off-by: Jeff King 
---
 remote-curl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/remote-curl.c b/remote-curl.c
index dd9bc41aa1..3c9c4a07c3 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -375,7 +375,7 @@ static void check_smart_http(struct discovery *d, const char *service,
d->len = src_len;
d->proto_git = 1;
 
-   } else if (starts_with(line, "version 2")) {
+   } else if (!strcmp(line, "version 2")) {
/*
 * v2 smart http; do not consume version packet, which will
 * be handled elsewhere.
-- 
2.19.1.1636.gc7a073d580



[PATCH 1/3] remote-curl: refactor smart-http discovery

2018-11-16 Thread Jeff King
After making initial contact with an http server, we have to decide if
the server supports smart-http, and if so, which version. Our rules are
a bit inconsistent:

  1. For v0, we require that the content-type indicates a smart-http
 response. We also require the response to look vaguely like a
 pkt-line starting with "#". If one of those does not match, we fall
 back to dumb-http.

 But according to our http protocol spec[1]:

   Dumb servers MUST NOT return a return type starting with
   `application/x-git-`.

 If we see the expected content-type, we should consider it
 smart-http. At that point we can parse the pkt-line for real, and
 complain if it is not syntactically valid.

  2. For v2, we do not actually check the content-type. Our v2 protocol
 spec says[2]:

   When using the http:// or https:// transport a client makes a
   "smart" info/refs request as described in `http-protocol.txt`[...]

 and the http spec is clear that for a smart-http[3]:

   The Content-Type MUST be `application/x-$servicename-advertisement`.

 So it is required according to the spec.

These inconsistencies were easy to miss because of the way the original
code was written as an inline conditional. Let's pull it out into its
own function for readability, and improve a few things:

 - we now predicate the smart/dumb decision entirely on the presence of
   the correct content-type

 - we do a real pkt-line parse before deciding how to proceed (and die
   if it isn't valid)

 - use skip_prefix() for comparing service strings, instead of
   constructing expected output in a strbuf; this avoids dealing with
   memory cleanup

Note that this _is_ tightening what the client will allow. It's all
according to the spec, but it's possible that other implementations
might violate these. However, violating these particular rules seems
like an odd choice for a server to make.

[1] Documentation/technical/http-protocol.txt, l. 166-167
[2] Documentation/technical/protocol-v2.txt, l. 63-64
[3] Documentation/technical/http-protocol.txt, l. 247

Helped-by: Josh Steadmon 
Signed-off-by: Jeff King 
---
 remote-curl.c | 93 ---
 1 file changed, 59 insertions(+), 34 deletions(-)

diff --git a/remote-curl.c b/remote-curl.c
index 762a55a75f..dd9bc41aa1 100644
--- a/remote-curl.c
+++ b/remote-curl.c
@@ -330,9 +330,65 @@ static int get_protocol_http_header(enum protocol_version version,
return 0;
 }
 
+static void check_smart_http(struct discovery *d, const char *service,
+struct strbuf *type)
+{
+   char *src_buf;
+   size_t src_len;
+   char *line;
+   const char *p;
+
+   /*
+* If we don't see x-$service-advertisement, then it's not smart-http.
+* But once we do, we commit to it and assume any other protocol
+* violations are hard errors.
+*/
+   if (!skip_prefix(type->buf, "application/x-", &p) ||
+   !skip_prefix(p, service, &p) ||
+   strcmp(p, "-advertisement"))
+   return;
+
+   /*
+* "Peek" at the first packet by using a separate buf/len pair; some
+* cases below require us leaving the originals intact.
+*/
+   src_buf = d->buf;
+   src_len = d->len;
line = packet_read_line_buf(&src_buf, &src_len, NULL);
+   if (!line)
+   die("invalid server response; expected service, got flush 
packet");
+
+   if (skip_prefix(line, "# service=", ) && !strcmp(p, service)) {
+   /*
+* The header can include additional metadata lines, up
+* until a packet flush marker.  Ignore these now, but
+* in the future we might start to scan them.
+*/
+   while (packet_read_line_buf(&src_buf, &src_len, NULL))
+   ;
+
+   /*
+* v0 smart http; callers expect us to soak up the
+* service and header packets
+*/
+   d->buf = src_buf;
+   d->len = src_len;
+   d->proto_git = 1;
+
+   } else if (starts_with(line, "version 2")) {
+   /*
+* v2 smart http; do not consume version packet, which will
+* be handled elsewhere.
+*/
+   d->proto_git = 1;
+
+   } else {
+   die("invalid server response; got '%s'", line);
+   }
+}
+
 static struct discovery *discover_refs(const char *service, int for_push)
 {
-   struct strbuf exp = STRBUF_INIT;
struct strbuf type = STRBUF_INIT;
struct strbuf charset = STRBUF_INIT;
struct strbuf buffer = STRBUF_INIT;
@@ -405,38 +461,8 @@ static struct discovery *discover_refs(const char *service, int for_push)
last->buf_alloc = str

[PATCH 0/3] remote-curl smart-http discovery cleanup

2018-11-16 Thread Jeff King
On Thu, Nov 15, 2018 at 01:51:52PM -0800, Josh Steadmon wrote:

> > This patch tightens both of those (I also made a few stylistic tweaks,
> > and added the ERR condition to show where it would go). I dunno. Part of
> > me sees this as a nice cleanup, but maybe it is better to just leave it
> > alone. A lot of these behaviors are just how it happens to work now, and
> > not part of the spec, but we don't know what might be relying on them.
> 
> At least according to the protocol-v2 and http-protocol docs, the
> stricter behavior seems correct:
> 
> For the first point above, dumb servers should never use an
> "application/x-git-*" content type (http-protocol.txt line 163-167).
> 
> For the second point, the docs require v2 servers to use
> "application/x-git-*" content types. protocol-v2.txt lines 63-65 state
> that v2 clients should make a smart http request, while
> http-protocol.txt lines 247-252 state that a smart server's response
> type must be "application/x-git-*".

Thanks for digging into the spec. I agree that it's pretty clear that
the appropriate content-type is expected.

> Of course we don't know if other implementations follow the spec, but
> ISTM that this patch at least doesn't contradict how we've promised the
> protocols should work.

These seem like pretty unlikely ways for a buggy server to behave, so I
think it's a reasonable risk. I also checked GitHub's implementation
(which recently learned to speak v2) and made sure that it works. :)
I didn't check JGit, but given the provenance, I assume it's fine.

Amusingly, this does break the test you just added, because it tries to
issue an ERR after claiming "text/html" (and after my patch, we
correctly fall back to dumb-http).

> If no one has any objections, I'll include the diff below in v2. Thanks
> for the help Jeff!

I think it makes sense to do the refactoring first as a separate step.
And of course it needs a commit message. So how about this series (your
original is rebased on top)?

  [1/3]: remote-curl: refactor smart-http discovery
  [2/3]: remote-curl: tighten "version 2" check for smart-http
  [3/3]: remote-curl: die on server-side errors

 remote-curl.c   | 96 +
 t/lib-httpd.sh  |  1 +
 t/lib-httpd/apache.conf |  4 ++
 t/lib-httpd/error-smart-http.sh |  3 ++
 t/t5551-http-fetch-smart.sh |  5 ++
 5 files changed, 75 insertions(+), 34 deletions(-)
 create mode 100644 t/lib-httpd/error-smart-http.sh

-Peff

