Re: [PATCH] rebase docs: fix incorrect format of the section Behavioral Differences

2018-12-04 Thread Junio C Hamano
Johannes Sixt  writes:

> Please let me deposit my objection. This paragraph is not the right
> place to explain what directory rename detection is and how it works
> under the hood. "works fine" in the original text is the right phrase
> here; if there is concern that this induces expectations that cannot
> be met, throw in the word "heuristics".
>
> Such as:
>    Directory rename heuristics work fine in the merge and interactive
>    backends. They do not in the am backend because...

OK, good enough, I guess.  Or just s/work fine/is enabled/.




Re: [PATCH] rebase docs: fix incorrect format of the section Behavioral Differences

2018-12-04 Thread Johannes Sixt

Am 05.12.18 um 07:20 schrieb Elijah Newren:
> On Tue, Dec 4, 2018 at 7:54 PM Junio C Hamano  wrote:
> >
> > Elijah Newren  writes:
> >
> > > Gah, when I was rebasing on your patch I adopted your sentence rewrite
> > > but forgot to remove the "sometimes".  Thanks for catching; correction:
> > >
> > > -- 8< --
> > > Subject: [PATCH v2] git-rebase.txt: update note about directory rename
> > >   detection and am
> > >
> > > In commit 6aba117d5cf7 ("am: avoid directory rename detection when
> > > calling recursive merge machinery", 2018-08-29), the git-rebase manpage
> > > probably should have also been updated to note the stronger
> > > incompatibility between git-am and directory rename detection.  Update
> > > it now.
> > >
> > > Signed-off-by: Elijah Newren 
> > > ---
> > >   Documentation/git-rebase.txt | 8 ++--
> > >   1 file changed, 6 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/Documentation/git-rebase.txt b/Documentation/git-rebase.txt
> > > index 41631df6e4..ef76cccf3f 100644
> > > --- a/Documentation/git-rebase.txt
> > > +++ b/Documentation/git-rebase.txt
> > > @@ -569,8 +569,12 @@ it to keep commits that started empty.
> > >   Directory rename detection
> > >   ~~
> > >
> > > -The merge and interactive backends work fine with
> > > -directory rename detection.  The am backend sometimes does not.
> > > +The merge and interactive backends work fine with directory rename
> >
> > I am not sure "work fine" is a fair and correct label, as rename
> > detection is always a heuristic.
> >
> >  The "directory rename detection" heuristic in "merge" and the
> >  "interactive" backends can take what happened to paths in the
> >  same directory into account when deciding if a disappeared path
> >  was "renamed" and to which other path.  The heuristic produces
> >  incorrect results when the information given is only about
> >  changed paths, which is why it is disabled when using the "am"
> >  backend.
> >
> > perhaps.
>
> The general idea sounds good.  Does adding a few more details help
> with understanding, or is it more of an information overload?  I'm
> thinking of something like:
>
>  The "directory rename detection" heuristic in the "merge" and
>  "interactive" backends can take what happened to paths in the
>  same directory on the other side of history into account when
>  deciding whether a new path in that directory should instead be
>  moved elsewhere.  The heuristic produces incorrect results when
>  the only information available is about files which were changed
>  on the side of history being rebased, which is why directory
>  rename detection is disabled when using the "am" backend.

Please let me deposit my objection. This paragraph is not the right
place to explain what directory rename detection is and how it works
under the hood. "works fine" in the original text is the right phrase
here; if there is concern that this induces expectations that cannot be
met, throw in the word "heuristics".

Such as:
   Directory rename heuristics work fine in the merge and interactive
   backends. They do not in the am backend because...

-- Hannes


Re: [PATCH v3 07/14] checkout: split into switch-branch and restore-files

2018-12-04 Thread Junio C Hamano
Elijah Newren  writes:

> What depends on stage#2 coming from the commit that will become the
> first parent?

How about "git diff --cc" for a starter?  What came from HEAD's
ancestry should appear first and then what came from the side branch
that is merged into.
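The stage ordering Junio describes can be seen directly in the index during a conflicted merge. A hedged demonstration (not git's implementation; assumes git is on PATH, and the repository layout is purely illustrative): stage #1 is the merge base, stage #2 is HEAD (the first parent), and stage #3 is the branch being merged in.

```shell
# Set up a throwaway repo with a conflicting merge and inspect the stages.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name "A U Thor"
start=$(git symbolic-ref --short HEAD)    # "master" or "main"
echo base >file && git add file && git commit -qm base
git checkout -qb side
echo side >file && git commit -qam side
git checkout -q "$start"
echo ours >file && git commit -qam ours
git merge side >/dev/null 2>&1 || true    # conflict expected
# One line per stage: "<mode> <oid> <stage>\tfile"; stage 2 is HEAD's side.
stages=$(git ls-files -u file)
echo "$stages"
```

Tools like "git diff --cc" read these stages back, which is why stage #2 must reliably be the first parent's version.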



Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-04 Thread Jeff King
On Wed, Dec 05, 2018 at 07:02:17AM +0100, René Scharfe wrote:

> > I actually wrote it that way initially, but doing the hashcpy() in the
> > caller is a bit more awkward. My thought was to punt on that until the
> > rest of the surrounding code starts handling oids.
> 
> The level of awkwardness is the same to me, but sha1_loose_object_info()
> is much longer already, so it might feel worse to add it there. 

Right, what I meant was that in sha1_loose_object_info():

  if (special_case)
handle_special_case();

is easier to follow than a block setting up the special case and then
calling the function.

> This
> function is easily converted to struct object_id, though, as its single
> caller can pass one on -- this makes the copy unnecessary.

If you mean modifying sha1_loose_object_info() to take an oid, then
sure, I agree that is a good step forward (and that is exactly the "punt
until" moment I meant).

> > This patch looks sane. How do you want to handle it? I'd assumed your
> > earlier one would be for applying on top, but this one doesn't have a
> > commit message. Did you want me to squash down the individual hunks?
> 
> I'm waiting for the first one (object-store: use one oid_array per
> subdirectory for loose cache) to be accepted, as it aims to solve a
> user-visible performance regression, i.e. that's where the meat is.
> I'm particularly interested in performance numbers from Ævar for it.
> 
> I can send the last one properly later, and add patches for converting
> quick_has_loose() to struct object_id.  Those are just cosmetic..

Great, thanks. I just wanted to make sure these improvements weren't
going to slip through the cracks.

-Peff


Re: [WIP RFC 3/5] upload-pack: refactor reading of pack-objects out

2018-12-04 Thread Junio C Hamano
Jonathan Tan  writes:

> +struct output_state {
> + char buffer[8193];
> + int used;
> +};
> +
> +static int read_pack_objects_stdout(int outfd, struct output_state *os)
> +{

The naming feels quite odd.  We are downstream of pack-objects and
reading its standard output stream, so "read stdout-of-pack-objects"
is not wrong per se, but it may be just me.  Stepping back and asking
what the function is really about often helps us to understand what
we are doing.

I think the function you extracted from the existing logic is
responsible for relaying the data read from pack-objects to the
fetch-pack client on the other side of the wire.  So from that point
of view, singling out "read" as if it is a primary thing the
function does is already suboptimal.  Perhaps

static int relay_pack_data(int in, struct output_state *os)

because the fd is what we "read" from (hence, 'in', not 'out'), and
relaying is what we do and reading is only one half of doing so?

> + /* Data ready; we keep the last byte to ourselves
> +  * in case we detect broken rev-list, so that we
> +  * can leave the stream corrupted.  This is
> +  * unfortunate -- unpack-objects would happily
> +  * accept a valid packdata with trailing garbage,
> +  * so appending garbage after we pass all the
> +  * pack data is not good enough to signal
> +  * breakage to downstream.
> +  */

Yeah, I recall writing this funky and unfortunate logic in 2006.
Perhaps we want to update the comment style to make it a bit more
modern?

The "Data ready;" part of the comment applies more to what the
caller of this logic does (i.e. poll returned and revents indicates
we can read, and that is why we are calling into this logic).  The
remainder is the comment that is relevant to this logic.

> + ssize_t readsz;
> +
> + readsz = xread(outfd, os->buffer + os->used,
> +sizeof(os->buffer) - os->used);

So we read what we could read in the remaining buffer.

I notice that you alloated 8k+1 bytes; would this code end up
reading that 8k+1 bytes in full under the right condition?  I am
mainly wondering where the need for +1 comes from.

> + if (readsz < 0) {
> + return readsz;
> + }

OK, report an error to the caller by returning negative.  I am
guessing that you'd have more code in this block that currently have
only one statement in the future steps, perhaps.

> + os->used += readsz;
> +
> + if (os->used > 1) {
> + send_client_data(1, os->buffer, os->used - 1);
> + os->buffer[0] = os->buffer[os->used - 1];

OK, this corresponds to the "*cp++ = buffered"  in the original just
before xread().

> + os->used = 1;
> + } else {
> + send_client_data(1, os->buffer, os->used);
> + os->used = 0;

I am not sure if the code is correct when os->used happens to be 1
(shouldn't we hold the byte, skip the call to send_client_data(),
and go back to poll() to expect more data?), but this is a faithful
code movement and rewrite of the original.

I've read the caller (below, snipped) and the conversion looked
sensible there as well.  The final flushing of the byte(s) held back
in *os also looks good.


Re: [PATCH] rebase docs: fix incorrect format of the section Behavioral Differences

2018-12-04 Thread Elijah Newren
On Tue, Dec 4, 2018 at 7:54 PM Junio C Hamano  wrote:
>
> Elijah Newren  writes:
>
> > Gah, when I was rebasing on your patch I adopted your sentence rewrite
> > but forgot to remove the "sometimes".  Thanks for catching; correction:
>
> >
> > -- 8< --
> > Subject: [PATCH v2] git-rebase.txt: update note about directory rename
> >  detection and am
> >
> > In commit 6aba117d5cf7 ("am: avoid directory rename detection when
> > calling recursive merge machinery", 2018-08-29), the git-rebase manpage
> > probably should have also been updated to note the stronger
> > incompatibility between git-am and directory rename detection.  Update
> > it now.
> >
> > Signed-off-by: Elijah Newren 
> > ---
> >  Documentation/git-rebase.txt | 8 ++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/git-rebase.txt b/Documentation/git-rebase.txt
> > index 41631df6e4..ef76cccf3f 100644
> > --- a/Documentation/git-rebase.txt
> > +++ b/Documentation/git-rebase.txt
> > @@ -569,8 +569,12 @@ it to keep commits that started empty.
> >  Directory rename detection
> >  ~~
> >
> > -The merge and interactive backends work fine with
> > -directory rename detection.  The am backend sometimes does not.
> > +The merge and interactive backends work fine with directory rename
>
> I am not sure "work fine" is a fair and correct label, as rename
> detection is always a heuristic.
>
> The "directory rename detection" heuristic in "merge" and the
> "interactive" backends can take what happened to paths in the
> same directory into account when deciding if a disappeared path
> was "renamed" and to which other path.  The heuristic produces
> incorrect results when the information given is only about
> changed paths, which is why it is disabled when using the "am"
> backend.
>
> perhaps.

The general idea sounds good.  Does adding a few more details help
with understanding, or is it more of an information overload?  I'm
thinking of something like:

 The "directory rename detection" heuristic in the "merge" and
 "interactive" backends can take what happened to paths in the
 same directory on the other side of history into account when
 deciding whether a new path in that directory should instead be
 moved elsewhere.  The heuristic produces incorrect results when
 the only information available is about files which were changed
 on the side of history being rebased, which is why directory
 rename detection is disabled when using the "am" backend.


Re: [PATCH 2/3] sha1-file: emit error if an alternate looks like a repository

2018-12-04 Thread Jeff King
On Wed, Dec 05, 2018 at 12:35:08PM +0900, Junio C Hamano wrote:

> Ævar Arnfjörð Bjarmason   writes:
> 
> > Since 26125f6b9b ("detect broken alternates.", 2006-02-22) we've
> > emitted an error if the alternates directory doesn't exist, but not
> > for the common misstep of adding a path to another git repository as
> > an alternate, as opposed to its "objects" directory.
> >
> > Let's check for this, i.e. whether X/objects or X/.git/objects exists
> > if the user supplies X and print an error (which as a commit leading
> > up to this one shows doesn't change the exit code, just "warns").
> 
> I agree that "Let's check for this" is a good idea, but do not
> necessarily agree with "i.e.".  Don't we have a helper that takes
> the path to an existing directory and answers "Yup, it does look
> like a Git repository"?  Using that is a lot more in line with what
> you claimed to do in the title for this patch.

Hmm. Yeah, one case this does not handle is when ".git" is a git-file
pointing elsewhere, which should trigger the condition, too.

I think we can afford to be a bit loose with this check if it's just
generating a warning for a case that would otherwise not work (and the
worst is that we might fail to correctly diagnose a broken setup). But
that I think points to another issue: this kicks in even if the path is
otherwise usable.

So if I had, say, a git repository whose worktree was full of objects and
packfiles, it currently works for me to point to that as an alternate.
But after this patch, we'd complain "wait, this looks like a git repo!".

So I'd much rather see the logic check first for something usable, and
only when we fail to find it, start doing a loose diagnosis. Something
like:

  if !is_directory($path)
complain that it does not exist, as now
  else if !is_directory($path/pack)
/*
 * it doesn't look like an object directory; technically it
 * _could_ just have loose objects, and maybe we ought to check
 * for directories matching [0-9a-f]{2}, though it seems
 * somewhat unlikely these days.
 */
if is_directory($path/objects) || exists($path/.git)
complain that it looks like a git dir
else
complain that it doesn't look like an object dir
  fi

Hmm. I managed to write a gross mix of C and shell for my pseudocode,
but hopefully you can read it. ;)
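A runnable shell rendering of that pseudocode (hedged: the function name, message wording, and layout checks here are illustrative, not git's actual implementation):

```shell
# Diagnose a suspicious alternate path: complain only when it is not
# usable as an object directory, preferring the most specific message.
diagnose_alternate () {
	path=$1
	if ! test -d "$path"
	then
		echo "error: alternate '$path' does not exist"
	elif ! test -d "$path/pack"
	then
		# Not obviously an object directory; it _could_ hold only
		# loose objects, but that seems unlikely these days.
		if test -d "$path/objects" || test -e "$path/.git"
		then
			echo "warning: alternate '$path' looks like a git repository"
		else
			echo "warning: alternate '$path' does not look like an object directory"
		fi
	fi
}

tmp=$(mktemp -d)
mkdir -p "$tmp/repo/.git" "$tmp/odb/pack"
repo_msg=$(diagnose_alternate "$tmp/repo")   # a repo, not an object dir
odb_msg=$(diagnose_alternate "$tmp/odb")     # usable: no complaint at all
echo "$repo_msg"
```

Note the ordering: a path that already has a pack directory is accepted silently, which addresses the "worktree full of packfiles" case above.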

> I haven't read 3/3 yet, but as I said, I suspect it is reasonable to
> DWIM and use the object store associated with the directory we found
> to be a repository.

Yeah, I'm tempted by that, too, though I worry about corner cases. How
much effort should we put into discovery? I guess the rules from
enter_repo() would make sense, though the logic in that function would
need some refactoring to be reused elsewhere.

-Peff


Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-04 Thread René Scharfe
Am 05.12.2018 um 05:46 schrieb Jeff King:
> On Tue, Dec 04, 2018 at 10:45:13PM +0100, René Scharfe wrote:
> 
>>> The comment in the context there is warning callers to remember to load
>>> the cache first. Now that we have individual caches, might it make sense
>>> to change the interface a bit, and make these members private. I.e.,
>>> something like:
>>>
>>>   struct oid_array *odb_loose_cache(struct object_directory *odb,
>>>                                     int subdir_nr)
>>>   {
>>>           if (!odb->loose_objects_subdir_seen[subdir_nr])
>>>                   odb_load_loose_cache(odb, subdir_nr); /* or just inline it here */
>>>
>>>           return &odb->loose_objects_cache[subdir_nr];
>>>   }
>>
>> Sure.  And it should take an object_id pointer instead of a subdir_nr --
>> less duplication, nicer interface (patch below).
> 
> I had considered that initially, but it's a little less flexible if a
> caller doesn't actually have an oid. Though both of the proposed callers
> do, so it's probably OK to worry about that if and when it ever comes up
> (the most plausible thing in my mind is doing some abbrev-like analysis
> without looking to abbreviate a _particular_ oid).

Right, let's focus on concrete requirements of current callers.  YAGNI..
:)

>> And quick_has_loose() should be converted to object_id as well -- adding
>> a function that takes a SHA-1 is a regression. :D
> 
> I actually wrote it that way initially, but doing the hashcpy() in the
> caller is a bit more awkward. My thought was to punt on that until the
> rest of the surrounding code starts handling oids.

The level of awkwardness is the same to me, but sha1_loose_object_info()
is much longer already, so it might feel worse to add it there.  This
function is easily converted to struct object_id, though, as its single
caller can pass one on -- this makes the copy unnecessary.

> This patch looks sane. How do you want to handle it? I'd assumed your
> earlier one would be for applying on top, but this one doesn't have a
> commit message. Did you want me to squash down the individual hunks?

I'm waiting for the first one (object-store: use one oid_array per
subdirectory for loose cache) to be accepted, as it aims to solve a
user-visible performance regression, i.e. that's where the meat is.
I'm particularly interested in performance numbers from Ævar for it.

I can send the last one properly later, and add patches for converting
quick_has_loose() to struct object_id.  Those are just cosmetic..

René


Re: [WIP RFC 2/5] Documentation: add Packfile URIs design doc

2018-12-04 Thread Junio C Hamano
Junio C Hamano  writes:

> So, this is OK, but
>
>> +Clients then should understand that the returned packfile could be incomplete,
>> +and that it needs to download all the given URIs before the fetch or clone is
>> +complete. Each URI should point to a Git packfile (which may be a thin pack and
>> +which may contain offset deltas).
>
> weaken or remove the (parenthetical comment) in the last sentence,
> and replace the beginning of the section with something like
>
>   If the client replies with 'packfile-uris', when the server
>   sends the packfile, it MAY send a `packfile-uris` section...
>
> You may steal what I wrote in the above response to help the
> server-side folks to decide how to actually implement the "it MAY
> send a packfile-uris" part in the document.

By the way, I do agree with the practical consideration the design
you described makes.  For a pregenerated pack that brings you from
v1.0 to v2.0, "thin" would roughly save the transfer of one full
checkout (compressed, of course), and "ofs" would also save several
bytes per object.  Compared to a single pack that delivers everything
the fetcher wants, concatenation of packs without "thin" to transfer
the same set of objects would cost quite a lot more.

And I do not think we should care too much about fetchers that lack
either thin or ofs, so I'd imagine that any client that ask for
packfile-uris would also advertise thin and ofs as well, so in
practice, a request with packfile-uris that lack thin or ofs would
be pretty rare and requiring all three and requiring only one would
not make much practical difference.  It's just that I think singling
out these two capabilities as hard requirements at the protocol
level is wrong.


Re: [PATCH 3/3] sha1-file: change alternate "error:" message to "warning:"

2018-12-04 Thread Jeff King
On Wed, Dec 05, 2018 at 12:37:58PM +0900, Junio C Hamano wrote:

> Ævar Arnfjörð Bjarmason   writes:
> 
> > Change the "error" message emitted by alt_odb_usable() to be a
> > "warning" instead. As noted in commits leading up to this one this has
> > never impacted the exit code ever since the check was initially added
> > in 26125f6b9b ("detect broken alternates.", 2006-02-22).
> >
> > It's confusing to emit an "error" when e.g. "git fsck" will exit with
> > 0, so let's emit a "warning:" instead.
> 
> OK, that sounds sensible.  An alternative that may be more sensible
> is to actually diagnose this as an error, but the purpose of this
> message is to help users diagnose a possible misconfiguration and
> keeping the repository "working" with the remaining set of object
> stores by leaving it as a mere warning, like this patch does, is
> probably a better approach.

Yeah, I think it's better to keep it as a warning. It's actually
reasonably likely to be benign (e.g., you did a "git repack -ad && rm
/path/to/alternate" to remove the dependency, but forgot to clean up the
alternates). And when it _is_ a problem, the object-reading code paths
will definitely let you know.

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 07:11:08PM +0100, Ævar Arnfjörð Bjarmason wrote:

> It's a frequent annoyance of mine in the test suite that I'm
> e.g. running t*.sh with some parallel "prove" in one screen, and then I
> run tABCD*.sh manually, and get unlucky because they use the same trash
> dir, and both tests go boom.
> 
> You can fix that with --root, which is much of what this patch does. My
> one-liner for doing --stress has been something like:
> 
> perl -E 'say ++$_ while 1' | parallel --jobs=100% --halt-on-error soon,fail=1 './t-basic.sh --root=trash-{} -v'
> 
> But it would be great if I didn't have to worry about that and could
> just run two concurrent:
> 
> ./t-basic.sh
> 
> So I think we could just set some env variable where instead of having
> the predictable trash directory we have a $TRASHDIR.$N as this patch
> does, except we pick $N by locking some test-runs/tABCD.Nth file with
> flock() during the run.
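The slot-locking idea above can be sketched as follows (hedged: this uses mkdir as a portable stand-in for flock(); a real implementation would want flock so the slot is released automatically when the run dies, and the file names here are made up):

```shell
# Pick the lowest free slot N for a given test by atomically claiming
# a per-slot lock; concurrent runs of the same script get distinct N.
pick_slot () {
	n=1
	while ! mkdir "test-runs/t1234.$n.lock" 2>/dev/null
	do
		n=$((n + 1))
	done
	echo "$n"
}

tmp=$(mktemp -d) && cd "$tmp"
mkdir test-runs
first=$(pick_slot)    # first concurrent run claims slot 1
second=$(pick_slot)   # a second run, while the first holds its lock
echo "trash directory.t1234.$first and trash directory.t1234.$second"
```

Each run would then use `$TRASHDIR.$N` as its trash directory, so two concurrent `./t-basic.sh` invocations never collide.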

That actually sounds kind of annoying when doing a single run, since now
you have this extra ".N". I guess it would at least be predictable, and
I tend to tab-complete the trash dirs anyway.

I accomplish the same thing by doing my "big" runs using --root
specified in config.mak (which points to a RAM disk -- if you're not
using one to run the tests, you really should look into it, as it's way
faster). And then for one-offs, investigating failures, etc, I do "cd t
&& ./t-basic.sh -v -i", which naturally runs in the local directory.

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 06:04:14PM +0100, Ævar Arnfjörð Bjarmason wrote:

> 
> On Tue, Dec 04 2018, SZEDER Gábor wrote:
> 
> > The number of parallel invocations is determined by, in order of
> > precedence: the number specified as '--stress=', or the value of
> > the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
> > available processors in '/proc/cpuinfo', or 8.
> 
> With this series we have at least 3 ways to get this number. First
> online_cpus() in the C code, then Peff's recent `getconf
> _NPROCESSORS_ONLN` in doc-diff, and now this /proc/cpuinfo parsing.

To be fair, I only added the "getconf" thing because somebody complained
that I was parsing /proc/cpuinfo, and suggested it instead. ;)

I don't think it's especially portable, but it should work on Linux and
modern BSD/macOS, which may be enough (unlike doc-diff, I'd be a little
more inclined to ask somebody on a random platform to stress test if
they're hitting a bug).
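The fallback chain discussed in this subthread can be sketched in a few lines of portable-ish shell (hedged: the default of 8 and the doubling are taken from the quoted commit message, not from any one script):

```shell
# CPU count: getconf where it exists, /proc/cpuinfo on Linux, else 8.
nproc=$( { getconf _NPROCESSORS_ONLN ||
	   grep -c ^processor /proc/cpuinfo ||
	   echo 8 ; } 2>/dev/null | head -n 1 )
jobs=$((2 * nproc))
echo "would run $jobs parallel jobs"
```

`getconf _NPROCESSORS_ONLN` covers Linux and modern BSD/macOS; the `/proc/cpuinfo` grep is the Linux-only fallback the series currently parses by hand.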

> Perhaps it makes sense to split online_cpus() into a helper to use from
> the shellscripts instead?

I think somebody proposed that in a recent thread for other reasons,
too. I'd be OK with that, but probably just using getconf is reasonable
in the meantime.

-Peff


Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 05:34:57PM +0100, SZEDER Gábor wrote:

> To prevent the several parallel invocations of the same test from
> interfering with each other:
> 
>   - Include the parallel job's number in the name of the trash
> directory and the various output files under 't/test-results/' as
> a '.stress-' suffix.

Makes sense.

>   - Add the parallel job's number to the port number specified by the
> user or to the test number, so even tests involving daemons
> listening on a TCP socket can be stressed.

In my script I just use an arbitrary sequence of ports. That means two
stress runs may stomp on each other, but stress runs tend not to
interfere with regular runs (whereas with your scheme, a stress run of
t5561 is going to stomp on t5562). It probably doesn't matter much
either way, as I tend not to do too much with the machine during a
stress run.

>   - Make '--stress' imply '--verbose-log' and discard the test's
> standard output and error; dumping the output of several parallel
> tests to the terminal would create a big ugly mess.

Makes sense. My script just redirects the output to a file, but it
predates --verbose-log.

My script always runs with "-x". I guess it's not that hard to add it if
you want, but it is annoying to finally get a failure after several
minutes only to realize that your are missing some important
information. ;)

Ditto for "-i", which leaves more readable output (you know the
interesting part is at the end of the log), and leaves a trash directory
state that is more likely to be useful for inspecting.

My script also implies "--root". That's not strictly necessary, though I
suspect it is much more likely to catch races when it's run on a ram
disk (simply because it leaves the CPU as the main bottleneck).

> 'wait' for all parallel jobs before exiting (either because a failure
> was found or because the user lost patience and aborted the stress
> test), allowing the still running tests to finish.  Otherwise the "OK
> X.Y" progress output from the last iteration would likely arrive after
> the user got back the shell prompt, interfering with typing in the
> next command.  OTOH, this waiting might induce a considerable delay
> between hitting ctrl-C and the test actually exiting; I'm not sure
> this is the right tradeoff.

If there is a delay, it's actually quite annoying to _not_ wait; you
start doing something in the terminal, and then a bunch of extra test
statuses print at a random time.

> + job_nr=0
> + while test $job_nr -lt "$job_count"
> + do
> + (
> + GIT_TEST_STRESS_STARTED=done
> + GIT_TEST_STRESS_JOB_NR=$job_nr
> + export GIT_TEST_STRESS_STARTED GIT_TEST_STRESS_JOB_NR
> +
> + cnt=0
> + while ! test -e "$stressfail"
> + do
> + if $TEST_SHELL_PATH "$0" "$@" >/dev/null 2>&1
> + then
> + printf >&2 "OK   %2d.%d\n" 
> $GIT_TEST_STRESS_JOB_NR $cnt

OK, this adds a counter for each job number (compared to my script).
Seems helpful.

> + elif test -f "$stressfail" &&
> +  test "$(cat "$stressfail")" = "aborted"
> + then
> + printf >&2 "ABORTED %2d.%d\n" 
> $GIT_TEST_STRESS_JOB_NR $cnt
> + else
> + printf >&2 "FAIL %2d.%d\n" 
> $GIT_TEST_STRESS_JOB_NR $cnt
> + echo $GIT_TEST_STRESS_JOB_NR >>"$stressfail"
> + fi

Hmm. What happens if we see a failure _and_ there's an abort? That's
actually pretty plausible if you see a failure whiz by, and you want to
abort the remaining scripts because they take a long time to run.

If the abort happens first, then the goal is to assume any errors are
due to the abort. But there's a race between the individual jobs reading
$stressfail and the parent signal handler writing it. So you may end up
with "aborted\n5" or similar (after which any other jobs would fail to
match "aborted" and declare themselves failures, as well).  I think my
script probably also has this race, too (it doesn't check the abort case
explicitly, but it would race in checking "test -e fail").

If the fail happens first (which I think is the more likely case), then
the abort handler will overwrite the file with "aborted", and we'll
forget that there was a real failure. This works in my script because of
the symlinking (if a real failure symlinked $fail to a directory, then
the attempt to write "aborted" will just be a noop).
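The symlink trick described above can be sketched like this (hedged: file and directory names are illustrative, not taken from either script):

```shell
# A failing job "claims" $fail by symlinking it to its trash directory.
# A later attempt by the abort handler to write "aborted" into $fail
# then tries to open a directory for writing and fails, so the record
# of the real failure survives.
tmp=$(mktemp -d) && cd "$tmp"
fail=stress.fail
mkdir trash.5                                  # the failing job's trash dir
ln -s trash.5 "$fail"                          # record the failure
{ echo aborted >"$fail"; } 2>/dev/null || true # abort handler: now a noop
readlink "$fail"
```

This is why the fail-then-abort ordering works in the symlink scheme but races in the append-to-file scheme.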

> + job_nr=0
> + while test $job_nr -lt "$job_count"
> + do
> + wait
> + job_nr=$(($job_nr + 1))
> + done

Do we need to loop? Calling "wait" with no arguments should wait for all
children.
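A quick demonstration that a bare `wait` already covers every child, making the counting loop in the quoted hunk unnecessary:

```shell
# Start three background jobs, then wait once with no arguments.
out=$(
	for i in 1 2 3
	do
		( sleep 1; echo "job $i done" ) &
	done
	wait    # no arguments: blocks until every child has exited
	echo "all jobs finished"
)
echo "$out"
```

Per POSIX, `wait` without operands waits for all known child processes and returns 0, so the surrounding loop adds nothing.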

> + if 

Re: [PATCH 2/3] test-lib-functions: introduce the 'test_set_port' helper function

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 05:34:56PM +0100, SZEDER Gábor wrote:

> Several test scripts run daemons like 'git-daemon' or Apache, and
> communicate with them through TCP sockets.  To have unique ports where
> these daemons are accessible, the ports are usually the number of the
> corresponding test scripts, unless the user overrides them via
> environment variables, and thus all those tests and test libs contain
> more or less the same bit of one-liner boilerplate code to find out
> the port.  The last patch in this series will make this a bit more
> complicated.
> 
> Factor out finding the port for a daemon into the common helper
> function 'test_set_port' to avoid repeating ourselves.

OK. Looks like this should keep the same port numbers for all of the
existing tests, which I think is good. As nice as choosing random high
port numbers might be for some cases, it can also be confusing when
there are random conflicts.

I've also run into non-random conflicts, but at least once you figure
them out they're predictable (the most confusing I've seen is that adb,
the android debugger, sometimes but not always leaves a daemon hanging
around on port 5561).

> Take special care of test scripts with "low" numbers:
> 
>   - Test numbers below 1024 would result in a port that's only usable
> as root, so set their port to '1 + test-nr' to make sure it
> doesn't interfere with other tests in the test suite.  This makes
> the hardcoded port number in 't0410-partial-clone.sh' unnecessary,
> remove it.

This part is what made me think a bit more about conflicting with
dynamically allocated ports. Arguably the http parts of t0410 ought to
be in a much higher-numbered script, but I don't know that we've held
over the years very well to the idea that scripts should only depend on
the bits from lower numbered scripts.

> ---
>  t/lib-git-daemon.sh  |  2 +-
>  t/lib-git-p4.sh  |  9 +
>  t/lib-git-svn.sh |  2 +-
>  t/lib-httpd.sh   |  2 +-
>  t/t0410-partial-clone.sh |  1 -
>  t/t5512-ls-remote.sh |  2 +-
>  t/test-lib-functions.sh  | 36 
>  7 files changed, 41 insertions(+), 13 deletions(-)

The patch itself looks good to me.

> + eval port=\"\${$var}\"
> + case "$port" in

The quotes in the eval'd variable assignment aren't necessary. Usually I
don't mind them in a simple:

  FOO="$bar"

even though they're redundant (and there were a few instances in the
prior patch, I think). But here the escaping makes it harder to read,
compared to:

  eval port=\$$var

It might be worth simplifying, but I don't feel strongly about it (we'd
run into problems if $var contained spaces with either variant, but I
don't think that's worth caring about).
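The two quoting styles under discussion, side by side (hedged: the variable name is a made-up example, not the patch's actual call site); both expand the variable whose *name* is stored in `$var`:

```shell
# Indirect expansion: look up the value of the variable named by $var.
var=LIB_HTTPD_PORT
LIB_HTTPD_PORT=5541

eval port=\"\${$var}\"   # the patch's version: escaped quotes and braces
eval port2=\$$var        # the simpler equivalent suggested here
echo "$port $port2"
```

Both break identically if `$var` contained spaces, which supports the point that the extra escaping buys nothing here.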

-Peff


Re: [WIP RFC 2/5] Documentation: add Packfile URIs design doc

2018-12-04 Thread Junio C Hamano
Jonathan Tan  writes:

> +This feature allows servers to serve part of their packfile response as URIs.
> +This allows server designs that improve scalability in bandwidth and CPU usage
> +(for example, by serving some data through a CDN), and (in the future) provides
> +some measure of resumability to clients.

Without reading the remainder, this makes readers anticipate a few
good things ;-)

 - "part of", so pre-generated constant material can be given from
   CDN and then followed-up by "filling the gaps" small packfile,
   perhaps?

 - The "part of" transmission may not bring the repository up to
   date wrt to the "want" objects; would this feature involve "you
   asked history up to these commits, but with this pack-uri, you'll
   be getting history up to these (somewhat stale) commits"?

Anyway, let's read on.

> +This feature is available only in protocol version 2.
> +
> +Protocol
> +
> +
> +The server advertises `packfile-uris`.
> +
> +If the client replies with the following arguments:
> +
> + * packfile-uris
> + * thin-pack
> + * ofs-delta

"with the following" meaning "with all of the following", or "with
any of the following"?  Is there a reason why the server side must
require that the client understands and is willing to accept a
thin-pack when wanting to use packfile-uris?  The same question for
the ofs-delta.

When the pregenerated constant material the server plans to hand out
the uris for was prepared by using ofs-delta encoding, the server
cannot give the uri to it when the client does not want ofs-delta
encoded packfile, but it feels somewhat strange that we require the
most capable client at the protocol level.  After all, the server
side could prepare one with ofs-delta and another without ofs-delta
and depending on what the client is capable of, hand out different
URIs, if it wanted to.

The reason why I care is because thin and ofs will *NOT* stay
forever the only optional features of the pack format.  We may
invent yet another such optional 'frotz' feature, which may greatly
help the efficiency of the packfile encoding, hence it may be
preferable to always generate a CDN packfile with that feature, in
addition to thin and ofs.  Would we add 'frotz' to the above list in
the documentation, then?  What would happen to existing servers and
clients written before that time then?

My recommendation is to drop the mention of "thin" and "ofs" from
the above list, and also from the following paragraph.  The "it MAY
send" will serve as a practical escape clause to allow a server/CDN
implementation that *ALWAYS* prepares pregenerated material that can
only be digested by clients that supports thin and ofs.  Such a server
can send packfile-URIs only when all of the three are given by the
client and be compliant.  And such an update to the proposed document
would allow a more diskful server to prepare both thin and non-thin
pregenerated packs and choose which one to give to the client depending
on the capability.

> +when the server sends the packfile, it MAY send a `packfile-uris` section
> +directly before the `packfile` section (right after `wanted-refs` if it is
> +sent) containing HTTP(S) URIs. See protocol-v2.txt for the documentation of
> +this section.

So, this is OK, but

> +Clients then should understand that the returned packfile could be 
> incomplete,
> +and that it needs to download all the given URIs before the fetch or clone is
> +complete. Each URI should point to a Git packfile (which may be a thin pack 
> and
> +which may contain offset deltas).

weaken or remove the (parenthetical comment) in the last sentence,
and replace the beginning of the section with something like

If the client replies with 'packfile-uris', when the server
sends the packfile, it MAY send a `packfile-uris` section...

You may steal what I wrote in the above response to help the
server-side folks to decide how to actually implement the "it MAY
send a packfile-uris" part in the document.

> +Server design
> +-
> +
> +The server can be trivially made compatible with the proposed protocol by
> +having it advertise `packfile-uris`, tolerating the client sending
> +`packfile-uris`, and never sending any `packfile-uris` section. But we should
> +include some sort of non-trivial implementation in the Minimum Viable 
> Product,
> +at least so that we can test the client.
> +
> +This is the implementation: a feature, marked experimental, that allows the
> +server to be configured by one or more `uploadpack.blobPackfileUri=<sha1>
> +<uri>` entries. Whenever the list of objects to be sent is assembled, a blob
> +with the given sha1 can be replaced by the given URI. This allows, for 
> example,
> +servers to delegate serving of large blobs to CDNs.

;-)
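For concreteness, such a configuration entry might look like the following on the server side (the object id and CDN URL are invented placeholders, not part of the proposal):

```ini
# Hypothetical server-side entry mapping one blob to a CDN-hosted packfile
[uploadpack]
	blobPackfileUri = 0123456789abcdef0123456789abcdef01234567 https://cdn.example.com/packs/blob-0123456.pack
```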

> +Client design
> +-
> +
> +While fetching, the client needs to remember the list of URIs and cannot
> +declare that the fetch is complete until all URIs have been downloaded as
> +packfiles.
> +
> +The division 

Re: [PATCH 1/3] test-lib: consolidate naming of test-results paths

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 05:34:55PM +0100, SZEDER Gábor wrote:

> There are two places where we strip off any leading path components
> and the '.sh' suffix from the test script's pathname, and there are
> two places where we construct the filename of test output files in
> 't/test-results/'.  The last patch in this series will add even more.
> 
> Factor these out into helper variables to avoid repeating ourselves.

Makes sense.

> +TEST_NAME="$(basename "$0" .sh)"
> +TEST_RESULTS_BASE="$TEST_OUTPUT_DIRECTORY/test-results/$TEST_NAME"

Hmm, since we are building up this BASE variable anyway, why not:

  TEST_RESULTS_DIR=$TEST_OUTPUT_DIRECTORY/test-results
  TEST_RESULTS_BASE=$TEST_RESULTS_DIR/$TEST_NAME

? That saves having to run `dirname` on it later.

I guess one could argue that saying "the directory name of the file I'm
writing" is more readable. I just generally try to avoid extra
manipulation of the strings when possible (especially in shell).
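
A sketch of the suggested variant (paths and test name below are invented):

```shell
TEST_OUTPUT_DIRECTORY=/tmp/git-tests
TEST_NAME=$(basename "t0000-basic.sh" .sh)

# Build the directory name first, then derive BASE from it.
TEST_RESULTS_DIR=$TEST_OUTPUT_DIRECTORY/test-results
TEST_RESULTS_BASE=$TEST_RESULTS_DIR/$TEST_NAME

# Later consumers can use $TEST_RESULTS_DIR directly instead of
# running $(dirname "$TEST_RESULTS_BASE"):
mkdir -p "$TEST_RESULTS_DIR"
echo "$TEST_RESULTS_BASE"    # prints /tmp/git-tests/test-results/t0000-basic
```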

-Peff


Re: [PATCH on sb/more-repo-in-api] revision: use commit graph in get_reference()

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 02:42:38PM -0800, Jonathan Tan wrote:

> diff --git a/revision.c b/revision.c
> index b5108b75ab..e7da2c57ab 100644
> --- a/revision.c
> +++ b/revision.c
> @@ -212,7 +212,20 @@ static struct object *get_reference(struct rev_info 
> *revs, const char *name,
>  {
>   struct object *object;
>  
> - object = parse_object(revs->repo, oid);
> + /*
> +  * If the repository has commit graphs, repo_parse_commit() avoids
> +  * reading the object buffer, so use it whenever possible.
> +  */
> + if (oid_object_info(revs->repo, oid, NULL) == OBJ_COMMIT) {
> + struct commit *c = lookup_commit(revs->repo, oid);
> + if (!repo_parse_commit(revs->repo, c))
> + object = (struct object *) c;
> + else
> + object = NULL;
> + } else {
> + object = parse_object(revs->repo, oid);
> + }

This seems like a reasonable thing to do, but I have sort of a
meta-comment. In several places we've started doing this kind of "if
it's this type of object, do X, otherwise do Y" optimization (e.g.,
handling large blobs for streaming).

And in the many cases we end up doubling the effort to do object
lookups: here we do one lookup to get the type, and then if it's not a
commit (or if we don't have a commit graph) we end up parsing it anyway.

I wonder if we could do better. In this instance, it might make sense
to first see if we actually have a commit graph available (it might not
have this object, of course, but at least we'd expect it to have most
commits). In general, it would be nice if we had a more incremental API
for accessing objects: open, get metadata, then read the data. That
would make these kinds of optimizations "free".

I don't have numbers for how much the extra lookups cost. The lookups
are probably dwarfed by parse_object() in general, so even if we save
only a few full object loads, it may be a win. It just seems a shame
that we may be making the "slow" paths (when our type-specific check
doesn't match) even slower.
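
A command-level analogy of the "learn the type first, then read anyway" pattern (this uses the plumbing commands on a throwaway repository, not the internal C API being discussed):

```shell
cd "$(mktemp -d)" || exit 1
git init -q .
git config user.name Example
git config user.email example@example.com
git commit -q --allow-empty -m demo
oid=$(git rev-parse HEAD)

# Two lookups: one round to learn the type, another to read the object.
type=$(git cat-file -t "$oid")
[ "$type" = commit ] && git cat-file commit "$oid" >/dev/null

# One lookup: the batch interface reports type, size and content from a
# single object resolution -- the "incremental API" shape described above.
echo "$oid" | git cat-file --batch >batch.out
head -n 1 batch.out    # "<oid> commit <size>"
```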

-Peff


Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 10:45:13PM +0100, René Scharfe wrote:

> > The comment in the context there is warning callers to remember to load
> > the cache first. Now that we have individual caches, might it make sense
> > to change the interface a bit, and make these members private. I.e.,
> > something like:
> > 
> >   struct oid_array *odb_loose_cache(struct object_directory *odb,
> > int subdir_nr) 
> >   {
> > if (!odb->loose_objects_subdir_seen[subdir_nr])
> > odb_load_loose_cache(odb, subdir_nr); /* or just inline it here 
> > */
> > 
> > return &odb->loose_objects_cache[subdir_nr];
> >   }
> 
> Sure.  And it should take an object_id pointer instead of a subdir_nr --
> less duplication, nicer interface (patch below).

I had considered that initially, but it's a little less flexible if a
caller doesn't actually have an oid. Though both of the proposed callers
do, so it's probably OK to worry about that if and when it ever comes up
(the most plausible thing in my mind is doing some abbrev-like analysis
without looking to abbreviate a _particular_ oid).

> It would be nice if it could return a const pointer to discourage
> messing up the cache, but that's not compatible with oid_array_lookup().

Yeah.

> And quick_has_loose() should be converted to object_id as well -- adding
> a function that takes a SHA-1 is a regression. :D

I actually wrote it that way initially, but doing the hashcpy() in the
caller is a bit more awkward. My thought was to punt on that until the
rest of the surrounding code starts handling oids.

> ---
>  object-store.h |  8 
>  sha1-file.c| 19 ---
>  sha1-name.c|  4 +---
>  3 files changed, 13 insertions(+), 18 deletions(-)

This patch looks sane. How do you want to handle it? I'd assumed your
earlier one would be for applying on top, but this one doesn't have a
commit message. Did you want me to squash down the individual hunks?

-Peff


Re: [PATCH v3 07/14] checkout: split into switch-branch and restore-files

2018-12-04 Thread Elijah Newren
On Tue, Dec 4, 2018 at 6:26 PM Junio C Hamano  wrote:
>
> Duy Nguyen  writes:
>
> > On Tue, Dec 4, 2018 at 6:43 PM Elijah Newren  wrote:
> >> > > > +--ours::
> >> > > > +--theirs::
> >> ...
> >> go away.  Maybe it can still be fixed (I haven't dug too deeply into
> >> it), but if so, the only fix needed here would be to remove this long
> >> explanation about why the tool gets things totally backward.
> >
> > Aha. I'm not really deep in this merge business to know if stages 2 and
> > 3 can be swapped. This is right up your alley. I'll just leave it to
> > you.
>
> Please don't show stage#2 and stage#3 swapped to the end user,
> unless that is protected behind an option (not per-repo config).
> It is pretty much ingrained that stage#2 is what came from the
> commit that will become the first parent of the commit being
> prepared, and changing it without an explicit command line option
> will break tools.

What depends on stage#2 coming from the commit that will become the
first parent?  I wasn't thinking in terms of modifying
checkout/restore-files/diff/etc in a way that would make them show
things different than what was recorded in the index, I was rather
musing on whether it was feasible to have rebase tell the merge
machinery to treat HEAD as stage #3 and the other commit as stage #2
so that it was swapping what was actually recorded in the index.

I know the merge machinery implicitly assumes HEAD == stage #2 in
multiple places, and it'd obviously need a fair amount of fixing to
handle this.  I wasn't immediately aware of other things that would
break.  If you know of some, I'm happy to hear.  Otherwise, I might go
and learn the hard way (after I get around to the merge rewrite) why
my idea is crazy.  :-)

> > I'm actually still not sure how to move it here (I guess 'here' is
> > restore-files since we won't move HEAD). All the --mixed, --merge and
> > --hard are confusing. But maybe we could just make 'git restore-files
> > --from HEAD -f :/" behave just like "git reset --hard HEAD" (but with
> > some safety net) But we can leave it for discussion in the next round.
>
> Perhaps you two should pay a bit closer attention to what Thomas
> Gummerer is working on.  I've touched above in my earlier comments,
> too, e.g.  

Indeed, I'm excited about his changes now; I'll keep an eye out.


Re: [PATCH v3 07/14] checkout: split into switch-branch and restore-files

2018-12-04 Thread Elijah Newren
On Tue, Dec 4, 2018 at 6:14 PM Junio C Hamano  wrote:
>
> Duy Nguyen  writes:
>
> >> My single biggest worry about this whole series is that I'm worried
> >> you're perpetuating and even further ingraining one of the biggest
> >> usability problems with checkout: people suggest and use it for
> >> reverting/restoring paths to a previous version, but it doesn't do
> >> that:
> >
> > ...
> >
> >  git restore-files --from=master~10 Documentation/
>
> The "single biggest worry" could be due to Elijah not being aware of
> other recent discussions.  My understanding of the plan is
>
>  - "git checkout" will learn a new "--[no-]overlay" option, where
>the current behaviour, i.e. "take paths in master~10 that match
>pathspec Documentation/, and overlay them on top of what is in
>the index and the working tree", is explained as "the overlay
>mode" and stays to be the default.  With "checkout --no-overlay
>master~10 Documentation/", the command will become "replace paths
>in the current index and the working tree that match the pathspec
>Documentation/ with paths in master~10 that match pathspec
>Documentation/".
>
>  - "git restore-files --from=<tree-ish> <pathspec>" by default will use
>"--no-overlay" semantics, but the users can still use "--overlay"
>from the command line as an option.
>
> So "restore-files" would become truly "restore the state of
> Documentation/ to match that of master~10", I would think.

Oh, sweet, that's awesome.  I saw some of the other emails as I was
scanning through and looked at the --overlay stuff since it sounded
relevant but something in the explanation made me think it was about
whether the command would write to the index or the working tree,
rather than being about adding+overwriting vs. just overwriting.

> >> Also, the fact that we're trying to make a simpler command makes me
> >> think that removing the auto-vivify behavior from the default and
> >> adding a simple flag which users can pass to request will allow this
> >> part of the documentation to be hidden behind the appropriate flag,
> >> which may make it easier for users to compartmentalize the command and
> >> it's options, enabling them to learn as they go.
> >
> > Sounds good. I don't know a good name for this new option though so
> > unless anybody comes up with some suggestion, I'll just disable
> > checkout.defaultRemote in switch-branch. If it comes back as a new
> > option, it can always be added later.
>
> Are you two discussing the "checkout --guess" option?  I am somewhat
> lost here.

Generally what was being discussed was just that this manpage was
rather complicated for the standard base-case due to the need to
explain all the behavior associated with --guess since that option is
on by default in checkout.  And since --guess is controlled by
checkout.defaultRemote, that was part of the extra complexity that had
to be learned in the basic explanation, rather than letting users
learn it when they learn a new flag.

The critical part you were missing was part of the original text just
before the quoted part was:

>>> So switch-branch will be controlled by checkout.* config variables?
>>> That probably makes the most sense, but it does dilute the advantage
>>> of adding these simpler commands.

Duy is responding to that even if it wasn't included in his quoting.

> >> > +-f::
> >> > +--force::
> >> > +   Proceed even if the index or the working tree differs from
> >> > +   HEAD.  This is used to throw away local changes.
> >>
> >> Haven't thought through this thoroughly, but do we really need an
> >> option for that instead of telling users to 'git reset --hard HEAD'
> >> before switching branches if they want their stuff thrown away?
> >
> > For me it's just a bit more convenient. Hit an error when switching
> > branch? Recall the command from bash history, stick -f in it and run.
> > Elsewhere I think both Junio and Thomas (or maybe only Junio) suggests
> > moving the "git reset" functionality without moving HEAD to one of
> > these commands, which goes the opposite direction...
>
> Isn't there a huge difference?  "checkout --force <branch>"
> needs to clobber only the changes that are involved in the switch,
> i.e. if your README.txt is the same between master and maint while
> Makefile is different, after editing both files while on master, you
> can not "switch-branch" to maint without doing something to Makefile
> (i.e. either discard your local change or wiggle your local change
> to the context of 'maint' with "checkout -m").  But you can carry
> the changes to README.txt while checking out 'maint' branch.
> Running "git reset --hard HEAD" would mean that you will lose the
> changes to README.txt as well.

Ah, indeed.  Thanks for pointing out what I missed here.

> >> > +--orphan <new-branch>::
> >> > +   Create a new 'orphan' branch, named <new-branch>, started from
> >> > +   <start-point> and switch to it.  The first commit made on this
> >>
> >> What??  started from <start-point>?  The whole point of --orphan is
> >> 

Re: [WIP RFC 1/5] Documentation: order protocol v2 sections

2018-12-04 Thread Junio C Hamano
Jonathan Tan  writes:

> The git command line expects Git servers to follow a specific order of

"Command line"?  It sounds like you are talking about the order of
command line arguments and options, but apparently that is not what
you are doing.  Is it "The git over-the-wire protocol"?

> +output = acknowledgements flush-pkt |
> +  [acknowledgments delim-pkt] [shallow-info delim-pkt]
> +  [wanted-refs delim-pkt] packfile flush-pkt

So the output can be either 

 - acks followed by flush (and nothing else) or

 - (possibly) acks, followed by (possibly) shallow, followed by
   (possibly) wanted-refs, followed by the pack stream and flush at
   the end.

> @@ -335,9 +335,10 @@ header.
>  *PKT-LINE(%x01-03 *%x00-ff)
>  
>  acknowledgments section
> - * If the client determines that it is finished with negotiations
> -   by sending a "done" line, the acknowledgments sections MUST be
> -   omitted from the server's response.
> + * If the client determines that it is finished with negotiations by
> +   sending a "done" line (thus requiring the server to send a packfile),
> +   the acknowledgments sections MUST be omitted from the server's
> +   response.

OK.  

>   * Always begins with the section header "acknowledgments"
>  
> @@ -388,9 +389,6 @@ header.
> which the client has not indicated was shallow as a part of
> its request.
>  
> - * This section is only included if a packfile section is also
> -   included in the response.
> -

Earlier, we said that shallow-info is not given when packfile is not
there.  That is captured in the updated EBNF above.  We don't have a
corresponding removal of a bullet point for wanted-refs section below
but probably that is because the original did not have corresponding
bullet point to begin with.

>  wanted-refs section
>   * This section is only included if the client has requested a
> ref using a 'want-ref' line and if a packfile section is also


Re: [ANNOUNCE] Git v2.20.0-rc2

2018-12-04 Thread Jeff King
On Tue, Dec 04, 2018 at 06:48:22PM -0800, Stefan Beller wrote:

> > Perhaps we should note this more prominently, and since Brandon isn't at
> > Google anymore can some of you working there edit this post? It's the
> > first Google result for "git protocol v2", so it's going to be quite
> > confusing for people if after 2.20 the instructions in it no longer
> > work.
> 
> Thanks for pointing this out, we missed the implications when that patch was
> discussed. Looking into it.

I think if it just does "ls-remote --heads", that would still prove the
point (it would omit all of the tags).

-Peff


Re: [PATCH] rebase docs: fix incorrect format of the section Behavioral Differences

2018-12-04 Thread Junio C Hamano
Elijah Newren  writes:

> Gah, when I was rebasing on your patch I adopted your sentence rewrite
> but forgot to remove the "sometimes".  Thanks for catching; correction:

>
> -- 8< --
> Subject: [PATCH v2] git-rebase.txt: update note about directory rename
>  detection and am
>
> In commit 6aba117d5cf7 ("am: avoid directory rename detection when
> calling recursive merge machinery", 2018-08-29), the git-rebase manpage
> probably should have also been updated to note the stronger
> incompatibility between git-am and directory rename detection.  Update
> it now.
>
> Signed-off-by: Elijah Newren 
> ---
>  Documentation/git-rebase.txt | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/git-rebase.txt b/Documentation/git-rebase.txt
> index 41631df6e4..ef76cccf3f 100644
> --- a/Documentation/git-rebase.txt
> +++ b/Documentation/git-rebase.txt
> @@ -569,8 +569,12 @@ it to keep commits that started empty.
>  Directory rename detection
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
> -The merge and interactive backends work fine with
> -directory rename detection.  The am backend sometimes does not.
> +The merge and interactive backends work fine with directory rename

I am not sure "work fine" is a fair and correct label, as rename is
always heuristic.  

The "directory rename detection" heuristic in "merge" and the
"interactive" backends can take what happened to paths in the
same directory into account when deciding if a disappeared path
was "renamed" and to which other path.  The heuristic produces
incorrect result when the information given is only about
changed paths, which is why it is disabled when using the "am"
backend.

perhaps.


Re: [PATCH 3/3] sha1-file: change alternate "error:" message to "warning:"

2018-12-04 Thread Junio C Hamano
Ævar Arnfjörð Bjarmason   writes:

> Change the "error" message emitted by alt_odb_usable() to be a
> "warning" instead. As noted in commits leading up to this one this has
> never impacted the exit code ever since the check was initially added
> in 26125f6b9b ("detect broken alternates.", 2006-02-22).
>
> It's confusing to emit an "error" when e.g. "git fsck" will exit with
> 0, so let's emit a "warning:" instead.

OK, that sounds sensible.  An alternative that may be more sensible
is to actually diagnose this as an error, but the purpose of this
message is to help users diagnose a possible misconfiguration and
keeping the repository "working" with the remaining set of object
stores by leaving it as a mere warning, like this patch does, is
probably a better approach.



Re: [PATCH 2/3] sha1-file: emit error if an alternate looks like a repository

2018-12-04 Thread Junio C Hamano
Ævar Arnfjörð Bjarmason   writes:

> Since 26125f6b9b ("detect broken alternates.", 2006-02-22) we've
> emitted an error if the alternates directory doesn't exist, but not
> for the common misstep of adding a path to another git repository as
> an alternate, as opposed to its "objects" directory.
>
> Let's check for this, i.e. whether X/objects or X/.git/objects exists
> if the user supplies X and print an error (which as a commit leading
> up to this one shows doesn't change the exit code, just "warns").

I agree that "Let's check for this" is a good idea, but do not
necessarily agree with "i.e.".  Don't we have a helper that takes
the path to an existing directory and answers "Yup, it does look
like a Git repository"?  Using that is a lot more in line with what
you claimed to do in the title for this patch.

I haven't read 3/3 yet, but as I said, I suspect it is reasonable to
DWIM and use the object store associated with the directory we found
to be a repository.




Re: [PATCHv2 0/9] Resending sb/submodule-recursive-fetch-gets-the-tip

2018-12-04 Thread Junio C Hamano
Stefan Beller  writes:

> This is a resend of sb/submodule-recursive-fetch-gets-the-tip,
> with all feedback addressed. As it took some time, I'll send it
> without range-diff, but would ask for full review.

Is that a "resend" or reroll/update (or whatever word that does not
imply "just sending the same thing again")?

FWIW, here is the range diff between 104f939f27..@{-1} and master..
after replacing the topic with this round.


 3:  304b2dab29 !  3:  08a297bd49 submodule.c: sort changed_submodule_names 
before searching it
@@ -28,7 +28,7 @@
 @@
/* default value, "--submodule-prefix" and its value are added later */
  
-   calculate_changed_submodule_paths();
+   calculate_changed_submodule_paths(r);
+  string_list_sort(&changed_submodule_names);
run_processes_parallel(max_parallel_jobs,
   get_next_submodule,

Just the call nearby in the context has become repository-aware; no
change in this series.

 4:  f7345dad6d !  4:  16dd6fe133 submodule.c: tighten scope of 
changed_submodule_names struct
...
++  calculate_changed_submodule_paths(r, &changed_submodule_names);
 +  string_list_sort(&changed_submodule_names);
run_processes_parallel(max_parallel_jobs,
   get_next_submodule,

I do recall having to do these adjustments while merging, so not
having to do so anymore with rebasing is a welcome change ;-)

 5:  5613d81d1e !  5:  bcd7337243 submodule: store OIDs in 
changed_submodule_names
...
Likewise.

 7:  e2419f7e30 !  7:  26f80ccfc1 submodule: migrate get_next_submodule to use 
repository structs
@@ -4,7 +4,8 @@
 
 We used to recurse into submodules, even if they were broken having
 only an objects directory. The child process executed in the submodule
-would fail though if the submodule was broken.
+would fail though if the submodule was broken. This is tested via
+"fetching submodule into a broken repository" in t5526.
 
 This patch tightens the check upfront, such that we do not need
 to spawn a child process to find out if the submodule is broken.
@@ -34,6 +35,7 @@
+  strbuf_repo_worktree_path(&gitdir, r, "%s/.git", sub->path);
 +  if (repo_init(ret, gitdir.buf, NULL)) {
 +  strbuf_release(&gitdir);
++  free(ret);
 +  return NULL;

Leakfix?  Good.

 +  }
+  strbuf_release(&gitdir);
@@ -75,11 +77,10 @@
 +  if (repo) {
child_process_init(cp);
-  cp->dir = strbuf_detach(&submodule_path, NULL);
-   prepare_submodule_repo_env(&cp->env_array);
 +  cp->dir = xstrdup(repo->worktree);
+   prepare_submodule_repo_env(&cp->env_array);

Hmph, I offhand do not see there would be any difference if you
assigned to cp->dir before or after preparing the repo env, but is
there a reason these two must be done in this updated order that I
am missing?  Very similar changes appear multiple times in this
range-diff.

cp->git_cmd = 1;
if (!spf->quiet)
-   strbuf_addf(err, "Fetching submodule %s%s\n",
 @@
argv_array_push(&cp->args, default_argv);
argv_array_push(&cp->args, "--submodule-prefix");
@@ -94,8 +95,12 @@
 +   * the submodule is not initialized
 +   */
 +  if (S_ISGITLINK(ce->ce_mode) &&
-+  !is_empty_dir(ce->name))
-+  die(_("Could not access submodule '%s'"), 
ce->name);
++  !is_empty_dir(ce->name)) {
++  spf->result = 1;
++  strbuf_addf(err,
++  _("Could not access submodule 
'%s'"),
++  ce->name);
++  }

OK, not dying but returning to the caller to handle the error.

 9:  7454fe5cb6 !  9:  04eb06607b fetch: try fetching submodules if needed 
objects were not fetched
@@ -17,11 +17,6 @@
 Try fetching a submodule by object id if the object id that the
 superproject points to, cannot be found.
 
-The try does not happen when the "git fetch" done at the
-superproject is not storing the fetched results in remote
-tracking branches (i.e. instead just recording them to
-FETCH_HEAD) in this step. A later patch will fix this.
-
 builtin/fetch used to only inspect submodules when they were fetched
 "on-demand", as in either on/off case it was clear whether the 
submodule
 needs to be fetched. However to know whether we need to try fetching 
the
@@ -29,6 +24,10 @@
 function check_for_new_submodule_commits(), so we'll also run that code
 in case the submodule 

Re: [ANNOUNCE] Git v2.20.0-rc2

2018-12-04 Thread Stefan Beller
-cc linux list


> Perhaps we should note this more prominently, and since Brandon isn't at
> Google anymore can some of you working there edit this post? It's the
> first Google result for "git protocol v2", so it's going to be quite
> confusing for people if after 2.20 the instructions in it no longer
> work.

Thanks for pointing this out, we missed the implications when that patch was
discussed. Looking into it.
>
> 1. 
> https://opensource.googleblog.com/2018/05/introducing-git-protocol-version-2.html


Re: [PATCH v3 07/14] checkout: split into switch-branch and restore-files

2018-12-04 Thread Junio C Hamano
Duy Nguyen  writes:

> On Tue, Dec 4, 2018 at 6:43 PM Elijah Newren  wrote:
>> > > > +--ours::
>> > > > +--theirs::
>> ...
>> go away.  Maybe it can still be fixed (I haven't dug too deeply into
>> it), but if so, the only fix needed here would be to remove this long
>> explanation about why the tool gets things totally backward.
>
> Aha. I'm not really deep in this merge business to know if stages 2 and
> 3 can be swapped. This is right up your alley. I'll just leave it to
> you.

Please don't show stage#2 and stage#3 swapped to the end user,
unless that is protected behind an option (not per-repo config).
It is pretty much ingrained that stage#2 is what came from the
commit that will become the first parent of the commit being
prepared, and changing it without an explicit command line option
will break tools.

> I'm actually still not sure how to move it here (I guess 'here' is
> restore-files since we won't move HEAD). All the --mixed, --merge and
> --hard are confusing. But maybe we could just make 'git restore-files
> --from HEAD -f :/" behave just like "git reset --hard HEAD" (but with
> some safety net) But we can leave it for discussion in the next round.

Perhaps you two should pay a bit closer attention to what Thomas
Gummerer is working on.  I've touched above in my earlier comments,
too, e.g.  


Re: [PATCH v3 07/14] checkout: split into switch-branch and restore-files

2018-12-04 Thread Junio C Hamano
Duy Nguyen  writes:

>> My single biggest worry about this whole series is that I'm worried
>> you're perpetuating and even further ingraining one of the biggest
>> usability problems with checkout: people suggest and use it for
>> reverting/restoring paths to a previous version, but it doesn't do
>> that:
>
> ...
>
>  git restore-files --from=master~10 Documentation/

The "single biggest worry" could be due to Elijah not being aware of
other recent discussions.  My understanding of the plan is

 - "git checkout" will learn a new "--[no-]overlay" option, where
   the current behaviour, i.e. "take paths in master~10 that match
   pathspec Documentation/, and overlay them on top of what is in
   the index and the working tree", is explained as "the overlay
   mode" and stays to be the default.  With "checkout --no-overlay
   master~10 Documentation/", the command will become "replace paths
   in the current index and the working tree that match the pathspec
   Documentation/ with paths in master~10 that match pathspec
   Documentation/".

 - "git restore-files --from=<tree-ish> <pathspec>" by default will use
   "--no-overlay" semantics, but the users can still use "--overlay"
   from the command line as an option.

So "restore-files" would become truly "restore the state of
Documentation/ to match that of master~10", I would think.

>> Also, the fact that we're trying to make a simpler command makes me
>> think that removing the auto-vivify behavior from the default and
>> adding a simple flag which users can pass to request will allow this
>> part of the documentation to be hidden behind the appropriate flag,
>> which may make it easier for users to compartmentalize the command and
>> it's options, enabling them to learn as they go.
>
> Sounds good. I don't know a good name for this new option though so
> unless anybody comes up with some suggestion, I'll just disable
> checkout.defaultRemote in switch-branch. If it comes back as a new
> option, it can always be added later.

Are you two discussing the "checkout --guess" option?  I am somewhat
lost here.

>> > +-f::
>> > +--force::
>> > +   Proceed even if the index or the working tree differs from
>> > +   HEAD.  This is used to throw away local changes.
>>
>> Haven't thought through this thoroughly, but do we really need an
>> option for that instead of telling users to 'git reset --hard HEAD'
>> before switching branches if they want their stuff thrown away?
>
> For me it's just a bit more convenient. Hit an error when switching
> branch? Recall the command from bash history, stick -f in it and run.
> Elsewhere I think both Junio and Thomas (or maybe only Junio) suggests
> moving the "git reset" functionality without moving HEAD to one of
> these commands, which goes the opposite direction...

Isn't there a huge difference?  "checkout --force <branch>"
needs to clobber only the changes that are involved in the switch,
i.e. if your README.txt is the same between master and maint while
Makefile is different, after editing both files while on master, you
can not "switch-branch" to maint without doing something to Makefile
(i.e. either discard your local change or wiggle your local change
to the context of 'maint' with "checkout -m").  But you can carry
the changes to README.txt while checking out 'maint' branch.
Running "git reset --hard HEAD" would mean that you will lose the
changes to README.txt as well.
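The difference is easy to see in a small sandbox: a branch switch carries uncommitted edits to files that are identical between the two branches, while "git reset --hard" would discard those too. All file and branch names below are invented:

```shell
#!/bin/sh
# Demonstrates the two cases from the message above: an edit to a shared
# file rides along on a branch switch, while an edit to a file that
# differs between branches blocks the switch.
set -e
rm -rf /tmp/switch-demo && git init -q /tmp/switch-demo && cd /tmp/switch-demo
git config user.email demo@example.com && git config user.name Demo
echo readme > README.txt
echo 'all:' > Makefile
git add . && git commit -qm base
start=$(git symbolic-ref --short HEAD)
git branch maint                  # maint and $start agree on README.txt
echo 'all: prog' > Makefile
git commit -qam 'Makefile change only on the main branch'
echo 'local edit' >> README.txt   # uncommitted edit to the shared file
git checkout -q maint             # succeeds; the README.txt edit rides along
grep -q 'local edit' README.txt && echo 'README edit survived the switch'
echo 'local hack' >> Makefile     # Makefile *differs* between the branches
if ! git checkout -q "$start" 2>/dev/null; then
	echo 'switch blocked by the Makefile edit'
fi
```

A "git reset --hard HEAD" at any point here would have thrown away the README.txt edit along with the Makefile one, which is the distinction being made.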

>> > +--orphan <new-branch>::
>> > +   Create a new 'orphan' branch, named <new-branch>, started from
>> > +   <start-point> and switch to it.  The first commit made on this
>>
>> What??  started from <start-point>?  The whole point of --orphan is
>> you have no parent, i.e. no start point.  Also, why does the
>> explanation reference an argument that wasn't in the immediately
>> preceding synopsis?
>
> I guess bad phrasing. It should be "switch to <start-point> first,
> then prepare the worktree so that the first commit will have no
> parent". Or something along that line.

It should be a <tree-ish>, no?  It is not a "point" in history, but
is "start with this tree".

I may have more comments on this message but that's it from me for
now.


Re: [PATCH 9/9] fetch: try fetching submodules if needed objects were not fetched

2018-12-04 Thread Jonathan Tan
> Try fetching a submodule by object id if the object id that the
> superproject points to, cannot be found.

Mention here the consequences of what happens when this attempt to fetch
fails. Also, this seems to be a case of "do or do not, there is no try"
- maybe it's better to say "Fetch commits by ID from the submodule's
origin if the submodule doesn't already contain the commit that the
superproject references" (note that there is no "Try" word, since the
consequence is to fail the entire command).

Also mention that this fetch is always from the origin.

> builtin/fetch used to only inspect submodules when they were fetched
> "on-demand", as in either on/off case it was clear whether the submodule
> needs to be fetched. However to know whether we need to try fetching the
> object ids, we need to identify the object names, which is done in this
> function check_for_new_submodule_commits(), so we'll also run that code
> in case the submodule recursion is set to "on".
> 
> The submodule checks were done only when a ref in the superproject
> changed, these checks were extended to also be performed when fetching
> into FETCH_HEAD for completeness, and add a test for that too.

"inspect submodules" and "submodule checks" are unnecessarily vague to
me - might be better to just say "A list of new submodule commits are
already generated in certain conditions (by
check_for_new_submodule_commits()); this new feature invokes that
function in more situations".

> - if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
> - (recurse_submodules != RECURSE_SUBMODULES_ON))
> - check_for_new_submodule_commits(&ref->new_oid);
>   r = s_update_ref(msg, ref, 0);
>   format_display(display, r ? '!' : '*', what,
>  r ? _("unable to update local ref") : NULL,
> @@ -779,9 +776,6 @@ static int update_local_ref(struct ref *ref,
>   strbuf_add_unique_abbrev(&quickref, &current->object.oid, DEFAULT_ABBREV);
>   strbuf_addstr(&quickref, "..");
>   strbuf_add_unique_abbrev(&quickref, &ref->new_oid, DEFAULT_ABBREV);
> - if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
> - (recurse_submodules != RECURSE_SUBMODULES_ON))
> - check_for_new_submodule_commits(&ref->new_oid);
>   r = s_update_ref("fast-forward", ref, 1);
>   format_display(display, r ? '!' : ' ', quickref.buf,
>  r ? _("unable to update local ref") : NULL,
> @@ -794,9 +788,6 @@ static int update_local_ref(struct ref *ref,
>   strbuf_add_unique_abbrev(&quickref, &current->object.oid, DEFAULT_ABBREV);
>   strbuf_addstr(&quickref, "...");
>   strbuf_add_unique_abbrev(&quickref, &ref->new_oid, DEFAULT_ABBREV);
> - if ((recurse_submodules != RECURSE_SUBMODULES_OFF) &&
> - (recurse_submodules != RECURSE_SUBMODULES_ON))
> - check_for_new_submodule_commits(&ref->new_oid);
>   r = s_update_ref("forced-update", ref, 1);
>   format_display(display, r ? '!' : '+', quickref.buf,
>  r ? _("unable to update local ref") : _("forced 
> update"),
> @@ -892,6 +883,8 @@ static int store_updated_refs(const char *raw_url, const char *remote_name,
>   ref->force = rm->peer_ref->force;
>   }
>  
> + if (recurse_submodules != RECURSE_SUBMODULES_OFF)
> + check_for_new_submodule_commits(&rm->old_oid);

As discussed above, indeed, check_for_new_submodule_commits() is now
invoked in more situations (not just when there is a local ref, and also
when recurse_submodules is _ON).

> @@ -1231,8 +1231,14 @@ struct submodule_parallel_fetch {
>   int result;
>  
>   struct string_list changed_submodule_names;
> +
> + /* The submodules to fetch in */
> + struct fetch_task **oid_fetch_tasks;
> + int oid_fetch_tasks_nr, oid_fetch_tasks_alloc;
>  };

Better to document as "Pending fetches by OIDs", I think. (These are not
fetches by default refspec, and are not already in progress.)

> +struct fetch_task {
> + struct repository *repo;
> + const struct submodule *sub;
> + unsigned free_sub : 1; /* Do we need to free the submodule? */
> +
> + /* fetch specific oids if set, otherwise fetch default refspec */
> + struct oid_array *commits;
> +};

I would document this as "Fetch in progress (if callback data) or
pending (if in oid_fetch_tasks in struct submodule_parallel_fetch)".
This potential confusion is why I wanted 2 separate types, as I wrote in
[1].

[1] 
https://public-inbox.org/git/20181026204106.132296-1-jonathanta...@google.com/

> +/**
> + * When a submodule is not defined in .gitmodules, we cannot access it
> + * via the regular submodule-config. Create a fake submodule, which we can
> + * work on.
> + */
> +static const struct submodule *get_non_gitmodules_submodule(const char *path)


Re: [PATCH 8/9] submodule.c: fetch in submodules git directory instead of in worktree

2018-12-04 Thread Jonathan Tan
> Keep the properties introduced in 10f5c52656 (submodule: avoid
> auto-discovery in prepare_submodule_repo_env(), 2016-09-01), by fixating
> the git directory of the submodule.

This is to avoid the autodetection of the Git repository, making it less
error-prone; looks good to me.


Re: [PATCH 7/9] submodule: migrate get_next_submodule to use repository structs

2018-12-04 Thread Jonathan Tan
> We used to recurse into submodules, even if they were broken having
> only an objects directory. The child process executed in the submodule
> would fail though if the submodule was broken. This is tested via
> "fetching submodule into a broken repository" in t5526.
> 
> This patch tightens the check upfront, such that we do not need
> to spawn a child process to find out if the submodule is broken.

Thanks, patches 4-7 look good to me - I see that you have addressed all
my comments. Not sending one email each for patches 4, 5, and 6 -
although I have commented on all of them, my comments were minor.

My more in-depth review was done on a previous version [1], and I see
that my comments have been addressed. Also, Stefan says [2] (and implements
in this patch):

> > > If the working tree directory is empty for that submodule, it means
> > > it is likely not initialized. But why would we use that as a signal to
> > > skip the submodule?
> >
> > What I meant was: if empty, skip it completely. Otherwise, do the
> > repo_submodule_init() and repo_init() thing, and if they both fail, set
> > spf->result to 1, preserving existing behavior.
> 
> I did it the other way round:
> 
> If repo_[submodule_]init fails, see if we have a gitlink in tree and
> an empty dir in the FS, to decide if we need to signal failure.

This works too.

[1] 
https://public-inbox.org/git/20181017225811.66554-1-jonathanta...@google.com/
[2] 
https://public-inbox.org/git/CAGZ79kbNXD35ZwevjLZcrGsT=2hncupmvuwvp1rjsksh0gd...@mail.gmail.com/


Re: [PATCH 3/9] submodule.c: sort changed_submodule_names before searching it

2018-12-04 Thread Jonathan Tan
> We can string_list_insert() to maintain sorted-ness of the
> list as we find new items, or we can string_list_append() to
> build an unsorted list and sort it at the end just once.
> 
> As we do not rely on the sortedness while building the
> list, we pick the "append and sort at the end" as it
> has better worst case execution times.

I would write this entire commit message as:

  Instead of using unsorted_string_list_lookup(), sort
  changed_submodule_names before performing any lookups so that we can
  use the faster string_list_lookup() instead.

The code in this patch is fine, and patches 1-2 are fine too.
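The "append and sort once" argument is the same one users of sort(1) know; a throwaway shell analogue (temporary files only, not Git code) shows the two strategies producing identical results while the incremental variant re-sorts the whole list on every addition:

```shell
#!/bin/sh
# Shell analogue of the commit message's point: keeping a list sorted
# after every insertion does the full sorting work n times; appending
# everything and sorting once does it once.  Small sizes keep it fast.
set -e
cd "$(mktemp -d)"
: > incremental
for i in $(seq 200 -1 1); do
	echo "item-$i" >> incremental
	sort -o incremental incremental   # re-sort after each addition
done
seq 200 -1 1 | sed 's/^/item-/' > appended
sort -o appended appended                 # sort exactly once at the end
cmp -s incremental appended && echo 'same result, far less sorting work'
```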


Re: [PATCH] rebase docs: fix incorrect format of the section Behavioral Differences

2018-12-04 Thread Elijah Newren


On Tue, Dec 4, 2018 at 1:53 PM Johannes Sixt  wrote:
>
> On 04.12.18 at 22:24, Elijah Newren wrote:
> > +  The am backend sometimes does not because it operates on
> > +"fake ancestors" that involve trees containing only the files modified
> > +in the patch.  Without accurate tree information, directory rename
> > +detection cannot safely operate and is thus turned off in the am
> > +backend.
>
> Directory rename detection does not work sometimes. That is, it works
> most of the time. But how can that be the case when it is turned off?

Gah, when I was rebasing on your patch I adopted your sentence rewrite
but forgot to remove the "sometimes".  Thanks for catching; correction:

-- 8< --
Subject: [PATCH v2] git-rebase.txt: update note about directory rename
 detection and am

In commit 6aba117d5cf7 ("am: avoid directory rename detection when
calling recursive merge machinery", 2018-08-29), the git-rebase manpage
probably should have also been updated to note the stronger
incompatibility between git-am and directory rename detection.  Update
it now.

Signed-off-by: Elijah Newren 
---
 Documentation/git-rebase.txt | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/Documentation/git-rebase.txt b/Documentation/git-rebase.txt
index 41631df6e4..ef76cccf3f 100644
--- a/Documentation/git-rebase.txt
+++ b/Documentation/git-rebase.txt
@@ -569,8 +569,12 @@ it to keep commits that started empty.
 Directory rename detection
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The merge and interactive backends work fine with
-directory rename detection.  The am backend sometimes does not.
+The merge and interactive backends work fine with directory rename
+detection.  The am backend does not because it operates on "fake
+ancestors" that involve trees containing only the files modified in
+the patch.  Without accurate tree information, directory rename
+detection cannot safely operate and is thus turned off in the am
+backend.
 
 include::merge-strategies.txt[]
 
-- 
2.20.0.rc1.2.gca0e531e87
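The limitation the patch documents is easiest to see next to what the merge backend can do. A sandbox (all names invented; directory rename detection is forced on explicitly so the outcome does not depend on the merge.directoryRenames default):

```shell
#!/bin/sh
# Upstream renames a directory; the branch being rebased adds a file to
# the old name.  With the merge backend and directory rename detection,
# the new file follows the rename.  The am backend's "fake ancestors"
# cannot support this, as the text above explains.
set -e
rm -rf /tmp/dirrename-demo && git init -q /tmp/dirrename-demo && cd /tmp/dirrename-demo
git config user.email demo@example.com && git config user.name Demo
mkdir olddir && echo a > olddir/a.txt && echo b > olddir/b.txt
git add . && git commit -qm base
base=$(git symbolic-ref --short HEAD)
git checkout -q -b topic
echo new > olddir/new.txt
git add . && git commit -qm 'add file to olddir'
git checkout -q "$base"
git mv olddir newdir && git commit -qm 'rename olddir -> newdir'
git checkout -q topic
git -c merge.directoryRenames=true rebase -q -m "$base"
ls newdir   # new.txt should have followed olddir's rename into newdir
```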

 


Re: [PATCH on sb/more-repo-in-api] revision: use commit graph in get_reference()

2018-12-04 Thread Stefan Beller
On Tue, Dec 4, 2018 at 2:42 PM Jonathan Tan  wrote:
>
> When fetching into a repository, a connectivity check is first made by
> check_exist_and_connected() in builtin/fetch.c that runs:
>
>   git rev-list --objects --stdin --not --all --quiet <(list of objects)
>
> If the client repository has many refs, this command can be slow,
> regardless of the nature of the server repository or what is being
> fetched. A profiler reveals that most of the time is spent in
> setup_revisions() (approx. 60/63), and of the time spent in
> setup_revisions(), most of it is spent in parse_object() (approx.
> 49/60). This is because setup_revisions() parses the target of every ref
> (from "--all"), and parse_object() reads the buffer of the object.
>
> Reading the buffer is unnecessary if the repository has a commit graph
> and if the ref points to a commit (which is typically the case). This
> patch uses the commit graph wherever possible; on my computer, when I
> run the above command with a list of 1 object on a many-ref repository,
> I get a speedup from 1.8s to 1.0s.
>
> Another way to accomplish this effect would be to modify parse_object()
> to use the commit graph if possible; however, I did not want to change
> parse_object()'s current behavior of always checking the object
> signature of the returned object.
>
> Signed-off-by: Jonathan Tan 
> ---
> This is on sb/more-repo-in-api because I'm using the repo_parse_commit()
> function.

This is a mere nicety, not strictly required.
Before we had parse_commit(struct commit *) which would accomplish the
same, (and we'd still have that afterwards as a #define falling back onto
the_repository). As the function get_reference() is not the_repository safe
as it contains a call to is_promisor_object() that is repository
agnostic, I think
it would be fair game to not depend on that series. I am not
complaining, though.

> A colleague noticed this issue when handling a mirror clone.
>
> Looking at the bigger picture, the speed of the connectivity check
> during a fetch might be further improved by passing only the negotiation
> tips (obtained through --negotiation-tip) instead of "--all". This patch
> just handles the low-hanging fruit first.
> ---
>  revision.c | 15 ++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/revision.c b/revision.c
> index b5108b75ab..e7da2c57ab 100644
> --- a/revision.c
> +++ b/revision.c
> @@ -212,7 +212,20 @@ static struct object *get_reference(struct rev_info *revs, const char *name,
>  {
> struct object *object;
>
> -   object = parse_object(revs->repo, oid);
> +   /*
> +* If the repository has commit graphs, repo_parse_commit() avoids
> +* reading the object buffer, so use it whenever possible.
> +*/
> +   if (oid_object_info(revs->repo, oid, NULL) == OBJ_COMMIT) {
> +   struct commit *c = lookup_commit(revs->repo, oid);
> +   if (!repo_parse_commit(revs->repo, c))
> +   object = (struct object *) c;
> +   else
> +   object = NULL;

Would it make sense in this case to rely on parse_object below
instead of assigning NULL? The reason for that would be that
when lookup_commit returns NULL, we would try more broadly.

AFAICT oid_object_info doesn't take advantage of the commit graph,
but just looks up the object header, which is still less than completely
parsing it. Then lookup_commit is overly strict, as it may return
NULL as when there still is a type mismatch (I don't think a mismatch
could happen here, as both rely on just the object store, and not the
commit graph.), so this would be just defensive programming for
the sake of it. I dunno.

struct commit *c;

if (oid_object_info(revs->repo, oid, NULL) == OBJ_COMMIT &&
(c = lookup_commit(revs->repo, oid)) &&
!repo_parse_commit(revs->repo, c))
object = (struct object *) c;
else
object = parse_object(revs->repo, oid);


So with all that said, I still think this is a good patch.

Thanks,
Stefan

> +   } else {
> +   object = parse_object(revs->repo, oid);
> +   }
> +
> if (!object) {
> if (revs->ignore_missing)
> return object;
> --
> 2.19.0.271.gfe8321ec05.dirty
>


[PATCH on sb/more-repo-in-api] revision: use commit graph in get_reference()

2018-12-04 Thread Jonathan Tan
When fetching into a repository, a connectivity check is first made by
check_exist_and_connected() in builtin/fetch.c that runs:

  git rev-list --objects --stdin --not --all --quiet <(list of objects)

If the client repository has many refs, this command can be slow,
regardless of the nature of the server repository or what is being
fetched. A profiler reveals that most of the time is spent in
setup_revisions() (approx. 60/63), and of the time spent in
setup_revisions(), most of it is spent in parse_object() (approx.
49/60). This is because setup_revisions() parses the target of every ref
(from "--all"), and parse_object() reads the buffer of the object.

Reading the buffer is unnecessary if the repository has a commit graph
and if the ref points to a commit (which is typically the case). This
patch uses the commit graph wherever possible; on my computer, when I
run the above command with a list of 1 object on a many-ref repository,
I get a speedup from 1.8s to 1.0s.

Another way to accomplish this effect would be to modify parse_object()
to use the commit graph if possible; however, I did not want to change
parse_object()'s current behavior of always checking the object
signature of the returned object.

Signed-off-by: Jonathan Tan 
---
This is on sb/more-repo-in-api because I'm using the repo_parse_commit()
function.

A colleague noticed this issue when handling a mirror clone.

Looking at the bigger picture, the speed of the connectivity check
during a fetch might be further improved by passing only the negotiation
tips (obtained through --negotiation-tip) instead of "--all". This patch
just handles the low-hanging fruit first.
---
 revision.c | 15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/revision.c b/revision.c
index b5108b75ab..e7da2c57ab 100644
--- a/revision.c
+++ b/revision.c
@@ -212,7 +212,20 @@ static struct object *get_reference(struct rev_info *revs, const char *name,
 {
struct object *object;
 
-   object = parse_object(revs->repo, oid);
+   /*
+* If the repository has commit graphs, repo_parse_commit() avoids
+* reading the object buffer, so use it whenever possible.
+*/
+   if (oid_object_info(revs->repo, oid, NULL) == OBJ_COMMIT) {
+   struct commit *c = lookup_commit(revs->repo, oid);
+   if (!repo_parse_commit(revs->repo, c))
+   object = (struct object *) c;
+   else
+   object = NULL;
+   } else {
+   object = parse_object(revs->repo, oid);
+   }
+
if (!object) {
if (revs->ignore_missing)
return object;
-- 
2.19.0.271.gfe8321ec05.dirty
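The speedup described in the patch depends on a commit graph being present; writing one by hand is cheap. A minimal sketch (repository path and commit messages invented):

```shell
#!/bin/sh
# Generate the commit-graph file that the patched get_reference() can
# consult instead of parsing each ref target's object buffer.
set -e
rm -rf /tmp/cg-demo && git init -q /tmp/cg-demo && cd /tmp/cg-demo
git config user.email demo@example.com && git config user.name Demo
for i in 1 2 3; do echo "$i" > f && git add f && git commit -qm "c$i"; done
git commit-graph write --reachable   # writes .git/objects/info/commit-graph
git config core.commitGraph true     # let readers such as rev-list use it
ls .git/objects/info/
```

Later Git versions also grew gc.writeCommitGraph and fetch.writeCommitGraph to keep this file up to date automatically.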



Re: [PATCH] rebase docs: fix incorrect format of the section Behavioral Differences

2018-12-04 Thread Johannes Sixt

On 04.12.18 at 22:24, Elijah Newren wrote:

+  The am backend sometimes does not because it operates on
+"fake ancestors" that involve trees containing only the files modified
+in the patch.  Without accurate tree information, directory rename
+detection cannot safely operate and is thus turned off in the am
+backend.


Directory rename detection does not work sometimes. That is, it works 
most of the time. But how can that be the case when it is turned off?


-- Hannes


Re: [PATCH 8/9] sha1-file: use loose object cache for quick existence check

2018-12-04 Thread René Scharfe
On 03.12.2018 at 23:04, Jeff King wrote:
> On Sun, Dec 02, 2018 at 11:52:50AM +0100, René Scharfe wrote:
> 
>>> And for mu.git, a ~20k object repo:
>>>
>>> Test                                             origin/master     peff/jk/loose-cache       avar/check-collisions-config
>>> --------------------------------------------------------------------------------------------------------------------------
>>> 0008.2: index-pack with 256*1 loose objects      0.59(0.91+0.06)   0.58(0.93+0.03) -1.7%     0.57(0.89+0.04) -3.4%
>>> 0008.3: index-pack with 256*10 loose objects     0.59(0.91+0.07)   0.59(0.92+0.03) +0.0%     0.57(0.89+0.03) -3.4%
>>> 0008.4: index-pack with 256*100 loose objects    0.59(0.91+0.05)   0.81(1.13+0.04) +37.3%    0.58(0.91+0.04) -1.7%
>>> 0008.5: index-pack with 256*250 loose objects    0.59(0.91+0.05)   1.23(1.51+0.08) +108.5%   0.58(0.91+0.04) -1.7%
>>> 0008.6: index-pack with 256*500 loose objects    0.59(0.90+0.06)   1.96(2.20+0.12) +232.2%   0.58(0.91+0.04) -1.7%
>>> 0008.7: index-pack with 256*750 loose objects    0.59(0.92+0.05)   2.72(2.92+0.17) +361.0%   0.58(0.90+0.04) -1.7%
>>> 0008.8: index-pack with 256*1000 loose objects   0.59(0.90+0.06)   3.50(3.67+0.21) +493.2%   0.57(0.90+0.04) -3.4%
>>
>> OK, here's another theory: The cache scales badly with increasing
>> numbers of loose objects because it sorts the array 256 times as it is
>> filled.  Loading it fully and sorting once would help, as would using
>> one array per subdirectory.
> 
> Yeah, that makes sense. This was actually how I had planned to do it
> originally, but then I ended up just reusing the existing single-array
> approach from the abbrev code.
> 
> I hadn't actually thought about the repeated sortings (but that
> definitely makes sense that they would hurt in these pathological
> cases), but more just that we get a 256x reduction in N for our binary
> search (in fact we already do this first-byte lookup-table trick for
> pack index lookups).

Skipping eight steps in a binary search is something, but it's faster
even without that.

Just realized that the demo code can use "lookup" instead of the much
more expensive "for_each_unique" to sort.  D'oh!  With that change:

  for command in '
  foreach (0..255) {
$subdir = sprintf("%02x", $_);
foreach (1..$ARGV[0]) {
  printf("append %s%038d\n", $subdir, $_);
}
# intermediate sort
print "lookup " . "0" x 40 . "\n";
  }
' '
  foreach (0..255) {
$subdir = sprintf("%02x", $_);
foreach (1..$ARGV[0]) {
  printf("append %s%038d\n", $subdir, $_);
}
  }
  # sort once at the end
  print "lookup " . "0" x 40 . "\n";
' '
  foreach (0..255) {
$subdir = sprintf("%02x", $_);
foreach (1..$ARGV[0]) {
  printf("append %s%038d\n", $subdir, $_);
}
# sort each subdirectory separately
print "lookup " . "0" x 40 . "\n";
print "clear\n";
  }
'
  do
time perl -e "$command" 1000 | t/helper/test-tool sha1-array | wc -l
  done

And the results make the scale of the improvement more obvious:

  256

  real0m3.476s
  user0m3.466s
  sys 0m0.099s
  1

  real0m0.157s
  user0m0.148s
  sys 0m0.046s
  256

  real0m0.117s
  user0m0.116s
  sys 0m0.051s

> Your patch looks good to me. We may want to do one thing on top:
> 
>> diff --git a/object-store.h b/object-store.h
>> index 8dceed0f31..ee67a50980 100644
>> --- a/object-store.h
>> +++ b/object-store.h
>> @@ -20,7 +20,7 @@ struct object_directory {
>>   * Be sure to call odb_load_loose_cache() before using.
>>   */
>>  char loose_objects_subdir_seen[256];
>> -struct oid_array loose_objects_cache;
>> +struct oid_array loose_objects_cache[256];
> 
> The comment in the context there is warning callers to remember to load
> the cache first. Now that we have individual caches, might it make sense
> to change the interface a bit, and make these members private. I.e.,
> something like:
> 
>   struct oid_array *odb_loose_cache(struct object_directory *odb,
>                                     int subdir_nr)
>   {
>           if (!odb->loose_objects_subdir_seen[subdir_nr])
>                   odb_load_loose_cache(odb, subdir_nr); /* or just inline it here */
> 
>           return &odb->loose_objects_cache[subdir_nr];
>   }

Sure.  And it should take an object_id pointer instead of a subdir_nr --
less duplication, nicer interface (patch below).

It would be nice if it could return a const pointer to discourage
messing up the cache, but that's not compatible with oid_array_lookup().

And quick_has_loose() should be converted to object_id as well -- adding
a function that takes a SHA-1 is a regression. :D

René

---
 object-store.h |  8 
 sha1-file.c| 19 ---
 sha1-name.c|  4 +---
 3 files changed, 13 insertions(+), 18 deletions(-)


Re: [PATCH] rebase docs: fix incorrect format of the section Behavioral Differences

2018-12-04 Thread Elijah Newren
On Mon, 3 Dec 2018, Johannes Sixt wrote:

> The text body of section Behavioral Differences is typeset as code,
> but should be regular text. Remove the indentation to achieve that.
> 
> While here, prettify the language:
> 
> - use "the x backend" instead of "x-based rebase";
> - use present tense instead of future tense;
> 
> and use subsections instead of a list.
> 
> Signed-off-by: Johannes Sixt 
> ---
> Cf. https://git-scm.com/docs/git-rebase#_behavioral_differences
> 
> I actually did not test the result, because I don't have the
> infrastructure.
> 
> The sentence "The am backend sometimes does not" is not very useful,
> but is not my fault ;) It would be great if it could be made more
> specific, but I do not know the details.

That sentence is my fault.  I've been sitting on a patch for about a
week that fixes it which I was going to submit after 2.20.0, but since
you're fixing this text up right now, I guess putting these two patches
together makes sense.  I've rebased it on top of your commit and provided
it below.

-- 8< --
Subject: [PATCH] git-rebase.txt: update note about directory rename detection
 and am

In commit 6aba117d5cf7 ("am: avoid directory rename detection when
calling recursive merge machinery", 2018-08-29), the git-rebase manpage
probably should have also been updated to note the stronger
incompatibility between git-am and directory rename detection.  Update
it now.

Signed-off-by: Elijah Newren 
---
 Documentation/git-rebase.txt | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/Documentation/git-rebase.txt b/Documentation/git-rebase.txt
index 41631df6e4..b220b8b2b6 100644
--- a/Documentation/git-rebase.txt
+++ b/Documentation/git-rebase.txt
@@ -569,8 +569,12 @@ it to keep commits that started empty.
 Directory rename detection
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The merge and interactive backends work fine with
-directory rename detection.  The am backend sometimes does not.
+The merge and interactive backends work fine with directory rename
+detection.  The am backend sometimes does not because it operates on
+"fake ancestors" that involve trees containing only the files modified
+in the patch.  Without accurate tree information, directory rename
+detection cannot safely operate and is thus turned off in the am
+backend.
 
 include::merge-strategies.txt[]
 
-- 
2.20.0.rc1.2.gca0e531e87



Re: [PATCH/RFC v3 00/14] Introduce new commands switch-branch and restore-files

2018-12-04 Thread Eric Sunshine
On Tue, Dec 4, 2018 at 11:28 AM Duy Nguyen  wrote:
> Haven't really worked on killing the term "detached HEAD" yet. But I
> noticed the other day that git-branch reports
>
> * (HEAD detached from 703266f6e4)
>
> and I didn't know how to rephrase that. I guess "unnamed branch from
> 703266f6e4" is probably good enough but my old-timer brain screams no.

"git worktree add" and "git worktree show" also report similar messages.


Re: [WIP RFC 5/5] upload-pack: send part of packfile response as uri

2018-12-04 Thread Stefan Beller
On Mon, Dec 3, 2018 at 3:38 PM Jonathan Tan  wrote:
>
> This is a partial implementation of upload-pack sending part of its
> packfile response as URIs.

It does so by implementing a new flag `--exclude-configured-blobs`
in pack-objects, which would change the output of pack-objects to
output a list of URLs (of the excluded blobs) followed by the
pack to be asked for.

This design seems easy to implement as then upload-pack
can just parse the output and only needs to insert
"packfile" and "packfile-uris\n" at the appropriate places
of the stream, otherwise it just passes through the information
obtained from pack-objects.

The design as-is would make for hard documentation of
pack-objects (its output is not just a pack anymore when that
flag is given, but a highly optimized byte stream).

Initially I did not anticipate this to be one of the major design problems
as I assumed we'd want to use this pack feature over broadly (e.g.
eventually by offloading most of the objects into a base pack that
is just always included as the likelihood for any object in there is
very high on initial clone), but it makes total sense to only
output the URIs that we actually need.

An alternative that comes very close to the current situation
would be to either pass a file path or file descriptor (that upload-pack
listens to in parallel) to pack-objects as an argument of the new flag.
Then we would not need to splice the protocol sections into the single
output stream, but we could announce the sections, then flush
the URIs and then flush the pack afterwards.

I looked at this quickly, but that would need either extensions in
run-command.c for setting up the new fd for us, as there we already
have OS specific code for these setups, or we'd have to duplicate
some of the logic here, which doesn't enthuse me either.

So maybe we'd create a temp file via mkstemp and pass
the file name to pack-objects for writing the URIs and then
we'd just need to stream that file?
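The tempfile variant can be mocked up without touching Git at all. Everything below, including the producer function and the flag it stands in for, is hypothetical; the point is only the splicing pattern (URIs go sideways into a temp file, the pack streams to stdout, and the consumer stitches the sections together):

```shell
#!/bin/sh
# Hypothetical sketch of the tempfile idea: a stand-in for
# "pack-objects --write-uris-to=<file>" writes the URI list sideways
# while the pack data goes to stdout; the upload-pack side then emits
# the "packfile-uris" section, the URIs, and finally the pack.
set -e
urifile=$(mktemp)
produce_pack() {
	echo 'https://example.com/base.pack' >"$urifile"   # URIs go sideways
	printf 'PACK-DATA'                                 # pack goes to stdout
}
packdata=$(produce_pack)
{
	echo 'packfile-uris'      # announce the section, then flush the URIs
	cat "$urifile"
	echo 'packfile'           # then the pack itself
	printf '%s\n' "$packdata"
} > /tmp/spliced-response
cat /tmp/spliced-response
rm -f "$urifile"
```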

> +   # NEEDSWORK: "git clone" fails here because it ignores the URI provided
> +   # instead of fetching it.
> +   test_must_fail env GIT_TRACE_PACKET="$(pwd)/log" \
> +   git -c protocol.version=2 clone \
> +   "$HTTPD_URL/smart/http_parent" http_child 2>err &&
> +   # Although "git clone" fails, we can still check that the server
> +   # provided the URI we requested and that the error message pinpoints
> +   # the object that is missing.
> +   grep "clone< uri https://example.com/a-uri" log &&
> +   test_i18ngrep "did not receive expected object $(cat h)" err

That is a nice test!


Re: [WIP RFC 2/5] Documentation: add Packfile URIs design doc

2018-12-04 Thread Jonathan Tan
> Some thoughts here:
> 
> First, I'd like to see a section (and a bit in the implementation)
> requiring HTTPS if the original protocol is secure (SSH or HTTPS).
> Allowing the server to downgrade to HTTP, even by accident, would be a
> security problem.
> 
> Second, this feature likely should be opt-in for SSH. One issue I've
> seen repeatedly is that people don't want to use HTTPS to fetch things
> when they're using SSH for Git. Many people in corporate environments
> have proxies that break HTTP for non-browser use cases[0], and using SSH
> is the only way that they can make a functional Git connection.

Good points about SSH support and the client needing to control which
protocols the server will send URIs for. I'll include a line in the
client request in which the client can specify which protocols it is OK
with.

> Third, I think the server needs to be required to both support Range
> headers and never change the content of a URI, so that we can have
> resumable clone implicit in this design. There are some places in the
> world where connections are poor and fetching even the initial packfile
> at once might be a problem. (I've seen such questions on Stack
> Overflow, for example.)

Good points. I'll add these in the next revision.

> Having said that, I think overall this is a good idea and I'm glad to
> see a proposal for it.

Thanks, and thanks for your comments too.


Re: Simple git push --tags deleted all branches

2018-12-04 Thread Mateusz Loskot
On Thu, 29 Nov 2018 at 16:39, Ævar Arnfjörð Bjarmason  wrote:
> On Thu, Nov 29 2018, Mateusz Loskot wrote:
> > On Thu, 29 Nov 2018 at 16:03, Ævar Arnfjörð Bjarmason  
> > wrote:
> >> On Wed, Nov 28 2018, Mateusz Loskot wrote:
> >> >
> >> > (using git version 2.19.2.windows.1)
> >> >
> >> > I've just encountered one of those WTH moments.
> >> >
> >> > I have a bare repository
> >> >
> >> > core.git (BARE:master) $ git branch
> >> >   1.0
> >> >   2.0
> >> > * master
> >> >
> >> > core.git (BARE:master) $ git tag
> >> > 1.0.1651
> >> > 1.0.766
> >> > 2.0.1103
> >> > 2.0.1200
> >> >
> >> > I published the repo using: git push --all --follow-tags
> >> >
> >> > This succeeded, but there seem to be no tags pushed, just branches.
> >> > So, I followed with
> >> >
> >> > core.git (BARE:master) $ git push --tags
> >> > To XXX
> >> >  - [deleted]   1.0
> >> >  - [deleted]   2.0
> >> >  ! [remote rejected]   master (refusing to delete the current
> >> > branch: refs/heads/master)
> >> > error: failed to push some refs to 'XXX'
> >> >
> >> > And, I've found out that all branches and tags have been
> >> > wiped in both, local repo and remote :)
> >> >
> >> > I restored the repo and tried out
> >> >
> >> > git push origin 1.0
> >> > git push origin --tags
> >> >
> >> > and this time both succeeded, without wiping out any refs.
> >> >
> >> > Could anyone help me to understand why remote-less
> >> >
> >> > git push --tags
> >> >
> >> > is/was so dangerous and unforgiving?!
> >>
> >> Since nobody's replied yet, I can't see what's going on here from the
> >> info you've provided. My guess is that you have something "mirror" set
> >> on the remote.
> >
> > Thank you for responding.
> >
> > The git push --tags hugely surprised me, and here is genuine screenshot
> > https://twitter.com/mloskot/status/1068072285846859776
> >
> >> It seems you can't share the repo or its URL, but could you share the
> >> scrubbed output of 'git config -l --show-origin' when run inside this
> >> repository?
> >
> > Here is complete output. I have not stripped the basics like aliases,
> > just in case.
>
> Right, it's because you used --mirror, the important bit:
>
> > file:config remote.origin.url=https://xxx.com/core-external-metadata.git
> > file:config remote.origin.fetch=+refs/*:refs/*
> > file:config remote.origin.mirror=true
> > file:config
>
> I.e. you have cloned with the --mirror flag, this is what it's supposed
> to do: https://git-scm.com/docs/git-clone#git-clone---mirror
> https://git-scm.com/docs/git-fetch#git-fetch---prune
>
> I.e. you push and git tries to mirror the refs you have locally to the
> remote, including pruning stuff in the remote.

Thank you very much for diagnosing my issue.
I was not aware about how --mirror affects the workflow.
It all makes perfect sense now.
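For anyone else hitting this, the pitfall is easy to reproduce in a throwaway setup (the commands below are a sketch; the repository names are made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# A mirror clone records remote.origin.mirror=true, so every later
# 'git push' mirrors the local refs to the remote, deleting whatever
# the remote has that the local side doesn't.
git init -q --bare upstream.git
git clone -q --mirror upstream.git local.git
cd local.git
git config --get remote.origin.mirror    # prints: true

# Turning it back into a normal remote stops plain pushes from pruning:
git config --unset remote.origin.mirror
git config remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
```

After this, `git push --tags` only pushes tags instead of mirroring (and pruning) everything.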

Best regards,
-- 
Mateusz Loskot, http://mateusz.loskot.net


Re: [PATCH/RFC v3 00/14] Introduce new commands switch-branch and restore-files

2018-12-04 Thread Duy Nguyen
On Tue, Dec 4, 2018 at 7:31 PM Elijah Newren  wrote:
>
> On Tue, Dec 4, 2018 at 10:22 AM Duy Nguyen  wrote:
> >
> > On Tue, Dec 4, 2018 at 6:45 PM Elijah Newren  wrote:
> > > > > > - Two more fancy features (the "git checkout --index" being the
> > > > > >   default mode and the backup log for accidental overwrites) are of
> > > > > >   course still missing. But they are coming.
> > > > > >
> > > > > > I did not go replace "detached HEAD" with "unnamed branch" (or "no
> > > > > > branch") everywhere because I think a unique term is still good to
> > > > > > refer to this concept. Or maybe "no branch" is good enough. I dunno.
> > > > >
> > > > > I personally like "unnamed branch", but "no branch" would still be
> > > > > better than "detached HEAD".
> > > >
> > > > Haven't really worked on killing the term "detached HEAD" yet. But I
> > > > noticed the other day that git-branch reports
> > > >
> > > > * (HEAD detached from 703266f6e4)
> > > >
> > > > and I didn't know how to rephrase that. I guess "unnamed branch from
> > > > 703266f6e4" is probably good enough but my old-timer brain screams no.
> > >
> > > Perhaps "* (On an unnamed branch, at 703266f6e4)"?
> >
> > This 703266f6e4 is the fork point. Once you start adding more commits
> > on top of this unnamed branch, I find it hard to define it "at"
> > 703266f6e4 anymore. "forked from 703266f6e4" (or even starting/growing
> > from...) is probably clearest but also a bit longer.
>
> It reports the fork point rather than the commit HEAD points to?  Ah,
> I guess I never paid that close of attention before.  I actually
> think "on an unnamed branch" is good enough, but if others gain value
> from the extra info, then I understand the conundrum.  I'm not sure
> what the use or rationale is for the fork point, though, so I feel
> slightly at a loss to try to describe this extra piece of info.

It's probably a corner case. This is a better example

* (HEAD detached at pclouds/backup-log)

It does help to see I'm working on top of some branch (or tag)
-- 
Duy


Re: [PATCH/RFC v3 00/14] Introduce new commands switch-branch and restore-files

2018-12-04 Thread Elijah Newren
On Tue, Dec 4, 2018 at 10:22 AM Duy Nguyen  wrote:
>
> On Tue, Dec 4, 2018 at 6:45 PM Elijah Newren  wrote:
> > > > > - Two more fancy features (the "git checkout --index" being the
> > > > >   default mode and the backup log for accidental overwrites) are of
> > > > >   course still missing. But they are coming.
> > > > >
> > > > > I did not go replace "detached HEAD" with "unnamed branch" (or "no
> > > > > branch") everywhere because I think a unique term is still good to
> > > > > refer to this concept. Or maybe "no branch" is good enough. I dunno.
> > > >
> > > > I personally like "unnamed branch", but "no branch" would still be
> > > > better than "detached HEAD".
> > >
> > > Haven't really worked on killing the term "detached HEAD" yet. But I
> > > noticed the other day that git-branch reports
> > >
> > > * (HEAD detached from 703266f6e4)
> > >
> > > and I didn't know how to rephrase that. I guess "unnamed branch from
> > > 703266f6e4" is probably good enough but my old-timer brain screams no.
> >
> > Perhaps "* (On an unnamed branch, at 703266f6e4)"?
>
> This 703266f6e4 is the fork point. Once you start adding more commits
> on top of this unnamed branch, I find it hard to define it "at"
> 703266f6e4 anymore. "forked from 703266f6e4" (or even starting/growing
> from...) is probably clearest but also a bit longer.

It reports the fork point rather than the commit HEAD points to?  Ah,
I guess I never paid that close of attention before.  I actually
think "on an unnamed branch" is good enough, but if others gain value
from the extra info, then I understand the conundrum.  I'm not sure
what the use or rationale is for the fork point, though, so I feel
slightly at a loss to try to describe this extra piece of info.


Re: [PATCH/RFC v3 00/14] Introduce new commands switch-branch and restore-files

2018-12-04 Thread Duy Nguyen
On Tue, Dec 4, 2018 at 6:45 PM Elijah Newren  wrote:
> > > > - Two more fancy features (the "git checkout --index" being the
> > > >   default mode and the backup log for accidental overwrites) are of
> > > >   course still missing. But they are coming.
> > > >
> > > > I did not go replace "detached HEAD" with "unnamed branch" (or "no
> > > > branch") everywhere because I think a unique term is still good to
> > > > refer to this concept. Or maybe "no branch" is good enough. I dunno.
> > >
> > > I personally like "unnamed branch", but "no branch" would still be
> > > better than "detached HEAD".
> >
> > Haven't really worked on killing the term "detached HEAD" yet. But I
> > noticed the other day that git-branch reports
> >
> > * (HEAD detached from 703266f6e4)
> >
> > and I didn't know how to rephrase that. I guess "unnamed branch from
> > 703266f6e4" is probably good enough but my old-timer brain screams no.
>
> Perhaps "* (On an unnamed branch, at 703266f6e4)"?

This 703266f6e4 is the fork point. Once you start adding more commits
on top of this unnamed branch, I find it hard to define it "at"
703266f6e4 anymore. "forked from 703266f6e4" (or even starting/growing
from...) is probably clearest but also a bit longer.
-- 
Duy


Re: [PATCH v3 07/14] checkout: split into switch-branch and restore-files

2018-12-04 Thread Duy Nguyen
On Tue, Dec 4, 2018 at 6:43 PM Elijah Newren  wrote:
> > > > +--ours::
> > > > +--theirs::
> > > > +   Check out stage #2 ('ours') or #3 ('theirs') for unmerged
> > > > +   paths.
> > > > ++
> > > > +Note that during `git rebase` and `git pull --rebase`, 'ours' and
> > > > +'theirs' may appear swapped; `--ours` gives the version from the
> > > > +branch the changes are rebased onto, while `--theirs` gives the
> > > > +version from the branch that holds your work that is being rebased.
> > > > ++
> > > > +This is because `rebase` is used in a workflow that treats the
> > > > +history at the remote as the shared canonical one, and treats the
> > > > +work done on the branch you are rebasing as the third-party work to
> > > > +be integrated, and you are temporarily assuming the role of the
> > > > +keeper of the canonical history during the rebase.  As the keeper of
> > > > +the canonical history, you need to view the history from the remote
> > > > +as `ours` (i.e. "our shared canonical history"), while what you did
> > > > +on your side branch as `theirs` (i.e. "one contributor's work on top
> > > > +of it").
> > >
> > > Total aside because I'm not sure what you could change here, but man
> > > do I hate this.
> >
> > Uh it's actually documented? I'm always confused by this too. --ours
> > and --theirs at this point are pretty much tied to stage 2 and 3.
> > Nothing I can do about it. But if you could come up with some other
> > option names, then we could make "new ours" to be stage 3 during
> > rebase, for example.
>
> I don't think it's a naming issue, personally.  Years ago we could
> have defined --ours and --theirs differently based on which kind of
> operation we were in the middle of, but you are probably right that
> they are now tied to stage 2 and 3.  But there's another place that we
> might still be able to address this; I think the brain-damage here may
> have just been due to the fact that the recursive merge machinery was
> rather inflexible and required HEAD to be stage 2.  If it were a
> little more flexible, then we might just be able to make this problem
> go away.  Maybe it can still be fixed (I haven't dug too deeply into
> it), but if so, the only fix needed here would be to remove this long
> explanation about why the tool gets things totally backward.

Aha. I'm not deep enough in this merge business to know if stages 2 and
3 can be swapped. This is right up your alley. I'll just leave it to
you.
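For the record, the swap is easy to see in a tiny made-up repository (a sketch, assuming the rebase machinery keeps the checked-out "onto" commit in stage #2, which is what the quoted documentation describes):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you

echo base >file && git add file && git commit -qm base
main=$(git symbolic-ref --short HEAD)

git checkout -qb topic
echo topic-version >file && git commit -qam topic

git checkout -q "$main"
echo upstream-version >file && git commit -qam upstream

# Replaying 'topic' onto the updated "$main" stops with a conflict...
git checkout -q topic
git rebase "$main" >/dev/null 2>&1 || true

# ...and "ours" is the upstream side, not the topic branch:
git checkout -q --ours -- file
cat file    # prints: upstream-version
```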

> > > > +'git switch-branch' -c|-C <new-branch> [<start-point>]::
> > > > +
> > > > +   Specifying `-c` causes a new branch to be created as if
> > > > +   linkgit:git-branch[1] were called and then switched to. In
> > > > +   this case you can use the `--track` or `--no-track` options,
> > > > +   which will be passed to 'git branch'.  As a convenience,
> > > > +   `--track` without `-c` implies branch creation; see the
> > > > +   description of `--track` below.
> > >
> > > Can we get rid of --track/--no-track and just provide a flag (which
> > > takes no arguments) for the user to use?  Problems with --track:
> > >   * it's not even in your synopsis
> > >   * user has to repeat themselves (e.g. 'next' in two places from '-c
> > > next --track origin/next'); this repetition is BOTH laborious AND
> > > error-prone
> > >   * it's rather inconsistent: --track is the default due to
> > > auto-vivify when the user specifies nothing but a branch name that
> > > doesn't exist yet, but when the user realizes the branch doesn't exist
> > > yet and asks to have it created then suddenly tracking is not the
> > > default??
> >
> > I don't think --track is default anymore (maybe I haven't updated the
> > man page correctly). The dwim behavior is only activated in
> > switch-branch when you specify --guess to reduce the amount of magic
> > we throw at the user. With that in mind, do we still hide
> > --track/--no-track from switch-branch?
>
> Ooh, you're adding --guess?  Cool, that addresses my concerns, just in
> a different manner.

No it's always there even in git-checkout, just hidden.

> Personally, I'd leave --track/--no-track out.  It's extra mental
> overhead, git branch has options for setting those if they need some
> special non-default setup, and if there is enough demand for it we can
> add it later.  Removing options once published is much harder.

Slightly less convenient (you would need a combination of git-branch
and git-switch-branch, if you avoid git-checkout). But since I don't
think I have ever used this option, I'm fine with leaving it out and
considering adding it back later.

> > > I'm not sure what's best, but here's some food for thought:
> > >
> > >
> > >    git switch-branch <branch>
> > > switches to <branch>, if it exists.  Error cases:
> > >   * If <branch> isn't actually a branch but a <tag> or
> > > <remote-tracking-branch> or <commit>, error out and suggest using
> > > --detach.
> > >   * If <branch> isn't actually a branch but there is a similarly named
> > > <remote-tracking-branch> (e.g. origin/<branch>), then suggest using
> > > -c.
> >
> > I would make 

Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread Ævar Arnfjörð Bjarmason


On Tue, Dec 04 2018, SZEDER Gábor wrote:

> Unfortunately, we have a few flaky tests, whose failures tend to be
> hard to reproduce.  We've found that the best we can do to reproduce
> such a failure is to run the test repeatedly while the machine is
> under load, and wait in the hope that the load creates enough variance
> in the timing of the test's commands that a failure is eventually
> triggered.  I have a command to do that, and I noticed that two other
> contributors have rolled their own scripts to do the same, all
> choosing slightly different approaches.
>
> To help reproduce failures in flaky tests, introduce the '--stress'
> option to run a test script repeatedly in multiple parallel
> invocations until one of them fails, thereby using the test script
> itself to increase the load on the machine.
>
> The number of parallel invocations is determined by, in order of
> precedence: the number specified as '--stress=<N>', or the value of
> the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
> available processors in '/proc/cpuinfo', or 8.
>
> To prevent the several parallel invocations of the same test from
> interfering with each other:
>
>   - Include the parallel job's number in the name of the trash
> directory and the various output files under 't/test-results/' as
> a '.stress-' suffix.
>
>   - Add the parallel job's number to the port number specified by the
> user or to the test number, so even tests involving daemons
> listening on a TCP socket can be stressed.
>
>   - Make '--stress' imply '--verbose-log' and discard the test's
> standard output and error; dumping the output of several parallel
> tests to the terminal would create a big ugly mess.
>
> 'wait' for all parallel jobs before exiting (either because a failure
> was found or because the user lost patience and aborted the stress
> test), allowing the still running tests to finish.  Otherwise the "OK
> X.Y" progress output from the last iteration would likely arrive after
> the user got back the shell prompt, interfering with typing in the
> next command.  OTOH, this waiting might induce a considerable delay
> between hitting ctrl-C and the test actually exiting; I'm not sure
> this is the right tradeoff.

I think it makes sense to generalize this and split it up into two
features.

It's a frequent annoyance of mine in the test suite that I'm
e.g. running t*.sh with some parallel "prove" in one screen, and then I
run tABCD*.sh manually, and get unlucky because they use the same trash
dir, and both tests go boom.

You can fix that with --root, which is much of what this patch does. My
one-liner for doing --stress has been something like:

perl -E 'say ++$_ while 1' | parallel --jobs=100% --halt-on-error soon,fail=1 './t-basic.sh --root=trash-{} -v'

But it would be great if I didn't have to worry about that and could
just run two concurrent:

./t-basic.sh

So I think we could just set some env variable where instead of having
the predictable trash directory we have a $TRASHDIR.$N as this patch
does, except we pick $N by locking some test-runs/tABCD.Nth file with
flock() during the run.

Then a stress mode like this would just be:

GIT_TEST_FLOCKED_TRASH_DIRS=1 perl -E 'say ++$_ while 1' | parallel --jobs=100% --halt-on-error soon,fail=1 './t-basic.sh'

And sure, we could ship a --stress option too, but it wouldn't be
magical in any way, just another "spawn N in a loop" implementation, but
you could also e.g. use GNU parallel to drive it, and without needing to
decide to stress test in advance, since we'd either flock() the trash
dir, or just mktemp(1)-it.
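A rough sketch of the flock()-based slot picking described above (my own illustration, not code from any patch; it assumes the util-linux flock(1) utility, and in test-lib.sh the descriptor would be opened in the main shell so the lock lives for the whole test run):

```shell
set -e
command -v flock >/dev/null   # bail out early where flock(1) is absent
cd "$(mktemp -d)"

# Probe slot files in order and keep the first one we can lock; a
# second concurrent run of the same script would get slot 2, and so on.
pick_trash_slot () {
    name=$1
    nr=1
    mkdir -p test-runs
    while :
    do
        exec 9>"test-runs/$name.$nr.lock"
        if flock --nonblock 9
        then
            echo "$nr"
            return 0
        fi
        nr=$((nr + 1))
    done
}

slot=$(pick_trash_slot t0000-basic)
echo "trash directory.t0000-basic.$slot"    # first run gets slot 1
```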


Re: [PATCH/RFC v3 00/14] Introduce new commands switch-branch and restore-files

2018-12-04 Thread Elijah Newren
On Tue, Dec 4, 2018 at 8:28 AM Duy Nguyen  wrote:
>
> On Tue, Dec 4, 2018 at 2:29 AM Elijah Newren  wrote:
> >
> > On Thu, Nov 29, 2018 at 2:01 PM Nguyễn Thái Ngọc Duy  
> > wrote:
> > >
> > > v3 sees switch-branch go back to switch-branch (in v2 it was
> > > checkout-branch). checkout-files is also renamed restore-files (v1 was
> > > restore-paths). Hopefully we won't see another rename.
> >
> > I started reading through the patches.  I also tried to apply them
> > locally, but they had conflicts or missing base file version on both
> > master and next.  What version did you base it on?
>
> I think nd/checkout-dwim-fix because of a non-trivial conflict there
> (but I don't remember when I noticed it and rebased on that). Anyway
> you can get the whole series at
>
> https://gitlab.com/pclouds/git/tree/switch-branch-and-checkout-files
>
> It fixes some of your comments already, a couple of bug fixes here and
> there and in a good-enough shape that I start actually using it.

Cool.

> > > - Two more fancy features (the "git checkout --index" being the
> > >   default mode and the backup log for accidental overwrites) are of
> > >   course still missing. But they are coming.
> > >
> > > I did not go replace "detached HEAD" with "unnamed branch" (or "no
> > > branch") everywhere because I think a unique term is still good to
> > > refer to this concept. Or maybe "no branch" is good enough. I dunno.
> >
> > I personally like "unnamed branch", but "no branch" would still be
> > better than "detached HEAD".
>
> Haven't really worked on killing the term "detached HEAD" yet. But I
> noticed the other day that git-branch reports
>
> * (HEAD detached from 703266f6e4)
>
> and I didn't know how to rephrase that. I guess "unnamed branch from
> 703266f6e4" is probably good enough but my old-timer brain screams no.

Perhaps "* (On an unnamed branch, at 703266f6e4)"?


Re: [PATCH v3 07/14] checkout: split into switch-branch and restore-files

2018-12-04 Thread Elijah Newren
On Tue, Dec 4, 2018 at 8:22 AM Duy Nguyen  wrote:
>
> Thanks for all the comments! There are still some I haven't replied
> (either I'll agree and do it anyway, or I'll need some more time to
> digest)
>
> On Tue, Dec 4, 2018 at 1:45 AM Elijah Newren  wrote:
> > > +'git restore-files' [--from=<tree-ish>] <pathspec>...
> >
> > Isn't this already inferred by the previous line?  Or was the
> > inclusion of --from on the previous line in error?  Looking at the
> > git-checkout manpage, it looks like you may have just been copying an
> > existing weirdness, but it needs to be fixed.  ;-)
>
> Hehe.
>
> > > +'git restore-files' (-p|--patch) [--from=<tree-ish>] [<pathspec>...]
> > > +
> > > +DESCRIPTION
> > > +---
> > > +Updates files in the working tree to match the version in the index
> > > +or the specified tree.
> > > +
> > > +'git restore-files' [--from=<tree-ish>] <pathspec>...::
> >
> >  and ?  I understand  and ,
> > or  but have no clue why it'd be okay to specify 
> > and  together.  What does that even mean?
> >
> > Also, rather than fixing from  to  or ,
> > perhaps we should just use <revision> here?  (I'm thinking of git
> > rev-parse's "Specifying revisions", which suggests "revisions" as a
> > good substitute for "commit-ish" that isn't quite so weird for new
> > users.)
>
> tree-ish is technically more accurate. But I'm ok with just
> <revision>. If you give it a blob oid then you should get a nice
> explanation what you're doing wrong anyway.

Documenting as <revision> but having it be more general under the hood
and actually accept <tree-ish> sounds good to me.  I just think the
pain of trying to explain <tree-ish> is too much of a hurdle for
users, especially as I expect it to be very unlikely that users will
ever take advantage of it.

> > > +   Overwrite paths in the working tree by replacing with the
> > > +   contents in the index or in the <tree-ish> (most often a
> > > +   commit).  When a <tree-ish> is given, the paths that
> > > +   match the <pathspec> are updated both in the index and in
> > > +   the working tree.
> >
> > Is that the default we really want for this command?  Why do we
> > automatically assume these files are ready for commit?  I understand
> > that it's what checkout did, but I'd find it more natural to only
> > affect the working tree by default.  We can give it an option for
> > affecting the index instead (or perhaps in addition).
>
> Yeah, that behavior of updating the index always bothers me when I use
> it but I seemed to forget when working on this.
>
> > > +--ours::
> > > +--theirs::
> > > +   Check out stage #2 ('ours') or #3 ('theirs') for unmerged
> > > +   paths.
> > > ++
> > > +Note that during `git rebase` and `git pull --rebase`, 'ours' and
> > > +'theirs' may appear swapped; `--ours` gives the version from the
> > > +branch the changes are rebased onto, while `--theirs` gives the
> > > +version from the branch that holds your work that is being rebased.
> > > ++
> > > +This is because `rebase` is used in a workflow that treats the
> > > +history at the remote as the shared canonical one, and treats the
> > > +work done on the branch you are rebasing as the third-party work to
> > > +be integrated, and you are temporarily assuming the role of the
> > > +keeper of the canonical history during the rebase.  As the keeper of
> > > +the canonical history, you need to view the history from the remote
> > > +as `ours` (i.e. "our shared canonical history"), while what you did
> > > +on your side branch as `theirs` (i.e. "one contributor's work on top
> > > +of it").
> >
> > Total aside because I'm not sure what you could change here, but man
> > do I hate this.
>
> Uh it's actually documented? I'm always confused by this too. --ours
> and --theirs at this point are pretty much tied to stage 2 and 3.
> Nothing I can do about it. But if you could come up with some other
> option names, then we could make "new ours" to be stage 3 during
> rebase, for example.

I don't think it's a naming issue, personally.  Years ago we could
have defined --ours and --theirs differently based on which kind of
operation we were in the middle of, but you are probably right that
they are now tied to stage 2 and 3.  But there's another place that we
might still be able to address this; I think the brain-damage here may
have just been due to the fact that the recursive merge machinery was
rather inflexible and required HEAD to be stage 2.  If it were a
little more flexible, then we might just be able to make this problem
go away.  Maybe it can still be fixed (I haven't dug too deeply into
it), but if so, the only fix needed here would be to remove this long
explanation about why the tool gets things totally backward.

> > > +Part of the linkgit:git[1] suite
> >
> >
> > My single biggest worry about this whole series is that I'm worried
> > you're perpetuating and even further ingraining one of the biggest
> > usability problems with checkout: people suggest and use it for
> > reverting/restoring paths to a previous version, but it doesn't do
> > that:
> >
> > git restore-files --from 

Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread SZEDER Gábor
On Tue, Dec 04, 2018 at 06:04:14PM +0100, Ævar Arnfjörð Bjarmason wrote:
> 
> On Tue, Dec 04 2018, SZEDER Gábor wrote:
> 
> > The number of parallel invocations is determined by, in order of
> > precedence: the number specified as '--stress=<N>', or the value of
> > the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
> > available processors in '/proc/cpuinfo', or 8.
> 
> With this series we have at least 3 ways to get this number. First
> online_cpus() in the C code, then Peff's recent `getconf
> _NPROCESSORS_ONLN` in doc-diff, and now this /proc/cpuinfo parsing.
> 
> Perhaps it makes sense to split online_cpus() into a helper to use from
> the shellscripts instead?

I don't think so, but I will update this patch to use 'getconf'
instead.
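Something along these lines, presumably (a sketch of the getconf approach; the fallback value of 8 is my assumption, mirroring the patch's default, not code from any patch):

```shell
# getconf _NPROCESSORS_ONLN is more portable than parsing /proc/cpuinfo;
# fall back to 8 where it is unavailable, then double it for the default
# number of --stress jobs.
cpus=$( { getconf _NPROCESSORS_ONLN 2>/dev/null || echo 8; } )
jobs=$((cpus * 2))
echo "$jobs"
```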



Re: [RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread Ævar Arnfjörð Bjarmason


On Tue, Dec 04 2018, SZEDER Gábor wrote:

> The number of parallel invocations is determined by, in order of
> precedence: the number specified as '--stress=<N>', or the value of
> the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
> available processors in '/proc/cpuinfo', or 8.

With this series we have at least 3 ways to get this number. First
online_cpus() in the C code, then Peff's recent `getconf
_NPROCESSORS_ONLN` in doc-diff, and now this /proc/cpuinfo parsing.

Perhaps it makes sense to split online_cpus() into a helper to use from
the shellscripts instead?


[PATCH 1/3] test-lib: consolidate naming of test-results paths

2018-12-04 Thread SZEDER Gábor
There are two places where we strip off any leading path components
and the '.sh' suffix from the test script's pathname, and there are
two places where we construct the filename of test output files in
't/test-results/'.  The last patch in this series will add even more.

Factor these out into helper variables to avoid repeating ourselves.

Signed-off-by: SZEDER Gábor 
---
 t/test-lib.sh | 21 ++---
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 0f1faa24b2..49e4563405 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -71,6 +71,9 @@ then
exit 1
 fi
 
+TEST_NAME="$(basename "$0" .sh)"
+TEST_RESULTS_BASE="$TEST_OUTPUT_DIRECTORY/test-results/$TEST_NAME"
+
 # if --tee was passed, write the output not only to the terminal, but
 # additionally to the file test-results/$BASENAME.out, too.
 case "$GIT_TEST_TEE_STARTED, $* " in
@@ -78,12 +81,11 @@ done,*)
# do not redirect again
;;
 *' --tee '*|*' --va'*|*' -V '*|*' --verbose-log '*)
-   mkdir -p "$TEST_OUTPUT_DIRECTORY/test-results"
-   BASE="$TEST_OUTPUT_DIRECTORY/test-results/$(basename "$0" .sh)"
+   mkdir -p "$(dirname "$TEST_RESULTS_BASE")"
 
# Make this filename available to the sub-process in case it is using
# --verbose-log.
-   GIT_TEST_TEE_OUTPUT_FILE=$BASE.out
+   GIT_TEST_TEE_OUTPUT_FILE=$TEST_RESULTS_BASE.out
export GIT_TEST_TEE_OUTPUT_FILE
 
# Truncate before calling "tee -a" to get rid of the results
@@ -91,8 +93,8 @@ done,*)
>"$GIT_TEST_TEE_OUTPUT_FILE"
 
(GIT_TEST_TEE_STARTED=done ${TEST_SHELL_PATH} "$0" "$@" 2>&1;
-echo $? >"$BASE.exit") | tee -a "$GIT_TEST_TEE_OUTPUT_FILE"
-   test "$(cat "$BASE.exit")" = 0
+echo $? >"$TEST_RESULTS_BASE.exit") | tee -a "$GIT_TEST_TEE_OUTPUT_FILE"
+   test "$(cat "$TEST_RESULTS_BASE.exit")" = 0
exit
;;
 esac
@@ -818,12 +820,9 @@ test_done () {
 
if test -z "$HARNESS_ACTIVE"
then
-   test_results_dir="$TEST_OUTPUT_DIRECTORY/test-results"
-   mkdir -p "$test_results_dir"
-   base=${0##*/}
-   test_results_path="$test_results_dir/${base%.sh}.counts"
+   mkdir -p "$(dirname "$TEST_RESULTS_BASE")"
 
-   cat >"$test_results_path" <<-EOF
+   cat >"$TEST_RESULTS_BASE.counts" <<-EOF
total $test_count
success $test_success
fixed $test_fixed
@@ -1029,7 +1028,7 @@ then
 fi
 
 # Test repository
-TRASH_DIRECTORY="trash directory.$(basename "$0" .sh)"
+TRASH_DIRECTORY="trash directory.$TEST_NAME"
 test -n "$root" && TRASH_DIRECTORY="$root/$TRASH_DIRECTORY"
 case "$TRASH_DIRECTORY" in
 /*) ;; # absolute path is good
-- 
2.20.0.rc2.156.g5a9fd2ce9c



[RFC PATCH 3/3] test-lib: add the '--stress' option to run a test repeatedly under load

2018-12-04 Thread SZEDER Gábor
Unfortunately, we have a few flaky tests, whose failures tend to be
hard to reproduce.  We've found that the best we can do to reproduce
such a failure is to run the test repeatedly while the machine is
under load, and wait in the hope that the load creates enough variance
in the timing of the test's commands that a failure is eventually
triggered.  I have a command to do that, and I noticed that two other
contributors have rolled their own scripts to do the same, all
choosing slightly different approaches.

To help reproduce failures in flaky tests, introduce the '--stress'
option to run a test script repeatedly in multiple parallel
invocations until one of them fails, thereby using the test script
itself to increase the load on the machine.

The number of parallel invocations is determined by, in order of
precedence: the number specified as '--stress=<N>', or the value of
the GIT_TEST_STRESS_LOAD environment variable, or twice the number of
available processors in '/proc/cpuinfo', or 8.

To prevent the several parallel invocations of the same test from
interfering with each other:

  - Include the parallel job's number in the name of the trash
directory and the various output files under 't/test-results/' as
a '.stress-' suffix.

  - Add the parallel job's number to the port number specified by the
user or to the test number, so even tests involving daemons
listening on a TCP socket can be stressed.

  - Make '--stress' imply '--verbose-log' and discard the test's
standard output and error; dumping the output of several parallel
tests to the terminal would create a big ugly mess.

'wait' for all parallel jobs before exiting (either because a failure
was found or because the user lost patience and aborted the stress
test), allowing the still running tests to finish.  Otherwise the "OK
X.Y" progress output from the last iteration would likely arrive after
the user got back the shell prompt, interfering with typing in the
next command.  OTOH, this waiting might induce a considerable delay
between hitting ctrl-C and the test actually exiting; I'm not sure
this is the right tradeoff.

Based on Jeff King's 'stress' script.

Signed-off-by: SZEDER Gábor 
---
 t/README| 13 ++-
 t/test-lib-functions.sh |  7 +++-
 t/test-lib.sh   | 82 +++--
 3 files changed, 96 insertions(+), 6 deletions(-)

diff --git a/t/README b/t/README
index 28711cc508..9851de25c2 100644
--- a/t/README
+++ b/t/README
@@ -186,6 +186,16 @@ appropriately before running "make".
this feature by setting the GIT_TEST_CHAIN_LINT environment
variable to "1" or "0", respectively.
 
+--stress::
+--stress=<N>::
+   Run the test script repeatedly in multiple parallel
+   invocations until one of them fails.  Useful for reproducing
+   rare failures in flaky tests.  The number of parallel
+   invocations is, in order of precedence: <N>, or the value of
+   the GIT_TEST_STRESS_LOAD environment variable, or twice the
+   number of available processors in '/proc/cpuinfo', or 8.
+   Implies `--verbose-log`.
+
 You can also set the GIT_TEST_INSTALLED environment variable to
 the bindir of an existing git installation to test that installation.
 You still need to have built this git sandbox, from which various
@@ -425,7 +435,8 @@ This test harness library does the following things:
  - Creates an empty test directory with an empty .git/objects database
and chdir(2) into it.  This directory is 't/trash
directory.$test_name_without_dotsh', with t/ subject to change by
-   the --root option documented above.
+   the --root option documented above, and a '.stress-' suffix
+   appended by the --stress option.
 
  - Defines standard test helper functions for your scripts to
use.  These functions are designed to make all scripts behave
diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index d9a602cd0f..9af11e3eed 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -1288,8 +1288,6 @@ test_set_port () {
# root-only port, use a larger one instead.
port=$(($port + 1))
fi
-
-   eval $var=$port
;;
*[^0-9]*)
error >&7 "invalid port number: $port"
@@ -1298,4 +1296,9 @@ test_set_port () {
# The user has specified the port.
;;
esac
+
+   # Make sure that parallel '--stress' test jobs get different
+   # ports.
+   port=$(($port + ${GIT_TEST_STRESS_JOB_NR:-0}))
+   eval $var=$port
 }
diff --git a/t/test-lib.sh b/t/test-lib.sh
index 49e4563405..9b7f687396 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -71,8 +71,81 @@ then
exit 1
 fi
 
+TEST_STRESS_SFX="${GIT_TEST_STRESS_JOB_NR:+.stress-$GIT_TEST_STRESS_JOB_NR}"
 TEST_NAME="$(basename "$0" .sh)"
-TEST_RESULTS_BASE="$TEST_OUTPUT_DIRECTORY/test-results/$TEST_NAME"

[RFC PATCH 0/3] test-lib: add the '--stress' option to help reproduce occasional failures in flaky tests

2018-12-04 Thread SZEDER Gábor
Inspired by Peff's 'stress' script mentioned in:

  https://public-inbox.org/git/20181122161722.gc28...@sigill.intra.peff.net/

the last patch in this series brings that functionality to our test
library to help to reproduce failures in flaky tests.  So

  ./t1234-foo --stress
  
will run that test script repeatedly in multiple parallel invocations,
in the hope that the increased load creates enough variance in the
timing of the test's commands that a failure is eventually triggered.
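A stand-alone sketch of that behavior (illustrative only; the real implementation lives in t/test-lib.sh and differs in the details):

```shell
# Run N parallel loops of a command until one invocation fails, then
# let the remaining jobs finish their current iteration.
stress () {
    cmd=$1
    jobs=${2:-8}
    flag=$(mktemp -u)    # the file's existence signals "stop looping"

    i=1
    while test "$i" -le "$jobs"
    do
        (
            # Loop the command until it fails or another job already did.
            while test ! -e "$flag" && $cmd >/dev/null 2>&1
            do :
            done
            touch "$flag"
        ) &
        i=$((i + 1))
    done
    wait            # let still-running invocations wind down
    rm -f "$flag"
}

stress false 2 && echo "a failure was reproduced"
```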


SZEDER Gábor (3):
  test-lib: consolidate naming of test-results paths
  test-lib-functions: introduce the 'test_set_port' helper function
  test-lib: add the '--stress' option to run a test repeatedly under
load

 t/README | 13 +-
 t/lib-git-daemon.sh  |  2 +-
 t/lib-git-p4.sh  |  9 +---
 t/lib-git-svn.sh |  2 +-
 t/lib-httpd.sh   |  2 +-
 t/t0410-partial-clone.sh |  1 -
 t/t5512-ls-remote.sh |  2 +-
 t/test-lib-functions.sh  | 39 
 t/test-lib.sh| 99 +++-
 9 files changed, 143 insertions(+), 26 deletions(-)

-- 
2.20.0.rc2.156.g5a9fd2ce9c



[PATCH 2/3] test-lib-functions: introduce the 'test_set_port' helper function

2018-12-04 Thread SZEDER Gábor
Several test scripts run daemons like 'git-daemon' or Apache, and
communicate with them through TCP sockets.  To have unique ports where
these daemons are accessible, the ports are usually the number of the
corresponding test scripts, unless the user overrides them via
environment variables, and thus all those tests and test libs contain
more or less the same bit of one-liner boilerplate code to find out
the port.  The last patch in this series will make this a bit more
complicated.

Factor out finding the port for a daemon into the common helper
function 'test_set_port' to avoid repeating ourselves.

Take special care of test scripts with "low" numbers:

  - Test numbers below 1024 would result in a port that's only usable
as root, so set their port to '1 + test-nr' to make sure it
doesn't interfere with other tests in the test suite.  This makes
the hardcoded port number in 't0410-partial-clone.sh' unnecessary,
remove it.

  - The shell's arithmetic evaluation interprets numbers with leading
zeros as octal values, which means that test numbers below 1000
that contain the digit 8 or 9 would trigger an error.  Remove all
leading zeros from the test numbers to prevent this.
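The two rules above can be sketched in plain shell. This is an illustration, not the actual test_set_port code; in particular the 10000 offset for root-only ports is an assumption made here for the example.

```shell
# Derive a port from a test script name, per the two rules above:
# strip leading zeros so the arithmetic does not parse the number as
# octal, then shift root-only ports (< 1024) into an unprivileged
# range (10000 offset assumed for illustration).
this_test=t0410-partial-clone
port=${this_test%%-*}            # -> t0410
port=${port#t}                   # -> 0410
port=${port#"${port%%[1-9]*}"}   # drop leading zeros -> 410
port=${port:-0}                  # all-zero test number edge case
if test "$port" -lt 1024
then
	port=$((port + 10000))   # would need root otherwise
fi
echo "$port"                     # -> 10410
```

Without the leading-zero stripping, a script like t0809-* would make `$((port + 1))` fail, because 0809 is not a valid octal number.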

Note that the Perforce tests are unlike the other tests involving
daemons in that:

  - 'lib-git-p4.sh' doesn't use the test's number for unique port as
is, but does a bit of additional arithmetic on top [1].

  - The port is not overridable via an environment variable.

With this patch even Perforce tests will use the test's number as
default port, and it will be overridable via the P4DPORT environment
variable.

[1] Commit fc00233071 (git-p4 tests: refactor and cleanup, 2011-08-22)
introduced that "unusual" unique port computation without
explaining why it was necessary (as opposed to simply using the
test number as is).  It seems to be just unnecessary complication,
and in any case that commit came way before the "test nr as unique
port" got "standardized" for other daemons in commits c44132fcf3
(tests: auto-set git-daemon port, 2014-02-10), 3bb486e439 (tests:
auto-set LIB_HTTPD_PORT from test name, 2014-02-10), and
bf9d7df950 (t/lib-git-svn.sh: improve svnserve tests with parallel
make test, 2017-12-01).

Signed-off-by: SZEDER Gábor 
---
 t/lib-git-daemon.sh  |  2 +-
 t/lib-git-p4.sh  |  9 +
 t/lib-git-svn.sh |  2 +-
 t/lib-httpd.sh   |  2 +-
 t/t0410-partial-clone.sh |  1 -
 t/t5512-ls-remote.sh |  2 +-
 t/test-lib-functions.sh  | 36 
 7 files changed, 41 insertions(+), 13 deletions(-)

diff --git a/t/lib-git-daemon.sh b/t/lib-git-daemon.sh
index f98de95c15..41eb1e3ae8 100644
--- a/t/lib-git-daemon.sh
+++ b/t/lib-git-daemon.sh
@@ -28,7 +28,7 @@ then
test_skip_or_die $GIT_TEST_GIT_DAEMON "file system does not support 
FIFOs"
 fi
 
-LIB_GIT_DAEMON_PORT=${LIB_GIT_DAEMON_PORT-${this_test#t}}
+test_set_port LIB_GIT_DAEMON_PORT
 
 GIT_DAEMON_PID=
 GIT_DAEMON_DOCUMENT_ROOT_PATH="$PWD"/repo
diff --git a/t/lib-git-p4.sh b/t/lib-git-p4.sh
index c27599474c..b3be3ba011 100644
--- a/t/lib-git-p4.sh
+++ b/t/lib-git-p4.sh
@@ -53,14 +53,7 @@ time_in_seconds () {
(cd / && "$PYTHON_PATH" -c 'import time; print(int(time.time()))')
 }
 
-# Try to pick a unique port: guess a large number, then hope
-# no more than one of each test is running.
-#
-# This does not handle the case where somebody else is running the
-# same tests and has chosen the same ports.
-testid=${this_test#t}
-git_p4_test_start=9800
-P4DPORT=$((10669 + ($testid - $git_p4_test_start)))
+test_set_port P4DPORT
 
 P4PORT=localhost:$P4DPORT
 P4CLIENT=client
diff --git a/t/lib-git-svn.sh b/t/lib-git-svn.sh
index a8130f9119..f3b478c307 100644
--- a/t/lib-git-svn.sh
+++ b/t/lib-git-svn.sh
@@ -13,6 +13,7 @@ fi
 GIT_DIR=$PWD/.git
 GIT_SVN_DIR=$GIT_DIR/svn/refs/remotes/git-svn
 SVN_TREE=$GIT_SVN_DIR/svn-tree
+test_set_port SVNSERVE_PORT
 
 svn >/dev/null 2>&1
 if test $? -ne 1
@@ -119,7 +120,6 @@ require_svnserve () {
 }
 
 start_svnserve () {
-   SVNSERVE_PORT=${SVNSERVE_PORT-${this_test#t}}
svnserve --listen-port $SVNSERVE_PORT \
 --root "$rawsvnrepo" \
 --listen-once \
diff --git a/t/lib-httpd.sh b/t/lib-httpd.sh
index a8729f8232..e465116ef9 100644
--- a/t/lib-httpd.sh
+++ b/t/lib-httpd.sh
@@ -82,7 +82,7 @@ case $(uname) in
 esac
 
 LIB_HTTPD_PATH=${LIB_HTTPD_PATH-"$DEFAULT_HTTPD_PATH"}
-LIB_HTTPD_PORT=${LIB_HTTPD_PORT-${this_test#t}}
+test_set_port LIB_HTTPD_PORT
 
 TEST_PATH="$TEST_DIRECTORY"/lib-httpd
 HTTPD_ROOT_PATH="$PWD"/httpd
diff --git a/t/t0410-partial-clone.sh b/t/t0410-partial-clone.sh
index ba3887f178..0aca8d7588 100755
--- a/t/t0410-partial-clone.sh
+++ b/t/t0410-partial-clone.sh
@@ -480,7 +480,6 @@ test_expect_success 'gc stops traversal when a missing but 
promised object is re
! grep "$TREE_HASH" out
 '
 
-LIB_HTTPD_PORT=12345  # default port, 410, cannot be 

Re: [PATCH/RFC v3 00/14] Introduce new commands switch-branch and restore-files

2018-12-04 Thread Duy Nguyen
On Tue, Dec 4, 2018 at 2:29 AM Elijah Newren  wrote:
>
> On Thu, Nov 29, 2018 at 2:01 PM Nguyễn Thái Ngọc Duy  
> wrote:
> >
> > v3 sees switch-branch go back to switch-branch (in v2 it was
> > checkout-branch). checkout-files is also renamed restore-files (v1 was
> > restore-paths). Hopefully we won't see another rename.
>
> I started reading through the patches.  I also tried to apply them
> locally, but they had conflicts or missing base file version on both
> master and next.  What version did you base it on?

I think nd/checkout-dwim-fix because of a non-trivial conflict there
(but I don't remember when I noticed it and rebased on that). Anyway
you can get the whole series at

https://gitlab.com/pclouds/git/tree/switch-branch-and-checkout-files

It fixes some of your comments already, a couple of bug fixes here and
there and in a good-enough shape that I start actually using it.

> > - Two more fancy features (the "git checkout --index" being the
> >   default mode and the backup log for accidental overwrites) are of
> >   course still missing. But they are coming.
> >
> > I did not go replace "detached HEAD" with "unnamed branch" (or "no
> > branch") everywhere because I think a unique term is still good to
> > refer to this concept. Or maybe "no branch" is good enough. I dunno.
>
> I personally like "unnamed branch", but "no branch" would still be
> better than "detached HEAD".

Haven't really worked on killing the term "detached HEAD" yet. But I
noticed the other day that git-branch reports

* (HEAD detached from 703266f6e4)

and I didn't know how to rephrase that. I guess "unnamed branch from
703266f6e4" is probably good enough but my old-timer brain screams no.
-- 
Duy


Re: [PATCH v3 07/14] checkout: split into switch-branch and restore-files

2018-12-04 Thread Duy Nguyen
Thanks for all the comments! There are still some I haven't replied to
(either I'll agree and do it anyway, or I'll need some more time to
digest them).

On Tue, Dec 4, 2018 at 1:45 AM Elijah Newren  wrote:
> > +'git restore-files' [--from=] ...
>
> Isn't this already inferred by the previous line?  Or was the
> inclusion of --from on the previous line in error?  Looking at the
> git-checkout manpage, it looks like you may have just been copying an
> existing weirdness, but it needs to be fixed.  ;-)

Hehe.

> > +'git restore-files' (-p|--patch) [--from=] [...]
> > +
> > +DESCRIPTION
> > +---
> > +Updates files in the working tree to match the version in the index
> > +or the specified tree.
> > +
> > +'git restore-files' [--from=] ...::
>
>  and ?  I understand  and ,
> or  but have no clue why it'd be okay to specify 
> and  together.  What does that even mean?
>
> Also, rather than fixing from  to  or ,
> perhaps we should just use  here?  (I'm thinking of git
> rev-parse's "Specifying revisions", which suggests "revisions" as a
> good substitute for "commit-ish" that isn't quite so weird for new
> users.)

tree-ish is technically more accurate. But I'm ok with just
. If you give it a blob oid then you should get a nice
explanation what you're doing wrong anyway.


> > +   Overwrite paths in the working tree by replacing with the
> > +   contents in the index or in the  (most often a
> > +   commit).  When a  is given, the paths that
> > +   match the  are updated both in the index and in
> > +   the working tree.
>
> Is that the default we really want for this command?  Why do we
> automatically assume these files are ready for commit?  I understand
> that it's what checkout did, but I'd find it more natural to only
> affect the working tree by default.  We can give it an option for
> affecting the index instead (or perhaps in addition).

Yeah, that behavior of updating the index always bothers me when I use
it but I seemed to forget when working on this.

> > +--ours::
> > +--theirs::
> > +   Check out stage #2 ('ours') or #3 ('theirs') for unmerged
> > +   paths.
> > ++
> > +Note that during `git rebase` and `git pull --rebase`, 'ours' and
> > +'theirs' may appear swapped; `--ours` gives the version from the
> > +branch the changes are rebased onto, while `--theirs` gives the
> > +version from the branch that holds your work that is being rebased.
> > ++
> > +This is because `rebase` is used in a workflow that treats the
> > +history at the remote as the shared canonical one, and treats the
> > +work done on the branch you are rebasing as the third-party work to
> > +be integrated, and you are temporarily assuming the role of the
> > +keeper of the canonical history during the rebase.  As the keeper of
> > +the canonical history, you need to view the history from the remote
> > +as `ours` (i.e. "our shared canonical history"), while what you did
> > +on your side branch as `theirs` (i.e. "one contributor's work on top
> > +of it").
>
> Total aside because I'm not sure what you could change here, but man
> do I hate this.

Uh it's actually documented? I'm always confused by this too. --ours
and --theirs at this point are pretty much tied to stage 2 and 3.
Nothing I can do about it. But if you could come up with some other
option names, then we could make "new ours" to be stage 3 during
rebase, for example.

> > +Part of the linkgit:git[1] suite
>
>
> My single biggest worry about this whole series is that I'm worried
> you're perpetuating and even further ingraining one of the biggest
> usability problems with checkout: people suggest and use it for
> reverting/restoring paths to a previous version, but it doesn't do
> that:
>
> git restore-files --from master~10 Documentation/
> 
> git add -u
> git commit -m "Rationale for changing files including reverting 
> Documentation/"
>
> In particular, now you have a mixture of files in Documentation/ from
> master~10 (er, now likely master~11) and HEAD~1; any files and
> sub-directories that existed in HEAD~1 still remain and are mixed with
> all other files in Documentation/ from the older commit.
>
> You may think this is a special case, but this particular issue
> results in some pretty bad surprises.  Also, it was pretty surprising
> to me just how difficult it was to implement an svn-like revert in
> EasyGit, in large part because of this 'oversight' in git.  git
> checkout --  to me has always been fundamentally wrong, but I
> just wasn't sure if I wanted to fight the backward compatibility
> battle and suggest changing it.  With a new command, we definitely
> shouldn't be reinforcing this error.  (And maybe we should consider
> taking the time to fix git checkout too.)

What would be the right behavior for

 git restore-files --from=master~10 Documentation/

then? Consider it an error? I often use "git checkout HEAD" and "git
checkout HEAD^" (usually with -p) but not very far back like
master~10.

> > +If the branch exists 

Re: [PATCH 5/5] test-lib: add support for GIT_TODO_TESTS

2018-12-04 Thread SZEDER Gábor
On Tue, Nov 27, 2018 at 11:54:45PM +0100, Ævar Arnfjörð Bjarmason wrote:
> As noted in the updated t/README documentation being added here
> setting this new GIT_TODO_TESTS variable is usually better than
> GIT_SKIP_TESTS.

I don't see why this is "usually better".  I get how it can help your
particular use-case described below, but that doesn't mean that it's
"usually better".

> My use-case for this is to get feedback from the CI infrastructure[1]
> about which tests are passing due to fixes that have trickled into
> git.git.
> 
> With the GIT_SKIP_TESTS variable this use-case is painful, you need to
> do an occasional manual run without GIT_SKIP_TESTS set. It's much
> better to use GIT_TODO_TESTS and get a report of passing TODO tests
> from prove(1) at the bottom of the test output. Once those passing
> TODO tests have trickled down to 'master' the relevant glob (set for
> all of master/next/pu) can be removed.

Neither from the commit message nor from the documentation is it clear
to me what the result of 'make test' will be when a test listed in
GIT_TODO_TESTS fails.

> As seen in the "GIT_TODO_TESTS mixed failure" test the lack of
> interaction with existing tests marked as TODO by the test suite
> itself is intentional. There's no need to print out something saying
> they matched GIT_TODO_TESTS if they're already TODO tests.
> 
> 1. https://public-inbox.org/git/875zwm15k2@evledraar.gmail.com/
> 
> Signed-off-by: Ævar Arnfjörð Bjarmason 
> ---
>  ci/lib-travisci.sh |  2 +-
>  t/README   | 18 +--
>  t/t-basic.sh   | 81 +++---
>  t/test-lib.sh  | 31 +++---
>  4 files changed, 112 insertions(+), 20 deletions(-)
> 
> diff --git a/ci/lib-travisci.sh b/ci/lib-travisci.sh
> index 69dff4d1ec..ad8290bfdb 100755
> --- a/ci/lib-travisci.sh
> +++ b/ci/lib-travisci.sh
> @@ -121,7 +121,7 @@ osx-clang|osx-gcc)
>   # t9810 occasionally fails on Travis CI OS X
>   # t9816 occasionally fails with "TAP out of sequence errors" on
>   # Travis CI OS X
> - export GIT_SKIP_TESTS="t9810 t9816"
> + export GIT_TODO_TESTS="t9810 t9816"

This change is not mentioned in the commit message.

As noted in the hunk's context, these test scripts are not skipped
because they don't work on OSX at all, but because they are flaky.
Consequently, reporting them as "maybe should be un-TODO'd" when they
happen to succeed is pointless and will just lead to confusion, so
this seems to be a case when GIT_TODO_TESTS is actually worse than
GIT_SKIP_TESTS.

If a failing test in GIT_TODO_TESTS makes the whole 'make test' fail,
then this should be most definitely left as GIT_SKIP_TESTS.

>   ;;
>  GIT_TEST_GETTEXT_POISON)
>   export GIT_TEST_GETTEXT_POISON=YesPlease
> diff --git a/t/README b/t/README
> index c03b268813..922b4fb3bf 100644
> --- a/t/README
> +++ b/t/README
> @@ -207,8 +207,19 @@ ideally be reported as bugs and fixed, or guarded by a 
> prerequisite
>  (see "Using test prerequisites" below). But until then they can be
>  skipped.
>  
> -To skip tests, set the GIT_SKIP_TESTS variable. Individual tests can
> -be skipped:
> +To skip tests, set either the GIT_SKIP_TESTS or GIT_TODO_TESTS
> +variables. The difference is that with SKIP the tests won't be run at
> +all, whereas they will be run with TODO, but in success or failure
> +annotated as a TODO test.

This is confusing.  "To skip" a test means that the test is not run at
all.  Now, if GIT_TODO_TESTS were to run the listed tests, then it
definitely won't skip them, so this sentence contradicts the previous
one.

What does "annotated as a TODO test" mean?  Something similar to how
'test_expect_failure' works?

> +It's usually preferable to use TODO, since the test suite will report
> +those tests that unexpectedly succeed, which may indicate that
> +whatever bug broke the test in the past has been fixed, and the test
> +should be un-TODO'd. There's no such feedback loop with
> +GIT_SKIP_TESTS.
> +
> +The GIT_SKIP_TESTS and GIT_TODO_TESTS take the same values. Individual
> +tests can be skipped:
>  
>  $ GIT_SKIP_TESTS=t9200.8 sh ./t9200-git-cvsexport-commit.sh
>  
> @@ -223,7 +234,8 @@ patterns that tells which tests to skip, and either can 
> match the
>  
>  For an individual test suite --run could be used to specify that
>  only some tests should be run or that some tests should be
> -excluded from a run.
> +excluded from a run. The --run option is a shorthand for setting
> +a GIT_SKIP_TESTS pattern.
>  
>  The argument for --run is a list of individual test numbers or
>  ranges with an optional negation prefix that define what tests in



Re: [BUG REPORT] Git does not correctly replay bisect log

2018-12-04 Thread Lukáš Krejčí
On Tue, 2018-12-04 at 13:01 +0100, Christian Couder wrote:
> To debug I think it would be interesting to see the output of the
> following commands just before we get different results:
> 
> git for-each-ref 'refs/bisect/*'
> 
> and
> 
> git log -1 --format=oneline
> 

I placed the following snippet at the end of the loop in bisect_replay():
echo "COMMAND: '$git' '$bisect' '$command' '$rev'"  
git for-each-ref 'refs/bisect/*'
echo "current HEAD: $(git log -1 --format=oneline)"
echo "---"

$ env GIT_TRACE=0 git bisect replay /var/tmp/git-bisect.log 

We are not bisecting.
COMMAND: 'git' 'bisect' 'start' ''
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'bad' '5b394b2ddf0347bef56e50c69a58773c94343ff3'
5b394b2ddf0347bef56e50c69a58773c94343ff3 commit refs/bisect/bad
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'good' '94710cac0ef4ee177a63b5227664b38c95bbf703'
5b394b2ddf0347bef56e50c69a58773c94343ff3 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'bad' '54dbe75bbf1e189982516de179147208e90b5e45'
54dbe75bbf1e189982516de179147208e90b5e45 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'bad' '0a957467c5fd46142bc9c52758ffc552d4c5e2f7'
0a957467c5fd46142bc9c52758ffc552d4c5e2f7 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'good' '958f338e96f874a0d29442396d6adf9c1e17aa2d'
0a957467c5fd46142bc9c52758ffc552d4c5e2f7 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
958f338e96f874a0d29442396d6adf9c1e17aa2d commit 
refs/bisect/good-958f338e96f874a0d29442396d6adf9c1e17aa2d
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'bad' '2c20443ec221dcb76484b30933593e8ecd836bbd'
2c20443ec221dcb76484b30933593e8ecd836bbd commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
958f338e96f874a0d29442396d6adf9c1e17aa2d commit 
refs/bisect/good-958f338e96f874a0d29442396d6adf9c1e17aa2d
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'bad' 'c2fc71c9b74c1e87336a27dba1a5edc69d2690f1'
c2fc71c9b74c1e87336a27dba1a5edc69d2690f1 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
958f338e96f874a0d29442396d6adf9c1e17aa2d commit 
refs/bisect/good-958f338e96f874a0d29442396d6adf9c1e17aa2d
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'bad' 'b86d865cb1cae1e61527ea0b8977078bbf694328'
b86d865cb1cae1e61527ea0b8977078bbf694328 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
958f338e96f874a0d29442396d6adf9c1e17aa2d commit 
refs/bisect/good-958f338e96f874a0d29442396d6adf9c1e17aa2d
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
COMMAND: 'git' 'bisect' 'bad' '1b0d274523df5ef1caedc834da055ff721e4d4f0'
1b0d274523df5ef1caedc834da055ff721e4d4f0 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
958f338e96f874a0d29442396d6adf9c1e17aa2d commit 
refs/bisect/good-958f338e96f874a0d29442396d6adf9c1e17aa2d
current HEAD: 2595646791c319cadfdbf271563aac97d0843dc7 Linux 4.20-rc5
---
Bisecting: a merge base must be tested
[1e4b044d22517cae7047c99038abb23243ca] Linux 4.18-rc4







# I placed git for-each-ref 'refs/bisect/*' after each command in the file:

$ . /var/tmp/git-bisect.log 
5b394b2ddf0347bef56e50c69a58773c94343ff3 commit refs/bisect/bad
Bisecting: 6112 revisions left to test after this (roughly 13 steps)
[54dbe75bbf1e189982516de179147208e90b5e45] Merge tag 'drm-next-2018-08-15' of 
git://anongit.freedesktop.org/drm/drm
5b394b2ddf0347bef56e50c69a58773c94343ff3 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
Bisecting: 3881 revisions left to test after this (roughly 12 steps)
[0a957467c5fd46142bc9c52758ffc552d4c5e2f7] x86: i8259: Add missing include file
54dbe75bbf1e189982516de179147208e90b5e45 commit refs/bisect/bad
94710cac0ef4ee177a63b5227664b38c95bbf703 commit 
refs/bisect/good-94710cac0ef4ee177a63b5227664b38c95bbf703
Bisecting: 1595 revisions left to test after this 

[PATCH 2/3] sha1-file: emit error if an alternate looks like a repository

2018-12-04 Thread Ævar Arnfjörð Bjarmason
Since 26125f6b9b ("detect broken alternates.", 2006-02-22) we've
emitted an error if the alternates directory doesn't exist, but not
for the common misstep of adding a path to another git repository as
an alternate, as opposed to its "objects" directory.

Let's check for this, i.e. whether X/objects or X/.git/objects exists
when the user supplies X, and print an error (which, as a commit
leading up to this one shows, doesn't change the exit code; it just
"warns").

This check is intentionally not implemented by e.g. requiring that any
of X/?? exists or X/info or X/pack exists. It's a legitimate use-case
to point to an existing alternate that hasn't been populated yet, but
pointing to one where an "X/objects" or "X/.git/objects" directory
exists is definitely a mistake we should warn the user about.
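For illustration only (not part of the patch): the shell equivalent of the new check, applied to a hypothetical bare repository 'demo-alt.git' mistakenly used as an alternate.

```shell
# A path given as an alternate is suspicious if it contains an objects
# directory of its own, i.e. it is a repository rather than an
# objects directory.
looks_like_repo () {
	test -d "$1/objects" || test -d "$1/.git/objects"
}

# A bare repository accidentally pointed at as an alternate:
git init -q --bare demo-alt.git
if looks_like_repo demo-alt.git
then
	echo "demo-alt.git looks like a repository; use demo-alt.git/objects"
fi
```

The same shape of check covers the non-bare case, where the mistake is pointing at the worktree root instead of .git/objects.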

Signed-off-by: Ævar Arnfjörð Bjarmason 
---
 sha1-file.c   | 10 +-
 t/t5613-info-alternate.sh | 14 ++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/sha1-file.c b/sha1-file.c
index 5bd11c85bc..f142f81658 100644
--- a/sha1-file.c
+++ b/sha1-file.c
@@ -376,12 +376,20 @@ static int alt_odb_usable(struct raw_object_store *o,
 {
struct alternate_object_database *alt;
 
-   /* Detect cases where alternate disappeared */
if (!is_directory(path->buf)) {
+   /* Detect cases where alternate disappeared */
error(_("object directory %s does not exist; "
"check .git/objects/info/alternates"),
  path->buf);
return 0;
+   } else if (is_directory(mkpath("%s/objects", path->buf)) ||
+  is_directory(mkpath("%s/.git/objects", path->buf))) {
+   /* Detect cases where alternate is a git repository */
+   error(_("object directory %s looks like a git repository; "
+   "alternates must point to the 'objects' directory. "
+   "check .git/objects/info/alternates"),
+ path->buf);
+   return 0;
}
 
/*
diff --git a/t/t5613-info-alternate.sh b/t/t5613-info-alternate.sh
index d2964c57b7..b959e21421 100755
--- a/t/t5613-info-alternate.sh
+++ b/t/t5613-info-alternate.sh
@@ -143,4 +143,18 @@ test_expect_success 'print "error" on non-existing 
alternate' '
test_i18ngrep "does not exist; check" stderr
 '
 
+test_expect_success 'print "error" on alternate that looks like a git 
repository' '
+   git init --bare J &&
+   git init --bare K &&
+
+   # H is bare, G is not
+   echo ../../H >J/objects/info/alternates &&
+   echo ../../G >K/objects/info/alternates &&
+
+   git -C J fsck 2>stderr &&
+   test_i18ngrep "looks like a git repository; alternates must" stderr &&
+   git -C K fsck 2>stderr &&
+   test_i18ngrep "looks like a git repository; alternates must" stderr
+'
+
 test_done
-- 
2.20.0.rc2.403.gdbc3b29805



[PATCH 1/3] sha1-file: test the error behavior of alt_odb_usable()

2018-12-04 Thread Ævar Arnfjörð Bjarmason
Add a test for the error() case in alt_odb_usable() where an alternate
directory doesn't exist. This behavior has been the same since
26125f6b9b ("detect broken alternates.", 2006-02-22), but if that
error() was turned into die() the entire test suite would still pass.

Perhaps we should die() in that case, but let's start by adding a test
here to assert the long-standing existing behavior.

Signed-off-by: Ævar Arnfjörð Bjarmason 
---
 t/t5613-info-alternate.sh | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/t/t5613-info-alternate.sh b/t/t5613-info-alternate.sh
index 895f46bb91..d2964c57b7 100755
--- a/t/t5613-info-alternate.sh
+++ b/t/t5613-info-alternate.sh
@@ -136,4 +136,11 @@ test_expect_success CASE_INSENSITIVE_FS 'dup finding can 
be case-insensitive' '
test_cmp expect actual.alternates
 '
 
+test_expect_success 'print "error" on non-existing alternate' '
+   git init --bare I &&
+   echo DOES_NOT_EXIST >I/objects/info/alternates &&
+   git -C I fsck 2>stderr &&
+   test_i18ngrep "does not exist; check" stderr
+'
+
 test_done
-- 
2.20.0.rc2.403.gdbc3b29805



[PATCH 3/3] sha1-file: change alternate "error:" message to "warning:"

2018-12-04 Thread Ævar Arnfjörð Bjarmason
Change the "error" message emitted by alt_odb_usable() to be a
"warning" instead. As noted in commits leading up to this one this has
never impacted the exit code ever since the check was initially added
in 26125f6b9b ("detect broken alternates.", 2006-02-22).

It's confusing to emit an "error" when e.g. "git fsck" will exit with
0, so let's emit a "warning:" instead.

Signed-off-by: Ævar Arnfjörð Bjarmason 
---
 sha1-file.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/sha1-file.c b/sha1-file.c
index f142f81658..4b9b63bdcb 100644
--- a/sha1-file.c
+++ b/sha1-file.c
@@ -378,17 +378,17 @@ static int alt_odb_usable(struct raw_object_store *o,
 
if (!is_directory(path->buf)) {
/* Detect cases where alternate disappeared */
-   error(_("object directory %s does not exist; "
-   "check .git/objects/info/alternates"),
- path->buf);
+   warning(_("object directory %s does not exist; "
+ "check .git/objects/info/alternates"),
+   path->buf);
return 0;
} else if (is_directory(mkpath("%s/objects", path->buf)) ||
   is_directory(mkpath("%s/.git/objects", path->buf))) {
/* Detect cases where alternate is a git repository */
-   error(_("object directory %s looks like a git repository; "
-   "alternates must point to the 'objects' directory. "
-   "check .git/objects/info/alternates"),
- path->buf);
+   warning(_("object directory %s looks like a git repository; "
+ "alternates must point to the 'objects' directory. "
+ "check .git/objects/info/alternates"),
+   path->buf);
return 0;
}
 
-- 
2.20.0.rc2.403.gdbc3b29805



[PATCH 0/3] sha1-file: warn if alternate is a git repo (not object dir)

2018-12-04 Thread Ævar Arnfjörð Bjarmason
This adds a warning for the issue discussed upthread. As noted in
these patches, we've been emitting an "error" while not impacting the
exit code; should we die() instead? Maybe, but until there's consensus
on that, let's change this to warning() while we're at it.

Ævar Arnfjörð Bjarmason (3):
  sha1-file: test the error behavior of alt_odb_usable()
  sha1-file: emit error if an alternate looks like a repository
  sha1-file: change alternate "error:" message to "warning:"

 sha1-file.c   | 16 
 t/t5613-info-alternate.sh | 21 +
 2 files changed, 33 insertions(+), 4 deletions(-)

-- 
2.20.0.rc2.403.gdbc3b29805



Re: [BUG REPORT] Git does not correctly replay bisect log

2018-12-04 Thread Christian Couder
On Tue, Dec 4, 2018 at 12:20 PM Lukáš Krejčí  wrote:
>
> On Tue, 2018-12-04 at 12:04 +0100, Christian Couder wrote:
> >
> > Could you try to check that? And first could you give us the output of:
> >
> > git merge-base 5b394b2ddf0347bef56e50c69a58773c94343ff3
> > 94710cac0ef4ee177a63b5227664b38c95bbf703
>
> $ git merge-base 5b394b2ddf0347bef56e50c69a58773c94343ff3 
> 94710cac0ef4ee177a63b5227664b38c95bbf703
> 94710cac0ef4ee177a63b5227664b38c95bbf703
> $ git log -1 --format=oneline 94710cac0ef4ee177a63b5227664b38c95bbf703
> 94710cac0ef4ee177a63b5227664b38c95bbf703 (tag: v4.18) Linux 4.18

94710cac0ef4ee177a63b5227664b38c95bbf703 is the good commit that was
initially given. This means that the good commit
94710cac0ef4ee177a63b5227664b38c95bbf703 is an ancestor of the bad
commit 5b394b2ddf0347bef56e50c69a58773c94343ff3, and there should be
no reason to test a merge base when replaying.
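That ancestry rule can be checked mechanically. Here is a self-contained sketch with a throwaway repository (the repo name and identities are made up; with a real linux.git clone one would simply pass the two hashes from the bisect log to 'git merge-base'):

```shell
# If 'git merge-base good bad' prints the good commit itself, good is
# an ancestor of bad, so bisect never needs to test a merge base.
git init -q ancestor-demo
git -C ancestor-demo -c user.name=t -c user.email=t@example.com \
	commit -q --allow-empty -m good
good=$(git -C ancestor-demo rev-parse HEAD)
git -C ancestor-demo -c user.name=t -c user.email=t@example.com \
	commit -q --allow-empty -m bad
bad=$(git -C ancestor-demo rev-parse HEAD)
if test "$(git -C ancestor-demo merge-base "$good" "$bad")" = "$good"
then
	echo "good is an ancestor of bad"
fi
```

Equivalently, `git merge-base --is-ancestor $good $bad` exits 0 exactly when the good commit is an ancestor of the bad one.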

After testing on my machine, it seems that the problem is not
happening at the beginning of the replay.

To debug I think it would be interesting to see the output of the
following commands just before we get different results:

git for-each-ref 'refs/bisect/*'

and

git log -1 --format=oneline

in the case we are using `git bisect replay` and in the case we are
running the commands from the bisect log manually.

(You might need to temporarily remove the last command from the bisect
log to do that.)


Re: [BUG REPORT] Git does not correctly replay bisect log

2018-12-04 Thread Lukáš Krejčí
(I'm sorry about the formatting, here's the message again.)

Executing git bisect replay reaches a different commit than
the one that is obtained by running the commands from the bisect log manually.

Distribution: Arch Linux
git: 2.19.2-1
perl: 5.28.1-1
pcre2: 10.32-1
expat: 2.2.6-1
perl-error: 0.17027-1
grep: 3.1-2
bash: 4.4.023-1

no system /etc/gitconfig is present
tried with no ~/.gitconfig

$ cat .git/config 
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master

$ git fsck
Checking object directories: 100% (256/256), done.
warning in tag 5dc01c595e6c6ec9ccda4f6f69c131c0dd945f8c: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag 26791a8bcf0e6d33f43aef7682bdb555236d56de: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag 9e734775f7c22d2f89943ad6c745571f1930105f: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag 0397236d43e48e821cce5bbe6a80a1a56bb7cc3a: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag ebb5573ea8beaf000d4833735f3e53acb9af844c: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag 06f6d9e2f140466eeb41e494e14167f90210f89d: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag 701d7ecec3e0c6b4ab9bb824fd2b34be4da63b7e: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag 733ad933f62e82ebc92fed988c7f0795e64dea62: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag c521cb0f10ef2bf28a18e1cc8adf378ccbbe5a19: missingTaggerEntry: 
invalid format - expected 'tagger' line
warning in tag a339981ec18d304f9efeb9ccf01b1f04302edf32: missingTaggerEntry: 
invalid format - expected 'tagger' line
Checking objects: 100% (6428247/6428247), done.
Checking connectivity: 6369862, done.

$ cat /var/tmp/git-bisect.log
git bisect start
# bad: [5b394b2ddf0347bef56e50c69a58773c94343ff3] Linux 4.19-rc1
git bisect bad 5b394b2ddf0347bef56e50c69a58773c94343ff3
# good: [94710cac0ef4ee177a63b5227664b38c95bbf703] Linux 4.18
git bisect good 94710cac0ef4ee177a63b5227664b38c95bbf703
# bad: [54dbe75bbf1e189982516de179147208e90b5e45] Merge tag 
'drm-next-2018-08-15' of git://anongit.freedesktop.org/drm/drm
git bisect bad 54dbe75bbf1e189982516de179147208e90b5e45
# bad: [0a957467c5fd46142bc9c52758ffc552d4c5e2f7] x86: i8259: Add missing 
include file
git bisect bad 0a957467c5fd46142bc9c52758ffc552d4c5e2f7
# good: [958f338e96f874a0d29442396d6adf9c1e17aa2d] Merge branch 'l1tf-final' of 
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 958f338e96f874a0d29442396d6adf9c1e17aa2d
# bad: [2c20443ec221dcb76484b30933593e8ecd836bbd] Merge tag 'acpi-4.19-rc1' of 
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect bad 2c20443ec221dcb76484b30933593e8ecd836bbd
# bad: [c2fc71c9b74c1e87336a27dba1a5edc69d2690f1] Merge tag 'mtd/for-4.19' of 
git://git.infradead.org/linux-mtd
git bisect bad c2fc71c9b74c1e87336a27dba1a5edc69d2690f1
# bad: [b86d865cb1cae1e61527ea0b8977078bbf694328] blkcg: Make 
blkg_root_lookup() work for queues in bypass mode
git bisect bad b86d865cb1cae1e61527ea0b8977078bbf694328
# bad: [1b0d274523df5ef1caedc834da055ff721e4d4f0] nvmet: don't use uuid_le type
git bisect bad 1b0d274523df5ef1caedc834da055ff721e4d4f0

$ git status
On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean

$ git log -1 --format=oneline
2595646791c319cadfdbf271563aac97d0843dc7 (HEAD -> master, tag: v4.20-rc5, 
origin/master, origin/HEAD) Linux 4.20-rc5

$ git bisect replay /var/tmp/git-bisect.log 
We are not bisecting.
Bisecting: a merge base must be tested
[d72e90f33aa4709ebecc5005562f52335e106a60] Linux 4.18-rc6

$ git log -1 --format=oneline
d72e90f33aa4709ebecc5005562f52335e106a60 (HEAD, tag: v4.18-rc6) Linux 4.18-rc6





# Running the commands from the bisect log manually, however:

$ git bisect reset
Checking out files: 100% (18326/18326), done.
Previous HEAD position was d72e90f33aa4 Linux 4.18-rc6
Switched to branch 'master'
Your branch is up to date with 'origin/master'.

$ . /var/tmp/git-bisect.log 
Bisecting: 6112 revisions left to test after this (roughly 13 steps)
[54dbe75bbf1e189982516de179147208e90b5e45] Merge tag 'drm-next-2018-08-15' of 
git://anongit.freedesktop.org/drm/drm
Bisecting: 3881 revisions left to test after this (roughly 12 steps)
[0a957467c5fd46142bc9c52758ffc552d4c5e2f7] x86: i8259: Add missing include file
Bisecting: 1595 revisions left to test after this (roughly 11 steps)
[958f338e96f874a0d29442396d6adf9c1e17aa2d] Merge branch 'l1tf-final' of 
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Bisecting: 854 revisions left to test after this (roughly 10 

Re: [BUG REPORT] Git does not correctly replay bisect log

2018-12-04 Thread Lukáš Krejčí
On Tue, 2018-12-04 at 12:04 +0100, Christian Couder wrote:
> 
> Could you try to check that? And first could you give us the output of:
> 
> git merge-base 5b394b2ddf0347bef56e50c69a58773c94343ff3
> 94710cac0ef4ee177a63b5227664b38c95bbf703

$ git merge-base 5b394b2ddf0347bef56e50c69a58773c94343ff3 
94710cac0ef4ee177a63b5227664b38c95bbf703
94710cac0ef4ee177a63b5227664b38c95bbf703
$ git log -1 --format=oneline 94710cac0ef4ee177a63b5227664b38c95bbf703
94710cac0ef4ee177a63b5227664b38c95bbf703 (tag: v4.18) Linux 4.18



Re: [BUG REPORT] Git does not correctly replay bisect log

2018-12-04 Thread Christian Couder
On Tue, Dec 4, 2018 at 10:53 AM Lukáš Krejčí  wrote:
>
> Executing git bisect replay reaches a different commit than
> the one that is obtained by running the commands from the bisect log manually.


> $ git bisect replay /var/tmp/git-bisect.log
> We are not bisecting.
> Bisecting: a merge base must be tested
> [d72e90f33aa4709ebecc5005562f52335e106a60] Linux 4.18-rc6

Merge bases are tested only when the good commit is not an ancestor of
the bad commit. If this didn't happen when the log was recorded, it
shouldn't happen when it is replayed.

Here it seems that this is happening at the beginning of the replay.
Perhaps git bisect replay is taking into account the current
branch/commit though it shouldn't.

I wonder if this would happen if the current branch/commit has the
good commit as an ancestor.

Could you try to check that? And first could you give us the output of:

git merge-base 5b394b2ddf0347bef56e50c69a58773c94343ff3
94710cac0ef4ee177a63b5227664b38c95bbf703
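The ancestor relationship that decides whether a merge base needs testing can be checked directly; a minimal sketch in a throw-away repo (hypothetical commits standing in for v4.18 and v4.19-rc1):

```shell
# When the good commit is an ancestor of the bad commit, their merge
# base is the good commit itself, and bisect has no merge base to test.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=t@example.com -c user.name=t \
    commit -q --allow-empty -m good
good=$(git rev-parse HEAD)
git -c user.email=t@example.com -c user.name=t \
    commit -q --allow-empty -m bad
bad=$(git rev-parse HEAD)
# merge-base of an ancestor pair is the ancestor itself...
test "$(git merge-base "$bad" "$good")" = "$good" && echo "no merge base to test"
# ...which --is-ancestor reports directly via its exit status
git merge-base --is-ancestor "$good" "$bad" && echo "ancestor confirmed"
```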




[BUG REPORT] Git does not correctly replay bisect log

2018-12-04 Thread Lukáš Krejčí
Executing git bisect replay reaches a different commit than
the one that is obtained by running the commands from the bisect log manually.

Distribution: Arch Linux
git: 2.19.2-1
perl: 5.28.1-1
pcre2: 10.32-1
expat: 2.2.6-1
perl-error: 0.17027-1
grep: 3.1-2
bash: 4.4.023-1

no system /etc/gitconfig is present
tried with no ~/.gitconfig

$ cat .git/config
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master

$ git fsck
Checking object directories: 100% (256/256), done.
warning in tag 5dc01c595e6c6ec9ccda4f6f69c131c0dd945f8c:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag 26791a8bcf0e6d33f43aef7682bdb555236d56de:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag 9e734775f7c22d2f89943ad6c745571f1930105f:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag 0397236d43e48e821cce5bbe6a80a1a56bb7cc3a:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag ebb5573ea8beaf000d4833735f3e53acb9af844c:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag 06f6d9e2f140466eeb41e494e14167f90210f89d:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag 701d7ecec3e0c6b4ab9bb824fd2b34be4da63b7e:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag 733ad933f62e82ebc92fed988c7f0795e64dea62:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag c521cb0f10ef2bf28a18e1cc8adf378ccbbe5a19:
missingTaggerEntry: invalid format - expected 'tagger' line
warning in tag a339981ec18d304f9efeb9ccf01b1f04302edf32:
missingTaggerEntry: invalid format - expected 'tagger' line
Checking objects: 100% (6428247/6428247), done.
Checking connectivity: 6369862, done.

$ cat /var/tmp/git-bisect.log
git bisect start
# bad: [5b394b2ddf0347bef56e50c69a58773c94343ff3] Linux 4.19-rc1
git bisect bad 5b394b2ddf0347bef56e50c69a58773c94343ff3
# good: [94710cac0ef4ee177a63b5227664b38c95bbf703] Linux 4.18
git bisect good 94710cac0ef4ee177a63b5227664b38c95bbf703
# bad: [54dbe75bbf1e189982516de179147208e90b5e45] Merge tag
'drm-next-2018-08-15' of git://anongit.freedesktop.org/drm/drm
git bisect bad 54dbe75bbf1e189982516de179147208e90b5e45
# bad: [0a957467c5fd46142bc9c52758ffc552d4c5e2f7] x86: i8259: Add
missing include file
git bisect bad 0a957467c5fd46142bc9c52758ffc552d4c5e2f7
# good: [958f338e96f874a0d29442396d6adf9c1e17aa2d] Merge branch
'l1tf-final' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 958f338e96f874a0d29442396d6adf9c1e17aa2d
# bad: [2c20443ec221dcb76484b30933593e8ecd836bbd] Merge tag
'acpi-4.19-rc1' of
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect bad 2c20443ec221dcb76484b30933593e8ecd836bbd
# bad: [c2fc71c9b74c1e87336a27dba1a5edc69d2690f1] Merge tag
'mtd/for-4.19' of git://git.infradead.org/linux-mtd
git bisect bad c2fc71c9b74c1e87336a27dba1a5edc69d2690f1
# bad: [b86d865cb1cae1e61527ea0b8977078bbf694328] blkcg: Make
blkg_root_lookup() work for queues in bypass mode
git bisect bad b86d865cb1cae1e61527ea0b8977078bbf694328
# bad: [1b0d274523df5ef1caedc834da055ff721e4d4f0] nvmet: don't use uuid_le type
git bisect bad 1b0d274523df5ef1caedc834da055ff721e4d4f0

$ git status
On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean

$ git log -1 --format=oneline
2595646791c319cadfdbf271563aac97d0843dc7 (HEAD -> master, tag:
v4.20-rc5, origin/master, origin/HEAD) Linux 4.20-rc5

$ git bisect replay /var/tmp/git-bisect.log
We are not bisecting.
Bisecting: a merge base must be tested
[d72e90f33aa4709ebecc5005562f52335e106a60] Linux 4.18-rc6

$ git log -1 --format=oneline
d72e90f33aa4709ebecc5005562f52335e106a60 (HEAD, tag: v4.18-rc6) Linux 4.18-rc6





Running the commands from the bisect log manually, however:

$ git bisect reset
Checking out files: 100% (18326/18326), done.
Previous HEAD position was d72e90f33aa4 Linux 4.18-rc6
Switched to branch 'master'
Your branch is up to date with 'origin/master'.

$ . /var/tmp/git-bisect.log
Bisecting: 6112 revisions left to test after this (roughly 13 steps)
[54dbe75bbf1e189982516de179147208e90b5e45] Merge tag
'drm-next-2018-08-15' of git://anongit.freedesktop.org/drm/drm
Bisecting: 3881 revisions left to test after this (roughly 12 steps)
[0a957467c5fd46142bc9c52758ffc552d4c5e2f7] x86: i8259: Add missing include file
Bisecting: 1595 revisions left to test after this (roughly 11 steps)
[958f338e96f874a0d29442396d6adf9c1e17aa2d] Merge branch 'l1tf-final'
of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Bisecting: 854 revisions left to test after this (roughly 10 steps)
[2c20443ec221dcb76484b30933593e8ecd836bbd] Merge tag 'acpi-4.19-rc1'
of 

git-gui: Norwegian Bokmål translation

2018-12-04 Thread Petter Reinholdtsen


From 9a7edd7f94b82335177917463cb334541b4d2cb0 Mon Sep 17 00:00:00 2001
From: Petter Reinholdtsen 
Date: Mon, 3 Dec 2018 12:24:22 +0100
Subject: [PATCH] i18n: Update the nb translation of git-gui and start on
 glossary list.

Signed-off-by: Petter Reinholdtsen 
---
 po/glossary/nb.po | 171 ++
 po/nb.po  |  57 --
 2 files changed, 210 insertions(+), 18 deletions(-)
 create mode 100644 po/glossary/nb.po

diff --git a/po/glossary/nb.po b/po/glossary/nb.po
new file mode 100644
index 000..1a80451
--- /dev/null
+++ b/po/glossary/nb.po
@@ -0,0 +1,171 @@
+# Copyright (C) 2018 Free Software Foundation, Inc.
+#
+# Petter Reinholdtsen , 2018.
+msgid ""
+msgstr ""
+"Project-Id-Version: \n"
+"POT-Creation-Date: 2008-01-07 21:20+0100\n"
+"PO-Revision-Date: 2018-12-03 12:35+0100\n"
+"Last-Translator: Petter Reinholdtsen \n"
+"Language-Team: NorwegianBokmal \n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: ENCODING\n"
+"Language: nb\n"
+"Plural-Forms: nplurals=2; plural=(n != 1);\n"
+"X-Generator: Lokalize 2.0\n"
+
+#. "English Definition (Dear translator: This file will never be visible to 
the user! It should only serve as a tool for you, the translator. Nothing 
more.)"
+msgid ""
+"English Term (Dear translator: This file will never be visible to the user!)"
+msgstr ""
+
+#. ""
+msgid "amend"
+msgstr "legg til"
+
+#. ""
+msgid "annotate"
+msgstr "kommenter"
+
+#. "A 'branch' is an active line of development."
+msgid "branch [noun]"
+msgstr "gren [substantiv]"
+
+#. ""
+msgid "branch [verb]"
+msgstr ""
+
+#. ""
+msgid "checkout [noun]"
+msgstr "utsjekk [substantiv]"
+
+#. "The action of updating the working tree to a revision which was stored in 
the object database."
+msgid "checkout [verb]"
+msgstr "sjekk ut [verb]"
+
+#. ""
+msgid "clone [verb]"
+msgstr "klone [verb]"
+
+#. "A single point in the git history."
+msgid "commit [noun]"
+msgstr "endring [substantiv]"
+
+#. "The action of storing a new snapshot of the project's state in the git 
history."
+msgid "commit [verb]"
+msgstr "sjekk inn [verb]"
+
+#. ""
+msgid "diff [noun]"
+msgstr "forskjell [substantiv]"
+
+#. ""
+msgid "diff [verb]"
+msgstr ""
+
+#. "A fast-forward is a special type of merge where you have a revision and 
you are merging another branch's changes that happen to be a descendant of what 
you have."
+msgid "fast forward merge"
+msgstr ""
+
+#. "Fetching a branch means to get the branch's head from a remote repository, 
to find out which objects are missing from the local object database, and to 
get them, too."
+msgid "fetch"
+msgstr "hent"
+
+#. "One context of consecutive lines in a whole patch, which consists of many 
such hunks"
+msgid "hunk"
+msgstr ""
+
+#. "A collection of files. The index is a stored version of your working tree."
+msgid "index (in git-gui: staging area)"
+msgstr ""
+
+#. "A successful merge results in the creation of a new commit representing 
the result of the merge."
+msgid "merge [noun]"
+msgstr ""
+
+#. "To bring the contents of another branch into the current branch."
+msgid "merge [verb]"
+msgstr ""
+
+#. ""
+msgid "message"
+msgstr "melding"
+
+#. "Deletes all stale tracking branches under <name>. These stale branches 
have already been removed from the remote repository referenced by <name>, but 
are still locally available in 'remotes/<name>'."
+msgid "prune"
+msgstr ""
+
+#. "Pulling a branch means to fetch it and merge it."
+msgid "pull"
+msgstr "dra inn"
+
+#. "Pushing a branch means to get the branch's head ref from a remote 
repository, and ... (well, can someone please explain it for mere mortals?)"
+msgid "push"
+msgstr "skyv"
+
+#. ""
+msgid "redo"
+msgstr "gjør om"
+
+#. "An other repository ('remote'). One might have a set of remotes whose 
branches one tracks."
+msgid "remote"
+msgstr ""
+
+#. "A collection of refs (?) together with an object database containing all 
objects which are reachable from the refs... (oops, you've lost me here. Again, 
please an explanation for mere mortals?)"
+msgid "repository"
+msgstr "depot"
+
+#. ""
+msgid "reset"
+msgstr "nullstill"
+
+#. ""
+msgid "revert"
+msgstr ""
+
+#. "A particular state of files and directories which was stored in the object 
database."
+msgid "revision"
+msgstr ""
+
+#. ""
+msgid "sign off"
+msgstr ""
+
+#. ""
+msgid "staging area"
+msgstr ""
+
+#. ""
+msgid "status"
+msgstr "status"
+
+#. "A ref pointing to a tag or commit object"
+msgid "tag [noun]"
+msgstr "merkelapp [substantiv] "
+
+#. ""
+msgid "tag [verb]"
+msgstr "merk [verb]"
+
+#. "A regular git branch that is used to follow changes from another 
repository."
+msgid "tracking branch"
+msgstr ""
+
+#. ""
+msgid "undo"
+msgstr ""
+
+#. ""
+msgid "update"
+msgstr "oppdater"
+
+#. ""
+msgid "verify"
+msgstr ""
+
+#. "The tree of actual checked out files."
+msgid "working copy, working tree"
+msgstr ""
+
+
diff --git a/po/nb.po b/po/nb.po
index d66aa50..92a7f5d 100644
--- 

Re: [PATCH] rebase -i: introduce the 'test' command

2018-12-04 Thread Johannes Schindelin
Hi Peff,

On Mon, 3 Dec 2018, Jeff King wrote:

> On Mon, Dec 03, 2018 at 08:01:44PM +0100, Johannes Schindelin wrote:
> 
> > > In this sort of situation, I often whish to be able to do nested rebases.
> > > Even more because it happen relatively often that I forget that I'm
> > > working in a rebase and not on the head, and then it's quite natural
> > > to me to type things like 'git rebase -i @^^^' while already rebasing.
> > > But I suppose this has already been discussed.
> > 
> > Varieties of this have been discussed, but no, not nested rebases.
> > 
> > The closest we thought about was re-scheduling the latest <n> commits,
> > which is now harder because of the `--rebase-merges` mode.
> > 
> > But I think it would be doable. Your idea of a "nested" rebase actually
> > opens that door quite nicely. It would not *really* be a nested rebase,
> > and it would still only be possible in interactive mode, but I could
> > totally see
> > 
> > git rebase --nested -i HEAD~3
> > 
> > to generate and prepend the following lines to the `git-rebase-todo` file:
> > 
> > reset abcdef01 # This is HEAD~3
> > pick abcdef02 # This is HEAD~2
> > pick abcdef03 # This is HEAD~
> > pick abcdef04 # This is HEAD
> > 
> > (assuming that the latest 3 commits were non-merge commits; It would look
> > quite a bit more complicated in other situations.)
> 
> Yeah, I would probably use that if it existed.

I kind of use it, even if it does not exist ;-)

> It would be nicer to have real nested sequencer operations, I think, for
> other situations.

I agree. But for the moment, our data format is too married to the exact
layout of .git/, thanks to `git rebase`'s evolution from a Unix shell
script.

Alban has this really great patch series to work on the todo list
in-memory, and that paves the way to decouple the entire sequencer thing
from the file system.

The most notably thing that still would need to be encapsulated would be
the options: currently, there is a plethora of inconsistent options files
being saved into the state directory (for some, the mere presence
indicates `true`, some contain `true` or `false`, others contain text,
etc).

> E.g., cherry-picking a sequence of commits while you're in the middle of
> a rebase.

You will be delighted to learn that you can cherry-pick a sequence of
commits in the middle of a rebase already. I do `exec git cherry-pick
<commit-range>` *all* the time.
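As a sketch, such a mid-rebase cherry-pick is just an `exec` line in the todo list (hypothetical hashes and commit subjects):

```
pick abcdef02 Some earlier commit
exec git cherry-pick 1111111..2222222
pick abcdef03 A later commit
```

The `exec` runs after the first pick completes, so the cherry-picked range lands between the two picked commits without any nested sequencer state.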

> But I suspect getting that right would be _loads_ more work, and
> probably would involve some funky UI corner cases to handle the stack of
> operations (so truly aborting a rebase may mean an arbitrary number of
> "rebase --abort" calls to pop the stack). Your suggestion is probably a
> reasonable trick in the meantime.

You know what is an even more reasonable trick? Worktrees.

I only thought about that this morning, but I should have mentioned it
right away, as I use it quite frequently.

When I have tricky nested rebases to perform, I do use throw-away
worktrees where I check out unnamed branches, work on those, and then
integrate them back into the "outer rebase" via the `reset` command in the
todo list.
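The throw-away-worktree trick can be sketched like this (hypothetical paths and commit messages): check out a detached, unnamed branch in a second worktree, work there, note the resulting hash for a `reset`/`pick` line, then discard the worktree.

```shell
# Minimal sketch of a scratch worktree on a detached (unnamed) branch.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.email=t@example.com -c user.name=t \
    commit -q --allow-empty -m base
git worktree add --detach ../scratch HEAD   # unnamed branch, safe to discard
git -C ../scratch -c user.email=t@example.com -c user.name=t \
    commit -q --allow-empty -m tweak
git -C ../scratch rev-parse HEAD            # hash to reference in the todo list
git worktree remove --force ../scratch      # throw the worktree away
```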

Ciao,
Dscho