[PATCH v3 3/3] rebase: rebasing can also be done when HEAD is detached

2017-11-30 Thread Kaartic Sivaraam
When attempting to rebase with a detached HEAD that is already
up to date with its upstream (so there's nothing to do), the
following message is shown:

Current branch HEAD is up to date.

which is clearly wrong as HEAD is not a branch.

Handle the special case of HEAD correctly to give a more precise
error message.

Signed-off-by: Kaartic Sivaraam 
---

Changes in v2:

- avoided unnecessarily spawning a subshell in a conditional


 git-rebase.sh | 16 ++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/git-rebase.sh b/git-rebase.sh
index 3f8d99e99..1886167e0 100755
--- a/git-rebase.sh
+++ b/git-rebase.sh
@@ -602,11 +602,23 @@ then
 	test -z "$switch_to" ||
 	GIT_REFLOG_ACTION="$GIT_REFLOG_ACTION: checkout $switch_to" \
 		git checkout -q "$switch_to" --
-	say "$(eval_gettext "Current branch \$branch_or_commit is up to date.")"
+	if test "$branch_or_commit" = "HEAD" &&
+	   ! git symbolic-ref -q HEAD
+	then
+		say "$(eval_gettext "HEAD is up to date.")"
+	else
+		say "$(eval_gettext "Current branch \$branch_or_commit is up to date.")"
+	fi
 	finish_rebase
 	exit 0
 else
-	say "$(eval_gettext "Current branch \$branch_or_commit is up to date, rebase forced.")"
+	if test "$branch_or_commit" = "HEAD" &&
+	   ! git symbolic-ref -q HEAD
+	then
+		say "$(eval_gettext "HEAD is up to date, rebase forced.")"
+	else
+		say "$(eval_gettext "Current branch \$branch_or_commit is up to date, rebase forced.")"
+	fi
 fi
 fi
 
-- 
2.15.0.531.g2ccb3012c



[PATCH v5 4/4] builtin/branch: strip refs/heads/ using skip_prefix

2017-11-30 Thread Kaartic Sivaraam
Instead of hard-coding the offset strlen("refs/heads/") to skip
the prefix "refs/heads/" use the skip_prefix() function which
is more communicative and verifies that the string actually
starts with that prefix.

Signed-off-by: Kaartic Sivaraam 
---
Sorry, missed a ';' in v4.

The surprising thing I discovered in the TravisCI build for v4
was that, apart from the 'Documentation' build, the 'Static Analysis'
build passed, with the following output:

-- 
$ ci/run-static-analysis.sh
GIT_VERSION = 2.13.1.1972.g6ced3f745
 SPATCH contrib/coccinelle/array.cocci
 SPATCH result: contrib/coccinelle/array.cocci.patch
 SPATCH contrib/coccinelle/free.cocci
 SPATCH contrib/coccinelle/object_id.cocci
 SPATCH contrib/coccinelle/qsort.cocci
 SPATCH contrib/coccinelle/strbuf.cocci
 SPATCH result: contrib/coccinelle/strbuf.cocci.patch
 SPATCH contrib/coccinelle/swap.cocci
 SPATCH contrib/coccinelle/xstrdup_or_null.cocci

The command "ci/run-static-analysis.sh" exited with 0.
-- 

+Cc: SZEDER
I guess static analysis tools make an assumption that the source
code is syntactically valid for them to work correctly. So, I guess
we should at least make sure the code 'compiles' before running
the static analysis tool even though we don't build it completely.
I'm not sure if it's a bad thing to run the static analysis on code
that isn't syntactically valid, though.


 builtin/branch.c | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/builtin/branch.c b/builtin/branch.c
index ca9d8abd0..196d5fe9b 100644
--- a/builtin/branch.c
+++ b/builtin/branch.c
@@ -462,6 +462,8 @@ static void copy_or_rename_branch(const char *oldname, const char *newname, int
 {
 	struct strbuf oldref = STRBUF_INIT, newref = STRBUF_INIT, logmsg = STRBUF_INIT;
 	struct strbuf oldsection = STRBUF_INIT, newsection = STRBUF_INIT;
+	const char *interpreted_oldname = NULL;
+	const char *interpreted_newname = NULL;
 	int recovery = 0;
 	int clobber_head_ok;
 
@@ -493,6 +495,11 @@ static void copy_or_rename_branch(const char *oldname, const char *newname, int
 
 	reject_rebase_or_bisect_branch(oldref.buf);
 
+	if (!skip_prefix(oldref.buf, "refs/heads/", &interpreted_oldname) ||
+	    !skip_prefix(newref.buf, "refs/heads/", &interpreted_newname)) {
+		die("BUG: expected prefix missing for refs");
+	}
+
 	if (copy)
 		strbuf_addf(&logmsg, "Branch: copied %s to %s",
 			    oldref.buf, newref.buf);
@@ -508,10 +515,10 @@ static void copy_or_rename_branch(const char *oldname, const char *newname, int
 	if (recovery) {
 		if (copy)
 			warning(_("Created a copy of a misnamed branch '%s'"),
-				oldref.buf + 11);
+				interpreted_oldname);
 		else
 			warning(_("Renamed a misnamed branch '%s' away"),
-				oldref.buf + 11);
 	}
 
 	if (!copy &&
@@ -520,9 +527,9 @@ static void copy_or_rename_branch(const char *oldname, const char *newname, int
 
 	strbuf_release(&logmsg);
 
-	strbuf_addf(&oldsection, "branch.%s", oldref.buf + 11);
+	strbuf_addf(&oldsection, "branch.%s", interpreted_oldname);
 	strbuf_release(&oldref);
-	strbuf_addf(&newsection, "branch.%s", newref.buf + 11);
+	strbuf_addf(&newsection, "branch.%s", interpreted_newname);
 	strbuf_release(&newref);
 	if (!copy && git_config_rename_section(oldsection.buf, newsection.buf) < 0)
 		die(_("Branch is renamed, but update of config-file failed"));
-- 
2.15.0.531.g2ccb3012c



Re: Bare repository fetch/push race condition

2017-11-30 Thread Dmitry Neverov
Sorry for the misleading subject. It should be "Race condition between pushing to
and pushing from a bare repository".


Re: [PATCH 2/2] t/lib-git-svn.sh: improve svnserve tests with parallel make test

2017-11-30 Thread Todd Zullinger

Jonathan Nieder wrote:
Yep, with this description it is 
Reviewed-by: Jonathan Nieder 


Thanks for putting up with my nits. :)


Thank you for taking the time and looking at the details. :)

I'll send a v2 with the changes in the morning, in case there are any 
other comments (but mostly because it's late and time for a swim).


--
Todd
~~
It is impossible to enjoy idling thoroughly unless one has plenty of
work to do.
   -- Jerome K. Jerome



Re: [PATCH v4 2/2] launch_editor(): indicate that Git waits for user input

2017-11-30 Thread Kaartic Sivaraam

On Friday 01 December 2017 02:21 AM, Jeff King wrote:


These are obviously the result of devil's-advocate poking at the feature.
I doubt any editor would end its output with a CR. But the first case is
probably going to be common, especially for actual graphical editors. We
know that emacsclient prints its own line, and I wouldn't be surprised
if other graphical editors spew some telemetry to stderr (certainly
anything built against GTK tends to do so).



Yeah, at times 'gedit' does do what you say. And if the user 
(surprisingly!) uses an IDE such as "eclipse" or a hackable text editor 
like "atom" (of course with the '-f' option) for entering his commit 
message, it is likely to happen all the time for him.




I don't think there's a good way around it. Portably saying "delete
_this_ line that I wrote earlier" would probably require libcurses or
similar. So maybe we just live with it. The deletion magic makes the
common cases better (a terminal editor that doesn't print random
lines, or a graphical editor that is quiet), and everyone else can flip
the advice switch if they need to. I dunno.



---
Kaartic


Re: [PATCH 1/2] t/lib-git-svn: whitespace cleanup

2017-11-30 Thread Jonathan Nieder
Todd Zullinger wrote:
> Jonathan Nieder wrote:

>> nit: it would have been a tiny bit easier to review if the commit
>> message mentioned that this is only changing the indentation from an
>> inconsistent space/tab mixture to tabs and isn't making any other
>> changes.
>
> If only you saw how many times I typed a subject and changed it
> before settling on the terse version...

Heh.  No worries, it was a really small nit.

> How about:
>
>t/lib-git-svn: cleanup inconsistent tab/space usage
>
> ?

Sure, looks good.

Thanks again,
Jonathan


Re: [PATCH 2/2] t/lib-git-svn.sh: improve svnserve tests with parallel make test

2017-11-30 Thread Todd Zullinger

Hi Jonathan,

Jonathan Nieder wrote:

Todd Zullinger wrote:


Previously, setting SVNSERVE_PORT enabled several tests which require a
local svnserve daemon to be run (in t9113 & t9126).  The tests share the
setup of the local svnserve via `start_svnserve()`.  The function uses
the svnserve option `--listen-once` which causes svnserve to accept one
connection on the port, serve it, and exit.  When running the tests in
parallel this fails if one test tries to start svnserve while the other
is still running.


I had trouble reading this because I didn't know what previous time it
was referring to.  Is it about how the option currently behaves?

(Git's commit messages tend to use the present tense to describe the
behavior before the patch, like a bug report, and the imperative to
describe the change the patch proposes to make, like an impolite bug
report. :))


This is what I get for skipping grammar classes to go hiking in my
youth.  But I'm sure I'd do it all again, if given the chance. ;)


Use the test number as the svnserve port (similar to httpd tests) to
avoid port conflicts.  Set GIT_TEST_SVNSERVE to any value other than
'false' or 'auto' to enable these tests.


This uses imperative in two ways and also ended up confusing me.  The
second one is a direction to me, not Git, right?  How about:

Use the test number instead of $SVNSERVE_PORT as the svnserve
port (similar to httpd tests) to avoid port conflicts.
Developers can set GIT_TEST_SVNSERVE to any value other than
'false' or 'auto' to enable these tests.


Much better, thank you.  How about this for the full commit message:

   t/lib-git-svn.sh: improve svnserve tests with parallel make test

   Setting SVNSERVE_PORT enables several tests which require a local
   svnserve daemon to be run (in t9113 & t9126).  The tests share setup of
   the local svnserve via `start_svnserve()`.  The function uses svnserve's
   `--listen-once` option, which causes svnserve to accept one connection
   on the port, serve it, and exit.  When running the tests in parallel
   this fails if one test tries to start svnserve while the other is still
   running.

   Use the test number as the svnserve port (similar to httpd tests) to
   avoid port conflicts.  Developers can set GIT_TEST_SVNSERVE to any value
   other than 'false' or 'auto' to enable these tests.

?

Thanks,

--
Todd
~~
Curiosity killed the cat, but for awhile I was a suspect.
   -- Steven Wright



Re: [PATCH 1/2] t/lib-git-svn: whitespace cleanup

2017-11-30 Thread Todd Zullinger

Hi Jonathan,

Jonathan Nieder wrote:

nit: it would have been a tiny bit easier to review if the commit
message mentioned that this is only changing the indentation from an
inconsistent space/tab mixture to tabs and isn't making any other
changes.


If only you saw how many times I typed a subject and changed it
before settling on the terse version...

How about:

   t/lib-git-svn: cleanup inconsistent tab/space usage

?

Thanks,

--
Todd
~~
How am I supposed to hallucinate with all these swirling colors
distracting me?
   -- Lisa Simpson



Re: [PATCH v4 0/2] launch_editor(): indicate that Git waits for user input

2017-11-30 Thread Kaartic Sivaraam
On Thu, 2017-11-30 at 16:13 +0100, Andreas Schwab wrote:
> On Nov 30 2017, Thomas Adam  wrote:
> 
> > On Thu, Nov 30, 2017 at 02:55:35PM +0100, Lars Schneider wrote:
> > > 
> > > > On 29 Nov 2017, at 19:35, Thomas Adam  wrote:
> > > > 
> > > > On Wed, Nov 29, 2017 at 03:37:50PM +0100, lars.schnei...@autodesk.com 
> > > > wrote:
> > > > > + if (print_waiting_for_editor) {
> > > > > + fprintf(stderr, _("hint: Waiting for your 
> > > > > editor input..."));
> > > > >   fflush(stderr);
> > > > 
> > > > Just FYI, stderr is typically unbuffered on most systems I've used, and
> > > > although the call to fflush() is harmless, I suspect it's not having any
> > > > effect.  That said, there's plenty of other places in Git which seems 
> > > > to think
> > > > fflush()ing stderr actually does something.
> > > 
> > > I agree with the "unbuffered" statement. I am surprised that you expect 
> > > fflush()
> > > to do nothing in that situation... but I am no expert in that area. Can 
> > > you
> > > point me to some documentation?
> > 
> > Because stderr is unbuffered, it will get printed immediately.
> 
> POSIX only requires stderr to be "not fully buffered".  If it is line
> buffered, the message may not appear immediately.
> 

I guess Junio's reply to the same "unbuffered" question, which I asked
about an earlier version of this patch (now a series), might be relevant here:

> > Being curious again, is flushing 'stderr' required ? AFAIK, 'stderr'
> > is unbuffered by default and I didn't notice any calls that changed
> > the buffering mode of it along this code path.
> 
> "By default" is the key phrase.  The code is merely being defensive
> to changes in other area of the code.

cf. 


-- 
Kaartic


Re: [PATCH 0/2] t/lib-git-svn.sh: improve svnserve tests with parallel make test

2017-11-30 Thread Todd Zullinger

Hi Jonathan,

Jonathan Nieder wrote:

Todd Zullinger wrote:


These tests are not run by default nor are they enabled in travis-ci.  I
don't know how much testing they get in user or other packager builds.

I've been slowly increasing the test suite usage in fedora builds.  I
ran into this while testing locally with parallel make test.  The
official fedora builds don't run in parallel (yet), as even before I ran
into this issue, builds on the fedora builders randomly failed too
often.  I'm hoping to eventually enable parallel tests by default
though, since it's so much faster.


This background could go in the commit message for patch 2, but it
also speaks for itself as an obviously good change so I could go
either way.


Heh.  If there's something in there in particular that seems useful, I
can certainly add it.  I'm not sure what parts of this text would be
beneficial to someone down the line though.

I usually err on the 'too much information' side of commit messages.
I'm happy that it's much harder to do that here.  I'd rather have to
skim a long message than wonder about the motivation for a change.


I'm not sure if there's any objection to changing the variable needed to
enable the tests from SVNSERVE_PORT to GIT_TEST_SVNSERVE.  The way
SVNSERVE_PORT is set in this patch should allow the port to be set
explicitly, in case someone requires that -- and they understand that it
can fail if running parallel tests, of course.  Whether that's a
feature or a bug, I'm not sure. :)


micronit: can this just say something like

Patch 2 is the important one --- see that one for rationale.

Patch 1 is an optional preparatory style cleanup.

next time?  That way, you get an automatic guarantee that all the
important information is available in "git log" output to people who
need it later.


Yeah, I'll try to remember that.  I started this without the
whitespace cleanup and as I was writing in the single patch
description that I didn't think the whitespace cleanup was warranted,
I talked myself into doing it as the prep patch. :)

Thanks,

--
Todd
~~
Tradition: Just because you've always done it that way doesn't mean
it's not incredibly stupid.
   -- Demotivators (www.despair.com)



Re: [PATCH 0/2] t/lib-git-svn.sh: improve svnserve tests with parallel make test

2017-11-30 Thread Todd Zullinger

Hi Eric,

Eric Wong wrote:

I'm fine with this for now.  Since svnserve (and git-daemon)
both support inetd behavior, I think we can eventually have a
test helper which binds random ports and pretends to be an
inetd, letting the test run without any special setup.

It would let multiple test instances run in parallel, even.


Indeed, that would be a nice general improvement. :)

Thanks,

--
Todd
~~
Suppose I were a member of Congress, and suppose I were an idiot. But,
I repeat myself.
   -- Mark Twain



Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Vitaly Arbuzov
Makes sense, I think this perfectly aligns with our needs too.
Let me dive deeper into those patches and previous discussions that
you've kindly shared above, so I can better understand the details.

I'm very excited about what you guys already did, it's a big deal for
the community!


On Thu, Nov 30, 2017 at 6:51 PM, Jonathan Nieder  wrote:
> Hi Vitaly,
>
> Vitaly Arbuzov wrote:
>
>> I think it would be great if we agree at a high level on the desired
>> user experience, so let me put a few possible use cases here.
>
> I think one thing this thread is pointing to is a lack of overview
> documentation about how the 'partial clone' series currently works.
> The basic components are:
>
>  1. extending git protocol to (1) allow fetching only a subset of the
> objects reachable from the commits being fetched and (2) later,
> going back and fetching the objects that were left out.
>
> We've also discussed some other protocol changes, e.g. to allow
> obtaining the sizes of un-fetched objects without fetching the
> objects themselves.
>
>  2. extending git's on-disk format to allow having some objects not be
> present but only be "promised" to be obtainable from a remote
> repository.  When running a command that requires those objects,
> the user can choose to have it either (a) error out ("airplane
> mode") or (b) fetch the required objects.
>
> It is still possible to work fully locally in such a repo, make
> changes, get useful results out of "git fsck", etc.  It is kind of
> similar to the existing "shallow clone" feature, except that there
> is a more straightforward way to obtain objects that are outside
> the "shallow" clone when needed on demand.
>
>  3. improving everyday commands to require fewer objects.  For
> example, if I run "git log -p", then I want to see the history of
> most files but I don't necessarily want to download large binary
> files just to print 'Binary files differ' for them.
>
> And by the same token, we might want to have a mode for commands
> like "git log -p" to default to restricting to a particular
> directory, instead of downloading files outside that directory.
>
> There are some fundamental changes to make in this category ---
> e.g. modifying the index format to not require entries for files
> outside the sparse checkout, to avoid having to download the
> trees for them.
>
> The overall goal is to make git scale better.
>
> The existing patches do (1) and (2), though it is possible to do more
> in those categories. :)  We have plans to work on (3) as well.
>
> These are overall changes that happen at a fairly low level in git.
> They mostly don't require changes command-by-command.
>
> Thanks,
> Jonathan


Re: [PATCH v4 1/2] refactor "dumb" terminal determination

2017-11-30 Thread Kaartic Sivaraam

On Wednesday 29 November 2017 08:07 PM, lars.schnei...@autodesk.com wrote:

+int is_terminal_dumb(void)
+{
+   const char *terminal = getenv("TERM");
+   return !terminal || !strcmp(terminal, "dumb");


So, IIUC, the terminal is considered to be 'dumb' when the TERM 
environment variable is NOT set or when it is set to 'dumb'.



+}
+
  const char *git_editor(void)
  {
const char *editor = getenv("GIT_EDITOR");
-   const char *terminal = getenv("TERM");
-   int terminal_is_dumb = !terminal || !strcmp(terminal, "dumb");
+   int terminal_is_dumb = is_terminal_dumb();
  
  	if (!editor && editor_program)

editor = editor_program;
diff --git a/sideband.c b/sideband.c
index 1e4d684d6c..6d7f943e43 100644
--- a/sideband.c
+++ b/sideband.c
@@ -20,13 +20,12 @@
  
  int recv_sideband(const char *me, int in_stream, int out)

  {
-   const char *term, *suffix;
+   const char *suffix;
char buf[LARGE_PACKET_MAX + 1];
struct strbuf outbuf = STRBUF_INIT;
int retval = 0;
  
-	term = getenv("TERM");

-   if (isatty(2) && term && strcmp(term, "dumb"))
+   if (isatty(2) && !is_terminal_dumb())
suffix = ANSI_SUFFIX;
else
suffix = DUMB_SUFFIX;



This one looks good to me if my observation above is correct.

---
Kaartic


Re: [PATCH 0/2] t/lib-git-svn.sh: improve svnserve tests with parallel make test

2017-11-30 Thread Jonathan Nieder
Todd Zullinger wrote:

> These tests are not run by default nor are they enabled in travis-ci.  I
> don't know how much testing they get in user or other packager builds.
>
> I've been slowly increasing the test suite usage in fedora builds.  I
> ran into this while testing locally with parallel make test.  The
> official fedora builds don't run in parallel (yet), as even before I ran
> into this issue, builds on the fedora builders randomly failed too
> often.  I'm hoping to eventually enable parallel tests by default
> though, since it's so much faster.

This background could go in the commit message for patch 2, but it
also speaks for itself as an obviously good change so I could go
either way.

> I'm not sure if there's any objection to changing the variable needed to
> enable the tests from SVNSERVE_PORT to GIT_TEST_SVNSERVE.  The way
> SVNSERVE_PORT is set in this patch should allow the port to be set
> explicitly, in case someone requires that -- and they understand that it
> can fail if running parallel tests, of course.  Whether that's a
> feature or a bug, I'm not sure. :)

micronit: can this just say something like

Patch 2 is the important one --- see that one for rationale.

Patch 1 is an optional preparatory style cleanup.

next time?  That way, you get an automatic guarantee that all the
important information is available in "git log" output to people who
need it later.

Thanks,
Jonathan


Re: [PATCH 1/2] t/lib-git-svn: whitespace cleanup

2017-11-30 Thread Jonathan Nieder
Todd Zullinger wrote:

> Subject: t/lib-git-svn: whitespace cleanup
>
> Signed-off-by: Todd Zullinger 
> ---
>  t/lib-git-svn.sh | 22 +++---
>  1 file changed, 11 insertions(+), 11 deletions(-)

Reviewed-by: Jonathan Nieder 
Thanks.

nit: it would have been a tiny bit easier to review if the commit
message mentioned that this is only changing the indentation from an
inconsistent space/tab mixture to tabs and isn't making any other
changes.


Re: [PATCH 2/2] t/lib-git-svn.sh: improve svnserve tests with parallel make test

2017-11-30 Thread Jonathan Nieder
Hi,

Todd Zullinger wrote:

> Previously, setting SVNSERVE_PORT enabled several tests which require a
> local svnserve daemon to be run (in t9113 & t9126).  The tests share the
> setup of the local svnserve via `start_svnserve()`.  The function uses
> the svnserve option `--listen-once` which causes svnserve to accept one
> connection on the port, serve it, and exit.  When running the tests in
> parallel this fails if one test tries to start svnserve while the other
> is still running.

I had trouble reading this because I didn't know what previous time it
was referring to.  Is it about how the option currently behaves?

(Git's commit messages tend to use the present tense to describe the
behavior before the patch, like a bug report, and the imperative to
describe the change the patch proposes to make, like an impolite bug
report. :))

> Use the test number as the svnserve port (similar to httpd tests) to
> avoid port conflicts.  Set GIT_TEST_SVNSERVE to any value other than
> 'false' or 'auto' to enable these tests.

This uses imperative in two ways and also ended up confusing me.  The
second one is a direction to me, not Git, right?  How about:

Use the test number instead of $SVNSERVE_PORT as the svnserve
port (similar to httpd tests) to avoid port conflicts.
Developers can set GIT_TEST_SVNSERVE to any value other than
'false' or 'auto' to enable these tests.
>
> Signed-off-by: Todd Zullinger 
> ---
>  t/lib-git-svn.sh | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)

The patch looks good.  Thanks.

> diff --git a/t/lib-git-svn.sh b/t/lib-git-svn.sh
> index 84366b2624..4c1f81f167 100644
> --- a/t/lib-git-svn.sh
> +++ b/t/lib-git-svn.sh
> @@ -110,14 +110,16 @@ EOF
>  }
>  
>  require_svnserve () {
> - if test -z "$SVNSERVE_PORT"
> + test_tristate GIT_TEST_SVNSERVE
> + if ! test "$GIT_TEST_SVNSERVE" = true
>   then
> - skip_all='skipping svnserve test. (set $SVNSERVE_PORT to 
> enable)'
> + skip_all='skipping svnserve test. (set $GIT_TEST_SVNSERVE to 
> enable)'
>   test_done
>   fi
>  }
>  
>  start_svnserve () {
> + SVNSERVE_PORT=${SVNSERVE_PORT-${this_test#t}}
>   svnserve --listen-port $SVNSERVE_PORT \
>--root "$rawsvnrepo" \
>--listen-once \
> -- 
> 2.15.1
> 


Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Jonathan Nieder
Hi Vitaly,

Vitaly Arbuzov wrote:

> I think it would be great if we agree at a high level on the desired
> user experience, so let me put a few possible use cases here.

I think one thing this thread is pointing to is a lack of overview
documentation about how the 'partial clone' series currently works.
The basic components are:

 1. extending git protocol to (1) allow fetching only a subset of the
objects reachable from the commits being fetched and (2) later,
going back and fetching the objects that were left out.

We've also discussed some other protocol changes, e.g. to allow
obtaining the sizes of un-fetched objects without fetching the
objects themselves.

 2. extending git's on-disk format to allow having some objects not be
present but only be "promised" to be obtainable from a remote
repository.  When running a command that requires those objects,
the user can choose to have it either (a) error out ("airplane
mode") or (b) fetch the required objects.

It is still possible to work fully locally in such a repo, make
changes, get useful results out of "git fsck", etc.  It is kind of
similar to the existing "shallow clone" feature, except that there
is a more straightforward way to obtain objects that are outside
the "shallow" clone when needed on demand.

 3. improving everyday commands to require fewer objects.  For
example, if I run "git log -p", then I want to see the history of
most files but I don't necessarily want to download large binary
files just to print 'Binary files differ' for them.

And by the same token, we might want to have a mode for commands
like "git log -p" to default to restricting to a particular
directory, instead of downloading files outside that directory.

There are some fundamental changes to make in this category ---
e.g. modifying the index format to not require entries for files
outside the sparse checkout, to avoid having to download the
trees for them.

The overall goal is to make git scale better.

The existing patches do (1) and (2), though it is possible to do more
in those categories. :)  We have plans to work on (3) as well.

These are overall changes that happen at a fairly low level in git.
They mostly don't require changes command-by-command.

Thanks,
Jonathan


[PATCH 2/2] t/lib-git-svn.sh: improve svnserve tests with parallel make test

2017-11-30 Thread Todd Zullinger
Previously, setting SVNSERVE_PORT enabled several tests which require a
local svnserve daemon to be run (in t9113 & t9126).  The tests share the
setup of the local svnserve via `start_svnserve()`.  The function uses
the svnserve option `--listen-once` which causes svnserve to accept one
connection on the port, serve it, and exit.  When running the tests in
parallel this fails if one test tries to start svnserve while the other
is still running.

Use the test number as the svnserve port (similar to httpd tests) to
avoid port conflicts.  Set GIT_TEST_SVNSERVE to any value other than
'false' or 'auto' to enable these tests.

Signed-off-by: Todd Zullinger 
---
 t/lib-git-svn.sh | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/t/lib-git-svn.sh b/t/lib-git-svn.sh
index 84366b2624..4c1f81f167 100644
--- a/t/lib-git-svn.sh
+++ b/t/lib-git-svn.sh
@@ -110,14 +110,16 @@ EOF
 }
 
 require_svnserve () {
-   if test -z "$SVNSERVE_PORT"
+   test_tristate GIT_TEST_SVNSERVE
+   if ! test "$GIT_TEST_SVNSERVE" = true
then
-   skip_all='skipping svnserve test. (set $SVNSERVE_PORT to 
enable)'
+   skip_all='skipping svnserve test. (set $GIT_TEST_SVNSERVE to 
enable)'
test_done
fi
 }
 
 start_svnserve () {
+   SVNSERVE_PORT=${SVNSERVE_PORT-${this_test#t}}
svnserve --listen-port $SVNSERVE_PORT \
 --root "$rawsvnrepo" \
 --listen-once \
-- 
2.15.1



[PATCH 0/2] t/lib-git-svn.sh: improve svnserve tests with parallel make test

2017-11-30 Thread Todd Zullinger
These tests are not run by default nor are they enabled in travis-ci.  I
don't know how much testing they get in user or other packager builds.

I've been slowly increasing the test suite usage in fedora builds.  I
ran into this while testing locally with parallel make test.  The
official fedora builds don't run in parallel (yet), as even before I ran
into this issue, builds on the fedora builders randomly failed too
often.  I'm hoping to eventually enable parallel tests by default
though, since it's so much faster.

I'm not sure if there's any objection to changing the variable needed to
enable the tests from SVNSERVE_PORT to GIT_TEST_SVNSERVE.  The way
SVNSERVE_PORT is set in this patch should allow the port to be set
explicitly, in case someone requires that -- and they understand that it
can fail if running parallel tests, of course.  Whether that's a
feature or a bug, I'm not sure. :)

The indentation of lib-git-svn.sh didn't use tabs consistently, in only
a few places, so I cleaned that up first.  I can drop that change if
it's unwanted.

Todd Zullinger (2):
  t/lib-git-svn: whitespace cleanup
  t/lib-git-svn.sh: improve svnserve tests with parallel make test

 t/lib-git-svn.sh | 24 +---
 1 file changed, 13 insertions(+), 11 deletions(-)

-- 
2.15.1



[PATCH 1/2] t/lib-git-svn: whitespace cleanup

2017-11-30 Thread Todd Zullinger
Signed-off-by: Todd Zullinger 
---
 t/lib-git-svn.sh | 22 +++---
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/t/lib-git-svn.sh b/t/lib-git-svn.sh
index 688313ed5c..84366b2624 100644
--- a/t/lib-git-svn.sh
+++ b/t/lib-git-svn.sh
@@ -17,8 +17,8 @@ SVN_TREE=$GIT_SVN_DIR/svn-tree
 svn >/dev/null 2>&1
 if test $? -ne 1
 then
-skip_all='skipping git svn tests, svn not found'
-test_done
+   skip_all='skipping git svn tests, svn not found'
+   test_done
 fi
 
 svnrepo=$PWD/svnrepo
@@ -110,18 +110,18 @@ EOF
 }
 
 require_svnserve () {
-if test -z "$SVNSERVE_PORT"
-then
-   skip_all='skipping svnserve test. (set $SVNSERVE_PORT to enable)'
-test_done
-fi
+   if test -z "$SVNSERVE_PORT"
+   then
+   skip_all='skipping svnserve test. (set $SVNSERVE_PORT to 
enable)'
+   test_done
+   fi
 }
 
 start_svnserve () {
-svnserve --listen-port $SVNSERVE_PORT \
- --root "$rawsvnrepo" \
- --listen-once \
- --listen-host 127.0.0.1 &
+   svnserve --listen-port $SVNSERVE_PORT \
+--root "$rawsvnrepo" \
+--listen-once \
+--listen-host 127.0.0.1 &
 }
 
 prepare_a_utf8_locale () {
-- 
2.15.1



Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Vitaly Arbuzov
I think it would be great if we agree at a high level on the desired
user experience, so let me put a few possible use cases here.

1. Init and fetch into a new repo with a sparse list.
Preconditions: origin blah exists and has a lot of folders inside of
src including "bar".
Actions:
git init foo && cd foo
git config core.sparseAll true # New flag to activate all sparse
operations by default so you don't need to pass options to each
command.
echo "src/bar" > .git/info/sparse-checkout
git remote add origin blah
git pull origin master
Expected results: foo contains src/bar folder and nothing else,
objects that are unrelated to this tree are not fetched.
Notes: This should work the same when fetch/merge/checkout operations are
used in the right order.

2. Add a file and push changes.
Preconditions: all steps above followed.
touch src/bar/baz.txt && git add -A && git commit -m "added a file"
git push origin master
Expected results: changes are pushed to remote.

3. Clone a repo with a sparse list as a filter.
Preconditions: same as for #1
Actions:
echo "src/bar" > /tmp/blah-sparse-checkout
git clone --sparse /tmp/blah-sparse-checkout blah # Clone should be
the only command that requires a specific option to be passed.
Expected results: same as for #1 plus /tmp/blah-sparse-checkout is
copied into .git/info/sparse-checkout

4. Showing log for sparsely cloned repo.
Preconditions: #3 is followed
Actions:
git log
Expected results: recent changes that affect src/bar tree.

5. Showing diff.
Preconditions: #3 is followed
Actions:
git diff HEAD^ HEAD
Expected results: changes from the most recent commit affecting
src/bar folder are shown.
Notes: this can be a tricky operation, as filtering must be done to
remove results from unrelated subtrees.
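Use cases 4 and 5 already behave this way on a fully-fetched clone, because log and diff accept pathspecs; the open problem is only making them work when objects are missing. A runnable sketch with invented paths:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name "A U Thor"

mkdir -p src/bar src/other
echo a >src/bar/a.txt   && git add . && git commit -qm "touch bar"
echo b >src/other/b.txt && git add . && git commit -qm "touch other"

# Use case 4: history scoped to the sparse subtree.
git log --format=%s -- src/bar                # → touch bar

# Use case 5: diff scoped to the sparse subtree.
git diff --name-only HEAD^ HEAD -- src/bar    # empty: HEAD didn't touch src/bar
```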

*Note that I intentionally didn't mention use cases that are related
to filtering by blob size as I think we should logically consider them
as a separate, although related, feature.

What do you think about these examples above? Is that something that
more-or-less fits into the current development? Are there other important
flows that I've missed?

-Vitaly

On Thu, Nov 30, 2017 at 5:27 PM, Vitaly Arbuzov  wrote:
> Jonathan, thanks for references, that is super helpful, I will follow
> your suggestions.
>
> Philip, I agree that keeping original DVCS off-line capability is an
> important point. Ideally this feature should work even with remotes
> that are located on the local disk.
> Which part of Jeff's work do you think wouldn't work offline after
> repo initialization is done and sparse fetch is performed? All the
> stuff that I've seen seems to be quite usable without GVFS.
> I'm not sure if we need to store markers/tombstones on the client,
> what problem does it solve?
>
> On Thu, Nov 30, 2017 at 3:43 PM, Philip Oakley  wrote:
>> From: "Vitaly Arbuzov" 
>>>
>>> Found some details here: https://github.com/jeffhostetler/git/pull/3
>>>
>>> Looking at commits I see that you've done a lot of work already,
>>> including packing, filtering, fetching, cloning etc.
>>> What are some areas that aren't complete yet? Do you need any help
>>> with implementation?
>>>
>>
>> comments below..
>>
>>>
>>> On Thu, Nov 30, 2017 at 9:01 AM, Vitaly Arbuzov  wrote:

 Hey Jeff,

 It's great, I didn't expect that anyone is actively working on this.
 I'll check out your branch, meanwhile do you have any design docs that
 describe these changes or can you define high level goals that you
 want to achieve?

 On Thu, Nov 30, 2017 at 6:24 AM, Jeff Hostetler 
 wrote:
>
>
>
> On 11/29/2017 10:16 PM, Vitaly Arbuzov wrote:
>>
>>
>> Hi guys,
>>
>> I'm looking for ways to improve fetch/pull/clone time for large git
>> (mono)repositories with unrelated source trees (that span across
>> multiple services).
>> I've found sparse checkout approach appealing and helpful for most of
>> client-side operations (e.g. status, reset, commit, etc.)
>> The problem is that there is no feature like sparse fetch/pull in git,
>> this means that ALL objects in unrelated trees are always fetched.
>> It may take a lot of time for large repositories and results in some
>> practical scalability limits for git.
>> This forced some large companies like Facebook and Google to move to
>> Mercurial as they were unable to improve client-side experience with
>> git while Microsoft has developed GVFS, which seems to be a step back
>> to CVCS world.
>>
>> I want to get a feedback (from more experienced git users than I am)
>> on what it would take to implement sparse fetching/pulling.
>> (Downloading only objects related to the sparse-checkout list)
>> Are there any issues with missing hashes?
>> Are there any fundamental problems why it can't be done?
Can we get away with only client-side changes or would it require
special features on the server side?



Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Vitaly Arbuzov
Jonathan, thanks for references, that is super helpful, I will follow
your suggestions.

Philip, I agree that keeping original DVCS off-line capability is an
important point. Ideally this feature should work even with remotes
that are located on the local disk.
Which part of Jeff's work do you think wouldn't work offline after
repo initialization is done and sparse fetch is performed? All the
stuff that I've seen seems to be quite usable without GVFS.
I'm not sure if we need to store markers/tombstones on the client,
what problem does it solve?

On Thu, Nov 30, 2017 at 3:43 PM, Philip Oakley  wrote:
> From: "Vitaly Arbuzov" 
>>
>> Found some details here: https://github.com/jeffhostetler/git/pull/3
>>
>> Looking at commits I see that you've done a lot of work already,
>> including packing, filtering, fetching, cloning etc.
>> What are some areas that aren't complete yet? Do you need any help
>> with implementation?
>>
>
> comments below..
>
>>
>> On Thu, Nov 30, 2017 at 9:01 AM, Vitaly Arbuzov  wrote:
>>>
>>> Hey Jeff,
>>>
>>> It's great, I didn't expect that anyone is actively working on this.
>>> I'll check out your branch, meanwhile do you have any design docs that
>>> describe these changes or can you define high level goals that you
>>> want to achieve?
>>>
>>> On Thu, Nov 30, 2017 at 6:24 AM, Jeff Hostetler 
>>> wrote:



 On 11/29/2017 10:16 PM, Vitaly Arbuzov wrote:
>
>
> Hi guys,
>
> I'm looking for ways to improve fetch/pull/clone time for large git
> (mono)repositories with unrelated source trees (that span across
> multiple services).
> I've found sparse checkout approach appealing and helpful for most of
> client-side operations (e.g. status, reset, commit, etc.)
> The problem is that there is no feature like sparse fetch/pull in git,
> this means that ALL objects in unrelated trees are always fetched.
> It may take a lot of time for large repositories and results in some
> practical scalability limits for git.
> This forced some large companies like Facebook and Google to move to
> Mercurial as they were unable to improve client-side experience with
> git while Microsoft has developed GVFS, which seems to be a step back
> to CVCS world.
>
> I want to get a feedback (from more experienced git users than I am)
> on what it would take to implement sparse fetching/pulling.
> (Downloading only objects related to the sparse-checkout list)
> Are there any issues with missing hashes?
> Are there any fundamental problems why it can't be done?
> Can we get away with only client-side changes or would it require
> special features on the server side?
>
>
> I have, for separate reasons, been _thinking_ about the issue ($dayjob is in
> defence, so a similar partition would be useful).
>
> The changes would almost certainly need to be server side (as well as client
> side), as it is the server that decides what is sent over the wire in the
> pack files, which would need to be a 'narrow' pack file.
>
> If we had such a feature then all we would need on top is a separate
> tool that builds the right "sparse" scope for the workspace based on
> paths that developer wants to work on.
>
> In the world where more and more companies are moving towards large
> monorepos this improvement would provide a good way of scaling git to
> meet this demand.
>
>
> The 'companies' problem is that it tends to force a client-server, always-on
> on-line mentality. I'm also wanting the original DVCS off-line capability to
> still be available, with _user_ control, in a generic sense, of what they
> have locally available (including files/directories they have not yet looked
> at, but expect to have). IIUC Jeff's work is that on-line view, without the
> off-line capability.
>
> I'd commented early in the series at [1,2,3].
>
>
> At its core, my idea was to use the object store to hold markers for the
> 'not yet fetched' objects (mainly trees and blobs). These would be in a
> known fixed format, and have the same effect (conceptually) as the
> sub-module markers - they _confirm_ the oid, yet say 'not here, try
> elsewhere'.
>
> The comparison with submodules means there is the same chance of
> de-synchronisation with triangular and upstream servers, unless managed.
>
> The server side, as noted, will need to be included as it is the one that
> decides the pack file.
>
> Options for server management are:
>
> - "I accept narrow packs?" No; yes
>
> - "I serve narrow packs?" No; yes.
>
> - "Repo completeness checks on reciept": (must be complete) || (allow narrow
> to nothing).
>
> For server farms (e.g. Github..) the settings could be global, or by repo.
> (note that the completeness requirement and narrow receipt option are not
> incompatible - the recipient server can reject the pack from a narrow
> subordinate as incomplete - see below)

Re: [PATCH] git-prompt: fix reading files with windows line endings

2017-11-30 Thread SZEDER Gábor
On Thu, Nov 30, 2017 at 2:51 AM, Johannes Schindelin
 wrote:
> On Thu, 30 Nov 2017, SZEDER Gábor wrote:
>
>> > > diff --git a/contrib/completion/git-prompt.sh 
>> > > b/contrib/completion/git-prompt.sh
>> > > index c6cbef38c2..71a64e7959 100644
>> > > --- a/contrib/completion/git-prompt.sh
>> > > +++ b/contrib/completion/git-prompt.sh
>> > > @@ -282,7 +282,7 @@ __git_eread ()
>> > >  {
>> > >   local f="$1"
>> > >   shift
>> > > - test -r "$f" && read "$@" <"$f"
>> > > + test -r "$f" && read "$@" <"$f" && export $@="${!@%$'\r'}"
>>
>> I don't think that export is necessary here.
>>
>> > As far as I understand, $'\r' is a Bash-only construct, and this file
>> > (git-prompt.sh) is targeting other Unix shells, too.
>>
>> The only other shell the prompt (and completion) script is targeting
>> is ZSH, and ZSH understands this construct.  We already use this
>> construct to set IFS in several places in both scripts for a long
>> time, so it should be fine here, too.
>
> That's good to know! I should have `git grep`ped...
>
> Sorry for the noise,

No, no, your concern is justified, you just happened to pick the wrong
construct :)

It's the ${!var} indirect expansion construct that ZSH doesn't know, it
uses a different syntax for that.
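For reference, the underlying CR-stripping idea can be written without the `${!var}` indirection at all, using plain POSIX suffix removal; a sketch with a made-up file name:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A file with Windows line endings, as in the bug being fixed.
printf 'ref: refs/heads/master\r\n' >HEAD.crlf

cr=$(printf '\r')         # a literal carriage return (command substitution
                          # only strips trailing newlines, not the CR)
read line <HEAD.crlf      # "read" strips the LF but keeps the CR
line=${line%$cr}          # POSIX suffix removal: drop one trailing CR, if any

printf '%s.\n' "$line"    # → ref: refs/heads/master.
```

This avoids the bash/zsh portability question entirely, at the cost of not fitting into the existing one-liner that assigns through the variable names in `"$@"`.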


Gábor


Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Philip Oakley

From: "Vitaly Arbuzov" 

Found some details here: https://github.com/jeffhostetler/git/pull/3

Looking at commits I see that you've done a lot of work already,
including packing, filtering, fetching, cloning etc.
What are some areas that aren't complete yet? Do you need any help
with implementation?



comments below..


On Thu, Nov 30, 2017 at 9:01 AM, Vitaly Arbuzov  wrote:

Hey Jeff,

It's great, I didn't expect that anyone is actively working on this.
I'll check out your branch, meanwhile do you have any design docs that
describe these changes or can you define high level goals that you
want to achieve?

On Thu, Nov 30, 2017 at 6:24 AM, Jeff Hostetler 
wrote:



On 11/29/2017 10:16 PM, Vitaly Arbuzov wrote:


Hi guys,

I'm looking for ways to improve fetch/pull/clone time for large git
(mono)repositories with unrelated source trees (that span across
multiple services).
I've found sparse checkout approach appealing and helpful for most of
client-side operations (e.g. status, reset, commit, etc.)
The problem is that there is no feature like sparse fetch/pull in git,
this means that ALL objects in unrelated trees are always fetched.
It may take a lot of time for large repositories and results in some
practical scalability limits for git.
This forced some large companies like Facebook and Google to move to
Mercurial as they were unable to improve client-side experience with
git while Microsoft has developed GVFS, which seems to be a step back
to CVCS world.

I want to get a feedback (from more experienced git users than I am)
on what it would take to implement sparse fetching/pulling.
(Downloading only objects related to the sparse-checkout list)
Are there any issues with missing hashes?
Are there any fundamental problems why it can't be done?
Can we get away with only client-side changes or would it require
special features on the server side?



I have, for separate reasons, been _thinking_ about the issue ($dayjob is in
defence, so a similar partition would be useful).

The changes would almost certainly need to be server side (as well as client
side), as it is the server that decides what is sent over the wire in the 
pack files, which would need to be a 'narrow' pack file.



If we had such a feature then all we would need on top is a separate
tool that builds the right "sparse" scope for the workspace based on
paths that developer wants to work on.

In the world where more and more companies are moving towards large
monorepos this improvement would provide a good way of scaling git to
meet this demand.


The 'companies' problem is that it tends to force a client-server, always-on
on-line mentality. I'm also wanting the original DVCS off-line capability to
still be available, with _user_ control, in a generic sense, of what they
have locally available (including files/directories they have not yet looked
at, but expect to have). IIUC Jeff's work is that on-line view, without the
off-line capability.

I'd commented early in the series at [1,2,3].


At its core, my idea was to use the object store to hold markers for the
'not yet fetched' objects (mainly trees and blobs). These would be in a 
known fixed format, and have the same effect (conceptually) as the 
sub-module markers - they _confirm_ the oid, yet say 'not here, try 
elsewhere'.


The comparison with submodules means there is the same chance of
de-synchronisation with triangular and upstream servers, unless managed.

The server side, as noted, will need to be included as it is the one that
decides the pack file.

Options for server management are:

- "I accept narrow packs?" No; yes

- "I serve narrow packs?" No; yes.

- "Repo completeness checks on reciept": (must be complete) || (allow narrow 
to nothing).


For server farms (e.g. Github..) the settings could be global, or by repo.
(note that the completeness requirement and narrow receipt option are not
incompatible - the recipient server can reject the pack from a narrow
subordinate as incomplete - see below)

* Marking of 'missing' objects in the local object store, and on the wire.
The missing objects are replaced by a place holder object, which used the
same oid/sha1, but has a short fixed length, with content “GitNarrowObject
”. The chance that that string would actually have such an oid clash is
the same as all other object hashes, so is a *safe* self-referential device.


* The stored object already includes length (and inferred type), so we do
know what it stands in for. Thus the local index (index file) should be able
to be recreated from the object store alone (including the ‘promised /
narrow / missing’ files/directory markers)

* the ‘same’ as sub-modules.
The potential for loss of synchronisation with a golden complete repo is
just the same as for sub-modules. (We expected object/commit X here, but it’s 
not in the store). This could happen with a small user group who have 
locally narrow clones, who interact with their 

Re: [PATCH v3] diff: support anchoring line(s)

2017-11-30 Thread Jonathan Tan
On Thu, 30 Nov 2017 01:36:37 +0100 (CET)
Johannes Schindelin  wrote:

> Hi Jonathan,
> 
> On Tue, 28 Nov 2017, Jonathan Tan wrote:
> 
> > @@ -4607,7 +4627,14 @@ int diff_opt_parse(struct diff_options *options,
> > DIFF_XDL_CLR(options, NEED_MINIMAL);
> > options->xdl_opts &= ~XDF_DIFF_ALGORITHM_MASK;
> > options->xdl_opts |= value;
> > +   if (value == XDF_PATIENCE_DIFF)
> > +   clear_patience_anchors(options);
> > return argcount;
> > +   } else if (skip_prefix(arg, "--anchored=", &arg)) {
> > +   options->xdl_opts = DIFF_WITH_ALG(options, PATIENCE_DIFF);
> > +   ALLOC_GROW(options->anchors, options->anchors_nr + 1,
> > +  options->anchors_alloc);
> > +   options->anchors[options->anchors_nr++] = xstrdup(arg);
> 
> I looked and failed to find the code that releases this array after the
> diff is done... did I miss anything?

You didn't miss anything. As far as I can tell, occurrences of struct
diff_options live throughout the lifetime of an invocation of Git and
are not freed. (Even if the struct itself is allocated on the stack, its
pathspec has some elements allocated on the heap.)
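For the curious, the option under review can be exercised like this once it is in your git (it later shipped in 2.18; file names here are invented):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

printf 'a\nb\nc\n' >old.txt
printf 'c\na\nb\n' >new.txt

# --no-index diffs plain files; "|| true" because diff exits 1 on changes.
git diff --no-index              old.txt new.txt >default.out  || true
git diff --no-index --anchored=c old.txt new.txt >anchored.out || true

# The anchor asks the (patience-based) algorithm to keep the "c" line
# as context, so the two outputs disagree about which lines moved.
```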




Re: [SCRIPT/RFC 0/3] git-commit --onto-parent (three-way merge, no working tree file changes)

2017-11-30 Thread Chris Nerwert
On Nov 27, 2017, at 00:35, Igor Djordjevic  wrote:
> Approach discussed here could have a few more useful applications, 
> but one seems to be standing out the most - in case where multiple 
> topic branches are temporarily merged for integration testing, it 
> could be very useful to be able to post "hotfix" commits to merged 
> branches directly, _without actually switching to them_ (and thus 
> without touching working tree files), and still keeping everything 
> merged, in one go.
I'm actually doing the described workflow quite often with git rebase when 
working on a topic. Given the following structure:

  ---o   (master)
  \
   o---A---B---C (topic)

When I want to make changes to commit A one option is to make them directly on 
topic, then do "git commit --fixup A", and then eventual interactive rebase 
onto master will clean them up:

  ---o (master)
  \
   o---A---B---C---f!A (topic)

However, sometimes this breaks when changes in B or C conflict somehow with A 
(which may happen quite a lot during development of a topic), so the rebase 
will not apply cleanly. So sometimes I make a temporary branch from A, commit 
the fixup there:

  ---o   (master)
  \
   o---A---B---C (topic)
\
 f!A (temp)

and then use "git rebase --onto temp A topic" to move the topic back on track:

  ---o (master)
  \
   o---A---f!A (temp)
  \
   B'---C' (topic)

after which the final cleanup rebase is much easier to do.
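The whole dance above can be scripted end to end; a sketch with throwaway commit names (each commit adds its own file, so the rebase applies cleanly):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name "A U Thor"

# Helper: one commit per name, each touching its own file.
c () { echo "$1" >"$1.txt" && git add . && git commit -qm "$1"; }

c base                                   # ---o (master)
git checkout -qb topic
c A && c B && c C                        # o---A---B---C (topic)

# Branch "temp" from A and commit the fixup there...
git checkout -qb temp topic~2
c fixup-A

# ...then move B and C on top of it, exactly as described above:
git rebase -q --onto temp topic~2 topic

git log --format=%s                      # C, B, fixup-A, A, base
```

After this, `git rebase -i` with autosquash onto master has the fixup sitting right next to A, which is what makes the final cleanup easy.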

Obviously, all the branch switching and rebasing does take its toll on file
modifications.

Re: [PATCH v5 4/6] list-objects: filter objects in traverse_commit_list

2017-11-30 Thread Jeff King
On Mon, Nov 27, 2017 at 02:39:43PM -0500, Jeff Hostetler wrote:

> On 11/22/2017 5:56 PM, Stefan Beller wrote:
> > On Tue, Nov 21, 2017 at 12:58 PM, Jeff Hostetler  
> > wrote:
> > > +   assert(arg);
> > > +   assert(!unset);
> > 
> > I count 16 asserts in this patch. Is that really needed?
> > Either omit them or use BUG if we want to rely on user
> > bug reports when these conditions trigger, as assert is unreliable
> > due to its dependence on the NDEBUG flag.
> 
> Yes, there are a few asserts in the code.  Old habits
> 
> I could remove some/all of them, but personally I feel they
> have merit and hint to the mindset of the author for future
> readers of the code.  Are there other opinions?

I think I'd prefer in general to see assertions remain in one form or
another, if only because of the documentation benefits you mention here.

I do think there's such a thing as too many asserts, but I don't think I
see that here. "Too many" would probably be something like asserting
things that are a normal part of the contract (so "assert(foo)" on
every pointer parameter coming in to make sure it's not NULL).

I thought at first that's what was happening with the ones quoted above,
but it's actually documenting that no, we do not support "--no-filter"
in opt_parse_list_objects_filter (which is really checking that we're in
sync with the PARSE_OPT_NONEG found elsewhere).

So arguably my confusion argues that this one ought to have a custom
message or a comment.

Of course, it also makes me wonder whether we ought to just support
--no-filter. Shouldn't it just set us back to FILTER_DISABLED?

> Personally, I think it might be awkward to keep repeating
> something like:
> 
> if (!c)
> BUG(msg);
> 
> Do we want to think about a macro that builds on BUG() and
> does the test?
> 
> Something like:
> #define ASSERT_OR_BUG(c) do { if (!(c)) BUG("%s", #c); } while (0)

Yeah, I think that was where the other thread[1] led to. IMHO that's
probably what BUG_ON() ought to do (though personally I'm fine with just
continuing to use assert for simple cases).

I think we can sidestep the whole variadic-macros thing mentioned in
that thread since we don't take a custom message.

-Peff

[1] https://public-inbox.org/git/2017113827.26773-1-sbel...@google.com/


Re: [PATCH] Makefile: replace perl/Makefile.PL with simple make rules

2017-11-30 Thread Jeff King
On Wed, Nov 29, 2017 at 07:54:30PM +, Ævar Arnfjörð Bjarmason wrote:

> Replace the perl/Makefile.PL and the fallback perl/Makefile used under
> NO_PERL_MAKEMAKER=NoThanks with a much simpler implementation heavily
> inspired by how the i18n infrastructure's build process works[1].

I'm very happy to see the recursive make invocation go away. The perl
makefile generation was one of the few places where parallel make could
racily get confused (though I haven't seen that for a while, so maybe it
was fixed alongside some of the other .stamp work you did).

> The reason for having the Makefile.PL in the first place is that it
> was initially[2] building a perl C binding to interface with libgit,
functionality that was removed[3] before Git.pm ever made it to
> the master branch.

Thanks for doing all this history digging. I agree that it doesn't seem
like there's really any reason to carry the complexity. Of your
functional changes, the only one that gives me pause is:

>  * This will not always install into perl's idea of its global
>"installsitelib". This only potentially matters for packagers that
>need to expose Git.pm for non-git use, and as explained in the
>INSTALL file there's a trivial workaround.

This could be a minor hiccup for people using Git.pm from other scripts.
But maybe only in funny setups? It seems like $prefix/share/perl5 would
be in most people's @INC unless they are doing something exotic.

>  * We don't build the Git(3) Git::I18N(3) etc. man pages from the
>embedded perldoc. I suspect nobody really cares, these are mostly
>internal APIs, and if someone's developing against them they likely
>know enough to issue a "perldoc" against the installed file to get
>the same result.

I don't have a real opinion on this, but it sounds from the rest of the
thread like we should maybe build these to be on the safe side.

> @@ -2291,6 +2273,17 @@ endif
>  po/build/locale/%/LC_MESSAGES/git.mo: po/%.po
>   $(QUIET_MSGFMT)mkdir -p $(dir $@) && $(MSGFMT) -o $@ $<
>  
> +PMFILES := $(wildcard perl/*.pm perl/*/*.pm perl/*/*/*.pm perl/*/*/*/*.pm)

Yuck. :) I don't think there's a better wildcard solution within make,
though. And I'd rather see this than doing a $(shell) to "find" or
similar.

The other option is to actually list the files, as we do for .o files.
That's a minor pain to update, but it would allow things like
differentiating which ones get their documentation built.

> +PMCFILES := $(patsubst perl/%.pm,perl/build/%.pmc,$(PMFILES))

TIL about pmc files. It sounds like they've had a storied history, but
should be supported everywhere.

> [...]

The rest of the patch all looked good to me. Thanks for working on this.

-Peff


Re: [PATCH] Makefile: replace perl/Makefile.PL with simple make rules

2017-11-30 Thread Eric Wong
Jonathan Nieder  wrote:
> Yeah, people really do use Git.pm as an external API.

Yikes :<

> If we want to prevent this, then we should not be installing it in the
> public perl module path.  Or we should at least add a note to the
> manpages we ship :) to recommend not using it.

I think a note in manpages is fine; maybe a load-time warning
to non-git.git users.  We shouldn't be breaking other peoples'
code when they upgrade, though.


Re: [PATCH] Makefile: replace perl/Makefile.PL with simple make rules

2017-11-30 Thread Jonathan Nieder
Ævar Arnfjörð Bjarmason wrote:
> On Thu, Nov 30 2017, Jonathan Nieder jotted:
>> Ævar Arnfjörð Bjarmason wrote:

>>>  * We don't build the Git(3) Git::I18N(3) etc. man pages from the
>>>embedded perldoc. I suspect nobody really cares, these are mostly
>>>internal APIs, and if someone's developing against them they likely
>>>know enough to issue a "perldoc" against the installed file to get
>>>the same result.
[...]
>> Debian cares (see
>> https://www.debian.org/doc/packaging-manuals/perl-policy/ch-module_packages.html
>> for details).
[...]
> It just says you have to install the manpages in such-and-such a place,
> but I don't have any. There, policy issue fixed :)
>
> More seriously, it seems to me that the only reason we have these
> manpages in the first place is because of emergent effects. *Maybe* I'm
> wrong about someone using Git.pm as an external API, is that the case?

Yeah, people really do use Git.pm as an external API.

Unlike e.g. something on CPAN, its API stability guarantees are not
great, so I am not saying I recommend it, but people have been using
it that way.

If we want to prevent this, then we should not be installing it in the
public perl module path.  Or we should at least add a note to the
manpages we ship :) to recommend not using it.

Thanks,
Jonathan


Re: [PATCH v4 2/2] launch_editor(): indicate that Git waits for user input

2017-11-30 Thread Jeff King
On Wed, Nov 29, 2017 at 03:37:52PM +0100, lars.schnei...@autodesk.com wrote:

> No message is printed in a "dumb" terminal as it would not be possible
> to remove the message after the editor returns. This should not be a
> problem as this feature is targeted at novice Git users and they are
> unlikely to work with a "dumb" terminal.

I think novice users could end up in this situation with something like:

  ssh remote_host git commit

But then I'd expect most terminal-based editors to give some sort of
error in that situation, too. And at any rate, the worst case is that
they get no special "waiting..." message from Git, which is already the
status quo.  So it's probably not worth worrying about such an obscure
case.

> Power users might not want to see this message or their editor might
> already print such a message (e.g. emacsclient). Allow these users to
> suppress the message by disabling the "advice.waitingForEditor" config.

I'm happy to see the hard-coded emacsclient behavior go. Hopefully we
won't see too many complaints about people having to set the advice
flag.

> The standard advise() function is not used here as it would always add
> a newline which would make deleting the message harder.

I tried to think of ways this "show a message and then delete it" could
go wrong. It should work OK with editors that just do curses-like
things, taking over the terminal and then restoring it at the end.

It does behave in a funny way if the editor produces actual lines of
output outside of the curses handling. E.g. (I just quit vim
immediately, hence the aborting message):

  $ GIT_EDITOR='echo foo; vim' git commit
  hint: Waiting for your editor input...foo
  Aborting commit due to empty commit message.

our "foo" gets tacked onto the hint line, and then our deletion does
nothing (because the newline after "foo" bumped us to a new line, and
there was nothing on that line to erase).

An even worse case (and yes, this is really reaching) is:

  $ GIT_EDITOR='echo one; printf "two\\r"; vim' git commit
  hint: Waiting for your editor input...one
  Aborting commit due to empty commit message.

There we ate the "two" line.

These are obviously the result of devils-advocate poking at the feature.
I doubt any editor would end its output with a CR. But the first case is
probably going to be common, especially for actual graphical editors. We
know that emacsclient prints its own line, and I wouldn't be surprised
if other graphical editors spew some telemetry to stderr (certainly
anything built against GTK tends to do so).

I don't think there's a good way around it. Portably saying "delete
_this_ line that I wrote earlier" would probably require libcurses or
similar. So maybe we just live with it. The deletion magic makes the
common cases better (a terminal editor that doesn't print random
lines, or a graphical editor that is quiet), and everyone else can flip
the advice switch if they need to. I dunno.
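For illustration, the "show a message and then delete it" trick amounts to printing without a trailing newline and later emitting a carriage return plus the VT100 erase-in-line sequence. This is an approximation of the idea, not necessarily the patch's exact escape usage:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

hint='hint: Waiting for your editor...'

{
	printf '%s' "$hint"     # show the hint, no trailing newline
	# ... the editor would run here ...
	printf '\r\033[K'       # CR back to column 0, then erase to end of line
} >out.bin

od -c out.bin | head -n 2   # the \r and ESC [ K bytes are visible
```

As the cases above show, the erase only works if the cursor is still on the hint's line when it runs; any stray output from the editor defeats it.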

> ---
>  Documentation/config.txt |  3 +++
>  advice.c |  2 ++
>  advice.h |  1 +
>  editor.c | 15 +++
>  4 files changed, 21 insertions(+)

The patch itself looks fine, as far as correctly implementing the
design.

-Peff


Re: [PATCH v4 0/2] launch_editor(): indicate that Git waits for user input

2017-11-30 Thread Thomas Adam
On Thu, Nov 30, 2017 at 03:12:17PM -0500, Jeff King wrote:
> On Wed, Nov 29, 2017 at 06:35:16PM +, Thomas Adam wrote:
> 
> > On Wed, Nov 29, 2017 at 03:37:50PM +0100, lars.schnei...@autodesk.com wrote:
> > > + if (print_waiting_for_editor) {
> > > + fprintf(stderr, _("hint: Waiting for your editor 
> > > input..."));
> > >   fflush(stderr);
> > 
> > Just FYI, stderr is typically unbuffered on most systems I've used, and
> > although the call to fflush() is harmless, I suspect it's not having any
> > effect.  That said, there's plenty of other places in Git which seems to 
> > think
> > fflush()ing stderr actually does something.
> 
> I'd prefer to keep them (including this one), even if they are noops on
> most platforms, because:
> 
>   1. They serve as a note for readers of the code that it's important
>  for the output to have been printed immediately.
> 
>   2. We build on some funny and antique platforms. I wouldn't be
>  surprised if there's one that line buffers by default. Or even a
>  modern system with funny settings (e.g., using the GNU stdbuf
>  tool).
> 
> (I know you said later you don't think this case needs to be removed,
> but I want to make it clear I think it's a reasonable project-wide
> policy to not assume we know how stderr is buffered).

We're talking past each other, Peff.  I'm agreeing with you.  I was surprised
to see the introduction of fflush(stderr) in the interdiff, when it wasn't
present before, was curious to understand why.  I've done that, and since
stated it's fine to leave it as-is.

-- Thomas Adam


Re: [PATCH v4 1/2] refactor "dumb" terminal determination

2017-11-30 Thread Jeff King
On Wed, Nov 29, 2017 at 03:37:51PM +0100, lars.schnei...@autodesk.com wrote:

> From: Lars Schneider 
> 
> Move the code to detect "dumb" terminals into a single location. This
> avoids duplicating the terminal detection code yet again in a subsequent
> commit.

Makes sense, and probably worth doing even if we don't follow through on
2/2.

The patch itself looks good to me.

-Peff


Re: [PATCH v4 0/2] launch_editor(): indicate that Git waits for user input

2017-11-30 Thread Jeff King
On Wed, Nov 29, 2017 at 06:35:16PM +, Thomas Adam wrote:

> On Wed, Nov 29, 2017 at 03:37:50PM +0100, lars.schnei...@autodesk.com wrote:
> > +   if (print_waiting_for_editor) {
> > +   fprintf(stderr, _("hint: Waiting for your editor input..."));
> > fflush(stderr);
> 
> Just FYI, stderr is typically unbuffered on most systems I've used, and
> although the call to fflush() is harmless, I suspect it's not having any
> effect.  That said, there's plenty of other places in Git which seems to think
> fflush()ing stderr actually does something.

I'd prefer to keep them (including this one), even if they are noops on
most platforms, because:

  1. They serve as a note for readers of the code that it's important
 for the output to have been printed immediately.

  2. We build on some funny and antique platforms. I wouldn't be
 surprised if there's one that line buffers by default. Or even a
 modern system with funny settings (e.g., using the GNU stdbuf
 tool).

(I know you said later you don't think this case needs to be removed,
but I want to make it clear I think it's a reasonable project-wide
policy to not assume we know how stderr is buffered).

-Peff


Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Jonathan Nieder
Hi Vitaly,

Vitaly Arbuzov wrote:

> Found some details here: https://github.com/jeffhostetler/git/pull/3
>
> Looking at commits I see that you've done a lot of work already,
> including packing, filtering, fetching, cloning etc.
> What are some areas that aren't complete yet? Do you need any help
> with implementation?

That's a great question!  I've filed https://crbug.com/git/2 to track
this project.  Feel free to star it to get updates there, or to add
updates of your own.

As described at https://crbug.com/git/2#c1, currently there are three
patch series for which review would be very welcome.  Building on top
of them is welcome as well.  Please make sure to coordinate with
jeffh...@microsoft.com and jonathanta...@google.com (e.g. through the
bug tracker or email).

One piece of missing functionality that looks interesting to me: that
series batches fetches of the missing blobs involved in a "git
checkout" command:

 https://public-inbox.org/git/20171121211528.21891-14-...@jeffhostetler.com/

But it doesn't batch fetches of the missing blobs involved in a "git
diff  " command.  That might be a good place to get
your hands dirty. :)

Thanks,
Jonathan


Re: "git describe" documentation and behavior mismatch

2017-11-30 Thread Daniel Knittl-Frank
On Thu, Nov 30, 2017 at 7:47 PM, Daniel Knittl-Frank
 wrote:
> […]
>
> Running the above commands in the git.git repository yields a different 
> result:
>
>> $ git describe --all --abbrev=4 v1.0.5^2
>> v1.0.0-21-g975b3
>
> No "reference path" to see. It is however shown, when the output is a
> branch name:
>
>> $ git describe --all --abbrev=4 origin/next
>> heads/next
>
> Is this expected behavior? IOW is the documentation outdated or is the
> git describe command misbehaving?

Bisecting history goes as far back as Feb 2008: commit
212945d4a85dfa172ea55ec73b1d830ef2d8582f

> Teach git-describe to verify annotated tag names before output

The warning mentioned in the commit message has since been gone. So I
guess the documentation is outdated? Nobody has complained for the
past 9 years, so we could call this a "feature" :)

An interesting fact (and intentional behavior?) is that describing a
commit with only a lightweight tag will properly display the tags/
prefix. I assume this is because the annotated tags only store the
tagname without any ref namespace, which is then picked up by git
describe and displayed.

I will try to come up with a patch for the man page.

Daniel

-- 
typed with http://neo-layout.org


"git describe" documentation and behavior mismatch

2017-11-30 Thread Daniel Knittl-Frank
Hi Git list,

the help page/manpage of the git describe command has an example with
the --all flag which should prepend the ref namespace (tags/ or
heads/):

> With --all, the command can use branch heads as references, so the output 
> shows the reference path as well:
>
>  [torvalds@g5 git]$ git describe --all --abbrev=4 v1.0.5^2
>  tags/v1.0.0-21-g975b
>
>  [torvalds@g5 git]$ git describe --all --abbrev=4 HEAD^
>  heads/lt/describe-7-g975b

Running the above commands in the git.git repository yields a different result:

> $ git describe --all --abbrev=4 v1.0.5^2
> v1.0.0-21-g975b3

No "reference path" to see. It is however shown, when the output is a
branch name:

> $ git describe --all --abbrev=4 origin/next
> heads/next

Is this expected behavior? IOW is the documentation outdated or is the
git describe command misbehaving?

Thanks,
Daniel

-- 
typed with http://neo-layout.org


Re: [PATCH] git-prompt: fix reading files with windows line endings

2017-11-30 Thread Robert Abel
Hi Johannes,

On 30 Nov 2017 16:21, Johannes Schindelin wrote:
> On Thu, 30 Nov 2017, Robert Abel wrote:
>> So reading a dummy variable along with the actual content variable
>> works for git-prompt:
>>
>> __git_eread ()
>> {
>> local f="$1"
>> local dummy
>> shift
>> test -r "$f" && IFS=$'\r\n' read "$@" dummy < "$f"
>> }
>>
>> I feel like this would be the most readable solution thus far.
> 
> Hmm. I am just a little concerned about "dummy" swallowing the rest of the
> line, e.g. when reading "1 2 3" via `__git_eread line`... the way I read
> it, dummy would consume "2 3" and line would *not* receive "1 2 3" but
> only "1"...
You missed that tab and space aren't field separators anymore,
because IFS=$'\r\n'. The way I see it, __git_eread was never meant to
split tokens. Its primary purpose was to test if a file exists and if
so, read all its contents sans the newline into a variable.

That's how all calls to __git_eread use it. And none of them are equipped
to handle multi-line file contents or want to read more than one variable.

So this version does exactly that, but for CRLF line endings, too.
I successfully use the above version now on two of my PCs.

If you agree and nobody else has any concerns, I'll resend an edited
patch to accommodate the changes and probably put a comment with
usage info above __git_eread.

Regards,

Robert




Re: [PATCH] imap-send: URI encode server folder

2017-11-30 Thread Eric Sunshine
On Thu, Nov 30, 2017 at 5:07 AM, Nicolas Morey-Chaisemartin
 wrote:
> URI encode the server folder string before passing it to libcurl.
> This fixes the access to the draft folder on Gmail accounts (named [Gmail]/Drafts)

For someone reading this commit message in the future -- someone who
didn't follow the email thread which led to this patch -- "this fixes"
doesn't say much about the actual problem being addressed. Can you
expand the commit message a bit to make it more self-contained? At
minimum, perhaps show the error message you were experiencing, and
cite (as Daniel pointed out) RFC 3986 and the bit about a "legal" URL
not containing brackets.

Also, a natural question which pops into the head of someone reading
this patch is whether other parts of the URL (host, user, etc.) also
need to be handled similarly. It's possible that you audited the code
and determined that they are handled fine already, but the reader of
the commit message is unable to infer that. Consequently, it might be
nice to have a sentence about that, as well ("other parts of the URL
are already encoded, thus are fine" or "other parts of the URL are not
subject to this problem because ...").

The patch itself looks okay (from a cursory read).

Thanks.

> Reported-by: Doron Behar 
> Signed-off-by: Nicolas Morey-Chaisemartin 
> ---
>  imap-send.c | 8 +++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/imap-send.c b/imap-send.c
> index 54e6a80fd..36c7c1b4f 100644
> --- a/imap-send.c
> +++ b/imap-send.c
> @@ -1412,6 +1412,7 @@ static CURL *setup_curl(struct imap_server_conf *srvc, struct credential *cred)
>  {
> CURL *curl;
> struct strbuf path = STRBUF_INIT;
> +   char *uri_encoded_folder;
>
> if (curl_global_init(CURL_GLOBAL_ALL) != CURLE_OK)
> die("curl_global_init failed");
> @@ -1429,7 +1430,12 @@ static CURL *setup_curl(struct imap_server_conf *srvc, struct credential *cred)
> strbuf_addstr(&path, server.host);
> if (!path.len || path.buf[path.len - 1] != '/')
> strbuf_addch(&path, '/');
> -   strbuf_addstr(&path, server.folder);
> +
> +   uri_encoded_folder = curl_easy_escape(curl, server.folder, 0);
> +   if (!uri_encoded_folder)
> +   die("failed to encode server folder");
> +   strbuf_addstr(&path, uri_encoded_folder);
> +   curl_free(uri_encoded_folder);
>
> curl_easy_setopt(curl, CURLOPT_URL, path.buf);
> strbuf_release(&path);
> --
> 2.15.1.272.g8e603414b


Re: [PATCH V4] config: add --expiry-date

2017-11-30 Thread Jeff King
On Thu, Nov 30, 2017 at 12:18:49PM +0100, Heiko Voigt wrote:

> > Fine by me. While I think the original intent was to be more lenient to
> > malformed .gitmodules, it's not like we're seeing bug reports about it.
> 
> My original intent was not about being more lenient about malformed
> .gitmodules but having a way to deal with repository history that might
> have a malformed .gitmodules in its history. Since depending on the
> branch it is on it might be quite carved in stone.
> On an active project it would not be that easy to rewrite history to get
> out of that situation.
> 
> When a .gitmodules file in the worktree is malformed it is easy to fix.
> That is not the case when we are reading configurations from blobs.
> 
> My guess why there are no reports is that maybe not too many users are
> using this infrastructure yet, plus it is probably seldom that someone
> edits the .gitmodules file by hand which could lead to such a situation.
> But if such an error occurs it will be very annoying if we die while
> parsing submodule configurations. The only solution I see currently is
> to turn submodule recursion off completely.
> 
> But maybe I am being overly cautious here.

Ah, OK, that makes a lot of sense to me. Thanks for explaining.

I agree that is a good goal to shoot for in the long term. It's not the
end of the world if there are a few code paths that may die() for now,
but we should try not to add more, and eventually weed out the ones that
do.

-Peff


Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Vitaly Arbuzov
Found some details here: https://github.com/jeffhostetler/git/pull/3

Looking at commits I see that you've done a lot of work already,
including packing, filtering, fetching, cloning etc.
What are some areas that aren't complete yet? Do you need any help
with implementation?


On Thu, Nov 30, 2017 at 9:01 AM, Vitaly Arbuzov  wrote:
> Hey Jeff,
>
> It's great, I didn't expect that anyone is actively working on this.
> I'll check out your branch, meanwhile do you have any design docs that
> describe these changes or can you define high level goals that you
> want to achieve?
>
> On Thu, Nov 30, 2017 at 6:24 AM, Jeff Hostetler  
> wrote:
>>
>>
>> On 11/29/2017 10:16 PM, Vitaly Arbuzov wrote:
>>>
>>> Hi guys,
>>>
>>> I'm looking for ways to improve fetch/pull/clone time for large git
>>> (mono)repositories with unrelated source trees (that span across
>>> multiple services).
>>> I've found sparse checkout approach appealing and helpful for most of
>>> client-side operations (e.g. status, reset, commit, etc.)
>>> The problem is that there is no feature like sparse fetch/pull in git,
>>> this means that ALL objects in unrelated trees are always fetched.
>>> It may take a lot of time for large repositories and results in some
>>> practical scalability limits for git.
>>> This forced some large companies like Facebook and Google to move to
>>> Mercurial as they were unable to improve client-side experience with
>>> git while Microsoft has developed GVFS, which seems to be a step back
>>> to CVCS world.
>>>
>>> I want to get a feedback (from more experienced git users than I am)
>>> on what it would take to implement sparse fetching/pulling.
>>> (Downloading only objects related to the sparse-checkout list)
>>> Are there any issues with missing hashes?
>>> Are there any fundamental problems why it can't be done?
>>> Can we get away with only client-side changes or would it require
>>> special features on the server side?
>>>
>>> If we had such a feature then all we would need on top is a separate
>>> tool that builds the right "sparse" scope for the workspace based on
>>> paths that developer wants to work on.
>>>
>>> In the world where more and more companies are moving towards large
>>> monorepos this improvement would provide a good way of scaling git to
>>> meet this demand.
>>>
>>> PS. Please don't advice to split things up, as there are some good
>>> reasons why many companies decide to keep their code in the monorepo,
>>> which you can easily find online. So let's keep that part out the
>>> scope.
>>>
>>> -Vitaly
>>>
>>
>>
>> This work is in-progress now.  A short summary can be found in [1]
>> of the current parts 1, 2, and 3.
>>
>>> * jh/object-filtering (2017-11-22) 6 commits
>>> * jh/fsck-promisors (2017-11-22) 10 commits
>>> * jh/partial-clone (2017-11-22) 14 commits
>>
>>
>> [1]
>> https://public-inbox.org/git/xmqq1skh6fyz@gitster.mtv.corp.google.com/T/
>>
>> I have a branch that contains V5 all 3 parts:
>> https://github.com/jeffhostetler/git/tree/core/pc5_p3
>>
>> This is a WIP, so there are some rough edges
>> I hope to have a V6 out before the weekend with some
>> bug fixes and cleanup.
>>
>> Please give it a try and see if it fits your needs.
>> Currently, there are filter methods to filter all blobs,
>> all large blobs, and one to match a sparse-checkout
>> specification.
>>
>> Let me know if you have any questions or problems.
>>
>> Thanks,
>> Jeff


Re: git reset of addition of a submodule?

2017-11-30 Thread David Turner
On Thu, 2017-11-30 at 12:05 -0500, David Turner wrote:
> git submodule add https://my-git-repo blort
> git commit -m 'add a submodule'
> git reset HEAD^ blort
> 
> The reset deletes the gitlink, but does not delete the entry in
> .gitmodules.  On one hand, this is exactly what the user asked for --
> they wanted the path 'blort' to be changed in the index, and that's
> what they got.  On the other hand, the behavior differs from git rm,
> and seems confusing: most folks don't want an entry in .gitmodules
> which doesn't correspond to a gitlink.  
> 
> If reset isn't the right thing for me to do when I want to say "oops"
> about adding a submodule, then what is?  I could do:
> git reset HEAD^ blort .gitmodules
> but what if I added two submodules and only wanted to undo the
> addition
> of one?


Also, resetting the deletion of a submodule has an even worse issue --
you end up with a gitlink but no entry in .gitmodules.



git reset of addition of a submodule?

2017-11-30 Thread David Turner
git submodule add https://my-git-repo blort
git commit -m 'add a submodule'
git reset HEAD^ blort

The reset deletes the gitlink, but does not delete the entry in
.gitmodules.  On one hand, this is exactly what the user asked for --
they wanted the path 'blort' to be changed in the index, and that's
what they got.  On the other hand, the behavior differs from git rm,
and seems confusing: most folks don't want an entry in .gitmodules
which doesn't correspond to a gitlink.  

If reset isn't the right thing for me to do when I want to say "oops"
about adding a submodule, then what is?  I could do:
git reset HEAD^ blort .gitmodules
but what if I added two submodules and only wanted to undo the addition
of one?




Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Vitaly Arbuzov
Hey Jeff,

It's great, I didn't expect that anyone is actively working on this.
I'll check out your branch, meanwhile do you have any design docs that
describe these changes or can you define high level goals that you
want to achieve?

On Thu, Nov 30, 2017 at 6:24 AM, Jeff Hostetler  wrote:
>
>
> On 11/29/2017 10:16 PM, Vitaly Arbuzov wrote:
>>
>> Hi guys,
>>
>> I'm looking for ways to improve fetch/pull/clone time for large git
>> (mono)repositories with unrelated source trees (that span across
>> multiple services).
>> I've found sparse checkout approach appealing and helpful for most of
>> client-side operations (e.g. status, reset, commit, etc.)
>> The problem is that there is no feature like sparse fetch/pull in git,
>> this means that ALL objects in unrelated trees are always fetched.
>> It may take a lot of time for large repositories and results in some
>> practical scalability limits for git.
>> This forced some large companies like Facebook and Google to move to
>> Mercurial as they were unable to improve client-side experience with
>> git while Microsoft has developed GVFS, which seems to be a step back
>> to CVCS world.
>>
>> I want to get a feedback (from more experienced git users than I am)
>> on what it would take to implement sparse fetching/pulling.
>> (Downloading only objects related to the sparse-checkout list)
>> Are there any issues with missing hashes?
>> Are there any fundamental problems why it can't be done?
>> Can we get away with only client-side changes or would it require
>> special features on the server side?
>>
>> If we had such a feature then all we would need on top is a separate
>> tool that builds the right "sparse" scope for the workspace based on
>> paths that developer wants to work on.
>>
>> In the world where more and more companies are moving towards large
>> monorepos this improvement would provide a good way of scaling git to
>> meet this demand.
>>
>> PS. Please don't advice to split things up, as there are some good
>> reasons why many companies decide to keep their code in the monorepo,
>> which you can easily find online. So let's keep that part out the
>> scope.
>>
>> -Vitaly
>>
>
>
> This work is in-progress now.  A short summary can be found in [1]
> of the current parts 1, 2, and 3.
>
>> * jh/object-filtering (2017-11-22) 6 commits
>> * jh/fsck-promisors (2017-11-22) 10 commits
>> * jh/partial-clone (2017-11-22) 14 commits
>
>
> [1]
> https://public-inbox.org/git/xmqq1skh6fyz@gitster.mtv.corp.google.com/T/
>
> I have a branch that contains V5 all 3 parts:
> https://github.com/jeffhostetler/git/tree/core/pc5_p3
>
> This is a WIP, so there are some rough edges
> I hope to have a V6 out before the weekend with some
> bug fixes and cleanup.
>
> Please give it a try and see if it fits your needs.
> Currently, there are filter methods to filter all blobs,
> all large blobs, and one to match a sparse-checkout
> specification.
>
> Let me know if you have any questions or problems.
>
> Thanks,
> Jeff


Re: [git-users] How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Jeff Hostetler



On 11/30/2017 3:12 AM, Konstantin Khomoutov wrote:

On Wed, Nov 29, 2017 at 06:42:54PM -0800, vit via Git for human beings wrote:


I'm looking for ways to improve fetch/pull/clone time for large git
(mono)repositories with unrelated source trees (that span across multiple
services).
I've found sparse checkout approach appealing and helpful for most of
client-side operations (e.g. status, reset, commit, add etc)
The problem is that there is no feature like sparse fetch/pull in git, this
means that ALL objects in unrelated trees are always fetched.
It takes a lot of time for large repositories and results in some practical
scalability limits for git.
This forced some large companies like Facebook and Google to move to
Mercurial as they were unable to improve client-side experience with git
and Microsoft has developed GVFS which seems to be a step back to CVCS
world.

[...]

(To anyone interested, there's a cross-post to the main Git list which
Vitaly failed to mention: [1]. I think it could spark some interesting
discussion.)

As to the essence of the question, I think you blame GVFS for no real
reason. While Microsoft is being Microsoft — their implementation of
GVFS is written in .NET and *requires* Windows 10 (this one is beyond
me), it's based on an open protocol [2] which basically assumes the
presence of a RESTful HTTP endpoint at the "Git server side" and
apparently designed to work well with the repository format the current
stock Git uses which makes it implementable on both sides by anyone
interested.

The second hint I have is that the idea of fetching data lazily
has been circulating among the Git developers for some time already, and
something is really being done in this venue, so you could check and see
what's there [3, 4] and maybe trial it and help out those who work on this
stuff.

1. 
https://public-inbox.org/git/CANxXvsMbpBOSRKaAi8iVUikfxtQp=kofz60n0phxs+r+q1k...@mail.gmail.com/
2. https://github.com/Microsoft/GVFS/blob/master/Protocol.md
3. https://public-inbox.org/git/?q=lazy+fetch
4. https://public-inbox.org/git/?q=partial+clone



For completeness with the git-users mailing list.
Here is info on the work-in-progress for this feature.

https://public-inbox.org/git/e2d5470b-9252-07b4-f3cf-57076d103...@jeffhostetler.com/

Jeff



Re: [PATCH] git-prompt: fix reading files with windows line endings

2017-11-30 Thread Johannes Schindelin
Hi Robert,

On Thu, 30 Nov 2017, Robert Abel wrote:

> So reading a dummy variable along with the actual content variable
> works for git-prompt:
> 
> __git_eread ()
> {
> local f="$1"
> local dummy
> shift
> test -r "$f" && IFS=$'\r\n' read "$@" dummy < "$f"
> }
> 
> I feel like this would be the most readable solution thus far.

Hmm. I am just a little concerned about "dummy" swallowing the rest of the
line, e.g. when reading "1 2 3" via `__git_eread line`... the way I read
it, dummy would consume "2 3" and line would *not* receive "1 2 3" but
only "1"...

Ciao,
Johannes


Re: [PATCH v4 0/2] launch_editor(): indicate that Git waits for user input

2017-11-30 Thread Andreas Schwab
On Nov 30 2017, Thomas Adam  wrote:

> On Thu, Nov 30, 2017 at 02:55:35PM +0100, Lars Schneider wrote:
>> 
>> > On 29 Nov 2017, at 19:35, Thomas Adam  wrote:
>> > 
>> > On Wed, Nov 29, 2017 at 03:37:50PM +0100, lars.schnei...@autodesk.com 
>> > wrote:
>> >> + if (print_waiting_for_editor) {
>> >> + fprintf(stderr, _("hint: Waiting for your editor input..."));
>> >>   fflush(stderr);
>> > 
>> > Just FYI, stderr is typically unbuffered on most systems I've used, and
>> > although the call to fflush() is harmless, I suspect it's not having any
>> > effect.  That said, there's plenty of other places in Git which seems to think
>> > fflush()ing stderr actually does something.
>> 
>> I agree with the "unbuffered" statement. I am surprised that you expect 
>> fflush()
>> to do nothing in that situation... but I am no expert in that area. Can you
>> point me to some documentation?
>
> Because stderr is unbuffered, it will get printed immediately.

POSIX only requires stderr to be "not fully buffered".  If it is line
buffered, the message may not appear immediately.

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: [PATCH v4 0/2] launch_editor(): indicate that Git waits for user input

2017-11-30 Thread Thomas Adam
On Thu, Nov 30, 2017 at 02:55:35PM +0100, Lars Schneider wrote:
> 
> > On 29 Nov 2017, at 19:35, Thomas Adam  wrote:
> > 
> > On Wed, Nov 29, 2017 at 03:37:50PM +0100, lars.schnei...@autodesk.com wrote:
> >> +  if (print_waiting_for_editor) {
> >> +  fprintf(stderr, _("hint: Waiting for your editor input..."));
> >>fflush(stderr);
> > 
> > Just FYI, stderr is typically unbuffered on most systems I've used, and
> > although the call to fflush() is harmless, I suspect it's not having any
> > effect.  That said, there's plenty of other places in Git which seems to think
> > fflush()ing stderr actually does something.
> 
> I agree with the "unbuffered" statement. I am surprised that you expect 
> fflush()
> to do nothing in that situation... but I am no expert in that area. Can you
> point me to some documentation?

Because stderr is unbuffered, it will get printed immediately.

> In any way, would all this be a problem here? The worst that could happen 
> would
> be that the user would not see the message, right?

Correct -- I only bring this up because your interdiff showed you added the
fflush() call and I was merely pointing that out.  I don't expect you to
change it.

> Are you aware of stderr usage in Git that could cause more trouble?

No.  It'll all be fine.

-- Thomas Adam


Re: How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Jeff Hostetler



On 11/29/2017 10:16 PM, Vitaly Arbuzov wrote:

Hi guys,

I'm looking for ways to improve fetch/pull/clone time for large git
(mono)repositories with unrelated source trees (that span across
multiple services).
I've found sparse checkout approach appealing and helpful for most of
client-side operations (e.g. status, reset, commit, etc.)
The problem is that there is no feature like sparse fetch/pull in git,
this means that ALL objects in unrelated trees are always fetched.
It may take a lot of time for large repositories and results in some
practical scalability limits for git.
This forced some large companies like Facebook and Google to move to
Mercurial as they were unable to improve client-side experience with
git while Microsoft has developed GVFS, which seems to be a step back
to CVCS world.

I want to get a feedback (from more experienced git users than I am)
on what it would take to implement sparse fetching/pulling.
(Downloading only objects related to the sparse-checkout list)
Are there any issues with missing hashes?
Are there any fundamental problems why it can't be done?
Can we get away with only client-side changes or would it require
special features on the server side?

If we had such a feature then all we would need on top is a separate
tool that builds the right "sparse" scope for the workspace based on
paths that developer wants to work on.

In the world where more and more companies are moving towards large
monorepos this improvement would provide a good way of scaling git to
meet this demand.

PS. Please don't advice to split things up, as there are some good
reasons why many companies decide to keep their code in the monorepo,
which you can easily find online. So let's keep that part out the
scope.

-Vitaly




This work is in-progress now.  A short summary can be found in [1]
of the current parts 1, 2, and 3.


* jh/object-filtering (2017-11-22) 6 commits
* jh/fsck-promisors (2017-11-22) 10 commits
* jh/partial-clone (2017-11-22) 14 commits


[1] https://public-inbox.org/git/xmqq1skh6fyz@gitster.mtv.corp.google.com/T/

I have a branch that contains V5 all 3 parts:
https://github.com/jeffhostetler/git/tree/core/pc5_p3

This is a WIP, so there are some rough edges
I hope to have a V6 out before the weekend with some
bug fixes and cleanup.

Please give it a try and see if it fits your needs.
Currently, there are filter methods to filter all blobs,
all large blobs, and one to match a sparse-checkout
specification.

Let me know if you have any questions or problems.

Thanks,
Jeff


Re: [PATCH v4 0/2] launch_editor(): indicate that Git waits for user input

2017-11-30 Thread Lars Schneider

> On 29 Nov 2017, at 19:35, Thomas Adam  wrote:
> 
> On Wed, Nov 29, 2017 at 03:37:50PM +0100, lars.schnei...@autodesk.com wrote:
>> +if (print_waiting_for_editor) {
>> +fprintf(stderr, _("hint: Waiting for your editor input..."));
>>  fflush(stderr);
> 
> Just FYI, stderr is typically unbuffered on most systems I've used, and
> although the call to fflush() is harmless, I suspect it's not having any
> effect.  That said, there's plenty of other places in Git which seems to think
> fflush()ing stderr actually does something.

I agree with the "unbuffered" statement. I am surprised that you expect fflush()
to do nothing in that situation... but I am no expert in that area. Can you
point me to some documentation?

In any way, would all this be a problem here? The worst that could happen would
be that the user would not see the message, right?

Are you aware of stderr usage in Git that could cause more trouble?

- Lars


Re: Make patch-id more flexible?

2017-11-30 Thread Eugeniu Rosca
Hello Junio,

> > file-names. Here comes my actual question. Would it be conceptually fine
> > to implement some `git patch-id` parameter, which would allow ignoring
> > the file-names (or reducing those to their `basename`) before computing
> > the patch id? Or would it break the concept of patch id (which shouldn't
> > accept variations)?
> 
> My gut feeling is that a tool like that would be fine as long as it
> is local to your organization and is not called "git patch-id"; it
> may be useful in the situation you described, but as you mention
> above, it feels that it is differnt from what a patch-id is.
> 

Thank you very much for your feedback. That's exactly what I was looking for:
a clear statement from the maintainer. We will then live with a custom
tool that acts like `git patch-id` but strips the file-names from the
patches before computing the hash.

Best regards,
Eugeniu.


Bare repository fetch/push race condition

2017-11-30 Thread Dmitry Neverov
It looks like there is a race condition between fetch and push in a
bare repository in the following setup. There is a bare git repository
on a local file system. Some process pushes to this repository via
jgit. There is a cron task which pushes this repository to the backup
remote repo over ssh. We observe the following in the reflog:

6932a831843f4dbe3b394acde9adc9a8269b6cf1 57b77b8c2a04029e7f5af4d3b7e36a3ba0c7cac9 XXX 1505903078 +0200 push: forced-update
57b77b8c2a04029e7f5af4d3b7e36a3ba0c7cac9 44a221b0271b9abc885dd6e54f691d5248c4171f XXX 1505905206 +0200 push: forced-update
44a221b0271b9abc885dd6e54f691d5248c4171f 57b77b8c2a04029e7f5af4d3b7e36a3ba0c7cac9 YYY 1505905217 +0200 update by push

Where XXX is the process pushing via jgit and YYY is the cron task. It
looks like the cron task started a push when the ref was pointing to
57b77b8c2a04029e7f5af4d3b7e36a3ba0c7cac9 and push finished when the
ref was already updated to 44a221b0271b9abc885dd6e54f691d5248c4171f.
The push unconditionally updated the local tracking branch back to the
commit 57b77b8c2a04029e7f5af4d3b7e36a3ba0c7cac9 and we lost the commit
44a221b0271b9abc885dd6e54f691d5248c4171f since the next push from the
local process created a new commit with
57b77b8c2a04029e7f5af4d3b7e36a3ba0c7cac9 as a parent.

Shouldn't the update_ref at transport.c:308 specify the expected
old hash, like this:

update_ref("update by push", rs.dst, ref->new_oid.hash,
ref->old_oid.hash, 0, 0);

at least for bare repositories?


[PATCH] imap-send: URI encode server folder

2017-11-30 Thread Nicolas Morey-Chaisemartin
URI encode the server folder string before passing it to libcurl.
This fixes the access to the draft folder on Gmail accounts (named [Gmail]/Drafts)

Reported-by: Doron Behar 
Signed-off-by: Nicolas Morey-Chaisemartin 
---
 imap-send.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/imap-send.c b/imap-send.c
index 54e6a80fd..36c7c1b4f 100644
--- a/imap-send.c
+++ b/imap-send.c
@@ -1412,6 +1412,7 @@ static CURL *setup_curl(struct imap_server_conf *srvc, struct credential *cred)
 {
CURL *curl;
struct strbuf path = STRBUF_INIT;
+   char *uri_encoded_folder;
 
if (curl_global_init(CURL_GLOBAL_ALL) != CURLE_OK)
die("curl_global_init failed");
@@ -1429,7 +1430,12 @@ static CURL *setup_curl(struct imap_server_conf *srvc, struct credential *cred)
strbuf_addstr(&path, server.host);
if (!path.len || path.buf[path.len - 1] != '/')
strbuf_addch(&path, '/');
-   strbuf_addstr(&path, server.folder);
+
+   uri_encoded_folder = curl_easy_escape(curl, server.folder, 0);
+   if (!uri_encoded_folder)
+   die("failed to encode server folder");
+   strbuf_addstr(, uri_encoded_folder);
+   curl_free(uri_encoded_folder);
 
curl_easy_setopt(curl, CURLOPT_URL, path.buf);
strbuf_release();
-- 
2.15.1.272.g8e603414b



Re: imap-send with gmail: curl_easy_perform() failed: URL using bad/illegal format or missing URL

2017-11-30 Thread Nicolas Morey-Chaisemartin


On 30/11/2017 at 10:46, Daniel Stenberg wrote:
> On Thu, 30 Nov 2017, Nicolas Morey-Chaisemartin wrote:
>
>> This is due to the weird "[Gmail]" prefix in the folder.
>> I tried manually replacing it with:
>>     folder = %5BGmail%5D/Drafts
>> in .git/config and it works.
>>
>> curl is doing some fancy handling with brackets and braces. It makes sense 
>> for multiple FTP downloads like ftp://ftp.numericals.com/file[1-100].txt, 
>> not in our case. The curl command line has a --globoff argument to disable 
>> this "regexp" support and it seems to fix the gmail case. However I couldn't 
>> find a way to change this value through the API...
>
> That's just a feature of the command line tool, "globbing" isn't a function 
> provided by the library. libcurl actually "just" expects a plain old URL.
>
Yep, that's what I figured looking a bit further in the code.

> But with the risk of falling through the cracks into the rathole that is 
> "what is a URL" (I've blogged about the topic several times in the past and I 
> will surely do it again in the future):
>
> A "legal" URL (as per RFC 3986) does not contain brackets, such symbols 
> should be used URL encoded: %5B and %5D.
>
> This said: I don't know exactly why brackets cause a problem in this case. It 
> could still be worth digging into and see if libcurl could deal with them 
> better here...
>

It would make sense to have a way to ask libcurl to URI encode for us. I'm 
guessing there's already code for that somewhere in curl and we would be 
wise to use it.
But to work with older versions we'll have to do it ourselves anyway.

Nicolas


Re: imap-send with gmail: curl_easy_perform() failed: URL using bad/illegal format or missing URL

2017-11-30 Thread Daniel Stenberg

On Thu, 30 Nov 2017, Nicolas Morey-Chaisemartin wrote:

It would make sense to have a way to ask libcurl to URI encode for us. I'm 
guessing there's already code for that somewhere in curl and we would be 
wise to use it. But to work with older versions we'll have to do it 
ourselves anyway.


libcurl only offers curl_easy_escape() which URL encodes a string.

But that's not really usable on an entire existing URL or path since it would 
then also encode the slashes etc. You want to encode the relevant pieces and 
then put them together appropriately into the final URL...
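A sketch of that piecewise approach in shell (not git's or curl's actual code; the helper names are made up): split the path on "/", percent-encode each segment, and rejoin:

```shell
#!/bin/sh
# Percent-encode each path segment of an IMAP folder name while keeping
# the "/" separators literal -- the piecewise approach described above.
set -f  # the folder may contain glob characters like "[...]"

urlencode () {
	in=$1 out=
	while [ -n "$in" ]; do
		rest=${in#?}; c=${in%"$rest"}; in=$rest
		case $c in
		[A-Za-z0-9._~-]) out=$out$c ;;          # RFC 3986 unreserved
		*) out=$out$(printf '%%%02X' "'$c") ;;  # everything else
		esac
	done
	printf '%s' "$out"
}

encode_folder () {
	oldifs=$IFS; IFS=/; set -- $1; IFS=$oldifs
	result= sep=
	for segment in "$@"; do
		result=$result$sep$(urlencode "$segment"); sep=/
	done
	printf '%s\n' "$result"
}

encode_folder '[Gmail]/Drafts'   # prints %5BGmail%5D/Drafts
```

This is exactly the "encode the relevant pieces and then put them together" idea; encoding the whole string at once would turn the separator into %2F as well.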


--

 / daniel.haxx.se


Re: imap-send with gmail: curl_easy_perform() failed: URL using bad/illegal format or missing URL

2017-11-30 Thread Daniel Stenberg

On Thu, 30 Nov 2017, Nicolas Morey-Chaisemartin wrote:


This is due to the weird "[Gmail]" prefix in the folder.
I tried manually replacing it with:
    folder = %5BGmail%5D/Drafts
in .git/config and it works.

curl is doing some fancy handling with brackets and braces. It makes sense 
for multiple FTP downloads like ftp://ftp.numericals.com/file[1-100].txt, 
not in our case. The curl command line has a --globoff argument to disable 
this "regexp" support and it seems to fix the gmail case. However I couldn't 
find a way to change this value through the API...


That's just a feature of the command line tool, "globbing" isn't a function 
provided by the library. libcurl actually "just" expects a plain old URL.


But with the risk of falling through the cracks into the rathole that is "what 
is a URL" (I've blogged about the topic several times in the past and I will 
surely do it again in the future):


A "legal" URL (as per RFC 3986) does not contain brackets, such symbols should 
be used URL encoded: %5B and %5D.


This said: I don't know exactly why brackets cause a problem in this case. It 
could still be worth digging into and see if libcurl could deal with them 
better here...


--

 / daniel.haxx.se

Re: imap-send with gmail: curl_easy_perform() failed: URL using bad/illegal format or missing URL

2017-11-30 Thread Nicolas Morey-Chaisemartin


On 30/11/2017 at 10:39, Nicolas Morey-Chaisemartin wrote:
>
> On 30/11/2017 at 03:04, Jonathan Nieder wrote:
>> (+cc: Nicolas)
>> Hi,
>>
>> Doron Behar wrote:
>>
>>> I'm trying to send a patch with the command `git imap-send`, I used the
>>> examples in the manual page as the main reference for my configuration:
>>>
>>> ```
>>> [imap]
>>> folder = "[Gmail]/Drafts"
>>> host = imaps://imap.gmail.com
>>> user = doron.be...@gmail.com
>>> port = 993
>>> sslverify = false
>>> ```
>>>
>>> This is my `cat patch.out | git imap-send` output:
>>>
>>> ```
>>> Password for 'imaps://doron.be...@gmail.com@imap.gmail.com':
>>> sending 3 messages
>>> curl_easy_perform() failed: URL using bad/illegal format or missing URL
>>> ```
>> Thanks for reporting this.  I suspect this is related to
>> v2.15.0-rc0~63^2 (imap-send: use curl by default when possible,
>> 2017-09-14) --- e.g. perhaps our custom IMAP code was doing some
>> escaping on the username that libcurl does not do.
>>
>> "man git imap-send" says this is a recommended configuration, so I
>> don't think it's a configuration error.
>>
>> What platform are you on?  What version of libcurl are you using?
>>
>> In libcurl::lib/easy.c I am also seeing
>>
>> if(mcode)
>>   return CURLE_URL_MALFORMAT; /* TODO: return a proper error! */
>>
>> which looks suspicious.
>>
>> Nicolas, am I on the right track?
>>
>> Thanks,
>> Jonathan
>>
> This is due to the weird "[Gmail]" prefix in the folder.
> I tried manually replacing it with:
>     folder = %5BGmail%5D/Drafts
> in .git/config and it works.
>
> curl is doing some fancy handling with brackets and braces. It makes sense for 
> multiple FTP downloads like ftp://ftp.numericals.com/file[1-100].txt, not in 
> our case.
> The curl command line has a --globoff argument to disable this "regexp" 
> support and it seems to fix the gmail case.

In fact no, StackOverflow was wrong :)

> However I couldn't find a way to change this value through the API...
>
> I guess we should open a bug upstream to get access to this setting through 
> the API and add a patch that HTTP encode brackets and braces in the meantime.
>
This means we have to URI encode the folder. Do we have a helper for that?

Nicolas


[PATCH] doc: clarify triangular workflow

2017-11-30 Thread Timothee Albertin
Change the documentation about the triangular workflow because it was
not clear enough for a new contributor.

With clearer and more precise documentation, new Git contributors
will spend less time understanding this document and the way Git works.

Based-on-patch-by: Jordan DE GEA 
Signed-off-by: Michael Haggerty 
Signed-off-by: Matthieu Moy 
Signed-off-by: Daniel Bensoussan 
Signed-off-by: Nathan Payre 
Signed-off-by: Timothee Albertin 
---
 Documentation/gitworkflows.txt | 203 -
 1 file changed, 201 insertions(+), 2 deletions(-)

diff --git a/Documentation/gitworkflows.txt b/Documentation/gitworkflows.txt
index 02569d0..21f6dc8 100644
--- a/Documentation/gitworkflows.txt
+++ b/Documentation/gitworkflows.txt
@@ -407,8 +407,8 @@ follows.
 `git pull  `
 =
 
-Occasionally, the maintainer may get merge conflicts when he tries to
-pull changes from downstream.  In this case, he can ask downstream to
+Occasionally, the maintainers may get merge conflicts when they try to
+pull changes from downstream.  In this case, they can ask downstream to
 do the merge and resolve the conflicts themselves (perhaps they will
 know better how to resolve them).  It is one of the rare cases where
 downstream 'should' merge from upstream.
@@ -464,6 +464,205 @@ in patches to figure out the merge base.  See linkgit:git-am[1] for
 other options.
 
 
+TRIANGULAR WORKFLOW
+-------------------
+
+Introduction
+~~~~~~~~~~~~
+
+In some projects, contributors cannot push directly to the project but
+have to submit their commits to the maintainer for review (e.g. as pull
+requests).  For these projects, it's common to use what's called a
+*triangular workflow*:
+
+- The project maintainer publishes a repository, called **UPSTREAM** in
+  this document, which is read-only for contributors.  They can clone
+  and fetch from this repository.
+- Contributors publish their modifications by pushing to a repository,
+  called **PUBLISH** in this document, and request a merge (for
+  example, by opening a pull request).
+- If the maintainers accept the changes, they merge them into the
+  **UPSTREAM** repository.
+
+This workflow is commonly used on platforms like Bitbucket,
+GitHub or GitLab which provide a dedicated mechanism for requesting merges.
+
+
+     ------------                      ------------
+    |  UPSTREAM  |     maintainer     |  PUBLISH   |
+    |            |<- - - - - - - - - -|            |
+     ------------                      ------------
+          \                                /
+     fetch \                              / push
+         |  \                            /  ^
+         v   \                          /   |
+              \                        /
+              --------------------------
+             |          LOCAL          |
+              --------------------------
+
+
+Motivations
+~~~~~~~~~~~
+
+* Allows contributors to work with Git even if they don't have
+write access to **UPSTREAM**.
+
+With the triangular workflow, contributors have write access
+to **PUBLISH** and push their code there.  Only the
+maintainers merge from **PUBLISH** to **UPSTREAM**.
+
+* Code review is done before integration.
+
+In a triangular workflow, the rest of the community or the company
+can review the code before it's in production.  Everyone can read
+**PUBLISH**, so everyone can review code before the maintainers merge
+it to **UPSTREAM**.  In free software, anyone can
+propose code.  Reviewers accept the code when everyone agrees
+with it.
+
+* Encourages clean history by using `rebase -i` and `push --force` to
+the public fork before the code is merged.
+
+This is just a side-effect of the "review before merge" mentioned
+above, but this is still a good point.
+
+
+Here are the configuration variables you will need to arrange your
+workflow.
+
+Preparation as a contributor
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Cloning from **UPSTREAM**.
+
+==
+`git clone <UPSTREAM URL>`
+==
+
+If **PUBLISH** doesn't exist, a contributor can publish their own repository.
+**PUBLISH** contains modifications before integration.
+
+
+* `git clone `
+* `git remote add `
+* `git push`
+
+
+Adding **UPSTREAM** remote:
+
+===
+`git remote add upstream <UPSTREAM URL>`
+===
+
+With the `remote add` above, `git pull upstream` pulls from
+**UPSTREAM** without spelling out its URL. In addition, the `git pull`
+command (without arguments) can be used to pull from **UPSTREAM**.
+
+For each branch requiring a triangular workflow, set
+`branch.<branch>.remote` and `branch.<branch>.pushRemote` to set up
+the **UPSTREAM** and **PUBLISH** repositories.
+
+Example with master as <branch>:
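The two settings above might be arranged like this (a hypothetical sketch; `upstream` and `publish` are assumed remote names for **UPSTREAM** and **PUBLISH**, and the URLs are made up):

```shell
#!/bin/sh
# Hypothetical triangular setup for the "master" branch: fetch from the
# "upstream" remote, push to the "publish" remote.
cd "$(mktemp -d)" && git init -q work && cd work

git remote add upstream https://example.com/upstream.git
git remote add publish  https://example.com/publish.git

git config branch.master.remote     upstream
git config branch.master.pushRemote publish

git config --get branch.master.pushRemote   # prints: publish
```

With this in place, a plain `git pull` on master fetches from `upstream` while a plain `git push` goes to `publish`.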

Re: imap-send with gmail: curl_easy_perform() failed: URL using bad/illegal format or missing URL

2017-11-30 Thread Nicolas Morey-Chaisemartin


On 30/11/2017 at 03:04, Jonathan Nieder wrote:
> (+cc: Nicolas)
> Hi,
>
> Doron Behar wrote:
>
>> I'm trying to send a patch with the command `git imap-send`, I used the
>> examples in the manual page as the main reference for my configuration:
>>
>> ```
>> [imap]
>>  folder = "[Gmail]/Drafts"
>>  host = imaps://imap.gmail.com
>>  user = doron.be...@gmail.com
>>  port = 993
>>  sslverify = false
>> ```
>>
>> This is my `cat patch.out | git imap-send` output:
>>
>> ```
>> Password for 'imaps://doron.be...@gmail.com@imap.gmail.com':
>> sending 3 messages
>> curl_easy_perform() failed: URL using bad/illegal format or missing URL
>> ```
> Thanks for reporting this.  I suspect this is related to
> v2.15.0-rc0~63^2 (imap-send: use curl by default when possible,
> 2017-09-14) --- e.g. perhaps our custom IMAP code was doing some
> escaping on the username that libcurl does not do.
>
> "man git imap-send" says this is a recommended configuration, so I
> don't think it's a configuration error.
>
> What platform are you on?  What version of libcurl are you using?
>
> In libcurl::lib/easy.c I am also seeing
>
> if(mcode)
>   return CURLE_URL_MALFORMAT; /* TODO: return a proper error! */
>
> which looks suspicious.
>
> Nicolas, am I on the right track?
>
> Thanks,
> Jonathan
>

This is due to the weird "[Gmail]" prefix in the folder.
I tried manually replacing it with:
    folder = %5BGmail%5D/Drafts
in .git/config and it works.

curl is doing some fancy handling with brackets and braces. It makes sense for 
multiple FTP downloads like ftp://ftp.numericals.com/file[1-100].txt, not in 
our case.
The curl command line has a --globoff argument to disable this "regexp" support 
and it seems to fix the gmail case.
However I couldn't find a way to change this value through the API...

I guess we should open a bug upstream to get access to this setting through the 
API and add a patch that HTTP encode brackets and braces in the meantime.

Nicolas



Re: [PATCH] Makefile: replace perl/Makefile.PL with simple make rules

2017-11-30 Thread Ævar Arnfjörð Bjarmason

On Thu, Nov 30 2017, Jonathan Nieder jotted:

> Hi,
>
> Ævar Arnfjörð Bjarmason wrote:
>
>> Replace the perl/Makefile.PL and the fallback perl/Makefile used under
>> NO_PERL_MAKEMAKER=NoThanks with a much simpler implementation heavily
>> inspired by how the i18n infrastructure's build process works[1].
>
> Yay!  This looks exciting.
>
> One quick comment:
>
> [...]
>>  * We don't build the Git(3) Git::I18N(3) etc. man pages from the
>>embedded perldoc. I suspect nobody really cares, these are mostly
>>internal APIs, and if someone's developing against them they likely
>>know enough to issue a "perldoc" against the installed file to get
>>the same result.
>>
>>But this is a change in how Git is installed now on e.g. CentOS &
>>Debian which carry these manpages. They could be added (via
>>pod2man) if anyone really cares.
>>
>>I doubt they will. The reason these were built in the first place
>>was as a side-effect of how ExtUtils::MakeMaker works.
>
> Debian cares (see
> https://www.debian.org/doc/packaging-manuals/perl-policy/ch-module_packages.html
> for details).
>
> I'll try applying this patch and seeing what happens some time this
> week.

It just says you have to install the manpages in such-and-such a place,
but I don't have any. There, policy issue fixed :)

More seriously, it seems to me that the only reason we have these
manpages in the first place is because of emergent effects. *Maybe* I'm
wrong about someone using Git.pm as an external API, is that the case?

I was assuming this was more of a case where we were manifying the
equivalent of Documentation/technical/api-*.txt and shipping them as
user docs just because that's what EU::MM will do by default, and nobody
thought to stop it.

But sure, if we still want these I can just provide them, here's the
relevant generated perl.mak section:

POD2MAN_EXE = $(PERLRUN) "-MExtUtils::Command::MM" -e pod2man "--"
POD2MAN = $(POD2MAN_EXE)
manifypods : pure_all config  \
	Git.pm \
	Git/I18N.pm \
	Git/SVN/Editor.pm \
	Git/SVN/Fetcher.pm \
	Git/SVN/Memoize/YAML.pm \
	Git/SVN/Prompt.pm \
	Git/SVN/Ra.pm \
	Git/SVN/Utils.pm
	$(NOECHO) $(POD2MAN) --section=$(MAN3EXT) --perm_rw=$(PERM_RW) -u \
	  Git.pm $(INST_MAN3DIR)/Git.$(MAN3EXT) \
	  Git/I18N.pm $(INST_MAN3DIR)/Git::I18N.$(MAN3EXT) \
	  Git/SVN/Editor.pm $(INST_MAN3DIR)/Git::SVN::Editor.$(MAN3EXT) \
	  Git/SVN/Fetcher.pm $(INST_MAN3DIR)/Git::SVN::Fetcher.$(MAN3EXT) \
	  Git/SVN/Memoize/YAML.pm $(INST_MAN3DIR)/Git::SVN::Memoize::YAML.$(MAN3EXT) \
	  Git/SVN/Prompt.pm $(INST_MAN3DIR)/Git::SVN::Prompt.$(MAN3EXT) \
	  Git/SVN/Ra.pm $(INST_MAN3DIR)/Git::SVN::Ra.$(MAN3EXT) \
	  Git/SVN/Utils.pm $(INST_MAN3DIR)/Git::SVN::Utils.$(MAN3EXT)

I.e. we'd just need to create the mandir, then call pod2man.

However, even then we should consider what I've noted above and decide
which modules we really want to be shipping docs for, e.g. Git::I18N is
never going to be used by anyone external, nor is the Git::SVN stuff.

I think the only thing we're talking about shipping manpages for is
*maybe* just Git.pm itself, no?

I don't really care, so if others want to ship them all I'll just hack
it up. This just seemed like a bug to fix while I was at it.


Re: [git-users] How hard would it be to implement sparse fetching/pulling?

2017-11-30 Thread Konstantin Khomoutov
On Wed, Nov 29, 2017 at 06:42:54PM -0800, vit via Git for human beings wrote:

> I'm looking for ways to improve fetch/pull/clone time for large git 
> (mono)repositories with unrelated source trees (that span across multiple 
> services).
> I've found sparse checkout approach appealing and helpful for most of 
> client-side operations (e.g. status, reset, commit, add etc)
> The problem is that there is no feature like sparse fetch/pull in git, this 
> means that ALL objects in unrelated trees are always fetched.
> It takes a lot of time for large repositories and results in some practical 
> scalability limits for git.
> This forced some large companies like Facebook and Google to move to 
> Mercurial as they were unable to improve client-side experience with git 
> and Microsoft has developed GVFS which seems to be a step back to CVCS 
> world.
[...]

(To anyone interested, there's a cross-post to the main Git list which
Vitaly failed to mention: [1]. I think it could spark some interesting
discussion.)

As to the essence of the question, I think you blame GVFS for no real
reason. While Microsoft is being Microsoft — their implementation of
GVFS is written in .NET and *requires* Windows 10 (this one is beyond
me), it's based on an open protocol [2] which basically assumes the
presence of a RESTful HTTP endpoint at the "Git server side" and
apparently designed to work well with the repository format the current
stock Git uses which makes it implementable on both sides by anyone
interested.

The second hint I have is that the idea of fetching data lazily
has been circulating among the Git developers for some time already, and
something is really being done in this vein, so you could check and see
what's there [3, 4] and maybe trial it and help out those who work on this
stuff.

1. 
https://public-inbox.org/git/CANxXvsMbpBOSRKaAi8iVUikfxtQp=kofz60n0phxs+r+q1k...@mail.gmail.com/
2. https://github.com/Microsoft/GVFS/blob/master/Protocol.md
3. https://public-inbox.org/git/?q=lazy+fetch
4. https://public-inbox.org/git/?q=partial+clone



Re: [ANNOUNCE] Git for Windows 2.15.1(2), was Re: Git for Windows 2.15.1

2017-11-30 Thread stefan.naewe
Am 30.11.2017 um 02:50 schrieb Johannes Schindelin:
> Hi all,
> 
> unfortunately, a last-minute bug slipped in: MSYS2 updated vim (including
> its defaults.vim, in a way that interacted badly with Git for Windows'
> configuration). The result was that an ugly warning is shown every time a
> user opens vim 

But only if the user doesn't have a ~/.vimrc file!

> (which is still the default editor of Git for Windows, for
> historical reasons).
> Git for Windows v2.15.1(2) fixes just this one bug, and can be downloaded
> here:
> 
>   https://github.com/git-for-windows/git/releases/tag/v2.15.1.windows.2
> 
> Ciao,
> Johannes

Stefan
-- 

/dev/random says: Ignorance can be cured. Stupid is forever.
python -c "print '73746566616e2e6e616577654061746c61732d656c656b74726f6e696b2e636f6d'.decode('hex')"
GPG Key fingerprint = 2DF5 E01B 09C3 7501 BCA9  9666 829B 49C5 9221 27AF