[PATCH] completion: teach 'scissors' mode to 'git commit --cleanup='

2015-06-07 Thread SZEDER Gábor
Signed-off-by: SZEDER Gábor 
---
 contrib/completion/git-completion.bash | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/completion/git-completion.bash b/contrib/completion/git-completion.bash
index bfc74e9d57..a1098765f6 100644
--- a/contrib/completion/git-completion.bash
+++ b/contrib/completion/git-completion.bash
@@ -1108,7 +1108,7 @@ _git_commit ()
 
case "$cur" in
--cleanup=*)
-   __gitcomp "default strip verbatim whitespace
+   __gitcomp "default scissors strip verbatim whitespace
" "" "${cur##--cleanup=}"
return
;;
-- 
2.4.3.384.g605df7b

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] commit: cope with scissors lines in commit message

2015-06-07 Thread SZEDER Gábor
The diff and submodule shortlog appended to the commit message template
by 'git commit --verbose' are not stripped when the commit message
contains an indented scissors line.

When cleaning up a commit message with 'git commit --verbose' or
'--cleanup=scissors' the code is careful and triggers only on a pure
scissors line, i.e. a line containing nothing but a comment character, a
space, and the scissors cut.  This is good, because people can embed
scissors lines in the commit message while using 'git commit --verbose',
and the text they write after their indented scissors line doesn't get
deleted.

While doing so, however, the cleanup function only looks at the first
line matching the scissors pattern and if it doesn't start at the
beginning of the line, then the function just returns without performing
any cleanup.  This is bad, because a "real" scissors line added by 'git
commit --verbose' might follow, and in that case the diff and submodule
shortlog get included in the commit message.

To fix this, don't bail out when a scissors line doesn't start at the
beginning of the line; instead, keep looking for a non-indented
scissors line.
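The new scan can be sketched in Python; this is an illustrative translation of the changed loop, with made-up constant and function names, not git's C code:

```python
COMMENT_CHAR = "#"
CUT_LINE = "------------------------ >8 ------------------------"

def truncate_at_cut_line(message):
    """Drop everything from the first scissors line that starts a line;
    indented scissors lines are skipped, not treated as the cut."""
    pattern = "%s %s" % (COMMENT_CHAR, CUT_LINE)
    pos = message.find(pattern)
    while pos != -1:
        # Only a match at the start of the buffer or right after a
        # newline is a real cut line; keep scanning past indented ones.
        if pos == 0 or message[pos - 1] == "\n":
            return message[:pos]
        pos = message.find(pattern, pos + 1)
    return message
```

The old code did the equivalent of a single find() and returned if the first match was indented; the loop is what makes a later non-indented cut line take effect.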

Signed-off-by: SZEDER Gábor 
---
 t/t7502-commit.sh | 25 +++++++++++++++++++++++++
 wt-status.c   | 12 ++++++++----
 2 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/t/t7502-commit.sh b/t/t7502-commit.sh
index 2e0d557243..77db3a31c3 100755
--- a/t/t7502-commit.sh
+++ b/t/t7502-commit.sh
@@ -239,6 +239,31 @@ EOF
 
 '
 
+test_expect_success 'cleanup commit messages (scissors option,-F,-e, scissors line in commit message)' '
+
+   echo >>negative &&
+   cat >text <<EOF &&
+# to be kept
+
+  # ------------------------ >8 ------------------------
+  # to be kept, too
+# ------------------------ >8 ------------------------
+to be removed
+# ------------------------ >8 ------------------------
+to be removed, too
+EOF
+
+   cat >expect <<EOF &&
+# to be kept
+
+  # ------------------------ >8 ------------------------
+  # to be kept, too
+EOF
+   git commit --cleanup=scissors -e -F text -a &&
+   git cat-file -p HEAD |sed -e "1,/^\$/d">actual &&
+   test_cmp expect actual
+'
+
+
 test_expect_success 'cleanup commit messages (strip option,-F)' '
 
echo >>negative &&
diff --git a/wt-status.c b/wt-status.c
index c56c78fb6f..e6d171a0cb 100644
--- a/wt-status.c
+++ b/wt-status.c
@@ -822,13 +822,17 @@ conclude:
 
 void wt_status_truncate_message_at_cut_line(struct strbuf *buf)
 {
-   const char *p;
+   const char *p = buf->buf;
struct strbuf pattern = STRBUF_INIT;
 
strbuf_addf(&pattern, "%c %s", comment_line_char, cut_line);
-   p = strstr(buf->buf, pattern.buf);
-   if (p && (p == buf->buf || p[-1] == '\n'))
-   strbuf_setlen(buf, p - buf->buf);
+   while ((p = strstr(p, pattern.buf))) {
+   if (p == buf->buf || p[-1] == '\n') {
+   strbuf_setlen(buf, p - buf->buf);
+   break;
+   }
+   p++;
+   }
strbuf_release(&pattern);
 }
 
-- 
2.4.2.423.gad3a03f



[PATCH] git-rebase--interactive.sh: add config option for custom

2015-06-07 Thread Michael Rappazzo
A config option 'rebase.instructionFormat' can override the
default 'oneline' format of the rebase instruction list.

Since the list is parsed using the left, right, or boundary mark plus
the sha1, these marks are prepended to the instruction format.
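As a rough sketch of why the marks are prepended (this is not the script itself, and the helper name is invented): the downstream pipeline keeps only '>' lines and then splits the hash from the formatted remainder.

```python
def pick_instructions(rev_list_lines):
    """Mimic the `sed -n "s/^>//p"` | `read -r sha1 rest` pipeline:
    keep right-side ('>') commits, drop '<' and boundary marks, then
    split the abbreviated hash from the rest of the format expansion."""
    picks = []
    for line in rev_list_lines:
        if not line.startswith(">"):
            continue  # '<' and '-' (boundary) lines are filtered out
        sha1, _, rest = line[1:].partition(" ")
        picks.append((sha1, rest))
    return picks
```

Because %m and %h are forced onto the front, the parsing keeps working no matter what the user puts in rebase.instructionFormat.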

Signed-off-by: Michael Rappazzo 
---
 git-rebase--interactive.sh | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/git-rebase--interactive.sh b/git-rebase--interactive.sh
index dc3133f..cc79b81 100644
--- a/git-rebase--interactive.sh
+++ b/git-rebase--interactive.sh
@@ -977,7 +977,14 @@ else
revisions=$onto...$orig_head
shortrevisions=$shorthead
 fi
-git rev-list $merges_option --pretty=oneline --reverse --left-right --topo-order \
+format=$(git config --get rebase.instructionFormat)
+if test -z "$format"
+then
+   format="%s"
+fi
+# the 'rev-list .. | sed' requires %m to parse; the instruction requires %h to parse
+format="%m%h ${format}"
+git rev-list $merges_option --pretty="${format}" --reverse --left-right --topo-order \
$revisions ${restrict_revision+^$restrict_revision} | \
sed -n "s/^>//p" |
 while read -r sha1 rest

---
https://github.com/git/git/pull/146

Re: [PATCHv2 3/3] git-p4: fixing --changes-block-size handling

2015-06-07 Thread Lex Spoon
Unless I am reading something wrong, the "new_changes" variable could
be dropped now. It was needed in the -m version to detect the smallest
change number that was returned. Otherwise it looks good to me.


[PATCHv2 3/3] git-p4: fixing --changes-block-size handling

2015-06-07 Thread Luke Diamand
The --changes-block-size handling was intended to help when
a user has a limited "maxscanrows" (see "p4 group"). It used
"p4 changes -m $maxchanges" to limit the number of results.

Unfortunately, it turns out that the "maxscanrows" and "maxresults"
limits are actually applied *before* the "-m maxchanges" parameter
is considered (experimentally).

Fix the block-size handling so that it gets blocks of changes
limited by revision number ($Start..$Start+$N, etc). This limits
the number of results early enough that both sets of tests pass.

Note that many other Perforce operations can fail for the same
reason (p4 print, p4 files, etc.) and it's probably not possible to
work around this. In the real world, this is probably not
usually a problem.
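The block strategy boils down to something like this Python sketch; run_p4_changes stands in for the real "p4 changes" invocation, and this is an illustration, not git-p4's actual code:

```python
def changes_in_blocks(run_p4_changes, change_start, change_end, block_size):
    """Fetch change numbers in explicit @start,end windows so the
    server-side maxscanrows/maxresults limits only ever see small
    queries, instead of relying on 'p4 changes -m N'."""
    changes = {}
    while True:
        end = min(change_end, change_start + block_size)
        for change_num in run_p4_changes(change_start, end):
            changes[change_num] = True   # dict de-duplicates across paths
        if end >= change_end:
            break
        change_start = end + 1
    return sorted(changes)
```

The key difference from the old scheme is that the window boundaries are revision numbers chosen by the client, so no single query can exceed the group limits.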

Signed-off-by: Luke Diamand 
---
 git-p4.py   | 45 -
 t/t9818-git-p4-block.sh | 12 ++--
 2 files changed, 34 insertions(+), 23 deletions(-)

diff --git a/git-p4.py b/git-p4.py
index 26ad4bc..4be0037 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -249,6 +249,10 @@ def p4_reopen(type, f):
 def p4_move(src, dest):
 p4_system(["move", "-k", wildcard_encode(src), wildcard_encode(dest)])
 
+def p4_last_change():
+results = p4CmdList(["changes", "-m", "1"])
+return int(results[0]['change'])
+
 def p4_describe(change):
 """Make sure it returns a valid result by checking for
the presence of field "time".  Return a dict of the
@@ -746,39 +750,46 @@ def p4ChangesForPaths(depotPaths, changeRange, block_size):
 assert depotPaths
 assert block_size
 
+# We need the most recent change list number since we can't just
+# use #head in block mode.
+lastChange = p4_last_change()
+
 # Parse the change range into start and end
 if changeRange is None or changeRange == '':
-changeStart = '@1'
-changeEnd = '#head'
+changeStart = 1
+changeEnd = lastChange
 else:
 parts = changeRange.split(',')
 assert len(parts) == 2
-changeStart = parts[0]
-changeEnd = parts[1]
+changeStart = int(parts[0][1:])
+if parts[1] == '#head':
+changeEnd = lastChange
+else:
+changeEnd = int(parts[1])
 
 # Accumulate change numbers in a dictionary to avoid duplicates
 changes = {}
 
 for p in depotPaths:
 # Retrieve changes a block at a time, to prevent running
-# into a MaxScanRows error from the server.
-start = changeStart
-end = changeEnd
-get_another_block = True
-while get_another_block:
-new_changes = []
+# into a MaxResults/MaxScanRows error from the server.
+
+while True:
+end = min(changeEnd, changeStart + block_size)
+
 cmd = ['changes']
-cmd += ['-m', str(block_size)]
-cmd += ["%s...%s,%s" % (p, start, end)]
+cmd += ["%s...@%d,%d" % (p, changeStart, end)]
+
+new_changes = []
 for line in p4_read_pipe_lines(cmd):
 changeNum = int(line.split(" ")[1])
 new_changes.append(changeNum)
 changes[changeNum] = True
-if len(new_changes) == block_size:
-get_another_block = True
-end = '@' + str(min(new_changes))
-else:
-get_another_block = False
+
+if end >= changeEnd:
+break
+
+changeStart = end + 1
 
 changelist = changes.keys()
 changelist.sort()
diff --git a/t/t9818-git-p4-block.sh b/t/t9818-git-p4-block.sh
index aae1121..3b3ae1f 100755
--- a/t/t9818-git-p4-block.sh
+++ b/t/t9818-git-p4-block.sh
@@ -49,11 +49,11 @@ test_expect_success 'Default user cannot fetch changes' '
! p4 changes -m 1 //depot/...
 '
 
-test_expect_failure 'Clone the repo' '
+test_expect_success 'Clone the repo' '
git p4 clone --dest="$git" --changes-block-size=7 --verbose //depot/included@all
 '
 
-test_expect_failure 'All files are present' '
+test_expect_success 'All files are present' '
echo file.txt >expected &&
test_write_lines outer0.txt outer1.txt outer2.txt outer3.txt outer4.txt >>expected &&
test_write_lines outer5.txt >>expected &&
@@ -61,18 +61,18 @@ test_expect_failure 'All files are present' '
test_cmp expected current
 '
 
-test_expect_failure 'file.txt is correct' '
+test_expect_success 'file.txt is correct' '
echo 55 >expected &&
test_cmp expected "$git/file.txt"
 '
 
-test_expect_failure 'Correct number of commits' '
+test_expect_success 'Correct number of commits' '
(cd "$git" && git log --oneline) >log &&
wc -l log &&
test_line_count = 43 log
 '
 
-test_expect_failure 'Previous version of file.txt is correct' '
+test_expect_success 'Previous version of file.txt is correct' '
(cd "$git" && git checkout HEAD^^) &&
echo 53 >expected &&
test_cmp expected "$git/file.txt"
'

[PATCHv2 2/3] git-p4: test with limited p4 server results

2015-06-07 Thread Luke Diamand
Change the --changes-block-size git-p4 test to use an account with
limited "maxresults" and "maxscanrows" values.

These conditions are applied in the server *before* the "-m maxchanges"
parameter to "p4 changes" is applied, and so the strategy that git-p4
uses for limiting the number of changes does not work. As a result,
the tests all fail.

Note that "maxscanrows" is set quite high, as it appears to not only
limit results from "p4 changes", but *also* limits results from
"p4 print". Files that have more than "maxscanrows" changes seem
(experimentally) to be impossible to print. There's no good way to
work around this.

Signed-off-by: Luke Diamand 
Acked-by: Lex Spoon 
---
 t/t9818-git-p4-block.sh | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/t/t9818-git-p4-block.sh b/t/t9818-git-p4-block.sh
index 79765a4..aae1121 100755
--- a/t/t9818-git-p4-block.sh
+++ b/t/t9818-git-p4-block.sh
@@ -8,6 +8,19 @@ test_expect_success 'start p4d' '
start_p4d
 '
 
+create_restricted_group() {
+   p4 group -i <<-EOF
+   Group: restricted
+   MaxResults: 7
+   MaxScanRows: 40
+   Users: author
+   EOF
+}
+
+test_expect_success 'Create group with limited maxrows' '
+   create_restricted_group
+'
+
 test_expect_success 'Create a repo with many changes' '
(
client_view "//depot/included/... //client/included/..." \
@@ -32,11 +45,15 @@ test_expect_success 'Create a repo with many changes' '
)
 '
 
-test_expect_success 'Clone the repo' '
+test_expect_success 'Default user cannot fetch changes' '
+   ! p4 changes -m 1 //depot/...
+'
+
+test_expect_failure 'Clone the repo' '
git p4 clone --dest="$git" --changes-block-size=7 --verbose //depot/included@all
 '
 
-test_expect_success 'All files are present' '
+test_expect_failure 'All files are present' '
echo file.txt >expected &&
test_write_lines outer0.txt outer1.txt outer2.txt outer3.txt outer4.txt >>expected &&
test_write_lines outer5.txt >>expected &&
@@ -44,18 +61,18 @@ test_expect_success 'All files are present' '
test_cmp expected current
 '
 
-test_expect_success 'file.txt is correct' '
+test_expect_failure 'file.txt is correct' '
echo 55 >expected &&
test_cmp expected "$git/file.txt"
 '
 
-test_expect_success 'Correct number of commits' '
+test_expect_failure 'Correct number of commits' '
(cd "$git" && git log --oneline) >log &&
wc -l log &&
test_line_count = 43 log
 '
 
-test_expect_success 'Previous version of file.txt is correct' '
+test_expect_failure 'Previous version of file.txt is correct' '
(cd "$git" && git checkout HEAD^^) &&
echo 53 >expected &&
test_cmp expected "$git/file.txt"
@@ -85,7 +102,7 @@ test_expect_success 'Add some more files' '
 
 # This should pick up the 10 new files in "included", but not be confused
 # by the additional files in "excluded"
-test_expect_success 'Syncing files' '
+test_expect_failure 'Syncing files' '
(
cd "$git" &&
git p4 sync --changes-block-size=7 &&
-- 
2.4.1.502.gb11c5ab



[PATCHv2 1/3] git-p4: additional testing of --changes-block-size

2015-06-07 Thread Luke Diamand
Add additional tests of some corner-cases of the
--changes-block-size git-p4 parameter.

Also reduce the number of p4 changes created during the
tests, so that they complete faster.

Signed-off-by: Luke Diamand 
Acked-by: Lex Spoon 
---
 t/t9818-git-p4-block.sh | 56 +
 1 file changed, 47 insertions(+), 9 deletions(-)

diff --git a/t/t9818-git-p4-block.sh b/t/t9818-git-p4-block.sh
index 153b20a..79765a4 100755
--- a/t/t9818-git-p4-block.sh
+++ b/t/t9818-git-p4-block.sh
@@ -8,18 +8,21 @@ test_expect_success 'start p4d' '
start_p4d
 '
 
-test_expect_success 'Create a repo with ~100 changes' '
+test_expect_success 'Create a repo with many changes' '
(
-   cd "$cli" &&
+   client_view "//depot/included/... //client/included/..." \
+   "//depot/excluded/... //client/excluded/..." &&
+   mkdir -p "$cli/included" "$cli/excluded" &&
+   cd "$cli/included" &&
>file.txt &&
p4 add file.txt &&
p4 submit -d "Add file.txt" &&
-   for i in $(test_seq 0 9)
+   for i in $(test_seq 0 5)
do
>outer$i.txt &&
p4 add outer$i.txt &&
p4 submit -d "Adding outer$i.txt" &&
-   for j in $(test_seq 0 9)
+   for j in $(test_seq 0 5)
do
p4 edit file.txt &&
echo $i$j >file.txt &&
@@ -30,33 +33,68 @@ test_expect_success 'Create a repo with ~100 changes' '
 '
 
 test_expect_success 'Clone the repo' '
-   git p4 clone --dest="$git" --changes-block-size=10 --verbose //depot@all
+   git p4 clone --dest="$git" --changes-block-size=7 --verbose //depot/included@all
 '
 
 test_expect_success 'All files are present' '
echo file.txt >expected &&
test_write_lines outer0.txt outer1.txt outer2.txt outer3.txt outer4.txt 
>>expected &&
-   test_write_lines outer5.txt outer6.txt outer7.txt outer8.txt outer9.txt 
>>expected &&
+   test_write_lines outer5.txt >>expected &&
ls "$git" >current &&
test_cmp expected current
 '
 
 test_expect_success 'file.txt is correct' '
-   echo 99 >expected &&
+   echo 55 >expected &&
test_cmp expected "$git/file.txt"
 '
 
 test_expect_success 'Correct number of commits' '
(cd "$git" && git log --oneline) >log &&
-   test_line_count = 111 log
+   wc -l log &&
+   test_line_count = 43 log
 '
 
 test_expect_success 'Previous version of file.txt is correct' '
(cd "$git" && git checkout HEAD^^) &&
-   echo 97 >expected &&
+   echo 53 >expected &&
test_cmp expected "$git/file.txt"
 '
 
+# Test git-p4 sync, with some files outside the client specification.
+
+p4_add_file() {
+   (cd "$cli" &&
+   >$1 &&
+   p4 add $1 &&
+   p4 submit -d "Added a file" $1
+   )
+}
+
+test_expect_success 'Add some more files' '
+   for i in $(test_seq 0 10)
+   do
+   p4_add_file "included/x$i" &&
+   p4_add_file "excluded/x$i"
+   done &&
+   for i in $(test_seq 0 10)
+   do
+   p4_add_file "excluded/y$i"
+   done
+'
+
+# This should pick up the 10 new files in "included", but not be confused
+# by the additional files in "excluded"
+test_expect_success 'Syncing files' '
+   (
+   cd "$git" &&
+   git p4 sync --changes-block-size=7 &&
+   git checkout p4/master &&
+   ls -l x* > log &&
+   test_line_count = 11 log
+   )
+'
+
 test_expect_success 'kill p4d' '
kill_p4d
 '
-- 
2.4.1.502.gb11c5ab



[PATCHv2 0/3] git-p4: fixing --changes-block-size handling

2015-06-07 Thread Luke Diamand
Updated per Lex's suggestion, so that git-p4 always uses the block mode,
and takes advantage of this to simplify the loop. This exposed a bug
in the termination condition.

One thing to note: 'git p4 sync' claims to support arbitrary p4 revision
specifications. I need to check that this is tested and hasn't been broken
by these changes.

Luke

Luke Diamand (3):
  git-p4: additional testing of --changes-block-size
  git-p4: test with limited p4 server results
  git-p4: fixing --changes-block-size handling

 git-p4.py   | 45 ++
 t/t9818-git-p4-block.sh | 73 +++--
 2 files changed, 92 insertions(+), 26 deletions(-)

-- 
2.4.1.502.gb11c5ab



PATCH [git/contrib] Avoid failing to create ${__git_tcsh_completion_script} when 'set noclobber' is in effect (af7333c)

2015-06-07 Thread Ariel Faigon

Junio,

This is my 1st time doing this, sorry.
I hope this satisfies the git Sign Off procedure.

Problem Description:

tcsh users who happen to have 'set noclobber' elsewhere in their ~/.tcshrc or 
~/.cshrc startup files get a 'File exists' error, and the tcsh completion file
doesn't get generated/updated.  Adding a `!` in the redirect works correctly 
for both clobber and noclobber users.

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or

(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or

(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.

(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

 Signed-off-by: Ariel Faigon 

 git patch follows.

---
 contrib/completion/git-completion.tcsh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/completion/git-completion.tcsh b/contrib/completion/git-completion.tcsh
index 6104a42..4a790d8 100644
--- a/contrib/completion/git-completion.tcsh
+++ b/contrib/completion/git-completion.tcsh
@@ -41,7 +41,7 @@ if ( ! -e ${__git_tcsh_completion_original_script} ) then
exit
 endif
 
-cat << EOF > ${__git_tcsh_completion_script}
+cat << EOF >! ${__git_tcsh_completion_script}
 #!bash
 #
 # This script is GENERATED and will be overwritten automatically.



Re: [PATCH 1/2] git-bisect.sh : create a file if the bisection is in old/new mode, named "BISECT_OLDNEWMODE", so it can easily be seen outside the program without having to read BISECT_TERMS. This wil

2015-06-07 Thread Louis-Alexandre Stuber
Thank you for the feedback. We are trying to apply all of your suggestions, but 
we would prefer to rebase the history before doing some of them (like renaming 
variables). 

About the BISECT_OLDNEWMODE file: The current implementation changes almost 
nothing in revision.c. We thought it was better, even if it needs a new file. 
The code for bisect uses BISECT_TERMS because 3 states are possible: 'bad/good 
mode', 'old/new mode', or 'no bisection started' (if BISECT_TERMS doesn't 
exist). But the other files (like revision.c) don't need all this 
information, so we thought it would be better to check whether a file exists 
instead of reusing BISECT_TERMS, which would require reading its content.
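The existence check being described amounts to something like this illustrative Python sketch; the file names are from the message above, but the helper itself is hypothetical:

```python
import os

def bisect_mode(git_dir):
    """Distinguish the three states without ever reading BISECT_TERMS:
    the marker files' mere existence carries the information."""
    if not os.path.exists(os.path.join(git_dir, "BISECT_TERMS")):
        return "no bisection started"
    if os.path.exists(os.path.join(git_dir, "BISECT_OLDNEWMODE")):
        return "old/new mode"
    return "bad/good mode"
```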


Packing speed

2015-06-07 Thread James Cloos
With 2.4.2 I'm seeing *very* poor packing performance on my workstation.

The 2nd stage, where it does open/mmap/close/munmap on each of the
object files, is processing only about 60 per second.

Hundreds or even thousands per second seems like a reasonable
expectation, not merely dozens.

Is my expectation wrong?

-JimC
-- 
James Cloos  OpenPGP: 0x997A9F17ED7DAEA6


Re: [PATCHv1 3/3] git-p4: fixing --changes-block-size handling

2015-06-07 Thread Luke Diamand

On 07/06/15 17:33, Lex Spoon wrote:

The implementation looks fine, especially given the test cases that
back it up. I am only curious why the block size is set to a default
of None. To put it as concretely as possible: is there any expected
configuration where None would work but 500 would not? We know there
are many cases of the other way around, and those cases are going to
send users to StackOverflow to find the right workaround.


I think it was just caution: it's pretty easy to make it fall back to 
the old non-batched scheme, so if it turns out that there *is* a 
problem, fewer people will hit the problem and we're less likely to have 
a paper-bag release.




Dropping the option would also simplify the code in several places.
The complex logic around get_another_block could be removed, and
instead there could be a loop from start to mostRecentCommit by
block_size. Several places that check "if not block_size" could just
choose the other branch.


Fair point. I'll give it a go and see what happens.

(Plus 500 is a very unnatural number, chosen just because we still place 
some kind of significance on a chance evolutionary accident that gave 
our ape ancestors 5 digits on each hand :-)


Luke


Re: [PATCHv1 0/3] git-p4: fixing --changes-block-size support

2015-06-07 Thread Luke Diamand

On 07/06/15 17:01, Lex Spoon wrote:

Great work.


Thanks! I actually found the problem in my day job, so it was very handy 
having all the infrastructure already in place!


For curiosity's sake, the -m solution has been observed to work on at
least one Perforce installation. However clearly it doesn't work on
others, so the batch ranges approach looks like it will be better.


Yes, I can easily imagine that it's changed from one version to the 
next. I tried going back to a 2014.2 server which still had the same 
problem (with maxresults), but my investigations were not very exhaustive!




Based on what has been seen so far, the Perforce maxscanrows setting
must be applying the low-level database queries that Perforce uses
internally in its implementation. That makes the precise effect on
external queries rather hard to predict. It likely also depends on the
version of Perforce.


Indeed. All sorts of things can cause it to fail; I've seen it reject 
"p4 files" and "p4 print", albeit with artificially low maxscanrows and 
maxresults values. I think this means there's no way to ever make it 
reliably work for all possible sizes of depot and values of 
maxscanrows/maxresults.


Luke



Re: [PATCHv1 3/3] git-p4: fixing --changes-block-size handling

2015-06-07 Thread Lex Spoon
The implementation looks fine, especially given the test cases that
back it up. I am only curious why the block size is set to a default
of None. To put it as concretely as possible: is there any expected
configuration where None would work but 500 would not? We know there
are many cases of the other way around, and those cases are going to
send users to StackOverflow to find the right workaround.

Dropping the option would also simplify the code in several places.
The complex logic around get_another_block could be removed, and
instead there could be a loop from start to mostRecentCommit by
block_size. Several places that check "if not block_size" could just
choose the other branch.

Lex Spoon


Re: [PATCHv1 2/3] git-p4: test with limited p4 server results

2015-06-07 Thread Lex Spoon
LGTM. That's great adding a user with the appropriate restrictions on
it to really exercise the functionality.  -Lex


Re: [PATCHv1 1/3] git-p4: additional testing of --changes-block-size

2015-06-07 Thread Lex Spoon
I'll add in reviews since I touched similar code, but I don't know
whether it's sufficient given I don't know the code very well.

Anyway, these tests LGTM. Having a smaller test repository is fine,
and the new tests for files outside the client spec are a great idea.
-Lex


Re: [PATCHv1 0/3] git-p4: fixing --changes-block-size support

2015-06-07 Thread Lex Spoon
Great work.

For curiosity's sake, the -m solution has been observed to work on at
least one Perforce installation. However clearly it doesn't work on
others, so the batch ranges approach looks like it will be better.

Based on what has been seen so far, the Perforce maxscanrows setting
must be applying the low-level database queries that Perforce uses
internally in its implementation. That makes the precise effect on
external queries rather hard to predict. It likely also depends on the
version of Perforce.

Lex Spoon


Re: [RFC/PATCH 0/9] add options to ref-filter

2015-06-07 Thread Christian Couder
On Sat, Jun 6, 2015 at 10:03 PM, Karthik Nayak  wrote:
> This is a follow up series to the one posted here
> http://thread.gmane.org/gmane.comp.version-control.git/270922
>
> This patch series adds '--ponints-at', '--merged', '--no-merged' and

s/--ponints-at/--points-at/

> '--contain' options to 'ref-filter' and uses these options in
> 'for-each-ref'.


Re: [PATCH 3/3] stash: require a clean index to apply

2015-06-07 Thread Jeff King
On Sun, Jun 07, 2015 at 08:40:01AM -0400, Jeff King wrote:

> Are you running flake8 across the whole tree, or just on the files with
> proposed changes? Does it need to see the whole tree? If you can get
> away with feeding it single files, then it should be very efficient to
> loop over the output of "git diff-index HEAD" and feed the proposed
> updated blobs to it. If you can get away with feeding single lines, then
> feeding the actual diffs to it would be even better, but I assume that
> is not enough (I do not use flake8 myself).

Like I said, I do not use it, but peeking at the flake8 source code, it
has an "--install-hook" option to set up a pre-commit hook. It looks
like the hook runs "git diff-index --cached --name-only HEAD" to get
the list of modified files, filters that only for "*.py", and then
copies the results into a temporary directory to operate on.
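In Python, that hook strategy looks roughly like the sketch below. Only the git plumbing commands are taken from the description above; the helper name and copy-step details are illustrative, and a real hook would then run flake8 over the returned paths:

```python
import os
import subprocess

def checkout_staged_py_files(dest_dir):
    """Copy the staged (index) version of each modified *.py file into
    dest_dir, so a linter checks what is about to be committed rather
    than whatever happens to be in the working tree."""
    names = subprocess.run(
        ["git", "diff-index", "--cached", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True).stdout
    copied = []
    for path in names.splitlines():
        if not path.endswith(".py"):
            continue
        # ":<path>" names the staged blob for that path.
        blob = subprocess.run(["git", "cat-file", "blob", ":%s" % path],
                              capture_output=True, check=True).stdout
        target = os.path.join(dest_dir, path)
        os.makedirs(os.path.dirname(target) or dest_dir, exist_ok=True)
        with open(target, "wb") as f:
            f.write(blob)
        copied.append(target)
    return copied
```

This avoids the stash dance entirely: the working tree is never touched, so there is no stash to re-apply afterwards.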

-Peff


Re: [PATCH 3/3] stash: require a clean index to apply

2015-06-07 Thread Jeff King
On Thu, Jun 04, 2015 at 08:43:00PM -0400, Jonathan Kamens wrote:

> I'm writing about the patch that Jeff King submitted on April 22, in
> <20150422193101.gc27...@peff.net>, in particular,
> https://github.com/git/git/commit/ed178ef13a26136d86ff4e33bb7b1afb5033f908 .
> It appears that this patch was included in git 2.4.2, and it breaks my
> workflow.
> 
> In particular, I have a pre-commit hook which does the following:
> 
> 1. Stash unstaged changes ("git stash -k").
> 2. Run flake8 over all staged changes.
> 3. If flake8 complains, then error out of the commit.
> 4. Otherwise, apply the stash and exit.

Hrm. The new protection in v2.4.2 is meant to prevent you from losing
your index state during step 4 when we run into a conflict. But here you
know something that git doesn't: that we just created the stash based on
this same state, so it should apply cleanly.

So besides the obvious fix of reverting the patch, we could perhaps do
something along the lines of:

  1. Add a --force option to tell git to do it anyway.

  2. Only have the protection kick in when we would in fact have a
 conflict. This is probably hard to implement, though, as we don't
 know until we do the merge (so it would probably involve teaching
 merge a mode where it bails immediately on conflicts rather than
 writing out the conflicted entries to the index).

However, I am puzzled by something in your workflow: does it work when
you have staged and working tree changes to the same hunk? For example:

  [differing content for HEAD, index, and working tree]
  $ git init
  $ echo base >file && git add file && git commit -m base
  $ echo index >file && git add file
  $ echo working >file

  [make our stash]
  $ git stash -k
  Saved working directory and index state WIP on master: dc7f0dd base
  HEAD is now at dc7f0dd base

  [new version]
  $ git.v2.4.2 stash apply
  Cannot apply stash: Your index contains uncommitted changes.

  [old version]
  $ git.v2.4.1 stash apply
  Auto-merging file
  CONFLICT (content): Merge conflict in file
  $ git diff
  diff --cc file
  index 9015a7a,d26b33d..0000000
  --- a/file
  +++ b/file
  @@@ -1,1 -1,1 +1,5 @@@
  ++<<<<<<< Updated upstream
   +index
  ++=======
  + working
  ++>>>>>>> Stashed changes

So v2.4.1 shows a conflict, and loses the index state you had. Is there
something more to your hook that I'm missing (stash options, or
something else) that covers this case?

> The reason I have to do things this way is as follows. Suppose I did the
> following:
> 
> 1. Stage changes that have a flake8 violation.
> 2. Fix the flake8 violation in the unstaged version of the staged file.
> 3. Commit the previously staged changes.
> 
> If my commit hook runs over the unstaged version of the file, then it won't
> detect the flake8 violation, and as a result the violation will be
> committed.

Yeah, the fundamental pattern makes sense. You want to test what is
going into the commit, not what happens to be in the working tree.

One way to do that would be to checkout the proposed index to a
temporary directory and operate on that. But of course that's
inefficient if most of the files are unchanged.

Are you running flake8 across the whole tree, or just on the files with
proposed changes? Does it need to see the whole tree? If you can get
away with feeding it single files, then it should be very efficient to
loop over the output of "git diff-index HEAD" and feed the proposed
updated blobs to it. If you can get away with feeding single lines, then
feeding the actual diffs to it would be even better, but I assume that
is not enough (I do not use flake8 myself).
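A minimal sketch of that blob-feeding approach (assuming git and flake8
are on PATH; the `--cached` flag, the helper names, and the `.py` filter
are my additions for illustration, not part of the original suggestion):

```python
#!/usr/bin/env python3
"""Sketch of a pre-commit hook that runs flake8 on the staged blob
contents instead of the working tree files."""
import subprocess


def staged_python_blobs(diff_index_output):
    """Parse raw `git diff-index --cached HEAD` lines into (sha, path)
    pairs for added/modified *.py files.  Raw line format:
      :100644 100644 <old-sha> <new-sha> M<TAB>path
    """
    pairs = []
    for line in diff_index_output.splitlines():
        if not line.startswith(':'):
            continue
        meta, _, path = line.partition('\t')
        _old_mode, _new_mode, _old_sha, new_sha, status = meta.split()
        if status in ('A', 'M') and path.endswith('.py'):
            pairs.append((new_sha, path))
    return pairs


def check_staged():
    """Return 0 if every staged Python blob passes flake8, 1 otherwise."""
    out = subprocess.run(['git', 'diff-index', '--cached', 'HEAD'],
                         capture_output=True, text=True, check=True).stdout
    failed = False
    for sha, path in staged_python_blobs(out):
        blob = subprocess.run(['git', 'cat-file', 'blob', sha],
                              capture_output=True, check=True).stdout
        # Feed the staged content on stdin; report the real file name
        # so flake8's messages stay readable.
        result = subprocess.run(['flake8', '--stdin-display-name', path, '-'],
                                input=blob)
        failed = failed or result.returncode != 0
    return 1 if failed else 0

# A .git/hooks/pre-commit script would end with:
#     raise SystemExit(check_staged())
```

This checks exactly what is going into the commit, with no stash at all;
note the sketch ignores renames/copies (status R/C) and merge states.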

-Peff


Re: [PATCH v6 0/11] create ref-filter from for-each-ref

2015-06-07 Thread Karthik Nayak

On 06/06/2015 07:13 PM, Karthik Nayak wrote:

Version four of this patch can be found here:
http://www.mail-archive.com/git@vger.kernel.org/msg70280.html

v5: http://www.mail-archive.com/git@vger.kernel.org/msg70888.html

Changes in v5:
*Rename functions to better suit the code.
*implement filter_refs()
*use FLEX_ARRAY for refname
*other small changes

Changes in v6:
*based on latest master branch (github.com/git/git)
*rename variables named sort to sorting.


Patches 09/11 and 10/11 had some code swapped, weird. Have mailed
new patches.

--
Regards,
Karthik


[PATCH] read-cache: fix untracked cache invalidation when split-index is used

2015-06-07 Thread Nguyễn Thái Ngọc Duy
Before this change, t7063.17 fails. The actual action, though, happens at
t7063.16, where the entry "two" is added back to the index after being
removed at .13. Here we expect a directory invalidation at .16 and none
at .17, where the untracked cache is refreshed. But things do not go as
expected when GIT_TEST_SPLIT_INDEX is set.

The different behavior at .16 when split index is used: the entry "two",
when deleted at .13, is simply marked "deleted". When .16 executes, the
entry resurfaces from the version in the base index. This happens in
merge_base_index(), where add_index_entry() is called to add "two" back
from the base index.

This is where the bug comes from. add_index_entry() is called with the
ADD_CACHE_KEEP_CACHE_TREE flag because this version of "two" is not new;
it breaks neither the cache-tree nor the untracked cache. The code
should check this flag and not invalidate the untracked cache. Instead,
a second invalidation happens, which violates the test expectation. The
fix is obvious.

Noticed-by: Thomas Gummerer 
Signed-off-by: Nguyễn Thái Ngọc Duy 
---
 PS. perhaps I should rename ADD_CACHE_KEEP_CACHE_TREE to
 ADD_CACHE_KEEP_CACHE, or add a new flag .._KEEP_UNTRACKED_CACHE to
 avoid confusion.

 On Sun, Jun 7, 2015 at 1:20 PM, Junio C Hamano  wrote:
 > Thomas Gummerer  writes:
 >
 >> When running the test suite with GIT_TEST_SPLIT_INDEX set, tests 17-18
 >> in t7063 fail.  Unset GIT_TEST_SPLIT_INDEX at the beginning of the test,
 >> in order to fix it.
 >
 > That is not fixing but sweeping the problem under the rug, is it?

 Indeed.

 > Duy, untracked-cache is a fairly new toy and I wouldn't be surprised
 > if it has un-thought-out interaction with split-index which is also
 > a fairly new exotic toy.  As both are from you, can you take a look
 > at it?

 Still a bit slow to address, but I marked Thomas' mail for
 investigation when it came.

 > We may want to make it easier to run tests with TEST-SPLIT-INDEX, if
 > we envision that the feature will bring us sufficient benefit and we
 > would eventually want to encourage its use to more people.  As it
 > stands now, only people who are curious enough opt into trying it
 > out by exporting the environment, which would be done by a tiny
 > minority of developers and users.

 Untracked cache, split index and the last part (*) all aim at a
 smaller audience of users with large work trees. They are not really
 what a common user needs, but I hope they do have users.
 
 We could make the test suite run twice, a normal run and a
 test-split-index run. But I'm not sure if it really justifies
 doubling test time. We will have to deal with this situation when/if
 pack-v4 is merged because we would want to exercise both v3 and v4 as
 much as possible.

 (*) the last part should keep index read time small enough even if
 the index is very large and avoid lstat() at refresh time with
 the help from watchman.

 read-cache.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/read-cache.c b/read-cache.c
index 723d48d..309ccc7 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -999,7 +999,8 @@ static int add_index_entry_with_check(struct index_state *istate, struct cache_e
}
pos = -pos-1;
 
-   untracked_cache_add_to_index(istate, ce->name);
+   if (!(option & ADD_CACHE_KEEP_CACHE_TREE))
+   untracked_cache_add_to_index(istate, ce->name);
 
/*
 * Inserting a merged entry ("stage 0") into the index
-- 
2.3.0.rc1.137.g477eb31



[PATCH v6 10/11] for-each-ref: introduce filter_refs()

2015-06-07 Thread Karthik Nayak
Introduce filter_refs(), which will act as an API for users to call,
providing a function that iterates over a set of refs while filtering
them down to the refs required.

Currently this wraps around ref_filter_handler(); hence,
ref_filter_handler() is made file-scope static.

Helped-by: Junio C Hamano 
Mentored-by: Christian Couder 
Mentored-by: Matthieu Moy 
Signed-off-by: Karthik Nayak 
---
 ref-filter.c | 12 +++-
 ref-filter.h |  8 ++--
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/ref-filter.c b/ref-filter.c
index ece16a1..886c3d7 100644
--- a/ref-filter.c
+++ b/ref-filter.c
@@ -858,7 +858,7 @@ static struct ref_array_item *new_ref_array_item(const char *refname,
  * A call-back given to for_each_ref().  Filter refs and keep them for
  * later object processing.
  */
-int ref_filter_handler(const char *refname, const struct object_id *oid, int flag, void *cb_data)
+static int ref_filter_handler(const char *refname, const struct object_id *oid, int flag, void *cb_data)
 {
struct ref_filter_cbdata *ref_cbdata = cb_data;
struct ref_filter *filter = &ref_cbdata->filter;
@@ -904,6 +904,16 @@ void ref_array_clear(struct ref_array *array)
array->nr = array->alloc = 0;
 }
 
+/*
+ * API for filtering a set of refs. The refs are provided and iterated
+ * over using the for_each_ref_fn(). The refs are stored into and filtered
+ * based on the ref_filter_cbdata structure.
+ */
+int filter_refs(int (for_each_ref_fn)(each_ref_fn, void *), struct ref_filter_cbdata *data)
+{
+   return for_each_ref_fn(ref_filter_handler, data);
+}
+
static int cmp_ref_sorting(struct ref_sorting *s, struct ref_array_item *a, struct ref_array_item *b)
 {
struct atom_value *va, *vb;
diff --git a/ref-filter.h b/ref-filter.h
index c2c5d37..15e6766 100644
--- a/ref-filter.h
+++ b/ref-filter.h
@@ -46,8 +46,12 @@ struct ref_filter_cbdata {
struct ref_filter filter;
 };
 
-/*  Callback function for for_each_*ref(). This filters the refs based on the filters set */
-int ref_filter_handler(const char *refname, const struct object_id *oid, int flag, void *cb_data);
+/*
+ * API for filtering a set of refs. The refs are provided and iterated
+ * over using the for_each_ref_fn(). The refs are stored into and filtered
+ * based on the ref_filter_cbdata structure.
+ */
+int filter_refs(int (for_each_ref_fn)(each_ref_fn, void *), struct ref_filter_cbdata *data);
 /*  Clear all memory allocated to ref_array */
 void ref_array_clear(struct ref_array *array);
 /*  Parse format string and sort specifiers */
-- 
2.4.2



[PATCH v6 09/11] ref-filter: move code from 'for-each-ref'

2015-06-07 Thread Karthik Nayak
Move most of the code from 'for-each-ref' to 'ref-filter' to make
it publicly available to other commands. This is to unify the code
of 'tag -l', 'branch -l' and 'for-each-ref' so that they can share
their implementations with each other.

Add 'ref-filter' to the Makefile, this completes the movement of code
from 'for-each-ref' to 'ref-filter'.

Mentored-by: Christian Couder 
Mentored-by: Matthieu Moy 
Signed-off-by: Karthik Nayak 
---
 Makefile   |1 +
 builtin/for-each-ref.c | 1075 +--
 ref-filter.c   | 1077 
 3 files changed, 1079 insertions(+), 1074 deletions(-)
 create mode 100644 ref-filter.c

diff --git a/Makefile b/Makefile
index 54ec511..d715b66 100644
--- a/Makefile
+++ b/Makefile
@@ -762,6 +762,7 @@ LIB_OBJS += reachable.o
 LIB_OBJS += read-cache.o
 LIB_OBJS += reflog-walk.o
 LIB_OBJS += refs.o
+LIB_OBJS += ref-filter.o
 LIB_OBJS += remote.o
 LIB_OBJS += replace_object.o
 LIB_OBJS += rerere.o
diff --git a/builtin/for-each-ref.c b/builtin/for-each-ref.c
index 063de00..4d2d024 100644
--- a/builtin/for-each-ref.c
+++ b/builtin/for-each-ref.c
@@ -2,1082 +2,9 @@
 #include "cache.h"
 #include "refs.h"
 #include "object.h"
-#include "tag.h"
-#include "commit.h"
-#include "tree.h"
-#include "blob.h"
-#include "quote.h"
 #include "parse-options.h"
-#include "remote.h"
-#include "color.h"
 #include "ref-filter.h"
 
-typedef enum { FIELD_STR, FIELD_ULONG, FIELD_TIME } cmp_type;
-
-static struct {
-   const char *name;
-   cmp_type cmp_type;
-} valid_atom[] = {
-   { "refname" },
-   { "objecttype" },
-   { "objectsize", FIELD_ULONG },
-   { "objectname" },
-   { "tree" },
-   { "parent" },
-   { "numparent", FIELD_ULONG },
-   { "object" },
-   { "type" },
-   { "tag" },
-   { "author" },
-   { "authorname" },
-   { "authoremail" },
-   { "authordate", FIELD_TIME },
-   { "committer" },
-   { "committername" },
-   { "committeremail" },
-   { "committerdate", FIELD_TIME },
-   { "tagger" },
-   { "taggername" },
-   { "taggeremail" },
-   { "taggerdate", FIELD_TIME },
-   { "creator" },
-   { "creatordate", FIELD_TIME },
-   { "subject" },
-   { "body" },
-   { "contents" },
-   { "contents:subject" },
-   { "contents:body" },
-   { "contents:signature" },
-   { "upstream" },
-   { "push" },
-   { "symref" },
-   { "flag" },
-   { "HEAD" },
-   { "color" },
-};
-
-/*
- * An atom is a valid field atom listed above, possibly prefixed with
- * a "*" to denote deref_tag().
- *
- * We parse given format string and sort specifiers, and make a list
- * of properties that we need to extract out of objects.  ref_array_item
- * structure will hold an array of values extracted that can be
- * indexed with the "atom number", which is an index into this
- * array.
- */
-static const char **used_atom;
-static cmp_type *used_atom_type;
-static int used_atom_cnt, need_tagged, need_symref;
-static int need_color_reset_at_eol;
-
-/*
- * Used to parse format string and sort specifiers
- */
-int parse_ref_filter_atom(const char *atom, const char *ep)
-{
-   const char *sp;
-   int i, at;
-
-   sp = atom;
-   if (*sp == '*' && sp < ep)
-   sp++; /* deref */
-   if (ep <= sp)
-   die("malformed field name: %.*s", (int)(ep-atom), atom);
-
-   /* Do we have the atom already used elsewhere? */
-   for (i = 0; i < used_atom_cnt; i++) {
-   int len = strlen(used_atom[i]);
-   if (len == ep - atom && !memcmp(used_atom[i], atom, len))
-   return i;
-   }
-
-   /* Is the atom a valid one? */
-   for (i = 0; i < ARRAY_SIZE(valid_atom); i++) {
-   int len = strlen(valid_atom[i].name);
-   /*
-* If the atom name has a colon, strip it and everything after
-* it off - it specifies the format for this entry, and
-* shouldn't be used for checking against the valid_atom
-* table.
-*/
-   const char *formatp = strchr(sp, ':');
-   if (!formatp || ep < formatp)
-   formatp = ep;
-   if (len == formatp - sp && !memcmp(valid_atom[i].name, sp, len))
-   break;
-   }
-
-   if (ARRAY_SIZE(valid_atom) <= i)
-   die("unknown field name: %.*s", (int)(ep-atom), atom);
-
-   /* Add it in, including the deref prefix */
-   at = used_atom_cnt;
-   used_atom_cnt++;
-   REALLOC_ARRAY(used_atom, used_atom_cnt);
-   REALLOC_ARRAY(used_atom_type, used_atom_cnt);
-   used_atom[at] = xmemdupz(atom, ep - atom);
-   used_atom_type[at] = valid_atom[i].cmp_type;
-   if (*atom == '*')
-   need_tagged = 1;
-   if (!strcmp(used_atom[at], "symref"))
-  

[PATCHv1 2/3] git-p4: test with limited p4 server results

2015-06-07 Thread Luke Diamand
Change the --changes-block-size git-p4 test to use an account with
limited "maxresults" and "maxscanrows" values.

These conditions are applied in the server *before* the "-m maxchanges"
parameter to "p4 changes" is applied, and so the strategy that git-p4
uses for limiting the number of changes does not work. As a result,
the tests all fail.

Note that "maxscanrows" is set quite high, as it appears not only to
limit results from "p4 changes", but *also* to limit results from
"p4 print". Files that have more than "maxscanrows" changes seem
(experimentally) to be impossible to print. There's no good way to
work around this.

Signed-off-by: Luke Diamand 
---
 t/t9818-git-p4-block.sh | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/t/t9818-git-p4-block.sh b/t/t9818-git-p4-block.sh
index 79765a4..aae1121 100755
--- a/t/t9818-git-p4-block.sh
+++ b/t/t9818-git-p4-block.sh
@@ -8,6 +8,19 @@ test_expect_success 'start p4d' '
start_p4d
 '
 
+create_restricted_group() {
+   p4 group -i <<-EOF
+   Group: restricted
+   MaxResults: 7
+   MaxScanRows: 40
+   Users: author
+   EOF
+}
+
+test_expect_success 'Create group with limited maxrows' '
+   create_restricted_group
+'
+
 test_expect_success 'Create a repo with many changes' '
(
client_view "//depot/included/... //client/included/..." \
@@ -32,11 +45,15 @@ test_expect_success 'Create a repo with many changes' '
)
 '
 
-test_expect_success 'Clone the repo' '
+test_expect_success 'Default user cannot fetch changes' '
+   ! p4 changes -m 1 //depot/...
+'
+
+test_expect_failure 'Clone the repo' '
	git p4 clone --dest="$git" --changes-block-size=7 --verbose //depot/included@all
 '
 
-test_expect_success 'All files are present' '
+test_expect_failure 'All files are present' '
echo file.txt >expected &&
	test_write_lines outer0.txt outer1.txt outer2.txt outer3.txt outer4.txt >>expected &&
test_write_lines outer5.txt >>expected &&
@@ -44,18 +61,18 @@ test_expect_success 'All files are present' '
test_cmp expected current
 '
 
-test_expect_success 'file.txt is correct' '
+test_expect_failure 'file.txt is correct' '
echo 55 >expected &&
test_cmp expected "$git/file.txt"
 '
 
-test_expect_success 'Correct number of commits' '
+test_expect_failure 'Correct number of commits' '
(cd "$git" && git log --oneline) >log &&
wc -l log &&
test_line_count = 43 log
 '
 
-test_expect_success 'Previous version of file.txt is correct' '
+test_expect_failure 'Previous version of file.txt is correct' '
(cd "$git" && git checkout HEAD^^) &&
echo 53 >expected &&
test_cmp expected "$git/file.txt"
@@ -85,7 +102,7 @@ test_expect_success 'Add some more files' '
 
 # This should pick up the 10 new files in "included", but not be confused
 # by the additional files in "excluded"
-test_expect_success 'Syncing files' '
+test_expect_failure 'Syncing files' '
(
cd "$git" &&
git p4 sync --changes-block-size=7 &&
-- 
2.3.4.48.g223ab37



[PATCHv1 3/3] git-p4: fixing --changes-block-size handling

2015-06-07 Thread Luke Diamand
The --changes-block-size handling was intended to help when
a user has a limited "maxscanrows" (see "p4 group"). It used
"p4 changes -m $maxchanges" to limit the number of results.

Unfortunately, it turns out that the "maxscanrows" and "maxresults"
limits are actually applied *before* the "-m maxchanges" parameter
is considered (experimentally).

Fix the block-size handling so that it gets blocks of changes
limited by revision number ($Start..$Start+$N, etc). This limits
the number of results early enough that both sets of tests pass.

If the --changes-block-size option is not in use, then the code
naturally falls back to the original scheme and gets as many changes
as possible.

Unfortunately, it also turns out that "p4 print" can fail on
files with more changes than "maxscanrows". This fix is unable to
work around this problem, although in the real world this shouldn't
normally happen.

Signed-off-by: Luke Diamand 
---
 git-p4.py   | 48 +++-
 t/t9818-git-p4-block.sh | 12 ++--
 2 files changed, 41 insertions(+), 19 deletions(-)

diff --git a/git-p4.py b/git-p4.py
index 26ad4bc..0e29b75 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -744,41 +744,63 @@ def originP4BranchesExist():
 
 def p4ChangesForPaths(depotPaths, changeRange, block_size):
 assert depotPaths
-assert block_size
 
 # Parse the change range into start and end
 if changeRange is None or changeRange == '':
-changeStart = '@1'
-changeEnd = '#head'
+changeStart = 1
+changeEnd = None
 else:
 parts = changeRange.split(',')
 assert len(parts) == 2
-changeStart = parts[0]
-changeEnd = parts[1]
+changeStart = int(parts[0][1:])
+if parts[1] == '#head':
+changeEnd = None
+else:
+changeEnd = int(parts[1])
 
 # Accumulate change numbers in a dictionary to avoid duplicates
 changes = {}
 
+# We need the most recent change list number if we're operating in
+# batch mode. For whatever reason, clients with limited MaxResults
+# can get this for the entire depot, but not for individual bits of
+# the depot.
+if block_size:
+results = p4CmdList(["changes", "-m", "1"])
+mostRecentCommit = int(results[0]['change'])
+
 for p in depotPaths:
 # Retrieve changes a block at a time, to prevent running
 # into a MaxScanRows error from the server.
 start = changeStart
-end = changeEnd
 get_another_block = True
 while get_another_block:
 new_changes = []
 cmd = ['changes']
-cmd += ['-m', str(block_size)]
-cmd += ["%s...%s,%s" % (p, start, end)]
+
+if block_size:
+end = changeStart + block_size    # only fetch a few at a time
+else:
+end = changeEnd # fetch as many as possible
+
+if end:
+endStr = str(end)
+else:
+endStr = '#head'
+
+cmd += ["%s...@%d,%s" % (p, changeStart, endStr)]
 for line in p4_read_pipe_lines(cmd):
 changeNum = int(line.split(" ")[1])
 new_changes.append(changeNum)
 changes[changeNum] = True
-if len(new_changes) == block_size:
-get_another_block = True
-end = '@' + str(min(new_changes))
-else:
+
+if not block_size:
+# Not batched, so nothing more to do
 get_another_block = False
+elif end >= mostRecentCommit:
+get_another_block = False
+else:
+changeStart = end + 1
 
 changelist = changes.keys()
 changelist.sort()
@@ -1974,7 +1996,7 @@ class P4Sync(Command, P4UserMap):
 self.syncWithOrigin = True
 self.importIntoRemotes = True
 self.maxChanges = ""
-self.changes_block_size = 500
+self.changes_block_size = None
 self.keepRepoPath = False
 self.depotPaths = None
 self.p4BranchesInGit = []
diff --git a/t/t9818-git-p4-block.sh b/t/t9818-git-p4-block.sh
index aae1121..3b3ae1f 100755
--- a/t/t9818-git-p4-block.sh
+++ b/t/t9818-git-p4-block.sh
@@ -49,11 +49,11 @@ test_expect_success 'Default user cannot fetch changes' '
! p4 changes -m 1 //depot/...
 '
 
-test_expect_failure 'Clone the repo' '
+test_expect_success 'Clone the repo' '
	git p4 clone --dest="$git" --changes-block-size=7 --verbose //depot/included@all
 '
 
-test_expect_failure 'All files are present' '
+test_expect_success 'All files are present' '
echo file.txt >expected &&
	test_write_lines outer0.txt outer1.txt outer2.txt outer3.txt outer4.txt >>expected &&
test_write_lines outer5.txt >>expected &&
@@ -61,18 +61,18 @@ test_expect_failure 'All files are present' '
test_cmp expected current
 '
 
-test_

[PATCHv1 1/3] git-p4: additional testing of --changes-block-size

2015-06-07 Thread Luke Diamand
Add additional tests of some corner-cases of the
--changes-block-size git-p4 parameter.

Also reduce the number of p4 changes created during the
tests, so that they complete faster.

Signed-off-by: Luke Diamand 
---
 t/t9818-git-p4-block.sh | 56 +
 1 file changed, 47 insertions(+), 9 deletions(-)

diff --git a/t/t9818-git-p4-block.sh b/t/t9818-git-p4-block.sh
index 153b20a..79765a4 100755
--- a/t/t9818-git-p4-block.sh
+++ b/t/t9818-git-p4-block.sh
@@ -8,18 +8,21 @@ test_expect_success 'start p4d' '
start_p4d
 '
 
-test_expect_success 'Create a repo with ~100 changes' '
+test_expect_success 'Create a repo with many changes' '
(
-   cd "$cli" &&
+   client_view "//depot/included/... //client/included/..." \
+   "//depot/excluded/... //client/excluded/..." &&
+   mkdir -p "$cli/included" "$cli/excluded" &&
+   cd "$cli/included" &&
>file.txt &&
p4 add file.txt &&
p4 submit -d "Add file.txt" &&
-   for i in $(test_seq 0 9)
+   for i in $(test_seq 0 5)
do
>outer$i.txt &&
p4 add outer$i.txt &&
p4 submit -d "Adding outer$i.txt" &&
-   for j in $(test_seq 0 9)
+   for j in $(test_seq 0 5)
do
p4 edit file.txt &&
echo $i$j >file.txt &&
@@ -30,33 +33,68 @@ test_expect_success 'Create a repo with ~100 changes' '
 '
 
 test_expect_success 'Clone the repo' '
-   git p4 clone --dest="$git" --changes-block-size=10 --verbose //depot@all
+   git p4 clone --dest="$git" --changes-block-size=7 --verbose //depot/included@all
 '
 
 test_expect_success 'All files are present' '
echo file.txt >expected &&
	test_write_lines outer0.txt outer1.txt outer2.txt outer3.txt outer4.txt >>expected &&
-   test_write_lines outer5.txt outer6.txt outer7.txt outer8.txt outer9.txt >>expected &&
+   test_write_lines outer5.txt >>expected &&
ls "$git" >current &&
test_cmp expected current
 '
 
 test_expect_success 'file.txt is correct' '
-   echo 99 >expected &&
+   echo 55 >expected &&
test_cmp expected "$git/file.txt"
 '
 
 test_expect_success 'Correct number of commits' '
(cd "$git" && git log --oneline) >log &&
-   test_line_count = 111 log
+   wc -l log &&
+   test_line_count = 43 log
 '
 
 test_expect_success 'Previous version of file.txt is correct' '
(cd "$git" && git checkout HEAD^^) &&
-   echo 97 >expected &&
+   echo 53 >expected &&
test_cmp expected "$git/file.txt"
 '
 
+# Test git-p4 sync, with some files outside the client specification.
+
+p4_add_file() {
+   (cd "$cli" &&
+   >$1 &&
+   p4 add $1 &&
+   p4 submit -d "Added a file" $1
+   )
+}
+
+test_expect_success 'Add some more files' '
+   for i in $(test_seq 0 10)
+   do
+   p4_add_file "included/x$i" &&
+   p4_add_file "excluded/x$i"
+   done &&
+   for i in $(test_seq 0 10)
+   do
+   p4_add_file "excluded/y$i"
+   done
+'
+
+# This should pick up the 10 new files in "included", but not be confused
+# by the additional files in "excluded"
+test_expect_success 'Syncing files' '
+   (
+   cd "$git" &&
+   git p4 sync --changes-block-size=7 &&
+   git checkout p4/master &&
+   ls -l x* > log &&
+   test_line_count = 11 log
+   )
+'
+
 test_expect_success 'kill p4d' '
kill_p4d
 '
-- 
2.3.4.48.g223ab37



[PATCHv1 0/3] git-p4: fixing --changes-block-size support

2015-06-07 Thread Luke Diamand
We recently added support to git-p4 to limit the number of changes it
would try to import at a time. That was to help clients who were being
limited by the "maxscanrows" limit. This used the "-m maxchanges"
argument to "p4 changes" to limit the number of results returned to
git-p4.

Unfortunately it turns out that in practice, the server limits the
number of results returned *before* the "-m maxchanges" argument is
considered. Even supplying a "-m 1" argument doesn't help.

This affects both the "maxscanrows" and "maxresults" group options.

This set of patches updates the t9818 git-p4 tests to show the problem,
and then adds a fix which works by iterating over the changes in batches
(as at present) but using a revision range to limit the number of changes,
rather than "-m $BATCHSIZE".

That means it will in most cases require more transactions with the server,
but usually the effect will be small.
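Stripped of the p4 plumbing, the revision-range batching amounts to
walking fixed-size inclusive windows; a hypothetical illustration of the
idea, not the actual git-p4 code:

```python
def change_ranges(start, most_recent, block_size):
    """Yield inclusive (lo, hi) revision windows covering
    start..most_recent, so that each "p4 changes //path@lo,hi" query
    stays within the server-side MaxResults/MaxScanRows limits."""
    lo = start
    while lo <= most_recent:
        hi = min(lo + block_size - 1, most_recent)
        yield lo, hi
        lo = hi + 1

# change_ranges(1, 10, 4) yields (1, 4), (5, 8), (9, 10)
```

Each window costs one "p4 changes" round-trip, which is where the extra
transactions mentioned above come from.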

Along the way I also found that "p4 print" can fail if you have a file
with too many changes in it, but there's unfortunately no way to work around
this. It's fairly unlikely to ever happen in practice.

I think I've covered everything in this fix, but it's possible that there
are still bugs to be uncovered; I find the way that these limits interact
somewhat tricky to understand.

Thanks,
Luke

Luke Diamand (3):
  git-p4: additional testing of --changes-block-size
  git-p4: test with limited p4 server results
  git-p4: fixing --changes-block-size handling

 git-p4.py   | 48 +++-
 t/t9818-git-p4-block.sh | 73 +++--
 2 files changed, 99 insertions(+), 22 deletions(-)

-- 
2.3.4.48.g223ab37



Fwd: release address not working

2015-06-07 Thread Mustafa Kerim Yılmaz
I am trying to download from this URL:
https://github.com/msysgit/msysgit/releases/download/Git-1.9.5-preview20150319/Git-1.9.5-preview20150319.exe

It redirects to an Amazon AWS URL, but the server does not respond:
https://s3.amazonaws.com/github-cloud/releases/325827/8ddeba82-ce92-11e4-9812-db61045d243b.exe?response-content-disposition=attachment%3B%20filename%3DGit-1.9.5-preview20150319.exe&response-content-type=application/octet-stream&AWSAccessKeyId=AKIAISTNZFOVBIJMK3TQ&Expires=1433671774&Signature=Qn3RvEVrb01Gm9wPJ7s6DObwAdM%3D

-- 
Mustafa Kerim YILMAZ