merge master into the subtree branches

2015-06-12 Thread Nico Schlömer
Hi everyone,

I have a large Git project which I would like to dissect into
subprojects with their own repositories. Git subtrees are ideal for
this task: I first

 * create a branch with the contents of only one subfolder:
   git subtree split -P name-of-folder -b name-of-new-branch

and then

 * pull this branch into another repository.

For a transitional phase, I would like to have the subprojects
read-only and sync them from master. The question is how to organize
this. For every commit to master, I could of course perform the above
procedure repeatedly for all subprojects, but this seems less than
ideal since it redoes all the work every time.

Is there a way to merge master into the subtree branches?

Cheers,
Nico
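For what it's worth, `git subtree split` is incremental: re-run with the same `-P` prefix, it regenerates the earlier split commits deterministically and only adds new ones on top, so syncing after each commit to master is cheaper than it looks. A self-contained sketch (folder and branch names are placeholders, and it assumes the contrib `git subtree` command is installed):

```shell
#!/bin/sh
# Sketch: re-split a subfolder after each commit to master.
# git subtree split is deterministic, so repeated runs fast-forward
# the split branch instead of rebuilding unrelated history.
git init -q superproject && cd superproject
git config user.email you@example.com && git config user.name You
mkdir name-of-folder && echo v1 > name-of-folder/file.txt
git add . && git commit -qm "first"
git subtree split -P name-of-folder -b name-of-new-branch >/dev/null
git ls-tree --name-only name-of-new-branch   # -> file.txt (folder contents at top level)
# After more commits to master, the same command updates the branch in place:
echo v2 > name-of-folder/file.txt && git commit -qam "second"
git subtree split -P name-of-folder -b name-of-new-branch >/dev/null
git show name-of-new-branch:file.txt         # -> v2
```

A subproject repository can then simply fetch the read-only branch, e.g. `git fetch /path/to/superproject name-of-new-branch:master`.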
--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Bug when doing make test using root user

2015-06-12 Thread Jean-Yves LENHOF

Hi,

I tried to compile git 2.4.3 as root on a server. It failed on test
41 of t0302-credential-store.sh.
In fact, even if we remove read access on a directory, root can still
access that directory.

Using an unprivileged user makes the test pass.
Perhaps the test should be adapted to this corner case.
Trace below.

Have fun,

JYL

*** t0302-credential-store.sh ***
ok 1 - helper (store) has no existing data
ok 2 - helper (store) stores password
ok 3 - helper (store) can retrieve password
ok 4 - helper (store) requires matching protocol
ok 5 - helper (store) requires matching host
ok 6 - helper (store) requires matching username
ok 7 - helper (store) requires matching path
ok 8 - helper (store) can forget host
ok 9 - helper (store) can store multiple users
ok 10 - helper (store) can forget user
ok 11 - helper (store) remembers other user
ok 12 - when xdg file does not exist, xdg file not created
ok 13 - setup xdg file
ok 14 - helper (store) has no existing data
ok 15 - helper (store) stores password
ok 16 - helper (store) can retrieve password
ok 17 - helper (store) requires matching protocol
ok 18 - helper (store) requires matching host
ok 19 - helper (store) requires matching username
ok 20 - helper (store) requires matching path
ok 21 - helper (store) can forget host
ok 22 - helper (store) can store multiple users
ok 23 - helper (store) can forget user
ok 24 - helper (store) remembers other user
ok 25 - when xdg file exists, home file not created
ok 26 - setup custom xdg file
ok 27 - helper (store) has no existing data
ok 28 - helper (store) stores password
ok 29 - helper (store) can retrieve password
ok 30 - helper (store) requires matching protocol
ok 31 - helper (store) requires matching host
ok 32 - helper (store) requires matching username
ok 33 - helper (store) requires matching path
ok 34 - helper (store) can forget host
ok 35 - helper (store) can store multiple users
ok 36 - helper (store) can forget user
ok 37 - helper (store) remembers other user
ok 38 - if custom xdg file exists, home and xdg files not created
ok 39 - get: use home file if both home and xdg files have matches
ok 40 - get: use xdg file if home file has no matches
not ok 41 - get: use xdg file if home file is unreadable
#
#   echo https://home-user:home-p...@example.com >$HOME/.git-credentials &&
#   chmod -r $HOME/.git-credentials &&
#   mkdir -p $HOME/.config/git &&
#   echo https://xdg-user:xdg-p...@example.com >$HOME/.config/git/credentials &&
#   check fill store <<-\EOF
#   protocol=https
#   host=example.com
#   --
#   protocol=https
#   host=example.com
#   username=xdg-user
#   password=xdg-pass
#   --
#   EOF
#
ok 42 - store: if both xdg and home files exist, only store in home file
ok 43 - erase: erase matching credentials from both xdg and home files
# failed 1 among 43 test(s)
1..43
make[2]: *** [t0302-credential-store.sh] Error 1
make[2]: Leaving directory `/root/git-2.4.3/t'
make[1]: *** [test] Error 2
make[1]: Leaving directory `/root/git-2.4.3/t'
make: *** [test] Error 2



Re: [PATCH v2] fetch-pack: optionally save packs to disk

2015-06-12 Thread Johannes Sixt

Am 11.06.2015 um 20:59 schrieb Augie Fackler:

When developing server software, it's often helpful to save a
potentially-bogus pack for later analysis. This makes that trivial,
instead of painful.


When you develop server software, shouldn't you test drive the server 
via the bare metal protocol anyway? That *is* painful, but unavoidable 
because you must harden the server against any garbage that a 
potentially malicious client could throw at it. Restricting yourself to a 
well-behaved client such as fetch-pack is only half the deal.


That said, I do think that fetch-pack could learn a mode that makes it 
easier to debug the normal behavior of a server (if such a mode is 
missing currently).


What is the problem with the current fetch-pack implementation? Does it 
remove a bogus packfile after download? Does it abort during download 
when it detects a broken packfile? Does --keep not do what you need?


Instead of your approach (which forks off tee to dump a copy of the 
packfile), would it not be simpler to add an option --debug-pack 
(probably not the best name) that skips the cleanup step when a broken 
packfile is detected and prints the name of the downloaded packfile?
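For reference, the interposition the patch performs, splicing `tee` into the stream and pre-seeding the save file with the already-consumed header so the copy stays intact, can be sketched outside git; `HDR` and `BODY` below are stand-ins for the pack header and the rest of the stream:

```shell
#!/bin/sh
# Sketch: keep a copy of a stream while still passing it downstream.
# The header was already read upstream, so write it first; tee -a then
# appends the remainder, keeping the saved file a complete copy.
printf 'HDR' > saved.pack                            # stand-in for the pack header
printf 'BODY' | tee -a saved.pack > downstream.bin   # data still flows onward
cat saved.pack        # -> HDRBODY
cat downstream.bin    # -> BODY (the consumer sees the stream unchanged)
```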



diff --git a/fetch-pack.c b/fetch-pack.c
index a912935..fe6ba58 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -684,7 +684,7 @@ static int get_pack(struct fetch_pack_args *args,
const char *argv[22];
char keep_arg[256];
char hdr_arg[256];
-   const char **av, *cmd_name;
+   const char **av, *cmd_name, *savepath;
int do_keep = args->keep_pack;
struct child_process cmd = CHILD_PROCESS_INIT;
int ret;
@@ -708,9 +708,8 @@ static int get_pack(struct fetch_pack_args *args,
cmd.argv = argv;
av = argv;
*hdr_arg = 0;
+   struct pack_header header;
if (!args->keep_pack && unpack_limit) {
-   struct pack_header header;
-
if (read_pack_header(demux.out, &header))
die("protocol error: bad pack header");
snprintf(hdr_arg, sizeof(hdr_arg),
@@ -762,7 +761,44 @@ static int get_pack(struct fetch_pack_args *args,
*av++ = --strict;
*av++ = NULL;

-   cmd.in = demux.out;
+   savepath = getenv("GIT_SAVE_FETCHED_PACK_TO");
+   if (savepath) {
+   struct child_process cmd2 = CHILD_PROCESS_INIT;
+   const char *argv2[22];
+   int pipefds[2];
+   int e;
+   const char **av2;
+   cmd2.argv = argv2;
+   av2 = argv2;
+   *av2++ = "tee";
+   if (*hdr_arg) {
+   /* hdr_arg being nonempty means we already read the
+* pack header from demux, so we need to drop a pack
+* header in place for tee to append to, otherwise
+* we'll end up with a broken pack on disk.
+*/


/*
 * Write multi-line comments
 * like this (/* on its own line)
 */


+   int fp;
+   struct sha1file *s;
+   fp = open(savepath, O_CREAT | O_TRUNC | O_WRONLY, 0666);
+   s = sha1fd_throughput(fp, savepath, NULL);
+   sha1write(s, &header, sizeof(header));
+   sha1flush(s);


Are you abusing sha1write() and sha1flush() to write a byte sequence to 
a file? Is write_in_full() not sufficient?



+   close(fp);
+   /* -a is supported by both GNU and BSD tee */
+   *av2++ = "-a";
+   }
+   *av2++ = savepath;
+   *av2++ = NULL;
+   cmd2.in = demux.out;
+   e = pipe(pipefds);
+   if (e != 0)
+   die("couldn't make pipe to save pack");


start_command() can create the pipe for you. Just say cmd2.out = -1.


+   cmd2.out = pipefds[1];
+   cmd.in = pipefds[0];
+   if (start_command(cmd2))
+   die("couldn't start tee to save a pack");


When you call start_command(), you must also call finish_command(). 
start_command() prints an error message for you; you don't have to do 
that (the start_command() in the context below is a bad example).



+   } else
+   cmd.in = demux.out;
cmd.git_cmd = 1;
if (start_command(cmd))
die("fetch-pack: unable to fork off %s", cmd_name);
diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh
index 58207d8..bf4640d 100755
--- a/t/t5551-http-fetch-smart.sh
+++ b/t/t5551-http-fetch-smart.sh
@@ -82,11 +82,23 @@ test_expect_success 'fetch changes via http' '
test_cmp file clone/file
  '

+test_expect_success 'fetch changes via http and save pack' '
+   echo content >>file &&
+   git commit -a -m two &&

git init with template dir

2015-06-12 Thread Alex Cornejo
I was surprised to see that when using git-init, if the template folder
is itself a symlink, then the contents of the template are NOT copied to
the resulting git repository; instead each individual file is
symlinked.

For my particular use case, this is undesirable (since if I am not
careful, then when I change the hook of one git repo, it
actually changes the hooks of all other repos too). It is easy
enough for me to work around this (i.e. by instead pointing my gitconfig
to use a template dir which is not a symlink), but I was
wondering whether this is a feature folks use (and to what end), or if
this is unintended behavior.

Furthermore, would a patch be welcome that either disables this
feature through an option (or perhaps permanently by just copying the
contents of the symlinked folder instead of creating individual
symlinks), or am I the only git user that was surprised by this
behavior and wanted to disable it?

- Alex



Re: [PATCH] Fix power checking on OS X

2015-06-12 Thread Panagiotis Astithas
On Fri, Jun 12, 2015 at 6:46 AM, Eric Sunshine sunsh...@sunshineco.com wrote:
 On Thu, Jun 11, 2015 at 10:37 AM, Panagiotis Astithas past...@gmail.com 
 wrote:
 The output of "pmset -g batt" changed at some point from
 "Currently drawing from 'AC Power'" to the slightly different
 "Now drawing from 'AC Power'". Starting the match from "drawing"
 makes the check work in both old and new versions of OS X.

 Would it make sense to try to future-proof this further by searching
 only for "'AC" (including the leading single-quote) or just "AC"
 (without the leading quote)?

 (Genuine question. I don't have a strong feeling about it.)

It's a reasonable idea, although I'm wondering what the odds are of
pmset, when running on battery, changing in the future to output a
string containing something like "no longer on 'AC Power'". That's
probably too far out though.
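Since `pmset` exists only on OS X, the relaxed pattern can be sanity-checked anywhere against canned samples of the two outputs quoted in the patch description:

```shell
#!/bin/sh
# Both historical pmset outputs should match the relaxed pattern
# that starts at "drawing".
for sample in "Currently drawing from 'AC Power'" \
              "Now drawing from 'AC Power'"; do
    printf '%s\n' "$sample" | grep -q "drawing from 'AC Power'" && echo match
done
# -> prints "match" twice
```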


Re: Need some help on patching buildin-files // was: Looking for feedback and help with a git-mirror for local usage

2015-06-12 Thread Bernd Naumann

Hello again,

After digging through the code I may have a clue where to start, but I
would still appreciate some help from a developer, because I have never
learned to write C. (Some basics at school, which was over a
decade ago.)

Currently I have questions on:

* How to patch clone: would cmd_clone() be a good place? Or are there
other calls which might be better? I am thinking of inserting the check
for whether a mirror is to be set up or just updated right after dest_exists.

* Is it correct that a new config key just gets specified via a config
file or by cmd_init_db()? So later, a check on that value is enough?
Would the 'user' section be a good place for this key, or is it
something that should get its own/new section?

* Have I missed a relevant file?

git/git.c
git/builtin/clone.c
git/builtin/fetch.c
git/builtin/push.c
git/builtin/remote.c
along with the translation and Documentation, of course.


If you have some comments on that, please share them with me, and if
you are interested in helping me get this implemented, I would
appreciate that :)

Sincere regards,
Bernd


On 06/11/2015 10:44 PM, Bernd Naumann wrote:
 Hello,
 
 I have come up with an idea # Yep I know, exactly that kind of 
 e-mail everyone wants to read ;) and I'm currently working on a 
 shell-prototype to face the following situation and problem and 
 need some feedback/advice:
 
 
 I often build, for example, 'openwrt' with various build-scripts which
 depend heavily on a fresh or clean environment and fetch many
 sources via `git clone`, which sometimes results in over 100 MB of
 traffic just for one build. /* The .tar.gz source archives needed later
 are stored in a symlinked download directory, which has been supported by
 'openwrt/.config' for a few months now... to reduce network traffic.
 */
 
 My connection to the internet is not the fastest in the world and 
 sometimes unstable, so I wanted to have some kind of local bare 
 repository mirror, which is possible with `git clone --mirror`.
 
 From these repositories I can later clone, by calling `git 
 clone --reference /path/to.git url`, but I do not wish to edit 
 all the build-scripts and Makefiles.
 
 
 So I wrote a git wrapper script (`$HOME/bin/git`), which checks if
 `git` was called with 'clone'; if so, it first
 clones the repository as a mirror and then clones from that local 
 mirror. If the mirror already exists, then it will only be updated 
 (`git remote update`). This works for now.
 
 /* To be able to have multiple identical named repositories, the 
 script builds paths like:
 
 ~/var/cache/gitmirror $ find . -name '*.git'
 
 ./github.com/openwrt-management/packages.git 
 ./github.com/openwrt/packages.git 
 ./github.com/openwrt-routing/packages.git ./nbd.name/packages.git 
 ./git.openwrt.org/packages.git ./git.openwrt.org/openwrt.git
 
 It strips the scheme from the url and replaces ":" with "/" in
 case a port is specified or a svn link is provided. The remainder
 should be a valid Linux file and directory structure, if I guess 
 correctly!? */
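The stripping described above might look roughly like this; the sed expressions are an illustration, not Bernd's actual prototype:

```shell
#!/bin/sh
# Map a clone URL onto a mirror path: drop the scheme and any user@,
# then turn ':' into '/' so ports and scp-style URLs nest as directories.
url_to_path() {
    printf '%s\n' "$1" | sed -e 's#^[a-z+]*://##' -e 's#^[^/@]*@##' -e 's#:#/#g'
}
url_to_path 'git@github.com:openwrt/packages.git'   # -> github.com/openwrt/packages.git
url_to_path 'https://git.openwrt.org/openwrt.git'   # -> git.openwrt.org/openwrt.git
```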
 
 Ok, so far, so good, but the implementation of the current 
 shell-prototype looks way too hacky [0] and I have found some edge
 cases on which my script will fail: the script depends on the
 fact that the last, or at least the second-to-last, argument is a
 valid git url, but the following is a valid call, too:
 
 `git --no-pager clone g...@github.com:openwrt/packages.git 
 openwrt-packages --depth 1`
 
 But this is not valid:
 
 `git clone https://github.com/openwrt/packages.git --reference 
 packages.git packages-2` or `git clone --verbose 
 https://github.com/openwrt/packages.git packages-2 --reference 
 packages.git`
 
 
 I found out that git-clone itself can actually only guess
 which argument is the url and which is not.
 
 
 
 However, now I'm looking for a way to write something like a 
 submodule for git which will check for a *new* git-config value like
 user.mirror (or something...) which points to a directory that
 will be used to clone from, and in case of 'fetch', 'pull' or 
 'remote update' updates the mirror first, and then updates the
 current working directory from that mirror. (And in
 case of 'push' the mirror would be updated from the working dir,
 of course.)
 
 
 I would like to hear some thoughts on that, and how I could start to
 build this submodule, or if someone more talented than I am is 
 willing to spend some time on that. If requested/wished I could
 send a link to the shell-prototype.
 
 
 [0] For a reason I have to do ugly things like `$( eval exec 
 /usr/bin/git clone --mirror $REPO_URL ) 2>&1 >/dev/null`, because 
 otherwise in case of just `eval exec` the script stops after 
 execution, and without `eval exec` arguments with spaces will be
 interpreted as separate arguments, which is no good, because the
 command then fails.
 
 
 Thanks for your time! Yours faithfully, Bernd

[PATCH] checkout: don't check worktrees when not necessary

2015-06-12 Thread Nguyễn Thái Ngọc Duy
When --patch or pathspecs are passed to git checkout, the working tree
will not switch branches, so there's no need to check whether the branch
we are running checkout on is already checked out elsewhere.

Original-patch-by: Spencer Baugh sba...@catern.com
Signed-off-by: Nguyễn Thái Ngọc Duy pclo...@gmail.com
---
 builtin/checkout.c | 23 +++
 t/t2025-checkout-to.sh |  8 
 2 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/builtin/checkout.c b/builtin/checkout.c
index 9b49f0e..e227f64 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -1110,7 +1110,6 @@ static int parse_branchname_arg(int argc, const char **argv,
 {
struct tree **source_tree = opts->source_tree;
const char **new_branch = opts->new_branch;
-   int force_detach = opts->force_detach;
int argcount = 0;
unsigned char branch_rev[20];
const char *arg;
@@ -1231,17 +1230,6 @@ static int parse_branchname_arg(int argc, const char **argv,
else
new->path = NULL; /* not an existing branch */
 
-   if (new->path && !force_detach && !*new_branch) {
-   unsigned char sha1[20];
-   int flag;
-   char *head_ref = resolve_refdup("HEAD", 0, sha1, &flag);
-   if (head_ref &&
-   (!(flag & REF_ISSYMREF) || strcmp(head_ref, new->path)) &&
-   !opts->ignore_other_worktrees)
-   check_linked_checkouts(new);
-   free(head_ref);
-   }
-
new->commit = lookup_commit_reference_gently(rev, 1);
if (!new-commit) {
/* not a commit */
@@ -1321,6 +1309,17 @@ static int checkout_branch(struct checkout_opts *opts,
die(_("Cannot switch branch to a non-commit '%s'"),
new->name);
 
+   if (new->path && !opts->force_detach && !opts->new_branch) {
+   unsigned char sha1[20];
+   int flag;
+   char *head_ref = resolve_refdup("HEAD", 0, sha1, &flag);
+   if (head_ref &&
+   (!(flag & REF_ISSYMREF) || strcmp(head_ref, new->path)) &&
+   !opts->ignore_other_worktrees)
+   check_linked_checkouts(new);
+   free(head_ref);
+   }
+
if (opts->new_worktree)
return prepare_linked_checkout(opts, new);
 
diff --git a/t/t2025-checkout-to.sh b/t/t2025-checkout-to.sh
index f8e4df4..a8d9336 100755
--- a/t/t2025-checkout-to.sh
+++ b/t/t2025-checkout-to.sh
@@ -28,6 +28,14 @@ test_expect_success 'checkout --to refuses to checkout locked branch' '
! test -d .git/worktrees/zere
 '
 
+test_expect_success 'checking out paths not complaining about linked checkouts' '
+   (
+   cd existing_empty &&
+   echo dirty >init.t &&
+   git checkout master -- init.t
+   )
+'
+
 test_expect_success 'checkout --to a new worktree' '
git rev-parse HEAD expect 
git checkout --detach --to here master 
-- 
2.3.0.rc1.137.g477eb31

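The behavior the patch relies on (a checkout with pathspecs never moves HEAD) can be seen with plain commands; the repo, file name, and branches below are throwaway illustrations (`git init -b` needs git 2.28 or later):

```shell
#!/bin/sh
# With a pathspec, checkout only updates files in the index and worktree;
# HEAD stays put, so the "already checked out elsewhere" guard is moot.
git init -q -b master demo && cd demo
git config user.email you@example.com && git config user.name You
echo master-version > file.txt && git add . && git commit -qm one
git checkout -qb topic
echo topic-version > file.txt && git commit -qam two
git checkout master -- file.txt    # grab the file, keep the branch
git symbolic-ref --short HEAD      # -> topic
cat file.txt                       # -> master-version
```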


RFC: reverse history tree, for faster background clones

2015-06-12 Thread Andres G. Aragoneses

Hello git devs,

I'm toying with an idea of an improvement I would like to work on, but 
not sure if it would be desirable enough to be considered good to merge 
in the end, so I'm requesting your opinions before I work on it.


AFAIU git stores the contents of a repo as a sequence of patches in the 
.git metadata folder. So then let's look at an example to illustrate my 
point more easily.


Repo foo contains the following 2 commits:

1 file, first commit, with the content:
+First Line
+Second Line
+Third Line

2nd and last commit:
 First Line
 Second Line
-Third Line
+Last Line

Simple enough, right?

But, what if we decided to store it backwards in the metadata?

So first commit would be:
1 file, first commit, with the content:
+First Line
+Second Line
+Last Line

2nd commit:
 First Line
 Second Line
-Last Line
+Third Line


This would bring some advantages, as far as I understand:

1. `git clone --depth 1` would be way faster, and without the need for 
on-demand compressing of packfiles on the server side, correct me if I'm 
wrong?
2. `git clone` would be able to offer a fast, complete-in-the-background 
mode that would allow you to download the first snapshot of 
the repo very quickly, so that the user would be able to start working 
in his working directory very quickly, while a background job keeps 
retrieving the history data in the background.

3. Any more advantages you see?


I'm aware that this would have also downsides, but IMHO the benefits 
would outweigh them. The ones I see:
1. Every time a commit is made, a big change of the history-metadata tree 
would need to happen. (Well, but this is essentially equivalent to 
enabling an INDEX in a DB: you make WRITES more expensive in order to 
improve the speed of READS.)
2. Locking issues? I imagine that rewriting the indexes would open 
longer time windows to have locking issues, but I'm not an expert in 
this, please expand.

3. Any more downsides you see?


I would be glad for any feedback you have. Thanks, and have a great day!

  Andrés



Re: RFC: reverse history tree, for faster background clones

2015-06-12 Thread Dennis Kaarsemaker
On vr, 2015-06-12 at 13:39 +0200, Andres G. Aragoneses wrote:
 On 12/06/15 13:33, Dennis Kaarsemaker wrote:
  On vr, 2015-06-12 at 13:26 +0200, Andres G. Aragoneses wrote:
 
  AFAIU git stores the contents of a repo as a sequence of patches in the
  .git metadata folder.
 
  It does not, it stores full snapshots of files.
 
 In bare repos too?

Yes. A bare repo is nothing more than the .git dir of a non-bare repo
with the core.bare variable set to True :)

  1. `git clone --depth 1` would be way faster, and without the need of
  on-demand compressing of packfiles in the server side, correct me if I'm
  wrong?
 
  You're wrong due to the misunderstanding of how git works :)
 
 Thanks for pointing this out, do you mind giving me a link of some docs 
 where I can correct my knowledge about this?

http://git-scm.com/book/en/v2/Git-Internals-Git-Objects should help.

  2. `git clone` would be able to allow a fast operation, complete in the
  background mode that would allow you to download the first snapshot of
  the repo very quickly, so that the user would be able to start working
  on his working directory very quickly, while a background job keeps
  retrieving the history data in the background.
 
  This could actually be a good thing, and can be emulated now with git
  clone --depth=1 and subsequent fetches in the background to deepen the
  history. I can see some value in clone doing this by itself, first doing
  a depth=1 fetch, then launching itself into the background, giving you a
  worktree to play with earlier.
 
 You're right, didn't think about the feature that converts a --depth=1 
 repo to a normal one. Then a patch that would create a --progressive 
 flag (for instance, didn't think of a better name yet) for the `clone` 
 command would actually be trivial to create, I assume, because it would 
 just use `depth=1` and then retrieve the rest of the history in the 
 background, right?

A naive implementation that does just clone --depth=1 and then fetch
--unshallow would probably not be too hard, no. But whether that would
be the 'right' way of implementing it, I wouldn't know.

-- 
Dennis Kaarsemaker
http://www.kaarsemaker.net
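The emulation Dennis describes can be tried end-to-end against a purely local repository; all names below are throwaway, and `git init -b` needs git 2.28 or later:

```shell
#!/bin/sh
# Fast part: a shallow clone gives a usable worktree immediately.
git init -q -b main server && (
    cd server &&
    git config user.email a@example.com && git config user.name A &&
    for i in 1 2 3; do git commit -q --allow-empty -m "commit $i"; done
)
git clone -q --depth=1 "file://$PWD/server" client
git -C client rev-list --count HEAD    # -> 1
# Slow part, which could run in the background: fetch the rest.
git -C client fetch -q --unshallow
git -C client rev-list --count HEAD    # -> 3
```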



Re: RFC: reverse history tree, for faster background clones

2015-06-12 Thread Andres G. Aragoneses

On 12/06/15 13:33, Dennis Kaarsemaker wrote:

On vr, 2015-06-12 at 13:26 +0200, Andres G. Aragoneses wrote:


AFAIU git stores the contents of a repo as a sequence of patches in the
.git metadata folder.


It does not, it stores full snapshots of files.


In bare repos too?



1. `git clone --depth 1` would be way faster, and without the need of
on-demand compressing of packfiles in the server side, correct me if I'm
wrong?


You're wrong due to the misunderstanding of how git works :)


Thanks for pointing this out, do you mind giving me a link of some docs 
where I can correct my knowledge about this?




2. `git clone` would be able to allow a fast operation, complete in the
background mode that would allow you to download the first snapshot of
the repo very quickly, so that the user would be able to start working
on his working directory very quickly, while a background job keeps
retrieving the history data in the background.


This could actually be a good thing, and can be emulated now with git
clone --depth=1 and subsequent fetches in the background to deepen the
history. I can see some value in clone doing this by itself, first doing
a depth=1 fetch, then launching itself into the background, giving you a
worktree to play with earlier.


You're right, didn't think about the feature that converts a --depth=1 
repo to a normal one. Then a patch that would create a --progressive 
flag (for instance, didn't think of a better name yet) for the `clone` 
command would actually be trivial to create, I assume, because it would 
just use `depth=1` and then retrieve the rest of the history in the 
background, right?


Thanks




strange result of `git describe` while bisecting

2015-06-12 Thread Philippe De Muyter
Hi,

I am bisecting the kernel tree between v3.17 and v3.18, and 'git describe'
is used by the kernel compilation process.  Why do I get a version
v3.17-rc7-1626-ga4b4a2b, which seems to be outside of [v3.17..v3.18]?

To reproduce :

git bisect start
git bisect bad v3.18
git bisect good v3.17
Bisecting: 6197 revisions left to test after this (roughly 13 steps)
[754c780953397dd5ee5191b7b3ca67e09088ce7a] Merge branch 'for-v3.18' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping

git describe
v3.17-6163-g754c780

git bisect bad
Bisecting: 3086 revisions left to test after this (roughly 12 steps)
[4a4743e840d06a5772be7c21110807165c5b3c9f] Merge tag 'fixes-nc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

git describe
v3.17-3076-g4a4743e

git bisect good
Bisecting: 1506 revisions left to test after this (roughly 11 steps)
[a4b4a2b7f98a45c71a906b1126cabea6446a9905] Merge tag 'master-2014-10-02' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next

git describe
v3.17-rc7-1626-ga4b4a2b

Philippe

-- 
Philippe De Muyter +32 2 6101532 Macq SA rue de l'Aeronef 2 B-1140 Bruxelles


Re: strange result of `git describe` while bisecting

2015-06-12 Thread Andreas Schwab
Philippe De Muyter p...@macq.eu writes:

 I am bisecting the kernel tree between v3.17 and v3.18, and 'git describe'
 is used by the kernel compilation process.  Why do I get a version
 v3.17-rc7-1626-ga4b4a2b, that seems outside of [v3.17..v3.18] ?

Because you are testing a side branch that is based on v3.17-rc7.

3.17-rc7 ---- 3.17 ---- 3.18
        \              /
         \---- * -----/
               ^
               |
          You are here

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
And now for something completely different.
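Andreas's point can be reproduced in a synthetic repo: a commit reachable only through a side branch that forked before v3.17 describes against the older tag, because the newer tag is not among its ancestors. Tag names below mimic the kernel's; the history is fake:

```shell
#!/bin/sh
# describe picks the nearest *reachable* annotated tag, so a commit on a
# side branch forked at v3.17-rc7 never sees v3.17 at all.
git init -q -b master demo && cd demo
git config user.email a@example.com && git config user.name A
git commit -q --allow-empty -m base && git tag -a -m rc7 v3.17-rc7
git checkout -qb side && git commit -q --allow-empty -m work
git checkout -q master
git commit -q --allow-empty -m release && git tag -a -m rel v3.17
git merge -q --no-ff -m merged side
git describe side   # -> v3.17-rc7-1-g<hash>: v3.17 is not an ancestor of side
git describe HEAD   # reaches v3.17, like the bisect endpoints themselves
```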


Re: Bug when doing make test using root user

2015-06-12 Thread Paul Tan
On Fri, Jun 12, 2015 at 5:43 PM, Jean-Yves LENHOF
jean-y...@lenhof.eu.org wrote:
 Hi,

 I tried to compile git 2.4.3 as root on a server. It failed on test 41 of
 t0302-credential-store.sh.
 In fact, even if we remove read access on a directory, root can still access
 that directory.
 Using an unprivileged user makes the test pass.
 Perhaps the test should be adapted to this corner case.
 Trace below.

Right, the test should have the SANITY prereq.

Thanks for reporting.

Regards,
Paul
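(The SANITY prerequisite roughly marks tests that assume file permission bits are enforced; it is unset when running as root.) The underlying corner case is easy to reproduce in plain shell; what this sketch prints depends on whether you run it as root:

```shell
#!/bin/sh
# chmod removes read permission, but root bypasses permission bits,
# which is why test 41's "unreadable file" setup does nothing under root.
echo secret > locked.txt
chmod a-r locked.txt
if cat locked.txt >/dev/null 2>&1; then
    echo "still readable - permission bits bypassed (likely root)"
else
    echo "unreadable - permissions enforced"
fi
rm -f locked.txt
```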


Re: [PATCH 2/3] pkt-line: tighten sideband PACK check when tracing

2015-06-12 Thread Stefan Beller
On Fri, Jun 12, 2015 at 2:41 PM, Jeff King p...@peff.net wrote:
 On Fri, Jun 12, 2015 at 02:39:01PM -0700, Stefan Beller wrote:

  -   if (starts_with(buf, "PACK") || starts_with(buf + 1, "PACK")) {
  +   if (starts_with(buf, "PACK") || starts_with(buf, "\1PACK")) {

 This answers the question on the previous patch actually, maybe the
 code could be improved to

 if (is_sidechannel(out, ...))
 out++;
 if (starts_with(buf, "PACK")) {
 ...

 If it's not a PACK, then we don't want to skip past the side-channel
 character (we show it as part of the trace).

 Hopefully the end result after patch 3 reads well, as it sets an
 explicit sideband flag.

Yeah, I just looked at that, and realized I should not worry about
these first two patches
as the line is deleted anyway. :( Writing faster than I can think.


 -Peff


Re: [PATCH 2/3] pkt-line: tighten sideband PACK check when tracing

2015-06-12 Thread Jeff King
On Fri, Jun 12, 2015 at 02:39:01PM -0700, Stefan Beller wrote:

  -   if (starts_with(buf, "PACK") || starts_with(buf + 1, "PACK")) {
  +   if (starts_with(buf, "PACK") || starts_with(buf, "\1PACK")) {
 
 This answers the question on the previous patch actually, maybe the
 code could be improved to
 
  if (is_sidechannel(out, ...))
  out++;
  if (starts_with(buf, "PACK")) {
  ...

If it's not a PACK, then we don't want to skip past the side-channel
character (we show it as part of the trace).

Hopefully the end result after patch 3 reads well, as it sets an
explicit sideband flag.

-Peff


Re: [PATCH v4] git-rebase--interactive.sh: add config option for custom instruction format

2015-06-12 Thread Junio C Hamano
Mike Rappazzo rappa...@gmail.com writes:

 On Fri, Jun 12, 2015 at 4:56 PM, Junio C Hamano gits...@pobox.com wrote:
 ...
 The autosquash part somehow makes me feel uneasy, though.  The
 feature fundamentally has to have %s as the first thing in the
 format to work, but by making the format overridable, you are
 potentially breaking that feature, aren't you?

 It only needs the '%s' for the autosquash when the todo/instruction
 list order is determined.  For this, in the rearrange_squash function,
 it will re-calculate the message:

 +   test -z "${format}" || message=$(git log -n 1 --format=%s ${sha1})

 Additionally, it may also rerun the log command when preparing the final list.

Yeah, I noticed that when I took a diff between v3 and v4.  I didn't
mean by "break" that you may end up reordering lines incorrectly.

But because you overwrite the $message variable you read from the
original insn sheet (which uses the custom format) and compute $rest
based on the default "%s" and store that in "$1.sq", lines in
"$1.sq" do not know anything about the custom format, do they?

And then they are injected into appropriate places in "$1.rearranged".
Moved lines in the rearranged result would end up written in the
default "%s" format, no?

That was the part that made me uneasy.

I do not think that is a bug worth fixing, but I view it as a sign
that fundamentally the autosquash and the idea of configurable
format do not mesh well with each other.


Re: [PATCH v2] upload-pack: Fail if cloning empty namespace

2015-06-12 Thread Johannes Löthberg

On 13/06, Johannes Löthberg wrote:

Git should fail to clone if trying to clone from a non-existing
ref namespace, since it's the same as a non-existing repository

Signed-off-by: Johannes Löthberg johan...@kyriasis.com
---

Changes since v1:

* Fixed the namespace check, since I apparently forgot to check with a
 bare repo in my last test. D'oh.

Two other options for this would be to either add a
get_git_namespace_len() function and use that, or an is_namespaced()
function. But since it's only used here for now at least it feels simpler
to not bloat the codebase with another function which has no other use.



I should note that I have a small test script written now, ready to be 
converted into a git test, though I want some opinions on whether the 
patch would be merged before bothering to convert it.


--
Sincerely,
 Johannes Löthberg
 PGP Key ID: 0x50FB9B273A9D0BB5
 https://theos.kyriasis.com/~kyrias/




Re: [PATCH v2] fetch-pack: optionally save packs to disk

2015-06-12 Thread Jeff King
On Fri, Jun 12, 2015 at 02:00:05PM -0400, Jeff King wrote:

 When I added GIT_TRACE_PACKET long ago, I had always intended to
 follow-up with a GIT_TRACE_PACKFILE. The former stops tracing when we
 get to the binary data, but I had intended the latter to store the pure
 on-the-wire packfile transmission for later debugging too. I never got
 around to it, but I think it is something like the patch below.

Here it is, a bit more cleaned up. I think this is a nice improvement,
and it does fix some minor issues in the existing GIT_TRACE_PACKET code.
But I don't think it will ever work for receive-pack, because we hand
off the socket directly to index-pack.

I can live with that. But another approach would be to make it easier to
capture the stdin of programs with GIT_TRACE, rather than just their
arguments. That makes it more generically usable (e.g., I have hacky
patches on our servers for capturing pack-objects input so I can
replay expensive fetch requests). As Augie noted, that's not a full
pack-file due to the --pack-header stuff. But given a full trace (which
has both the arguments and stdin), you could reconstruct the packfile,
which we could do as a post-processing step.

I dunno. I prefer introducing a nicely generic mechanism, but it would
probably be a little more irritating to use in practice.

I guess yet another way to do it would be to put the GIT_TRACE_PACK
intelligence into index-pack and unpack-objects (rather than
fetch-pack). Then it at least applies to both the pushing and fetching
sides.

Anyway, here are the patches I came up with:

  [1/3]: pkt-line: simplify starts_with checks in packet tracing
  [2/3]: pkt-line: tighten sideband PACK check when tracing
  [3/3]: pkt-line: support tracing verbatim pack contents

-Peff


Re: [PATCH 2/3] pkt-line: tighten sideband PACK check when tracing

2015-06-12 Thread Stefan Beller
On Fri, Jun 12, 2015 at 2:28 PM, Jeff King p...@peff.net wrote:
 To find the start of the pack data, we accept the word "PACK"
 at the beginning of any sideband channel, even though what
 we really want is to find the pack data on channel 1. In
 practice this doesn't matter, as sideband-2 messages tend to
 start with "error:" or similar, but it is a good idea to be
 explicit (especially as we add more code in this area, we
 will rely on this assumption).

 Signed-off-by: Jeff King p...@peff.net
 ---
  pkt-line.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/pkt-line.c b/pkt-line.c
 index 0477d2e..e75af02 100644
 --- a/pkt-line.c
 +++ b/pkt-line.c
 @@ -24,7 +24,7 @@ static void packet_trace(const char *buf, unsigned int len, int write)
 strbuf_addf(&out, "packet: %12s%c ",
 packet_trace_prefix, write ? '>' : '<');

 -   if (starts_with(buf, "PACK") || starts_with(buf + 1, "PACK")) {
 +   if (starts_with(buf, "PACK") || starts_with(buf, "\1PACK")) {

This answers the question on the previous patch actually, maybe the
code could be improved to

if (is_sidechannel(out, ...))
out++;
if (starts_with(buf, "PACK")) {
...


  strbuf_addstr(&out, "PACK ...");
  trace_disable(&trace_packet);
 }
 --
 2.4.2.752.geeb594a


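Stefan's suggested restructuring above can be sketched in isolation; `is_sidechannel` and `looks_like_pack_start` are hypothetical names used only for this illustration, not functions that exist in pkt-line.c:

```c
#include <string.h>

/* Hypothetical helper: does the payload start with a sideband
 * channel byte (1 = pack data, 2 = progress, 3 = error)? */
static int is_sidechannel(const char *buf)
{
	return *buf == '\1' || *buf == '\2' || *buf == '\3';
}

/* Sketch of the suggested flow: only channel 1 can carry pack data,
 * so skip its marker byte before checking for the "PACK" header. */
static int looks_like_pack_start(const char *buf)
{
	if (is_sidechannel(buf)) {
		if (*buf != '\1')
			return 0; /* progress/error sidebands never start the pack */
		buf++;
	}
	return strncmp(buf, "PACK", 4) == 0;
}
```

This keeps the tightened rule from patch 2/3 (only channel 1 counts) while reading closer to Stefan's two-step version.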

[PATCH/RFC] upload-pack: Fail if cloning empty namespace

2015-06-12 Thread Johannes Löthberg
Git should fail to clone if trying to clone from a non-existing
ref namespace, since it's the same as a non-existing repository

Signed-off-by: Johannes Löthberg johan...@kyriasis.com
---

In version 4 of the ArchLinux User Repository, which is a hosting
platform for recipes for building Arch packages, we use Git to store
the recipes.

To save space we are using ref namespaces, which so far has saved quite
a bit of space. There is one issue, though: when cloning a non-existing
repository we don't want the clone to work. This patch fixes this issue
by failing the clone if a namespace was specified but it doesn't exist.

Making this conditional on a namespace being specified makes sure that 
cloning regular empty repos still works.

 upload-pack.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/upload-pack.c b/upload-pack.c
index 89e832b..21f8891 100644
--- a/upload-pack.c
+++ b/upload-pack.c
@@ -778,6 +778,10 @@ static void upload_pack(void)
 
head_ref_namespaced(find_symref, symref);
 
+   if (get_git_namespace() && !symref.items) {
+   die("git upload-pack: tried to clone from empty namespace");
+   }
+
if (advertise_refs || !stateless_rpc) {
reset_timeout();
head_ref_namespaced(send_ref, symref);
-- 
2.4.2



Re: [PATCH v4] git-rebase--interactive.sh: add config option for custom instruction format

2015-06-12 Thread Junio C Hamano
Michael Rappazzo rappa...@gmail.com writes:

 A config option 'rebase.instructionFormat' can override the
 default 'oneline' format of the rebase instruction list.

 Since the list is parsed using the left, right or boundary mark plus
 the sha1, they are prepended to the instruction format.

 Signed-off-by: Michael Rappazzo rappa...@gmail.com
 ---
  Documentation/git-rebase.txt |  7 +++
  git-rebase--interactive.sh   | 20 +---
  t/t3415-rebase-autosquash.sh | 21 +
  3 files changed, 45 insertions(+), 3 deletions(-)

Thanks, will replace.

The autosquash part somehow makes me feel uneasy, though.  The
feature fundamentally has to have %s as the first thing in the
format to work, but by making the format overridable, you are
potentially breaking that feature, aren't you?


Re: [PATCH v4] git-rebase--interactive.sh: add config option for custom instruction format

2015-06-12 Thread Mike Rappazzo
On Fri, Jun 12, 2015 at 6:05 PM, Junio C Hamano gits...@pobox.com wrote:
 But because you overwrite the $message variable you read from the
 original insn sheet (which uses the custom format) and compute $rest
 based on the default %s and store that in $1.sq, lines in
 $1.sq do not know anything about the custom format, do they?

 And then they are injected to appropriate places in $1.rearranged.
 Moved lines in the the rearranged result would end up written in the
 default %s format, no?

 That was the part that made me uneasy.

 I do not think that is a bug worth fixing, but I view it as a sign
 that fundamentally the autosquash and the idea of configurable
 format do not mesh well with each other.

My understanding of the rearrange_squash function is this:
There are two loops.  The first loop collects the commits which should
be moved (squashed).  The second loop re-constructs the instruction
list using the info from the first loop.

In the second loop, I changed it to recalculate the presented message
when the re-ordered commit is added:

+   if test -n "${format}"
+   then
+   msg_content=$(git log -n 1 --format="${format}" ${squash})


That is the $rest.

I have patched my locally installed `git-rebase--interactive` with
these changes, and I did see the proper rearrangement of commits with
the custom formatted message.


[PATCH v2] upload-pack: Fail if cloning empty namespace

2015-06-12 Thread Johannes Löthberg
Git should fail to clone if trying to clone from a non-existing
ref namespace, since it's the same as a non-existing repository

Signed-off-by: Johannes Löthberg johan...@kyriasis.com
---

Changes since v1:

* Fixed the namespace check, since I apparently forgot to check with a
  bare repo in my last test. D'oh.

Two other options for this would be to either add a
get_git_namespace_len() function and use that, or an is_namespaced()
function. But since it's only used here for now at least it feels simpler
to not bloat the codebase with another function which has no other use.

 upload-pack.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/upload-pack.c b/upload-pack.c
index 89e832b..99fb271 100644
--- a/upload-pack.c
+++ b/upload-pack.c
@@ -778,6 +778,10 @@ static void upload_pack(void)
 
head_ref_namespaced(find_symref, symref);
 
+   if (strcmp(get_git_namespace(), "") && !symref.items) {
+   die("git upload-pack: tried to clone from empty namespace");
+   }
+
if (advertise_refs || !stateless_rpc) {
reset_timeout();
head_ref_namespaced(send_ref, symref);
-- 
2.4.2


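The v1-to-v2 change hinges on what git's get_git_namespace() returns when no namespace is set: an empty string, never NULL, so a plain pointer test is always true. A standalone illustration (the stand-in function below is hypothetical; only the return convention is taken from git):

```c
#include <string.h>

/* Stand-in for git's get_git_namespace(): the real one expands
 * GIT_NAMESPACE into a "refs/namespaces/.../" prefix and returns
 * "" (never NULL) when no namespace is in effect. */
static const char *get_namespace_unset(void)
{
	return "";
}

/* v1-style check: a non-NULL pointer is always truthy, so this
 * fires even for normal, un-namespaced clones. */
static int v1_namespaced(const char *ns)
{
	return ns != NULL;
}

/* v2-style check: true only for a non-empty namespace string. */
static int v2_namespaced(const char *ns)
{
	return strcmp(ns, "") != 0;
}
```

This is why the v1 patch broke cloning of ordinary bare repositories while v2 does not.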

Re: [PATCH v7 03/12] for-each-ref: change comment in ref_sort

2015-06-12 Thread Junio C Hamano
Christian Couder christian.cou...@gmail.com writes:

 I think it is needed later when struct ref_sort is moved into
 ref-filter.h, because then the used_atom[] array is not moved.

Now I am confused.  used_atom[] is the mechanism we use to give a
small integer to each atom used in the %(placeholder) string, so
that we do not have to refer to them as placeholder string and we
do not have to call !strcmp() to compare for equality.  How can a
field in ref_sort refer to it internally if used_atom[] is not
visible?

Indeed, ref-filter.c after the series does have used_atom[] and
get_ref_atom_value() does use atom to index into it.  So these two
lines do not make much sense to me.  I am puzzled.

If by "moved" you are referring to the fact that the structure and
its fields are exposed to the callers of the API while the
implementation detail of the mechanism to give short integer to each
used atom is not exposed to them, then I do agree that the comment
to the structure field as the external API definition should not
talk about used_atom[] array.

Perhaps in 03/12, you can stop talking about the implementation
(i.e. the value is used to index into used_atom[] to get to the
original string) and instead start talking about what the value
means to the callers (that are still internal to for-each-ref
implementation), to make things better.

Having said that, I am not sure if there is a sensible description
for that field if you avoid exposing the implementation detail.

You would probably want to start by asking what that value means.
For (eventual) external callers, the number is what they get from
parse_ref_filter_atom(); calling that function is the only way they
can get an appropriate value to stuff in the field, and
parse_opt_ref_sorting() is the most likely function external callers
use to make that happen.

"The number internally used to represent an attribute of a ref used
when sorting the set of refs" would be one way to say what it is
without exposing the implementation detail to the readers.

But does that help anybody?  I doubt it.  It is mouthful for
external users, and it is not concrete enough for implementors.

So either

 - treat ref-filter.h as information for API users, and not talk
   about what it means at all, or

 - treat ref-filter.h as information for both API users and API
   implementors, and describe it as an index into used_atom[],
   i.e. do not change anything.

I'd say the latter is a more sensible way to go.  I think it is also
OK to change this comment to "index into used_atom array (internal)"
when ref-filter.h is created as an external API definition.


Re: [PATCH v2] Allow to control where the replace refs are looked for

2015-06-12 Thread Junio C Hamano
Mike Hommey m...@glandium.org writes:

 It can be useful to have grafts or replace refs for specific use-cases while
 keeping the default view of the repository pristine (or with a different
 set of grafts/replace refs).

 It is possible to use a different graft file with GIT_GRAFT_FILE, but while
 replace refs are more powerful, they don't have an equivalent override.

 Add a GIT_REPLACE_REF_BASE environment variable to control where git is
 going to look for replace refs.

 Signed-off-by: Mike Hommey m...@glandium.org
 ---
  builtin/replace.c | 6 +++---
  cache.h   | 2 ++
  environment.c | 6 ++
  log-tree.c| 5 +++--
  refs.c| 3 ++-
  5 files changed, 16 insertions(+), 6 deletions(-)

Thanks.

git am seems to be seeing a patch based on a stale base.  What
branch did you base this on?


[PATCH 3/3] pkt-line: support tracing verbatim pack contents

2015-06-12 Thread Jeff King
When debugging the pack protocol, it is sometimes useful to
store the verbatim pack that we sent or received on the
wire. Looking at the on-disk result is often not helpful for
a few reasons:

  1. If the operation is a clone, we destroy the repo on
 failure, leaving nothing on disk.

  2. If the pack is small, we unpack it immediately, and the
 full pack never hits the disk.

  3. If we feed the pack to index-pack --fix-thin, the
 resulting pack has the extra delta bases added to it.

We already have a GIT_TRACE_PACKET mechanism for tracing
packets. Let's extend it with GIT_TRACE_PACK to dump the
verbatim packfile.

There are a few other positive fallouts that come from
rearranging this code:

 - We currently disable the packet trace after seeing the
   "PACK" header, even though we may get human-readable lines
   on other sidebands; now we include them in the trace.

 - We currently try to print "PACK ..." in the trace to
   indicate that the packfile has started. But because we
   disable packet tracing, we never printed this line. We
   will now do so.

Signed-off-by: Jeff King p...@peff.net
---
 Documentation/git.txt | 13 +++-
 pkt-line.c| 59 ++-
 t/t5601-clone.sh  |  7 ++
 trace.c   |  7 ++
 trace.h   |  1 +
 5 files changed, 71 insertions(+), 16 deletions(-)

diff --git a/Documentation/git.txt b/Documentation/git.txt
index 45b64a7..8c44d14 100644
--- a/Documentation/git.txt
+++ b/Documentation/git.txt
@@ -1000,9 +1000,20 @@ Unsetting the variable, or setting it to empty, 0 or
Enables trace messages for all packets coming in or out of a
given program. This can help with debugging object negotiation
or other protocol issues. Tracing is turned off at a packet
-   starting with "PACK".
+   starting with "PACK" (but see 'GIT_TRACE_PACK' below).
See 'GIT_TRACE' for available trace output options.
 
+'GIT_TRACE_PACK'::
+   Enables tracing of packfiles sent or received by a
+   given program. Unlike other trace output, this trace is
+   verbatim: no headers, and no quoting of binary data. You almost
+   certainly want to direct it into a file (e.g.,
+   `GIT_TRACE_PACK=/tmp/my.pack`) rather than displaying it on the
+   terminal or mixing it with other trace output.
++
+Note that this is currently only implemented for the client side
+of clones and fetches.
+
 'GIT_TRACE_PERFORMANCE'::
Enables performance related trace messages, e.g. total execution
time of each Git command.
diff --git a/pkt-line.c b/pkt-line.c
index e75af02..2e122c0 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -4,16 +4,51 @@
 char packet_buffer[LARGE_PACKET_MAX];
 static const char *packet_trace_prefix = "git";
 static struct trace_key trace_packet = TRACE_KEY_INIT(PACKET);
+static struct trace_key trace_pack = TRACE_KEY_INIT(PACK);
 
 void packet_trace_identity(const char *prog)
 {
packet_trace_prefix = xstrdup(prog);
 }
 
+static int packet_trace_pack(const char *buf, unsigned int len, int sideband)
+{
+   if (!sideband) {
+   trace_verbatim(&trace_pack, buf, len);
+   return 1;
+   } else if (len && *buf == '\1') {
+   trace_verbatim(&trace_pack, buf + 1, len - 1);
+   return 1;
+   } else {
+   /* it's another non-pack sideband */
+   return 0;
+   }
+}
+
 static void packet_trace(const char *buf, unsigned int len, int write)
 {
int i;
struct strbuf out;
+   static int in_pack, sideband;
+
+   if (!trace_want(&trace_packet) && !trace_want(&trace_pack))
+   return;
+
+   if (in_pack) {
+   if (packet_trace_pack(buf, len, sideband))
+   return;
+   } else if (starts_with(buf, "PACK") || starts_with(buf, "\1PACK")) {
+   in_pack = 1;
+   sideband = *buf == '\1';
+   packet_trace_pack(buf, len, sideband);
+
+   /*
+* Make a note in the human-readable trace that the pack data
+* started.
+*/
+   buf = "PACK ...";
+   len = strlen(buf);
+   }
 
 if (!trace_want(&trace_packet))
return;
@@ -24,21 +59,15 @@ static void packet_trace(const char *buf, unsigned int len, 
int write)
strbuf_addf(out, packet: %12s%c ,
packet_trace_prefix, write ? '' : '');
 
-   if (starts_with(buf, PACK) || starts_with(buf, \1PACK)) {
-   strbuf_addstr(out, PACK ...);
-   trace_disable(trace_packet);
-   }
-   else {
-   /* XXX we should really handle printable utf8 */
-   for (i = 0; i  len; i++) {
-   /* suppress newlines */
-   if (buf[i] == '\n')
-   continue;
-   if (buf[i] >= 0x20 && buf[i] <= 0x7e)
-   

Re: [PATCH 1/3] pkt-line: simplify starts_with checks in packet tracing

2015-06-12 Thread Stefan Beller
On Fri, Jun 12, 2015 at 2:28 PM, Jeff King p...@peff.net wrote:
 We carefully check that our pkt buffer has enough characters
 before seeing if it starts with "PACK". The intent is to
 avoid reading random memory if we get a short buffer like
 "PAC".

 However, we know that the traced packets are always
 NUL-terminated. They come from one of these sources:

   1. A string literal.

   2. `format_packet`, which uses a strbuf.

   3. `packet_read`, which defensively NUL-terminates what we
  read.

 We can therefore drop the length checks, as we know we will
 hit the trailing NUL if we have a short input.

 Signed-off-by: Jeff King p...@peff.net
 ---
  pkt-line.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)

 diff --git a/pkt-line.c b/pkt-line.c
 index 187a229..0477d2e 100644
 --- a/pkt-line.c
 +++ b/pkt-line.c
 @@ -24,8 +24,7 @@ static void packet_trace(const char *buf, unsigned int len, int write)
 strbuf_addf(&out, "packet: %12s%c ",
 packet_trace_prefix, write ? '>' : '<');

 -   if ((len >= 4 && starts_with(buf, "PACK")) ||
 -   (len >= 5 && starts_with(buf+1, "PACK"))) {

So I wondered why there is a possible extra character in front of "PACK".
So I ran the blame machinery and ended up with bbc30f9963 (add
packet tracing debug code, 2011-02-24), which was also authored
by you. Where does the extra char come from?

Would it be better for readability to write it as

int offset = 0;
if (*buf == CHARACTER_STEFAN_IS_WONDERING_ABOUT)
/* ignore char foo because bar */
offset++;
if (starts_with(buf + offset, "PACK")) {
...



 +   if (starts_with(buf, "PACK") || starts_with(buf + 1, "PACK")) {
 strbuf_addstr(&out, "PACK ...");
 trace_disable(&trace_packet);
 }
 --
 2.4.2.752.geeb594a



Re: [PATCH 0/8] object_id part 2

2015-06-12 Thread brian m. carlson
On Fri, Jun 12, 2015 at 03:14:25PM -0700, Junio C Hamano wrote:
 brian m. carlson sand...@crustytoothpaste.net writes:
 
  While I did run the tests between each commit, I hadn't noticed they
  were failing because I don't have Apache installed on my laptop, so they
  were silently skipped.  I'll resubmit with that fixed.
 
 It is somewhat strange that _only_ http part had failures like this,
 and is unnerving, too, given that a few people seem to have given at
 least a cursory read over the patches and didn't spot anything
 obviously wrong.
 
 Was that because there was a single manual botch, or was that merely
 that other parts of the system do not have sufficient test coverage?

It appears that I broke the change in parse_fetch: convert to use
struct object_id which modifies remote-curl.c, so I think it's a single
manual botch.  I'm going to rework that patch anyway since Michael said
that he didn't like the idea of parse_oid_hex as it stands, so it will
end up being mostly moot.

I mostly introduced it in an effort to avoid having to use
GIT_SHA1_HEXSZ repeatedly throughout small blocks of code, but since we
don't use assignments in conditionals, it doesn't seem useful as it
currently stands.
-- 
brian m. carlson / brian with sandals: Houston, Texas, US
+1 832 623 2791 | http://www.crustytoothpaste.net/~bmc | My opinion only
OpenPGP: RSA v4 4096b: 88AC E9B2 9196 305B A994 7552 F1BA 225C 0223 B187




Re: [PATCH] checkout: don't check worktrees when not necessary

2015-06-12 Thread Junio C Hamano
Nguyễn Thái Ngọc Duy  pclo...@gmail.com writes:

 When --patch or pathspecs are passed to git checkout, the working tree
 will not be switching branch, so there's no need to check if the branch
 that we are running checkout on is already checked out.

Yeah, I agree that having this check in parse_branchname_arg() does
not make any sense, as that function is still trying to decide if we
are switching to a different branch or not.  The new location looks
much more sensible.

Will queue this on nd/multiple-work-trees (which has been in
'master' for about a month).  Good thing that we caught it before it
got in any tagged release.

Thanks.


 Original-patch-by: Spencer Baugh sba...@catern.com
 Signed-off-by: Nguyễn Thái Ngọc Duy pclo...@gmail.com
 ---
  builtin/checkout.c | 23 +++
  t/t2025-checkout-to.sh |  8 
  2 files changed, 19 insertions(+), 12 deletions(-)

 diff --git a/builtin/checkout.c b/builtin/checkout.c
 index 9b49f0e..e227f64 100644
 --- a/builtin/checkout.c
 +++ b/builtin/checkout.c
  @@ -1110,7 +1110,6 @@ static int parse_branchname_arg(int argc, const char **argv,
   {
    struct tree **source_tree = opts->source_tree;
    const char **new_branch = opts->new_branch;
  - int force_detach = opts->force_detach;
   int argcount = 0;
   unsigned char branch_rev[20];
   const char *arg;
  @@ -1231,17 +1230,6 @@ static int parse_branchname_arg(int argc, const char **argv,
    else
    new->path = NULL; /* not an existing branch */
  
  - if (new->path && !force_detach && !*new_branch) {
  - unsigned char sha1[20];
  - int flag;
  - char *head_ref = resolve_refdup("HEAD", 0, sha1, &flag);
  - if (head_ref &&
  - (!(flag & REF_ISSYMREF) || strcmp(head_ref, new->path)) &&
  - !opts->ignore_other_worktrees)
 - check_linked_checkouts(new);
 - free(head_ref);
 - }
 -
    new->commit = lookup_commit_reference_gently(rev, 1);
    if (!new->commit) {
    /* not a commit */
  @@ -1321,6 +1309,17 @@ static int checkout_branch(struct checkout_opts *opts,
    die(_("Cannot switch branch to a non-commit '%s'"),
    new->name);
  
  + if (new->path && !opts->force_detach && !opts->new_branch) {
  + unsigned char sha1[20];
  + int flag;
  + char *head_ref = resolve_refdup("HEAD", 0, sha1, &flag);
  + if (head_ref &&
  + (!(flag & REF_ISSYMREF) || strcmp(head_ref, new->path)) &&
  + !opts->ignore_other_worktrees)
 + check_linked_checkouts(new);
 + free(head_ref);
 + }
 +
  + if (opts->new_worktree)
   return prepare_linked_checkout(opts, new);
  
 diff --git a/t/t2025-checkout-to.sh b/t/t2025-checkout-to.sh
 index f8e4df4..a8d9336 100755
 --- a/t/t2025-checkout-to.sh
 +++ b/t/t2025-checkout-to.sh
  @@ -28,6 +28,14 @@ test_expect_success 'checkout --to refuses to checkout locked branch' '
   ! test -d .git/worktrees/zere
  '
  
  +test_expect_success 'checking out paths not complaining about linked checkouts' '
  + (
  + cd existing_empty &&
  + echo dirty >>init.t &&
  + git checkout master -- init.t
  + )
  +'
 +
   test_expect_success 'checkout --to a new worktree' '
    git rev-parse HEAD >expect &&
    git checkout --detach --to here master &&


Re: [PATCH v2] Allow to control where the replace refs are looked for

2015-06-12 Thread Junio C Hamano
Junio C Hamano gits...@pobox.com writes:

 Mike Hommey m...@glandium.org writes:

 It can be useful to have grafts or replace refs for specific use-cases while
 keeping the default view of the repository pristine (or with a different
 set of grafts/replace refs).

 It is possible to use a different graft file with GIT_GRAFT_FILE, but while
 replace refs are more powerful, they don't have an equivalent override.

 Add a GIT_REPLACE_REF_BASE environment variable to control where git is
 going to look for replace refs.

 Signed-off-by: Mike Hommey m...@glandium.org
 ---
  builtin/replace.c | 6 +++---
  cache.h   | 2 ++
  environment.c | 6 ++
  log-tree.c| 5 +++--
  refs.c| 3 ++-
  5 files changed, 16 insertions(+), 6 deletions(-)

 Thanks.

 git am seems to be seeing a patch based on a stale base.  What
 branch did you base this on?

Ah, no need to resend; the conflict is with recently merged
bc/object-id.

Thanks.


Re: [PATCH v4] git-rebase--interactive.sh: add config option for custom instruction format

2015-06-12 Thread Mike Rappazzo
It only needs the '%s' for the autosquash when the todo/instruction
list order is determined.  For this, in the rearrange_squash function,
it will re-calculate the message:

+   test -z "${format}" || message=$(git log -n 1
--format="%s" ${sha1})

Additionally, it may also rerun the log command when preparing the final list.

It is possible that this could be made more efficient by separating
the list arrangement from the list presentation.  I can look into that
for a future patch.

I did add a test which uses the instructionFormat config, and then
interactively auto-squashes using both a 'squash! sha1' and a
'squash! comment' in the commits.


On Fri, Jun 12, 2015 at 4:56 PM, Junio C Hamano gits...@pobox.com wrote:
 Michael Rappazzo rappa...@gmail.com writes:

 A config option 'rebase.instructionFormat' can override the
 default 'oneline' format of the rebase instruction list.

 Since the list is parsed using the left, right or boundary mark plus
 the sha1, they are prepended to the instruction format.

 Signed-off-by: Michael Rappazzo rappa...@gmail.com
 ---
  Documentation/git-rebase.txt |  7 +++
  git-rebase--interactive.sh   | 20 +---
  t/t3415-rebase-autosquash.sh | 21 +
  3 files changed, 45 insertions(+), 3 deletions(-)

 Thanks, will replace.

 The autosquash part somehow makes me feel uneasy, though.  The
 feature fundamentally has to have %s as the first thing in the
 format to work, but by making the format overridable, you are
 potentially breaking that feature, aren't you?


[PATCH 1/3] pkt-line: simplify starts_with checks in packet tracing

2015-06-12 Thread Jeff King
We carefully check that our pkt buffer has enough characters
before seeing if it starts with "PACK". The intent is to
avoid reading random memory if we get a short buffer like
"PAC".

However, we know that the traced packets are always
NUL-terminated. They come from one of these sources:

  1. A string literal.

  2. `format_packet`, which uses a strbuf.

  3. `packet_read`, which defensively NUL-terminates what we
 read.

We can therefore drop the length checks, as we know we will
hit the trailing NUL if we have a short input.

Signed-off-by: Jeff King p...@peff.net
---
 pkt-line.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index 187a229..0477d2e 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -24,8 +24,7 @@ static void packet_trace(const char *buf, unsigned int len, int write)
 strbuf_addf(&out, "packet: %12s%c ",
 packet_trace_prefix, write ? '>' : '<');
 
-   if ((len >= 4 && starts_with(buf, "PACK")) ||
-   (len >= 5 && starts_with(buf+1, "PACK"))) {
+   if (starts_with(buf, "PACK") || starts_with(buf + 1, "PACK")) {
 strbuf_addstr(&out, "PACK ...");
 trace_disable(&trace_packet);
}
-- 
2.4.2.752.geeb594a


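The commit message's safety argument can be checked in isolation. The helper below mirrors the shape of git's starts_with(): it walks both strings together and stops at the first mismatch, so a short NUL-terminated buffer like "PAC" fails on the NUL before any out-of-bounds read:

```c
/* Same shape as git's starts_with(): returns 1 if str begins with
 * prefix, 0 otherwise.  The comparison stops at the first mismatch,
 * and a terminating NUL in str is just another mismatching byte. */
static int starts_with(const char *str, const char *prefix)
{
	for (; ; str++, prefix++) {
		if (!*prefix)
			return 1; /* consumed the whole prefix: match */
		else if (*str != *prefix)
			return 0; /* mismatch (possibly str's NUL) */
	}
}
```

Since the traced buffers are always NUL-terminated, the comparison never reads past the end of a short input, which is exactly why the explicit length checks can go.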

[PATCH 2/3] pkt-line: tighten sideband PACK check when tracing

2015-06-12 Thread Jeff King
To find the start of the pack data, we accept the word "PACK"
at the beginning of any sideband channel, even though what
we really want is to find the pack data on channel 1. In
practice this doesn't matter, as sideband-2 messages tend to
start with "error:" or similar, but it is a good idea to be
explicit (especially as we add more code in this area, we
will rely on this assumption).

Signed-off-by: Jeff King p...@peff.net
---
 pkt-line.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pkt-line.c b/pkt-line.c
index 0477d2e..e75af02 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -24,7 +24,7 @@ static void packet_trace(const char *buf, unsigned int len, int write)
 strbuf_addf(&out, "packet: %12s%c ",
 packet_trace_prefix, write ? '>' : '<');
 
-   if (starts_with(buf, "PACK") || starts_with(buf + 1, "PACK")) {
+   if (starts_with(buf, "PACK") || starts_with(buf, "\1PACK")) {
 strbuf_addstr(&out, "PACK ...");
 trace_disable(&trace_packet);
}
-- 
2.4.2.752.geeb594a



Re: [PATCH 0/8] object_id part 2

2015-06-12 Thread Junio C Hamano
brian m. carlson sand...@crustytoothpaste.net writes:

 While I did run the tests between each commit, I hadn't noticed they
 were failing because I don't have Apache installed on my laptop, so they
 were silently skipped.  I'll resubmit with that fixed.

It is somewhat strange that _only_ http part had failures like this,
and is unnerving, too, given that a few people seem to have given at
least a cursory read over the patches and didn't spot anything
obviously wrong.

Was that because there was a single manual botch, or was that merely
that other parts of the system do not have sufficient test coverage?


Re: git push keeps writing after server failure

2015-06-12 Thread Jeff King
On Fri, Jun 12, 2015 at 02:20:45PM -0400, Jeff King wrote:

   Notice GitHub prints "remote: fatal: pack exceeds maximum allowed
   size". That interrupted my "Writing objects" progress meter, and then
   git push just kept going and wrote really really fast (170 MiB/s!)
   until the entire pack was sent.
  
  Sounds like it's writing to a closed fd, then. Which makes sense; I
  think we should hang up the socket after writing the fatal message
  above.
 
 For reference, here's the patch implementing the max-size check on the
 server. It's on my long list of patches to clean up and send to the
 list; I never did this one because of the unpack-objects caveat
 mentioned below.

I did a little more digging on this.

With the max-size patch, we seem to reliably notice the problem and die
of EPIPE (you can bump receive.maxsize to something reasonable like 1m).
Pushing to GitHub[1], though, sometimes dies and sometimes ends up
pushing the whole pack over the ssh session. It seems racy.

I've confirmed in both cases that the receive-pack process dies on our
server. So presumably the problem is in between; it might be an ssh
weirdness, or it might be a problem with our proxy layer. I'll open an
issue internally to look at that.

-Peff

[1] I can tweak the max-size on a per-repo basis, which is how I did my
testing without waiting for 2G to transfer. If anybody is interested
in diagnosing the client side of this, I am happy to drop the
max-size on a test repo for you. But AFAICT it is not a client
problem at all.


Re: GNU diff and git diff - difference on myers algorithm?

2015-06-12 Thread Luis R. Rodriguez
On Tue, Jun 9, 2015 at 1:25 AM, Johannes Schindelin
johannes.schinde...@gmx.de wrote:
 Hi Luis,

 On 2015-06-08 20:34, Luis R. Rodriguez wrote:
 Based on a cursory review of the git code I get the impression that
 GNU diff and git 'diff' do not share any code for the possible diff
 algorithms.

 Indeed, Git's diff machinery is based[*1*] on libxdiff[*2*], not on GNU diff.

Great thanks for the confirmation. Interesting point by Linus on that
commit that changed from forking to use GNU diff to libxdiff:

  generating a diff is not an exact
   science - you can get two different diffs (and you will), and they can
   both be perfectly valid. So it's not possible to validate the
   libxdiff output by just comparing it against GNU diff.

Indeed, simple example is using different starting context lines, or
different number of context lines. I do however wonder if under
certain parameters they *need* to be equal, or if this has been
studied exactly before.

 I'm in particularly curious more about the default myers
 algorithm.

 Are you looking for a freely available implementation of the Myers algorithm? 
 Or are you interested in understanding it?

I was trying to determine the above, of possibilities of differences.
Now granted, diffs using different output layouts
should differ, and as I note above, if you modify the context preference
you might end up with slightly different diffs, but I am also curious
to know if anyone has done research to see whether or not two hunks
which are slightly different are functionally equivalent. More on the
reasoning behind this below.

 Please note that Myers' algorithm is just one first step in most diff 
 implementations (and that other diff algorithms have become popular, in 
 particular because comparing strings can be accelerated by hashing the text 
 lines first, and those hashes can also be used to identify matching pairs of 
 unique lines, giving rise to yet another huge performance boost for typical 
 uses).

Awesome.

 The reason why Myers' algorithm is not sufficient for diff implementations is 
 that it only optimizes the edit distance, i.e. the amount of added/removed 
 lines, while patches should be readable, too, i.e. prefer *consecutive* edits 
 to disjunct ones.

Indeed, this is along the lines of what I am looking for but with some
other tweaks considered, more on this below.

 Just to mention one post-processing technique that is so useful that I 
 implemented it for Git[*3*]: the patience diff algorithm of Bram Cohen (of 
 BitTorrent fame) finds matching pairs of unique lines -- think of a function 
 from which another function is refactored, for example, intuitively you want 
 the diff to keep the signature of the original function as a context line.

Indeed.
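Bram Cohen's core idea -- pair up lines that are unique in both files,
then thin the pairs to a longest increasing subsequence so the anchors
cannot cross -- can be sketched roughly as follows (a deliberately naive
O(n^2) illustration, not the Git or JGit implementation; names invented):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the patience-diff core: keep lines that are
 * unique in both files, then thin them to a longest increasing
 * subsequence of their positions in b, yielding non-crossing
 * "anchor" lines around which the diff is computed. */
#define MAXN 64

static int count_in(const char **v, int n, const char *s)
{
	int i, c = 0;
	for (i = 0; i < n; i++)
		if (!strcmp(v[i], s))
			c++;
	return c;
}

static int patience_anchors(const char **a, int na,
			    const char **b, int nb,
			    const char **out)
{
	int ai[MAXN], bi[MAXN], np = 0, i, j;

	/* pair up lines unique in both a and b, in a-order */
	for (i = 0; i < na; i++) {
		if (count_in(a, na, a[i]) != 1)
			continue;
		if (count_in(b, nb, a[i]) != 1)
			continue;
		for (j = 0; j < nb; j++)
			if (!strcmp(b[j], a[i]))
				break;
		ai[np] = i;
		bi[np] = j;
		np++;
	}

	/* longest increasing subsequence of bi[] (O(n^2) for clarity) */
	int len[MAXN], prev[MAXN], best = -1;
	for (i = 0; i < np; i++) {
		len[i] = 1;
		prev[i] = -1;
		for (j = 0; j < i; j++)
			if (bi[j] < bi[i] && len[j] + 1 > len[i]) {
				len[i] = len[j] + 1;
				prev[i] = j;
			}
		if (best < 0 || len[i] > len[best])
			best = i;
	}

	/* walk back to emit the anchors in order */
	int stack[MAXN], top = 0, n_out = 0;
	for (i = best; i >= 0; i = prev[i])
		stack[top++] = i;
	while (top-- > 0)
		out[n_out++] = a[ai[stack[top]]];
	return n_out;
}
```

The real implementations then recurse between consecutive anchors with
an ordinary line diff; this sketch only shows the anchor selection.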

 Disclaimer: While it is true that Gene and I shared an office for one month, 
 and that I am once again working in the same institute as he does, all my 
 knowledge about this algorithm stems from my reading his paper and 
 implementing the algorithm in Java for use in JGit[*3*].

:)

 I can take time to do a precise code review of the
 algorithms used on both GNU diff and git but if someone can already
 vet for any differences that'd be appreciated as it would save time.

 Again, I am curious what your goal is? I am sure I can support your quest 
 better when I understand what the purpose of this code review should be.

OK well, I'm curious about more research / effort when trying to
evaluate a diff with two separate but adjoining preprocessor directives,
and whether anyone has implemented an optimization option to let the diff
generator join them.

For example, to let it infer that:

--- a/test.c
+++ b/test.c
@@ -10,8 +10,6 @@ int main(int argc, char *argv[])

 #ifdef FOO
a = 4;
-#endif /* FOO */
-#ifdef FOO
a = 5;
 #endif /* FOO */

is possible.

 Luis


Re: [PATCHv3 0/4] git-p4: fixing --changes-block-size handling

2015-06-12 Thread Lex Spoon
The latest patch series LGTM. It's a pity about the more complicated
structure with two different ways to query the changes list, but it
does look hard to make it any simpler.

Lex Spoon


Re: [PATCH] git-checkout.txt: Document git checkout pathspec better

2015-06-12 Thread Torsten Bögershausen
On 2015-06-12 06.49, Scott Schmit wrote:
 'git checkout' with paths or `--patch` is used to restore modified or
 deleted paths to their original contents from the index or replace paths
 with the contents from a named tree-ish (most often a commit-ish)
 instead of switching branches.
---
I will probably send a patch in the next few days.
It feels as if we can split the long sentence, and differentiate
between the "restore" and "copy content from another tree-ish" cases.
How about this:


'git checkout' [--] pathspec...::
'git checkout' with paths is used to restore modified or
deleted paths to their original contents from the index.

'git checkout' [-p|--patch] [tree-ish] [--] pathspec...::
'git checkout' with [tree-ish] and paths or `--patch` is used
to replace paths with the contents from a named tree-ish
(most often a commit-ish) instead of switching branches.
In this case, the `-b` and `--track` options are
meaningless and giving either of them results in an error.  The
tree-ish argument can be used to specify a specific tree-ish
(i.e.  commit, tag or tree) to update the index for the given
paths before updating the working tree.
+
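To make the two documented forms concrete (file and tag names below are
hypothetical):

```shell
# First form: restore a modified or deleted path from the index.
git checkout -- path/to/file.c

# Second form: replace a path with the content recorded in a tree-ish
# (here a tag and a relative commit, both hypothetical).
git checkout v2.0 -- path/to/file.c
git checkout HEAD~3 -- path/to/file.c
```

Note that neither form switches branches; both only touch the given
paths (and, in the second form, the index entries for those paths).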



Re: [PATCH v7 11/12] for-each-ref: introduce filter_refs()

2015-06-12 Thread Junio C Hamano
Matthieu Moy matthieu@grenoble-inp.fr writes:

 Karthik Nayak karthik@gmail.com writes:

 +filter_refs(&array, FILTER_REFS_ALL | FILTER_REFS_INCLUDE_BROKEN, &filter);

 I think it is more common to have options at the end, so I'd write it as

 filter_refs(&array, &filter, FILTER_REFS_ALL | FILTER_REFS_INCLUDE_BROKEN);

 (changing the declaration too, obviously)

Good point.


 I really like the way cmd_for_each_ref looks like now.

Likewise.


Re: [PATCH v7 03/12] for-each-ref: change comment in ref_sort

2015-06-12 Thread Christian Couder
On Fri, Jun 12, 2015 at 8:29 PM, Karthik Nayak karthik@gmail.com wrote:
 On 06/12/2015 11:34 PM, Junio C Hamano wrote:

 Karthik Nayak karthik@gmail.com writes:

 What change since 9f613dd do you have in mind, exactly, though?


 Well initially the atoms were indexed into used_atom array, which
 later was removed. Hence the comment becomes obsolete.


 Later in which commit?  In builtin/for-each-ref.c in the version
 after applying patches 1-3 of this series on top of master, I still
 see used_atom[] array there, so...?


 Uh! My bad.
 Ignore this! I think I got confused; on having a look now, that patch is
 not needed. Sorry.

I think it is needed later when struct ref_sort is moved into
ref-filter.h, because then the used_atom[] array is not moved.

So either:

1) you update the comment when you move struct ref_sort into
ref-filter.h, but then the downside is that there is not only code
movement in the patch that moves struct ref_sort into ref-filter.h,
or

2) you explain that, as struct ref_sort will be moved later while
the used_atom[] array will not be moved, the direct connection between
the comment and used_atom[] will be broken, and you need to prepare
for that because you want to avoid solution 1) as you want only code
movement when moving struct ref_sort into ref-filter.h.

Hope this helps,
Christian.


Re: [PATCH 0/8] object_id part 2

2015-06-12 Thread brian m. carlson
On Thu, Jun 11, 2015 at 01:00:04PM -0700, Junio C Hamano wrote:
 Fetched that branch, built and found out that it does not pass the
 tests, at least these (there may be others I do not usually run that
 are broken by this series; I dunno), so I'll discard what I fetched
 for now X-.
 
 Test Summary Report
 ---
 t5540-http-push-webdav.sh  (Wstat: 256 Tests: 19 Failed: 12)
   Failed tests:  4-10, 12-15, 17
   Non-zero exit status: 1
 t5539-fetch-http-shallow.sh (Wstat: 256 Tests: 3 Failed: 2)
   Failed tests:  2-3
   Non-zero exit status: 1
 t5541-http-push-smart.sh   (Wstat: 256 Tests: 34 Failed: 27)
   Failed tests:  3-17, 22-29, 31-34
   Non-zero exit status: 1
 t5551-http-fetch-smart.sh  (Wstat: 256 Tests: 26 Failed: 17)
   Failed tests:  4-14, 16, 19-20, 22, 24-25
   Non-zero exit status: 1
 t5550-http-fetch-dumb.sh   (Wstat: 256 Tests: 29 Failed: 12)
   Failed tests:  3, 7-16, 19
   Non-zero exit status: 1

While I did run the tests between each commit, I hadn't noticed they
were failing because I don't have Apache installed on my laptop, so they
were silently skipped.  I'll resubmit with that fixed.
-- 
brian m. carlson / brian with sandals: Houston, Texas, US
+1 832 623 2791 | http://www.crustytoothpaste.net/~bmc | My opinion only
OpenPGP: RSA v4 4096b: 88AC E9B2 9196 305B A994 7552 F1BA 225C 0223 B187


signature.asc
Description: Digital signature


Re: [PATCH/RFC] upload-pack: Fail if cloning empty namespace

2015-06-12 Thread Junio C Hamano
Johannes Löthberg johan...@kyriasis.com writes:

 + if (get_git_namespace() && !symref.items) {
 + die("git upload-pack: tried to clone from empty namespace");
 + }

Is this sufficient?

get_git_namespace() returns environment.c::namespace, which is set
up in setup_git_env() by calling expand_namespace() and strlen()
is run on that value, so I would presume the function will *ALWAYS*
return true.  Even when not namespaced, you would get an empty
string "" whose address is not NULL, no?
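A sketch of a check that would cope with that (a hypothetical helper,
not code from the patch; it assumes, as described above, that the
namespace is returned as a possibly empty string):

```c
#include <stddef.h>

/* Hypothetical helper: if get_git_namespace() always returns a
 * (possibly empty) string, "namespaced" has to mean "non-NULL and
 * non-empty", not merely "non-NULL". */
static int is_namespaced(const char *ns)
{
	return ns != NULL && *ns != '\0';
}
```

The patch's condition would then read something like
`if (is_namespaced(get_git_namespace()) && !symref.items)`.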


Re: [PATCH v7 03/12] for-each-ref: change comment in ref_sort

2015-06-12 Thread karthik nayak
On Sat, Jun 13, 2015 at 1:57 AM, Junio C Hamano gits...@pobox.com wrote:
 Christian Couder christian.cou...@gmail.com writes:

 I think it is needed later when struct ref_sort is moved into
 ref-filter.h, because then the used_atom[] array is not moved.

 Now I am confused.  used_atom[] is the mechanism we use to give a
 small integer to each atom used in the %(placeholder) string, so
 that we do not have to refer to them as placeholder string and we
 do not have to call !strcmp() to compare for equality.  How can a
 field in ref_sort refer to it internally if used_atom[] is not
 visible?

 Indeed, ref-filter.c after the series does have used_atom[] and
 get_ref_atom_value() does use atom to index into it.  So these two
 lines do not make much sense to me.  I am puzzled.

 If by moved you are referring to the fact that the structure and
 its fields are exposed to the callers of the API while the
 implementation detail of the mechanism to give short integer to each
 used atom is not exposed to them, then I do agree that the comment
 to the structure field as the external API definition should not
 talk about used_atom[] array.

 Perhaps in 03/12, you can stop talking about the implementation
 (i.e. the value is used to index into used_atom[] to get to the
 original string) and instead start talking about what the value
 means to the callers (that are still internal to for-each-ref
 implementation), to make things better.

 Having said that, I am not sure if there is a sensible description
 for that field if you avoid exposing the implementation detail.

 You would probably want to start by asking what that value means.
 For (eventual) external callers, the number is what they get from
 parse_ref_filter_atom(); calling that function is the only way they
 can get an appropriate value to stuff in the field, and
 parse_opt_ref_sorting() is the most likely function external callers
 use to make that happen.

 The number internally used to represent an attribute of a ref used
 when sorting the set of refs would be one way to say what it is
 without exposing the implementation detail to the readers.

 But does that help anybody?  I doubt it.  It is mouthful for
 external users, and it is not concrete enough for implementors.

 So either

  - treat ref-filter.h as information for API users, and not talk
about what it means at all, or

  - treat ref-filter.h as information for both API users and API
implementors, and describe it as an index into used_atom[],
i.e. do not change anything.

 I'd say the latter is a more sensible way to go.  I think it is also
 OK to change this comment to index into used_atom array (internal)
 when ref-filter.h is created as an external API definition.

Like you said, the comment is still relevant to the code.
So of the two options you suggested, keeping the
comment and just adding "(internal)" when the code is moved to ref-filter.h
seems to be the best solution. This would eliminate the need for PATCH 03.

--
Regards,
Karthik


Re: [PATCH v4] git-rebase--interactive.sh: add config option for custom instruction format

2015-06-12 Thread Junio C Hamano
Mike Rappazzo rappa...@gmail.com writes:

 In the second loop, I changed it to recalculate the presented message
 when the re-ordered commit is added:

 +   if test -n "${format}"
 +   then
 +msg_content=$(git log -n 1 --format="${format}" ${squash})

 That is the $rest.

Ahh, yeah, I missed that --format=${format} thing.

Thanks.


Re: [PATCH v2 09/19] pull: error on no merge candidates

2015-06-12 Thread Paul Tan
On Wed, Jun 10, 2015 at 7:56 AM, Junio C Hamano gits...@pobox.com wrote:
 Paul Tan pyoka...@gmail.com writes:

  /**
 + * Appends merge candidates from FETCH_HEAD that are not marked not-for-merge
 + * into merge_heads.
 + */

 Hmph, I vaguely recall doing that in C elsewhere already, even
 though I do not remember where offhand...

Right, handle_fetch_head() in builtin/merge.c. It turns the commit
IDs into commit objects though, which is not required for git-pull. We
only need the list of hashes.

 +static void get_merge_heads(struct sha1_array *merge_heads)
 +{
 + const char *filename = git_path(FETCH_HEAD);
 + FILE *fp;
 + struct strbuf sb = STRBUF_INIT;
 + unsigned char sha1[GIT_SHA1_RAWSZ];
 +
 + if (!(fp = fopen(filename, "r")))
 + die_errno(_("could not open '%s' for reading"), filename);
 + while(strbuf_getline(&sb, fp, '\n') != EOF) {

 Missing SP after while

OK

 + if (get_sha1_hex(sb.buf, sha1))
 + continue;  /* invalid line: does not start with SHA1 */
 + if (starts_with(sb.buf + GIT_SHA1_HEXSZ, "\tnot-for-merge"))

  Look for "\tnot-for-merge\t" instead?

Right, it's better to be stricter.
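The reason the trailing tab matters: FETCH_HEAD lines are tab-delimited
as "<sha1>\t<flag>\t<description>", with an empty flag field for merge
candidates. A hypothetical sketch of the stricter check (not the patch
code itself; the helper name and sample lines are invented):

```c
#include <string.h>

/* Hypothetical sketch: a FETCH_HEAD line is a merge candidate when
 * its flag field (between the first and second tab) is empty rather
 * than "not-for-merge".  Matching "\tnot-for-merge\t" with the
 * trailing tab tests the whole field, not just a prefix. */
static int is_merge_candidate(const char *line)
{
	const char *tab = strchr(line, '\t');
	if (!tab)
		return 0;	/* malformed line: no tab at all */
	return strncmp(tab, "\tnot-for-merge\t",
		       strlen("\tnot-for-merge\t")) != 0;
}
```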

Thanks,
Paul


Re: [PATCH] Fix power checking on OS X

2015-06-12 Thread Eric Sunshine
On Fri, Jun 12, 2015 at 4:53 AM, Panagiotis Astithas past...@gmail.com wrote:
 On Fri, Jun 12, 2015 at 6:46 AM, Eric Sunshine sunsh...@sunshineco.com 
 wrote:
 On Thu, Jun 11, 2015 at 10:37 AM, Panagiotis Astithas past...@gmail.com 
 wrote:
 The output of pmset -g batt changed at some point from
 "Currently drawing from 'AC Power'" to the slightly different
 "Now drawing from 'AC Power'". Starting the match from "drawing"
 makes the check work in both old and new versions of OS X.

 Would it make sense to try to future-proof this further by searching
 only for 'AC (including the leading single-quote) or just AC
 (without the leading quote)?

 (Genuine question. I don't have a strong feeling about it.)

 It's a reasonable idea, although I'm wondering what are the odds of
 pmset changing to output a string when running on battery in the
 future, containing something like no longer on 'AC Power'. That's
 probably too far out though.

That was my concern, as well, hence not feeling strongly about it.
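For what it's worth, the loosened match can be sanity-checked against
both known phrasings (the sample strings below stand in for real pmset
output):

```shell
# Both observed pmset phrasings should match the loosened pattern.
pattern="drawing from 'AC Power'"
echo "Currently drawing from 'AC Power'" | grep -q "$pattern" && echo old ok
echo "Now drawing from 'AC Power'" | grep -q "$pattern" && echo new ok
```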


Re: git init with template dir

2015-06-12 Thread Alex Cornejo
Junio C Hamano gitster at pobox.com writes:

 Hmmm, I do not seem to be able to do this, though.
 
 $ ln -s $HOME/g/share/git-core/templates /var/tmp/git-template
 $ cd /var/tmp
   $ git init --template=/var/tmp/git-template new
 $ find new/.git -type l
 ... nothing ...

Thanks for your prompt response, Junio.

That makes sense. The fact that you were unable to reproduce this tells
me that there is probably something fishy/unexpected with the
environment in which I tried this (which is not too surprising, given
that I was doing it inside a linux container, inside a virtual machine,
where both of these were set up using scripts which ultimately failed
after the git init step, due to the symlink behavior I described, but
most likely this is my own fault). I should have tried to reproduce this
on a clean slate before posting this question.

Thanks again,

Alex



Re: git init with template dir

2015-06-12 Thread Junio C Hamano
Alex Cornejo acorn...@gmail.com writes:

 Junio C Hamano gitster at pobox.com writes:

 Hmmm, I do not seem to be able to do this, though.
 
 $ ln -s $HOME/g/share/git-core/templates /var/tmp/git-template
 $ cd /var/tmp
 $ git init --template=/var/tmp/git-template new
 $ find new/.git -type l
 ... nothing ...

 Thanks for your prompt response, Junio.

 That makes sense. The fact that you were unable to reproduce this tells
 me that there is probably something fishy/unexpected with the
 environment in which I tried this (which is not too surprising, given
 that I was doing it inside a linux container, inside a virtual machine,
 where both of these were set up using scripts which ultimately failed
 after the git init step, due to the symlink behavior I described, but
 most likely this is my own fault).

I wouldn't call that your fault.  After all, as more people want
to run Git in different environments, we would want to make sure Git
runs correctly for them.

I quickly re-scanned what we do inside git init and how we
populate the repository from templates.  This happens all in
builtin/init-db.c:

 - copy_templates() does opendir(), so it should not have mattered
   that I used /var/tmp/git-template that is a symbolic link to a
   real location in my quick reproduction attempt;

 - it calls a recursive copy_templates_1() with that directory
   handle it opened for the template directory.  Each entry it finds
   are inspected and

   - real directories are recursed into;
   - files are copied; and
   - symlinks are recreated.
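In outline, that recursion behaves like the following sketch (C, but
with hypothetical names and simplified error handling; the real logic
lives in builtin/init-db.c):

```c
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical sketch of the described recursion: recurse into real
 * directories, copy regular files, and recreate symlinks as symlinks
 * pointing at the same target. */
static void copy_templates(const char *src, const char *dst)
{
	DIR *dir = opendir(src);
	struct dirent *de;
	char s[4096], d[4096], target[4096];

	if (!dir)
		return;
	mkdir(dst, 0777);
	while ((de = readdir(dir)) != NULL) {
		struct stat st;
		ssize_t n;

		if (!strcmp(de->d_name, ".") || !strcmp(de->d_name, ".."))
			continue;
		snprintf(s, sizeof(s), "%s/%s", src, de->d_name);
		snprintf(d, sizeof(d), "%s/%s", dst, de->d_name);
		if (lstat(s, &st) < 0)
			continue;
		if (S_ISLNK(st.st_mode)) {	/* symlinks are recreated */
			n = readlink(s, target, sizeof(target) - 1);
			if (n < 0)
				continue;
			target[n] = '\0';
			symlink(target, d);
		} else if (S_ISDIR(st.st_mode)) {	/* directories recurse */
			copy_templates(s, d);
		} else {				/* files are copied */
			FILE *in = fopen(s, "rb"), *out = fopen(d, "wb");
			int c;
			if (in && out)
				while ((c = fgetc(in)) != EOF)
					fputc(c, out);
			if (in) fclose(in);
			if (out) fclose(out);
		}
	}
	closedir(dir);
}
```

So a symlink *inside* the template directory is deliberately preserved
as a symlink, while a symlink *to* the template directory is resolved
by opendir() before the recursion ever starts.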

So if I instead made a new directory /var/tmp/git-template/ and then
populated it with a bunch of symbolic links e.g. hooks, description,
etc., that point at their real locations, I would have seen the
resulting repository populated with symbolic links.

And I think that is an expected behaviour.

But if git init made bunch of symbolic links only because it was
given a symbolic link to the real directory via its --template
parameter, that _is_ unexpected, and we may want to dig deeper to
correct it.



Re: [PATCH] git-checkout.txt: Document git checkout pathspec better

2015-06-12 Thread Junio C Hamano
Scott Schmit i.g...@comcast.net writes:

 On Wed, Jun 10, 2015 at 08:05:32AM -0700, Junio C Hamano wrote:

 How about this?
 
 'git checkout' with paths or `--patch` is used to restore
 modified or deleted paths to their original contents from
 the index file or from a named tree-ish (most often a
 commit) without switching branches.

 I think these changes would improve the above:

 s/index file/index/
 - index file is implementation; the glossary only defines index

Yup, that was sloppy of me.  Thanks.

 s/or from/or replace paths with the contents from/
 - the latter case isn't always restoration, if tree-ish doesn't come
   from an ancestor of HEAD (so I don't like restore in the summary
   either)

Yes, that is why the original said 'checkout' in the first place.

 s/without switching/instead of switching/
 - 'without' implies it makes sense to restore/replace with switching
   branches, but we've chosen not to.  (I then waste time trying to
   understand that)

OK.

 s/commit/commit-ish/
 - tags are also tree-ishes, though you could argue this case is less
   often

Correct.

 leaving:

 'git checkout' with paths or `--patch` is used to restore modified or
 deleted paths to their original contents from the index or replace paths
 with the contents from a named tree-ish (most often a commit-ish)
 instead of switching branches.

Yeah, I like that.  I'd appreciate it if somebody could submit the final
version in patch form after waiting for a few days to hear others'
opinions.

 does a sha1 count as named? Maybe s/named //.

The "named" in the original "named tree-ish" does not mean the
tree-ish has a human readable name (e.g. a tag); it merely means
the user tells Git to use one tree-ish to use for this operation;
and the tree-ish was specified (by some means) by the user, i.e.
the same thing as "specified".  If you specify the tree-ish with its
object name, yes, you are naming that (after all, that is what
everything in sha1-name.c does).

s/a named tree-ish/the tree-ish/ in the improved text you
proposed above would be sufficient, I would think, as it is clear
which tree-ish we are talking about in the context.

Thanks.


[PATCH] t0302: unreadable test needs SANITY prereq

2015-06-12 Thread Paul Tan
The test expects that chmod -r ~/.git-credentials would make it
unreadable to the user, and thus needs the SANITY prerequisite.

Reported-by: Jean-Yves LENHOF jean-y...@lenhof.eu.org
Signed-off-by: Paul Tan pyoka...@gmail.com
---
 t/t0302-credential-store.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/t/t0302-credential-store.sh b/t/t0302-credential-store.sh
index 0979df9..1d8d1f2 100755
--- a/t/t0302-credential-store.sh
+++ b/t/t0302-credential-store.sh
@@ -75,7 +75,7 @@ test_expect_success 'get: use xdg file if home file has no matches' '
EOF
 '
 
-test_expect_success POSIXPERM 'get: use xdg file if home file is unreadable' '
+test_expect_success POSIXPERM,SANITY 'get: use xdg file if home file is unreadable' '
	echo "https://home-user:home-p...@example.com" >"$HOME/.git-credentials" &&
	chmod -r "$HOME/.git-credentials" &&
	mkdir -p "$HOME/.config/git" &&
-- 
2.1.4
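The underlying behavior the SANITY prerequisite encodes, as a standalone
illustration:

```shell
# Removing read permission only blocks unprivileged users; root (uid 0)
# bypasses the file mode bits, which is why the test needs SANITY.
f=$(mktemp)
echo secret >"$f"
chmod -r "$f"
if [ "$(id -u)" -eq 0 ]; then
	cat "$f"		# still succeeds for root
else
	! cat "$f" 2>/dev/null	# fails for an ordinary user
fi
rm -f "$f"
```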



Re: RFC: reverse history tree, for faster background clones

2015-06-12 Thread Dennis Kaarsemaker
On Fri, 2015-06-12 at 13:26 +0200, Andres G. Aragoneses wrote:

 AFAIU git stores the contents of a repo as a sequence of patches in the 
 .git metadata folder. 

It does not, it stores full snapshots of files.

[I've cut the example, as it's not how git works]

 1. `git clone --depth 1` would be way faster, and without the need of 
 on-demand compressing of packfiles in the server side, correct me if I'm 
 wrong?

You're wrong due to the misunderstanding of how git works :)

 2. `git clone` would be able to allow a fast operation, complete in the 
 background mode that would allow you to download the first snapshot of 
 the repo very quickly, so that the user would be able to start working 
 on his working directory very quickly, while a background job keeps 
 retreiving the history data in the background.

This could actually be a good thing, and can be emulated now with git
clone --depth=1 and subsequent fetches in the background to deepen the
history. I can see some value in clone doing this by itself, first doing
a depth=1 fetch, then launching itself into the background, giving you a
worktree to play with earlier.
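Something along these lines already approximates it today (the
repository URL is hypothetical; the deepening step runs while you work):

```shell
# Fast initial clone, then complete the history in the background.
git clone --depth=1 https://example.com/big-repo.git
cd big-repo
git fetch --unshallow &		# deepen while you start working
```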

-- 
Dennis Kaarsemaker
http://www.kaarsemaker.net



Re: [PATCH] t0302: unreadable test needs SANITY prereq

2015-06-12 Thread Jeff King
On Fri, Jun 12, 2015 at 09:29:58PM +0800, Paul Tan wrote:

 The test expects that chmod -r ~/.git-credentials would make it
 unreadable to the user, and thus needs the SANITY prerequisite.

Yup, looks obviously correct to me. Thanks for a quick turnaround.

-Peff


Re: [PATCH] Fix power checking on OS X

2015-06-12 Thread Junio C Hamano
Panagiotis Astithas past...@gmail.com writes:

 On Fri, Jun 12, 2015 at 6:46 AM, Eric Sunshine sunsh...@sunshineco.com 
 wrote:
 On Thu, Jun 11, 2015 at 10:37 AM, Panagiotis Astithas past...@gmail.com 
 wrote:
 The output of pmset -g batt changed at some point from
 "Currently drawing from 'AC Power'" to the slightly different
 "Now drawing from 'AC Power'". Starting the match from "drawing"
 makes the check work in both old and new versions of OS X.

 Would it make sense to try to future-proof this further by searching
 only for 'AC (including the leading single-quote) or just AC
 (without the leading quote)?

 (Genuine question. I don't have a strong feeling about it.)

 It's a reasonable idea, although I'm wondering what are the odds of
 pmset changing to output a string when running on battery in the
 future, containing something like no longer on 'AC Power'. That's
 probably too far out though.

Once they start drawing from USB-C, they may stop referring to AC
altogether; unless you have a crystal ball, it is futile to spend
too many brain cycles to try futureproofing.

Prediction is very difficult, especially if it's about the future.


Re: [PATCH v2] fetch-pack: optionally save packs to disk

2015-06-12 Thread Junio C Hamano
Johannes Sixt j...@kdbg.org writes:

 What is the problem with the current fetch-pack implementation? Does
 it remove a bogus packfile after download? Does it abort during
 download when it detects a broken packfile? Does --keep not do what
 you need?

Doesn't the incoming data still go through the fattening process,
though?  You will not be able to inspect the byte-for-byte identical
stream that came out of the server end whose packfile generation
logic is suspect.

For the purpose of debugging your own new server implementation, it
might be a better approach to capture the pack as it comes out at
the server end, instead of doing it at the fetch-pack end as it
comes in. But the approach to add this dump at the fetch-pack side
is that it gives us a tool to diagnose problems that come from
broken server (re)implementations by other people we cannot debug,
i.e. "you are spewing this corrupt pack against this request; here
is a dump we took to help you go fix your server."

 Instead of your approach (which forks off tee to dump a copy of the
 packfile), would it not be simpler to add an option --debug-pack
 (probably not the best name) that skips the cleanup step when a broken
 packfile is detected and prints the name of the downloaded packfile?

As long as we need to debug a thin pack that comes from the other
end, that approach is not sufficient, I am afraid.

I anticipated that you'd have problem with its use of tee.  It
probably can do this internally with async interface, perhaps,
instead?
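A sketch of doing the tee in-process (hypothetical; git's real code
would go through its async machinery and pkt-line reader rather than
plain stdio):

```c
#include <stdio.h>

/* Hypothetical sketch of an in-process "tee": every chunk read from
 * the network side is appended to a dump file before being handed to
 * the normal pack consumer, preserving the byte-for-byte stream for
 * later inspection. */
static size_t tee_stream(FILE *in, FILE *dump,
			 void (*consume)(const char *, size_t))
{
	char buf[4096];
	size_t n, total = 0;

	while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
		fwrite(buf, 1, n, dump);	/* copy for inspection */
		consume(buf, n);		/* normal processing */
		total += n;
	}
	return total;
}
```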


Re: git init with template dir

2015-06-12 Thread Junio C Hamano
Alex Cornejo acorn...@gmail.com writes:

 I was surprised to see that when using git-init, if the template folder
 is itself a symlink, then the contents of the template is NOT copied to
 the resulting git repository, but instead each individual file is
 symlinked.

Hmmm, I do not seem to be able to do this, though.

$ ln -s $HOME/g/share/git-core/templates /var/tmp/git-template
$ cd /var/tmp
$ git init --template=/var/tmp/git-template new
$ find new/.git -type l
... nothing ...

 For my particular use case, this is undesirable (since if I am not
 careful, then when I change the hook of one git repo, it
 actually changes the hooks of all other repos too). It is easy
 enough for me to work around this (i.e. by instead pointing my gitconfig
 to use a template dir which is not a symlink), but I was
wondering whether this is a feature folks use (and for what end), or if
 this is unintended behavior.

That you had to predicate "this is undesirable" with "For my
particular use case" tells me that other people may want to see
these things shared and automatically receive updates when the
originals in the template directory are updated, which makes it
sound like a feature, not an unintended behaviour, at least to
me.


Re: [PATCH v7 03/12] for-each-ref: change comment in ref_sort

2015-06-12 Thread Junio C Hamano
Karthik Nayak karthik@gmail.com writes:

 The comment in 'ref_sort' hasn't been changed 9f613dd.

Bad grammar?  "hasn't been changed since 9f613dd", perhaps?

But more importantly, don't just give an abbreviated object name.  I
think "the comment hasn't changed since the for-each-ref command was
originally introduced" is what you meant to say, and it is OK to
append "since 9f613ddd (Add git-for-each-ref: helper for language
bindings, 2006-09-15)" to that sentence as supporting material.

 Change the comment to reflect changes made in the code since
 9f613dd.

What change since 9f613dd do you have in mind, exactly, though?

I do not think the fact that this field indexes into used_atom[]
array has ever changed during the life of this implementation.
I see "static const char **used_atom;" in builtin/for-each-ref.c
still in the 'master', and that is the array that holds the atoms
that are used by the end-user request.

So I do not think "The comment was there from the beginning, it
described the initial implementation, the implementation was updated
and the comment has become stale" is a good justification for this
change, as I do not think that is what has happened here.

You may be changing used_atom to something else later in your
series, but then isn't that commit the appropriate place to update
this comment?

 Mentored-by: Christian Couder christian.cou...@gmail.com
 Mentored-by: Matthieu Moy matthieu@grenoble-inp.fr
 Signed-off-by: Karthik Nayak karthik@gmail.com
 ---
  builtin/for-each-ref.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/builtin/for-each-ref.c b/builtin/for-each-ref.c
 index 0dd2df2..bfad03f 100644
 --- a/builtin/for-each-ref.c
 +++ b/builtin/for-each-ref.c
 @@ -27,7 +27,7 @@ struct atom_value {
  
  struct ref_sort {
   struct ref_sort *next;
 - int atom; /* index into used_atom array */
 + int atom; /* index into 'struct atom_value *' array */
   unsigned reverse : 1;
  };


Re: [PATCH v2] fetch-pack: optionally save packs to disk

2015-06-12 Thread Augie Fackler
On Fri, Jun 12, 2015 at 11:07 AM, Junio C Hamano gits...@pobox.com wrote:
 What is the problem with the current fetch-pack implementation? Does
 it remove a bogus packfile after download? Does it abort during
 download when it detects a broken packfile? Does --keep not do what
 you need?

 Doesn't the incoming data still go through the fattening process,
 though?  You will not be able to inspect the byte-for-byte identical
 stream that came out of the server end whose packfile generation
 logic is suspect.

 For the purpose of debugging your own new server implementation, it
 might be a better approach to capture the pack as it comes out at
 the server end, instead of doing it at the fetch-pack end as it
 comes in. But the approach to add this dump at the fetch-pack side
 is that it gives us a tool to diagnose problems that come from
 broken server (re)implementations by other people we cannot debug,
 i.e. "you are spewing this corrupt pack against this request; here
 is a dump we took to help you go fix your server."

*nod* that's an important part of it. Also, in the small-pull case,
the pack data gets sent to unpack-objects anyway, so git is never
saving the packfile anywhere in that case (I think it's for a pull of
less than 100 objects, which characterizes most of my reduced test
cases for weirdness.)


 Instead of your approach (which forks off tee to dump a copy of the
 packfile), would it not be simpler to add an option --debug-pack
 (probably not the best name) that skips the cleanup step when a broken
 packfile is detected and prints the name of the downloaded packfile?

 As long as we need to debug a thin pack that comes from the other
 end, that approach is not sufficient, I am afraid.

 I anticipated that you'd have problem with its use of tee.  It
 probably can do this internally with async interface, perhaps,
 instead?

I'd be happy to make such changes if that's the consensus and someone
can give me pointers. My C is admittedly pretty rusty from non-use,
and I'm not at all familiar with git's codebase, but I'll at least
try.

Thanks!


Re: [PATCH v7 01/12] for-each-ref: extract helper functions out of grab_single_ref()

2015-06-12 Thread Karthik Nayak

On 06/12/2015 11:00 PM, Junio C Hamano wrote:

Karthik Nayak karthik@gmail.com writes:


Extract two helper functions out of grab_single_ref(). Firstly,
new_refinfo() which is used to allocate memory for a new refinfo
structure and copy the objectname, refname and flag to it.
Secondly, match_name_as_path() which when given an array of patterns
and the refname checks if the refname matches any of the patterns
given while the pattern is a pathname, also supports wildcard
characters.

This is a preparatory patch for restructuring 'for-each-ref' and
eventually moving most of it to 'ref-filter' to provide the
functionality to similar commands via public APIs.

Helped-by: Junio C Hamano gits...@pobox.com
Mentored-by: Christian Couder christian.cou...@gmail.com
Mentored-by: Matthieu Moy matthieu@grenoble-inp.fr
Signed-off-by: Karthik Nayak karthik@gmail.com
---
  builtin/for-each-ref.c | 64 --
  1 file changed, 41 insertions(+), 23 deletions(-)

diff --git a/builtin/for-each-ref.c b/builtin/for-each-ref.c
index f7e51a7..67c8b62 100644
--- a/builtin/for-each-ref.c
+++ b/builtin/for-each-ref.c
@@ -851,6 +851,44 @@ struct grab_ref_cbdata {
  };

  /*
+ * Return 1 if the refname matches with one of the patterns,


s/with //;


Thanks! will change :)




+ * otherwise 0.  The patterns can be literal prefix (e.g. a
+ * refname "refs/heads/master" matches a pattern "refs/heads/")
+ * or a wildcard (e.g. the same ref matches "refs/heads/m*", too).
+ */


I know this was my bad suggestion, but "refs/heads/m" can be thought
of as a literal prefix that may match "refs/heads/master"; we do
not want to make that match, so perhaps "literal" is a bad way to
say this.  "A pattern can be a path prefix or a wildcard"?



Yes! that sounds right, after all its doing a path match.

--
Regards,
Karthik


Re: [PATCH v2] fetch-pack: optionally save packs to disk

2015-06-12 Thread Augie Fackler
On Fri, Jun 12, 2015 at 2:22 AM, Johannes Sixt j...@kdbg.org wrote:

 Am 11.06.2015 um 20:59 schrieb Augie Fackler:

 When developing server software, it's often helpful to save a
 potentially-bogus pack for later analysis. This makes that trivial,
 instead of painful.


 When you develop server software, shouldn't you test drive the server via
the bare metal protocol anyway? That *is* painful, but unavoidable because
you must harden the server against any garbage that a potentially malicious
client could throw at it. Restricting yourself to a well-behaved client
such as fetch-pack is only half the deal.


We do that too, but sometimes I've encountered an edge case that's
trivially reproduced from an existing repo, and going through the work to
manually drive the server is a monumental pain in the butt, and all I
*really* need is to see the bytes sent from the server to the client. If it
weren't for SSL-everywhere, I'd probably just do this with wireshark, but
that's not the world I live in.



 That said, I do think that fetch-pack could learn a mode that makes it
easier to debug the normal behavior of a server (if such a mode is missing
currently).

 What is the problem with the current fetch-pack implementation? Does it
remove a bogus packfile after download? Does it abort during download when
it detects a broken packfile? Does --keep not do what you need?


fetch-pack doesn't store the pack anywhere - it's sending it to index-pack
(or unpack-objects) using --stdin, which means that the raw bytes from the
server currently are never materialized anywhere on disk. Having index-pack
do this is too late, because it's doing things like rewriting the pack
header in a potentially new format.

(Junio also covered this well, thanks!)



 Instead of your approach (which forks off tee to dump a copy of the
packfile), would it not be simpler to add an option --debug-pack (probably
not the best name) that skips the cleanup step when a broken packfile is
detected and prints the name of the downloaded packfile?


 diff --git a/fetch-pack.c b/fetch-pack.c
 index a912935..fe6ba58 100644
 --- a/fetch-pack.c
 +++ b/fetch-pack.c
 @@ -684,7 +684,7 @@ static int get_pack(struct fetch_pack_args *args,
 const char *argv[22];
 char keep_arg[256];
 char hdr_arg[256];
 -   const char **av, *cmd_name;
 +   const char **av, *cmd_name, *savepath;
 int do_keep = args->keep_pack;
 struct child_process cmd = CHILD_PROCESS_INIT;
 int ret;
 @@ -708,9 +708,8 @@ static int get_pack(struct fetch_pack_args *args,
 cmd.argv = argv;
 av = argv;
 *hdr_arg = 0;
 +   struct pack_header header;
 if (!args->keep_pack && unpack_limit) {
 -   struct pack_header header;
 -
 if (read_pack_header(demux.out, &header))
 die("protocol error: bad pack header");
 snprintf(hdr_arg, sizeof(hdr_arg),
 @@ -762,7 +761,44 @@ static int get_pack(struct fetch_pack_args *args,
 *av++ = --strict;
 *av++ = NULL;

 -   cmd.in = demux.out;
 +   savepath = getenv("GIT_SAVE_FETCHED_PACK_TO");
 +   if (savepath) {
 +   struct child_process cmd2 = CHILD_PROCESS_INIT;
 +   const char *argv2[22];
 +   int pipefds[2];
 +   int e;
 +   const char **av2;
 +   cmd2.argv = argv2;
 +   av2 = argv2;
 +   *av2++ = "tee";
 +   if (*hdr_arg) {
 +   /* hdr_arg being nonempty means we already read the
 +* pack header from demux, so we need to drop a pack
 +* header in place for tee to append to, otherwise
 +* we'll end up with a broken pack on disk.
 +*/


 /*
  * Write multi-line comments
  * like this (/* on its own line)
  */

 +   int fp;
 +   struct sha1file *s;
 +   fp = open(savepath, O_CREAT | O_TRUNC | O_WRONLY, 0666);
 +   s = sha1fd_throughput(fp, savepath, NULL);
 +   sha1write(s, &header, sizeof(header));
 +   sha1flush(s);


 Are you abusing sha1write() and sha1flush() to write a byte sequence to a
file? Is write_in_full() not sufficient?


I didn't know about write_in_full. I'm very much *not* familiar with git's
codebase - I know the protocols and formats reasonably well, but have
needed only occasionally to look at the code. That works, thanks.




 +   close(fp);
 +   /* -a is supported by both GNU and BSD tee */
 +   *av2++ = "-a";
 +   }
 +   *av2++ = savepath;
 +   *av2++ = NULL;
 +   cmd2.in = demux.out;
 +   e = pipe(pipefds);
 +  

git push keeps writing after server failure

2015-06-12 Thread Shawn Pearce
I did something stupid like trying to push a copy of WebKit[1] into my
GitHub account. This is ~5.2 GiB of data, which GitHub prefers not to
accept. Ok ...

$ git push --all g...@github.com:spearce/wk.git
Counting objects: 2752427, done.
Delta compression using up to 12 threads.
Compressing objects: 100% (442684/442684), done.
remote: fatal: pack exceeds maximum allowed size
Writing objects: 100% (2752427/2752427), 5.28 GiB | 8.86 MiB/s, done.
Total 2752427 (delta 2225007), reused 2752427 (delta 2225007)
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly

Notice GitHub prints "remote: fatal: pack exceeds maximum allowed
size". That interrupted my "Writing objects" progress meter, and then
git push just kept going and wrote really, really fast (170 MiB/s!)
until the entire pack was sent.

A similar thing happens on https:// if the remote HTTP server goes
away in the middle of sending the pack. Except its slower to send the
remainder of the pack before git push finally terminates with an
error.

Shouldn't git push realize its stream is broken and stop writing when
the peer is all like "uh, no, I'm not going to do that, but thanks for
trying"?


[1] https://webkit.googlesource.com/WebKit/


Re: [PATCH v7 01/12] for-each-ref: extract helper functions out of grab_single_ref()

2015-06-12 Thread Junio C Hamano
Karthik Nayak karthik@gmail.com writes:

 Extract two helper functions out of grab_single_ref(). Firstly,
 new_refinfo() which is used to allocate memory for a new refinfo
 structure and copy the objectname, refname and flag to it.
 Secondly, match_name_as_path() which when given an array of patterns
 and the refname checks if the refname matches any of the patterns
 given while the pattern is a pathname, also supports wildcard
 characters.

 This is a preparatory patch for restructuring 'for-each-ref' and
 eventually moving most of it to 'ref-filter' to provide the
 functionality to similar commands via public APIs.

 Helped-by: Junio C Hamano gits...@pobox.com
 Mentored-by: Christian Couder christian.cou...@gmail.com
 Mentored-by: Matthieu Moy matthieu@grenoble-inp.fr
 Signed-off-by: Karthik Nayak karthik@gmail.com
 ---
  builtin/for-each-ref.c | 64 
 --
  1 file changed, 41 insertions(+), 23 deletions(-)

 diff --git a/builtin/for-each-ref.c b/builtin/for-each-ref.c
 index f7e51a7..67c8b62 100644
 --- a/builtin/for-each-ref.c
 +++ b/builtin/for-each-ref.c
 @@ -851,6 +851,44 @@ struct grab_ref_cbdata {
  };
  
  /*
 + * Return 1 if the refname matches with one of the patterns,

s/with //;

 + * otherwise 0.  The patterns can be literal prefix (e.g. a
 + * refname "refs/heads/master" matches a pattern "refs/heads/")
 + * or a wildcard (e.g. the same ref matches "refs/heads/m*", too).
 + */

I know this was my bad suggestion, but "refs/heads/m" can be thought
of as a literal prefix that may match "refs/heads/master"; we do
not want to make that match, so perhaps "literal" is a bad way to
say this.  "A pattern can be a path prefix or a wildcard"?



Re: [PATCH v7 03/12] for-each-ref: change comment in ref_sort

2015-06-12 Thread Karthik Nayak

On 06/12/2015 11:10 PM, Junio C Hamano wrote:

Karthik Nayak karthik@gmail.com writes:


The comment in 'ref_sort' hasn't been changed 9f613dd.


Bad grammar?  "hasn't been changed since 9f613dd", perhaps?


Yes! thanks :)



But more importantly, don't just give an abbreviated object name.  I
think "the comment hasn't changed since the for-each-ref command was
originally introduced" is what you meant to say, and it is OK to
append "since 9f613ddd (Add git-for-each-ref: helper for language
bindings, 2006-09-15)" to that sentence as supporting material.



Ok will do.


Change the comment to reflect changes made in the code since
9f613dd.


What change since 9f613dd do you have in mind, exactly, though?


Well initially the atoms were indexed into used_atom array, which
later was removed. Hence the comment becomes obsolete.



I do not think the fact that this field indexes into used_atom[]
array has ever changed during the life of this implementation.
I see "static const char **used_atom;" in builtin/for-each-ref.c
still in the 'master', and that is the array that holds the atoms
that are used by the end-user request.

So I do not think "The comment was there from the beginning, it
described the initial implementation, the implementation was updated
and the comment has become stale" is a good justification for this
change, as I do not think that is what has happened here.

You may be changing used_atom to something else later in your
series, but then isn't that commit the appropriate place to update
this comment?



But isn't that what happened here, the code was altered but the comment
was left the way it is.

What do you suggest I do?

--
Regards,
Karthik


Re: git push keeps writing after server failure

2015-06-12 Thread Junio C Hamano
Shawn Pearce spea...@spearce.org writes:

 I did something stupid like trying to push a copy of WebKit[1] into my
 GitHub account. This is ~5.2 GiB of data, which GitHub prefers not to
 accept. Ok ...
 ...
 Shouldn't git push realize its stream is broken and stop writing when
 the peer is all like uh, no, I'm not going to do that, but thanks for
 trying?

Nice wish, but I am not sure if we are structured to do that easily.
The "fatal:" message is going on the sideband that is demultiplexed
by a separate async sideband-demux thread, and pack_objects() is
oblivious to what is being said on the error channel #2.


Re: [PATCH v2] fetch-pack: optionally save packs to disk

2015-06-12 Thread Jeff King
On Fri, Jun 12, 2015 at 08:07:36AM -0700, Junio C Hamano wrote:

 Johannes Sixt j...@kdbg.org writes:
 
  What is the problem with the current fetch-pack implementation? Does
  it remove a bogus packfile after download? Does it abort during
  download when it detects a broken packfile? Does --keep not do what
  you need?
 
 Doesn't the incoming data still go through the fattening process,
 though?  You will not be able to inspect the byte-for-byte identical
 stream that came out of the server end whose packfile generation
 logic is suspect.
 
 For the purpose of debugging your own new server implementation, it
 might be a better approach to capture the pack as it comes out at
 the server end, instead of doing it at the fetch-pack end as it
 comes in. But the approach to add this dump at the fetch-pack side
 is that it gives us a tool to diagnose problems that come from
 broken server (re)implementations by other people we cannot debug,
 i.e. you are spewing this corrupt pack against this request; here
 is a dump we took to help you go fix your server.

When I added GIT_TRACE_PACKET long ago, I had always intended to
follow-up with a GIT_TRACE_PACKFILE. The former stops tracing when we
get to the binary data, but I had intended the latter to store the pure
on-the-wire packfile transmission for later debugging, too. I never got
around to it, but I think it is something like the patch below.

With:

  GIT_TRACE_PACKET=1 GIT_TRACE_PACK=/tmp/foo.pack git clone ...

this yields a usable pack in /tmp/foo.pack (note that it only kicks in
when packet-tracing is on at all; if we restructure the code a bit, we
can remove that limitation).

In theory it would also work when receiving a pack via push, but I
think we actually skip the pkt-line protocol there. We'd have to
manually check GIT_TRACE_PACK.

Also, as a bonus, it means we do not stop tracing completely when we
start to receive a sideband pack. The current GIT_TRACE_PACKET code
misses any sideband-2 messages that come after we start receiving the
pack.

diff --git a/pkt-line.c b/pkt-line.c
index 187a229..f82871a 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -4,20 +4,39 @@
 char packet_buffer[LARGE_PACKET_MAX];
 static const char *packet_trace_prefix = "git";
 static struct trace_key trace_packet = TRACE_KEY_INIT(PACKET);
+static struct trace_key trace_pack = TRACE_KEY_INIT(PACK);
 
 void packet_trace_identity(const char *prog)
 {
packet_trace_prefix = xstrdup(prog);
 }
 
+static int packet_trace_pack(const char *buf, unsigned int len, int sideband)
+{
+   if (!sideband) {
+   trace_verbatim(&trace_pack, buf, len);
+   return 1;
+   } else if (len && *buf == '\1') {
+   trace_verbatim(&trace_pack, buf + 1, len - 1);
+   return 1;
+   } else {
+   /* it's another non-pack sideband */
+   return 0;
+   }
+}
+
 static void packet_trace(const char *buf, unsigned int len, int write)
 {
int i;
struct strbuf out;
+   static int in_pack, sideband;
 
 if (!trace_want(&trace_packet))
return;
 
+   if (in_pack && packet_trace_pack(buf, len, sideband))
+   return;
+
/* +32 is just a guess for header + quoting */
 strbuf_init(&out, len+32);
 
@@ -27,7 +46,9 @@ static void packet_trace(const char *buf, unsigned int len, 
int write)
 if ((len >= 4 && starts_with(buf, "PACK")) ||
 (len >= 5 && starts_with(buf+1, "PACK"))) {
 strbuf_addstr(&out, "PACK ...");
-   trace_disable(&trace_packet);
+   in_pack = 1;
+   sideband = *buf == '\1';
+   packet_trace_pack(buf, len, sideband);
}
else {
/* XXX we should really handle printable utf8 */
diff --git a/trace.c b/trace.c
index 3c3bd8f..7393926 100644
--- a/trace.c
+++ b/trace.c
@@ -120,6 +120,13 @@ static int prepare_trace_line(const char *file, int line,
return 1;
 }
 
+void trace_verbatim(struct trace_key *key, const void *buf, unsigned len)
+{
+   if (!trace_want(key))
+   return;
+   write_or_whine_pipe(get_trace_fd(key), buf, len, err_msg);
+}
+
 static void print_trace_line(struct trace_key *key, struct strbuf *buf)
 {
strbuf_complete_line(buf);
diff --git a/trace.h b/trace.h
index ae6a332..179b249 100644
--- a/trace.h
+++ b/trace.h
@@ -18,6 +18,7 @@ extern int trace_want(struct trace_key *key);
 extern void trace_disable(struct trace_key *key);
 extern uint64_t getnanotime(void);
 extern void trace_command_performance(const char **argv);
+extern void trace_verbatim(struct trace_key *key, const void *buf, unsigned 
len);
 
 #ifndef HAVE_VARIADIC_MACROS
 


Re: git push keeps writing after server failure

2015-06-12 Thread Jeff King
On Fri, Jun 12, 2015 at 10:31:33AM -0700, Shawn Pearce wrote:

 I did something stupid like trying to push a copy of WebKit[1] into my
 GitHub account. This is ~5.2 GiB of data, which GitHub prefers not to
 accept. Ok ...

Heh, yeah. We cap it at 2G, and if you are going to have a WebKit fork,
we prefer you fork the actual WebKit repo so it shares objects (though
if you have a need to create a new fork network, let me know).

 $ git push --all g...@github.com:spearce/wk.git
 Counting objects: 2752427, done.
 Delta compression using up to 12 threads.
 Compressing objects: 100% (442684/442684), done.
 remote: fatal: pack exceeds maximum allowed size
 Writing objects: 100% (2752427/2752427), 5.28 GiB | 8.86 MiB/s, done.
 Total 2752427 (delta 2225007), reused 2752427 (delta 2225007)
 fatal: The remote end hung up unexpectedly
 fatal: The remote end hung up unexpectedly
 
 Notice GitHub prints remote: fatal: pack exceeds maximum allowed
 size. That interrupted my Writing objects progress meter, and then
 git push just kept going and wrote really really fast (170 MiB/s!)
 until the entire pack was sent.

Sounds like it's writing to a closed fd, then. Which makes sense; I
think we should hang up the socket after writing the fatal message
above.

 Shouldn't git push realize its stream is broken and stop writing when
 the peer is all like uh, no, I'm not going to do that, but thanks for
 trying?

Hrm. I have this old patch, which was originally written so that kill
$(pidof git-push) did not let a rogue pack-objects continue writing.

I'm not sure if that's what is going on here, though. I think we connect
pack-objects directly to the socket. So it sounds more like
pack-objects --stdout needs to know to stop writing when writes to the
socket fail.

-- >8 --
Date: Sun, 3 Apr 2011 20:53:08 -0400
Subject: [PATCH] send-pack: kill pack-objects helper on signal or exit

We spawn an external pack-objects process to actually send
objects to the remote side. If we are killed by a signal
during this process, the pack-objects will keep running and
complete the push, which may surprise the user. We should
take it down when we go down.

Signed-off-by: Jeff King p...@peff.net
---
 send-pack.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/send-pack.c b/send-pack.c
index 2a64fec..bdf723b 100644
--- a/send-pack.c
+++ b/send-pack.c
@@ -67,6 +67,7 @@ static int pack_objects(int fd, struct ref *refs, struct 
sha1_array *extra, stru
po.in = -1;
 po.out = args->stateless_rpc ? -1 : fd;
po.git_cmd = 1;
+   po.clean_on_exit = 1;
if (start_command(po))
 die_errno("git pack-objects failed");
 
-- 
2.4.2.752.geeb594a



Re: [PATCH v7 03/12] for-each-ref: change comment in ref_sort

2015-06-12 Thread Karthik Nayak

On 06/12/2015 11:34 PM, Junio C Hamano wrote:

Karthik Nayak karthik@gmail.com writes:


What change since 9f613dd do you have in mind, exactly, though?


Well initially the atoms were indexed into used_atom array, which
later was removed. Hence the comment becomes obsolete.


Later in which commit?  In builtin/for-each-ref.c in the version
after applying patches 1-3 of this series on top of master, I still
see used_atom[] array there, so...?



Uh! My bad.
Ignore this! I think I got confused, On having a look now that patch is
not needed. Sorry.

--
Regards,
Karthik


Re: [PATCH v7 03/12] for-each-ref: change comment in ref_sort

2015-06-12 Thread Junio C Hamano
Karthik Nayak karthik@gmail.com writes:

 What change since 9f613dd do you have in mind, exactly, though?

 Well initially the atoms were indexed into used_atom array, which
 later was removed. Hence the comment becomes obsolete.

Later in which commit?  In builtin/for-each-ref.c in the version
after applying patches 1-3 of this series on top of master, I still
see used_atom[] array there, so...?


[PATCH v3] l10n: de.po: translate "index" as "Index"

2015-06-12 Thread Ralf Thielow
The term "index" is translated as "Staging-Area" to
match a majority of German books and to not confuse
Git beginners who don't know about Git's index.

"Staging Area" is used in German books as a thing where
content can be staged for commit.  While the translation
is good for those kinds of messages, it's bad for messages
that mean the Git index as the tree state or the index
file, in which case we should translate as "Index".

Signed-off-by: Ralf Thielow ralf.thie...@gmail.com
---
Sorry for being late.
I've changed the commit message to explain why we use both
"Staging-Area" and "Index" as a translation for "index", and
skipped the translation of some messages accordingly.
I hope that makes sense.

 po/de.po | 136 +++
 1 file changed, 66 insertions(+), 70 deletions(-)

diff --git a/po/de.po b/po/de.po
index 4317dbc..3b7a000 100644
--- a/po/de.po
+++ b/po/de.po
@@ -657,7 +657,7 @@ msgstr "Lesen des Zwischenspeichers fehlgeschlagen"
 #: merge.c:94 builtin/checkout.c:374 builtin/checkout.c:580
 #: builtin/clone.c:662
 msgid "unable to write new index file"
-msgstr "Konnte neue Staging-Area-Datei nicht schreiben."
+msgstr "Konnte neue Index-Datei nicht schreiben."
 
 #: merge-recursive.c:189
 #, c-format
@@ -913,7 +913,7 @@ msgstr "Konnte Objekt '%s' nicht parsen."
 
 #: merge-recursive.c:2019 builtin/merge.c:667
 msgid "Unable to write index."
-msgstr "Konnte Staging-Area nicht schreiben."
+msgstr "Konnte Index nicht schreiben."
 
 #: notes-utils.c:41
 msgid "Cannot commit uninitialized/unreferenced notes tree"
@@ -1226,7 +1226,7 @@ msgstr 
 #: sequencer.c:321
 #, c-format
 msgid "%s: Unable to write new index file"
-msgstr "%s: Konnte neue Staging-Area-Datei nicht schreiben"
+msgstr "%s: Konnte neue Index-Datei nicht schreiben"
 
 #: sequencer.c:339
 msgid "Could not resolve HEAD commit\n"
 
@@ -1248,7 +1248,7 @@ msgstr "Konnte Eltern-Commit %s nicht parsen\n"
 
 #: sequencer.c:482
 msgid "Your index file is unmerged."
-msgstr "Ihre Staging-Area-Datei ist nicht zusammengeführt."
+msgstr "Ihre Index-Datei ist nicht zusammengeführt."
 
 #: sequencer.c:501
 #, c-format
@@ -1294,12 +1294,12 @@ msgstr "leere Menge von Commits übergeben"
 #: sequencer.c:661
 #, c-format
 msgid "git %s: failed to read the index"
-msgstr "git %s: Fehler beim Lesen der Staging-Area"
+msgstr "git %s: Fehler beim Lesen des Index"
 
 #: sequencer.c:665
 #, c-format
 msgid "git %s: failed to refresh the index"
-msgstr "git %s: Fehler beim Aktualisieren der Staging-Area"
+msgstr "git %s: Fehler beim Aktualisieren des Index"
 
 #: sequencer.c:725
 #, c-format
@@ -2067,7 +2067,7 @@ msgstr 
 
 #: builtin/add.c:194 builtin/rev-parse.c:785
 msgid "Could not read the index"
-msgstr "Konnte die Staging-Area nicht lesen"
+msgstr "Konnte den Index nicht lesen"
 
 #: builtin/add.c:205
 #, c-format
@@ -2145,7 +2145,7 @@ msgstr "gelöschte Pfade im Arbeitsverzeichnis ignorieren (genau wie --no-all)"
 
 #: builtin/add.c:262
 msgid "don't add, only refresh the index"
-msgstr "nichts hinzufügen, nur die Staging-Area aktualisieren"
+msgstr "nichts hinzufügen, nur den Index aktualisieren"
 
 #: builtin/add.c:263
 msgid "just skip files which cannot be added because of errors"
@@ -2188,11 +2188,11 @@ msgstr "Meinten Sie vielleicht 'git add .'?\n"
 #: builtin/add.c:363 builtin/check-ignore.c:172 builtin/clean.c:920
 #: builtin/commit.c:335 builtin/mv.c:130 builtin/reset.c:235 builtin/rm.c:299
 msgid "index file corrupt"
-msgstr "Staging-Area-Datei beschädigt"
+msgstr "Index-Datei beschädigt"
 
 #: builtin/add.c:446 builtin/apply.c:4675 builtin/mv.c:279 builtin/rm.c:431
 msgid "Unable to write new index file"
-msgstr "Konnte neue Staging-Area-Datei nicht schreiben."
+msgstr "Konnte neue Index-Datei nicht schreiben."
 
 #: builtin/apply.c:59
 msgid "git apply [options] [patch...]"
@@ -2396,7 +2396,7 @@ msgstr "Pfad %s wurde umbenannt/gelöscht"
 #: builtin/apply.c:3349 builtin/apply.c:3504
 #, c-format
 msgid "%s: does not exist in index"
-msgstr "%s ist nicht in der Staging-Area"
+msgstr "%s ist nicht im Index"
 
 #: builtin/apply.c:3353 builtin/apply.c:3496 builtin/apply.c:3518
 #, c-format
@@ -2406,7 +2406,7 @@ msgstr "%s: %s"
 #: builtin/apply.c:3358 builtin/apply.c:3512
 #, c-format
 msgid "%s: does not match index"
-msgstr "%s entspricht nicht der Version in der Staging-Area"
+msgstr "%s entspricht nicht der Version im Index"
 
 #: builtin/apply.c:3460
 msgid "removal patch leaves file contents"
@@ -2470,7 +2470,7 @@ msgstr "make_cache_entry für Pfad '%s' fehlgeschlagen"
 #: builtin/apply.c:4049
 #, c-format
 msgid "unable to remove %s from index"
-msgstr "konnte %s nicht aus der Staging-Area entfernen"
+msgstr "konnte %s nicht aus dem Index entfernen"
 
 #: builtin/apply.c:4078
 #, c-format
@@ -2539,7 +2539,7 @@ msgstr "nicht erkannte Eingabe"
 
 #: builtin/apply.c:4405
 msgid "unable to read index file"
-msgstr "Konnte Staging-Area-Datei nicht lesen"
+msgstr "Konnte Index-Datei nicht lesen"
 
 #: builtin/apply.c:4522 builtin/apply.c:4525 builtin/clone.c:92
 #: builtin/fetch.c:92
@@ -2593,8 +2593,7 @@ msgstr 
 #: 

Re: git push keeps writing after server failure

2015-06-12 Thread Jeff King
On Fri, Jun 12, 2015 at 02:12:56PM -0400, Jeff King wrote:

  $ git push --all g...@github.com:spearce/wk.git
  Counting objects: 2752427, done.
  Delta compression using up to 12 threads.
  Compressing objects: 100% (442684/442684), done.
  remote: fatal: pack exceeds maximum allowed size
  Writing objects: 100% (2752427/2752427), 5.28 GiB | 8.86 MiB/s, done.
  Total 2752427 (delta 2225007), reused 2752427 (delta 2225007)
  fatal: The remote end hung up unexpectedly
  fatal: The remote end hung up unexpectedly
  
  Notice GitHub prints "remote: fatal: pack exceeds maximum allowed
  size". That interrupted my "Writing objects" progress meter, and then
  git push just kept going and wrote really really fast (170 MiB/s!)
  until the entire pack was sent.
 
 Sounds like it's writing to a closed fd, then. Which makes sense; I
 think we should hang up the socket after writing the fatal message
 above.
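A quick way to see Jeff's point outside of git: once the reading end of a pipe or socket is truly gone, further writes fail with EPIPE instead of succeeding at high speed. Below is a minimal Python sketch (my own illustration, not git's code) of a sender whose peer hangs up; if `git push` kept writing successfully at 170 MiB/s, something on the other end was evidently still draining the data.

```python
import errno
import os

# Simulate a sender whose peer hangs up: writes succeed while the
# reader exists, then fail with EPIPE once the read end is closed.
r, w = os.pipe()
os.write(w, b"some pack data")   # fine: the "server" could still read this

os.close(r)                      # the server hangs up

try:
    os.write(w, b"more pack data")
    outcome = "write succeeded"
except BrokenPipeError as e:
    outcome = "EPIPE" if e.errno == errno.EPIPE else "other OSError"

os.close(w)
print(outcome)   # EPIPE
```

That the client observed no such error is consistent with a git-aware proxy that keeps reading from the client until EOF, as speculated below.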

For reference, here's the patch implementing the max-size check on the
server. It's on my long list of patches to clean up and send to the
list; I never did this one because of the unpack-objects caveat
mentioned below.

The death happens in index-pack. I think receive-pack should notice that
and bail (closing the socket), but it's possible that it doesn't.

There's also a git-aware proxying layer at GitHub between clients and
git-receive-pack these days. It's possible that it is not properly
hanging up until it sees EOF from the client.

If any of those things are true, then it is a GitHub-specific problem
(which I still want to fix, but it is not something git upstream needs
to care about).

-- >8 --
Date: Tue, 4 Sep 2012 12:01:06 -0400
Subject: [PATCH] receive-pack: allow a maximum input size to be specified

Receive-pack feeds its input to index-pack, which will
happily accept as many bytes as a sender is willing to
provide. Let's allow an arbitrary cutoff point where we will
stop writing bytes to disk (they're still left in the
tmp_pack, but cleaning that can easily be done outside of
receive-pack).

Note that this doesn't handle the git-unpack-objects code
path at all (because GitHub sets transfer.unpacklimit to 1,
so we never unpack pushed objects anyway).

Signed-off-by: Jeff King <p...@peff.net>
---
 builtin/index-pack.c   |  5 +++++
 builtin/receive-pack.c | 14 +++++++++++++-
 t/t5528-push-limits.sh | 27 +++++++++++++++++++++++++++
 3 files changed, 45 insertions(+), 1 deletion(-)
 create mode 100755 t/t5528-push-limits.sh

diff --git a/builtin/index-pack.c b/builtin/index-pack.c
index dd1c5c9..054a144 100644
--- a/builtin/index-pack.c
+++ b/builtin/index-pack.c
@@ -70,6 +70,7 @@ static struct progress *progress;
 static unsigned char input_buffer[4096];
 static unsigned int input_offset, input_len;
 static off_t consumed_bytes;
+static off_t max_input_size;
 static unsigned deepest_delta;
 static git_SHA_CTX input_ctx;
 static uint32_t input_crc32;
@@ -167,6 +168,8 @@ static void use(int bytes)
 	if (signed_add_overflows(consumed_bytes, bytes))
 		die("pack too large for current definition of off_t");
 	consumed_bytes += bytes;
+	if (max_input_size && consumed_bytes > max_input_size)
+		die("pack exceeds maximum allowed size");
 }
 
 static const char *open_pack_file(const char *pack_name)
@@ -1148,6 +1151,8 @@ int cmd_index_pack(int argc, const char **argv, const char *prefix)
 			opts.off32_limit = strtoul(c+1, &c, 0);
 			if (*c || opts.off32_limit & 0x80000000)
 				die("bad %s", arg);
+		} else if (!prefixcmp(arg, "--max-input-size=")) {
+			max_input_size = strtoul(arg + 17, NULL, 10);
 		} else
 			usage(index_pack_usage);
 		continue;
diff --git a/builtin/receive-pack.c b/builtin/receive-pack.c
index 0afb8b2..e64b8d2 100644
--- a/builtin/receive-pack.c
+++ b/builtin/receive-pack.c
@@ -31,6 +31,7 @@ static int transfer_fsck_objects = -1;
 static int receive_unpack_limit = -1;
 static int transfer_unpack_limit = -1;
 static int unpack_limit = 100;
+static off_t max_input_size;
 static int report_status;
 static int use_sideband;
 static int quiet;
@@ -113,6 +114,11 @@ static int receive_pack_config(const char *var, const char *value, void *cb)
return 0;
}
 
+	if (strcmp(var, "receive.maxsize") == 0) {
+   max_input_size = git_config_ulong(var, value);
+   return 0;
+   }
+
return git_default_config(var, value, cb);
 }
 
@@ -832,9 +838,10 @@ static const char *unpack(void)
return NULL;
 		return "unpack-objects abnormal exit";
} else {
-   const char *keeper[7];
+   const char *keeper[8];
int s, status, i = 0;
char keep_arg[256];
+   char max_input_arg[256];
struct 
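The heart of the patch is the pair of lines added to `use()`: index-pack already funnels every consumed byte through that one function, so a single strictly-greater-than check enforces the limit. Here is a small Python model of that accounting (my own simplification, not the C code), where 0 means "no limit", matching the patch:

```python
# Simplified model of index-pack's use(): every byte consumed from the
# network passes through one counter, so one check enforces the limit.
class PackTooLarge(Exception):
    pass

def make_counter(max_input_size=0):
    consumed = 0

    def use(nbytes):
        nonlocal consumed
        consumed += nbytes
        # Mirrors `if (max_input_size && consumed_bytes > max_input_size)`:
        # 0 disables the limit, and landing exactly on it is still allowed.
        if max_input_size and consumed > max_input_size:
            raise PackTooLarge("pack exceeds maximum allowed size")
        return consumed

    return use

use = make_counter(max_input_size=100)
assert use(60) == 60
assert use(40) == 100        # exactly at the limit: still accepted
try:
    use(1)                   # one byte over: the "push" dies here
    hit_limit = False
except PackTooLarge:
    hit_limit = True
print(hit_limit)             # True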

Re: strange result of `git describe` while bisecting

2015-06-12 Thread Philippe De Muyter
On Fri, Jun 12, 2015 at 03:17:40PM +0200, Andreas Schwab wrote:
 Philippe De Muyter <p...@macq.eu> writes:
 
  I am bisecting the kernel tree between v3.17 and v3.18, and 'git describe'
  is used by the kernel compilation process.  Why do I get a version
  v3.17-rc7-1626-ga4b4a2b, that seems outside of [v3.17..v3.18] ?
 
 Because you are testing a side branch that is based on v3.17-rc7.
 
 3.17-rc7 --- 3.17 --- 3.18
         \             /
          \---- * ----/
                ^
 You are here --^
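Andreas's diagram comes down to reachability: `git describe` names a commit after the nearest annotated tag among its *ancestors*, and on the side branch neither v3.17 nor v3.18 is an ancestor. A toy Python sketch of that tag search (my own illustration over a hand-built history, not git's implementation):

```python
from collections import deque

# Hand-built history matching the diagram: a side branch forks from
# v3.17-rc7 and is only merged back at v3.18.
parents = {
    "v3.17-rc7": [],
    "v3.17": ["v3.17-rc7"],
    "side": ["v3.17-rc7"],        # the commit checked out during bisect
    "v3.18": ["v3.17", "side"],   # merge commit
}
tags = {"v3.17-rc7", "v3.17", "v3.18"}

def describe(commit):
    """Return the first tagged commit reachable via parent edges."""
    queue, seen = deque([commit]), {commit}
    while queue:
        c = queue.popleft()
        if c in tags:
            return c
        for p in parents[c]:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return None

print(describe("side"))   # v3.17-rc7: v3.17 and v3.18 are not ancestors
```

The real `git describe` additionally appends the commit count and abbreviated hash of the described commit, which is where the `-1626-ga4b4a2b` suffix in the original report comes from.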

Thank you Andreas

Philippe