please help our site http://schoolsite.pp.ua/ - please open it and view one or two pages

2015-09-25 Thread admin
Good day,
we kindly ask you to take a look at our site,
if it is not too much trouble for you,
http://schoolsite.pp.ua/ - please open it and view one or two
pages,
and on any page, click once on the advertising banner that
interests you most,
this is your way of helping us pay for the hosting of our site,
thank you,
system administrator
ad...@schoolsite.pp.ua
http://schoolsite.pp.ua/

hello,
we would like to ask you to view our school site;
please, if it is not hard for you,
http://schoolsite.pp.ua/ - please open it and view one or two
pages,
and on any page,
click once on the advertising banner that interests you most;
this helps us pay for the hosting of our school site,
thank you
system administrator
ad...@schoolsite.pp.ua
http://schoolsite.pp.ua/

Hello, we ask you to take a look at our site,
if it is not too much trouble for you,
http://schoolsite.pp.ua/ - please open it and view one or two
pages,
and on any page, click once on the advertising banner that
interests you most,
this is your way of helping us pay for the hosting of our site,
thank you
system administrator
ad...@schoolsite.pp.ua
http://schoolsite.pp.ua/ 


[PATCH] submodule-parallel-fetch: make some file local symbols static

2015-09-25 Thread Ramsay Jones

Signed-off-by: Ramsay Jones 
---

Hi Stefan,

When you next re-roll the patches for your
'sb/submodule-parallel-fetch' branch, could you please squash
parts of this into the relevant patches. [which would correspond
to commits a93fb7a ("run-command: add an asynchronous parallel
child processor", 22-09-2015) and 58713c8 ("fetch_populated_submodules:
use new parallel job processing", 22-09-2015).]

Thanks!

Also, you might consider renaming some (file local) functions with
really long names, to something a little shorter, like (say):

  handle_submodule_fetch_start_err -> fetch_start_failure
  handle_submodule_fetch_finish -> fetch_finish

(but, as I have said several times, I'm not good at naming things ... ;-)

Also, you could move the definition of get_next_submodule() to be
above/before fetch_populated_submodules() so that you can remove the
forward declaration.
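
For illustration, a rough sketch of that reordering (made-up,
simplified signatures, not the real submodule.c ones): once the
static helper is defined before its only caller, the forward
declaration can simply go away.

static int get_next_submodule(void)
{
	/* ... would set up the next submodule fetch here ... */
	return 0;	/* 0: nothing left to schedule */
}

static int fetch_populated_submodules(void)
{
	/* defined after get_next_submodule(), so no forward declaration */
	return get_next_submodule();
}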

[Note these issues were spotted by sparse and static-check.pl]

HTH

ATB,
Ramsay Jones


 run-command.c  | 12 ++--
 submodule.c| 12 ++--
 test-run-command.c |  6 +++---
 3 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/run-command.c b/run-command.c
index 528a4fb..6ca0151 100644
--- a/run-command.c
+++ b/run-command.c
@@ -902,9 +902,9 @@ struct parallel_processes {
struct strbuf buffered_output; /* of finished children */
 };
 
-void default_start_failure(void *data,
-  struct child_process *cp,
-  struct strbuf *err)
+static void default_start_failure(void *data,
+ struct child_process *cp,
+ struct strbuf *err)
 {
int i;
struct strbuf sb = STRBUF_INIT;
@@ -915,9 +915,9 @@ void default_start_failure(void *data,
die_errno("Starting a child failed:%s", sb.buf);
 }
 
-void default_return_value(void *data,
- struct child_process *cp,
- int result)
+static void default_return_value(void *data,
+struct child_process *cp,
+int result)
 {
int i;
struct strbuf sb = STRBUF_INIT;
diff --git a/submodule.c b/submodule.c
index f362d6a..d786a76 100644
--- a/submodule.c
+++ b/submodule.c
@@ -620,16 +620,16 @@ struct submodule_parallel_fetch {
 };
 #define SPF_INIT {0, ARGV_ARRAY_INIT, NULL, NULL, 0, 0, 0}
 
-int get_next_submodule(void *data, struct child_process *cp,
-  struct strbuf *err);
+static int get_next_submodule(void *data, struct child_process *cp,
+ struct strbuf *err);
 
-void handle_submodule_fetch_start_err(void *data, struct child_process *cp, struct strbuf *err)
+static void handle_submodule_fetch_start_err(void *data, struct child_process *cp, struct strbuf *err)
 {
struct submodule_parallel_fetch *spf = data;
spf->result = 1;
 }
 
-void handle_submodule_fetch_finish( void *data, struct child_process *cp, int retvalue)
+static void handle_submodule_fetch_finish( void *data, struct child_process *cp, int retvalue)
 {
struct submodule_parallel_fetch *spf = data;
 
@@ -673,8 +673,8 @@ out:
return spf.result;
 }
 
-int get_next_submodule(void *data, struct child_process *cp,
-  struct strbuf *err)
+static int get_next_submodule(void *data, struct child_process *cp,
+ struct strbuf *err)
 {
int ret = 0;
struct submodule_parallel_fetch *spf = data;
diff --git a/test-run-command.c b/test-run-command.c
index 94c6eee..2555791 100644
--- a/test-run-command.c
+++ b/test-run-command.c
@@ -16,9 +16,9 @@
 #include 
 
 static int number_callbacks;
-int parallel_next(void *data,
- struct child_process *cp,
- struct strbuf *err)
+static int parallel_next(void *data,
+struct child_process *cp,
+struct strbuf *err)
 {
struct child_process *d = data;
if (number_callbacks >= 4)
-- 
2.5.0


Re: [PATCH v6b 5/8] branch: drop non-commit error reporting

2015-09-25 Thread Matthieu Moy
Junio C Hamano  writes:

> While I was trying to address your "actually already report", I
> spotted a typo and then found that the early part of the second
> paragraph is a bit hard, so here is what I ended up with.

LGTM.

-- 
Matthieu Moy
http://www-verimag.imag.fr/~moy/


Re: [PATCH] submodule-parallel-fetch: make some file local symbols static

2015-09-25 Thread Jacob Keller
On Fri, Sep 25, 2015 at 8:15 AM, Ramsay Jones
 wrote:
> [Note these issues were spotted by sparse and static-check.pl]

Any chance you got a link to docs about static-check.pl? I've never
heard of this...

Regards,
Jake


Re: [PATCH v6b 5/8] branch: drop non-commit error reporting

2015-09-25 Thread Junio C Hamano
Matthieu Moy  writes:

> Karthik Nayak  writes:
>
>> Remove the error "branch '%s' does not point at a commit" in
>> apppend_ref() which reports branch refs which do not point to
>> commits. Also remove the error "some refs could not be read" in
>> print_ref_list() which is triggered as a consequence of the first
>> error.
>>
>> This seems to be the wrong codepath whose purpose is not to diagnose
>> and report a repository corruption. If we care about such a repository
>> corruption, we should report it from fsck instead.
>
> (We actually already report it from fsck indeed)
>
>> This also helps in a smooth port of branch.c to use ref-filter APIs
>> over the following patches. On the other hand, ref-filter ignores refs
>> which do not point at commits silently.
>
> Seems much better. Thanks,

Yes, it seems that I can just replace 5/8 with this and the
remainder can stay as they are.

While I was trying to address your "actually already report", I
spotted a typo and then found that the early part of the second
paragraph is a bit hard, so here is what I ended up with.

branch: drop non-commit error reporting

Remove the error "branch '%s' does not point at a commit" in
append_ref(), which reports branch refs which do not point to
commits.  Also remove the error "some refs could not be read" in
print_ref_list() which is triggered as a consequence of the first
error.

The purpose of these codepaths is not to diagnose and report a
repository corruption.  If we care about such a corruption, we
should report it from fsck instead, which we already do.

This also helps in a smooth port of branch.c to use ref-filter APIs
over the following patches. On the other hand, ref-filter ignores refs
which do not point at commits silently.

Based-on-patch-by: Jeff King 
Helped-by: Junio C Hamano 
Mentored-by: Christian Couder 
Mentored-by: Matthieu Moy 
Signed-off-by: Karthik Nayak 
Signed-off-by: Junio C Hamano 

Thanks.


Re: [PATCH] t5561: get rid of racy appending to logfile

2015-09-25 Thread Junio C Hamano
Stephan Beyer  writes:

> The definition of log_div() appended information to the web server's
> logfile to make the test more readable. However, log_div() was called
> right after a request is served (which is done by git-http-backend);
> the web server waits for the git-http-backend process to exit before
> it writes to the log file. When the duration between serving a request
> and exiting was long, the log_div() output was written before the last
> request's log, and the test failed. (This duration could become
> especially long for PROFILE=GEN builds.)
>
> To get rid of this behavior, we should not change the logfile at all.
> This commit removes log_div() and its calls. The additional information
> is kept in the test (for readability reasons) but filtered out before
> comparing it to the actual logfile.
>
> Signed-off-by: Stephan Beyer 
> ---
>  Okay Peff, I added the information to the commit message (in my own
>  words). Past tense for the situation before the patch, present tense
>  for the situation after (hope that's right but should not be too
>  important).
>
>  I also used your proposed grep line because it is probably more robust.

Thanks, both.  

I vaguely recall this test mysteriously failing a few times during
the past several years for me.  Thanks for digging to the bottom of
the problem.  Both the diagnosis and fix look very sensible.



Re: [RFC/PATCH v1] Add Travis CI support

2015-09-25 Thread Jeff King
On Thu, Sep 24, 2015 at 05:41:06PM -0700, Junio C Hamano wrote:

> Of course, this can be improved if we start using signed push into
> GitHub.  It is a separate issue in the sense that it would help
> GitHub to make that assurance stronger---those who fetch/clone can
> be assured that the tips of branches are what I pushed, without even
> trusting GitHub.

It's been on my todo list to investigate this further, but I just
haven't gotten around to it. My understanding is that GitHub would need
to store your signed-push certificate somewhere (e.g., in a git tree
that records all of the push certs).

If the point is for clients not to trust GitHub, though, it doesn't
really matter what GitHub does with the cert, as long as it is put
somewhere that clients know to get it.  So I wonder if it would be
helpful to have a microformat that the client would use to look at this.
E.g., it would fetch the cert tree, then confirm that the current ref
values match the latest cert.

-Peff


Re: [PATCH] t5561: get rid of racy appending to logfile

2015-09-25 Thread Junio C Hamano
Stephan Beyer  writes:

>  SubmittingPatches says that when there is consensus the patch has to
>  be resent to Junio and cc'ed to the list. Here it is (although I
>  don't know if there is consensus, but, hey, it's a rather trivial patch,
>  so it should be okay).

Yup.

The patch text matches exactly what I already queued with
Reviewed-by from Peff earlier, which is good.  

Thanks.


Re: [PATCH 20/43] refs-be-files.c: add methods for the ref iterators

2015-09-25 Thread David Turner
On Fri, 2015-09-25 at 13:54 -0700, Junio C Hamano wrote:
> Up to high-teens in this 43 patch series, the changes all looked
> "separate filesystem backend specific part from refs.c to
> refs-be-files.c" without other questionable changes, but I have to
> give up at this step for now, as conflicts between the patch and the
> current codebase are getting a bit too much to manually adjust the
> patch only to make sure there are no funnies other than a straight
> rename of static functions going on.

Unfortunately, as long as there continue to be changes to refs.c, this
will continue to be an issue.   I can rebase, fix the conflicts, and
re-send.

Later, you say

> * Pick 'next', 'jch' and 'pu' as the starting point, attempted to

Do you mean that you merged these branches together, or that you tried
each of the three?  Which would you like me to rebase on?

> We seem to have added a few more iterators in refs.c that would need
> to be also wrapped as methods, so this step would need to be redone.

Will fix in the re-roll.

> Regarding [03/43], it is a straight rename without any content
> change, so you probably could have done "format-patch -M".  But that
> original commit, if I am not mistaken, left an empty ref.c instead
> of removing, which was somewhat funny (and Makefile still expects
> refs.o can be produced from refs.c).
> 
> The other side of the same coin is that [04/43] expects an empty
> refs.c to be in the original; it should be creating a new file
> instead.

This was intentional.  Ronnie Sahlberg's original version of this patch
simply removed refs.c (without changing Makefile), which broke the
build.  I didn't like that.  So instead I simply left an empty file. 

It looks like you would prefer that 03/43 move refs.c and update
Makefile, then have 04/43 create a new file and update Makefile again.
I'll do that instead.




Re: [PATCH 20/43] refs-be-files.c: add methods for the ref iterators

2015-09-25 Thread Junio C Hamano
David Turner  writes:

>> * Pick 'next', 'jch' and 'pu' as the starting point, attempted to
>
> Do you mean that you merged these branches together, or that you tried
> each of the three?

I tried at least these three (and some other intermediate states)
before giving up.


Re: [PATCH 20/43] refs-be-files.c: add methods for the ref iterators

2015-09-25 Thread Junio C Hamano
Up to high-teens in this 43 patch series, the changes all looked
"separate filesystem backend specific part from refs.c to
refs-be-files.c" without other questionable changes, but I have to
give up at this step for now, as conflicts between the patch and the
current codebase are getting a bit too much to manually adjust the
patch only to make sure there are no funnies other than a straight
rename of static functions going on.

We seem to have added a few more iterators in refs.c that would need
to be also wrapped as methods, so this step would need to be redone.

Regarding [03/43], it is a straight rename without any content
change, so you probably could have done "format-patch -M".  But that
original commit, if I am not mistaken, left an empty ref.c instead
of removing, which was somewhat funny (and Makefile still expects
refs.o can be produced from refs.c).

The other side of the same coin is that [04/43] expects an empty
refs.c to be in the original; it should be creating a new file
instead.

Just for future reference to others, what I did was:

 * looked at the gzipped patch and made sure the preimage of refs.c
   and the postimage of refs-be-files.c were identical.

 * started from the tip of current master, merged the topics
   mentioned in the message with the gzipped patch to it, and called
   the result $BASE0.

 * applied 01/43 and 02/43 on $BASE0.

 * then manually moved refs.c to refs-be-files.c and told git about
   them, and applied changes to Makefile in 03/43, and committed the
   result.

 * adjusted 04/43 to expect refs.c to be missing and applied it.

 * continued to apply from 05/43 thru until I get a conflict that
   I feel uncomfortable to adjust myself.

 * "git format-patch --stdout -M $BASE0.. >./+dt0".

 * Pick 'next', 'jch' and 'pu' as the starting point, attempted to
   run "git am ./+dt0" (with success).  At least, by adjusting for
   03/43 and 04/43 and recording 03/43 as a rename in "./+dt0", the
   early parts of these attempts were survivable ;-).  Then
   attempted to apply 20/43 on top of the result, all of which
   unfortunately left a conflict that I feel uncomfortable to adjust
   myself.

Thanks.


Re: [PATCH 0/2] Another squash on run-command: add an asynchronous parallel child processor

2015-09-25 Thread Stefan Beller
On Thu, Sep 24, 2015 at 6:08 PM, Junio C Hamano  wrote:
> Stefan Beller  writes:
>
>>  * If you do not die() in start_failure_fn or return_value_fn, you
>>don't want to write to stderr directly as you would destroy the fine
>>ordering of the processes output. So make the err strbuf available in
>>both these functions, and make sure the strbuf is appended to the
>>buffered output in both cases.
>
> Another thing I noticed after re-reading the above is that we shared
> the thinking that dying in these is _the_ normal thing to do and
> continuing is an advanced and/or weird setting.
>
> And I think it is wrong.  Suppose after spawning 15 tasks and while
> they are still running, you start the 16th one and it fails to start.
> If your start-failure called die() to kill the controller, what
> happens to the 15 tasks that are already running?
>
> I think two sensible choices that start-failure and return-value can
> make are
>
>  (1) This one task failed, but that is OK.  Please let the other
>  tasks run [*1*].
>
>  (2) There is something seriously wrong with the whole world and I
>  declare an emergency.  Please kill the other ones and exit.

  (3) There is something wrong, such that I cannot finish my
  job, but I know the other 15 processes help towards the goal,
  so I want to let them live on until they are done. E.g: fetch submodules
  may want to take this strategy if it fails to start another sub
process fetching.

By having a return value indicating which strategy you want to pursue here,
we're making the design choice to have everything done monolithically
inside the pp machinery.

We could also offer more access to the pp machinery and an implementation for
(2) might look like this:

static void fictitious_start_failure(void *data,
				     void *pp,
				     struct child_process *cp,
				     struct strbuf *err)
{
	struct mydata *m = data;

	if (m->failstrategy == 1)
		; /* nothing here */
	else if (m->failstrategy == 2)
		killall_children(pp);
	else if (m->failstrategy == 3) {
		m->stop_scheduling_new_tasks = 1;
		redirect_children_to_dev_null(pp);
	} else
		...
}

By having the pointer to the pp struct passed around, we allow
for adding new callback functions to be added later to the
pp machinery, which may not be expressed via a return code.

>
> Dying in these callbacks achieves neither.  Perhaps make these
> two functions return bool (or enum if you already know a third
> sensible option, but otherwise bool is fine and the person who
> discovers the need for the third will turn it into enum) to signal
> which one of these two behaviours it wants?
>
> And the default handlers should stop dying, of course.
>
>
> [Footnote]
>
> *1* Because start-failure gets pp, it can even leave a note in it to
> ask the next invocation of get-next to retry it if it chooses
> to.  At this point in the design cycle, all we need to do is to
> make sure that kind of advanced usage is possible with this
> parallel-run-command API.


Re: [PATCH 0/2] Another squash on run-command: add an asynchronous parallel child processor

2015-09-25 Thread Stefan Beller
On Fri, Sep 25, 2015 at 12:04 PM, Junio C Hamano  wrote:
> Stefan Beller  writes:
>
>>> I think two sensible choices that start-failure and return-value can
>>> make are
>>>
>>>  (1) This one task failed, but that is OK.  Please let the other
>>>  tasks run [*1*].
>>>
>>>  (2) There is something seriously wrong with the whole world and I
>>>  declare an emergency.  Please kill the other ones and exit.
>>
>>   (3) There is something wrong, such that I cannot finish my
>>   job, but I know the other 15 processes help towards the goal,
>>   so I want to let them live on until they are done. E.g: fetch 
>> submodules
>>   may want to take this strategy if it fails to start another sub
>> process fetching.
>
> How is that different from (1)?  Do you mean "let other ones that
> are already running continue, but do not spawn any new ones?"

Right. "Phasing out" before gracefully dying. Well to achieve this,
we don't need more options here. (Just set a flag to not return more
work in get_next_task).
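
A minimal sketch of that flag-based phase-out (hypothetical names,
and assuming the series' convention that get_next_task returns 0
once there is nothing left to start):

struct child_process;
struct strbuf;

struct spf_sketch {
	int stop_scheduling_new_tasks;	/* set when a start fails */
	/* ... submodules still to be fetched ... */
};

static int get_next_task_sketch(void *data, struct child_process *cp,
				struct strbuf *err)
{
	struct spf_sketch *spf = data;

	if (spf->stop_scheduling_new_tasks)
		return 0;	/* no more work; running children finish */
	/* ... otherwise fill in *cp for the next fetch ... */
	return 1;
}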

>
>> We could also offer more access to the pp machinery and an implementation for
>> (2) might look like this:
>> ...
>> By having the pointer to the pp struct passed around, we allow
>> for adding new callback functions to be added later to the
>> pp machinery, which may not be expressed via a return code.
>
> What you are suggesting would lead to the same "different smart
> participants making decisions locally, so you need to run around and
> follow all the detailed codepaths to understand what is going on"
> design.
>
> I was hoping that we have already passed discussing that stage.
>
> The whole point of that "SQUASH???" commit was to correct the design
> of the overall structure so that we make the central dispatcher that
> uses bunch of "dumb" helpers (that do not make policy decisions
> locally on their own) as the single place you need to read in order
> to understand the logic.

And I thought the discussion there was focused on the internals of the
dispatcher. (You need to only read one place inside _that_ file now)

This discussion would have focused on the public API side (What do we
want to hide, how do we want to hide it?), which would be different enough
for me to warrant a new discussion.

But I guess I'll just go with your suggestion for now to have the return value
indicate a full stop or keep going.


Re: [RFC/PATCH v1] Add Travis CI support

2015-09-25 Thread Jeff King
On Fri, Sep 25, 2015 at 11:29:31AM -0700, Junio C Hamano wrote:

> When I finally complain to the hosting site that it is deliberately
> rejecting the fix that would rob them of the illicit revenue source, it
> does not help the hosting site to keep copies of push certificates
> when it wants to refute such a complaint.  "We publish all push
> certificates and there is no record that gitster already tried to
> fix the issue" has to be taken with faith in that scenario.

Right. Your earlier examples showed non-repudiation by the original
signer (the hosting site says "you definite pushed this to us, and here
is the signature to prove it, so you cannot deny it"). But in this
example, it is going the other way: the pusher wants the hosting site to
admit to an action.

To do that, the hosting site would have to re-sign the push cert to say
"we got this, it is published", and return the receipt to the original
pusher, who can then use it as proof of the event. Or alternatively, it
could be signed by a third-party notary.

I don't think it is all that interesting an avenue to pursue, though. If
you say "I have this update and the hosting site is not providing it to
people", people are not that interested in whether the hosting site is
being laggy, malicious, or whatever. They are interested in getting your
update. :)

So the more general problem is "I want to make sure I have Junio's
latest push, and I do not want to trust anything else". For that, you
could publish expiring certs (so you can fool me for up to, say, a week,
but after that I consider the old certs to be garbage either way). Or
you could do something clever with a quorum (e.g., N of K hosting sites
say there is no update, so there probably isn't one).

But I think all of that is outside of git's scope. Git provides the
signed ref-state in the form of a push cert. Since it's a small-ish blob
of data, you can use any external mechanism you want to decide on the
correct value of it.

> >  So I wonder if it would be
> > helpful to have a microformat that the client would use to look at this.
> > E.g., it would fetch the cert tree, then confirm that the current ref
> > values match the latest cert.
> 
> Yeah, that is one possibility.  Just a single flat file that
> concatenates all the push cert in the received order would do as an
> export format, too ;-)

I agree that's a more logical format, in a sense; it really is a linear
log. It's just that the receive-pack code already creates a blob for us,
so it's cheap to reference that in tree (and then fetching it is cheap,
too). IOW, git is much better at adding files to trees than it is at
appending to files. :)

-Peff


Re: [PATCH 0/2] Another squash on run-command: add an asynchronous parallel child processor

2015-09-25 Thread Junio C Hamano
Stefan Beller  writes:

>> I think two sensible choices that start-failure and return-value can
>> make are
>>
>>  (1) This one task failed, but that is OK.  Please let the other
>>  tasks run [*1*].
>>
>>  (2) There is something seriously wrong with the whole world and I
>>  declare an emergency.  Please kill the other ones and exit.
>
>   (3) There is something wrong, such that I cannot finish my
>   job, but I know the other 15 processes help towards the goal,
>   so I want to let them live on until they are done. E.g: fetch submodules
>   may want to take this strategy if it fails to start another sub
> process fetching.

How is that different from (1)?  Do you mean "let other ones that
are already running continue, but do not spawn any new ones?"

> We could also offer more access to the pp machinery and an implementation for
> (2) might look like this:
> ...
> By having the pointer to the pp struct passed around, we allow
> for adding new callback functions to be added later to the
> pp machinery, which may not be expressed via a return code.

What you are suggesting would lead to the same "different smart
participants making decisions locally, so you need to run around and
follow all the detailed codepaths to understand what is going on"
design.

I was hoping that we have already passed discussing that stage.

The whole point of that "SQUASH???" commit was to correct the design
of the overall structure so that we make the central dispatcher that
uses bunch of "dumb" helpers (that do not make policy decisions
locally on their own) as the single place you need to read in order
to understand the logic.


Re: [PATCH 0/2] Another squash on run-command: add an asynchronous parallel child processor

2015-09-25 Thread Junio C Hamano
Stefan Beller  writes:

> We could also offer more access to the pp machinery and an implementation for
> (2) might look like this:
>
> static void fictitious_start_failure(void *data,
> 				     void *pp,
> 				     struct child_process *cp,
> 				     struct strbuf *err)
> {
> 	struct mydata *m = data;
>
> 	if (m->failstrategy == 1)
> 		; /* nothing here */
> 	else if (m->failstrategy == 2)
> 		killall_children(pp);

That looks nice and clean in theory, but I highly doubt that it is a
good idea to have killall_children(pp) in a client code of the API
like this, especially if you are envisioning that that function
merely goes through the list of children and does kill on the
processes.  If killall_children(pp) is supplied by the API, it makes
things less insane (now it can know and keep up with the evolution
of the API implementation detail, such as how buffered output are
kept and such), but still such an API constrains the structure of
the overall scheduling loop and the helper functions that live on
the API side.  You need to make absolutely sure that calling
killall_children() is something that can be sanely done from inside
start_failure() callback, for example.

If you signal the "emergency" with a return value, the callchain on
the API side can choose to do the killing at a place it knows is
safe to do so in a more controlled way.  For example, the main loop
of the API side IIRC (I do not bother checking out 'pu' to read it
as my working tree is busy on another topic right now) is

while (1) {
for (cnt = 0; cnt < 4 && we still have slots; cnt++)
start_one();

collect_output();
drain_output_for_foreground();
wait_and_flush();
}

Imagine that you have 0 or more processes running and start another
iteration of this loop.  The first task started by start_one() fails
and the above fictitious_start_failure() calls killall_children()
itself.  What happens to the other three iterations of the inner
loop?  After existing ones are killed, it adds three more?

And to prevent that from happening, you also need to tell your
fictitious_next_task() that no more processes are desired.  The
client of the API is forced to coordinate across its multiple
callback functions.

And you did that to gain what?  One major thing I can think of is
that that way, the main scheduling loop does not have to know why
after attempting to spawn but failing, fictitious_next_task()
started saying "no more tasks".  For somebody who is coming from
"The main loop is a dumb bullet-list of things to do.  All smarts
are inside the callee", that might appear to be a good thing.

But I do not think it is a good thing at all.  That forces the
reader of the code to not just follow the API layer but even down to
its individual clients of the API to understand what is going on.

If you propagate the return from start_failure() callback upwards,
then the main scheduler would *know* that the client application
does not want any more new tasks, and it does want to abort even the
running tasks, so it will not call fictitious_next_task() in the
first place after the client application declares an "emergency
exit".

And that would make the overall logic a lot easier to follow.
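
To make that shape concrete, a rough sketch (illustrative types and
names only, not the code queued in 'pu'): the callback merely
reports its decision, and the central loop is the one place that
acts on it.

enum task_failure_action {
	TASK_CONTINUE,	/* (1) this one failed, keep the rest going */
	TASK_ABORT	/* (2) emergency: kill the others and exit */
};

struct child_process;
struct strbuf;

typedef enum task_failure_action (*start_failure_fn)(void *data,
						     struct child_process *cp,
						     struct strbuf *err);

/*
 * In the scheduler, roughly:
 *
 *	if starting a child failed &&
 *	    start_failure(data, cp, err) == TASK_ABORT:
 *		stop calling get_next_task(),
 *		kill and reap the children still running,
 *		flush their buffered output, and return;
 *
 * so no client callback ever reaches into the pp machinery itself.
 */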












Re: [PATCH v3 3/5] enter_repo: allow .git files in strict mode

2015-09-25 Thread Junio C Hamano
Nguyễn Thái Ngọc Duy   writes:

> Strict mode is about not guessing where .git is. If the user points to a
> .git file, we know exactly where the target .git dir will be.
>
> This is needed even in local clone case because transport.c code uses
> upload-pack for fetching remote refs.
>
> Signed-off-by: Nguyễn Thái Ngọc Duy 
> ---
>  path.c | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/path.c b/path.c
> index 7340e11..32d4ca6 100644
> --- a/path.c
> +++ b/path.c
> @@ -438,8 +438,13 @@ const char *enter_repo(const char *path, int strict)
>   return NULL;
>   path = validated_path;
>   }
> - else if (chdir(path))
> - return NULL;
> + else {
> + const char *gitfile = read_gitfile(used_path);

At this point, used_path[] has not been touched since this function
was called.  What file are we reading from?

Is that just a typo of used_path?  Do we lack tests that cover this
codepath?

Thanks.

> + if (gitfile)
> + path = gitfile;
> + if (chdir(path))
> + return NULL;
> + }
>  
>   if (is_git_directory(".")) {
>   set_git_dir(".");



Re: [PATCH] branch: handle errors on setting tracking branches

2015-09-25 Thread Junio C Hamano
Jeff King  writes:

> I count 4 callers in the current code, and none of them currently looks
> at the return value. Your patch updates setup_tracking() to propagate
> the error, which is good. But that is called from exactly one place,
> create_branch(), which also ignores the outcome. :-/
>
> I don't think we want to die() when the config fails. That might be the
> right thing for "branch --set-upstream-to", but probably not for
> "checkout -b". I think we need to look at each call site and see whether
> we should be propagating the error back. With the hope that we would
> ultimately affect the exit code of the command.
>
> In the case of branch creation, we are in a two-step procedure: create
> the branch, then set up its tracking config. We _could_ rollback the
> first step when the second step fails, but that is probably not worth
> the complication.
>
>> Actually, there are quite a few places where we don't check the
>> return values of git_config_set and related functions. I didn't
>> go through them yet, but might do so if you deem this to be of
>> value.
>
> I think more error-checking is better than less. But I do think it's
> worth plumbing through the error codes in each case.
>
> It's tempting to just die() when the operation fails, but as discussed
> above, that's not always the most appropriate thing.
>
>>  branch.c | 24 
>>  branch.h |  3 ++-
>>  2 files changed, 18 insertions(+), 9 deletions(-)
>
> The patch itself looks OK to me from a cursory read.

I agree with everything you said in the analysis, including that
this patch is a good first step in the right direction, but at the
same time it is a glorified no-op whose only external effect is to
make the error diagnosis a bit noisier.  A good first step in the
right direction with stride length of zero ;-)



Re: [RFC/PATCH v1] Add Travis CI support

2015-09-25 Thread Junio C Hamano
Luke Diamand  writes:

> From past experience, if it's configured to email people when things
> break, sooner or later it will email the wrong people, probably once
> every few seconds over a weekend.
>
> Automated testing is a Good Thing, but it's still software, so needs
> maintenance or it will break.

That does sound like a valid concern (thanks for education---we
should all learn from others' past experience).  Unless it is just a
"set up and forget" thing, I do not think I'd want to be in charge
of it.

Thanks.





Re: [PATCH 1/2] SQUASH???

2015-09-25 Thread Stefan Beller
On Thu, Sep 24, 2015 at 5:49 PM, Junio C Hamano  wrote:
> Stefan Beller  writes:
>
>>  * If you do not die() in start_failure_fn or return_value_fn, you
>>don't want to write to stderr directly as you would destroy the fine
>>ordering of the processes output. So make the err strbuf available in
>>both these functions, and make sure the strbuf is appended to the
>>buffered output in both cases
>
> I think that is a sensible change.  Don't we want the same for the
> other failure handler, though.  Capture any message from it and
> append it to the output of the process that just finished, or
> something?
>
> By the way, I understand that these two are solely for early review
> and I'll be getting them as either new patches or part of updated
> patches in the next reroll (i.e. you are not expecting me to split
> these apart and do "rebase -i" for you to the last-posted version).
> Asking only to make sure we are on the same wavelength.

Sure. I just wanted to point out the details instead of resending the series.
I'll do a resend later today, hoping to get all issues addressed.

Thanks,
Stefan


>
> Thanks.
>
>>
>> Signed-off-by: Junio C Hamano 
>> Signed-off-by: Stefan Beller 
>> ---
>>  run-command.c | 43 ++-
>>  run-command.h |  1 +
>>  2 files changed, 31 insertions(+), 13 deletions(-)
>>
>> diff --git a/run-command.c b/run-command.c
>> index 494e1f8..0d22291 100644
>> --- a/run-command.c
>> +++ b/run-command.c
>> @@ -907,6 +907,7 @@ void default_start_failure(void *data,
>>
>>  void default_return_value(void *data,
>> struct child_process *cp,
>> +   struct strbuf *err,
>> int result)
>>  {
>>   int i;
>> @@ -977,7 +978,7 @@ static void set_nonblocking(int fd)
>>   "output will be degraded");
>>  }
>>
>> -/* returns 1 if a process was started, 0 otherwise */
>> +/* return 0 if get_next_task() ran out of things to do, non-zero otherwise */
>>  static int pp_start_one(struct parallel_processes *pp)
>>  {
>>   int i;
>> @@ -991,26 +992,30 @@ static int pp_start_one(struct parallel_processes *pp)
>>   if (!pp->get_next_task(pp->data,
>>  &pp->children[i].process,
>>  &pp->children[i].err))
>> - return 1;
>> + return 0;
>>
>> - if (start_command(&pp->children[i].process))
>> + if (start_command(&pp->children[i].process)) {
>>   pp->start_failure(pp->data,
>> &pp->children[i].process,
>> &pp->children[i].err);
>> + strbuf_addbuf(&pp->buffered_output, &pp->children[i].err);
>> + strbuf_reset(&pp->children[i].err);
>> + return -1;
>> + }
>>
>>   set_nonblocking(pp->children[i].process.err);
>>
>>   pp->nr_processes++;
>>   pp->children[i].in_use = 1;
>>   pp->pfd[i].fd = pp->children[i].process.err;
>> - return 0;
>> + return 1;
>>  }
>>
>> -static void pp_buffer_stderr(struct parallel_processes *pp)
>> +static void pp_buffer_stderr(struct parallel_processes *pp, int output_timeout)
>>  {
>>   int i;
>>
>> - while ((i = poll(pp->pfd, pp->max_processes, 100)) < 0) {
>> + while ((i = poll(pp->pfd, pp->max_processes, output_timeout)) < 0) {
>>   if (errno == EINTR)
>>   continue;
>>   pp_cleanup(pp);
>> @@ -1069,7 +1074,8 @@ static void pp_collect_finished(struct parallel_processes *pp)
>>   error("waitpid is confused (%s)",
>> pp->children[i].process.argv[0]);
>>
>> - pp->return_value(pp->data, &pp->children[i].process, code);
>> + pp->return_value(pp->data, &pp->children[i].process,
>> +  &pp->children[i].err, code);
>>
>>   argv_array_clear(&pp->children[i].process.args);
>>   argv_array_clear(&pp->children[i].process.env_array);
>> @@ -,15 +1117,26 @@ int run_processes_parallel(int n, void *data,
>>  return_value_fn return_value)
>>  {
>>   struct parallel_processes pp;
>> - pp_init(&pp, n, data, get_next_task, start_failure, return_value);
>>
>> + pp_init(&pp, n, data, get_next_task, start_failure, return_value);
>>   while (1) {
>> - while (pp.nr_processes < pp.max_processes &&
>> -!pp_start_one(&pp))
>> - ; /* nothing */
>> - if (!pp.nr_processes)
>> + int no_more_task, cnt;
>> + int output_timeout = 100;
>> + int spawn_cap = 4;
>> +
>> + for (cnt = spawn_cap, no_more_task = 0;
>> +  cnt && pp.nr_processes < pp.max_processes;
>> +  cnt--) {
>> + if (!pp_start_one(&pp)) {
>> + no_more_task = 1;
>> +

Re: [RFC/PATCH v1] Add Travis CI support

2015-09-25 Thread Junio C Hamano
Jeff King  writes:

> If the point is for clients not to trust GitHub, though, it doesn't
> really matter what GitHub does with the cert, as long as it is put
> somewhere that clients know to get it.

Correct.  A spiffy Web interface that says "Click this button and we
show you the output of GPG signature verification" would not help.
The push certificate is all about allowing third-parties to conduct
an independent audit, so anything the hosting site computes using
the certificates does not add value, unless the certificates
themselves are exported for such an independent audit.

If somebody found a change to "git push" that makes it pick the
user's wallet and sends a few coins every time it talks to the
hosting site, the hosting site can say it is not their doing by
showing that the tip of the commit that contains such a change came
from me, and it is not their evil doing.  Push certificates help the
hosting site prove their innocence, and those who do not trust the
site can still be convinced by the claim.

There is one scenario that signed push would not help very much,
though.  The hosting site can deny that it ever received a push,
and the pusher cannot easily prove otherwise.

Following such an incident (perhaps the evil change came as a side
effect of an innocuous-looking patch), I would push a commit that
fixes such an issue out to the hosting site (with signed commit).
But if the hosting site deliberately keeps the tip of the branch
unmodified (e.g. you can appear to accept the push to the pusher,
without updating what is served to the general public), there will
be more people who will fetch from the hosting site to contaminate
their copy of git and the damage will spread in the meantime.

When I finally complain to the hosting site that it is deliberately
rejecting the fix that would rob them of the illicit revenue source, it
does not help the hosting site to keep copies of push certificates
when it wants to refute such a complaint.  "We publish all push
certificates and there is no record that gitster already tried to
fix the issue" has to be taken with faith in that scenario.

So push certificate is not perfect.  But it does protect hosting
sites and projects hosted on them.

>  So I wonder if it would be
> helpful to have a microformat that the client would use to look at this.
> E.g., it would fetch the cert tree, then confirm that the current ref
> values match the latest cert.

Yeah, that is one possibility.  Just a single flat file that
concatenates all the push cert in the received order would do as an
export format, too ;-)


Re: [PATCH v6b 5/8] branch: drop non-commit error reporting

2015-09-25 Thread Karthik Nayak
On Fri, Sep 25, 2015 at 9:30 PM, Junio C Hamano  wrote:
> Matthieu Moy  writes:
>
>> Karthik Nayak  writes:
>>
>>> Remove the error "branch '%s' does not point at a commit" in
>>> apppend_ref() which reports branch refs which do not point to
>>> commits. Also remove the error "some refs could not be read" in
>>> print_ref_list() which is triggered as a consequence of the first
>>> error.
>>>
>>> This seems to be the wrong codepath whose purpose is not to diagnose
>>> and report a repository corruption. If we care about such a repository
>>> corruption, we should report it from fsck instead.
>>
>> (We actually already report it from fsck indeed)
>>
>>> This also helps in a smooth port of branch.c to use ref-filter APIs
>>> over the following patches. On the other hand, ref-filter ignores refs
>>> which do not point at commits silently.
>>
>> Seems much better. Thanks,
>
> Yes, it seems that I can just replace 5/8 with this and the
> remainder can stay as they are.
>
> While I was trying to address your "actually already report", I
> spotted a typo and then found that the early part of the second
> paragraph is a bit hard, so here is what I ended up with.
>
> branch: drop non-commit error reporting
>
> Remove the error "branch '%s' does not point at a commit" in
> append_ref(), which reports branch refs which do not point to
> commits.  Also remove the error "some refs could not be read" in
> print_ref_list() which is triggered as a consequence of the first
> error.
>
> The purpose of these codepaths is not to diagnose and report a
> repository corruption.  If we care about such a corruption, we
> should report it from fsck instead, which we already do.
>
> This also helps in a smooth port of branch.c to use ref-filter APIs
> over the following patches. On the other hand, ref-filter ignores refs
> which do not point at commits silently.
>
> Based-on-patch-by: Jeff King 
> Helped-by: Junio C Hamano 
> Mentored-by: Christian Couder 
> Mentored-by: Matthieu Moy 
> Signed-off-by: Karthik Nayak 
> Signed-off-by: Junio C Hamano 
>
> Thanks.

Looks good to me, thanks :)

-- 
Regards,
Karthik Nayak


Re: [PATCH 1/2] SQUASH???

2015-09-25 Thread Junio C Hamano
Stefan Beller  writes:

> Sure. I just wanted to point out the details instead of resending the series.
> I'll do a resend later today, hoping to get all issues addressed.

Thanks.  For something this small, unless there are many small
pieces that need to be picked apart, I do not terribly mind to do
the squashing myself (after all that is what I do every day, prepare
a squash while queuing, do the integration testing and send the
squash out to make sure the original author agrees with it).

I just wanted to make sure that we won't be in a situation where
I am waiting forever expecting you to reroll while you are expecting
me to do the squashing, resulting in no progress getting made until
either one of us starts wondering what is going on.


broken racy detection and performance issues with nanosecond file times

2015-09-25 Thread Karsten Blees
Hi there,

I think I found a few nasty problems with racy detection, as well as
performance issues when using git implementations with different file
time resolutions on the same repository (e.g. git compiled with and
without USE_NSEC, libgit2 compiled with and without USE_NSEC, JGit
executed in different Java implementations...).

Let me start by listing relevant file time gotchas (skip this if it
sounds too familiar) before diving into problem descriptions. Some
ideas for potential solutions are at the end.


Notable file time facts:


The st_ctime discrepancy:
* stat.st_ctime means "change time" (of file metadata) on POSIX
  systems and "creation time" on Windows
* While some file systems may track all four time stamps (mtime,
  atime, change time and creation time), there are no public OS APIs
  to obtain creation time on POSIX / change time on Windows.

Linux:
* In-core file times may not be properly rounded to on-disk
  precision, causing spurious file time changes when the cache is
  refreshed from disk. This was fixed for typical Unix file systems
  in kernel 2.6.11. The fix for CEPH, CIFS, NTFS, UFS and FUSE will
  be in kernel 4.3. There's no fix for FAT-based file systems yet.
* Maximum file time precision is 1 ns (or 1 s with really old glibc).

Windows:
* Maximum file time precision is 100 ns.

Java <= 6:
* Only exposes mtime in milliseconds (via File.getLastModifiedTime).

Java >= 7:
* Only exposes mtime, atime and creation time, no change time (see
  java.nio.file.attribute.BasicFileAttributes).
* Maximum file time precision is implementation specific (OpenJDK:
  1 microsecond on both Unix [1] and Windows [2]).
* On platforms or file systems that don't support creation time,
  BasicFileAttribtes.creationTime() is implementation specific
  (OpenJDK returns mtime instead). There's no public API to detect
  whether creation time is supported or "emulated" in some way.

Git Options:
* NO_NSEC (git only): compile-time option that disables recording of
  nanoseconds in the index, implies USE_NSEC=false.
* USE_NSEC (git and libgit2 with [3]): compile-time option that
  enables nanosecond comparison in both up-to-date and racy checks.
* core.checkStat=minimal (git, libgit2, JGit): config-option that
  disables nanosecond comparison in up-to-date checks, but not in
  racy checks.

JGit:
* Only uses mtime, rounded to milliseconds. While there is a
  DirCacheEntry.setCreationTime() [4] to set the index entry's ctime
  field, AFAICT its not used anywhere.
* Does not compare nanoseconds if the cached value recorded in the
  index is 0, to prevent performance issues with NO_NSEC git
  implementations [5].


Problem 1: Failure to detect racy files (without USE_NSEC)
==

Git may not detect racy changes when 'update-index' runs in parallel
to work tree updates.

Consider this (where timestamps are t.):

 t0.0$ echo "foo" > file1
 t0.1$ git update-index file1 &  # runs in background
 t0.2$ # update-index records stats and sha1 of file1 in new index
 t0.3$ echo "bar" > file1
 $ # update-index writes other index entries
 t1.0$ # update-index finishes (sets mtime of the new index to t1.0!)
 t1.1$ git status # doesn't detect that file1 has changed

The problem here is that racy checks in 'git status' compare against
the new index file's mtime (t1.0), which may be newer than the last
change of file1.
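
For reference, a simplified sketch of that racy check (modeled
loosely on read-cache.c, seconds only; the real code compares the
full stat data):

#include <time.h>

struct cache_entry_sketch {
	time_t ce_mtime;	/* file mtime recorded in the index entry */
};

/*
 * An entry is "racy" when its recorded mtime is not older than the
 * index file itself; stat data alone cannot prove it unchanged, so
 * the content must be re-hashed.
 */
static int is_racy_timestamp_sketch(time_t index_mtime,
				    const struct cache_entry_sketch *ce)
{
	return ce->ce_mtime >= index_mtime;
}

In the sequence above, index_mtime is t1.0 (when update-index
finished writing the new index), while file1's recorded mtime is
still in t0, so the entry is not treated as racy and the change is
missed.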


Problem 2: Failure to detect racy files (mixed USE_NSEC)


Git may fail to detect racy conditions if file times in .git/index
have been recorded by another git implementation with better file
time resolution.

Consider the following sequence:

 t0.0$ echo "foo" > file1
 t0.1$ use-nsec-git update-index file1
 t0.2$ echo "bar" > file1
 $ sleep 1
 t1.0$ touch file2
 t1.1$ use-nsec-git status # rewrites index, to store file2 change
 t1.2$ git status # doesn't detect that file1 has changed

The problem here is that the first, nsec-enabled 'git status' does
not consider file1 racy (with nanosecond precision, the file is dirty
already (t0.0 != t0.2), so no racy-checks are performed). Thus, it
will not squash the size field (as a second-precision-git would).
However, it will rewrite the index to capture the status change of
file2, and thus create a new index file with mtime = t1.1. Similar
to problem 1, subsequent 'git status' with second-precision has no
way to detect that file1 has changed.
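
(The "squash the size field" above refers to the trick sketched
below with simplified types: when writing the index, an entry that
is racily clean but verified unchanged gets its cached size zeroed,
so a later stat-only comparison can never declare it clean by
accident.)

#include <sys/types.h>
#include <time.h>

struct index_entry_sketch {
	time_t ce_mtime;	/* recorded file mtime */
	off_t ce_size;		/* recorded file size */
};

static void smudge_racily_clean_sketch(struct index_entry_sketch *ce,
				       time_t index_mtime)
{
	if (ce->ce_mtime >= index_mtime)
		ce->ce_size = 0;	/* force a content check next time */
}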

This problem would not be limited to USE_NSEC-enabled/disabled git,
it occurs whenever different file time resolutions are at play, e.g.:
 * second-based git vs. millisecond-based JGit
 * millisecond-based JGit vs. nanosecond-enabled git
 * GIT_WORK_TREE on ext2 (1 s) and GIT_DIR on ext4 (1 ns)
 * JGit executed by different Java implementations (with different
   file time resolutions)


Problem 3: Failure to detect racy files with core.checkStat=minimal

Re: [PATCH 66/68] fsck: use for_each_loose_file_in_objdir

2015-09-25 Thread Jeff King
On Thu, Sep 24, 2015 at 05:08:32PM -0400, Jeff King wrote:

> +static int fsck_subdir(int nr, const char *path, void *progress)
> +{
> + display_progress(progress, nr + 1);
> + return 0;
> +}
> +
>  static void fsck_object_dir(const char *path)
>  {
> - int i;
>   struct progress *progress = NULL;
>  
>   if (verbose)
> @@ -501,12 +481,9 @@ static void fsck_object_dir(const char *path)
>  
>   if (show_progress)
> progress = start_progress(_("Checking object directories"), 256);
> - for (i = 0; i < 256; i++) {
> - static char dir[4096];
> - sprintf(dir, "%s/%02x", path, i);
> - fsck_dir(i, dir);
> - display_progress(progress, i+1);
> - }
> +
> + for_each_loose_file_in_objdir(path, fsck_loose, fsck_cruft, fsck_subdir,
> +   progress);
>   stop_progress(&progress);

I happened to be running git-fsck today and noticed that it finished
with the progress bar still reading 94%. The problem is that we update
the progress when we finish a subdir, but of course we do not
necessarily have all 256 subdirs, and the for_each_loose code only
triggers our callback for ones that exist.

So we need this on top:

diff --git a/builtin/fsck.c b/builtin/fsck.c
index 2fe6a31..d50efd5 100644
--- a/builtin/fsck.c
+++ b/builtin/fsck.c
@@ -484,6 +484,7 @@ static void fsck_object_dir(const char *path)
 
for_each_loose_file_in_objdir(path, fsck_loose, fsck_cruft, fsck_subdir,
  progress);
+   display_progress(progress, 256);
stop_progress(&progress);
 }
 

to make things pretty.

-Peff


Re: [RFC/PATCH v1] Add Travis CI support

2015-09-25 Thread Luke Diamand
On 25 September 2015 at 08:27, Johannes Schindelin
 wrote:
> Hi,
>
> On 2015-09-25 05:14, Dennis Kaarsemaker wrote:
>> On do, 2015-09-24 at 17:41 -0700, Junio C Hamano wrote:
>>> larsxschnei...@gmail.com writes:
>>>
>>> > My idea is that the owner of "https://github.com/git/git" enables this 
>>> > account
>>> > for Travis (it's free!). Then we would automatically get the test state 
>>> > for all
>>> > official branches.
>>>
>>> The last time I heard about this "it's free" thing, I thought I
>>> heard that it wants write access to the repository.
>>
>> It does not need write access to the git data, only to auxiliary GitHub
>> data: commit status and deployment status (where it can put "this
>> commit failed tests"), repository hooks (to set up build triggers),
>> team membership (ro) and email addresses (ro).
>
> If that still elicits concerns, a fork could be set up that is automatically 
> kept up-to-date via a web hook, and enable Travis CI there.
>
> Junio, if that is something with which you feel more comfortable, I would be 
> willing to set it up. Even if the visibility (read: impact) would be higher 
> if the badges were attached to https://github.com/git/git proper...
>

It would be less intrusive for the CI system to have a fork. Otherwise
other people using git with the same CI system will get annoying merge
conflicts, and we'll also end up with a repo littered with the control
files from past CI systems if the CI system is ever changed.

>From past experience, if it's configured to email people when things
break, sooner or later it will email the wrong people, probably once
every few seconds over a weekend.

Automated testing is a Good Thing, but it's still software, so needs
maintenance or it will break.


Luke


Re: [PATCH v7 0/7] git-p4: add support for large file systems

2015-09-25 Thread Luke Diamand

On 25/09/15 09:35, Lars Schneider wrote:



  size: 3   flags: 0

What's going on?

I believe this is correct. Git-LFS uses the clean/smudge filter to replace the 
LFS pointer with the actual file on checkout. Therefore you see the actual file!
You can find details here: 
https://github.com/github/git-lfs/blob/master/docs/spec.md#intercepting-git

Can you run `du -hs .git/objects` in your Git repository? This should report a 
value well below 16 MB.


It does, thanks!




Re: [RFC/PATCH v1] Add Travis CI support

2015-09-25 Thread Johannes Schindelin
Hi,

On 2015-09-25 05:14, Dennis Kaarsemaker wrote:
> On do, 2015-09-24 at 17:41 -0700, Junio C Hamano wrote:
>> larsxschnei...@gmail.com writes:
>>
>> > My idea is that the owner of "https://github.com/git/git" enables this 
>> > account
>> > for Travis (it's free!). Then we would automatically get the test state 
>> > for all
>> > official branches.
>>
>> The last time I heard about this "it's free" thing, I thought I
>> heard that it wants write access to the repository.
> 
> It does not need write access to the git data, only to auxiliary GitHub
> data: commit status and deployment status (where it can put "this
> commit failed tests"), repository hooks (to set up build triggers),
> team membership (ro) and email addresses (ro).

If that still elicits concerns, a fork could be set up that is automatically 
kept up-to-date via a web hook, and enable Travis CI there.

Junio, if that is something with which you feel more comfortable, I would be 
willing to set it up. Even if the visibility (read: impact) would be higher if 
the badges were attached to https://github.com/git/git proper...

Ciao,
Dscho


Re: [PATCH v7 6/7] git-p4: add support for large file systems

2015-09-25 Thread Luke Diamand
One tiny comment, otherwise this looks good to me.

Can we use test_path_is_missing in place of !test_path_is_file ?
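
Concretely, the deletion helper from the patch below could then read like this
(other assertions trimmed for brevity; test_path_is_missing comes from
t/test-lib-functions.sh):

```
test_file_is_deleted_in_mock_lfs () {
	FILE="$1" &&
	CONTENT="$2" &&
	# assert the absence directly; this only complains if the path
	# unexpectedly exists, instead of negating a helper whose message
	# is written for the opposite expectation
	test_path_is_missing "$FILE" &&
	test_path_is_file ".git/mock-storage/local/$CONTENT" &&
	test_path_is_file ".git/mock-storage/remote/$CONTENT"
}
```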



On 21 September 2015 at 23:41,   wrote:
> From: Lars Schneider 
>
> Perforce repositories can contain large (binary) files. Migrating these
> repositories to Git generates very large local clones. External storage
> systems such as Git LFS [1], Git Fat [2], Git Media [3], git-annex [4]
> try to address this problem.
>
> Add a generic mechanism to detect large files based on extension,
> uncompressed size, and/or compressed size.
>
> [1] https://git-lfs.github.com/
> [2] https://github.com/jedbrown/git-fat
> [3] https://github.com/alebedev/git-media
> [4] https://git-annex.branchable.com/
>
>
> diff --git a/t/t9823-git-p4-mock-lfs.sh b/t/t9823-git-p4-mock-lfs.sh
> new file mode 100755
> index 000..5684ee3
> --- /dev/null
> +++ b/t/t9823-git-p4-mock-lfs.sh
> @@ -0,0 +1,170 @@
> +#!/bin/sh
> +
> +test_description='Clone repositories and store files in Mock LFS'
> +
> +. ./lib-git-p4.sh
> +
> +test_file_is_not_in_mock_lfs () {
> +   FILE="$1" &&
> +   CONTENT="$2" &&
> +   echo "$CONTENT" >expect_content &&
> +   test_path_is_file "$FILE" &&
> +   test_cmp expect_content "$FILE"
> +}
> +
> +test_file_is_in_mock_lfs () {
> +   FILE="$1" &&
> +   CONTENT="$2" &&
> +   LOCAL_STORAGE=".git/mock-storage/local/$CONTENT" &&
> +   SERVER_STORAGE=".git/mock-storage/remote/$CONTENT" &&
> +   echo "pointer-$CONTENT" >expect_pointer &&
> +   echo "$CONTENT" >expect_content &&
> +   test_path_is_file "$FILE" &&
> +   test_path_is_file "$LOCAL_STORAGE" &&
> +   test_path_is_file "$SERVER_STORAGE" &&
> +   test_cmp expect_pointer "$FILE" &&
> +   test_cmp expect_content "$LOCAL_STORAGE" &&
> +   test_cmp expect_content "$SERVER_STORAGE"
> +}
> +
> +test_file_is_deleted_in_mock_lfs () {
> +   FILE="$1" &&
> +   CONTENT="$2" &&
> +   LOCAL_STORAGE=".git/mock-storage/local/$CONTENT" &&
> +   SERVER_STORAGE=".git/mock-storage/remote/$CONTENT" &&
> +   echo "pointer-$CONTENT" >expect_pointer &&
> +   echo "$CONTENT" >expect_content &&
> +   ! test_path_is_file "$FILE" &&

Perhaps use test_path_is_missing instead here?

> +   test_path_is_file "$LOCAL_STORAGE" &&
> +   test_path_is_file "$SERVER_STORAGE" &&
> +   test_cmp expect_content "$LOCAL_STORAGE" &&
> +   test_cmp expect_content "$SERVER_STORAGE"
> +}
> +
> +test_file_count_in_dir () {
> +   DIR="$1" &&
> +   EXPECTED_COUNT="$2" &&
> +   find "$DIR" -type f >actual &&
> +   test_line_count = $EXPECTED_COUNT actual
> +}
> +
> +test_expect_success 'start p4d' '
> +   start_p4d
> +'
> +
> +test_expect_success 'Create repo with binary files' '
> +   client_view "//depot/... //client/..." &&
> +   (
> +   cd "$cli" &&
> +
> +   echo "content 1 txt 23 bytes" >file1.txt &&
> +   p4 add file1.txt &&
> +   echo "content 2-3 bin 25 bytes" >file2.dat &&
> +   p4 add file2.dat &&
> +   p4 submit -d "Add text and binary file" &&
> +
> +   mkdir "path with spaces" &&
> +   echo "content 2-3 bin 25 bytes" >"path with spaces/file3.bin" &&
> +   p4 add "path with spaces/file3.bin" &&
> +   p4 submit -d "Add another binary file with same content and spaces in path" &&
> +
> +   echo "content 4 bin 26 bytes XX" >file4.bin &&
> +   p4 add file4.bin &&
> +   p4 submit -d "Add another binary file with different content"
> +   )
> +'
> +
> +test_expect_success 'Store files in Mock LFS based on size (>24 bytes)' '
> +   client_view "//depot/... //client/..." &&
> +   test_when_finished cleanup_git &&
> +   (
> +   cd "$git" &&
> +   git init . &&
> +   git config git-p4.useClientSpec true &&
> +   git config git-p4.largeFileSystem MockLFS &&
> +   git config git-p4.largeFileThreshold 24 &&
> +   git config git-p4.pushLargeFiles True &&
> +   git p4 clone --destination="$git" //depot@all &&
> +
> +   test_file_is_not_in_mock_lfs file1.txt "content 1 txt 23 bytes" &&
> +   test_file_is_in_mock_lfs file2.dat "content 2-3 bin 25 bytes" &&
> +   test_file_is_in_mock_lfs "path with spaces/file3.bin" "content 2-3 bin 25 bytes" &&
> +   test_file_is_in_mock_lfs file4.bin "content 4 bin 26 bytes XX" &&
> +
> +   test_file_count_in_dir ".git/mock-storage/local" 2 &&
> +   test_file_count_in_dir ".git/mock-storage/remote" 2
> +   )
> +'
> +
> +test_expect_success 'Store files in Mock LFS based on extension (dat)' '
> +   client_view "//depot/... //client/..." &&
> +   test_when_finished cleanup_git &&
> +   (
> +   cd "$git" &&
> 
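
For anyone who wants to try the series outside the test suite, here is a
sketch of the setup using only the knobs exercised above (depot path and
threshold are illustrative, and MockLFS is the test-only backend, so
substitute the real backend from the series if you have it):

```
git init dest && cd dest
git config git-p4.useClientSpec true
git config git-p4.largeFileSystem MockLFS   # test-only backend, as above
git config git-p4.largeFileThreshold 24     # bytes; matches the ">24 bytes" test
git config git-p4.pushLargeFiles True
git p4 clone --destination=. //depot@all
```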

Formatting error in page http://git-scm.com/docs/user-manual

2015-09-25 Thread Valentin VALCIU
Hello!

There is a formatting error in the source code of page
http://git-scm.com/docs/user-manual that makes almost half of it be
rendered in a <pre> element displaying the page source in the original
markup language instead of being converted to HTML.

The issue is in the paragraph that starts with "The index is a binary
file (generally kept in `.git/index`)..."


Regards,
--
Valentin VĂLCIU,
Bucharest, Romania


Re: Formatting error in page http://git-scm.com/docs/user-manual

2015-09-25 Thread John Keeping
On Fri, Sep 25, 2015 at 03:38:02PM +0300, Valentin VALCIU wrote:
> There is a formatting error in the source code of page
> http://git-scm.com/docs/user-manual that makes almost half of it be
> rendered in a <pre> element displaying the page source in the original
> markup language instead of being converted to HTML.
> 
> The issue is in the paragraph that starts with "The index is a binary
> file (generally kept in `.git/index`)..."

It looks like the header underline isn't quite accurate enough to keep
Asciidoctor happy.  The patch below should fix it.

-- >8 --
Subject: [PATCH] Documentation/user-manual: fix header underline

Asciidoctor is stricter than AsciiDoc when deciding if underlining is a
section title or the start of preformatted text.  Make the length of the
underlining match the text to ensure that it renders correctly in all
implementations.

Signed-off-by: John Keeping 
---
 Documentation/user-manual.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/user-manual.txt b/Documentation/user-manual.txt
index 68978f5..1b7987e 100644
--- a/Documentation/user-manual.txt
+++ b/Documentation/user-manual.txt
@@ -3424,7 +3424,7 @@ just missing one particular blob version.
 
 [[the-index]]
 The index
------------
+---------
 
 The index is a binary file (generally kept in `.git/index`) containing a
 sorted list of path names, each with permissions and the SHA-1 of a blob
-- 
2.6.0.rc2.198.g81437b7


Re: git-p4: t9819 failing

2015-09-25 Thread Lars Schneider
Good catch. The case-handling test is actually fine. There was a bug in my 
implementation!

If you do this:
git p4 clone //depot/something/...

Then git p4 generates a directory “something” and clones into that (similar to 
Git). Since I set “cloneDirectory” to the current working directory, that logic 
never kicked in. Therefore the depot was cloned into the current working 
directory instead of a new directory “something” and the test broke.

Thanks,
Lars
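
A shell rendition of what goes wrong with a relative destination, in case that
helps (the directory name 'lc' only mirrors the traceback quoted below):

```
dest=lc
mkdir -p "$dest" && cd "$dest"
df "$dest"          # fails: ./lc/lc does not exist, the same ENOENT that
                    # os.statvfs() hits inside calcDiskFree()
cd .. && dest=$(cd "$dest" && pwd)   # what os.path.abspath() gives us up front
cd "$dest"
df "$dest"          # still resolves, even after the chdir
```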

On 24 Sep 2015, at 22:29, Luke Diamand  wrote:

> OK, slight correction there - it now doesn't crash getting the disk
> usage, but I think it still needs to be updated following the other
> changes to case-handling.
> 
> Luke
> 
> On 24 September 2015 at 08:45, Luke Diamand  wrote:
>> On 23 September 2015 at 13:28, Lars Schneider  
>> wrote:
>>> 
>>>> Here's the last bit of the crash dump from git-p4 I get:
>>>>
>>>>  File "/home/ldiamand/git/git/git-p4", line 2580, in streamP4FilesCbSelf
>>>>    self.streamP4FilesCb(entry)
>>>>  File "/home/ldiamand/git/git/git-p4", line 2497, in streamP4FilesCb
>>>>    required_bytes = int((4 * int(self.stream_file["fileSize"])) - calcDiskFree(self.cloneDestination))
>>>>  File "/home/ldiamand/git/git/git-p4", line 116, in calcDiskFree
>>>>    st = os.statvfs(dirname)
>>>>  OSError: [Errno 2] No such file or directory: 'lc'
>>>>
>>>> Luke
>>> Confirmed. What do you think about this fix?
>> 
>> Works for me!
>> 
>> 
>> 
>>> 
>>> Thank you,
>>> Lars
>>> 
>>> ---
>>> git-p4.py | 1 +
>>> 1 file changed, 1 insertion(+)
>>> 
>>> diff --git a/git-p4.py b/git-p4.py
>>> index 1d1bb87..66c0a4e 100755
>>> --- a/git-p4.py
>>> +++ b/git-p4.py
>>> @@ -3478,6 +3478,7 @@ class P4Clone(P4Sync):
>>> 
>>> print "Importing from %s into %s" % (', '.join(depotPaths), 
>>> self.cloneDestination)
>>> 
>>> +self.cloneDestination = os.path.abspath(self.cloneDestination)
>>> if not os.path.exists(self.cloneDestination):
>>> os.makedirs(self.cloneDestination)
>>> chdir(self.cloneDestination)
>>> --



Re: Fwd: diff not finding difference

2015-09-25 Thread Johannes Schindelin
Hi Jack Adrian,

On 2015-09-24 23:09, Jack Adrian Zappa wrote:
> This is a weird one:
> 
> [file-1 begin]
> 
> abcd efg hijklmnop
> 
> [file-1 end]
> 
> [file-2 begin]
> 
> blah blah blah
> /
> abdc boo ya!
> 
> [file-2 end]
> 
> Do a diff between these and it won't find any difference.
> 
> Same with the following two lines, each in a different file:
> sabc fed ghi jkl
> abc def ghi jkl
> 
> I first noticed this on the command line git and then in VS2013.  The
> original problem was like my first example.  The files were much
> longer, but all that git would see is the addition of the line of
> ..., but not the removal of the original line.
> 
> I've tried some other simple file changes with similar results.
> Something seems to be definitely broken in git diff. :(

You might want to show your exact command-line invocation, i.e. the full 
information. I suspect that you missed the fact that `git diff a b` does not 
compare the file a to the file b, but instead it compares both a and b to what 
is recorded in the index. With one quirk: if the files a and b are not even 
recorded in the index, `git diff` will output nothing.

Now, the really confusing part for you was probably that your `file-2` *was* 
recorded in the index (maybe you made a backup copy with the extra file 
extension `.bak` or some such, and then called `git diff my-file.bak my-file` 
where `my-file` *actually is tracked by Git* but `my-file.bak` is not).

But `git diff` has such nice features that I wanted to use it myself to compare 
files or directories. That is why I introduced the `--no-index` option, years 
ago. And so I suspect that you called

git diff file-1 file-2

when you actually wanted to call

git diff --no-index file-1 file-2

Here is my shell session to verify that `git diff` works as designed:

```
me@work $ cat >file-1

abcd efg hijklmnop

me@work $ cat >file-2

blah blah blah
/
abdc boo ya!

me@work $ git diff --no-index file-1 file-2
diff --git a/file-1 b/file-2
index 80ea2bc..f7fd7eb 100644
--- a/file-1
+++ b/file-2
@@ -1,3 +1,5 @@

-abcd efg hijklmnop
+blah blah blah
+/
+abdc boo ya!

me@work $
```

Please note that I ended the file contents for both `cat` calls [*1*] with a 
`Ctrl+D` which you cannot see in the pasted lines.
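
To see the quirk described above as well, here is the same pair of files in a
throwaway repository where neither one is tracked (contents abbreviated):

```
git init -q demo && cd demo
printf 'abcd efg hijklmnop\n' >file-1
printf 'blah blah blah\n/\nabdc boo ya!\n' >file-2
git diff file-1 file-2              # prints nothing: neither path is in the index
git diff --no-index file-1 file-2   # compares the two files directly
```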

Ciao,
Johannes

Footnote *1*: Can you believe that I wanted to make that pun for at least ten 
years now?


Re: Fwd: diff not finding difference

2015-09-25 Thread Michael J Gruber
Johannes Schindelin venit, vidit, dixit 25.09.2015 12:11:
> Hi Jack Adrian,
> 
> On 2015-09-24 23:09, Jack Adrian Zappa wrote:
>> This is a weird one:
>> 
>> [file-1 begin]
>> 
>> abcd efg hijklmnop
>> 
>> [file-1 end]
>> 
>> [file-2 begin]
>> 
>> blah blah blah
>> /
>> abdc boo ya!
>>
>> [file-2 end]
>> 
>> Do a diff between these and it won't find any difference.
>> 
>> Same with the following two lines, each in a different file:
>> sabc fed ghi jkl
>> abc def ghi jkl
>> 
>> I first noticed this on the command line git and then in VS2013.
>> The original problem was like my first example.  The files were
>> much longer, but all that git would see is the addition of the line
>> of ..., but not the removal of the original line.
>> 
>> I've tried some other simple file changes with similar results. 
>> Something seems to be definitely broken in git diff. :(
> 
> You might want to show your exact command-line invocation, i.e. the
> full information. I suspect that you missed the fact that `git diff a
> b` does not compare the file a to the file b, but instead it compares
> both a and b to what is recorded in the index. With one quirk: if the
> files a and b are not even recorded in the index, `git diff` will
> output nothing.
> 
> Now, the really confusing part for you was probably that your
> `file-2` *was* recorded in the index (maybe you made a backup copy
> with the extra file extension `.bak` or some such, and then called
> `git diff my-file.bak my-file` where `my-file` *actually is tracked
> by Git* but `my-file.bak` is not).
> 
> But `git diff` has so nice features that I wanted to use it myself to
> compare files or directories. That is why I introduced the
> `--no-index` option, years ago. And so I suspect that you called

Ah, now is a good time to rename my (shell) alias "gdiff" for "git diff
--no-index" to dschodiff.

Thanks, Dscho :)
Michael

P.S.: Note that dschodiff works perfectly even outside a git working
directory, with all the --color-words and whitespace goodness and what not!
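
For anyone who wants the same convenience, one way such an alias could look
(the name and the extra option are only an example):

```
alias dschodiff='git diff --no-index --color-words'
dschodiff old-notes.txt new-notes.txt   # works even outside a Git working directory
```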


Re: Formatting error in page http://git-scm.com/docs/user-manual

2015-09-25 Thread Michael J Gruber
John Keeping venit, vidit, dixit 25.09.2015 14:59:
> On Fri, Sep 25, 2015 at 03:38:02PM +0300, Valentin VALCIU wrote:
>> There is a formatting error in the source code of page
>> http://git-scm.com/docs/user-manual that makes almost half of it be
>> rendered in a <pre> element displaying the page source in the original
>> markup language instead of being converted to HTML.
>>
>> The issue is in the paragraph that starts with "The index is a binary
>> file (generally kept in `.git/index`)..."
> 
> It looks like the header underline isn't quite accurate enough to keep
> Asciidoctor happy.  The patch below should fix it.
> 
> -- >8 --
> Subject: [PATCH] Documentation/user-manual: fix header underline
> 
> Asciidoctor is stricter than AsciiDoc when deciding if underlining is a
> section title or the start of preformatted text.  Make the length of the
> underlining match the text to ensure that it renders correctly in all
> implementations.
> 
> Signed-off-by: John Keeping 
> ---
>  Documentation/user-manual.txt | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/Documentation/user-manual.txt b/Documentation/user-manual.txt
> index 68978f5..1b7987e 100644
> --- a/Documentation/user-manual.txt
> +++ b/Documentation/user-manual.txt
> @@ -3424,7 +3424,7 @@ just missing one particular blob version.
>  
>  [[the-index]]
>  The index
> ------------
> +---------
>  
>  The index is a binary file (generally kept in `.git/index`) containing a
>  sorted list of path names, each with permissions and the SHA-1 of a blob
> 

The only other instance seems to be:

Documentation/git-bisect-lk2009.txt-Acknowledgments
Documentation/git-bisect-lk2009.txt:

Dunno if we want to fix that, too.
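
A rough way to hunt for any remaining mismatches, in case it is useful (just a
heuristic, not what was actually run):

```
# report '-' underlines whose length differs from the title line above them
awk 'FNR == 1 { prev = "" }
     prev != "" && /^-+$/ && length($0) != length(prev) {
             print FILENAME ":" FNR ": " prev
     }
     { prev = $0 }' Documentation/*.txt
```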

Michael