OK

2018-10-31 Thread AHMED ZAMA
Greetings,

I humbly solicit for your partnership to transfer €15 million Euros
into your personal or company’s account. Contact me for more detailed
explanation.

Kindly send me the followings

Full Names
Address
Occupation
Direct Mobile Telephone Lines
Nationality

Ahmed Zama
+22675844869


OK

2018-09-20 Thread AHMED ZAMA
Greetings,

Can we discuss business here on the internet? Contact me for more details.

Ahmed Zama


OK

2018-09-15 Thread Ahmed Zama
Greetings,

I humbly solicit for your partnership to transfer €15 million Euros
into your personal or company’s account. Contact me for more detailed
explanation.

Kindly send me the followings

Full Names
Address
Occupation
Direct Mobile Telephone Lines
Nationality

Ahmed Zama
+22675844869


[PATCH 5/5] fmt_with_err: add a comment that truncation is OK

2018-05-18 Thread Jeff King
Functions like die_errno() use fmt_with_err() to combine the
caller-provided format with the strerror() string. We use a
fixed stack buffer because we're already handling an error
and don't have any way to report another one. Our buffer
should generally be big enough to fit this, but if it's not,
truncation is our best option. Let's add a comment to that
effect, so that anybody auditing the code for truncation
bugs knows that this is fine.

Signed-off-by: Jeff King 
---
 usage.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/usage.c b/usage.c
index cdd534c9df..b3c78931ad 100644
--- a/usage.c
+++ b/usage.c
@@ -148,6 +148,7 @@ static const char *fmt_with_err(char *buf, int n, const char *fmt)
 		}
 	}
 	str_error[j] = 0;
+	/* Truncation is acceptable here */
 	snprintf(buf, n, "%s: %s", fmt, str_error);
 	return buf;
 }
-- 
2.17.0.1052.g7d69f75dbf


[PATCH 1/6] test_must_fail: support ok=sigabrt

2018-04-29 Thread Johannes Schindelin
In the upcoming patch, we will prepare t1406 to handle the conversion of
refs/files-backend.c to call BUG() instead of die("BUG: ..."). This will
require handling SIGABRT as a valid failure case.

Signed-off-by: Johannes Schindelin <johannes.schinde...@gmx.de>
---
 t/test-lib-functions.sh | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index 7d620bf2a9a..926aefd1551 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -616,7 +616,7 @@ list_contains () {
 #   ok=<signal-name>[,<...>]:
 #     Don't treat an exit caused by the given signal as error.
 #     Multiple signals can be specified as a comma separated list.
-#     Currently recognized signal names are: sigpipe, success.
+#     Currently recognized signal names are: sigabrt, sigpipe, success.
 #     (Don't use 'success', use 'test_might_fail' instead.)
 
 test_must_fail () {
@@ -636,6 +636,9 @@ test_must_fail () {
 		echo >&4 "test_must_fail: command succeeded: $*"
 		return 1
 	elif test_match_signal 13 $exit_code && list_contains "$_test_ok" sigpipe
+	then
+		return 0
+	elif test_match_signal 6 $exit_code && list_contains "$_test_ok" sigabrt
 	then
 		return 0
 	elif test $exit_code -gt 129 && test $exit_code -le 192
-- 
2.17.0.windows.1.36.gdf4ca5fb72a
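

A minimal usage sketch, assuming the patch above is applied; the
subcommand name is a placeholder for whatever operation is expected to
die via BUG(), not a real git command:

  test_expect_success 'operation hitting BUG() fails in a controlled way' '
  	# BUG() terminates the process by raising SIGABRT; ok=sigabrt
  	# tells test_must_fail to accept that as the expected failure.
  	test_must_fail ok=sigabrt git some-command-that-hits-a-bug
  '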




OK

2018-04-07 Thread AHMED ZAMA
Dear  Friend,



Please can both of us handle a lucrative deal.?? I will give you the
full detail explanation as soon as I hear from you.



Faithfully yours,
Mr Ahmed Zama


OK

2018-03-18 Thread Ahmed Zama
Dear Sir,



Please can both of us handle a lucrative deal.?? I will give you the
full detail explanation as soon as I hear from you.



Faithfully yours,
Mr Ahmed Zama


OK

2018-03-14 Thread Ahmed Zama
Dear Friend

I have a Beneficial Business Project for you worth €15 million Euros.
Reply me for more information.

Ahmed Zama


OK

2018-02-26 Thread Ahmed Zama
Greetings,

I am desperately in need of a foreigner with a foreign account. I have
a profitable business that will be of benefit to both of us. Permit
me to disclose the details of the business to you

Ahmed Zama


[PATCH 1/3] t: document 'test_must_fail ok=<signal-name>'

2018-02-08 Thread SZEDER Gábor
Since 'test_might_fail' is implemented as a thin wrapper around
'test_must_fail', it also accepts the same options.  Mention this in
the docs as well.

Signed-off-by: SZEDER Gábor <szeder@gmail.com>
---
 t/README                | 14 ++++++++++++--
 t/test-lib-functions.sh | 10 ++++++++++
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/t/README b/t/README
index b3f7b449c3..1a1361a806 100644
--- a/t/README
+++ b/t/README
@@ -655,7 +655,7 @@ library for your script to use.
 	test_expect_code 1 git merge "merge msg" B master
 	'
 
- - test_must_fail <git-command>
+ - test_must_fail [<options>] <git-command>
 
    Run a git command and ensure it fails in a controlled way.  Use
    this instead of "! <git-command>".  When git-command dies due to a
@@ -663,11 +663,21 @@ library for your script to use.
    treats it as just another expected failure, which would let such a
    bug go unnoticed.
 
- - test_might_fail <git-command>
+   Accepts the following options:
+
+   ok=<signal-name>[,<...>]:
+     Don't treat an exit caused by the given signal as error.
+     Multiple signals can be specified as a comma separated list.
+     Currently recognized signal names are: sigpipe, success.
+     (Don't use 'success', use 'test_might_fail' instead.)
+
+ - test_might_fail [<options>] <git-command>
 
    Similar to test_must_fail, but tolerate success, too.  Use this
    instead of "<git-command> || :" to catch failures due to segv.
 
+   Accepts the same options as test_must_fail.
+
  - test_cmp <expected> <actual>
 
    Check whether the content of the <actual> file matches the
diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index 1701fe2a06..26b149ac1d 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -610,6 +610,14 @@ list_contains () {
 #
 # Writing this as "! git checkout ../outerspace" is wrong, because
 # the failure could be due to a segv.  We want a controlled failure.
+#
+# Accepts the following options:
+#
+#   ok=<signal-name>[,<...>]:
+#     Don't treat an exit caused by the given signal as error.
+#     Multiple signals can be specified as a comma separated list.
+#     Currently recognized signal names are: sigpipe, success.
+#     (Don't use 'success', use 'test_might_fail' instead.)
 
 test_must_fail () {
 	case "$1" in
@@ -656,6 +664,8 @@ test_must_fail () {
 #
 # Writing "git config --unset all.configuration || :" would be wrong,
 # because we want to notice if it fails due to segv.
+#
+# Accepts the same options as test_must_fail.
 
 test_might_fail () {
 	test_must_fail ok=success "$@"
-- 
2.16.1.180.g07550b0b1b
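

A small usage sketch of the behaviour documented above (the
configuration key is illustrative only); unlike "git config --unset ... || :",
this still fails the test if "git config" segfaults, while tolerating
the key simply not existing:

  test_might_fail git config --unset test.illustrativeKey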



Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-29 Thread Jonathan Tan
On Tue, 26 Sep 2017 17:26:33 +0200
Michael Haggerty  wrote:

> Maybe naming has been discussed at length before, and I am jumping into
> a long-settled topic. And admittedly this is bikeshedding.
> 
> But I find these names obscure, even as a developer. And terms like this
> will undoubtedly bleed into the UI and documentation, so it would be
> good to put some effort into choosing the best names possible.

Names are definitely not a long-settled topic. :-)

I agree that naming is important, and thanks for your efforts.

> I suppose that the term "promisor" comes from the computer science term
> "promise" [1]. In that sense it is apt, because, say, a promisor object
> is something that is known to be obtainable, but we don't have it yet.
> 
> But from the user's point of view, I think this isn't a very
> illuminating term. I think the user's mental model will be that there is
> a distinguished remote repository that holds the project's entire
> published history, and she has to remain connected to it for certain Git
> operations to work [2]. Another interesting aspect of this remote is
> that it has to be trusted never (well, almost never) to discard any
> objects [3].

Yes, that is the mental model I have too. I think the ordinary meaning
of the word "promise" works, though - you're not working completely on
things you have, but you're working partly based on the guarantees (or
promises) that this distinguished remote repository gives.

> Personally I think "lazy remote" and "backing remote" are not too bad.

I think these terms are imprecise. "Lazy remote" seems to me to imply
that it is the remote that is lazy, not us.

"Backing remote" does evoke the concept of a "backing store". For me,
the ability to transfer objects to the backing store to be stored
permanently (so that you don't have to store it yourself) is an
essential part of a backing store, and that is definitely something we
don't do here (although, admittedly, such a feature might be useful), so
I don't agree with that term. But if transferring objects is not
essential to a backing store, or if adding such a feature is a natural
fit to the partial clone feature, maybe we could use that.

> [2] I haven't checked whether the current proposal allows for
> multiple "promisor remotes". It's certainly thinkable, if not
> now then in the future. But I suppose that even then, 99% of
> users will configure a single "promisor remote" for each
> repository.

It does not allow for multiple "promisor remotes". Support for that
would require upgrades in the design (including knowing which remote to
fetch a missing object from), but otherwise I agree with your
statements.


Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-28 Thread Junio C Hamano
Jonathan Tan  writes:

> I've pushed a new version:
>
> https://github.com/jonathantanmy/git/tree/partialclone3

Just FYI, the reason why I commented only on the first patch in your
previous series at GitHub wasn't because I found the others perfect
and nothing to comment on.  It was because I found it extremely
painful to conduct review and comment in the webform and gave up
while trying to review the series that way just after doing a single
patch.

I also found it frustrating that it is not even obvious which of
the many patches in the series have already been commented on,
without clicking to each and every commit (and back), even when the
result is to find that nobody has commented on them yet.


Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-28 Thread Jonathan Tan
On Fri, 15 Sep 2017 13:43:43 -0700
Jonathan Tan  wrote:

> For those interested in partial clones and/or missing objects in repos,
> I've updated my original partialclone patches to not require an explicit
> list of promises. Fetch/clone still only permits exclusion of blobs, but
> the infrastructure is there for a local repo to support missing trees
> and commits as well.
> 
> They can be found here:
> 
> https://github.com/jonathantanmy/git/tree/partialclone2

I've pushed a new version:

https://github.com/jonathantanmy/git/tree/partialclone3

Besides some small changes as requested by comments on the GitHub
repository, I've also updated the code to do the following:
 - clarified terminology - in particular, I've tried to avoid
   "promised", only using "promisor object" to denote objects that the
   local repo knows that the promisor remote has, whether the local repo
   has it or not
 - restored bulk checkout functionality (so now you can clone with
   --blob-max-bytes=0)
 - a fix to fetch-pack to restore a global flag after it uses it, so
   commands like "git log -S" still work (but to test this, I used
   --blob-max-bytes=20 with the Git repository, because batch
   fetching is not implemented for commands like these)

In its current form, the code is already useful for situations like:
 - a large repository with many blobs in which the client only needs to
   checkout, at most, and does not need to search through history
   locally, and
 - a repository with a few large blobs, where the client still can
   search through history as long as the client is online
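
To make that concrete, a rough sketch based on the demo in that
repository's README; the repository path and byte threshold are
illustrative, and the flags and config keys are those of the
partialclone3 branch, not of stock git:

 $ git -C big-repo config uploadpack.advertiseblobmaxbytes 1
 $ git -C big-repo config uploadpack.allowanysha1inwant 1
 $ git clone --blob-max-bytes=20 "file://$(pwd)/big-repo" thin-clone
 $ git -C thin-clone log -S some_string   # missing blobs fetched on demand (online only)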


Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 3)

2017-09-26 Thread Jonathan Tan
On Tue, 26 Sep 2017 10:25:16 -0400
Jeff Hostetler  wrote:

> >> Perhaps you could augment the OID lookup to remember where the object
> >> was found (essentially a .promisor bit set).  Then you wouldn't need
> >> to touch them all.
> > 
> > Sorry - I don't understand this. Are you saying that missing promisor
> > objects should go into the global object hashtable, so that we can set a
> > flag on them?
> 
> I just meant could we add a bit to "struct object_info" to indicate
> that the object was found in a .promisor packfile ?  This could
> be set in sha1_object_info_extended().
> 
> Then the is_promised() calls in fsck and gc would just test that bit.
> 
> Given that that bit will be set on promisOR objects (and we won't
> have object_info for missing objects), you may need to adjust the
> iterator in the fsck/gc code slightly.
> 
> This is a bit of a handwave, but could something like that eliminate
> the need to build this oidset?

This oidset is meant to contain the missing objects, and is needed as
the final check (attempt to read the object, then check this oidset).
Admittedly, right now I add objects to it even if they are present in
the DB, but that's because I think that it's better for the set to be
bigger than to incur the repeated existence checks. But even if we only
include truly missing objects in this oidset, we still need the oidset,
or store information about missing objects in some equivalent data
structure.

The bit that you mention being set on promisOR objects is already being
set. See the invocation of mark_uninteresting() in rev-list.c.


Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-26 Thread Michael Haggerty
On 09/22/2017 12:42 AM, Jonathan Tan wrote:
> On Thu, 21 Sep 2017 13:57:30 Jeff Hostetler  wrote:
> [...]
>> I struggled with the terms here a little when looking at the source.
>> () A remote responding to a partial-clone is termed a
>> "promisor-remote". () Packfiles received from a promisor-remote are
>> marked with ".promisor" like ".keep" names.
>> () An object actually contained in such packfiles is called a
>> "promisor-object". () An object not-present but referenced by one of
>> the above promisor-objects is called a "promised-object" (aka a
>> "missing-object").
>>
>> I think the similarity of the words "promisOR" and "promisED" threw
>> me here and as I was looking at the code.  The code in is_promised()
>> [1] looked like it was adding all promisor- and promised-objects to
>> the "promised" OIDSET, but I could be mistaken.
>>
>> [1]
>> https://github.com/jonathantanmy/git/commit/7a9c2d9b6e2fce293817b595dee29a7eede0#diff-5d5d5dc185ef37dc30bb7d9a7ae0c4e8R1960
> 
> I was struggling a bit with the terminology, true.
> 
> Right now I'm thinking of:
>  - promisor remote (as you defined)
>  - promisor packfile (as you defined)
>  - promisor object is an object known to belong to the promisor (whether
>because we have it in a promisor packfile or because it is referenced
>by an object in a promisor packfile)

Maybe naming has been discussed at length before, and I am jumping into
a long-settled topic. And admittedly this is bikeshedding.

But I find these names obscure, even as a developer. And terms like this
will undoubtedly bleed into the UI and documentation, so it would be
good to put some effort into choosing the best names possible.

I suppose that the term "promisor" comes from the computer science term
"promise" [1]. In that sense it is apt, because, say, a promisor object
is something that is known to be obtainable, but we don't have it yet.

But from the user's point of view, I think this isn't a very
illuminating term. I think the user's mental model will be that there is
a distinguished remote repository that holds the project's entire
published history, and she has to remain connected to it for certain Git
operations to work [2]. Another interesting aspect of this remote is
that it has to be trusted never (well, almost never) to discard any
objects [3].

Let me brainstorm about other names or concepts that seem closer to the
user's mental model:

* "backing remote", "backing repository"

* "lazy remote", "live remote", "cloud remote", "shared remote",
  "on-demand remote"

* "full remote", "deep remote", "permanent remote"

* "attached remote", "bound remote", "linked remote"

* "trusted remote", "authoritative remote", "official remote"
  (these terms probably imply more than we want)

* "upstream", "upstream remote" (probably too confusing with how
  the term "upstream" is currently used, even if in practice they
  will often be the same remote)

* "object depot", "depot remote", "depot repository", "remote
  object depot" (I don't like the term "object" that much, because
  it is rather remote from the Git user's daily life)

* "repository of record", "remote of record" (too wordy)

* "history depot", "history warehouse" (meh)

* (dare I say it?) "central repository"

* "object archive", "archival remote" (not so good because we already
  use "archive" for other concepts)

* depository (sounds too much like "repository")

* The thesaurus suggests nouns like: store, bank, warehouse, library,
  chronicle, annals, registry, locker, strongbox, attic, bunker

Personally I think "lazy remote" and "backing remote" are not too bad.

Michael

[1] https://en.wikipedia.org/wiki/Futures_and_promises

[2] I haven't checked whether the current proposal allows for
multiple "promisor remotes". It's certainly thinkable, if not
now then in the future. But I suppose that even then, 99% of
users will configure a single "promisor remote" for each
repository.

[3] For those rare occasions where the server has to discard objects,
it might make sense for the server to remember the names of the
objects that were deleted, so that it can tell clients "no, you're
not insane. I used to have that object but it has intentionally
been obliterated", and possibly even a reason: "it is now taboo"
vs. "I got tired of carrying it around".


Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 3)

2017-09-26 Thread Jeff Hostetler



On 9/22/2017 6:58 PM, Jonathan Tan wrote:

On Fri, 22 Sep 2017 17:32:00 -0400
Jeff Hostetler  wrote:


I guess I'm afraid that the first call to is_promised() is going to
cause a very long pause as it loads up a very large hash of objects.


Yes, the first call will cause a long pause. (I think fsck and gc can
tolerate this, but a better solution is appreciated.)


Perhaps you could augment the OID lookup to remember where the object
was found (essentially a .promisor bit set).  Then you wouldn't need
to touch them all.


Sorry - I don't understand this. Are you saying that missing promisor
objects should go into the global object hashtable, so that we can set a
flag on them?


I just meant could we add a bit to "struct object_info" to indicate
that the object was found in a .promisor packfile ?  This could
be set in sha1_object_info_extended().

Then the is_promised() calls in fsck and gc would just test that bit.

Given that that bit will be set on promisOR objects (and we won't
have object_info for missing objects), you may need to adjust the
iterator in the fsck/gc code slightly.

This is a bit of a handwave, but could something like that eliminate
the need to build this oidset?





The oidset will deduplicate OIDs.


Right, but you still have an entry for each object.  For a repo the
size of Windows, you may have 25M+ objects in your copy of the ODB.


We have entries only for the "frontier" objects (the objects directly
referenced by any promisor object). For the Windows repo, for example, I
foresee that many of the blobs, trees, and commits will be "hiding"
behind objects that the repository user did not download into their
repo.



Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 2/3)

2017-09-26 Thread Jeff Hostetler



On 9/22/2017 6:52 PM, Jonathan Tan wrote:

On Fri, 22 Sep 2017 17:19:50 -0400
Jeff Hostetler <g...@jeffhostetler.com> wrote:


In your specific example, how would rev-list know, on the client, to
include (or exclude) a large blob in its output if it does not have it,
and thus does not know its size?



The client doesn't have the size. It just knows it is missing and it
needs it. It doesn't matter why it is missing.  (But I guess the client
could assume it is because it is large.)


Ah, OK.


So rev-list on the client could filter the objects it has by size.


My issue is that if the purpose of this feature in rev-list is to do
prefetching, the only criterion we need to check for is absence from the
local repo right? (Or is filtering by size on the client useful for
other reasons?)


The prefetch before a checkout may want all missing blobs, or it
may want to work with the sparse-checkout specification and only get
the required missing blobs in the subset of the tree.  By putting
the same filter logic in rev-list, we can do that.

It also sets the stage for later filtering trees.  (My current patch
only filters blobs.  It would be nice to have a second version of the
sparse filter that also omits trees, but that may require a recursive
option in the fetch-objects protocol.)





FYI I just posted my RFC this afternoon.
https://public-inbox.org/git/20170922204211.ga24...@google.com/T/


Thanks, I'll take a look.



Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 3)

2017-09-22 Thread Jonathan Tan
On Fri, 22 Sep 2017 17:32:00 -0400
Jeff Hostetler  wrote:

> I guess I'm afraid that the first call to is_promised() is going to
> cause a very long pause as it loads up a very large hash of objects.

Yes, the first call will cause a long pause. (I think fsck and gc can
tolerate this, but a better solution is appreciated.)

> Perhaps you could augment the OID lookup to remember where the object
> was found (essentially a .promisor bit set).  Then you wouldn't need
> to touch them all.

Sorry - I don't understand this. Are you saying that missing promisor
objects should go into the global object hashtable, so that we can set a
flag on them?

> > The oidset will deduplicate OIDs.
> 
> Right, but you still have an entry for each object.  For a repo the
> size of Windows, you may have 25M+ objects in your copy of the ODB.

We have entries only for the "frontier" objects (the objects directly
referenced by any promisor object). For the Windows repo, for example, I
foresee that many of the blobs, trees, and commits will be "hiding"
behind objects that the repository user did not download into their
repo.


Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 2/3)

2017-09-22 Thread Jonathan Tan
On Fri, 22 Sep 2017 17:19:50 -0400
Jeff Hostetler <g...@jeffhostetler.com> wrote:

> > In your specific example, how would rev-list know, on the client, to
> > include (or exclude) a large blob in its output if it does not have it,
> > and thus does not know its size?
> > 
> 
> The client doesn't have the size. It just knows it is missing and it
> needs it. It doesn't matter why it is missing.  (But I guess the client
> could assume it is because it is large.)

Ah, OK.

> So rev-list on the client could filter the objects it has by size.

My issue is that if the purpose of this feature in rev-list is to do
prefetching, the only criterion we need to check for is absence from the
local repo right? (Or is filtering by size on the client useful for
other reasons?)

> FYI I just posted my RFC this afternoon.
> https://public-inbox.org/git/20170922204211.ga24...@google.com/T/

Thanks, I'll take a look.


Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-22 Thread Jonathan Tan
On Fri, 22 Sep 2017 17:02:11 -0400
Jeff Hostetler  wrote:

> > I was struggling a bit with the terminology, true.
> > 
> > Right now I'm thinking of:
> >   - promisor remote (as you defined)
> >   - promisor packfile (as you defined)
> >   - promisor object is an object known to belong to the promisor (whether
> > because we have it in a promisor packfile or because it is referenced
> > by an object in a promisor packfile)
> > 
> > This might eliminate "promise(d)", and thus eliminate the confusion
> > between "promised" and "promisor", but I haven't done an exhaustive
> > search.
> > 
> 
> maybe just call the "promised" ones "missing".

They are not the same, though - "missing" usually just means that the
local repo does not have it, without regard to whether another repo has
it.

> >> I guess it depends on how many missing-objects you expect the client
> >> to have. My concern here is that we're limiting the design to the
> >> "occasional" big file problem, rather than the more general scale
> >> problem.
> > 
> > Do you have a specific situation in mind?
> > 
> 
> I would like to be able to do sparse-enlistments in the Windows
> source tree. (3.5M files at HEAD.)  Most developers only need a small
> feature area (a device driver or file system or whatever) and not the
> whole tree.  A typical Windows developer may have only 30-50K files
> populated.  If we can synchronize on a sparse-checkout spec and use
> that for both the checkout and the clone/fetch, then we can bulk fetch
> the blobs that they'll actually need.  GVFS can hydrate the files as
> they touch them, but I can use this to pre-fetch the needed blobs in
> bulk, rather than faulting and fetching them one-by-one.
> 
> So, my usage may have >95% of the ODB be missing blobs.  Contrast that
> with the occasional large blob / LFS usage where you may have <5% of
> the ODB as large objects (by count of OIDs not disk usage).

I don't think the current design precludes a more intelligent bulk
fetching (e.g. being allowed to specify a "want" tree and several "have"
trees, although we will have to figure out a design for that, including
how to select the "have" trees to inform the server about).

In the meantime, yes, this will be more useful for occasional large blob
repos, and (if/when the hook support is added) a GVFS situation where
the missing objects are available network-topologically close by.
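

For readers unfamiliar with the sparse-enlistment scenario described
above, this is roughly what the client side looked like with the
sparse-checkout machinery available at the time; the paths are made up,
and the bulk prefetch of the matching blobs is the part the partial-clone
work would add on top:

 $ git config core.sparseCheckout true
 $ printf '%s\n' '/drivers/widget/' '/include/' >.git/info/sparse-checkout
 $ git read-tree -mu HEAD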


Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 3)

2017-09-22 Thread Jeff Hostetler



On 9/21/2017 7:04 PM, Jonathan Tan wrote:

On Thu, 21 Sep 2017 14:00:40 -0400
Jeff Hostetler  wrote:


(part 3)

Additional overall comments on:
https://github.com/jonathantanmy/git/commits/partialclone2

{} WRT the code in is_promised() [1]

[1]
https://github.com/jonathantanmy/git/commit/7a9c2d9b6e2fce293817b595dee29a7eede0#diff-5d5d5dc185ef37dc30bb7d9a7ae0c4e8R1960

 {} it looked like it was adding ALL promisor- and
promised-objects to the "promised" OIDSET, rather than just
promised-objects, but I could be mistaken.


As far as I can tell, it is just adding the promised objects (some of
which may also be promisor objects). If you're saying that you expected
it to add the promisor objects as well, that might be a reasonable
expectation...I'm thinking of doing that.



It looked like it was adding both types.  I was concerned that
it might be doing too much.  But I haven't run the code; that was from
an observation.


 {} Is this iterating over ALL promisor-packfiles?


Yes.


 {} It looked like this was being used by fsck and rev-list.  I
have concerns about how big this OIDSET will get and how it will
scale, since if we start with a partial-clone all packfiles will be
promisor-packfiles.


It's true that scaling is an issue. I'm not sure if omitting the oidset
will solve anything, though - as it is, Git maintains an object hash and
adds to it quite liberally.


I guess I'm afraid that the first call to is_promised() is going to
cause a very long pause as it loads up a very large hash of objects.

Perhaps you could augment the OID lookup to remember where the object
was found (essentially a .promisor bit set).  Then you wouldn't need
to touch them all.



One thing that might help is some sort of flushing of objects in
promisor packfiles from the local repository - that way, the oidset
won't be so large.



 {} When iterating thru a tree object, you add everything that it
references (everything in that folder).  This adds all of the
child OIDs -- without regard to whether they are new to this
version of the tree object. (Granted, this is hard to compute.)


The oidset will deduplicate OIDs.


Right, but you still have an entry for each object.  For a repo the
size of Windows, you may have 25M+ objects in your copy of the ODB.




My concern is that this will add too many objects to the
OIDSET. That is, a new tree object (because of a recent change to
something in that folder) will also have the OIDs of the other
*unchanged* files which may be present in an earlier non-promisor
packfile from an earlier commit.

I worry that this will grow the OIDSET to essentially include
everything.  And possibly defeating its own purpose.  I could
be wrong here, but that's my concern.


Same answer as above (about flushing of objects in promisor packfiles).


{} I'm not opposed to the .promisor file concept, but I have concerns
 that in practice all packfiles after a partial clone will be
 promisor-packfiles and therefore short-cut during fsck, so fsck
 still won't gain anything.

 It would help if there are also non-promisor packfiles present,
 but only for objects referenced by non-promisor packfiles.

 But then I also have to wonder whether we can even support
non-promisor packfiles after starting with a partial clone -- because
of the properties of received thin-packs on a non-partial fetch.


Same reply as to your other e-mail - locally created objects are not in
promisor packfiles. (Or were you thinking of a situation where locally
created objects are immediately uploaded to the promisor remote, thus
making them promisor objects too?)



Thanks,
Jeff


Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 2/3)

2017-09-22 Thread Jeff Hostetler



On 9/21/2017 6:51 PM, Jonathan Tan wrote:

On Thu, 21 Sep 2017 13:59:43 -0400
Jeff Hostetler <g...@jeffhostetler.com> wrote:


(part 2)

Additional overall comments on:
https://github.com/jonathantanmy/git/commits/partialclone2

{} I think it would help to split the blob-max-bytes filtering and the
 promisor/promised concepts and discuss them independently.

 {} Then we can talk about the promisor/promised
functionality independent of any kind of filter.  The net-net is that
the client has missing objects and it doesn't matter what filter
criteria or mechanism caused that to happen.

 {} blob-max-bytes is but one such filter we should have.  This
might be sufficient if the goal is to replace LFS (where you rarely ever
need any given very very large object) and dynamically loading
them as needed is sufficient and the network round-trip isn't
too much of a perf penalty.

 {} But if we want to do things like a "sparse-enlistments" where
the client only needs a small part of the tree using sparse-checkout.
For example, only populating 50,000 files from a tree of 3.5M
files at HEAD, then we need a more general filtering.

 {} And as I said above, how we choose to filter should be
independent of how the client handles promisor/promised objects.


I agree that they are independent. (I put everything together so that
people could see how they work together, but they can be changed
independently of each other.)


{} Also, if we rely strictly on dynamic object fetching to fetch
missing objects, we are effectively tethered to the server during
operations (such as checkout) that the user might not think about as
requiring a network connection.  And we are forced to keep the same
limitations of LFS in that you can't prefetch and go offline (without
actually checking out to your worktree first).  And we can't bulk or
parallel fetch objects.


I don't think dynamic object fetching precludes any other more optimized
way of fetching or prefetching - I implemented dynamic object fetching
first so that we would have a fallback, but the others definitely can be
implemented too.


yes, we need that as a fallback/default for the odd cases where we
can't predict perfectly.  Like during a blame or history or a merge.

I didn't mean to say we didn't need it, but rather that we should
try to minimize it.




{} I think it would also help to move the blob-max-bytes calculation
out of pack-objects.c : add_object_entry() [1].  The current code
isolates the computation there so that only pack-objects can do the
filtering.

 Instead, put it in list-objects.c and traverse_commit_list() so
that pack-objects and rev-list can share it (as Peff suggested [2] in
 response to my first patch series in March).

 For example, this would let the client have a pre-checkout hook,
use rev-list to compute the set of missing objects needed for that
commit, and pipe that to a command to BULK fetch them from the server
BEFORE starting the actual checkout.  This would allow the savvy user
to manually run a prefetch before going offline.

[1]
https://github.com/jonathantanmy/git/commit/68e529484169f4800115c5a32e0904c25ad14bd8#diff-a8d2c9cf879e775d748056cfed48440cR1110

[2]
https://public-inbox.org/git/20170309073117.g3br5btsfwntc...@sigill.intra.peff.net/


In your specific example, how would rev-list know, on the client, to
include (or exclude) a large blob in its output if it does not have it,
and thus does not know its size?



The client doesn't have the size. It just knows it is missing and it
needs it. It doesn't matter why it is missing.  (But I guess the client
could assume it is because it is large.)

So rev-list on the client could filter the objects it has by size.

I added that to rev-list primarily to demonstrate and debug the
filtering concept (it's easier than playing with packfiles).  But
it can be used to drive client-side queries and bulk requests.


My reason for including it in pack-objects.c is because I only needed it
there and it is much simpler, but I agree that if it can be used
elsewhere, we can put it in a more general place.


{} This also locks us into size-only filtering and makes it more
 difficult to add other filters.  In that the add_object_entry()
 code gets called on an object after the traversal has decided
 what to do with it.  It would be difficult to add tree-trimming
 at this level, for example.


That is true.


{} An early draft of this type of filtering is here [3].  I hope to
push up a revised draft of this shortly.

[3]
https://public-inbox.org/git/20170713173459.3559-1-...@jeffhostetler.com/


OK - I'll take a look when that is done (I think I commented on an
earlier version on that).



FYI I just posted my RFC this afternoon.
https://public-inbox.org/git/20170922204211.ga24...@google.com/T/


Thanks
Jeff



Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-22 Thread Jeff Hostetler



On 9/21/2017 6:42 PM, Jonathan Tan wrote:

On Thu, 21 Sep 2017 13:57:30 -0400
Jeff Hostetler <g...@jeffhostetler.com> wrote:


There's a lot in this patch series.  I'm still studying it, but here
are some notes and questions.  I'll start with direct responses to
the RFC here and follow up in a second email with specific questions
and comments to keep this from being too long.


OK - thanks for your detailed comments.


I like that git-clone saves the partial clone settings in the
.git/config.  This should make it easier for subsequent commands to
default to the right settings.

Do we allow a partial-fetch following a regular clone (such that
git-fetch would create these config file settings)?  This seems like
a reasonable upgrade path for a user with an existing repo to take
advantage of partial fetches going forward.


A "git-fetch ...options... --save-options" does not sound unreasonable,
although I would think that (i) partial fetches/clones are useful on
very large repositories, and (ii) the fact that you had already cloned
this large repository shows that you can handle the disk and network
load, so partial fetch after non-partial clone doesn't seem very useful.

But if there is a use case, I think it could work. Although note that
the GC in my patch set stops at promisor objects, so if an object
originally cloned cannot be reached through a walk (with non-promisor
intermediate objects only), it might end up GC-ed (which might be fine).


Or do we require that git-fetch only be allowed to do partial-fetches
after a partial-clone (so that only clone creates these settings) and
fetch always does partial-fetches thereafter?  It might be useful to
allow fetch to do a full fetch on top of a partial-clone, but there
are probably thin-pack issues to work out first.


About the thin-pack issue, I wonder if it is sufficient to turn on
fetch_if_missing while index-pack is trying to fix the thin pack.

If the thin-pack issues are worked out, then non-partial fetch after
partial clone seems doable (all commits from our tip to their tip are
fetched, as well as all new trees and all new blobs; any trees and blobs
still missing are already promised).


Agreed.  If we get a thin-pack and there are missing objects from the
commits in the edge set, we wouldn't be able to fix the newly received
objects without demand loading the object they are relative to.  Perhaps
we can use my filter/prefetch concept on the edge set to bulk fetch
them. (This is a bit of a SWAG.)  Turning on the dynamic fetch would
be a first step (and may be sufficient).



Thanks for these questions - I am concentrating on repos in which both
clone and fetch are partial, but it is good to discuss what happens if
the user does something else.


Also, there is an assumption here that the user will want to keep
using the same filtering settings on subsequent fetches.  That's
probably fine for now and until we get a chance to try it out for
real.


Yes. Having said that, the fetching of missing objects does not take
into account the filter at all, so the filter can be easily changed.


Do we allow EXACTLY ONE promisor-remote?  That is, can we later add
another remote as a promisor-remote?  And then do partial-fetches from
either?


Yes, exactly one (because we need to know which remote to fetch missing
objects from, and which remote to allow partial fetches from). But the
promisor remote can be switched to another.


Do we need to disallow removing or altering a remote that is
listed as a promisor-remote?


Perhaps, although I think that right now configs are checked when we run
the command using the config, not when we run "git config".


I think for now, one promisor-remote is fine.  Just asking.

Changing a remote's URL might be fine, but deleting the
promisor-remote would put the user in a weird state.  We don't need
to worry about it now though.


Agreed.


I struggled with the terms here a little when looking at the source.
() A remote responding to a partial-clone is termed a
"promisor-remote". () Packfiles received from a promisor-remote are
marked with ".promisor" like ".keep" names.
() An object actually contained in such packfiles is called a
"promisor-object". () An object not-present but referenced by one of
the above promisor-objects is called a "promised-object" (aka a
"missing-object").

I think the similarity of the words "promisOR" and "promisED" threw
me here and as I was looking at the code.  The code in is_promised()
[1] looked like it was adding all promisor- and promised-objects to
the "promised" OIDSET, but I could be mistaken.

[1]
https://github.com/jonathantanmy/git/commit/7a9c2d9b6e2fce293817b595dee29a7eede0#diff-5d5d5dc185ef37dc30bb7d9a7ae0c4e8R1960


I was struggling a bit with the terminology, true.

Right now I'm thinking of:
  - promisor remote (as you defined)
  - promisor packfile (as you def

Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 3)

2017-09-21 Thread Jonathan Tan
On Thu, 21 Sep 2017 14:00:40 -0400
Jeff Hostetler  wrote:

> (part 3)
> 
> Additional overall comments on:
> https://github.com/jonathantanmy/git/commits/partialclone2
> 
> {} WRT the code in is_promised() [1]
> 
> [1]
> https://github.com/jonathantanmy/git/commit/7a9c2d9b6e2fce293817b595dee29a7eede0#diff-5d5d5dc185ef37dc30bb7d9a7ae0c4e8R1960
> 
> {} it looked like it was adding ALL promisor- and
> promised-objects to the "promised" OIDSET, rather than just
> promised-objects, but I could be mistaken.

As far as I can tell, it is just adding the promised objects (some of
which may also be promisor objects). If you're saying that you expected
it to add the promisor objects as well, that might be a reasonable
expectation...I'm thinking of doing that.

> {} Is this iterating over ALL promisor-packfiles?

Yes.

> {} It looked like this was being used by fsck and rev-list.  I
> have concerns about how big this OIDSET will get and how it will
> scale, since if we start with a partial-clone all packfiles will be
>promisor-packfiles.

It's true that scaling is an issue. I'm not sure if omitting the oidset
will solve anything, though - as it is, Git maintains an object hash and
adds to it quite liberally.

One thing that might help is some sort of flushing of objects in
promisor packfiles from the local repository - that way, the oidset
won't be so large.

> 
> {} When iterating thru a tree object, you add everything that it
>references (everything in that folder).  This adds all of the
>child OIDs -- without regard to whether they are new to this
>version of the tree object. (Granted, this is hard to compute.)

The oidset will deduplicate OIDs.

>My concern is that this will add too many objects to the
> OIDSET. That is, a new tree object (because of a recent change to
> something in that folder) will also have the OIDs of the other
> *unchanged* files which may be present in an earlier non-promisor
> packfile from an earlier commit.
> 
>I worry that this will grow the OIDSET to essentially include
>everything.  And possibly defeating its own purpose.  I could
> be wrong here, but that's my concern.

Same answer as above (about flushing of objects in promisor packfiles).

> {} I'm not opposed to the .promisor file concept, but I have concerns
> that in practice all packfiles after a partial clone will be
> promisor-packfiles and therefore short-cut during fsck, so fsck
> still won't gain anything.
> 
> It would help if there are also non-promisor packfiles present,
> but only for objects referenced by non-promisor packfiles.
> 
> But then I also have to wonder whether we can even support
> non-promisor packfiles after starting with a partial clone -- because
> of the properties of received thin-packs on a non-partial fetch.

Same reply as to your other e-mail - locally created objects are not in
promisor packfiles. (Or were you thinking of a situation where locally
created objects are immediately uploaded to the promisor remote, thus
making them promisor objects too?)


Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 2/3)

2017-09-21 Thread Jonathan Tan
On Thu, 21 Sep 2017 13:59:43 -0400
Jeff Hostetler <g...@jeffhostetler.com> wrote:

> (part 2)
> 
> Additional overall comments on:
> https://github.com/jonathantanmy/git/commits/partialclone2
> 
> {} I think it would help to split the blob-max-bytes filtering and the
> promisor/promised concepts and discuss them independently.
> 
> {} Then we can talk about the promisor/promised
> functionality independent of any kind of filter.  The net-net is that
> the client has missing objects and it doesn't matter what filter
> criteria or mechanism caused that to happen.
> 
> {} blob-max-bytes is but one such filter we should have.  This
> might be sufficient if the goal is to replace LFS (where you rarely ever
>need any given very very large object) and dynamically loading
>them as needed is sufficient and the network round-trip isn't
>too much of a perf penalty.
> 
> {} But if we want to do things like a "sparse-enlistments" where
> the client only needs a small part of the tree using sparse-checkout.
>For example, only populating 50,000 files from a tree of 3.5M
> files at HEAD, then we need a more general filtering.
> 
> {} And as I said above, how we choose to filter should be
> independent of how the client handles promisor/promised objects.

I agree that they are independent. (I put everything together so that
people could see how they work together, but they can be changed
independently of each other.)

> {} Also, if we rely strictly on dynamic object fetching to fetch
> missing objects, we are effectively tethered to the server during
> operations (such as checkout) that the user might not think about as
> requiring a network connection.  And we are forced to keep the same
> limitations of LFS in that you can't prefetch and go offline (without
> actually checking out to your worktree first).  And we can't bulk or
> parallel fetch objects.

I don't think dynamic object fetching precludes any other more optimized
way of fetching or prefetching - I implemented dynamic object fetching
first so that we would have a fallback, but the others definitely can be
implemented too.

> {} I think it would also help to move the blob-max-bytes calculation
> out of pack-objects.c : add_object_entry() [1].  The current code
> isolates the computation there so that only pack-objects can do the
> filtering.
> 
> Instead, put it in list-objects.c and traverse_commit_list() so
> that pack-objects and rev-list can share it (as Peff suggested [2] in
> response to my first patch series in March).
> 
> For example, this would let the client have a pre-checkout hook,
> use rev-list to compute the set of missing objects needed for that
> commit, and pipe that to a command to BULK fetch them from the server
> BEFORE starting the actual checkout.  This would allow the savvy user
> to manually run a prefetch before going offline.
> 
> [1]
> https://github.com/jonathantanmy/git/commit/68e529484169f4800115c5a32e0904c25ad14bd8#diff-a8d2c9cf879e775d748056cfed48440cR1110
> 
> [2]
> https://public-inbox.org/git/20170309073117.g3br5btsfwntc...@sigill.intra.peff.net/

In your specific example, how would rev-list know, on the client, to
include (or exclude) a large blob in its output if it does not have it,
and thus does not know its size?

My reason for including it in pack-objects.c is because I only needed it
there and it is much simpler, but I agree that if it can be used
elsewhere, we can put it in a more general place.

> {} This also locks us into size-only filtering and makes it more
> difficult to add other filters.  In that the add_object_entry()
> code gets called on an object after the traversal has decided
> what to do with it.  It would be difficult to add tree-trimming
> at this level, for example.

That is true.

> {} An early draft of this type of filtering is here [3].  I hope to
> push up a revised draft of this shortly.
> 
> [3]
> https://public-inbox.org/git/20170713173459.3559-1-...@jeffhostetler.com/

OK - I'll take a look when that is done (I think I commented on an
earlier version on that).


Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-21 Thread Jonathan Tan
On Thu, 21 Sep 2017 13:57:30 -0400
Jeff Hostetler <g...@jeffhostetler.com> wrote:

> There's a lot in this patch series.  I'm still studying it, but here
> are some notes and questions.  I'll start with direct responses to
> the RFC here and follow up in a second email with specific questions
> and comments to keep this from being too long.

OK - thanks for your detailed comments.

> I like that git-clone saves the partial clone settings in the
> .git/config.  This should make it easier for subsequent commands to
> default to the right settings.
> 
> Do we allow a partial-fetch following a regular clone (such that
> git-fetch would create these config file settings)?  This seems like
> a reasonable upgrade path for a user with an existing repo to take
> advantage of partial fetches going forward.

A "git-fetch ...options... --save-options" does not sound unreasonable,
although I would think that (i) partial fetches/clones are useful on
very large repositories, and (ii) the fact that you had already cloned
this large repository shows that you can handle the disk and network
load, so partial fetch after non-partial clone doesn't seem very useful.

But if there is a use case, I think it could work. Although note that
the GC in my patch set stops at promisor objects, so if an object
originally cloned cannot be reached through a walk (with non-promisor
intermediate objects only), it might end up GC-ed (which might be fine).

> Or do we require that git-fetch only be allowed to do partial-fetches
> after a partial-clone (so that only clone creates these settings) and
> fetch always does partial-fetches thereafter?  It might be useful to
> allow fetch to do a full fetch on top of a partial-clone, but there
> are probably thin-pack issues to work out first.

About the thin-pack issue, I wonder if it is sufficient to turn on
fetch_if_missing while index-pack is trying to fix the thin pack.

If the thin-pack issues are worked out, then non-partial fetch after
partial clone seems doable (all commits from our tip to their tip are
fetched, as well as all new trees and all new blobs; any trees and blobs
still missing are already promised).

Thanks for these questions - I am concentrating on repos in which both
clone and fetch are partial, but it is good to discuss what happens if
the user does something else.

> Also, there is an assumption here that the user will want to keep
> using the same filtering settings on subsequent fetches.  That's
> probably fine for now and until we get a chance to try it out for
> real.

Yes. Having said that, the fetching of missing objects does not take
into account the filter at all, so the filter can be easily changed.

> Do we allow EXACTLY ONE promisor-remote?  That is, can we later add
> another remote as a promisor-remote?  And then do partial-fetches from
> either?

Yes, exactly one (because we need to know which remote to fetch missing
objects from, and which remote to allow partial fetches from). But the
promisor remote can be switched to another.

> Do we need to disallow removing or altering a remote that is
> listed as a promisor-remote?

Perhaps, although I think that right now configs are checked when we run
the command using the config, not when we run "git config".

> I think for now, one promisor-remote is fine.  Just asking.
> 
> Changing a remote's URL might be fine, but deleting the
> promisor-remote would put the user in a weird state.  We don't need
> to worry about it now though.

Agreed.

> I struggled with the terms here a little when looking at the source.
> () A remote responding to a partial-clone is termed a
> "promisor-remote". () Packfiles received from a promisor-remote are
> marked with ".promisor" like ".keep" names.
> () An object actually contained in such packfiles is called a
> "promisor-object". () An object not-present but referenced by one of
> the above promisor-objects is called a "promised-object" (aka a
> "missing-object").
> 
> I think the similarity of the words "promisOR" and "promisED" threw
> me here and as I was looking at the code.  The code in is_promised()
> [1] looked like it was adding all promisor- and promised-objects to
> the "promised" OIDSET, but I could be mistaken.
> 
> [1]
> https://github.com/jonathantanmy/git/commit/7a9c2d9b6e2fce293817b595dee29a7eede0#diff-5d5d5dc185ef37dc30bb7d9a7ae0c4e8R1960

I was struggling a bit with the terminology, true.

Right now I'm thinking of:
 - promisor remote (as you defined)
 - promisor packfile (as you defined)
 - promisor object is an object known to belong to the promisor (whether
   because we have it in a promisor packfile or because it is referenced
   by an object in a promisor packfile)

This might eliminate "
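

Picking up the point above that there is exactly one promisor remote
which "can be switched to another": in the demo configuration quoted
elsewhere in this thread, that remote is recorded as
extensions.partialclone, so under that design the switch would
presumably be a config change along these lines (the remote name is
hypothetical):

 $ git config extensions.partialclone other-remote
 $ git config remote.other-remote.blobmaxbytes 10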

Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 3)

2017-09-21 Thread Jeff Hostetler

(part 3)

Additional overall comments on:
https://github.com/jonathantanmy/git/commits/partialclone2

{} WRT the code in is_promised() [1]

[1] 
https://github.com/jonathantanmy/git/commit/7a9c2d9b6e2fce293817b595dee29a7eede0#diff-5d5d5dc185ef37dc30bb7d9a7ae0c4e8R1960

   {} it looked like it was adding ALL promisor- and promised-objects to
  the "promised" OIDSET, rather than just promised-objects, but I
  could be mistaken.

   {} Is this iterating over ALL promisor-packfiles?

   {} It looked like this was being used by fsck and rev-list.  I have
  concerns about how big this OIDSET will get and how it will scale,
  since if we start with a partial-clone all packfiles will be
  promisor-packfiles.

   {} When iterating thru a tree object, you add everything that it
  references (everything in that folder).  This adds all of the
  child OIDs -- without regard to whether they are new to this
  version of the tree object. (Granted, this is hard to compute.)

  My concern is that this will add too many objects to the OIDSET.
  That is, a new tree object (because of a recent change to something
  in that folder) will also have the OIDs of the other *unchanged*
  files which may be present in an earlier non-promisor packfile
  from an earlier commit.

  I worry that this will grow the OIDSET to essentially include
  everything.  And possibly defeating its own purpose.  I could be
  wrong here, but that's my concern.


{} I'm not opposed to the .promisor file concept, but I have concerns
   that in practice all packfiles after a partial clone will be
   promisor-packfiles and therefore short-cut during fsck, so fsck
   still won't gain anything.

   It would help if there are also non-promisor packfiles present,
   but only for objects referenced by non-promisor packfiles.

   But then I also have to wonder whether we can even support non-promisor
   packfiles after starting with a partial clone -- because of the
   properties of received thin-packs on a non-partial fetch.
   


Thanks,
Jeff



Re: RFC: Design and code of partial clones (now, missing commits and trees OK) (part 2/3)

2017-09-21 Thread Jeff Hostetler

(part 2)

Additional overall comments on:
https://github.com/jonathantanmy/git/commits/partialclone2

{} I think it would help to split the blob-max-bytes filtering and the
   promisor/promised concepts and discuss them independently.

   {} Then we can talk about the promisor/promised functionality
  independent of any kind of filter.  The net-net is that the client
  has missing objects and it doesn't matter what filter criteria
  or mechanism caused that to happen.

   {} blob-max-bytes is but one such filter we should have.  This might
  be sufficient if the goal is to replace LFS (where you rarely ever
  need any given very very large object) and dynamically loading
  them as needed is sufficient and the network round-trip isn't
  too much of a perf penalty.

   {} But if we want to do things like a "sparse-enlistments" where the
  client only needs a small part of the tree using sparse-checkout.
  For example, only populating 50,000 files from a tree of 3.5M files
  at HEAD, then we need a more general filtering.

   {} And as I said above, how we choose to filter should be independent
  of how the client handles promisor/promised objects.


{} Also, if we rely strictly on dynamic object fetching to fetch missing
   objects, we are effectively tethered to the server during operations
   (such as checkout) that the user might not think about as requiring
   a network connection.  And we are forced to keep the same limitations
   of LFS in that you can't prefetch and go offline (without actually
   checking out to your worktree first).  And we can't bulk or parallel
   fetch objects.


{} I think it would also help to move the blob-max-bytes calculation out
   of pack-objects.c : add_object_entry() [1].  The current code isolates
   the computation there so that only pack-objects can do the filtering.

   Instead, put it in list-objects.c and traverse_commit_list() so that
   pack-objects and rev-list can share it (as Peff suggested [2] in
   response to my first patch series in March).

   For example, this would let the client have a pre-checkout hook, use
   rev-list to compute the set of missing objects needed for that commit,
   and pipe that to a command to BULK fetch them from the server BEFORE
   starting the actual checkout.  This would allow the savvy user to
   manually run a prefetch before going offline.

[1] 
https://github.com/jonathantanmy/git/commit/68e529484169f4800115c5a32e0904c25ad14bd8#diff-a8d2c9cf879e775d748056cfed48440cR1110

[2] 
https://public-inbox.org/git/20170309073117.g3br5btsfwntc...@sigill.intra.peff.net/


{} This also locks us into size-only filtering and makes it more
   difficult to add other filters.  In that the add_object_entry()
   code gets called on an object after the traversal has decided
   what to do with it.  It would be difficult to add tree-trimming
   at this level, for example.


{} An early draft of this type of filtering is here [3].  I hope to push
   up a revised draft of this shortly.

[3] https://public-inbox.org/git/20170713173459.3559-1-...@jeffhostetler.com/


Thanks,
Jeff



Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-21 Thread Jeff Hostetler


There's a lot in this patch series.  I'm still studying it, but here
are some notes and questions.  I'll start with direct responses to
the RFC here and follow up in a second email with specific questions
and comments (to keep this from being too long).


On 9/15/2017 4:43 PM, Jonathan Tan wrote:

For those interested in partial clones and/or missing objects in repos,
I've updated my original partialclone patches to not require an explicit
list of promises. Fetch/clone still only permits exclusion of blobs, but
the infrastructure is there for a local repo to support missing trees
and commits as well.

They can be found here:

https://github.com/jonathantanmy/git/tree/partialclone2

To make the number of patches more manageable, I have omitted support
for a custom fetching hook (but it can be readded in fetch-object.c),
and only support promisor packfiles for now (but I don't have any
objection to supporting promisor loose objects in the future).

Let me know what you think of the overall approach. In particular, I'm
still wondering if there is a better approach than to toggle
"fetch_if_missing" whenever we need lazy fetching (or need to suppress
it).

Also, if there are any patches that you think might be useful to others, let
me know and I'll send them to this mailing list for review.

A demo and an overview of the design (also available from that
repository's README):

Demo
----

Obtain a repository.

 $ make prefix=$HOME/local install
 $ cd $HOME/tmp
 $ git clone https://github.com/git/git

Make it advertise the new feature and allow requests for arbitrary blobs.

 $ git -C git config uploadpack.advertiseblobmaxbytes 1
 $ git -C git config uploadpack.allowanysha1inwant 1

Perform the partial clone and check that it is indeed smaller. Specify
"file://" in order to test the partial clone mechanism. (If not, Git will
perform a local clone, which unselectively copies every object.)

 $ git clone --blob-max-bytes=10 "file://$(pwd)/git" git2
 $ git clone "file://$(pwd)/git" git3
 $ du -sh git2 git3
 116M   git2
 129M   git3

Observe that the new repo is automatically configured to fetch missing objects
from the original repo. Subsequent fetches will also be partial.

 $ cat git2/.git/config
 [core]
repositoryformatversion = 1
filemode = true
bare = false
logallrefupdates = true
 [remote "origin"]
url = [snip]
fetch = +refs/heads/*:refs/remotes/origin/*
blobmaxbytes = 10
 [extensions]
partialclone = origin
 [branch "master"]
remote = origin
merge = refs/heads/master



I like that git-clone saves the partial clone settings in the
.git/config.  This should make it easier for subsequent commands to
default to the right settings.

Do we allow a partial-fetch following a regular clone (such that git-fetch
would create these config file settings)?  This seems like a reasonable
upgrade path for a user with an existing repo to take advantage of partial
fetches going forward.

Or do we require that git-fetch only be allowed to do partial-fetches
after a partial-clone (so that only clone creates these settings) and
fetch always does partial-fetches thereafter?  It might be useful to
allow fetch to do a full fetch on top of a partial-clone, but there
are probably thin-pack issues to work out first.

Also, there is an assumption here that the user will want to keep
using the same filtering settings on subsequent fetches.  That's
probably fine for now, at least until we get a chance to try it out for
real.



Unlike in an older version of this code (see the `partialclone` branch), this
also works with the HTTP/HTTPS protocols.

Design
======

Local repository layout
-----------------------

A repository declares its dependence on a *promisor remote* (a remote that
declares that it can serve certain objects when requested) by a repository
extension "partialclone". `extensions.partialclone` must be set to the name of
the remote ("origin" in the demo above).



Do we allow EXACTLY ONE promisor-remote?  That is, can we later add
another remote as a promisor-remote?  And then do partial-fetches from
either?  Do we need to disallow removing or altering a remote that is
listed as a promisor-remote?

I think for now, one promisor-remote is fine.  Just asking.

Changing a remote's URL might be fine, but deleting the promisor-remote
would put the user in a weird state.  We don't need to worry about it
now though.



A packfile can be annotated as originating from the promisor remote by the
existence of a "(packfile name).promisor" file with arbitrary contents (similar
to the ".keep" file). Whenever a promisor remote sends an object, it declares
that it can serve every object directly or indirectly referenced by the sent
object.

A promisor packfile is a packfile annotated with the ".promisor" file. A
promisor object is an object in a promisor packfile. A promised object is an
object directly referenced by a promisor object.

Re: RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-18 Thread Junio C Hamano
Jonathan Tan  writes:

> For those interested in partial clones and/or missing objects in repos,
> I've updated my original partialclone patches to not require an explicit
> list of promises. Fetch/clone still only permits exclusion of blobs, but
> the infrastructure is there for a local repo to support missing trees
> and commits as well.
> ...
> Demo
> 
>
> Obtain a repository.
>
> $ make prefix=$HOME/local install
> $ cd $HOME/tmp
> $ git clone https://github.com/git/git
>
> Make it advertise the new feature and allow requests for arbitrary blobs.
>
> $ git -C git config uploadpack.advertiseblobmaxbytes 1
> $ git -C git config uploadpack.allowanysha1inwant 1
>
> Perform the partial clone and check that it is indeed smaller. Specify
> "file://" in order to test the partial clone mechanism. (If not, Git will
> perform a local clone, which unselectively copies every object.)
>
> $ git clone --blob-max-bytes=10 "file://$(pwd)/git" git2
> $ git clone "file://$(pwd)/git" git3
> $ du -sh git2 git3
> 116M  git2
> 129M  git3
>
> Observe that the new repo is automatically configured to fetch missing objects
> from the original repo. Subsequent fetches will also be partial.
>
> $ cat git2/.git/config
> [core]
>   repositoryformatversion = 1
>   filemode = true
>   bare = false
>   logallrefupdates = true
> [remote "origin"]
>   url = [snip]
>   fetch = +refs/heads/*:refs/remotes/origin/*
>   blobmaxbytes = 10
> [extensions]
>   partialclone = origin
> [branch "master"]
>   remote = origin
>   merge = refs/heads/master

The above sequence of events makes quite a lot of sense.  And the
following description of how it is designed (snipped) is clear
enough (at least to me) to allow me to say that I quite like it.




RFC: Design and code of partial clones (now, missing commits and trees OK)

2017-09-15 Thread Jonathan Tan
For those interested in partial clones and/or missing objects in repos,
I've updated my original partialclone patches to not require an explicit
list of promises. Fetch/clone still only permits exclusion of blobs, but
the infrastructure is there for a local repo to support missing trees
and commits as well.

They can be found here:

https://github.com/jonathantanmy/git/tree/partialclone2

To make the number of patches more manageable, I have omitted support
for a custom fetching hook (but it can be readded in fetch-object.c),
and only support promisor packfiles for now (but I don't have any
objection to supporting promisor loose objects in the future).

Let me know what you think of the overall approach. In particular, I'm
still wondering if there is a better approach than to toggle
"fetch_if_missing" whenever we need lazy fetching (or need to suppress
it).

Also, if there are any patches that you think might be useful to others, let
me know and I'll send them to this mailing list for review.

A demo and an overview of the design (also available from that
repository's README):

Demo
----

Obtain a repository.

$ make prefix=$HOME/local install
$ cd $HOME/tmp
$ git clone https://github.com/git/git

Make it advertise the new feature and allow requests for arbitrary blobs.

$ git -C git config uploadpack.advertiseblobmaxbytes 1
$ git -C git config uploadpack.allowanysha1inwant 1

Perform the partial clone and check that it is indeed smaller. Specify
"file://" in order to test the partial clone mechanism. (If not, Git will
perform a local clone, which unselectively copies every object.)

$ git clone --blob-max-bytes=10 "file://$(pwd)/git" git2
$ git clone "file://$(pwd)/git" git3
$ du -sh git2 git3
116Mgit2
129Mgit3

Observe that the new repo is automatically configured to fetch missing objects
from the original repo. Subsequent fetches will also be partial.

$ cat git2/.git/config
[core]
repositoryformatversion = 1
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = [snip]
fetch = +refs/heads/*:refs/remotes/origin/*
blobmaxbytes = 10
[extensions]
partialclone = origin
[branch "master"]
remote = origin
merge = refs/heads/master

Unlike in an older version of this code (see the `partialclone` branch), this
also works with the HTTP/HTTPS protocols.

Design
======

Local repository layout
-----------------------

A repository declares its dependence on a *promisor remote* (a remote that
declares that it can serve certain objects when requested) by a repository
extension "partialclone". `extensions.partialclone` must be set to the name of
the remote ("origin" in the demo above).

A packfile can be annotated as originating from the promisor remote by the
existence of a "(packfile name).promisor" file with arbitrary contents (similar
to the ".keep" file). Whenever a promisor remote sends an object, it declares
that it can serve every object directly or indirectly referenced by the sent
object.

A promisor packfile is a packfile annotated with the ".promisor" file. A
promisor object is an object in a promisor packfile. A promised object is an
object directly referenced by a promisor object.

(In the future, we might need to add ".promisor" support to loose objects.)
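
For illustration, in the "git2" clone from the demo above the pack
directory would be expected to look roughly like this (the content of the
".promisor" file is irrelevant; only its presence matters):

 $ ls git2/.git/objects/pack/
 pack-<hash>.idx
 pack-<hash>.pack
 pack-<hash>.promisor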

Connectivity check and gc
-------------------------

The object walk done by the connectivity check (as used by fsck and fetch) stops
at all promisor objects and promised objects.

The object walk done by gc also stops at all promisor objects and promised
objects. Only non-promisor packfiles are deleted (if pack deletion is
requested); promisor packfiles are left alone. This maintains the distinction
between promisor packfiles and non-promisor packfiles. (In the future, we might
need to do something more sophisticated with promisor packfiles.)
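
Continuing the demo above, both of the following should therefore complete
without complaining about the blobs that were filtered out, and gc should
leave the promisor packfile in place:

 $ git -C git2 fsck
 $ git -C git2 gc
 $ ls git2/.git/objects/pack/*.promisor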

Fetching of promised objects
----------------------------

When `sha1_object_info_extended()` (or similar) is invoked, it will
automatically attempt to fetch a missing object from the promisor remote if that
object is not in the local repository. For efficiency, no check is made as to
whether that object is a promised object or not.

This automatic fetching can be toggled on and off by the `fetch_if_missing`
global variable, and it is on by default.
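
For example, in the "git2" clone from the demo, asking for a blob that the
filter omitted (say, the Makefile as of some older commit, which the
checkout of HEAD did not need) should transparently trigger such a fetch
from origin instead of failing:

 $ git -C git2 cat-file -p HEAD~500:Makefile | head -n 3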

The actual fetch is done through the fetch-pack/upload-pack protocol. Right now,
this uses the fact that upload-pack allows blob and tree "want"s, and this
incurs the overhead of the unnecessary ref advertisement. I hope that protocol
v2 will allow us to declare that blob and tree "want"s are allowed, and allow
the client to declare that it does not want the ref advertisement. All packfiles
downloaded in this way are annotated with ".promisor".

Fetching with `git fetch`
-------------------------

The fetch-pack/upload-pack protocol has also been extended to support omission
of blobs 

Re: t7300-clean.sh fails "not ok 32 - should avoid cleaning possible submodules" on debian jessie

2016-06-08 Thread Johannes Schindelin
Hi Pirate Praveen,

On Tue, 7 Jun 2016, Pirate Praveen wrote:

> On Tuesday 07 June 2016 04:00 PM, Johannes Schindelin wrote:
> > Hi Pirate Praveen,
> > 
> > On Tue, 7 Jun 2016, Pirate Praveen wrote:
> > 
> >> I'm trying to rebuild git 2.8.1 on debian jessie/stable and I get this
> >> error (tests up to this succeed).
> >>
> >> not ok 32 - should avoid cleaning possible submodules
> > 
> > How about re-running the script with -i -v -x? If the output is still
> > not shining enough light on it, maybe you want to paste the (relevant part
> > of the) output into a reply?
> 
> + rm -fr to_clean possible_sub1
> [...]

Sorry, I must have missed your diligent analysis.

Ciao,
Johannes
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: t7300-clean.sh fails "not ok 32 - should avoid cleaning possible submodules" on debian jessie

2016-06-07 Thread Pirate Praveen


On 2016, June 7 9:16:01 PM IST, Stefan Beller  wrote:
>On Tue, Jun 7, 2016 at 8:43 AM, Stefan Beller 
>wrote:
>> (Are you telling me that patch is faulty?)
>
>The patch is not part of v2.8.1 but part of v2.8.3,
>so take a later version, or cherry-pick that patch manually.

Thanks! I have ignored that test failure for now. Good to know it's fixed in
2.8.3.
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: t7300-clean.sh fails "not ok 32 - should avoid cleaning possible submodules" on debian jessie

2016-06-07 Thread Stefan Beller
On Tue, Jun 7, 2016 at 8:43 AM, Stefan Beller  wrote:
> (Are you telling me that patch is faulty?)

The patch is not part of v2.8.1 but part of v2.8.3,
so take a later version, or cherry-pick that patch manually.
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: t7300-clean.sh fails "not ok 32 - should avoid cleaning possible submodules" on debian jessie

2016-06-07 Thread Stefan Beller
http://thread.gmane.org/gmane.comp.version-control.git/293025

TL;DR:  don't run tests as root, or cherry-pick
cadfbef98032fbc6874b5efd70d1e33dbeb4640d
(Are you telling me that patch is faulty?)
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: t7300-clean.sh fails "not ok 32 - should avoid cleaning possible submodules" on debian jessie

2016-06-07 Thread Pirate Praveen
On Tuesday 07 June 2016 04:00 PM, Johannes Schindelin wrote:
> Hi Pirate Praveen,
> 
> On Tue, 7 Jun 2016, Pirate Praveen wrote:
> 
>> I'm trying to rebuild git 2.8.1 on debian jessie/stable and I get this
>> error (tests up to this succeed).
>>
>> not ok 32 - should avoid cleaning possible submodules
> 
> How about re-running the script with -i -v -x? If the output is still
> not shining enough light on it, maybe you want to paste the (relevant part
> of the) output into a reply?

+ rm -fr to_clean possible_sub1
+ mkdir to_clean possible_sub1
+ test_when_finished rm -rf possible_sub*
+ test 0 = 0
+ test_cleanup={ rm -rf possible_sub*
} && (exit "$eval_ret"); eval_ret=$?; :
+ echo gitdir: foo
+
+ chmod 0 possible_sub1/.git
+
+ git clean -f -d
Skipping repository baz/boo

Skipping repository foo/
Removing possible_sub1/
Skipping repository repo/
Skipping repository sub2/
Removing to_clean/
+ test_path_is_file possible_sub1/.git
+ test -f possible_sub1/.git
+ echo File possible_sub1/.git doesn't exist.
File possible_sub1/.git doesn't exist.
+ false
error: last command exited with $?=1
not ok 32 - should avoid cleaning possible submodules
#
#   rm -fr to_clean possible_sub1 &&
#   mkdir to_clean possible_sub1 &&
#   test_when_finished "rm -rf possible_sub*" &&
#   echo "gitdir: foo" >possible_sub1/.git &&
#   >possible_sub1/hello.world &&
#   chmod 0 possible_sub1/.git &&
#   >to_clean/should_clean.this &&
#   git clean -f -d &&
#   test_path_is_file possible_sub1/.git &&
#   test_path_is_file possible_sub1/hello.world &&
#   test_path_is_missing to_clean
#

> Ciao,
> Johannes
> 




signature.asc
Description: OpenPGP digital signature


Re: t7300-clean.sh fails "not ok 32 - should avoid cleaning possible submodules" on debian jessie

2016-06-07 Thread Johannes Schindelin
Hi Pirate Praveen,

On Tue, 7 Jun 2016, Pirate Praveen wrote:

> I'm trying to rebuild git 2.8.1 on debian jessie/stable and I get this
> error (tests up to this succeed).
> 
> not ok 32 - should avoid cleaning possible submodules

How about re-running the script with -i -v -x? If the output is still
not shining enough light on it, maybe you want to paste the (relevant part
of the) output into a reply?
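
For example:

 $ cd t && sh ./t7300-clean.sh -i -v -x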

Ciao,
Johannes
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


t7300-clean.sh fails "not ok 32 - should avoid cleaning possible submodules" on debian jessie

2016-06-07 Thread Pirate Praveen
Hi,

I'm trying to rebuild git 2.8.1 on debian jessie/stable and I get this
error (tests up to this succeed).

not ok 32 - should avoid cleaning possible submodules

I added debian stretch repo to apt sources.list and ran apt-get source
-b git.

You can see the build options passed here
http://repo.or.cz/git/debian.git/blob/HEAD:/debian/rules

Since it is working fine on debian sid/unstable, I did not want to
report it to debian package maintainers.

I noticed the same failure for git 2.8.0-rc3 as well. I could ignore the
test failure and go ahead, but I'd like to fix this if possible.

Thanks
Praveen



signature.asc
Description: OpenPGP digital signature


OK DEAR

2016-05-06 Thread mpmlmmawvx7958
My greeting to you over there and i hope all is fine, how are you doing, please 
my dear i saw your profile on FB and i
became interested to know more about you, and i hope it will be the same from 
you, please i will like you to contact me
to my email so that i will tell you more about me below,( vivianeri...@gmail.com
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 5/5] t5504: drop sigpipe=ok from push tests

2016-04-19 Thread Jeff King
These were added by 8bf4bec (add "ok=sigpipe" to
test_must_fail and use it to fix flaky tests, 2015-11-27)
because we would racily die via SIGPIPE when the pack was
rejected by the other side.

But since we have recently de-flaked send-pack, we should be
able to tighten up these tests (including re-adding the
expected output checks).

Signed-off-by: Jeff King <p...@peff.net>
---
Would be nice to reference HEAD^2 by name here, but of
course I don't know its final commit sha1 yet.

 t/t5504-fetch-receive-strict.sh | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/t/t5504-fetch-receive-strict.sh b/t/t5504-fetch-receive-strict.sh
index a3e12d2..44f3d5f 100755
--- a/t/t5504-fetch-receive-strict.sh
+++ b/t/t5504-fetch-receive-strict.sh
@@ -100,11 +100,8 @@ test_expect_success 'push with receive.fsckobjects' '
git config receive.fsckobjects true &&
git config transfer.fsckobjects false
) &&
-   test_must_fail ok=sigpipe git push --porcelain dst 
master:refs/heads/test >act &&
-   {
-   test_cmp exp act ||
-   ! test -s act
-   }
+   test_must_fail git push --porcelain dst master:refs/heads/test >act &&
+   test_cmp exp act
 '
 
 test_expect_success 'push with transfer.fsckobjects' '
@@ -114,7 +111,8 @@ test_expect_success 'push with transfer.fsckobjects' '
cd dst &&
git config transfer.fsckobjects true
) &&
-   test_must_fail ok=sigpipe git push --porcelain dst 
master:refs/heads/test >act
+   test_must_fail git push --porcelain dst master:refs/heads/test >act &&
+   test_cmp exp act
 '
 
 cat >bogus-commit <<\EOF
-- 
2.8.1.512.g4e0a533
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v1 2/2] add "ok=sigpipe" to test_must_fail and use it to fix flaky tests

2015-12-01 Thread Lars Schneider

On 28 Nov 2015, at 18:10, Jeff King  wrote:

> On Fri, Nov 27, 2015 at 10:15:14AM +0100, larsxschnei...@gmail.com wrote:
> 
>> From: Lars Schneider 
>> 
>> t5516 "75 - deny fetch unreachable SHA1, allowtipsha1inwant=true" is
>> flaky in the following case:
>> 1. remote upload-pack finds out "not our ref"
>> 2. remote sends a response and closes the pipe
>> 3. fetch-pack still tries to write commands to the remote upload-pack
>> 4. write call in wrapper.c dies with SIGPIPE
>> 
>> t5504 "9 - push with transfer.fsckobjects" is flaky, too, and returns
>> SIGPIPE once in a while. I had to remove the final "To dst..." output
>> check because there is no output if the process dies with SIGPUPE.
> 
> s/PUPE/PIPE/ :)
> 
> I think it would be nice for future readers to understand a bit better
> _why_ this is flaky, and why the fix is to the test suite and not to git
> itself. I added this paragraph in between the two above:
> 
>The test is flaky because the sending fetch-pack may or may not have
>finished writing its output by step (3). If it did, then we see a
>closed pipe on the next read() call. If it didn't, then we get the
>SIGPIPE from step (4) above. Both are fine, but the latter fools
>test_must_fail.
> 
Sounds good! Thank you :-)

- Lars

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v1 2/2] add "ok=sigpipe" to test_must_fail and use it to fix flaky tests

2015-11-28 Thread Jeff King
On Fri, Nov 27, 2015 at 10:15:14AM +0100, larsxschnei...@gmail.com wrote:

> From: Lars Schneider 
> 
> t5516 "75 - deny fetch unreachable SHA1, allowtipsha1inwant=true" is
> flaky in the following case:
> 1. remote upload-pack finds out "not our ref"
> 2. remote sends a response and closes the pipe
> 3. fetch-pack still tries to write commands to the remote upload-pack
> 4. write call in wrapper.c dies with SIGPIPE
> 
> t5504 "9 - push with transfer.fsckobjects" is flaky, too, and returns
> SIGPIPE once in a while. I had to remove the final "To dst..." output
> check because there is no output if the process dies with SIGPUPE.

s/PUPE/PIPE/ :)

I think it would be nice for future readers to understand a bit better
_why_ this is flaky, and why the fix is to the test suite and not to git
itself. I added this paragraph in between the two above:

The test is flaky because the sending fetch-pack may or may not have
finished writing its output by step (3). If it did, then we see a
closed pipe on the next read() call. If it didn't, then we get the
SIGPIPE from step (4) above. Both are fine, but the latter fools
test_must_fail.
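
For reference, death by SIGPIPE surfaces as exit status 128 + 13 = 141,
which is the value test_must_fail has to special-case. A quick bash-only
illustration (the writer is normally killed by SIGPIPE here):

 $ yes | head -n 1 >/dev/null; echo "${PIPESTATUS[0]}"
 141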

-Peff
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v1 2/2] add "ok=sigpipe" to test_must_fail and use it to fix flaky tests

2015-11-27 Thread larsxschneider
From: Lars Schneider <larsxschnei...@gmail.com>

t5516 "75 - deny fetch unreachable SHA1, allowtipsha1inwant=true" is
flaky in the following case:
1. remote upload-pack finds out "not our ref"
2. remote sends a response and closes the pipe
3. fetch-pack still tries to write commands to the remote upload-pack
4. write call in wrapper.c dies with SIGPIPE

t5504 "9 - push with transfer.fsckobjects" is flaky, too, and returns
SIGPIPE once in a while. I had to remove the final "To dst..." output
check because there is no output if the process dies with SIGPUPE.

Accept such a death-with-sigpipe also as OK when we are expecting a
failure.

Signed-off-by: Lars Schneider <larsxschnei...@gmail.com>
---
 t/t5504-fetch-receive-strict.sh | 5 ++---
 t/t5516-fetch-push.sh   | 6 +++---
 t/test-lib-functions.sh | 3 +++
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/t/t5504-fetch-receive-strict.sh b/t/t5504-fetch-receive-strict.sh
index 44f3d5f..89224ed 100755
--- a/t/t5504-fetch-receive-strict.sh
+++ b/t/t5504-fetch-receive-strict.sh
@@ -100,7 +100,7 @@ test_expect_success 'push with receive.fsckobjects' '
git config receive.fsckobjects true &&
git config transfer.fsckobjects false
) &&
-   test_must_fail git push --porcelain dst master:refs/heads/test >act &&
+   test_must_fail ok=sigpipe git push --porcelain dst 
master:refs/heads/test >act &&
test_cmp exp act
 '
 
@@ -111,8 +111,7 @@ test_expect_success 'push with transfer.fsckobjects' '
cd dst &&
git config transfer.fsckobjects true
) &&
-   test_must_fail git push --porcelain dst master:refs/heads/test >act &&
-   test_cmp exp act
+   test_must_fail ok=sigpipe git push --porcelain dst 
master:refs/heads/test >act
 '
 
 cat >bogus-commit <<\EOF
diff --git a/t/t5516-fetch-push.sh b/t/t5516-fetch-push.sh
index ec22c98..0a87e19 100755
--- a/t/t5516-fetch-push.sh
+++ b/t/t5516-fetch-push.sh
@@ -1162,15 +1162,15 @@ do
mk_empty shallow &&
(
cd shallow &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_3 &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_1 &&
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_3 &&
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_1 &&
git --git-dir=../testrepo/.git config 
uploadpack.allowreachablesha1inwant true &&
git fetch ../testrepo/.git $SHA1_1 &&
git cat-file commit $SHA1_1 &&
test_must_fail git cat-file commit $SHA1_2 &&
    git fetch ../testrepo/.git $SHA1_2 &&
git cat-file commit $SHA1_2 &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_3
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_3
)
'
 done
diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index 94c449a..06d3fcb 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -612,6 +612,9 @@ test_must_fail () {
then
echo >&2 "test_must_fail: command succeeded: $*"
return 1
+   elif list_contains "$_test_ok" sigpipe && test "$exit_code" -eq 141
+   then
+   return 0
elif test $exit_code -gt 129 && test $exit_code -le 192
then
echo >&2 "test_must_fail: died by signal: $*"
-- 
2.5.1

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v6 2/6] add "ok=sigpipe" to test_must_fail and use it to fix flaky tests

2015-11-19 Thread larsxschneider
From: Lars Schneider <larsxschnei...@gmail.com>

t5516 "75 - deny fetch unreachable SHA1, allowtipsha1inwant=true" is
flaky in the following case:
1. remote upload-pack finds out "not our ref"
2. remote sends a response and closes the pipe
3. fetch-pack still tries to write commands to the remote upload-pack
4. write call in wrapper.c dies with SIGPIPE

t5504 "9 - push with transfer.fsckobjects" is flaky, too, and returns
SIGPIPE once in a while. I had to remove the final "To dst..." output
check because there is no output if the process dies with SIGPUPE.

Accept such a death-with-sigpipe also as OK when we are expecting a
failure.

Signed-off-by: Lars Schneider <larsxschnei...@gmail.com>
---
 t/t5504-fetch-receive-strict.sh | 3 +--
 t/t5516-fetch-push.sh   | 6 +++---
 t/test-lib-functions.sh | 4 
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/t/t5504-fetch-receive-strict.sh b/t/t5504-fetch-receive-strict.sh
index 44f3d5f..a9e382c 100755
--- a/t/t5504-fetch-receive-strict.sh
+++ b/t/t5504-fetch-receive-strict.sh
@@ -111,8 +111,7 @@ test_expect_success 'push with transfer.fsckobjects' '
cd dst &&
git config transfer.fsckobjects true
) &&
-   test_must_fail git push --porcelain dst master:refs/heads/test >act &&
-   test_cmp exp act
+   test_must_fail ok=sigpipe git push --porcelain dst 
master:refs/heads/test >act
 '
 
 cat >bogus-commit <<\EOF
diff --git a/t/t5516-fetch-push.sh b/t/t5516-fetch-push.sh
index ec22c98..0a87e19 100755
--- a/t/t5516-fetch-push.sh
+++ b/t/t5516-fetch-push.sh
@@ -1162,15 +1162,15 @@ do
mk_empty shallow &&
(
cd shallow &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_3 &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_1 &&
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_3 &&
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_1 &&
git --git-dir=../testrepo/.git config 
uploadpack.allowreachablesha1inwant true &&
git fetch ../testrepo/.git $SHA1_1 &&
git cat-file commit $SHA1_1 &&
test_must_fail git cat-file commit $SHA1_2 &&
git fetch ../testrepo/.git $SHA1_2 &&
git cat-file commit $SHA1_2 &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_3
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_3
)
'
 done
diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index 1e762da..1fdc58c 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -598,6 +598,10 @@ test_must_fail () {
then
echo >&2 "test_must_fail: command succeeded: $*"
return 0
+   elif ! case ",$_test_ok," in *,sigpipe,*) false;; esac &&
+   test $exit_code = 141
+   then
+   return 0
elif test $exit_code -gt 129 && test $exit_code -le 192
then
echo >&2 "test_must_fail: died by signal: $*"
-- 
2.5.1

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v5 2/6] add "ok=sigpipe" to test_must_fail and use it to fix flaky tests

2015-11-15 Thread larsxschneider
From: Lars Schneider <larsxschnei...@gmail.com>

t5516 "75 - deny fetch unreachable SHA1, allowtipsha1inwant=true" is
flaky in the following case:
1. remote upload-pack finds out "not our ref"
2. remote sends a response and closes the pipe
3. fetch-pack still tries to write commands to the remote upload-pack
4. write call in wrapper.c dies with SIGPIPE

t5504 "9 - push with transfer.fsckobjects" is flaky, too, and returns
SIGPIPE once in a while. I had to remove the final "To dst..." output
check because there is no output if the process dies with SIGPUPE.

Accept such a death-with-sigpipe also as OK when we are expecting a
failure.

Signed-off-by: Lars Schneider <larsxschnei...@gmail.com>
---
 t/t5504-fetch-receive-strict.sh | 3 +--
 t/t5516-fetch-push.sh   | 6 +++---
 t/test-lib-functions.sh | 4 
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/t/t5504-fetch-receive-strict.sh b/t/t5504-fetch-receive-strict.sh
index 44f3d5f..a9e382c 100755
--- a/t/t5504-fetch-receive-strict.sh
+++ b/t/t5504-fetch-receive-strict.sh
@@ -111,8 +111,7 @@ test_expect_success 'push with transfer.fsckobjects' '
cd dst &&
git config transfer.fsckobjects true
) &&
-   test_must_fail git push --porcelain dst master:refs/heads/test >act &&
-   test_cmp exp act
+   test_must_fail ok=sigpipe git push --porcelain dst 
master:refs/heads/test >act
 '
 
 cat >bogus-commit <<\EOF
diff --git a/t/t5516-fetch-push.sh b/t/t5516-fetch-push.sh
index ec22c98..0a87e19 100755
--- a/t/t5516-fetch-push.sh
+++ b/t/t5516-fetch-push.sh
@@ -1162,15 +1162,15 @@ do
mk_empty shallow &&
(
cd shallow &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_3 &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_1 &&
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_3 &&
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_1 &&
git --git-dir=../testrepo/.git config 
uploadpack.allowreachablesha1inwant true &&
git fetch ../testrepo/.git $SHA1_1 &&
git cat-file commit $SHA1_1 &&
test_must_fail git cat-file commit $SHA1_2 &&
git fetch ../testrepo/.git $SHA1_2 &&
git cat-file commit $SHA1_2 &&
-   test_must_fail git fetch ../testrepo/.git $SHA1_3
+   test_must_fail ok=sigpipe git fetch ../testrepo/.git 
$SHA1_3
)
'
 done
diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index 1e762da..1fdc58c 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -598,6 +598,10 @@ test_must_fail () {
then
echo >&2 "test_must_fail: command succeeded: $*"
return 0
+   elif ! case ",$_test_ok," in *,sigpipe,*) false;; esac &&
+   test $exit_code = 141
+   then
+   return 0
elif test $exit_code -gt 129 && test $exit_code -le 192
then
echo >&2 "test_must_fail: died by signal: $*"
-- 
2.5.1

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


`-u`/`update-head-ok` unsupported on pull

2015-10-08 Thread Victor Engmark
According to the documentation these options are supported:
$ git help pull | grep -e '--update-head-ok'
   -u, --update-head-ok

However:
$ git pull --update-head-ok
error: unknown option `update-head-ok'

Using:
$ git --version
git version 2.6.1
$ pacman --query --info git | grep ^Version
Version: 2.6.1-1

Am I missing something? The manual seems to be for the right version:

$ git help pull | tail -n1 | tr -s ' '
Git 2.6.1 10/06/2015 GIT-PULL(1)

I'm running the system installed Git:

$ type -a git
git is /usr/bin/git

Cheers
Victor
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: `-u`/`update-head-ok` unsupported on pull

2015-10-08 Thread Stefan Beller
+Paul Tan who rewrote git pull in C recently.

The manpage:
-u, --update-head-ok
   By default git fetch refuses to update the head which
corresponds to the current branch. This flag disables the check. This
is purely for the internal use for git pull to communicate with
   git fetch, and unless you are implementing your own
Porcelain you are not supposed to use it.

I guess we either need to update the manpage to remove that option
then, or add it back in in case somebody actually uses it despite the
warning in the man page?
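
For what it's worth, the flag still seems to be accepted by git fetch
itself, so it looks like only the pass-through from git pull was lost in
the rewrite:

 $ git fetch --update-head-ok origin master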
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] fsck: it is OK for a tag and a commit to lack the body

2015-06-29 Thread Johannes Schindelin
Hi Junio,

On 2015-06-29 07:42, Junio C Hamano wrote:
 Johannes Schindelin johannes.schinde...@gmx.de writes:
 
 Hmm. Maybe we should still warn when there is no empty line finishing
 the header explicitly, or at least make it FSCK_IGNORE by default so
 that maintainers who like a stricter check can upgrade the condition
 to an error?
 
 [...]
 
 And such a check can certainly be added in the future

True. I take my suggestion back.

Thanks for the reality check,
Dscho
--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] fsck: it is OK for a tag and a commit to lack the body

2015-06-28 Thread Eric Sunshine
On Sun, Jun 28, 2015 at 2:18 PM, Junio C Hamano gits...@pobox.com wrote:
 When fsck validates a commit or a tag object, it scans each line in
 the header the object using helper functions such as start_with(),

s/header/ of/

 etc. that work on a NUL terminated buffer, but before a1e920a0
 (index-pack: terminate object buffers with NUL, 2014-12-08), the
 validation functions were fed the object data as counted strings,
 not necessarily terminated with a NUL.  We added a helper function
 require_end_of_header() to be called at the beginning of these
 validation functions to insist that the object data contains an
 empty line before its end.  The theory is that the validating
 functions will notice and stop when it hits an empty line as a
 normal end of header (or a required header line that is missing)
 before scanning past the end of potentially not NUL-terminated
 buffer.

 But the theory forgot that in the older days, Git itself happily
 created objects with only the header lines without a body. This
 caused Git 2.2 and later to issue an unnecessary warning on some
 existing repositories.

 With a1e920a0, we do not need to require an empty line (or the body)
 in these objects to safely parse and validate them.  Drop the
 offending "must have an empty line" check from this helper function,
 while keeping the other check to make sure that there is no NUL in
 the header part of the object, and adjust the name of the helper to
 what it does accordingly.

 Noticed-by: Wolfgang Denk w...@denx.de
 Helped-by: Jeff King p...@peff.net
 Signed-off-by: Junio C Hamano gits...@pobox.com
--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] fsck: it is OK for a tag and a commit to lack the body

2015-06-28 Thread Junio C Hamano
When fsck validates a commit or a tag object, it scans each line in
the header the object using helper functions such as start_with(),
etc. that work on a NUL terminated buffer, but before a1e920a0
(index-pack: terminate object buffers with NUL, 2014-12-08), the
validation functions were fed the object data as counted strings,
not necessarily terminated with a NUL.  We added a helper function
require_end_of_header() to be called at the beginning of these
validation functions to insist that the object data contains an
empty line before its end.  The theory is that the validating
functions will notice and stop when it hits an empty line as a
normal end of header (or a required header line that is missing)
before scanning past the end of potentially not NUL-terminated
buffer.

But the theory forgot that in the older days, Git itself happily
created objects with only the header lines without a body. This
caused Git 2.2 and later to issue an unnecessary warning on some
existing repositories.

With a1e920a0, we do not need to require an empty line (or the body)
in these objects to safely parse and validate them.  Drop the
offending "must have an empty line" check from this helper function,
while keeping the other check to make sure that there is no NUL in
the header part of the object, and adjust the name of the helper to
what it does accordingly.

Noticed-by: Wolfgang Denk w...@denx.de
Helped-by: Jeff King p...@peff.net
Signed-off-by: Junio C Hamano gits...@pobox.com
---

 * With a proper commit message this time. I did this directly on
   top of a1e920a0 (index-pack: terminate object buffers with NUL,
   2014-12-08); it has trivial merge conflicts with more recent
   codebase, whose resolutions I'll push out later on 'pu'.

 fsck.c | 17 +
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/fsck.c b/fsck.c
index 88c92e8..3f264e7 100644
--- a/fsck.c
+++ b/fsck.c
@@ -238,8 +238,8 @@ static int fsck_tree(struct tree *item, int strict, 
fsck_error error_func)
return retval;
 }
 
-static int require_end_of_header(const void *data, unsigned long size,
-   struct object *obj, fsck_error error_func)
+static int verify_headers(const void *data, unsigned long size,
+ struct object *obj, fsck_error error_func)
 {
const char *buffer = (const char *)data;
unsigned long i;
@@ -255,6 +255,15 @@ static int require_end_of_header(const void *data, 
unsigned long size,
}
}
 
+   /*
+* We did not find double-LF that separates the header
+* and the body.  Not having a body is not a crime but
+* we do want to see the terminating LF for the last header
+* line.
+*/
+   if (size && buffer[size - 1] == '\n')
+   return 0;
+
   return error_func(obj, FSCK_ERROR, "unterminated header");
 }
 
@@ -305,7 +314,7 @@ static int fsck_commit_buffer(struct commit *commit, const 
char *buffer,
unsigned parent_count, parent_line_count = 0;
int err;
 
-   if (require_end_of_header(buffer, size, commit->object, error_func))
+   if (verify_headers(buffer, size, commit->object, error_func))
return -1;
 
   if (!skip_prefix(buffer, "tree ", buffer))
@@ -384,7 +393,7 @@ static int fsck_tag_buffer(struct tag *tag, const char 
*data,
}
}
 
-   if (require_end_of_header(buffer, size, tag->object, error_func))
+   if (verify_headers(buffer, size, tag->object, error_func))
goto done;
 
   if (!skip_prefix(buffer, "object ", buffer)) {
-- 
2.5.0-rc0-151-g019519d

--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] fsck: it is OK for a tag and a commit to lack the body

2015-06-28 Thread Junio C Hamano
Johannes Schindelin johannes.schinde...@gmx.de writes:

 +/*
 + * We did not find double-LF that separates the header
 + * and the body.  Not having a body is not a crime but
 + * we do want to see the terminating LF for the last header
 + * line.
 + */
 +if (size && buffer[size - 1] == '\n')
 +return 0;
 +
  return error_func(obj, FSCK_ERROR, "unterminated header");
  }

 Hmm. Maybe we should still warn when there is no empty line finishing
 the header explicitly, or at least make it FSCK_IGNORE by default so
 that maintainers who like a stricter check can upgrade the condition
 to an error?

Wolfgang, do you know how these old tags without messages were
created?  I think in the old days we didn't advertise git tag as
the sole way to create a tag object and more people drove git
mktag from their script, and mktag until e0aaf781 (mktag.c:
improve verification of tagger field and tests, 2008-03-27) did not
complain if the header were not terminated with double-LF even when
the tag did not have a body (hence there is no need for double-LF).

Dscho, I do not think it is reasonable to force all repository
owners of projects that started using Git before early 2008 to set
configuration variable to ignore warnings for something that was
perfectly kosher back when their project started.  More importantly,
even though Git core itself adds unnecessary double-LF after the
header in a tag or a commit that does not have any body, I am not
sure if we punish people who use current versions of third-party
reimplementations of Git that do not write that unnecessary
double-LF at the end an object without a body (I am not saying that
there is any known third-party reimplementation to do so---I am
saying that I do not think it is their bug if such implementations
existed today).

Do we have check marked as FSCK_IGNORE by default?  I think a more
interesting stricter check may be to flag a tag object or a commit
object that does not have any body, with or without the double-LF at
the end.

And such a check can certainly be added in the future, but what I
sent was a fix to a regression that caused us to start whining on a
syntactically valid object in the v2.2 timeframe, and is not about
adding a new feature.
--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] fsck: it is OK for a tag and a commit to lack the body

2015-06-28 Thread Johannes Schindelin
Hi Junio,

On 2015-06-28 20:18, Junio C Hamano wrote:
 diff --git a/fsck.c b/fsck.c
 index 88c92e8..3f264e7 100644
 --- a/fsck.c
 +++ b/fsck.c
 @@ -255,6 +255,15 @@ static int require_end_of_header(const void
 *data, unsigned long size,
   }
   }
  
 + /*
 +  * We did not find double-LF that separates the header
 +  * and the body.  Not having a body is not a crime but
 +  * we do want to see the terminating LF for the last header
 +  * line.
 +  */
 + if (size && buffer[size - 1] == '\n')
 + return 0;
 +
   return error_func(obj, FSCK_ERROR, "unterminated header");
  }
  

Hmm. Maybe we should still warn when there is no empty line finishing the 
header explicitly, or at least make it FSCK_IGNORE by default so that 
maintainers who like a stricter check can upgrade the condition to an error?

Otherwise: ACK.

Ciao,
Dscho
--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 2/2] Revert test-lib: allow prefixing a custom string before ok N etc.

2013-10-19 Thread Thomas Rast
Now that ad0e623 (test-lib: support running tests under valgrind in
parallel, 2013-06-23) has been reverted, this support code has no
users any more.  Revert it, too.

This reverts commit e939e15d241e942662b9f88f6127ab470ab0a0b9.
---
 t/test-lib.sh | 27 ---
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/t/test-lib.sh b/t/test-lib.sh
index eaf6759..29c1410 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -210,9 +210,6 @@ do
--root=*)
root=$(expr z$1 : 'z[^=]*=\(.*\)')
shift ;;
-   --statusprefix=*)
-   statusprefix=$(expr z$1 : 'z[^=]*=\(.*\)')
-   shift ;;
*)
echo error: unknown test option '$1' 2; exit 1 ;;
esac
@@ -320,12 +317,12 @@ trap 'die' EXIT
 
 test_ok_ () {
test_success=$(($test_success + 1))
-   say_color  ${statusprefix}ok $test_count - $@
+   say_color  ok $test_count - $@
 }
 
 test_failure_ () {
test_failure=$(($test_failure + 1))
-   say_color error ${statusprefix}not ok $test_count - $1
+   say_color error not ok $test_count - $1
shift
echo $@ | sed -e 's/^/#   /'
test $immediate =  || { GIT_EXIT_OK=t; exit 1; }
@@ -333,12 +330,12 @@ test_failure_ () {
 
 test_known_broken_ok_ () {
test_fixed=$(($test_fixed+1))
-   say_color error ${statusprefix}ok $test_count - $@ # TODO known 
breakage vanished
+   say_color error ok $test_count - $@ # TODO known breakage vanished
 }
 
 test_known_broken_failure_ () {
test_broken=$(($test_broken+1))
-   say_color warn ${statusprefix}not ok $test_count - $@ # TODO known 
breakage
+   say_color warn not ok $test_count - $@ # TODO known breakage
 }
 
 test_debug () {
@@ -462,8 +459,8 @@ test_skip () {
of_prereq= of $test_prereq
fi
 
-   say_color skip 3 ${statusprefix}skipping test: $@
-   say_color skip ${statusprefix}ok $test_count # skip $1 
(missing $missing_prereq${of_prereq})
+   say_color skip 3 skipping test: $@
+   say_color skip ok $test_count # skip $1 (missing 
$missing_prereq${of_prereq})
: true
;;
*)
@@ -501,11 +498,11 @@ test_done () {
 
if test $test_fixed != 0
then
-   say_color error ${statusprefix}# $test_fixed known breakage(s) 
vanished; please update test(s)
+   say_color error # $test_fixed known breakage(s) vanished; 
please update test(s)
fi
if test $test_broken != 0
then
-   say_color warn ${statusprefix}# still have $test_broken known 
breakage(s)
+   say_color warn # still have $test_broken known breakage(s)
fi
if test $test_broken != 0 || test $test_fixed != 0
then
@@ -528,9 +525,9 @@ test_done () {
then
if test $test_remaining -gt 0
then
-   say_color pass ${statusprefix}# passed all 
$msg
+   say_color pass # passed all $msg
fi
-   say ${statusprefix}1..$test_count$skip_all
+   say 1..$test_count$skip_all
fi
 
test -d $remove_trash 
@@ -544,8 +541,8 @@ test_done () {
*)
if test $test_external_has_tap -eq 0
then
-   say_color error ${statusprefix}# failed $test_failure 
among $msg
-   say ${statusprefix}1..$test_count
+   say_color error # failed $test_failure among $msg
+   say 1..$test_count
fi
 
exit 1 ;;
-- 
1.8.4.1.810.g312044e

--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v4 7/8] test-lib: allow prefixing a custom string before ok N etc.

2013-06-23 Thread Thomas Rast
This is not really meant for external use, and thus not documented. It
allows the next commit to neatly distinguish between sub-tests and the
main run.

The format is intentionally not valid TAP.  The use in the next commit
would not result in anything valid either way, and it seems better to
make it obvious.

Signed-off-by: Thomas Rast tr...@inf.ethz.ch
---
 t/test-lib.sh | 27 +++
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/t/test-lib.sh b/t/test-lib.sh
index a926828..682459b 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -209,6 +209,9 @@ do
--root=*)
root=$(expr z$1 : 'z[^=]*=\(.*\)')
shift ;;
+   --statusprefix=*)
+   statusprefix=$(expr z$1 : 'z[^=]*=\(.*\)')
+   shift ;;
*)
echo error: unknown test option '$1' 2; exit 1 ;;
esac
@@ -316,12 +319,12 @@ trap 'die' EXIT
 
 test_ok_ () {
test_success=$(($test_success + 1))
-   say_color  ok $test_count - $@
+   say_color  ${statusprefix}ok $test_count - $@
 }
 
 test_failure_ () {
test_failure=$(($test_failure + 1))
-   say_color error not ok $test_count - $1
+   say_color error ${statusprefix}not ok $test_count - $1
shift
echo $@ | sed -e 's/^/#   /'
test $immediate =  || { GIT_EXIT_OK=t; exit 1; }
@@ -329,12 +332,12 @@ test_failure_ () {
 
 test_known_broken_ok_ () {
test_fixed=$(($test_fixed+1))
-   say_color error ok $test_count - $@ # TODO known breakage vanished
+   say_color error ${statusprefix}ok $test_count - $@ # TODO known 
breakage vanished
 }
 
 test_known_broken_failure_ () {
test_broken=$(($test_broken+1))
-   say_color warn not ok $test_count - $@ # TODO known breakage
+   say_color warn ${statusprefix}not ok $test_count - $@ # TODO known 
breakage
 }
 
 test_debug () {
@@ -458,8 +461,8 @@ test_skip () {
of_prereq= of $test_prereq
fi
 
-   say_color skip 3 skipping test: $@
-   say_color skip ok $test_count # skip $1 (missing 
$missing_prereq${of_prereq})
+   say_color skip 3 ${statusprefix}skipping test: $@
+   say_color skip ${statusprefix}ok $test_count # skip $1 
(missing $missing_prereq${of_prereq})
: true
;;
*)
@@ -497,11 +500,11 @@ test_done () {
 
if test $test_fixed != 0
then
-   say_color error # $test_fixed known breakage(s) vanished; 
please update test(s)
+   say_color error ${statusprefix}# $test_fixed known breakage(s) 
vanished; please update test(s)
fi
if test $test_broken != 0
then
-   say_color warn # still have $test_broken known breakage(s)
+   say_color warn ${statusprefix}# still have $test_broken known 
breakage(s)
fi
if test $test_broken != 0 || test $test_fixed != 0
then
@@ -524,9 +527,9 @@ test_done () {
then
if test $test_remaining -gt 0
then
-   say_color pass # passed all $msg
+   say_color pass ${statusprefix}# passed all 
$msg
fi
-   say 1..$test_count$skip_all
+   say ${statusprefix}1..$test_count$skip_all
fi
 
test -d $remove_trash 
@@ -540,8 +543,8 @@ test_done () {
*)
if test $test_external_has_tap -eq 0
then
-   say_color error # failed $test_failure among $msg
-   say 1..$test_count
+   say_color error ${statusprefix}# failed $test_failure 
among $msg
+   say ${statusprefix}1..$test_count
fi
 
exit 1 ;;
-- 
1.8.3.1.727.gcbe3af3

--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v3 7/8] test-lib: allow prefixing a custom string before ok N etc.

2013-06-18 Thread Thomas Rast
This is not really meant for external use, and thus not documented. It
allows the next commit to neatly distinguish between sub-tests and the
main run.

The format is intentionally not valid TAP.  The use in the next commit
would not result in anything valid either way, and it seems better to
make it obvious.

Signed-off-by: Thomas Rast tr...@inf.ethz.ch
---
 t/test-lib.sh | 27 +++
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 817ab43..f2abff9 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -209,6 +209,9 @@ do
--root=*)
root=$(expr z$1 : 'z[^=]*=\(.*\)')
shift ;;
+   --statusprefix=*)
+   statusprefix=$(expr z$1 : 'z[^=]*=\(.*\)')
+   shift ;;
*)
echo error: unknown test option '$1' 2; exit 1 ;;
esac
@@ -316,12 +319,12 @@ trap 'die' EXIT
 
 test_ok_ () {
test_success=$(($test_success + 1))
-   say_color  ok $test_count - $@
+   say_color  ${statusprefix}ok $test_count - $@
 }
 
 test_failure_ () {
test_failure=$(($test_failure + 1))
-   say_color error not ok $test_count - $1
+   say_color error ${statusprefix}not ok $test_count - $1
shift
echo $@ | sed -e 's/^/#   /'
test $immediate =  || { GIT_EXIT_OK=t; exit 1; }
@@ -329,12 +332,12 @@ test_failure_ () {
 
 test_known_broken_ok_ () {
test_fixed=$(($test_fixed+1))
-   say_color error ok $test_count - $@ # TODO known breakage vanished
+   say_color error ${statusprefix}ok $test_count - $@ # TODO known 
breakage vanished
 }
 
 test_known_broken_failure_ () {
test_broken=$(($test_broken+1))
-   say_color warn not ok $test_count - $@ # TODO known breakage
+   say_color warn ${statusprefix}not ok $test_count - $@ # TODO known 
breakage
 }
 
 test_debug () {
@@ -458,8 +461,8 @@ test_skip () {
of_prereq= of $test_prereq
fi
 
-   say_color skip 3 skipping test: $@
-   say_color skip ok $test_count # skip $1 (missing 
$missing_prereq${of_prereq})
+   say_color skip 3 ${statusprefix}skipping test: $@
+   say_color skip ${statusprefix}ok $test_count # skip $1 
(missing $missing_prereq${of_prereq})
: true
;;
*)
@@ -495,11 +498,11 @@ test_done () {
 
if test $test_fixed != 0
then
-   say_color error # $test_fixed known breakage(s) vanished; 
please update test(s)
+   say_color error ${statusprefix}# $test_fixed known breakage(s) 
vanished; please update test(s)
fi
if test $test_broken != 0
then
-   say_color warn # still have $test_broken known breakage(s)
+   say_color warn ${statusprefix}# still have $test_broken known 
breakage(s)
fi
if test $test_broken != 0 || test $test_fixed != 0
then
@@ -522,9 +525,9 @@ test_done () {
then
if test $test_remaining -gt 0
then
-   say_color pass # passed all $msg
+   say_color pass ${statusprefix}# passed all 
$msg
fi
-   say 1..$test_count$skip_all
+   say ${statusprefix}1..$test_count$skip_all
fi
 
test -d $remove_trash 
@@ -538,8 +541,8 @@ test_done () {
*)
if test $test_external_has_tap -eq 0
then
-   say_color error # failed $test_failure among $msg
-   say 1..$test_count
+   say_color error ${statusprefix}# failed $test_failure 
among $msg
+   say ${statusprefix}1..$test_count
fi
 
exit 1 ;;
-- 
1.8.3.1.530.g6f90e57

--
To unsubscribe from this list: send the line unsubscribe git in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v2 5/6] test-lib: allow prefixing a custom string before ok N etc.

2013-06-17 Thread Thomas Rast
This is not really meant for external use, and thus not documented. It
allows the next commit to neatly distinguish between sub-tests and the
main run.

The format is intentionally not valid TAP.  The use in the next commit
would not result in anything valid either way, and it seems better to
make it obvious.

Signed-off-by: Thomas Rast tr...@inf.ethz.ch
---
 t/test-lib.sh | 27 +++
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 40bd7da..aaf6084 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -209,6 +209,9 @@ do
--root=*)
root=$(expr z$1 : 'z[^=]*=\(.*\)')
shift ;;
+   --statusprefix=*)
+   statusprefix=$(expr z$1 : 'z[^=]*=\(.*\)')
+   shift ;;
*)
echo error: unknown test option '$1' 2; exit 1 ;;
esac
@@ -316,12 +319,12 @@ trap 'die' EXIT
 
 test_ok_ () {
test_success=$(($test_success + 1))
-   say_color  ok $test_count - $@
+   say_color  ${statusprefix}ok $test_count - $@
 }
 
 test_failure_ () {
test_failure=$(($test_failure + 1))
-   say_color error not ok $test_count - $1
+   say_color error ${statusprefix}not ok $test_count - $1
shift
echo $@ | sed -e 's/^/#   /'
test $immediate =  || { GIT_EXIT_OK=t; exit 1; }
@@ -329,12 +332,12 @@ test_failure_ () {
 
 test_known_broken_ok_ () {
test_fixed=$(($test_fixed+1))
-   say_color error ok $test_count - $@ # TODO known breakage vanished
+   say_color error ${statusprefix}ok $test_count - $@ # TODO known 
breakage vanished
 }
 
 test_known_broken_failure_ () {
test_broken=$(($test_broken+1))
-   say_color warn not ok $test_count - $@ # TODO known breakage
+   say_color warn ${statusprefix}not ok $test_count - $@ # TODO known 
breakage
 }
 
 test_debug () {
@@ -460,8 +463,8 @@ test_skip () {
of_prereq= of $test_prereq
fi
 
-   say_color skip 3 skipping test: $@
-   say_color skip ok $test_count # skip $1 (missing 
$missing_prereq${of_prereq})
+   say_color skip 3 ${statusprefix}skipping test: $@
+   say_color skip ${statusprefix}ok $test_count # skip $1 
(missing $missing_prereq${of_prereq})
: true
;;
*)
@@ -497,11 +500,11 @@ test_done () {
 
if test $test_fixed != 0
then
-   say_color error # $test_fixed known breakage(s) vanished; 
please update test(s)
+   say_color error ${statusprefix}# $test_fixed known breakage(s) 
vanished; please update test(s)
fi
if test $test_broken != 0
then
-   say_color warn # still have $test_broken known breakage(s)
+   say_color warn ${statusprefix}# still have $test_broken known 
breakage(s)
fi
if test $test_broken != 0 || test $test_fixed != 0
then
@@ -524,9 +527,9 @@ test_done () {
then
if test $test_remaining -gt 0
then
-   say_color pass # passed all $msg
+   say_color pass ${statusprefix}# passed all 
$msg
fi
-   say 1..$test_count$skip_all
+   say ${statusprefix}1..$test_count$skip_all
fi
 
test -d $remove_trash 
@@ -540,8 +543,8 @@ test_done () {
*)
if test $test_external_has_tap -eq 0
then
-   say_color error # failed $test_failure among $msg
-   say 1..$test_count
+   say_color error ${statusprefix}# failed $test_failure 
among $msg
+   say ${statusprefix}1..$test_count
fi
 
exit 1 ;;
-- 
1.8.3.1.530.g6f90e57



Re: [PATCH 5/6] test-lib: allow prefixing a custom string before ok N etc.

2013-05-17 Thread Thomas Rast
Phil Hord phil.h...@gmail.com writes:

 On Thu, May 16, 2013 at 4:50 PM, Thomas Rast tr...@inf.ethz.ch wrote:
 This is not really meant for external use, but allows the next commit
 to neatly distinguish between sub-tests and the main run.

 Maybe we do not care about standards for this library or for your
 use-case, but placing this prefix before the {ok,not ok} breaks the
 TAProtocol.
 http://podwiki.hexten.net/TAP/TAP.html?page=TAP

 Maybe you can put the prefix _after_ the {ok, not ok} and test number.

Actually that was half on purpose.  You will notice I did not document
that option, as it is intended only to be used to distinguish between
the parallel runs implemented in [6/6].

Those parallel runs look something like

[4] ok 1 - plain
[4] ok 2 - plain nested in bare
[...snip until others catch up...]
[4] ok 33 - re-init to update git link
[4] ok 34 - re-init to move gitdir
[3] ok 1 - plain
[2] ok 1 - plain
[4] ok 35 - re-init to move gitdir symlink
[4] # still have 2 known breakage(s)
[4] # passed all remaining 33 test(s)
[4] 1..35
[3] ok 2 - plain nested in bare

It's invalid TAP no matter what: there are N plans and the ok/not ok
lines from N runs all intermingled.  So I'd rather not even pretend that
it is valid in any way.

-- 
Thomas Rast
trast@{inf,student}.ethz.ch


Re: [PATCH 5/6] test-lib: allow prefixing a custom string before ok N etc.

2013-05-17 Thread Phil Hord
On Fri, May 17, 2013 at 4:00 AM, Thomas Rast tr...@inf.ethz.ch wrote:
 Phil Hord phil.h...@gmail.com writes:

 On Thu, May 16, 2013 at 4:50 PM, Thomas Rast tr...@inf.ethz.ch wrote:
 This is not really meant for external use, but allows the next commit
 to neatly distinguish between sub-tests and the main run.

 Maybe we do not care about standards for this library or for your
 use-case, but placing this prefix before the {ok,not ok} breaks the
 TAProtocol.
 http://podwiki.hexten.net/TAP/TAP.html?page=TAP

 Maybe you can put the prefix _after_ the {ok, not ok} and test number.

 Actually that was half on purpose.  You will notice I did not document
 that option, as it is intended only to be used to distinguish between
 the parallel runs implemented in [6/6].

 Those parallel runs look something like

 [4] ok 1 - plain
 [4] ok 2 - plain nested in bare
 [...snip until others catch up...]
 [4] ok 33 - re-init to update git link
 [4] ok 34 - re-init to move gitdir
 [3] ok 1 - plain
 [2] ok 1 - plain
 [4] ok 35 - re-init to move gitdir symlink
 [4] # still have 2 known breakage(s)
 [4] # passed all remaining 33 test(s)
 [4] 1..35
 [3] ok 2 - plain nested in bare

 It's invalid TAP no matter what: there are N plans and the ok/not ok
 lines from N runs all intermingled.  So I'd rather not even pretend that
 it is valid in any way.

Yes, I guessed this might have been the goal.  Maybe you can mention
it in the commit message.

I hope some future change might even unwind these back into a valid
continuous TAP stream.  But at least for now, if someone needs such a
stream, she can unwind it herself.
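
For the record, a rough sketch of such an unwinding, assuming the "[N] "
prefix format shown above and a hypothetical file "combined" holding the
interleaved output:

    for n in 1 2 3 4
    do
            sed -n "s/^\[$n\] //p" combined >"tap-run-$n"
    done

Each resulting tap-run-$n file is then an ordinary per-run TAP stream again.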

Phil


[PATCH 5/6] test-lib: allow prefixing a custom string before ok N etc.

2013-05-16 Thread Thomas Rast
This is not really meant for external use, but allows the next commit
to neatly distinguish between sub-tests and the main run.

Signed-off-by: Thomas Rast tr...@inf.ethz.ch
---
 t/test-lib.sh | 27 +++
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/t/test-lib.sh b/t/test-lib.sh
index 9ae7c7b..55fa749 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -209,6 +209,9 @@ do
--root=*)
root=$(expr z$1 : 'z[^=]*=\(.*\)')
shift ;;
+   --statusprefix=*)
+   statusprefix=$(expr z$1 : 'z[^=]*=\(.*\)')
+   shift ;;
*)
echo error: unknown test option '$1' 2; exit 1 ;;
esac
@@ -316,12 +319,12 @@ trap 'die' EXIT
 
 test_ok_ () {
test_success=$(($test_success + 1))
-   say_color  ok $test_count - $@
+   say_color  ${statusprefix}ok $test_count - $@
 }
 
 test_failure_ () {
test_failure=$(($test_failure + 1))
-   say_color error not ok $test_count - $1
+   say_color error ${statusprefix}not ok $test_count - $1
shift
echo $@ | sed -e 's/^/#   /'
test $immediate =  || { GIT_EXIT_OK=t; exit 1; }
@@ -329,12 +332,12 @@ test_failure_ () {
 
 test_known_broken_ok_ () {
test_fixed=$(($test_fixed+1))
-   say_color error ok $test_count - $@ # TODO known breakage vanished
+   say_color error ${statusprefix}ok $test_count - $@ # TODO known 
breakage vanished
 }
 
 test_known_broken_failure_ () {
test_broken=$(($test_broken+1))
-   say_color warn not ok $test_count - $@ # TODO known breakage
+   say_color warn ${statusprefix}not ok $test_count - $@ # TODO known 
breakage
 }
 
 test_debug () {
@@ -435,8 +438,8 @@ test_skip () {
of_prereq= of $test_prereq
fi
 
-   say_color skip 3 skipping test: $@
-   say_color skip ok $test_count # skip $1 (missing 
$missing_prereq${of_prereq})
+   say_color skip 3 ${statusprefix}skipping test: $@
+   say_color skip ${statusprefix}ok $test_count # skip $1 
(missing $missing_prereq${of_prereq})
: true
;;
*)
@@ -472,11 +475,11 @@ test_done () {
 
if test $test_fixed != 0
then
-   say_color error # $test_fixed known breakage(s) vanished; 
please update test(s)
+   say_color error ${statusprefix}# $test_fixed known breakage(s) 
vanished; please update test(s)
fi
if test $test_broken != 0
then
-   say_color warn # still have $test_broken known breakage(s)
+   say_color warn ${statusprefix}# still have $test_broken known 
breakage(s)
fi
if test $test_broken != 0 || test $test_fixed != 0
then
@@ -499,9 +502,9 @@ test_done () {
then
if test $test_remaining -gt 0
then
-   say_color pass # passed all $msg
+   say_color pass ${statusprefix}# passed all 
$msg
fi
-   say 1..$test_count$skip_all
+   say ${statusprefix}1..$test_count$skip_all
fi
 
test -d $remove_trash 
@@ -515,8 +518,8 @@ test_done () {
*)
if test $test_external_has_tap -eq 0
then
-   say_color error # failed $test_failure among $msg
-   say 1..$test_count
+   say_color error ${statusprefix}# failed $test_failure 
among $msg
+   say ${statusprefix}1..$test_count
fi
 
exit 1 ;;
-- 
1.8.3.rc2.393.g8636c0b



[PATCH v7 1/7] tests: test number comes first in 'not ok $count - $message'

2012-12-20 Thread Junio C Hamano
From: Adam Spiers g...@adamspiers.org

The old output to say "not ok - 1 message" was working by accident
only because the test numbers are optional in TAP.

Signed-off-by: Adam Spiers g...@adamspiers.org
Signed-off-by: Junio C Hamano gits...@pobox.com
---
 t/t-basic.sh | 4 ++--
 t/test-lib.sh| 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/t/t-basic.sh b/t/t-basic.sh
index ae6a3f0..c6b42de 100755
--- a/t/t-basic.sh
+++ b/t/t-basic.sh
@@ -167,13 +167,13 @@ test_expect_success 'tests clean up even on failures' 
! test -s err 
! test -f \trash directory.failing-cleanup/clean-after-failure\ 
sed -e 's/Z$//' -e 's/^ //' expect -\\EOF 
-not ok - 1 tests clean up even after a failure
+not ok 1 - tests clean up even after a failure
 # Z
 # touch clean-after-failure 
 # test_when_finished rm clean-after-failure 
 # (exit 1)
 # Z
-not ok - 2 failure to clean up causes the test to fail
+not ok 2 - failure to clean up causes the test to fail
 # Z
 # test_when_finished \(exit 2)\
 # Z
diff --git a/t/test-lib.sh b/t/test-lib.sh
index f8e3733..03b86b8 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -268,7 +268,7 @@ test_ok_ () {
 
 test_failure_ () {
test_failure=$(($test_failure + 1))
-   say_color error not ok - $test_count $1
+   say_color error not ok $test_count - $1
shift
echo $@ | sed -e 's/^/#   /'
test $immediate =  || { GIT_EXIT_OK=t; exit 1; }
-- 
1.8.1.rc2.225.g8d36ab4



[PATCH v6 1/7] tests: test number comes first in 'not ok $count - $message'

2012-12-16 Thread Adam Spiers
The old output to say "not ok - 1 message" was working by accident
only because the test numbers are optional in TAP.

Signed-off-by: Adam Spiers g...@adamspiers.org
---
 t/t-basic.sh | 4 ++--
 t/test-lib.sh| 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/t/t-basic.sh b/t/t-basic.sh
index 562cf41..46ccda3 100755
--- a/t/t-basic.sh
+++ b/t/t-basic.sh
@@ -189,13 +189,13 @@ test_expect_success 'tests clean up even on failures' 
! test -s err 
! test -f \trash directory.failing-cleanup/clean-after-failure\ 
sed -e 's/Z$//' -e 's/^ //' expect -\\EOF 
-not ok - 1 tests clean up even after a failure
+not ok 1 - tests clean up even after a failure
 # Z
 # touch clean-after-failure 
 # test_when_finished rm clean-after-failure 
 # (exit 1)
 # Z
-not ok - 2 failure to clean up causes the test to fail
+not ok 2 - failure to clean up causes the test to fail
 # Z
 # test_when_finished \(exit 2)\
 # Z
diff --git a/t/test-lib.sh b/t/test-lib.sh
index f50f834..d0b236f 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -298,7 +298,7 @@ test_ok_ () {
 
 test_failure_ () {
test_failure=$(($test_failure + 1))
-   say_color error not ok - $test_count $1
+   say_color error not ok $test_count - $1
shift
echo $@ | sed -e 's/^/#   /'
test $immediate =  || { GIT_EXIT_OK=t; exit 1; }
-- 
1.7.12.1.396.g53b3ea9



[PATCH 1/2] config, gitignore: failure to access with ENOTDIR is ok

2012-10-13 Thread Jonathan Nieder
The access_or_warn() function is used to check for optional
configuration files like .gitconfig and .gitignore and warn when they
are not accessible due to a configuration issue (e.g., bad
permissions).  It is not supposed to complain when a file is simply
missing.

Noticed on a system where ~/.config/git was a file --- when the new
XDG_CONFIG_HOME support looks for ~/.config/git/config it should
ignore ~/.config/git instead of printing irritating warnings:

 $ git status -s
 warning: unable to access '/home/jrn/.config/git/config': Not a directory
 warning: unable to access '/home/jrn/.config/git/config': Not a directory
 warning: unable to access '/home/jrn/.config/git/config': Not a directory
 warning: unable to access '/home/jrn/.config/git/config': Not a directory

Compare v1.7.12.1~2^2 (attr:failure to open a .gitattributes file
is OK with ENOTDIR, 2012-09-13).
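
The situation is easy to reproduce by hand (illustration only, not part of
the patch):

    $ mkdir -p ~/.config && touch ~/.config/git   # "git" is a regular file here
    $ cat ~/.config/git/config                    # fails with ENOTDIR, not ENOENT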

Signed-off-by: Jonathan Nieder jrnie...@gmail.com
---
 git-compat-util.h | 5 -
 wrapper.c | 2 +-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/git-compat-util.h b/git-compat-util.h
index 2fbf1fd8..f567767f 100644
--- a/git-compat-util.h
+++ b/git-compat-util.h
@@ -635,7 +635,10 @@ int rmdir_or_warn(const char *path);
  */
 int remove_or_warn(unsigned int mode, const char *path);
 
-/* Call access(2), but warn for any error besides ENOENT. */
+/*
+ * Call access(2), but warn for any error except missing file
+ * (ENOENT or ENOTDIR).
+ */
 int access_or_warn(const char *path, int mode);
 
 /* Warn on an inaccessible file that ought to be accessible */
diff --git a/wrapper.c b/wrapper.c
index 68739aaa..c1b919f3 100644
--- a/wrapper.c
+++ b/wrapper.c
@@ -411,7 +411,7 @@ void warn_on_inaccessible(const char *path)
 int access_or_warn(const char *path, int mode)
 {
int ret = access(path, mode);
-   if (ret  errno != ENOENT)
+   if (ret  errno != ENOENT  errno != ENOTDIR)
warn_on_inaccessible(path);
return ret;
 }
-- 
1.8.0.rc2



Re: [PATCH v2 2/6] Make 'not ok $count - $message' consistent with 'ok $count - $message'

2012-09-19 Thread Jeff King
On Wed, Sep 19, 2012 at 06:15:11PM +0100, Adam Spiers wrote:

  test_failure_ () {
   test_failure=$(($test_failure + 1))
 - say_color error not ok - $test_count $1
 + say_color error not ok $test_count - $1

Interesting. I wondered what TAP had to say about this, and in fact we
were doing it wrong before. However, since the test numbers are optional
in TAP, a harness is supposed to keep its own counter when we fail to
provide a number.

So it happened to work, but this change in fact makes us more
TAP-compliant. Good.
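
As a purely illustrative sketch of the harness side, a counter like this is
enough to renumber lines that omit the test number ("output" is a
hypothetical file holding the raw ok/not ok lines):

    awk '/^ok/ || /^not ok/ { n = n + 1; print n, $0 }' output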

-Peff


[Blackfin] arch: Fix BUG - Enable ISP1362 driver to work ok with BF561

2008-02-08 Thread Linux Kernel Mailing List
Gitweb: 
http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=6cda2e90588ba2f70543abf68b4815e10c86aef1
Commit: 6cda2e90588ba2f70543abf68b4815e10c86aef1
Parent: a680ae9bdd8746ea4338e843db388fa67f1d1920
Author: Michael Hennerich [EMAIL PROTECTED]
AuthorDate: Sat Feb 2 15:10:51 2008 +0800
Committer:  Bryan Wu [EMAIL PROTECTED]
CommitDate: Sat Feb 2 15:10:51 2008 +0800

[Blackfin] arch: Fix BUG - Enable ISP1362 driver to work ok with BF561

This fixes a bug (zero pointer access) only seen on BF561, during USB
Mass Storage/SCSI Host initialization.

It appears to be related to registering a non-existent CPU.

Signed-off-by: Michael Hennerich [EMAIL PROTECTED]
Signed-off-by: Bryan Wu [EMAIL PROTECTED]
---
 arch/blackfin/kernel/setup.c |   18 ++
 1 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/blackfin/kernel/setup.c b/arch/blackfin/kernel/setup.c
index 2f156bf..aca5e6e 100644
--- a/arch/blackfin/kernel/setup.c
+++ b/arch/blackfin/kernel/setup.c
@@ -48,6 +48,8 @@
 #include asm/fixed_code.h
 #include asm/early_printk.h
 
+static DEFINE_PER_CPU(struct cpu, cpu_devices);
+
 u16 _bfin_swrst;
 
 unsigned long memory_start, memory_end, physical_mem_end;
@@ -763,15 +765,15 @@ void __init setup_arch(char **cmdline_p)
 
 static int __init topology_init(void)
 {
-#if defined (CONFIG_BF561)
-   static struct cpu cpu[2];
-   register_cpu(cpu[0], 0);
-   register_cpu(cpu[1], 1);
+   int cpu;
+
+   for_each_possible_cpu(cpu) {
+   struct cpu *c = per_cpu(cpu_devices, cpu);
+
+   register_cpu(c, cpu);
+   }
+
return 0;
-#else
-   static struct cpu cpu[1];
-   return register_cpu(cpu, 0);
-#endif
 }
 
 subsys_initcall(topology_init);


email-clients.txt: sylpheed is OK at IMAP

2008-02-07 Thread Linux Kernel Mailing List
Gitweb: 
http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=8f1466ff0a6e81653e9bb0d9247495bf4e9db7e2
Commit: 8f1466ff0a6e81653e9bb0d9247495bf4e9db7e2
Parent: 77cc23b8c7f2f5ea0270bf4be31438aa38316e16
Author: Randy Dunlap [EMAIL PROTECTED]
AuthorDate: Thu Feb 7 00:13:42 2008 -0800
Committer:  Linus Torvalds [EMAIL PROTECTED]
CommitDate: Thu Feb 7 08:42:17 2008 -0800

email-clients.txt: sylpheed is OK at IMAP

This comment is not helpful (no reason given) and is incorrect.
Just stick to facts that are useful regarding working on Linux.

(akpm: I've used sylpheed+imap for years)

Signed-off-by: Randy Dunlap [EMAIL PROTECTED]
Acked-by: Paul Jackson [EMAIL PROTECTED]
Signed-off-by: Andrew Morton [EMAIL PROTECTED]
Signed-off-by: Linus Torvalds [EMAIL PROTECTED]
---
 Documentation/email-clients.txt |1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/Documentation/email-clients.txt b/Documentation/email-clients.txt
index 113165b..2ebb94d 100644
--- a/Documentation/email-clients.txt
+++ b/Documentation/email-clients.txt
@@ -170,7 +170,6 @@ Sylpheed (GUI)
 
 - Works well for inlining text (or using attachments).
 - Allows use of an external editor.
-- Not good for IMAP.
 - Is slow on large folders.
 - Won't do TLS SMTP auth over a non-SSL connection.
 - Has a helpful ruler bar in the compose window.


x86 setup: OK - ok (no need to scream)

2008-01-30 Thread Linux Kernel Mailing List
Gitweb: 
http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=f7775016c66c2f45125f22968c351c88868ee7a3
Commit: f7775016c66c2f45125f22968c351c88868ee7a3
Parent: e479c8306f898fcdb9b36179071eae6338a17364
Author: H. Peter Anvin [EMAIL PROTECTED]
AuthorDate: Wed Jan 30 13:33:03 2008 +0100
Committer:  Ingo Molnar [EMAIL PROTECTED]
CommitDate: Wed Jan 30 13:33:03 2008 +0100

x86 setup: OK - ok (no need to scream)

Unnecessary capitals are shouting; no need for it here.
Thus, change "OK" to "ok" and add a space.

Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
Signed-off-by: Ingo Molnar [EMAIL PROTECTED]
Signed-off-by: Thomas Gleixner [EMAIL PROTECTED]
---
 arch/x86/boot/edd.c |4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/edd.c b/arch/x86/boot/edd.c
index b3504cb..d3e9780 100644
--- a/arch/x86/boot/edd.c
+++ b/arch/x86/boot/edd.c
@@ -154,7 +154,7 @@ void query_edd(void)
 */
 
if (!be_quiet)
-   printf(Probing EDD...);
+   printf(Probing EDD... );
 
for (devno = 0x80; devno  0x80+EDD_MBR_SIG_MAX; devno++) {
/*
@@ -174,7 +174,7 @@ void query_edd(void)
}
 
if (!be_quiet)
-   printf(OK\n);
+   printf(ok\n);
 }
 
 #endif


[MIPS] Only build r4k clocksource for systems that work ok with it.

2007-11-26 Thread Linux Kernel Mailing List
Gitweb: 
http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=940f6b48a130e0a33cb8bd397dd0e277166470ad
Commit: 940f6b48a130e0a33cb8bd397dd0e277166470ad
Parent: 5aa85c9fc49a6ce44dc10a42e2011bbde9dc445a
Author: Ralf Baechle [EMAIL PROTECTED]
AuthorDate: Sat Nov 24 22:33:28 2007 +
Committer:  Ralf Baechle [EMAIL PROTECTED]
CommitDate: Mon Nov 26 17:26:14 2007 +

[MIPS] Only build r4k clocksource for systems that work ok with it.

In particular, as-is it's not suited for multicore and multiprocessor
systems where there is no guarantee that the counters are synchronized
or running from the same clock at all.  This broke Sibyte and probably
others since the "[MIPS] Handle R4000/R4400 mfc0 from count register."
commit.

Signed-off-by: Ralf Baechle [EMAIL PROTECTED]
---
 arch/mips/Kconfig|   24 +++
 arch/mips/au1000/Kconfig |1 +
 arch/mips/kernel/Makefile|2 +
 arch/mips/kernel/csrc-r4k.c  |   29 ++
 arch/mips/kernel/smp-up.c|   67 ++
 arch/mips/kernel/time.c  |   25 ---
 arch/mips/pmc-sierra/Kconfig |2 +
 arch/mips/vr41xx/Kconfig |6 
 include/asm-mips/time.h  |   11 +++
 9 files changed, 142 insertions(+), 25 deletions(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 2f2ce0c..7750829 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -22,6 +22,7 @@ config MACH_ALCHEMY
 config BASLER_EXCITE
bool Basler eXcite smart camera
select CEVT_R4K
+   select CSRC_R4K
select DMA_COHERENT
select HW_HAS_PCI
select IRQ_CPU
@@ -49,6 +50,7 @@ config BASLER_EXCITE_PROTOTYPE
 config BCM47XX
bool BCM47XX based boards
select CEVT_R4K
+   select CSRC_R4K
select DMA_NONCOHERENT
select HW_HAS_PCI
select IRQ_CPU
@@ -66,6 +68,7 @@ config BCM47XX
 config MIPS_COBALT
bool Cobalt Server
select CEVT_R4K
+   select CSRC_R4K
select CEVT_GT641XX
select DMA_NONCOHERENT
select HW_HAS_PCI
@@ -85,6 +88,7 @@ config MACH_DECSTATION
bool DECstations
select BOOT_ELF32
select CEVT_R4K
+   select CSRC_R4K
select DMA_NONCOHERENT
select NO_IOPORT
select IRQ_CPU
@@ -117,6 +121,7 @@ config MACH_JAZZ
select ARC32
select ARCH_MAY_HAVE_PC_FDC
select CEVT_R4K
+   select CSRC_R4K
select GENERIC_ISA_DMA
select IRQ_CPU
select I8253
@@ -137,6 +142,7 @@ config MACH_JAZZ
 config LASAT
bool LASAT Networks platforms
select CEVT_R4K
+   select CSRC_R4K
select DMA_NONCOHERENT
select SYS_HAS_EARLY_PRINTK
select HW_HAS_PCI
@@ -154,6 +160,7 @@ config LEMOTE_FULONG
bool Lemote Fulong mini-PC
select ARCH_SPARSEMEM_ENABLE
select CEVT_R4K
+   select CSRC_R4K
select SYS_HAS_CPU_LOONGSON2
select DMA_NONCOHERENT
select BOOT_ELF32
@@ -179,6 +186,7 @@ config MIPS_ATLAS
bool MIPS Atlas board
select BOOT_ELF32
select CEVT_R4K
+   select CSRC_R4K
select DMA_NONCOHERENT
select SYS_HAS_EARLY_PRINTK
select IRQ_CPU
@@ -210,6 +218,7 @@ config MIPS_MALTA
select ARCH_MAY_HAVE_PC_FDC
select BOOT_ELF32
select CEVT_R4K
+   select CSRC_R4K
select DMA_NONCOHERENT
select GENERIC_ISA_DMA
select IRQ_CPU
@@ -241,6 +250,7 @@ config MIPS_MALTA
 config MIPS_SEAD
bool MIPS SEAD board
select CEVT_R4K
+   select CSRC_R4K
select IRQ_CPU
select DMA_NONCOHERENT
select SYS_HAS_EARLY_PRINTK
@@ -260,6 +270,7 @@ config MIPS_SEAD
 config MIPS_SIM
bool 'MIPS simulator (MIPSsim)'
select CEVT_R4K
+   select CSRC_R4K
select DMA_NONCOHERENT
select SYS_HAS_EARLY_PRINTK
select IRQ_CPU
@@ -278,6 +289,7 @@ config MIPS_SIM
 config MARKEINS
bool NEC EMMA2RH Mark-eins
select CEVT_R4K
+   select CSRC_R4K
select DMA_NONCOHERENT
select HW_HAS_PCI
select IRQ_CPU
@@ -293,6 +305,7 @@ config MARKEINS
 config MACH_VR41XX
bool NEC VR4100 series based machines
select CEVT_R4K
+   select CSRC_R4K
select SYS_HAS_CPU_VR41XX
select GENERIC_HARDIRQS_NO__DO_IRQ
 
@@ -330,6 +343,7 @@ config PMC_MSP
 config PMC_YOSEMITE
bool PMC-Sierra Yosemite eval board
select CEVT_R4K
+   select CSRC_R4K
select DMA_COHERENT
select HW_HAS_PCI
select IRQ_CPU
@@ -351,6 +365,7 @@ config PMC_YOSEMITE
 config QEMU
bool Qemu
select CEVT_R4K
+   select CSRC_R4K
select DMA_COHERENT
select GENERIC_ISA_DMA
select HAVE_STD_PC_SERIAL_PORT
@@ -382,6 +397,7 @@ config SGI_IP22
select ARC32
select BOOT_ELF32
select CEVT_R4K

[POWERPC] pseries: device node status can be ok or okay

2007-10-11 Thread Linux Kernel Mailing List
Gitweb: 
http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=a7fb7ea76e20740c641a9b5401ef45b3b022cb69
Commit: a7fb7ea76e20740c641a9b5401ef45b3b022cb69
Parent: 576e393e74e58bd4c949d551a3340accc8dbab0f
Author: Linas Vepstas [EMAIL PROTECTED]
AuthorDate: Fri Aug 10 09:27:00 2007 +1000
Committer:  Paul Mackerras [EMAIL PROTECTED]
CommitDate: Tue Oct 2 22:09:56 2007 +1000

[POWERPC] pseries: device node status can be ok or okay

It seems that some versions of firmware will report a device
node status as the string okay. As we are not expecting this
string, the device node will be ignored by the EEH subsystem.
Which means EEH will not be enabled.

When EEH is not enabled, PCI errors will be converted into
Machine Check exceptions, and we'll have a very unhappy system.

Signed-off-by: Linas Vepstas [EMAIL PROTECTED]
Signed-off-by: Paul Mackerras [EMAIL PROTECTED]
---
 arch/powerpc/platforms/pseries/eeh.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/eeh.c 
b/arch/powerpc/platforms/pseries/eeh.c
index b242c6c..22322b3 100644
--- a/arch/powerpc/platforms/pseries/eeh.c
+++ b/arch/powerpc/platforms/pseries/eeh.c
@@ -955,7 +955,7 @@ static void *early_enable_eeh(struct device_node *dn, void 
*data)
pdn-eeh_freeze_count = 0;
pdn-eeh_false_positives = 0;
 
-   if (status  strcmp(status, ok) != 0)
+   if (status  strncmp(status, ok, 2) != 0)
return NULL;/* ignore devices with bad status */
 
/* Ignore bad nodes. */


make powerpc BUG_ON() OK with pointers and bitwise

2007-07-26 Thread Linux Kernel Mailing List
Gitweb: 
http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=97f1e7f7d2cd950f90d64ac6920822e709095519
Commit: 97f1e7f7d2cd950f90d64ac6920822e709095519
Parent: fdd33961e983dd5b1983c54ef39d243c88a4bffc
Author: Al Viro [EMAIL PROTECTED]
AuthorDate: Thu Jul 26 17:35:49 2007 +0100
Committer:  Linus Torvalds [EMAIL PROTECTED]
CommitDate: Thu Jul 26 11:11:57 2007 -0700

make powerpc BUG_ON() OK with pointers and bitwise

Since powerpc insists on printing the _value_ of condition
and on casting it to long...  At least let's make it a force-cast.

Signed-off-by: Al Viro [EMAIL PROTECTED]
Signed-off-by: Linus Torvalds [EMAIL PROTECTED]
---
 include/asm-powerpc/bug.h |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/include/asm-powerpc/bug.h b/include/asm-powerpc/bug.h
index f6fa394..a248b8b 100644
--- a/include/asm-powerpc/bug.h
+++ b/include/asm-powerpc/bug.h
@@ -79,7 +79,7 @@
_EMIT_BUG_ENTRY \
: : i (__FILE__), i (__LINE__), i (0),\
  i (sizeof(struct bug_entry)),   \
- r ((long)(x))); \
+ r ((__force long)(x))); \
}   \
 } while (0)
 


docs: static initialization of spinlocks is OK

2007-07-16 Thread Linux Kernel Mailing List
Gitweb: 
http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=017f021c7e5fe3f82ccc5cbb7b1750e66e00f527
Commit: 017f021c7e5fe3f82ccc5cbb7b1750e66e00f527
Parent: 7e7d136e9e083f04b859411248c699cbb89e418d
Author: Ed L. Cashin [EMAIL PROTECTED]
AuthorDate: Sun Jul 15 23:41:50 2007 -0700
Committer:  Linus Torvalds [EMAIL PROTECTED]
CommitDate: Mon Jul 16 09:05:52 2007 -0700

docs: static initialization of spinlocks is OK

Static initialization of spinlocks is preferable to dynamic initialization
when it is practical.  This patch updates documentation for consistency
with comments in spinlock_types.h.

Signed-off-by: Ed L. Cashin [EMAIL PROTECTED]
Signed-off-by: Andrew Morton [EMAIL PROTECTED]
Signed-off-by: Linus Torvalds [EMAIL PROTECTED]
---
 Documentation/spinlocks.txt |   20 +++-
 1 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/Documentation/spinlocks.txt b/Documentation/spinlocks.txt
index a661d68..471e753 100644
--- a/Documentation/spinlocks.txt
+++ b/Documentation/spinlocks.txt
@@ -1,7 +1,12 @@
-UPDATE March 21 2005 Amit Gud [EMAIL PROTECTED]
+SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED defeat lockdep state tracking and
+are hence deprecated.
 
-Macros SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED are deprecated and will be
-removed soon. So for any new code dynamic initialization should be used:
+Please use DEFINE_SPINLOCK()/DEFINE_RWLOCK() or
+__SPIN_LOCK_UNLOCKED()/__RW_LOCK_UNLOCKED() as appropriate for static
+initialization.
+
+Dynamic initialization, when necessary, may be performed as
+demonstrated below.
 
spinlock_t xxx_lock;
rwlock_t xxx_rw_lock;
@@ -15,12 +20,9 @@ removed soon. So for any new code dynamic initialization 
should be used:
 
module_init(xxx_init);
 
-Reasons for deprecation
-  - it hurts automatic lock validators
-  - it becomes intrusive for the realtime preemption patches
-
-Following discussion is still valid, however, with the dynamic initialization
-of spinlocks instead of static.
+The following discussion is still valid, however, with the dynamic
+initialization of spinlocks or with DEFINE_SPINLOCK, etc., used
+instead of SPIN_LOCK_UNLOCKED.
 
 ---
 


[PATCH 3/4] add --missing-ok option to write-tree

2005-07-10 Thread Bryan Larsen
Add --missing-ok option to git-write-tree.  This option allows a write-tree
even if the referenced objects are not in the database.
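
A rough usage sketch (not part of the patch; it assumes the --info-only
cache support added elsewhere in this series, and current git spells these
commands "git update-index" / "git write-tree"):

    git-update-cache --add --info-only huge-file.bin   # index entry, no blob stored
    git-write-tree --missing-ok                        # still writes a tree object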

Signed-off-by: Bryan Larsen [EMAIL PROTECTED]
---

diff --git a/Documentation/git-write-tree.txt b/Documentation/git-write-tree.txt
--- a/Documentation/git-write-tree.txt
+++ b/Documentation/git-write-tree.txt
@@ -10,6 +10,7 @@ git-write-tree - Creates a tree from the
 SYNOPSIS
 
 'git-write-tree'
+   [--missing-ok]
 
 DESCRIPTION
 ---
@@ -23,7 +24,11 @@ In order to have that match what is actu
 now, you need to have done a git-update-cache phase before you did the
 git-write-tree.
 
-
+OPTIONS
+---
+--missing-ok::
+   Normally git-write-tree ensures that the objects referenced by the
+   directory exist in the object database.  This option disables this 
check.
 
 
 
diff --git a/write-tree.c b/write-tree.c
--- a/write-tree.c
+++ b/write-tree.c
@@ -5,6 +5,8 @@
  */
 #include cache.h
 
+static int missing_ok = 0;
+
 static int check_valid_sha1(unsigned char *sha1)
 {
int ret;
@@ -61,7 +63,7 @@ static int write_tree(struct cache_entry
sha1 = subdir_sha1;
}
 
-   if (check_valid_sha1(sha1)  0)
+   if (!missing_ok  check_valid_sha1(sha1)  0)
exit(1);
 
entrylen = pathlen - baselen;
@@ -86,6 +88,16 @@ int main(int argc, char **argv)
int i, funny;
int entries = read_cache();
unsigned char sha1[20];
+   
+   if (argc==2) {
+   if (!strcmp(argv[1], --missing-ok))
+   missing_ok = 1;
+   else
+   die(unknown option %s, argv[1]);
+   }
+   
+   if (argc2)
+   die(too many options);
 
if (entries  0)
die(git-write-tree: error reading cache);


[PATCH 4/4] switch cg-commit -N to use --missing-ok instead of --no-check

2005-07-10 Thread Bryan Larsen
Make cg-commit aware of the rename of git-write-tree --no-check to --missing-ok.

Signed-off-by: Bryan Larsen [EMAIL PROTECTED]
---

diff --git a/cg-commit b/cg-commit
--- a/cg-commit
+++ b/cg-commit
@@ -111,13 +111,13 @@ forceeditor=
 ignorecache=
 infoonly=
 commitalways=
-nocheck=
+missingok=
 msgs=()
 while optparse; do
if optparse -C; then
ignorecache=1
elif optparse -N; then
-   nocheck=--no-check
+   missingok=--missing-ok
infoonly=--info-only
elif optparse -e; then
forceeditor=1
@@ -311,7 +311,7 @@ if [ -s $_git/HEAD ]; then
oldheadstr=-p $oldhead
 fi
 
-treeid=$(git-write-tree ${nocheck})
+treeid=$(git-write-tree ${missingok})
 [ $treeid ] || die git-write-tree failed
 if [ ! $force ]  [ ! $merging ]  [ $oldhead ] 
[ $treeid = $(tree-id) ]; then


Errors received during git pull from linux-2.6.git, but resulting kernel looks OK.

2005-04-21 Thread Steven Cole
Executive summary: I received some alarming errors while doing
a git pull of the latest kernel from kernel.org, but it appears
that all is well.  Continue reading for the gory details.
I updated my git-pasky tools this morning, by doing git pasky pull, make, make 
install.
Working from a repo converted yesterday using Linus' instructions and
subsequently successfully updated once:
[EMAIL PROTECTED] linux-2.6-origin]$ git lsremote
origin  rsync://www.kernel.org/pub/linux/kernel/people/torvalds/linux-2.6.git
[EMAIL PROTECTED] linux-2.6-origin]$ time git pull origin
MOTD:
MOTD:   Welcome to the Linux Kernel Archive.
MOTD:
MOTD:   Due to U.S. Exports Regulations, all cryptographic software on this
MOTD:   site is subject to the following legal notice:
MOTD:
MOTD:   This site includes publicly available encryption source code
MOTD:   which, together with object code resulting from the compiling of
MOTD:   publicly available source code, may be exported from the United
MOTD:   States under License Exception TSU pursuant to 15 C.F.R. Section
MOTD:   740.13(e).
MOTD:
MOTD:   This legal notice applies to cryptographic software only.
MOTD:   Please see the Bureau of Industry and Security,
MOTD:   http://www.bis.doc.gov/ for more information about current
MOTD:   U.S. regulations.
MOTD:
receiving file list ... done
18/13b464853cba4439b3c30412059ed6284114a0
8d/a3a306d0c0c070d87048d14a033df02f40a154
a2/755a80f40e5794ddc20e00f781af9d6320fafb
sent 181 bytes  received 952105 bytes  272081.71 bytes/sec
total size is 63450766  speedup is 66.63
receiving file list ... done
client: nothing to do: perhaps you need to specify some filenames or the 
--recursive option?
Tree change: 
4d78b6c78ae6d87e4c1c8072f42efa716f04afb9:a2755a80f40e5794ddc20e00f781af9d6320fafb
*100644-100644 blob
8e5f9bbdf4de94a1bc4b4da8cb06677ce0a57716-8da3a306d0c0c070d87048d14a033df02f40a154 
Makefile
Tracked branch, applying changes...
Fast-forwarding 4d78b6c78ae6d87e4c1c8072f42efa716f04afb9 - 
a2755a80f40e5794ddc20e00f781af9d6320fafb
on top of 4d78b6c78ae6d87e4c1c8072f42efa716f04afb9...
error: bad index version
error: verify header failed
read_cache: Invalid argument
gitdiff.sh: no files matched
error: bad index version
error: verify header failed
real6m4.771s
user0m16.538s
sys 0m12.952s
[EMAIL PROTECTED] linux-2.6-origin]$
Maybe those errors are harmless.  Checking out the new repo:
[EMAIL PROTECTED] linux-2.6-origin]$ git export ../linux-2.6.12-rc3
[EMAIL PROTECTED] linux-2.6-origin]$ cd ..
[EMAIL PROTECTED] GIT]$ diff -urN linux-2.6.11 linux-2.6.12-rc3 
gitdiff-2.6.12-rc3
So, now I have patch-2.6.12-rc3 from kernel.org and gitdiff-2.6.12-rc3 made 
above.
[EMAIL PROTECTED] GIT]$ diffstat gitdiff-2.6.12-rc3 | tail -n 2
 sound/usb/usx2y/usbusx2yaudio.c  |1
 4622 files changed, 271839 insertions(+), 156792 deletions(-)
[EMAIL PROTECTED] GIT]$ diffstat patch-2.6.12-rc3 | tail -n 2
 sound/usb/usx2y/usbusx2yaudio.c  |1
 4622 files changed, 271839 insertions(+), 156792 deletions(-)
Despite the errors from the git pull, the files look OK.
Steven


Re: Errors received during git pull from linux-2.6.git, but resulting kernel looks OK.

2005-04-21 Thread Petr Baudis
Dear diary, on Thu, Apr 21, 2005 at 03:55:26PM CEST, I got a letter
where Steven Cole [EMAIL PROTECTED] told me that...
 Executive summary: I received some alarming errors while doing
 a git pull of the latest kernel from kernel.org, but it appears
 that all is well.  Continue reading for the gory details.

It seems that the directory cache format changed since you pulled the
last time.

-- 
Petr Pasky Baudis
Stuff: http://pasky.or.cz/
C++: an octopus made by nailing extra legs onto a dog. -- Steve Taylor