RE: read() MAX_IO_SIZE bytes, more than SSIZE_MAX?

2015-02-07 Thread Randall S. Becker
On 2015-02-07 12:30PM Torsten Bögershausen wrote:
On 2015-02-07 17.45, Joachim Schmitz wrote:
 Hi there
 
 While investigating the problem with hung git-upload-pack we think we 
 have found a bug in wrapper.c:
 
 #define MAX_IO_SIZE (8*1024*1024)
 
 This is then used in xread() to split read()s into suitable chunks.
 So far so good, but read() is only guaranteed to read as much as 
 SSIZE_MAX bytes at a time. And on our platform that is way lower than 
 those 8MB (only 52kB, POSIX allows it to be as small as 32k), and as a 
 (rather strange) consequence mmap() (from compat/mmap.c) fails with 
 EACCES (why EACCES?), because xpread() returns something < 0.
 
 How large is SSIZE_MAX on other platforms? What happens there if you 
 try to
 read() more? Shouldn't we rather use SSIZE_MAX on all platforms? If I'm 
 reading the header files right, on Linux it is LONG_MAX (2TB?), so I 
 guess we should really go for MIN(8*1024*1024,SSIZE_MAX)?
How about changing wrapper.c like this: 
#ifndef MAX_IO_SIZE
 #define MAX_IO_SIZE (8*1024*1024)
#endif
-
and to change config.mak.uname like this:
ifeq ($(uname_S),NONSTOP_KERNEL)
   BASIC_CFLAGS += -DMAX_IO_SIZE=(32*1024)

Does this work for you?

Yes, thank you Torsten. I have made this change in our branch (on behalf of
Jojo). I think we can accept it. The (32*1024) does need to be properly
quoted, however.

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html


RE: read() MAX_IO_SIZE bytes, more than SSIZE_MAX?

2015-02-07 Thread Randall S. Becker
On 2015-02-07 13:07PM Randall S. Becker wrote:
On 2015-02-07 12:30PM Torsten Bögershausen wrote:
On 2015-02-07 17.45, Joachim Schmitz wrote:
 Hi there
 
 While investigating the problem with hung git-upload-pack we think we 
 have found a bug in wrapper.c:
 
 #define MAX_IO_SIZE (8*1024*1024)
 
 This is then used in xread() to split read()s into suitable chunks.
 So far so good, but read() is only guaranteed to read as much as 
 SSIZE_MAX bytes at a time. And on our platform that is way lower than 
 those 8MB (only 52kB, POSIX allows it to be as small as 32k), and as a 
 (rather strange) consequence mmap() (from compat/mmap.c) fails with 
 EACCES (why EACCES?), because xpread() returns something < 0.
 
 How large is SSIZE_MAX on other platforms? What happens there if you 
 try to
 read() more? Shouldn't we rather use SSIZE_MAX on all platforms? If I'm 
 reading the header files right, on Linux it is LONG_MAX (2TB?), so I 
 guess we should really go for MIN(8*1024*1024,SSIZE_MAX)?
How about changing wrapper.c like this: 
#ifndef MAX_IO_SIZE
 #define MAX_IO_SIZE (8*1024*1024)
#endif
-

Although I do agree with Jojo that MAX_IO_SIZE seems to be a platform
constant and should be defined in terms of SSIZE_MAX. So something like:

#ifndef MAX_IO_SIZE
# ifdef SSIZE_MAX
#  define MAX_IO_SIZE (SSIZE_MAX)
# else
#  define MAX_IO_SIZE (8*1024*1024)
# endif
#endif

would be desirable.

Cheers, Randall



RE: hang in git-upload-pack

2015-02-07 Thread Randall S. Becker
 -Original Message-
Sent: February 7, 2015 11:26 AM
In HP-Nonstop we're experiencing hangs in git-upload-pack, which seems to
be the result
of reads from / writes to pipes don't ever finish or don't get interrupted
properly (SIGPIPE, SIGCHLD?)
Any idea why that might be and how to fix it?

More context on this issue:
This is a new port of Git 2.3 to HP NonStop OSS (POSIX-ish). With very
minimal changes in git-compat-util.h to include floss and wrapper.c (below),
we are able to clone remote repositories and work on local repositories
without issue. However, when attempting to fetch from a local bare
repository (set up as a remote but on the same server) into a working
repository, or when a remote client attempts to clone from any repository on
the server over any protocol, we end up with git-upload-pack hanging as the
common point of failure. Note that this function has not worked in prior
versions of git, so we have no working reference to compare. The team is
suspecting differences in how the OS deals with pipes but our primary need
from the community is some guidance on continuing our investigation in
resolving this.

Most git tests succeed except for: t0025 (test 2), t0301 (test 12 -
expected), t5507 (test 4 - suspicious of this), t9001 (expected).

A sample trace showing the issue is below. There are no external clients
involved in this sample. This is git to git locally. The condition appears
to be representative of all of the hangs.

GIT_TRACE=1 /usr/local/bin/git --exec-path=/usr/local/bin fetch
old_floss_tail
09:52:01.198401 trace: built-in: git 'fetch' 'old_floss_tail'
09:52:01.226684 trace: run_command: 'git-upload-pack
'\''/home/git/floss.git/.git'\'''
09:52:01.229229 trace: exec: '/usr/local/bin/bash' '-c' 'git-upload-pack
'\''/home/git/floss.git/.git'\''' 'git-upload-pack
'\''/home/git/floss.git/.git'\'''
09:52:01.303638 trace: run_command: 'rev-list' '--objects' '--stdin' '--not'
'--all' '--quiet'
warning: no common commits
09:52:01.438320 trace: run_command: 'pack-objects' '--revs' '--thin'
'--stdout' '--progress' '--delta-base-offset' '--include-tag'
remote: 09:52:01.445274 trace: exec: 'git' 'pack-objects' '--revs' '--thin'
'--stdout' '--progress' '--delta-base-offset' '--include-tag'
remote: 09:52:01.463846 trace: built-in: git 'pack-objects' '--revs'
'--thin' '--stdout' '--progress' '--delta-base-offset' '--include-tag'
remote: Counting objects: 485, done.
remote: Compressing objects: 100% (472/472), done.
 Hangs forever at this point.  

The git-upload-pack is stopped at (not that the addresses might mean much):
  xread + 0x130 (UCr)
  create_pack_file + 0x18F0 (UCr)
  upload_pack + 0x450 (UCr)
  .

There are two git processes at:
  xread + 0x130 (UCr)
  read_in_full + 0x130 (UCr)
  get_packet_data + 0x4A0 (UCr)
  .

And one git is at:
  xwrite + 0x130 (UCr)
  flush + 0x530 (UCr)
  sha1write + 0x600 (UCr)
  write_no_reuse_object + 0x1390 (UCr)
  .

Wrapper.c change:
@@ -173,7 +173,12 @@ void *xcalloc(size_t nmemb, size_t size)
  * the absence of bugs, large chunks can result in bad latencies when
  * you decide to kill the process.
  */
-#define MAX_IO_SIZE (8*1024*1024)
+#ifdef __TANDEM
+# include <limits.h> /* SSIZE_MAX == 52k */
+# define MAX_IO_SIZE SSIZE_MAX
+#else
+# define MAX_IO_SIZE (8*1024*1024)
+#endif

Best Regards,
Randall
-- Brief whoami: NonStopUNIX developer since approximately
UNIX(421664400)/NonStop(2112884442)



RE: read() MAX_IO_SIZE bytes, more than SSIZE_MAX?

2015-02-07 Thread Randall S. Becker
On Feb 7 2015 at 9:14 PM Junio C Hamano wrote:
On Sat, Feb 7, 2015 at 2:31 PM, Joachim Schmitz j...@schmitz-digital.de 
wrote:
 Junio C Hamano gitster at pobox.com writes:

 Yup, I agree that is a sensible way to go.

  (1) if Makefile overrides the size, use it; otherwise
  (2) if SSIZE_MAX is defined, and it is smaller than our internal 
 default, use it; otherwise
  (3) use our internal default.

 And leave our internal default to 8MB.

 That way, nobody needs to do anything differently from his current 
 build
 set-up,
 and I suspect that it would make step (1) optional.

 something like this:

 /* allow overwriting from e.g. Makefile */
 #if !defined(MAX_IO_SIZE)
 # define MAX_IO_SIZE (8*1024*1024)
 #endif
 /* for platforms that have SSIZE_MAX and have it smaller */
 #if defined(SSIZE_MAX) && (SSIZE_MAX < MAX_IO_SIZE)
 # undef MAX_IO_SIZE /* avoid warning */
 # define MAX_IO_SIZE SSIZE_MAX
 #endif
No, not like that. If you do (1), that is only so that the Makefile can 
override a broken definition a platform may give to SSIZE_MAX.  So
 (1) if Makefile gives one, use it without second-guessing with SSIZE_MAX.
 (2) if SSIZE_MAX is defined, and if it is smaller than our internal default, 
 use it.
 (3) all other cases, use our internal default.

That is reasonable. I am more concerned about our git-upload-pack (separate 
thread) anyway :)

Cheers, Randall



t5570 trap use in start/stop_git_daemon

2015-02-12 Thread Randall S. Becker
On the NonStop port, we found that “trap” was causing an issue with test
success for t5570. When start_git_daemon completes, the shell (ksh,bash) on
this platform is sending a signal 0 that is being caught and acted on by the
trap command within the start_git_daemon and stop_git_daemon functions. I am
taking this up with the operating system group, but in any case, it may be
appropriate to include a trap reset at the end of both functions, as below.
I verified this change on SUSE Linux.

diff --git a/t/lib-git-daemon.sh b/t/lib-git-daemon.sh
index bc4b341..543e98a 100644
--- a/t/lib-git-daemon.sh
+++ b/t/lib-git-daemon.sh
@@ -62,6 +62,7 @@ start_git_daemon() {
    test_skip_or_die $GIT_TEST_GIT_DAEMON \
    "git daemon failed to start"
   fi
+   trap '' EXIT
}

stop_git_daemon() {
@@ -84,4 +85,6 @@ stop_git_daemon() {
    fi
    GIT_DAEMON_PID=
    rm -f git_daemon_output
+
+   trap '' EXIT
}

Cheers,
Randall
-- Brief whoami: NonStopUNIX developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In real life, I talk too much.




RE: Git Feature Request - show current branch

2015-02-19 Thread Randall S. Becker
Hi Martin,

I use:

git symbolic-ref --short HEAD

in scripts. Not sure it's the best way, but it works 100% for me.

Regards,
Randall

-Original Message-
From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On Behalf Of 
mdc...@seznam.cz
Sent: February 19, 2015 8:15 AM
To: git@vger.kernel.org
Subject: Git Feature Request - show current branch

Hello,

To start with, I did not find an official way to submit a feature request, so 
hopefully this is the right way to do so - if not then my apologies; I'd 
appreciate it if somebody could re-submit to the proper place.

I'd like to request adding a parameter to 'git branch' that would only show the 
current branch (w/o the star) - i.e. the outcome should only be the name of the 
branch that is normally marked with the star when I do 'git branch' command. 
This may be very helpful in some external scripts that just simply need to know 
the name of the current branch. I know there are multiple ways to do this today 
(some described here: 
http://stackoverflow.com/questions/6245570/how-to-get-current-branch-name-in-git)
 but I really think that adding a simple argument to 'git branch' would be very 
useful instead of forcing people to use 'workarounds'.

My suggestion is to name the parameter '--current' or '--show-current'.
Example:

Command: git branch
Outcome:
 branchA
 branchB
* master

Command: git branch --current
Outcome:
master

Thank you,
Martin


RE: t5570 trap use in start/stop_git_daemon

2015-02-13 Thread Randall S. Becker
On 2015/02/13 3:58AM Joachim Schmitz wrote:
Jeff King peff at peff.net writes:
  On Fri, Feb 13, 2015 at 02:44:03AM -0500, Jeff King wrote:
  On Thu, Feb 12, 2015 at 03:31:12PM -0500, Randall S. Becker wrote:
  
snip 
 Hmm, today I learned something new about ksh. Apparently when you use
 the function keyword to define a function like:
 
   function foo {
 trap 'echo trapped' EXIT
   }
   echo before
   foo
   echo after
 
 then the trap runs when the function exits! If you declare the same
 function as:
 
   foo() {
 trap 'echo trapped' EXIT
   }
 
 it behaves differently. POSIX shell does not have the function keyword,
 of course, and we are not using it here. Bash _does_ have the function
 keyword, but seems to behave POSIX-y even when it is present. I.e.,
 running the first script:
 
   $ ksh foo.sh
   before
   trapped
   after
 
   $ bash foo.sh
   before
   after
   trapped
 
snip
Both versions produce your first output on our platform
$ ksh foo1.sh
before
trapped
after
$ bash foo1.sh
before
after
trapped
$ ksh foo2.sh
before
trapped
after
$ bash foo2.sh
before
after
trapped
$
This might have been one (or even _the_) reason why we picked bash as our 
SHELL_PATH in config.mak.uname (I don't remember, it's more than 2 years 
ago), not sure which shell Randall's test used?

I tested both for trying to get t5570 to work. No matter which, without
resetting the trap, function return would kill the git-daemon and the test
would fail.




RE: Promoting Git developers

2015-03-15 Thread Randall S. Becker
 On March 15, 2015 6:19 PM Christian Couder wrote:
snip
 Just one suggestion on the name and half a comment.
 
 How would Git Review (or Git Monthly Review, or replace your favourite
 how-often-per-period-ly in its name) sound?  I meant it to sound similar
to
 academic journals that summarize and review contemporary works in the
field
 and keeps your original pun about our culture around patch reviews.

If I may humbly offer the suggestion that Git Blame would be a far more
appropriate pun as a name :)



RE: Git with Lader logic

2015-03-18 Thread Randall S. Becker
On March 18, 2015 6:29 PM Doug Kelly wrote:
 On Wed, Mar 18, 2015 at 2:53 PM, Randall S. Becker
 rsbec...@nexbridge.com wrote:
  On March 17, 2015 7:34 PM, Bharat Suvarna wrote:
  I am trying to find a way of using version control on PLC programmers
  like
  Allen
  Bradley PLC. I can't find a way of this.
 
  Could you please give me an idea if it will work with Plc programs.
  Which
  are
  basically Ladder logic.
 
  Many PLC programs either store their project code in XML, L5K or L5X
  (for example), TXT, CSV, or some other text format or can import and
  export to text forms. If you have a directory structure that
  represents your project, and the file formats have reasonable line
  separators so that diffs can be done easily, git very likely would
  work out for you. You do not have to have the local .git repository in
  the same directory as your working area if your tool has issues with
  that or .gitignore. You may want to use a GUI client to manage your
  local repository and handle the commit/push/pull/merge/rebase
  functions as I expect whatever PLC system you are using does not have git
 built-in.
 
  To store binary PLC data natively, which some tools use, I expect that
  those who are better at git-conjuring than I, could provide guidance
  on how to automate binary diffs for your tool's particular file format.
 
 The one thing I find interesting about RSLogix in general (caveat: I only have
 very limited experience with RSLogix 500 / 5000; if I do anything nowadays, 
 it's
 in the micro series using RSLogix Micro Starter Lite)... they do have some
 limited notion of version control inside the application itself, though it 
 seems
 rudimentary to me.
 This could prove to be helpful or extremely annoying, since even when I
 connect to a PLC and go online, just to reset the RTC, it still prompts me to 
 save
 again (even though nothing changed, other than the processor state).
 
 You may also find this link on stackexchange helpful:
 http://programmers.stackexchange.com/questions/102487/are-there-
 realistic-useful-solutions-for-source-control-for-ladder-logic-program
 
 As Randall noted, L5K is just text, and RSLogix 5000 uses it, according to 
 this
 post.  It may work okay.

A really good reason to use git instead of some other systems is that new 
versions of files are determined by SHA signatures (real differences) rather 
than straight timestamps. So saving a file that has not changed will not force 
a new version - unlike some systems that shall remain nameless.



RE: Allowing weak references to blobs and strong references to commits

2015-03-31 Thread Randall S. Becker
On March 31, 2015 3:55 PM Philip Oakley wrote:
 From: Mike Hommey m...@glandium.org
 [...]
  So I thought, since commits are already allowed in tree objects, for
  submodules, why not add a bit to the mode that would tell git that
  those commit object references are meant to always be there aka strong
  reference, as opposed to the current weak references for submodules.
  I was thinking something like 020, which is above S_IFMT, but I
  haven't checked if mode is expected to be a short anywhere, maybe one
  of the file permission flags could be abused instead (sticky bit?).
 
  I could see this used in the future to e.g. implement a fetchable
  reflog (which could be a ref to a tree with strong references to
  commits).
 
  Then that got me thinking that the opposite would be useful to me as
  well: I'm currently storing mercurial manifests as git trees with
  (weak) commit references using the mercurial sha1s for files.
  Unfortunately, that doesn't allow to store the corresponding file
  permissions, so I'm going through hoops to get that. It would be
  simpler for me if I could just declare files or symlinks with the
  right permissions and say 'the corresponding blob doesn't need to
  exist'.
  I'm sure other tools using git as storage would have a use for such
  weak references.
 
 The weak references idea is something that's on my back list of
Toh-Doh's for
 the purpose of having a Narrow clone.
 
 However it's not that easy as you need to consider three areas - what's on
disk
 (worktree/file system), what's in the index, and what's in the object
store and
 how a coherent view is kept of all three without breakage.
 
 The 'Sparse Checkout' / 'Skip Worktree' (see `git help read-tree`) covers
the
 first two but not the third (which submodules does) [that's your 'the
 corresponding blob doesn't need to exist' aspect from my perspective]
 
 
  What do you think about this? Does that seem reasonable to have in git
  core, and if yes, how would you go about implementing it (same bit
  with different meaning for blobs and commits (or would you rather that
  were only done for commits and not for blobs)? what should I be
  careful about, besides making sure gc and fsck don't mess up?)

I don't know whether this is relevant or not - forgiveness requested in
advance. It may be useful to store primarily the SHA1 for a weak object. In
a product called RMS, this was called an External Reference. The file
itself was not stored, but its signature was. It was possible to tell that
the commit was validly and completely on disk, only if the signature matched
(so git status would know). If the file was missing, or had an invalid
signature, the working area was considered dirty (so git status would
presumably report modified). All signatures were stored for these types of
files, but the contents were not - hence external. Otherwise, we stored
all other repository attributes - except the contents, with the obvious
risks. This was typically used to track versions of the compilers and
headers being used for builds, which we did not want to store in the
repository, managed by a separate systems operations group, but wanted to
know the signatures in case we had to go back in time. From my point of
view, I would like to be able to have /usr/include (example only) as a
working area where I can be 100% certain it contains what I expect it to
contain, but I don't really want to store the objects in a repository - and
may not have root anyway.

Cheers,
Randall

-- Brief whoami: NonStopUNIX developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: [PATCH] Use unsigned char to squash compiler warnings

2015-03-04 Thread Randall S. Becker
On 4 Mar 2015, Junio C Hamano Wrote:
 Sent: March 4, 2015 5:11 PM
 To: Ben Walton
 Cc: git@vger.kernel.org
 Subject: Re: [PATCH] Use unsigned char to squash compiler warnings
 
 Ben Walton bdwal...@gmail.com writes:
 
  On Mon, Mar 2, 2015 at 8:30 PM Junio C Hamano gits...@pobox.com
 wrote:
 
  The conversion looked good from a cursory view; I didn't check it
  very carefully though.
 
  Yes, because of the Solaris ABI, the Studio compiler defaults char to
  signed char.
 
 Doesn't our beloved GCC also use signed char when you write char?
 You keep saying that defaulting to signed char is the problem, but that does
 not explain why those in the rest of the world outside Solaris land do not
 encounter this problem.
 
   $ cat >x.c <<\EOF
   #include <stdio.h>
   int main (void) {
           SIGNED char ch = 0xff;
           printf("%d\n", ch);
           return 0;
   }
   EOF
   $ gcc -Wall -DSIGNED= x.c && ./a.out
   -1
   $ gcc -Wall -DSIGNED=signed x.c && ./a.out
   -1
 
 I think the problem is not that Solaris uses signed char for char like
 everybody else does ;-) but that it gives a fairly useless warning to annoy
 people.
 
 In any case, here is what I queued, FYI, on bw/kwset-use-unsigned topic.

Even the NonStop c99 compiler does not report a warning - and it is usually
very noisy. The default is unsigned char for c99 on this platform, and the
value interpretation is significant.

#include <stdio.h>

int main (void) {
    char ch0 = 0xff;
    signed char ch1 = 0xff;
    unsigned char ch = 0xff;
    printf("%d, %d, %d, %d, %d\n", ch0, ch, ch1, ch==ch0, ch==ch1);
    return 0;
}

Output: 255, 255, -1, 1, 0



RE: An interesting opinion on DVCS/git

2015-03-03 Thread Randall S. Becker

 On 03 Mar 2015, Shawn Pearce Wrote:
 On Sun, Mar 1, 2015 at 7:29 PM, Stefan Beller sbel...@google.com wrote:
  bitquabit.com/post/unorthodocs-abandon-your-dvcs-and-return-to-sanity
 
 Indeed, a DVCS like Git or Hg does not fit everyone. And neither do 
 centralized
 systems like Subversion. Choice is good.
 
 However... I found some passages troubling for Git, e.g.:
 
 ---snip---
 Git is so amazingly simple to use that APress, a single publisher, needs to 
 have
 three different books on how to use it. It’s so simple that Atlassian and 
 GitHub
 both felt a need to write their own online tutorials to try to clarify the 
 main Git
 tutorial on the actual Git website. It’s so transparent that developers 
 routinely
 tell me that the easiest way to learn Git is to start with its file formats 
 and work
 up to the commands.
 ---snap---
 
 We have heard this sort of feedback for years. But we have been unable to
 adequately write our own documentation or clean up our man pages to be
 useful to the average person who doesn't know why the --no-frobbing option
 doesn't disable the --frobinator option to the --frobbing-subcommand of git
 frob.  :(

In real life, I do process automation, so I'm coming at this from a slightly 
different point of view. What appeals to me about git is the richness of 
processes that can be implemented with it. You may want to consider it a 
complex process enabler engine that happens to do DVCS. Having built one of 
these also, and being saddled with huge numbers of requirements, I can say from 
experience that complexity is a side effect of doing what you need to do. Like 
many complex products, git takes on a life of its own, and obviously chose 
completeness instead of simplicity as a goal. Personally, I am not complaining, 
but I hear the complaints too. The bigger complaints are when you cannot do 
your job because the engine is not rich enough (see anything derived from SCCS 
- yes saying that shows my hair colour), which forced my company *to* git. 

When looking at git, I personally feel that it is important to deploy 
simple-to-use scripts and instructions to implement the process you want to use 
- and I hate to leave a footprint saying this, but, people are fundamentally 
lazy about non-goal activities. Thinking about mundane tasks like committing 
and delivering is outside the typical work-instruction box, but if, as a 
repository manager, you need a rich engine, spend the couple of days and script 
it. I think the objections in the article are essentially sound, from one point 
of view, but omit the core domain-space of why git is around and necessary, as 
opposed to many other unnamed RCS-like systems that are *not* sufficient.

 http://git-man-page-generator.lokaltog.net/ shouldn't exist and shouldn't be
 funny. Yet it does. :(

Mockery is not the kindest form of flattery, but it sure is the sincerest. 
I've been the target of this too. Laugh, and suggest workflows. And, for the 
record, the only way you will remove atomicity/immutability of changes is out 
of my cold dead hands. :)

Cheers,
Randall



RE: Identifying user who ran git reset command

2015-02-23 Thread Randall S. Becker
On 23 Feb 2015, Kevin Daudt wrote:
 On Fri, Feb 20, 2015 at 10:16:18PM -0700, Technext wrote:
  Thanks Junio for the prompt reply! :) Yes, that's exactly how i would
  like things to be. I'll definitely try to push this thing and see if
  this flow can be implemented.
  However, can you please guide me whether there's any way i could have
  figured out about the git reset command that the developer executed on
  his local? (my first query)
 
 git reset . is just a local working-tree operation, which does not leave
 anything behind, just like when the user would do any other file operations
 and committed those. This creates a so-called evil merge, which is not easy
 to detect (see [1] for some possible solutions)
 
 
  Also, am i right in thinking that a check cannot be implemented using
  hooks or any other similar way? (my second query)
 
 Because an evil merge is hard to detect, it's even harder to detect it
 automatically in a script. Human review works much better for this (when
 merging in the changes from the developer).

The only effective way I have found to deal with this is to have an
implemented policy of making sure that developers only push changes to topic
branches and have an approver handle the merge. This will not eliminate the
evil merge or reset, but at least you get a second set of eyes on it. With
that said, the "oops" merge or reset is different, being an accidental
operation.

I know it is off-topic, but there is an approach used by other systems (some
code-management, some not) that implement per-command policies. Something
like a client-side hook or config-like access control list may be useful:
like a hooks/pre-execute (invoked possibly as high up as in run_argv() after
handle_options()) that gets passed argv, and is able to accept/decline the
command, might catch accidents. Granted this slows things down a whole lot,
but places that use (I didn't say need) command-level restrictions, often
are willing to accept performance degradation and the resulting grumbling
that comes with it. And you've probably had this discussion before, so I
sincerely apologize in advance for bringing it up.

Cheers,
Randall

-- Brief whoami: NonStopUNIX developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In real life, I talk too much.





RE: Git Scaling: What factors most affect Git performance for a large repo?

2015-02-20 Thread Randall S. Becker
-Original Message-
On Feb 20, 2015 1:58AM Martin Fick wrote:
On Feb 19, 2015 5:42 PM, David Turner dtur...@twopensource.com wrote:
  This one is not affected by how deep your repo's history is, or how 
  wide your tree is, so should be quick.. 
Good to hear that others are starting to experiment with solutions to this 
problem!  I hope to hear more updates on this.

snip-snip

Now that Jojo and I have git 2.3.0 ported to the HP NonStop platform, there 
are some very large code bases out there that may start being managed using 
git. These will tend to initially have shallow histories (100's not 1000's of 
commits, and fairly linear) but large sources and binaries - I know of a few 
where just the distributed set of sources is above 1GB and is unlikely to be 
managed in multiple repos despite my previous best efforts to change that. 
Fortunately, it is a relatively simple matter to profile the code on the 
platform for various operations, so data on where to improve may be 
available - I hope.

With that said, the NonStop file system tends to be heavier-weight than on 
Linux (many more moving parts by virtue of the MPP nature of the OS and 
hardware). Packing up changes seems pretty good, but any operation involving 
creating a large number of small files does hurt a bunch.




RE: Git with Lader logic

2015-03-18 Thread Randall S. Becker
On March 17, 2015 7:34 PM, Bharat Suvarna wrote:
 I am trying to find a way of using version control on PLC programmers like
Allen
 Bradley PLC. I can't find a way of this.
 
 Could you please give me an idea if it will work with Plc programs. Which
are
 basically Ladder logic.

Many PLC programs either store their project code in XML, L5K or L5X (for
example), TXT, CSV, or some other text format or can import and export to
text forms. If you have a directory structure that represents your project,
and the file formats have reasonable line separators so that diffs can be
done easily, git very likely would work out for you. You do not have to have
the local .git repository in the same directory as your working area if your
tool has issues with that or .gitignore. You may want to use a GUI client to
manage your local repository and handle the commit/push/pull/merge/rebase
functions as I expect whatever PLC system you are using does not have git
built-in.

To store binary PLC data natively, which some tools use, I expect that those
who are better at git-conjuring than I, could provide guidance on how to
automate binary diffs for your tool's particular file format.

Cheers,
Randall
-- Brief whoami: NonStopUNIX developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: t5570 - not cloned error

2015-05-05 Thread Randall S. Becker
Sorry to repost - ended up in my own spam trap.

On May 1, 2015 11:05 AM, I wrote, in my haste:
 
 Greetings - and asking for a bit of help resolving test failures.
 
 I'm having an issue with t5570 at 2.3.7 which seems to be a regression from
 2.3.3 (currently installed), but I cannot be sure. This test failed prior
 to 2.3.0 on the box, worked from 2.3.0 to 2.3.3 - suggesting that it may be
 environmental, not actually in git. Making some assumptions, it looks like
 the URL for the test repository is not correct and may depend on localhost
 resolving properly - which DNS does not do well on this box (outside my
 control, we are multi-homed, and localhost does not resolve to 127.0.0.1 or
 [::1]). Only t5570 #'s 3-5 fail, and I found a strange message in the
 output of the test seemingly referring to a bad repo name. I would really
 appreciate some pointers on where to look next and how to go about
 resolving this. I am happy to try to work through this on 2.4.0 if that
 would be more efficient for the team. Anything relating to git-daemon makes
 me nervous in terms of installing the code.
 
 Platform is HP NonStop (Posix-esque environment):
 
 In the test output:
 *** t5570-git-daemon.sh ***
 snip
 not ok 3 - clone git repository
 #
 #	git clone "$GIT_DAEMON_URL/repo.git" clone &&
 #	test_cmp file clone/file
 #
 not ok 4 - fetch changes via git protocol
 #
 #	echo content >file &&
 #	git commit -a -m two &&
 #	git push public &&
 #	(cd clone && git pull) &&
 #	test_cmp file clone/file
 #
 not ok 5 - remote detects correct HEAD
 #
 #	git push public master:other &&
 #	(cd clone &&
 #	 git remote set-head -d origin &&
 #	 git remote set-head -a origin &&
 
 And
 
 ../git/t/trash directory.t5570-git-daemon: cat output
 fatal: remote error: repository not exported: /repo.git
 
 Additional context: t0025, t0301, t3900, t9001, t9020 are not 100% but the
 issues are acceptable - we can discuss separately.

We definitely have an issue with localhost. When forcing the DNS resolver to
return 127.0.0.1, we pass 1-16, then 17 fails as I expected to happen based
on my DNS futzing. Heads up that this test is not-surprisingly sensitive to
DNS problems. My environment is still in a messy state where I can reproduce
the original problem, so it might be a useful moment for me to find a way to
modify the test script to harden it. Any suggestion on that score (as in
where and roughly how it might be made more reliable)?
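Purely as a sketch of the kind of hardening I mean (the variable names
below are hypothetical, not the ones lib-git-daemon.sh actually uses): let
the tester override the daemon host and default to a numeric loopback
address so a broken localhost entry cannot derail the test:

```shell
# Prefer a numeric address over the name "localhost" so the test does not
# depend on DNS at all; allow an override for unusual network setups.
GIT_TEST_DAEMON_HOST=${GIT_TEST_DAEMON_HOST:-127.0.0.1}
GIT_TEST_DAEMON_PORT=${GIT_TEST_DAEMON_PORT:-5570}
GIT_DAEMON_URL="git://$GIT_TEST_DAEMON_HOST:$GIT_TEST_DAEMON_PORT"
echo "$GIT_DAEMON_URL"
```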

Note: Since the original post, I moved the fork to 2.4.0.

Cheers,
Randall



RE: [ANNOUNCE] Git v2.4.0-rc2

2015-04-15 Thread Randall S. Becker
On April 15, 2015 10:22 PM Jeff King wrote:
 Sent: April 15, 2015 10:22 PM
 To: Bryan Turner
 Cc: Junio C Hamano; Git Users
 Subject: Re: [ANNOUNCE] Git v2.4.0-rc2
 
 [side note: please trim your quoted material when doing inline quoting]
 
 On Thu, Apr 16, 2015 at 12:05:57PM +1000, Bryan Turner wrote:
 
 merge: pass verbosity flag down to merge-recursive
 
  I'm pretty confident this change is working as intended, but the
  intended change is causing a regression in behavior for me. I'll
  readily admit that my workflow is probably wrong, but I thought
  perhaps it would be worth surfacing.
 
  [...]
  If the goal of passing the verbosity flag down was to fix git merge
  --quiet, should the Automatic merge failed line also be omitted? But
  if that line should _stay_, wouldn't it be better for the CONFLICT
  lines to also stay?
 
 Yeah, I feared there might be fallouts like this. We are somewhat blindly
 passing down the --quiet flag without doing a careful audit of the
 severity levels in merge-recursive. Potentially we would want a few levels
 of verbosity:
 
   -2: totally silent (-q -q)
   -1: silence chat, mention important errors like CONFLICT markers (-q)
    0: current defaults
    1: more verbosity (-v; what is currently level 3, I guess?)
    2: and so on with more -v; I don't even know what levels are used
 
 That's off the top of my head. I think it really needs somebody to look
 through and categorize all of the messages generated by merge-recursive.
 In the meantime, unless somebody is planning to jump on this topic
 immediately (I am not), I think we can revert 2bf15a3330a from master.
 It's definitely fixing _a_ problem, but it's one we've lived with for many
 years already.
 
 -Peff

As a more (slightly nano) enhanced suggestion, please consider adding
something along the lines of multiple occurrences of -v:{module=level} to
show messages from specific components like git-upload-pack, passing
verbosity down selectively. I do not know whether there is value in the git
subject domain for this, but I'm bringing it up because I have had specific
issues with that part of the code while porting to my platform, and I would
have liked to be able to ignore verbosity from everything other than that
module while diagnosing issues. Having this available to test suites would
be a bit more useful as well.

Cheers,
Randall

-- Brief whoami: NonStopUNIX developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





Followup: Managing EXCEL with git - diff problem.

2015-04-07 Thread Randall S. Becker
Hi all,

There was a discussion a while back on how to manage EXCEL content in git.
This involved a simple trick of modifying the file extension from .xlsx to
.zip and unpacking the file - resulting in a whole bunch of XML files. Git
is happy with that part and the content can be managed - slightly.

Unfortunately, EXCEL stores its XML content in single lines. Git has no
problem with that either, as far as managing the content, but the lines can
be really long. However, once a line grows past about 20K in size, even with
the config:

alias.wdiff=diff --color-words

the ability of git to report differences goes away - as in no output from
git diff. This occurs on Windows and Linux under git 2.3.3 and git 2.3.0.
I'm not sure whether this is a user error, a usage error, or an actual
problem.

I had originally raised this as a SourceTree problem figuring it might be
there: https://jira.atlassian.com/browse/SRCTREEWIN-3145

Any advice (preferably no teasing - :-) - I am considering smudging but
would rather avoid that)?
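One hedged idea, in case it is useful (assuming xmllint is available and
the unpacked sheets are plain XML): a textconv diff driver that
pretty-prints the XML before diffing, so no single line stays anywhere near
20K:

```
# .gitattributes
*.xml diff=xml

# .git/config (or ~/.gitconfig)
[diff "xml"]
	textconv = xmllint --format
```

textconv only affects how git diff renders the change, not what is stored,
so it sidesteps smudge/clean filters entirely.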

Cheers,
Randall

-- Brief whoami: NonStopUNIX developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.




RE: Suggestion: make git checkout safer

2015-06-03 Thread Randall S. Becker
On June 3, 2015 3:06 PM Jeff King wrote:
 On Wed, Jun 03, 2015 at 10:32:40AM -0700, Junio C Hamano wrote:
  git checkout $paths (and you can give . for $paths to mean
  everything) is akin to cp -R $elsewhere/$path . to restore the
  working tree copies from somewhere else.
 
  Ouch, 'git checkout .'  overwrote what was in my working tree is
  exactly the same kind of confusion as I ran 'cp -r ../saved .' and it
  overwrote everything.  As you said in your initial response, that is
  what the command is meant for.
 
  What does that similar command outside world, cp, have for more
  safety?  'cp -i' asks if the user wants to overwrite a file for each
  path; perhaps a behaviour similar to that was the original poster
  wanted to see?
 
 Yeah, I'd say cp -i is the closest thing. I don't have a problem with
 adding that, but I'd really hate for it to be the default (just as I find
 distros which alias rm='rm -i' annoying).

Brainstorming a few compromises:

core.checkout=-i

or some such config option to turn on behaviour like this; or a threshold
concept, where git acts accordingly if strictly more than m and strictly
fewer than n files would be touched:

core.checkout_warn_upperlimit=n # defaults to 0
core.checkout_warn_lowerlimit=m # defaults to 0

or, in a more gross fashion, provide a pre-checkout hook to do all the work
of prompting/controlling the situation.

Personally I'm happy with the defaults as they are (and was not a fan of 
defaulting rm -i or cp -i either) but I can see the point and have had diffuse 
whines from my team on the checkout subject, which is why I'm commenting.



RE: GIT for Microsoft Access projects

2015-06-08 Thread Randall S. Becker
 -Original Message-
 From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On
 Behalf Of Konstantin Khomoutov
 Sent: June 8, 2015 12:15 PM
 To: hack...@suddenlink.net
 Cc: git@vger.kernel.org
 Subject: Re: GIT for Microsoft Access projects
 
 On Mon, 8 Jun 2015 9:45:17 -0500
 hack...@suddenlink.net wrote:
 
 [...]
  My question is, will GIT work with MS access forms, queries, tables,
  modules, etc?
 [...]
 
 Git works with files.  So in principle it will work with *files*
 containing your MS access stuff.
 
 But Git will consider and treat those files as opaque blobs of data.
 That is, you will get no fancy diffing like asking Git to graphically
 (or otherwise) show you what exact changes have been made to a
 particular form or query between versions X and Y of a given MS access
 document -- all it will be able to show you is commit messages
 describing those changes.
 
 So... If you're fine with this setting, Git will work for you,
 but if not, it won't.
 
 One last note: are you really sure you want an SCM/VCS tool to manage
 your files and not a document management system (DMS) instead?
 I mean stuff like Alfresco (free software by the way) and the like.

Consider also what you are specifically managing in MS Access. Are you
looking for management of configuration data, like settings, properties, and
such, or is this transactional or user-related data? If managing
environment-specific content, it may be worth storing raw SQL INSERT
statements, with appropriate variable references, or exporting to XML/CSV,
and having those in git so that the purpose of configuration-like data can
be explained and diff'ed.



RE: RFC/Pull Request: Refs db backend

2015-06-23 Thread Randall S. Becker
 -Original Message-
 From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On
 Behalf Of David Turner
 Sent: June 23, 2015 4:22 PM
 To: Randall S. Becker
 Cc: 'Stefan Beller'; 'git mailing list'; 'ronnie sahlberg'
 Subject: Re: RFC/Pull Request: Refs db backend
 
  Just to beg a request: LMDB is not available on some MPP architectures to
 which git has been ported. If it comes up, I beg you not to add this as a
 dependency to base git components.
 
 My changes make `configure` check for the presence of liblmdb. The LMDB
 code is only built if liblmdb is present.  So, I think we're good.

Thanks :) You have no idea how much, from a burnt-by-that-in-other-projects POV.

Cheers,
Randall



RE: RFC/Pull Request: Refs db backend

2015-06-23 Thread Randall S. Becker
 -Original Message-
 From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On
 Behalf Of David Turner
 Sent: June 23, 2015 4:05 PM
 To: Stefan Beller
 Cc: git mailing list; ronnie sahlberg
 Subject: Re: RFC/Pull Request: Refs db backend
 
 On Tue, 2015-06-23 at 10:16 -0700, Stefan Beller wrote:
   The db backend code was added in the penultimate commit; the rest is
   just code rearrangement and minor changes to make alternate backends
   possible.  There ended up being a fair amount of this rearrangement,
   but the end result is that almost the entire git test suite runs
   under the db backend without error (see below for
  details).
 
  Looking at the end result in refs-be-db.c it feels like there are more
  functions in the refs_be_db struct, did this originate from other
  design choices? IIRC Ronnie wanted to have as few functions in there
  as possible, and share as much of the code between the databases, such
  that the glue between the db and the refs code is minimal.
 
 I didn't actually spend that much time reading Ronnie's backend code.
 My code aims to be extremely thoroughly compatible.  I spent a ton of time
 making sure that the git test suite passed.  I don't know if an alternate
 approach would have been as compatible.
 
 The requirement for reflog storage did complicate things a bit.
 
 I also didn't see a strong need to abstract the database, since LMDB is 
 common,
 widely compatible, and tiny.

Just to beg a request: LMDB is not available on some MPP architectures to which 
git has been ported. If it comes up, I beg you not to add this as a dependency 
to base git components.

Thanks,
Randall

-- Brief whoami: NonStopUNIX developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: [PATCH v6 17/19] fsck: Introduce `git fsck --quick`

2015-06-20 Thread Randall S. Becker
On June 21, 2015 12:56 AM, Michael Haggerty wrote:
 On 06/19/2015 10:53 PM, Junio C Hamano wrote:
  Johannes Schindelin johannes.schinde...@gmx.de writes:
 
  Can you think of a name for the option that is as short as `--quick`
  but means the same as `--connectivity-only`?
 
  No I can't.  I think `--connectivity-only` is a very good name that is
  unfortunately a mouthful, I agree that we need a name that is as short
  as `--x` that means the same as `--connectivity-only`.  I do not
  think `--quick` is that word; it does not mean such a thing.
 
 `--connectivity-only` says that of all the things that fsck can do, skip
 everything except for the connectivity check. But the switch really affects
 not the connectivity part of the checks (that part is done in either case),
 but the blob part. So, if we ignore the length of the option name for a
 moment, it seems like the options should be something like
 `--check-blob-integrity`/`--no-check-blob-integrity`. The default would
 remain `--check-blob-integrity` of course, but
 
 * Someday there might be a config setting that people can use to change the
 default behavior of fsck to `--no-check-blob-integrity`.
 * Someday there might be other expensive types of checks [1] that we want
 to turn on/off independent of blob integrity checks.
 
 But now that I'm writing this, a silly question occurs to me: Do we need an
 overall option like this at all? If I demote all blob-integrity checks to
 ignore via the mechanism that you have added, then shouldn't fsck
 automatically detect that it doesn't have to open the blobs at all and
 enable this speedup automatically? So maybe `--(no-)?check-blob-integrity`
 is actually a shorthand for turning a few more specific checks on/off at
 once.
 
 As for thinking of a shorter name for the option: assuming the blob
 integrity checks can be turned on and off independently as described above,
 then I think it is reasonable to *also* add a `--quick` option defined as
 
 --quick: Skip some expensive checks, dramatically reducing the
 runtime of `git fsck`. Currently this is equivalent to
 `--no-check-blob-integrity`.
 
 In the future if we invent other expensive checks we might also add them
 to the list of things that are skipped by `--quick`.

Synonym suggestions: --links or --relations
I was going to include --refs but that may be ambiguous; --links also has
other meanings, so it is probably out too, and --hitch may just be silly,
needlessly introducing a new term.

Cheers,
Randall

-- Brief whoami: NonStopUNIX developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.






RE: Suggestion: make git checkout safer

2015-06-03 Thread Randall S. Becker
On June 3, 2015 1:35 PM Junio C Hamano wrote:
 Ed Avis e...@waniasset.com writes:
  If my personal experience is anything to go by, newcomers may fall
  into the habit of running 'git checkout .' to restore missing files.
 Is that really true?  It all depends on why you came to a situation to have
 missing files in the first place, I would think, but "git checkout $path"
 is "I messed up the version in the working tree at $path, and want to
 restore them."  One particular kind of "I messed up" may be "I deleted by
 mistake" (hence making them missing), but is it so common to delete things
 by mistake, as opposed to editing, making a mess and realizing that the
 work so far was not improving things and wanting to restart from scratch?

When working in an IDE like ECLIPSE or MonoDevelop, accidentally hitting the
DEL button or a drag-drop move is a fairly common trigger for the
Wait-No-Stop-Oh-Drats process, which includes running git checkout to
recover. My keyboard is excessively sensitive to static, so this happens
more often than I will admit (shamelessly blaming hardware when it really is
a user problem). Git checkout is a life-saver in this case, as is frequently
committing. :)

Cheers,
Randall

-- Brief whoami: NonStopUNIX developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: Suggestion: make git checkout safer

2015-06-03 Thread Randall S. Becker
On June 3, 2015 2:11 PM Junio C Hamano wrote:
 Randall S. Becker rsbec...@nexbridge.com writes:
  On June 3, 2015 1:35 PM Junio C Hamano wrote:
  Is that really true?  It all depends on why you came to a situation
  to have missing files in the first place, I would think, but git
  checkout $path is I messed up the version in the working tree at
  $path, and want to restore them.  One particular kind of I messed
  up may be I deleted by mistake
  (hence making them missing), but is it so common to delete things
  by mistake, as opposed to editing, making a mess and realizing that
  the work so far was not improving things and wanting to restart from
  scratch?
 
  When working in an IDE like ECLIPSE or MonoDevelop, accidentally
  hitting the DEL button or a drag-drop move is a fairly common trigger
  for the Wait-No-Stop-Oh-Drats process which includes running git
  checkout to recover.
 
 That is an interesting tangent.  If you are lucky then the deleted file
 may be an unedited one, but I presume that you are not always lucky.  So
 perhaps "git checkout" is not a solution to that particular IDE issue in
 the first place?

Agreed. That's why I like knowing what's in my sausages and commit often.
Only lost a minor change once from this. I wonder what else is afoot. Ed,
can you expand on the issue?



RE: What's the ".git/gitdir" file?

2015-10-27 Thread Randall S. Becker
-Original Message-
On Tue, October-27-15 6:23 PM, Stefan Beller wrote:
>On Tue, Oct 27, 2015 at 3:04 PM, Kyle Meyer  wrote:
>> When a ".git" file points to another repo, a ".git/gitdir" file is 
>> created in that repo.
>>
>> For example, running
>>
>> $ mkdir repo-a repo-b
>> $ cd repo-a
>> $ git init
>> $ cd ../repo-b
>> $ echo "gitdir: ../repo-a/.git" > .git
>> $ git status
>>
>> results in a file "repo-a/.git/gitdir" that contains
>>
>> $ cat repo-a/.git/gitdir
>> .git
>>
>> I don't see this file mentioned in the gitrepository-layout manpage, 
>> and my searches haven't turned up any information on it.  What's the 
>> purpose of ".git/gitdir"?  Are there cases where it will contain 
>> something other than ".git"?
>
>It's designed for submodules to work IIUC.
>
>Back in the day each git submodule had its own .git directory keeping its
>local objects.
>
>Nowadays the repository of a submodule is kept in the superproject's
>.git/modules/ directory.

Slightly OT: Is there any way of avoiding having that file in the first place? 
I'm hoping to have a git repository in a normal file system (Posix) and a 
working area in a rather less-than-normal one where dots in file names are bad 
(actually a dot is a separator).

Cheers,
Randall



RE: When a file was locked by some program, git will work stupidly

2015-11-02 Thread Randall S. Becker

On November-01-15 11:57 PM dayong xie wrote:
>To be specific:
>In my Unity project there is a native plugin, the plugin's extension is
>.dll, and this plugin is under git version control. When Unity is running,
>the plugin file will be locked.
>If I merge another branch which contains a modification of the plugin, git
>will report an error that looks like:
>error: unable to unlink old 'xxx/xxx.dll' (Permission denied)
>This is not bad; however, the unfinished merge action will not be reverted
>by git, and a lot of changes are produced in the repository.
>Usually it makes me crazy; even worse, some of my partners are not good at
>using git.
>Of course, this problem can be avoided by quitting Unity, but we cannot
>remember to do that every time. In my opinion, git should revert the
>unfinished action when the error occurs, not just stop.

What version of Unity (or Unity Pro) are you using? Is this experienced with 
the Commit in MonoDevelop/Visual Studio or are you using a separate git client? 

I have found similar issues in some versions of Unity and MonoDevelop or Visual 
Studio not saving all files, especially the project files until you have fully 
exited - nothing to do with git, but your git commits may not contain complete 
images of your change.

When I use git with Unity, I either have the source committed through 
MonoDevelop (no issues) if my changes are source-only, or if I have assets and 
project changes, then I do exit completely so that I am sure Unity flushes 
everything to disk and I can get a single atomic commit with all the Unity and 
C# bits using SourceTree or gitk.

OTOH I'm not sure you really should be storing build products out of
MonoDevelop or Visual Studio in your git repository. If the DLL can be
rebuilt automatically on the client - the usual answer is yes - then let it.
Handle the release build separately - at least in a separate branch that
does not involve having the Unity editor open to get in the way.

In my environment, I have added *.dll (and other stuff) to .gitignore so that I 
do not track dll changes - they get built on demand instead. There are 
recommended ways of using git in the Unity forums.
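For what it's worth, the relevant part of my .gitignore looks roughly like
this (illustrative; adjust to your Unity version's folder layout):

```
# .gitignore (sketch)
*.dll
[Ll]ibrary/
[Tt]emp/
[Oo]bj/
[Bb]uild/
```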

Cheers,
Randall



RE: [PATCH] Limit the size of the data block passed to SHA1_Update()

2015-10-30 Thread Randall S. Becker
On October-30-15 6:18 PM, Atousa Pahlevan Duprat wrote:
>Some implementations of SHA1_Update have inherent limits on the max chunk
>size. SHA1_MAX_BLOCK_SIZE can be defined to set the max chunk size
>supported, if required. This is enabled for the OSX CommonCrypto library
>and set to 1GiB.
>---
> 

Didn't we have this same issue with the NonStop port about a year ago and
decide to provision this through the configure script?

Cheers,
Randall




RE: How to move users from SEU (AS400) to Git?

2015-12-02 Thread Randall S. Becker
On December-02-15 1:10 PM dleong wrote:
>I stumbled on this topic while doing research on how to move RPG source
>control to Git. I wonder if the original question was answered.
>My company would love to have a more central system to maintain both RPG
>code and javascript code. We use Rational Developer exclusively (no more
>SEU) for our developers and we do not have the budget to use Team Concert
>from IBM. So Git seems like a good solution.

I don't see any reason why Git would not be happy with RPG, whether
structured or not, providing you have a reasonably Posix-like file system.
The Rational Suite includes ClearCase, which can also be converted to Git
(BTDT), although it is a bit intricate to convert and the complexity depends
entirely on what part of history you want to preserve.

Cheers,
Randall
--
NonStop and Unix geek since before the CSNet mass migration.



Git 2.3.7 hangs on fetch but not clone

2015-12-06 Thread Randall S. Becker
I have some strange behaviour I am trying to diagnose on the NonStop port of
git 2.3.7. The situation is we have a *LARGE* cloned repository with some
local modifications of openssl, which we are trying to clone again for a
Jenkins build. The command:
git clone /local/openssl openssl
works fine and rapidly (well under 30 seconds), but with a newly created
empty repo, simulating what the Jenkins Git plugin does:
mkdir /local/test/openssl
cd /local/test/openssl
git init /local/test/openssl
git -c core.askpass=true fetch --verbose --progress
/local/git/openssl +refs/heads/*:refs/remotes/origin/*
does the usual:
remote: Counting objects: 113436, done.
remote: Compressing objects: 100% (23462/23462), done.
then hangs forever without creating any files in the working directory.
There are also no files or directories modified since the init operation.
Processes left around, and without consuming resources, are:
1493172290 2030043151 - 15:58:29   00:15 git pack-objects --revs --thin
--stdout --progress --delta-base-offset --include-tag
452984908  402653262 - 15:58:29   00:00 git -c core.askpass=true fetch
--verbose --progress /local/git/openssl +refs/heads/*:refs/remotes/origin/*
402653262 1694498903 - 15:58:28   00:00 git -c core.askpass=true fetch
--verbose --progress /local/git/openssl +refs/heads/*:refs/remotes/origin/*
2030043151  402653262 - 15:58:29   00:00 git-upload-pack
/local/git/openssl

This does not happen for our smaller repositories. Any pointers on where to
look would be appreciated.
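One way to get more visibility into where the processes stop talking (a
sketch using git's standard trace variables; the log paths are arbitrary):

```shell
# Log the high-level trace and the pack-protocol conversation, to see where
# fetch and upload-pack stop exchanging data.
GIT_TRACE=/tmp/fetch-trace.log \
GIT_TRACE_PACKET=/tmp/fetch-packet.log \
git -c core.askpass=true fetch --verbose --progress \
    /local/git/openssl '+refs/heads/*:refs/remotes/origin/*'
```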

Kindly,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: feature request: git svn dommit --preserve-timestamps

2016-06-10 Thread Randall S. Becker
Somewhen near June 10, 2016 9:40 PM, Eric Wong wrote:
> Peter Münster  wrote:
> > On Tue, Jun 07 2016, Eric Wong wrote:
> > > Peter Münster  wrote:
> > >> It would be nice, if timestamps could be preserved when rewriting
> > >> the git-log.
> > >
> > > Unfortunately, last I checked (a long time ago!), explicitly setting
> > > revprops might require SVN administrators to enable the feature for
> > > the repo.
> >
> > Not the svn-log, only the git-log.
> 
> The git log after dcommit is tied to the SVN log, so git-svn can only reflect
> changes which appear in SVN.
> 
>   Sidenote: The convention is reply-to-all on lists like
>   this one which do not require subscription to post.
>   It prevents the list from being a single-point-of-failure
>   or censorship.
> 
> > > It's been a while and I'm not up-to-date with the latest SVN.
> > > Maybe there's a newer/easier way you could give us details about :)
> >
> > No, sorry. I don't care about the svn-log.
> 
> Unfortunately, you would have to care about svn log as long as SVN exists in
> your workflow and you need to interact with SVN users.
> 
> git svn tries hard to work transparently and as close to the behavior of the
> upstream SVN repo as possible.

Having had to deal with this in pure git without factoring in git svn, this 
seems to be a matter of policy rather than technical requirement. Various 
customers of mine have decided that using the commit time as a uniform 
timestamp, applied to all files pulled in the specific commit, is the way to 
go when doing continuous integration. The solution that we ended up with was 
a step in our Jenkins build jobs that sets the timestamp of all files 
associated with the specific commit to the time of the commit itself. Any 
file not part of the commit that changed the state of the repository was 
untouched. This became arbitrarily complex when the job was impacted by 
multiple commits, but the general consensus of those who made the decisions 
was to apply the timestamps associated with all commits in order of 
application (Jenkins seems happy to deal with this part), so that the files 
stay relatively sane from a build perspective. Personally, I am relatively 
happy with this solution, even if it adds a huge amount of time to the build 
- generally more than the build itself - so that timestamps are "sane". 
Doing it for straight clones does not seem worth it, because timestamps 
don't appear to matter, policy-wise, unless official builds are being done. 
It may be worth considering that in the discussion.

My comments are just based on a production perspective, rather than 
development, so I ask forgiveness for any red-herrings that may be involved.

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: Using Git for Cobol source

2016-02-03 Thread Randall S. Becker
On February 3, 2016 4:20 AM, Per Jørgen Johnsen wrote:
> Subject: SV: Using Git for Cobol source
> I wonder if it is ok to use Git for source control for Cobol programs and
> take advantage of parallel development?
> 
> Today we are using VSS and it needs to be replaced. Our Cobol development
> is done by an Eclipse tool (Micro Focus Enterprise Developer).

COBOL should be no problem for git. The one caution I would have for you is
that if you happen to be using fixed-format mode with column-based sequence
numbers (that would be really old COBOL 74 mode), the sequence numbers may
give false positives on diff results. If you have those, lose them. This
would apply to VSS also, so you're probably not using those anyway.
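If losing them means a one-time scrub during migration, something like this
might do (assumes classic 80-column fixed format where columns 1-6 hold the
sequence number; the file extension is illustrative):

```shell
# Drop the sequence-number columns (1-6) from each COBOL source file.
for f in *.cbl; do
    cut -c7- "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```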

If you are functional in VSS for COBOL, you should be fine. Your challenge
will be migrating history, which is possible, but will involve some effort.

Regards,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: Consulting for implementation

2016-02-27 Thread Randall S. Becker
Hi Jose,
In my $DAYJOB, I do a lot of process and requirements work often involving git 
- so I'm more of an advocate than a representative. Although I cannot speak on 
behalf of the community as a whole, I would be happy to have a preliminary 
discussion with you on the type of guidance you might need. Please reply 
privately with contact info and I will try to help.
Regards,
Randall

> -Original Message-
> From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On
> Behalf Of JOSE
> Sent: February 26, 2016 4:56 PM
> To: git@vger.kernel.org
> Subject: Consulting for implementation
> 
> good afternoon GIT Community
> 
> We are a group of people interested in using management software versions
> of GIT for the solution of social problems and citizen empowerment in smart
> cities.
> 
> I need to speak with a representative of the community, to explain the idea
> and have an advice of the feasibility of the proposal.
> 
> Thanks for your time, I´ll wait for your answer.
> 
> Sincerely
> 
> José from BioRed



RE: Some strange behavior of git

2016-02-24 Thread Randall S. Becker
On February 24, 2016 5:43 PM, Olga Pshenichnikova wrote
> What can be cause for further confusing behavior?
> 
>  git@ip5server:~$ git status
>  On branch master
>  Untracked files:
>(use "git add ..." to include in what will be committed)
> 
>  app/addons/arliteks/
> 
>  nothing added to commit but untracked files present (use "git add"
> to track)
>  git@ip5server:~$ git clean -dn
>  Would remove app/addons/arliteks/
>  Would remove design/
>  Would remove js/
>  Would remove var/langs/en/
> 
> Why I don't see all 4 directories in first command?

What do your .gitignore files contain in this project?

Cheers,
Randall
-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: Tabs in commit messages - de-tabify option in strbuf_stripspace()?

2016-03-15 Thread Randall S. Becker
On March 15, 2016 8:17 PM Linus Torvalds wrote:
> So I end up doing this manually when I notice, but I was wondering if maybe
> git could just have an option to "git am" and friends to de-tabify the commit
> message.
> 
> It's particularly noticeable when people line things up using tabs (for the
> kernel, it's often things like "cpu1 does X, cpu2 does Y"), and then when you
> do "git log" it looks like a unholy mess, because the 4-char indentation of 
> the
> log message ends up causing those things to not line up at all after all.
> 
> The natural thing to do would be to pass in a "tab size" parameter to
> strbuf_stripspace(), and default it to 0 (for no change), but have some way to
> let people say "expand tabs to spaces at 8-character tab-stops" or similar
> (but let people use different tab-stops if they want).
> 
> Do people hate that idea? I may not get around to it for a while (it's the
> kernel merge window right now), but I can write the patch eventually - I just
> wanted to do an RFC first.

Speaking partly as a consumer of the comments and partly as someone who 
generates commits through APIs, I would ask that the commit tab-handling 
semantics be more formalized than just a tab size passed to 
strbuf_stripspace(). While it might seem a bit unfair to have to worry about 
non-git git clients, the detabbing can impact the other commit implementers 
(e.g., SourceTree, EGit, JGit, and the raft of process-automation bits out 
there using JGit for cool stuff). I would prefer a normalized behaviour, so 
that any bit of automation building a commit message has a specific definition 
to go to (and hopefully comply with) in order to properly format the message 
for posterity and across all consumers. It might also be useful to have some 
ability to be presentation-compatible with legacy commits (once this type of 
enhancement is done) so that a reasonable presentation can be made of those 
8-year-old commits that still have embedded tabs. I don't encourage tabs in 
commits myself and do see the value of this, but is this really restricted 
just to git am?
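For what it's worth, a display-time workaround is available today without any change to git: pipe the message through expand(1) at 8-column tab stops (a sketch, not a git feature):

```shell
# De-tab the most recent commit message at 8-character tab stops.
git log -1 --format=%B | expand -t 8
```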

Just my $0.02,

Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: libcurl dependency for implementing RFC3161 timestamps

2016-03-09 Thread Randall S. Becker
On March 9, 2016 6:41 AM, Duy Nguyen wrote:
> To: Anton Wuerfel 
> Cc: Git Mailing List ; i4pa...@cs.fau.de;
> phillip.raff...@fau.de
> Subject: Re: libcurl dependency for implementing RFC3161 timestamps
> 
> On Wed, Mar 9, 2016 at 6:24 PM, Anton Wuerfel 
> wrote:
> > -As git tag is a builtin part of the main git executable, introduce a
> > libcurl dependency for the main executable (maybe not best-practice).
> 
> libcurl was part of the main executable and then split out because it
> increased startup time [1]. I don't know if it's still true nowadays, maybe 
> you
> should do a simple test before deciding to go that way.

The NSE NonStop port observed at 2.7.3 (admittedly old) that libcurl was not 
used for local operations (status, log, reset, etc.) but was needed for push, 
pull, and fetch (a.k.a. network) operations. The libcurl.so is loaded 
statically at start-up for any components needing the latter operations. 
Adding it for local processing is not going to help performance :(, which is 
quite bad enough on our platform.

Sincerely,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: Ability to remember last known good build

2016-03-11 Thread Randall S. Becker
On March 11, 2016 1:08 PM Junio C Hamano wrote:
> "Pedroso, Osiris"  writes:
> 
> > I participate in an open source project that any pull merge is accepted,
no
> matter what.
> >
> > This makes for lots of broken builds, even though we do have Travis-CI
> enabled on the project, because people will merge a request before even
the
> build is complete.
> >
> > Therefore, I would like to remember the id of the commit of the last
> successful build. This would be updated by the Travis-CI script itself
upon a
> successful build.
> >
> > I imagine best option would be to merge master to a certain branch named
> "Last_known_Linux_build" or "Last_known_Windows_build" or even
> "Last_known_build_all_tests_passing".
> >
> > I am new to git, but some other experienced co-volunteers tell me that
it
> may not be possible due to authentication issues.
> >
> > Any better way of accomplishing this?
> 
> "test && git branch -f last-good"?

I think semantically a last-good tag might be another option, unless you are
applying build fixes to a last-good topic branch. You also have the option
of adding content to the tag describing the build reason, engine used, etc.
See git tag --help. I have used that in a Jenkins environment putting the
tag move in the step following a build (failure does not execute the step so
the last-good build tag stays where it is).
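The Jenkins arrangement described above can be sketched as a post-build step; `BUILD_NUMBER` is the Jenkins-provided variable, and the tag name is illustrative:

```shell
# This step runs only after a successful build; a failed build aborts the
# pipeline first, so "last-good" keeps pointing at the previous passing commit.
git tag -f -a last-good -m "Passed build #${BUILD_NUMBER:-manual}"
git push -f origin refs/tags/last-good
```

The -f is needed on both commands because the tag is expected to move on every good build.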

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.






RE: [Opinion gathering] Git remote whitelist/blacklist

2016-05-20 Thread Randall S. Becker
On May 20, 2016 10:22 AM, Francois Beutin wrote:
> We (Ensimag students) plan to implement the "remote whitelist/blacklist"
> feature described in the SoC 2016 ideas, but first I would like to be sure we
> agree on what exactly this feature would be, and that the community sees an
> interest in it.
> 
> The general idea is to add a way to prevent accidental push to the wrong
> repository, we see two ways to do it:
> First solution:
>  - a whitelist: Git will accept a push to a repository in it
>  - a blacklist: Git will refuse a push to a repository in it
>  - a default policy
> 
> Second solution:
>  - a default policy
>  - a list of repository not following the default policy
> 
> The new options in config if we implement the first solution:
> 
> [remote]
>   # List of repositories that will be allowed/denied with
>   # a whitelist/blacklist
>   whitelisted = "http://git-hosting.org"
>   blacklisted = "http://git-hosting2.org"
> 
>   # What is displayed when the user attempts a push on an
>   # unauthorised repository? (this option overwrites
>   # the default message)
>   denymessage = "message"
> 
>   # What git should do if the user attempts a push on an
>   # unauthorised repository (reject or warn and
>   # ask the user)?
>   denypolicy = reject(default)/warning
> 
>   # How should unknown repositories be treated?
>   defaultpolicy = allow(default)/deny
> 
> 
> Some concrete usage example:
> 
>  - A beginner is working on company code, to prevent him from
>   accidentally pushing the code on a public repository, the
>   company (or him) can do:
> git config --global remote.defaultpolicy "deny"
> git config --global remote.denymessage "Not the company's server!"
> git config --global remote.denypolicy "reject"
> git config --global remote.whitelisted "http://company-server.com"
> 
> 
>  - A regular git user fears that he might accidentally push sensitive
>   code to a public repository he often uses for free-time
>   projects, he can do:
> git config remote.defaultpolicy "allow"   #not really needed
> git config remote.denymessage "Are you sure it is the good server?"
> git config remote.denypolicy "warning"
> git config remote.blacklisted "http://github.com/personnalproject"
> 
> 
> We would like to gather opinions about this before starting to
>   implement it, is there any controversy? Do you prefer the
>   first or second solution (or none)? Do you find the option's
>   names accurate?

How would this feature be made secure and reliably consistent in managing the 
policies (I do like storing the lists separate from the repository, btw)? My 
concern is that, using git config, a legitimate clone can be made of a 
repository with these attributes, and then the attributes can be overridden by 
local config on the clone: turning the policy off, changing the remote, and 
thereby allowing a push to an unauthorized destination (for example, one on 
the originally intended blacklist). It is unclear to me how a policy manager 
would keep track of this, or even know it happened, and prevent the policies 
from being bypassed - could you clarify this in the requirements?

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: [Opinion gathering] Git remote whitelist/blacklist

2016-05-24 Thread Randall S. Becker
On May 24, 2016 12:08 PM, Matthieu Moy wrote:
> > So, when trying a forbidden push, Git would deny it and the only way
> > to force the push would be to remove the blacklist from the config, right?
> >
> > Probably the sanest way to go. I thought about adding a "git push
> > --force-even-if-in-blacklist" or so, but I don't think the feature
> > deserves one specific option (hence add some noise in `git push -h`).
> 
> Yeah, I agree --even-if-in-blacklist is a road to madness, but I wonder how
> this is different from setting pushURL to /dev/null or something illegal and
> replace that phony configuration value when you really need to push?

I may be missing the point, but isn't the original intent to provide 
policy-based control of the push destinations? A sufficiently knowledgeable 
person, even a couple of weeks into git, would easily see that the config 
points to a black-listed destination and bypass it with a config update, 
rendering all this pointless. This seems to me to be a lot of effort for 
limited value - unless immutable attributes are going to be obtained from the 
upstream repository, which also seems to run counter to the whole point.

Confusededly,
Randall



Git and Mozaik

2016-05-14 Thread Randall S. Becker
Hi Everyone,

I'm embarking on a bit of a quest to bring git into a CNC manufacturing
environment for the Mozaik software package. Does anyone in the group have
experience with git for that package (expecting probably not, but I had to
ask)? I'm hoping there won't be too many problems - the internal file format
seems relatively compatible for the stuff that needs to be versioned -
although if there are one-liner text files, it may be annoying and I may have
to provide my own diff engine.

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.




RE: [Bug] git-log prints wrong unixtime with --date=format:%s

2016-05-18 Thread Randall S. Becker
On May 18, 2016 12:22 PM Jeff King wrote:
> > I tried a few obvious things, but couldn't make anything work. Setting
> > "timezone" manually seems to do nothing. It's supposed to be set by
> > putting the right thing in $TZ and then calling tzset(). So I tried
> > munging $TZ to something like "+0200". It did have _some_ effect, but I
> 
> Wouldn't that be more like "UTC+0200"?
> 
> In any case, I do not think anybody wants to do tzset() on each and every
> commit while running "git log".  Can we declare "format:"
> will always use the local timezone, or something?

Off the wall: working in a dispersed team sharing a server whose timezone is
local for only two of the members, git log messes with me too from a TZ POV. I
would like to suggest a more general solution, like configuring my own TZ in
~/.gitconfig, with a potential override on the command line. Would
user.timezone be helpful in this situation, calling setenv("TZ", ..., 1) when
set? It's not an issue when I'm local, but if I touch a clone on the server,
even I get confused around DST changes in October ;).
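Until something like user.timezone exists, TZ can be set per invocation, since git's local date display honours it; the zone name here is just an example:

```shell
# Show log timestamps converted to a chosen zone rather than each commit's own.
TZ=America/Toronto git log -1 --date=local --format='%ad  %s'
```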

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: Git mascot

2016-04-15 Thread Randall S. Becker
> On April 15, 2016 12:42 PM, Christian Howe wrote:
> There has been talk of a git mascot a while back in 2005. Some people
> mentioned a fish or a turtle. Since all the great open source projects like
> Linux or RethinkDB have a cute mascot, git definitely needs one as well. A
> mascot gives people a recognizable persona towards which they can direct
> their unbounded love for the project. It'd even be good if a plush doll of 
> this
> mascot could eventually be created for people to physically express their love
> of git through cuddling and hugging.

The image in my mind is of a tree, shown above and below ground with roots and 
branches intertwined, reaching outward and downward.
Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: /* compiler workaround */ - what was the issue?

2016-05-09 Thread Randall S. Becker
On May 9, 2016 3:40 PM Philip Oakley wrote:
> From: "Stefan Beller" 
> > On Fri, May 6, 2016 at 12:57 PM, Junio C Hamano 
> wrote:
> >> Marc Branchaud  writes:
> >>
> >>> On 2016-05-06 02:54 PM, Junio C Hamano wrote:
> 
>  I wonder if we can come up with a short and sweet notation to
>  remind future readers that this "initialization" is not
>  initializing but merely squelching warnings from stupid compilers,
>  and agree to use it consistently?
> >>>
> >>> Perhaps
> >>>
> >>>   #define COMPILER_UNINITIALIZED_WARNING_INITIALIZER 0
> >>>
> >>> or, for short-and-sweet
> >>>
> >
> >   /* Here could go a longer explanation than the 4 character
> > define :) */
> >>>   #define CUWI 0
> >>>
> >>> ?
> >>>
> >>> :)
> >>
> >> I get that smiley.
> >>
> >> I was hinting to keep the /* compiler workaround */ comment, but in a
> >> bit shorter form.
> >> --
> 
> For some background, I found $gmane/169098/focus=169104 which covers
> some of the issues (the focused msg is one from Junio). Hannes then notes
> ($gmane/169121) that the current xx = xx; could be seen as possible
> undefined behaviour - perhaps one of those 'no good solution' problems.
> 
> Perhaps a suitable name...
> 
> #define SUPPRESS_COMPILER_UNINITIALIZED_WARNING 0
> /* Use when some in-use compiler is unable to determine if the variable is
> used uninitialized, and no good default value is available */
> 
> Though, where best to put it?

I would suggest that this type of approach be detected in the configure script 
and added automagically (as best as possible), or supplied as a hint by the 
platform's own specific configuration files if necessary, as a last gasp.

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: [Opinion gathering] Git remote whitelist/blacklist

2016-05-24 Thread Randall S. Becker
On May 24, 2016 3:25 PM Lars Schneider wrote:
> > On 24 May 2016, at 12:16, Randall S. Becker <rsbec...@nexbridge.com> wrote:
> >
> > On May 24, 2016 12:08 PM, Matthieu Moy wrote:
> >>> So, when trying a forbidden push, Git would deny it and the only way
> >>> to force the push would be to remove the blacklist from the config,
> >>> right?
> >>>
> >>> Probably the sanest way to go. I thought about adding a "git push
> >>> --force-even-if-in-blacklist" or so, but I don't think the feature
> >>> deserves one specific option (hence add some noise in `git push -h`).
> >>
> >> Yeah, I agree --even-if-in-blacklist is a road to madness, but I
> >> wonder how this is different from setting pushURL to /dev/null or
> >> something illegal and replacing that phony configuration value when
> >> you really need to push?
> >
> > I may be missing the point, but isn't the original intent to provide
> > policy-based control of the push destinations? A sufficiently
> > knowledgeable person, even a couple of weeks into git, would easily see
> > that the config points to a black-listed destination and bypass it with
> > a config update, rendering all this pointless. This seems to me to be a
> > lot of effort for limited value - unless immutable attributes are going
> > to be obtained from the upstream repository, which also seems to run
> > counter to the whole point.
> 
> An actor with bad intent will *always* be able to bypass this. However, I
> see two use cases:
> 
> (1) Accidental pushes.
> An inexperienced developer clones a repo from github.com, commits company
> code for whatever reason, and pushes. At this point the code has leaked.
> The blacklist feature could have warned/stopped the developer.
> 
> (2) Intentional open source pushes.
> At my day job we encourage people to contribute to open source. However,
> we want them to follow our open source contribution process. If they run
> "git push" on a new github.com repo then I want to interrupt the push and
> tell them to look at our contribution guidelines. Afterwards they could
> whitelist the repo on their local machine and push without trouble.
> 
> In summary I think the feature could be a safety net for the developer,
> to not leak company code.

A more paranoid ;) and probably safer approach to satisfy UC.2 is to use
something like GitHub Enterprise or Stash on a local server inside your
firewall as the place where developers are allowed to push code, and then
block external entities at the firewall. If you want to allow sharing of
specific repositories, set up a pull, on a specific branch that can be shared,
from the allowed remote through the firewall to that server (the branch should
obviously be secured by a person in a different role/function), or perhaps set
up a Jenkins job on that server to do the push. This could be considered a
closer implementation of your contribution process. For UC.1, if your clone is
done via anonymous HTTPS and your push via SSH, accidents are less likely to
happen, particularly if SSH to github is blocked at the firewall. I think
there may be technical solutions to your problem that do not involve modifying
git. These are just suggestions from what I have observed others doing in
harsher environments.

Cheers,
Randall



RE: Unconventional roles of git

2017-02-28 Thread Randall S. Becker
>From: ankostis [mailto:ankos...@gmail.com] 
>Sent: February 28, 2017 8:01 AM
>To: Randall S. Becker <rsbec...@nexbridge.com>
>Cc: Git Mailing List <git@vger.kernel.org>; Jason Cooper <g...@lakedaemon.net>
>Subject: Re: Unconventional roles of git
>On 27 February 2017 at 20:16, Randall S. Becker <rsbec...@nexbridge.com> wrote:
>> On February 26, 2017 6:52 AM, Kostis Anagnostopoulos <ankos...@gmail.com> wrote:
>>> On 26 February 2017 at 02:13, Jason Cooper <g...@lakedaemon.net> wrote:
>>> > As someone looking to deploy (and having previously deployed) git in
>>> > unconventional roles, I'd like to add ...
>>>
>>> We are developing a distributed storage for type approval files regarding 
>>> all
>>> vehicles registered in Europe.[1]  To ensure integrity even after 10 or 30
>>> years, the hash of a commit of these files (as contained in a tag) are to be
>>> printed on the the paper certificates.
>>>
>>> - Can you provide some hints for other similar unconventional roles of git?
>>> - Any other comment on the above usage of git are welcomed.
>>
>> I am involved in managing manufacturing designs and parts configurations
>> and approvals with git being intimately involved in the process of
>> developing and deploying tested designs to computerized manufacturing
>> environments. It's pretty cool actually to see things become real.

>Thanks for the tip.
>Can you provide some links or the legislation involved?

I have not done much in the way of write-ups other than PowerPoint-based 
training material for the companies involved. So far, this does not seem to be 
subject to regulation or legislation, but that depends on what is being 
manufactured. In the current situation, I'm helping organize cabinet parts, 
components, GCode, optimizations, and other arcane artifacts in the 
woodworking community for CNC and related process support. It is an evolving 
domain. I do wish that Cloud providers like Atlassian would provide more 
comprehensive integrated code reviews (a.k.a. Gerrit) for some of this work. 
Surprisingly, dedicating a server internally is a harder sell.

Cheers,
Randall




RE: Unconventional roles of git

2017-02-27 Thread Randall S. Becker
> -Original Message-
> From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On
> Behalf Of ankostis
> Sent: February 26, 2017 6:52 AM
> To: Git Mailing List 
> Cc: Jason Cooper 
> Subject: Unconventional roles of git
> 
> On 26 February 2017 at 02:13, Jason Cooper  wrote:
> > As someone looking to deploy (and having previously deployed) git in
> > unconventional roles, I'd like to add ...
> 
> We are developing a distributed storage for type approval files regarding all
> vehicles registered in Europe.[1]  To ensure integrity even after 10 or 30
> years, the hash of a commit of these files (as contained in a tag) are to be
> printed on the the paper certificates.
> 
> - Can you provide some hints for other similar unconventional roles of git?
> - Any other comment on the above usage of git are welcomed.

I am involved in managing manufacturing designs and parts configurations and 
approvals with git being intimately involved in the process of developing and 
deploying tested designs to computerized manufacturing environments. It's 
pretty cool actually to see things become real.

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: Creating remote git repository?

2016-12-14 Thread Randall S. Becker
On December 14, 2016 1:01 AM, essam Ganadily wrote:
> given that git is an old and mature product, I wonder why there is no
> command-line (git.exe in Windows) way of creating a remote git repository?
> 
> "git remote create repo myreponame"

Why not run the commands mkdir myreponame; cd myreponame; git init --bare 
under an SSH command on the destination host? That should get you what you 
want.
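As a concrete sketch (host and path are placeholders; a bare repository is used because the destination side only receives pushes):

```shell
# Create an empty bare repository on the destination host over SSH...
ssh user@host 'git init --bare ~/repos/myreponame.git'

# ...then wire a local repository up to it and push.
git remote add origin user@host:repos/myreponame.git
git push -u origin master
```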
Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: Git Branching - Best Practices - Large project - long running branches

2017-03-31 Thread Randall S. Becker
-Original Message-
>On March 31, 2017 7:56 AM, Joe Mayne wrote:
>Subject: Git Branching - Best Practices - Large project - long running branches
>I work on a team of 15+ developers. We are trying to determine best
>practices for branching because we have had code stepped on when a
>developer has a long-running feature branch.
>We have a Development branch. Developers are instructed to create a branch
>when they begin working on a feature. Sometimes a feature may take a week
>or two to complete. So Developer1 creates a branch and works for a week or
>two. In the meantime, other developers have created feature branches from
>Development and merged them back into Development.
>At this point we are not certain if Developer1 should:
>* Periodically merge the evolving Origin/Development into their feature
>branch and, when their work is done, merge their feature branch into
>Origin/Development.
>OR
>* Stay on their pure feature branch and, when they are done, merge into
>Origin/Development.
>We have had issues with developers stepping on code when they have
>long-running branches. We are looking for best practices.

One key thing that may help is standards on formatting. I know that sounds
silly, but many merge issues result from developers hitting the source-format
button, which creates merge problems. If you keep things to format standards,
it will help merging in the future. Even lengthening lines to 132 characters
instead of 80 may reduce confusion - another silly-sounding suggestion, but I
have seen it help in a couple of places.

Keep your interface classes and base classes stable. If you are changing
those during development, you are going to impact the larger world, and you
should probably set up a dedicated feature branch off of Development and have
all topic branches involved in the API change branch off that one. Frequent
merges into topic branches are common while API changes are ongoing (as are
conflicts); they should not be as frequent when the APIs are stable. If you
can, get the API changes done first, then create topic branches after the API
is stable.

From what I have seen, frequent conflicts are sometimes an indication of an
underlying development-process problem - you should look into whether there
are issues here. Conflicts happen, but the "why" is important to understand.
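The first option in the question (periodically refreshing the topic branch) might be sketched as follows; the branch names are illustrative:

```shell
# Fold the evolving integration branch into the topic branch regularly,
# so conflicts are resolved while they are still small...
git checkout feature/foo
git fetch origin
git merge origin/Development

# ...and when the feature is finished, merge it back.
git checkout Development
git pull
git merge --no-ff feature/foo
```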

Cheers,
Randall




RE: How do you script linux GIT client to pass kerberos credential to apache enabled GIT server?

2017-04-03 Thread Randall S. Becker
-Original Message-
On April 3, 2017 12:04 PM, Ken Edward wrote:
>I have my git repositories behind an apache server configured with kerberos.
>Works fine if the user is logged in on their workstation. Apache gets the
>kerberos credential, validates it, and then sends the GIT repo being
>requested.
>BUT, I want to write a script on linux that will also pass the kerberos
>credential to the apache GIT server without any manual intervention. It
>seems I would create a kerberos keytab for the principal and then use that
>to authenticate. kinit supports authenticating from a keytab using the
>-k -t options, but has anyone done this?

Have you attempted prototyping this using curl? It might be able to help out a 
bit. I have done this in the past with Stash and their REST and credentials, 
but not using Kerberos. Just a thought.
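If curl is the prototyping vehicle, the usual non-interactive shape is a keytab-based kinit followed by SPNEGO via curl's --negotiate option; the principal, keytab path, and URL below are placeholders, and this assumes curl was built with GSS-API support:

```shell
# Obtain a ticket non-interactively from a keytab (no password prompt)...
kinit -k -t /etc/krb5/svc-git.keytab svc-git@EXAMPLE.COM

# ...then let curl negotiate against the Kerberos-protected Apache front end;
# "-u :" tells curl to authenticate with the ticket rather than a password.
curl --negotiate -u : 'https://git.example.com/repos/project.git/info/refs?service=git-upload-pack'
```

This fragment depends on a reachable KDC and server, so it is a shape to adapt rather than something runnable as-is.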
Cheers,

Randall



RE: Git Vendor Support

2017-03-13 Thread Randall S. Becker
On March 13, 2017 10:34 AM, COLLINS, ROGER W GG-12 USAF NASIC/SCPW wrote:
>Thanks for the reply!
>>On March 10, 2017 11:48 AM, Stefan Beller wrote
>>On Fri, Mar 10, 2017 at 8:13 AM, COLLINS, ROGER W GG-12 USAF NASIC/SCPW 
>> wrote:
>>> ALCON,
>>>
>>> Is there is a specific group or vendor backing Git?
>>https://sfconservancy.org/ takes care of the financial needs of the community.
>>> active support
>>I guess companies that make money primarily via Git hosting (e.g. one of 
>>Github, GitLab, Atlassian, Bitbucket) *may* be interested in active support.

I have heard there are companies who provide porting, technical, and user 
support for git on some platforms and in some environments. It might be worth 
checking within the platform communities specifically for companies that 
service those platforms. I am well connected in the HPE NonStop community if 
that's a help.

Cheers,
Randall



RE: "groups of files" in Git?

2017-07-11 Thread Randall S. Becker
-Original Message-
On July 11, 2017 11:45 AM Nikolay Shustov wrote:
>I have been recently struggling with migrating my development workflow from 
>Perforce to Git, all because of the following thing:
>I have to work on several features in parallel in the same code tree, in
>the same Perforce workspace. The major reason why I cannot work on one
>feature and then on another is that I have to make sure that the changes
>in the related areas of the product play together well.
>With Perforce, I can have multiple changelists open that group the changed
>files as needed.

>With Git I cannot seem to find how to achieve the same result. And the
>problem is that putting change sets on different Git branches (or
>workdirs, or whatever Git offers that makes the changes NOT be in the same
>source tree) is not a viable option for me, as I would have to re-build
>code as I re-integrate the changes between the branches (or whatever
>change-separation Git feature is used).
>Build takes time and resources, and considering that I have to do it on
>multiple platforms (I do cross-platform development), it really rules out
>the option of not having multiple changes in the same code tree.

Change sets are core git functionality. When you commit, you commit a group of 
changes across multiple files, not one file at a time as in most legacy SCM 
systems. Individual features are typically managed using topic branches that 
can be switched (using checkout) rapidly, which in your case will cause a 
localized rebuild based on which files were swapped.

If you need something slightly different from topic branches, do multiple 
clones off a base integration branch. This will give you multiple working 
source trees. Switch each clone to its own branch and work with them locally. 
If you need to move changes from one branch to another: commit and push on 
one branch, then fetch and merge on the other.
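A runnable sketch of that multi-clone layout (all paths are temporary; the branch names and demo identity are placeholders, not from the thread):

```shell
set -e
base=$(mktemp -d)
# A local bare repo stands in for the shared upstream.
git init -q --bare "$base/up.git"
git -C "$base/up.git" symbolic-ref HEAD refs/heads/master

# Seed the upstream with a base commit.
git init -q -b master "$base/seed"
git -C "$base/seed" -c user.email=you@example.com -c user.name=Demo \
    commit -q --allow-empty -m "base"
git -C "$base/seed" push -q "$base/up.git" master

# One clone (working source tree) per feature, each on its own branch.
git clone -q "$base/up.git" "$base/feature-a"
git clone -q "$base/up.git" "$base/feature-b"
git -C "$base/feature-a" checkout -q -b topic/a
git -C "$base/feature-a" -c user.email=you@example.com -c user.name=Demo \
    commit -q --allow-empty -m "feature A work"
git -C "$base/feature-a" push -q origin topic/a

# Move the work into the other tree: push from one, fetch and merge in the other.
git -C "$base/feature-b" fetch -q origin
git -C "$base/feature-b" merge -q origin/topic/a
```

Each clone stays on its own topic branch, so switching features never touches the other working tree's build output.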

You could also use stash to accomplish similar things, but I wouldn't.

Cheers,
Randall



RE: Git and Active directory ldap Authentication

2017-04-28 Thread Randall S. Becker
On April 28, 2017 5:31 AM  Miguel Angel Soriano Morales wrote:
> I would like to use git in my company. We use Active Directory for
> everything, but I prefer to install git on CentOS 7. I would like to
> authenticate all my users in Git through Active Directory, and every
> project would have ACL permissions. Is this possible?

The first thing to remember is that local clones will usually be secured to
the user who did the clone and are not usually subject to enterprise
security rules or ACLs. Security is usually applied when interacting with an
upstream repository from where you clone and push changes and authentication
is important at that time.

This might help:

https://technet.microsoft.com/en-us/library/2008.12.linux.aspx

This discusses SSO for Linux. You should already be covered for Windows.
However, please give details on where your upstream repository is and on what
server, which is likely where you have to authenticate. Typically,
authentication to upstream repositories is done through SSH - see git push.

There are discussions of integrating SSH keys and AD here (and elsewhere):
https://social.technet.microsoft.com/Forums/en-US/8aa28e34-2007-49fe-a689-e2
8e19b2757b/is-there-a-way-to-link-ssh-key-in-ad?forum=winserverDS

You should also consider when, in your environment, to use GPG signing to
definitively identify who did the change even in their local repository. AD
is unlikely to help you there, unless you can use a custom attribute to
store and manage a user's GPG key.
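A sketch of the signing setup mentioned above, as a config fragment (the key id is a placeholder, and it assumes the user already has a GPG key):

```shell
# Placeholder key id: substitute the user's real GPG key id.
git config --global user.signingkey ABCD1234
# Sign every commit automatically instead of passing -S each time:
git config --global commit.gpgsign true
# A one-off signed commit, and verifying signatures afterwards:
git commit -S -m "signed change"
git log --show-signature -1
```

The signature travels with the commit object, so identity survives even in local clones that never touch the AD-integrated server.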

Good luck!

Cheers,
Randall




RE: git client debug with customer ssh client

2017-05-09 Thread Randall S. Becker
On May 5, 2017 7:50 AM  Pierre J. Ludwick wrote:

> How can we get more info from git client? Any helps suggestions welcomed?

It might be helpful to put a full trace in OpenSSH. Running ssh with -vvv 
should give you a lot of noise. I have used 
https://en.wikibooks.org/wiki/OpenSSH/Logging_and_Troubleshooting
to pull information when the platform's OpenSSH port was done. If you need 
access to the ported code, the sources are available in the usual spot at 
ITUGLIB.
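In the same spirit, git can be pointed at a verbose ssh and can trace its own side of the exchange; a small runnable sketch of the tracing side (paths are temporary, and the remote URL in the comment is an assumption):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/r"
# GIT_TRACE accepts an absolute file path and appends git's command trace there.
GIT_TRACE="$tmp/trace.log" git -C "$tmp/r" status >/dev/null
# For the SSH leg specifically, swap in a verbose ssh (example URL assumed):
#   GIT_SSH_COMMAND="ssh -vvv" git ls-remote ssh://git@example.com/repo.git 2>ssh.log
```

Combining both usually shows quickly whether a hang is on the transport side or in git itself.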

Cheers,
Randall



RE: Add an option to automatically submodule update on checkout

2017-05-09 Thread Randall S. Becker
On May 8, 2017 10:58 PM, Junio C Hamano wrote:
>"Randall S. Becker" <rsbec...@nexbridge.com> writes:
>> I have to admit that I just assumed it would have to work that way or
>> this would not be particularly useful. However, in thinking about it,
>> we might want to limit the depth of how far -b <branch> takes effect. If
>> the super module brings in submodules entirely within control of the
>> development group, having -b <branch> apply down to leaf submodules
>> makes sense (in some policies). However, if some submodules span out
>> to, say, gnulib, that might not make particular sense.

>I do not see a strong reason to avoid your own branches in "other people's
>project" like this. The submodule's upstream may be a project you have no
>control over, but the repository you have locally is under your total
>control and you can use any branch names to suit the need of your project
>as the whole (i.e. the superproject and submodules bound to it).
>
>The fact that local branch names are under your control and for your own
>use is true even when you are not using submodules, by the way.

I agree with the technical aspects of this, but doing a checkout -b into
something like gnulib will pin the code you are using in that submodule to
whatever commit was referenced when you did the checkout. Example: in a
situation like that, I would want gnulib to stay on 'master'. It is my
opinion, FWIW, that this is a matter of policy or standards within the
organization using git, and we should not be imposing one way or another. In
the current state of affairs (2.12.x), when I checkout, I make sure that
people are aware of which branch each submodule is on, because git won't go
into the submodules. I'm fine with imposing that as a policy at present
because it takes positive action by the developers, and since I keep the
master branch on my own repositories locked down, it's obvious when they are
accidentally on it. But we're talking about changing this so that checkout
branches can apply recursively. That changes the policy requirements: people
have to take further action to undo what git will do by default on
recursion. The policy will be the same at a high level (i.e., always make
sure you know what branch you are on in submodules), but its implementation
will need to be different (i.e., after you checkout recursively, go into
each submodule and undo what git just did by checking out the default branch
on some submodules ___ ___ ___). Which submodules those are depends on which
super repository is in use, which is onerous for me to manage and for my
developers to remember.
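That "undo" step can at least be scripted; a runnable sketch (temporary paths; the default branch name 'master' and the demo identity are assumptions):

```shell
set -e
d=$(mktemp -d)
id="-c user.email=you@example.com -c user.name=Demo"
# Build a tiny superproject with one submodule.
git init -q -b master "$d/sub"
git -C "$d/sub" $id commit -q --allow-empty -m "sub base"
git init -q -b master "$d/super"
git -C "$d/super" -c protocol.file.allow=always submodule --quiet add "$d/sub" sub
git -C "$d/super" $id commit -q -m "add submodule"
# Policy step: after any recursive operation, put each submodule back on
# its default branch rather than leaving it wherever git put it.
git -C "$d/super" submodule foreach 'git checkout -q master'
```

The `protocol.file.allow` override is only needed on newer git versions that restrict file-protocol submodule clones; it is harmless elsewhere.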

With Respect,
Randall



RE: Feature Request: Show status of the stash in git status command

2017-06-11 Thread Randall S. Becker
On June 11, 2017 1:07 PM liam Beguin wrote:
>There is one thing I've noticed though. When using 'git stash pop', it shows 
>the number of stashes before dropping the commit and I'm not quite sure 
>how to address this.

On 10/06/17 06:22 AM, Jeff King wrote:
> On Sat, Jun 10, 2017 at 06:12:28AM -0400, Samuel Lijin wrote:
>> On Sat, Jun 10, 2017 at 4:25 AM, Jeff King  wrote:
>>> On Wed, Jun 07, 2017 at 06:46:18PM -0400, Houston Fortney wrote:
>>>
 I sometimes forget about something that I stashed. It would be nice 
 if the git status command would just say "There are x entries in 
 the stash." It can say nothing if there is nothing stashed so it is 
 usually not adding clutter.
>>>
>>> I think the clutter issue would depend on your workflow around stash.
>>>
>>> Some people carry tidbits in their stash for days or weeks. E.g., I 
>>> sometimes start on an idea and decide it's not worth pursuing (or 
>>> more likely, I post a snippet of a patch as a "how about this" to 
>>> the mailing list but don't plan on taking it further). Rather than 
>>> run "git reset --hard", I usually "git stash" the result. That means 
>>> if I really do decide I want it back, I can prowl through the stash list 
>>> and find it.
>>>
>>> All of which is to say that if we had such a feature, it should 
>>> probably be optional. For some people it would be very useful, and 
>>> for others it would be a nuisance.
>>
>> Perhaps there should be a flag for this if it is implemented, say 
>> status.showStash?

Random thought: what if a stash id could be used in the same way as any other 
ref, so diff stash[0] stash[1] would be possible - although I can see this 
being problematic for a merge or rebase.

Cheers,
Randall



RE: Feature Request: Show status of the stash in git status command

2017-06-11 Thread Randall S. Becker
On June 11, 2017 2:19 PM  Igor Djordjevic wrote: 
>On 11/06/2017 19:57, Randall S. Becker wrote:
>> Random thought: what if a stash id could be used in the same way as 
>> any other ref, so diff stash[0] stash[1] would be possible - although 
>> I can see this being problematic for a merge or rebase.
>Not sure if I`m misunderstanding you, but at least `git diff stash@{0} 
>stash@{1}` seems to already work as expected - I remember using it in the 
>past, and I`ve tried it again now[1], and it still works.

I'm sorry for not checking first before posting. Thanks 

Randall
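For reference, a runnable sketch of the behaviour confirmed above (temporary path, demo identity assumed):

```shell
set -e
d=$(mktemp -d)
cd "$d" && git init -q .
g() { git -c user.email=you@example.com -c user.name=Demo "$@"; }
echo base > f && git add f && g commit -q -m "base"
echo one > f && g stash -q      # becomes stash@{1} after the next stash
echo two > f && g stash -q      # becomes stash@{0}
# Stash entries resolve like any other ref:
git diff "stash@{1}" "stash@{0}" -- f
git stash list
```

The quoting around `stash@{0}` matters in some shells (zsh expands braces), which may be why the syntax is easy to misremember.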



RE: Add an option to automatically submodule update on checkout

2017-05-08 Thread Randall S. Becker
On May 8, 2017 12:55 PM, Stefan Beller wrote:
>On Mon, May 8, 2017 at 9:46 AM, Randall S. Becker <rsbec...@nexbridge.com> 
>wrote:
>> On May 8, 2017 12:25 PM, Stefan Beller wrote:
>>>On Mon, May 8, 2017 at 7:42 AM, Randall S. Becker <rsbec...@nexbridge.com> 
>>>wrote:
>>>> On May 6, 2017 4:38 AM Ciro Santilli wrote:
>>>>> This is a must if you are working with submodules, otherwise every 
>>>>> git checkout requires a git submodule update, and you forget it, 
>>>>> and things break, and you understand, and you go to stack overflow 
>>>>> questions 
>>>>> http://stackoverflow.com/questions/22328053/why-doesnt-git-checkout-automatically-do-git-submodule-update-recursive
>>>>> http://stackoverflow.com/questions/4611512/is-there-a-way-to-make-git-pull-automatically-update-submodules
>>>>> and you give up and create aliases :-)
>>
>>> The upcoming release (2.13) will have "git checkout 
>>> --recurse-submodules", which will checkout the submodules at the commit as 
>>> recorded in the superproject.
>>> I plan to add an option "submodule.recurse" (name is subject to 
>>> bikeshedding), which would make the --recurse-submodules flag given 
>>> by default for all commands that support the flag. (Currently cooking we 
>>> have reset --recurse-submodules, already existing there is push/pull).
>>
>> Brilliant! 
>>
>>>> I rather like the concept of supporting --recurse-submodules. The 
>>>> complexity is that the branches in all submodules all have to have 
>>>> compatible semantics when doing the checkout, which is by no means 
>>>> guaranteed. In the scenario where you are including a submodule from a 
>>>> third-party (very common - see gnulib), the branches likely won't be 
>>>> there, so you have a high probability of having the command fail or 
>>>> produce the same results as currently exists if you allow the checkout 
>>>> even with problems (another option?). If you have control of everything, 
>>>> then this makes sense.
>>
>>>I am trying to give the use case of having control over everything (or 
>>>rather mixed) more thought as well, e.g. "checkout --recurse-submodules -b 
>>><branch>" may want to create the branches in a subset of submodules as well.
>>
>> I have to admit that I just assumed it would have to work that way or
>> this would not be particularly useful. However, in thinking about it, 
>> we might want to limit the depth of how far -b <branch> takes effect. If 
>> the super module brings in submodules entirely within control of the 
>> development group, having -b <branch> apply down to leaf submodules 
>> makes sense (in some policies). However, if some submodules span out 
>> to, say, gnulib, that might not make particular sense. Some downward 
>> limit might be appropriate. Perhaps, in the submodule ref, you might 
>> want to qualify it as : (but the impact of that is 
>> probably and admittedly pretty horrid). I hesitate to suggest a 
>> numeric limit, as that assumes that submodules are organized in a 
>> balanced tree - which is axiomatically unreasonable. Maybe something 
>> in .git/config, like
>>
>> [branch "topic*"]
>> submodules=a,b,c
>>
>> But I suspect that would make things even more confusing.

>I thought about having yet-another-flag in the .gitmodules file, which states 
>if the submodule is extern or internal.

>[submodule "gnulib"]
>path=./gnulib
>external = true # implies no branch for checkout -b --recurse-submodules

>I think there are a couple more situations where such "external" submodules 
>are treated differently, so maybe we'd want to think carefully about the 
>actual name as different workflows would want to have different features for 
>an internal/external submodule.

I didn't want to open up that one, but yes. That makes sense. However, I don't 
like overloading what "external" means or might mean in the future. Would you 
consider a distinct Boolean for that, like inherit-branch=true?

Cheers,
Randall



RE: Add an option to automatically submodule update on checkout

2017-05-08 Thread Randall S. Becker
On May 8, 2017 12:25 PM, Stefan Beller wrote:
>On Mon, May 8, 2017 at 7:42 AM, Randall S. Becker <rsbec...@nexbridge.com> 
>wrote:
>> On May 6, 2017 4:38 AM Ciro Santilli wrote:
>>> This is a must if you are working with submodules, otherwise every 
>>> git checkout requires a git submodule update, and you forget it, and 
>>> things break, and you understand, and you go to stack overflow 
>>> questions 
>>> http://stackoverflow.com/questions/22328053/why-doesnt-git-checkout-automatically-do-git-submodule-update-recursive
>>> http://stackoverflow.com/questions/4611512/is-there-a-way-to-make-git-pull-automatically-update-submodules
>>> and you give up and create aliases :-)

> The upcoming release (2.13) will have "git checkout --recurse-submodules", 
> which will checkout the submodules
> at the commit as recorded in the superproject.
> I plan to add an option "submodule.recurse" (name is subject to 
> bikeshedding), which would make the --recurse-submodules
> flag given by default for all commands that support the flag. (Currently 
> cooking we have reset --recurse-submodules, already
> existing there is push/pull).

Brilliant! 

>> I rather like the concept of supporting --recurse-submodules. The complexity 
>> is that the branches in all submodules all have to have compatible 
>> semantics when doing the checkout, which is by no means guaranteed. In the 
>> scenario where you are including a submodule from a third-party (very 
>> common - see gnulib), the branches likely won't be there, so you have a 
>> high probability of having the command fail or produce the same results as 
>> currently exists if you allow the checkout even with problems (another 
>> option?). If you have control of everything, then this makes sense.

>I am trying to give the use case of having control over everything (or rather 
>mixed) more thought as well, e.g. "checkout --recurse-submodules -b <branch>" 
>may want to create the branches in a subset of submodules as well.

I have to admit that I just assumed it would have to work that way or this 
would not be particularly useful. However, in thinking about it, we might want 
to limit the depth of how far -b <branch> takes effect. If the super module 
brings in submodules entirely within control of the development group, having 
-b <branch> apply down to leaf submodules makes sense (in some policies). 
However, if some submodules span out to, say, gnulib, that might not make 
particular sense. Some downward limit might be appropriate. Perhaps, in the 
submodule ref, you might want to qualify it as : (but the impact of that is 
probably and admittedly pretty horrid). I hesitate to suggest a numeric limit, 
as that assumes that submodules are organized in a balanced tree - which is 
axiomatically unreasonable. Maybe something in .git/config, like

[branch "topic*"]
submodules=a,b,c

But I suspect that would make things even more confusing.

Cheers,
Randall



RE: Feature request: Please add support to stash specific files

2017-06-06 Thread Randall S. Becker
-Original Message-
On June 6, 2017 9:23 AM, rajdeep mondal wrote:
>Work around found in:
>https://stackoverflow.com/questions/3040833/stash-only-one-file-out-of-multiple-files-that-have-changed-with-git
>Workaround is not very optimal. Please add this support to git.

Instead of using stash as part of your normal process, consider using topic 
branches instead. Before working, switch to a new topic branch. If you forget, 
stash, switch, apply, then go forth. While on the topic branch, you can use add 
and commit on a hunk or file basis to satisfy what appears to be the 
requirement here. You can then merge the desired commits from your topic branch 
into wherever you want to merge them either preserving the commit or by 
squashing commits together.
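The "forgot to branch first" recovery described above looks like this in practice (temporary path; the branch name and identity are placeholders):

```shell
set -e
d=$(mktemp -d)
cd "$d" && git init -q .
g() { git -c user.email=you@example.com -c user.name=Demo "$@"; }
echo v1 > notes.txt && git add notes.txt && g commit -q -m "base"
echo v2 > notes.txt                 # edited on the wrong branch
git stash -q                        # park the change...
git checkout -q -b topic/feature    # ...switch to a topic branch...
git stash pop -q                    # ...and carry on
# git add -p would let you stage hunk-by-hunk; here the whole file goes in:
git add notes.txt
g commit -q -m "feature change"
```

The hunk-level staging with `git add -p` is what covers the original "stash specific files" request without involving the stash at all.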

In my shop, stash is only used for the "I forgot to switch to a topic branch, 
oops" process. I try to encourage people not to use it. I also discourage 
squashed commits, but that's because I like knowing what's in my sausages.

Cheers,
Randall




[Question] Documenting platform implications on CVE to git

2017-10-06 Thread Randall S. Becker
Hi All,

I wonder whether there is some mechanism for providing official responses
from platform ports relating to security CVE reports, like CVE-2017-14867.
For example, the Perl implementation on HPE NonStop does not include the SCM
module, so commands related to cvsserver may not be available - one thing
still to be verified, so that is question #1. But the real question (#2) is:
where would one most appropriately document issues like this, consistent with
other platform responses relating to git?

Thanks,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.





RE: [Question] Documenting platform implications on CVE to git

2017-10-06 Thread Randall S. Becker
-Original Message-
On October 6, 2017 6:51 PM, Jonathan Nieder wrote
>Randall S. Becker wrote:
>> I wonder whether there is some mechanism for providing official 
>> responses from platform ports relating to security CVE reports, like
CVE-2017-14867.

>This question is too abstract for me.  Can you say more concretely what you
>are trying to do?  E.g. are you asking how you would communicate to users of
>your port that CVE-2017-14867 does not apply to them?  Or are you asking
>where to start a conversation about who a bug applies to?  Or something else?

The first one, mostly. When looking at CVE-2017-14867, there are places like
https://nvd.nist.gov/vuln/detail/CVE-2017-14867 where the issue is
discussed. It provides hyperlinks to various platform discussions.
Unfortunately for me, I am not an HPE employee - and even if I was, there is
no specific site where I can publicly discuss the vulnerability. I'm looking
to the group here for advice on how to get the word out that it does not
appear to apply to the HPE NonStop Git port. The question of where to best
do that for any CVE pertaining to git as applicable to the NonStop Port is
question #1.

Question #2 - probably more relevant to the specific issue and this group -
is whether the vulnerability is contained to Git's use of Perl SCM and since
NonStop's Perl does not support SCM, the vulnerability may not be relevant,
but I'm not really enough of a Perl guru to make that determination.

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.





RE: [Question] Documenting platform implications on CVE to git

2017-10-06 Thread Randall S. Becker
-Original Message-
On October 6, 2017 7:45 PM Jonathan Nieder wrote:
>Randall S. Becker wrote:
>> The first one, mostly. When looking at CVE-2017-14867, there are 
>> places like
>> https://nvd.nist.gov/vuln/detail/CVE-2017-14867 where the issue is 
>> discussed. It provides hyperlinks to various platform discussions.
>> Unfortunately for me, I am not an HPE employee - and even if I was, 
>> there is no specific site where I can publicly discuss the 
>> vulnerability. I'm looking to the group here for advice on how to get 
>> the word out that it does not appear to apply to the HPE NonStop Git 
>> port. The question of where to best do that for any CVE pertaining to 
>> git as applicable to the NonStop Port is question #1.

>How do people find out about the HPE NonStop Git port?  Where is it 
>distributed? 

It is available at 
http://ituglib.connect-community.org/apps/Ituglib/SrchOpenSrcLib.jsf but we 
have limited abilities to modify anything but that page. There is a brief 
release note but nothing sufficient to have a discussion. 

>Does that distribution point allow you to publish release notes or other 
>documentation?

Not enough for what I think are our needs.
 
> Do you have a web page?  That's another place you can publish information. 
> http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-14867
> links to lots of resources that are not from the Git project.
> The oss-security list <http://www.openwall.com/lists/oss-security/>
> allows anyone to participate.  It is a place that people often collaborate to 
> figure out the impact of a published
> vulnerability, how to mitigate it, etc.  There are other similar mailing 
> lists elsewhere, too.

Thanks, I'll take these to the team.

>> Question #2 - probably more relevant to the specific issue and this 
>> group - is whether the vulnerability is contained to Git's use of Perl 
>> SCM and since NonStop's Perl does not support SCM, the vulnerability 
>> may not be relevant, but I'm not really enough of a Perl guru to make that 
>> determination.

>What is Perl SCM?  I don't know what you're talking about.

Base Perl does not have a lot of capability beyond a simple interpreter. The 
CPAN project, https://www.cpan.org/, provides implementations of useful 
modules, including Source Code Management (SCM) modules that enable things 
like cvsserver and Mercurial to run (AFAIK). Without the ability to 
arbitrarily add CPAN modules - which is an issue on NonStop - Perl tends to 
be a bit handcuffed and blindfolded.

Thanks for the suggestions,
Randall



RE: Auto adding changed files

2017-10-09 Thread Randall S. Becker
-Original Message-
On October 9, 2017 3:35 PM Sascha Manns wrote:
>if I'm in a git repo and change a file, it is listed in git status. But I have 
>to add the file manually and commit it.

$ git commit -a

From the git commit help:

   by using the -a switch with the commit command to automatically
   "add" changes from all known files (i.e. all files that are already
   listed in the index) and to automatically "rm" files in the index
   that have been removed from the working tree, and then perform the
   actual commit;
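A runnable sketch of that behaviour, including the one caveat worth knowing (temporary path, demo identity; -a only stages files git already tracks):

```shell
set -e
d=$(mktemp -d)
cd "$d" && git init -q .
g() { git -c user.email=you@example.com -c user.name=Demo "$@"; }
echo v1 > a.txt && git add a.txt && g commit -q -m "add a"
echo v2 > a.txt                 # modified, not staged
g commit -q -a -m "update a"    # -a stages all tracked changes first
echo new > b.txt                # untracked: -a will NOT pick this up
```

Brand-new files still need an explicit `git add` once before -a starts carrying them.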

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.





[Bug/Solution] Git hangs in compat/poll/poll.c on HPE NonStop

2017-09-28 Thread Randall S. Becker
Hi Team,

After a whole lot of investigating, we (it is a large "we") have discovered
the reason for the hang we occasionally get in git-upload-pack on HPE
NonStop servers - reported here well over a year ago. This resulted from a
subtle check that the operating system does on file descriptors. When it
sees random values in pfd[i].revents, it sometimes thinks its dealing with a
TTY and well, things end badly after that. There is a non-destructive fix
that I would like to propose for this that I have already tested. Sadly, I
have no email mechanism where our repo resides for a real patch message. The
patch is based on 2.3.7 (16018ae), but should be applicable forward. We have
held off moving to a more recent version until resolving this, so that's
next on our plan.

--- a/compat/poll/poll.c
+++ b/compat/poll/poll.c
@@ -438,6 +438,10 @@ poll (struct pollfd *pfd, nfds_t nfd, int timeout)
           pfd[i].revents = happened;
           rc++;
         }
+      else
+        {
+          pfd[i].revents = 0;
+        }
     }
 
   return rc;

Sincerely,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.



RE: [PATCH v4 4/4] worktree: make add dwim

2017-11-25 Thread Randall S. Becker
On November 25, 2017 3:06 PM Thomas Gummerer wrote:

>However we currently document one behaviour, which I would like to change
>(I usually have branches without a / in them that I want to look at). So in
>that case we are a bit worried about backwards compatibility, and how this
>will affect current users that have a certain expectation of how the
>command is supposed to work, hence the discussion of whether to hide the
>new behaviour behind a flag or not.
>
>Either way, if we do put the behaviour behind a flag, I'll also add a
>configuration variable, which can be set to enable the new behaviour so one
>doesn't have to type out the flag all the time.

To be consistent with other commands, you could put path after -- and the
ambiguity with refs containing '/' goes away, as refs before the -- would
always be considered refs while after you have paths.

What I don't like is the current add syntax of:

git worktree add [-f] [--detach] [--checkout] [--lock] [-b <new-branch>]
<path> [<branch>]

where the path precedes the branch, making things a bit icky. It might be
better to have an alternate syntax of:

git worktree add [-f] [--detach] [--checkout] [--lock] [-b <new-branch>]
<path> [<branch>]
git worktree add [-f] [--detach] [--checkout] [--lock] [-b <new-branch>]
[<branch>] -- <path>

But since we only have one path, that may be redundant. Just a thought, as
-- avoids a lot of interpretation evils. While we're here, I wonder whether
<branch> should be changed to <commit-ish> for more general use. Consider
release identifiers using tags, and using the tag instead of a branch to
define the commit on which the worktree is based.
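For reference, the path-then-start-point ordering already accepts a tag where a branch would go; a runnable sketch (temporary paths; the tag and branch names are assumptions):

```shell
set -e
d=$(mktemp -d)
git init -q -b master "$d/repo"
git -C "$d/repo" -c user.email=you@example.com -c user.name=Demo \
    commit -q --allow-empty -m "base"
git -C "$d/repo" tag v1.0
# <path> comes before the start point; the tag stands in for a branch:
git -C "$d/repo" worktree add -b topic/x "$d/wt-x" v1.0
```

This is the release-identifier use case: the new worktree's branch is rooted at the tagged commit.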

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.







RE: Clone repository computer A to remote B doenst work

2017-11-25 Thread Randall S. Becker
On November 25, 2017 4:31 AM Roberto Garcia wrote:

>I'm trying to clone, on Windows, a git repository to another remote machine 
>(a Linux-based NAS).
>I have git for Windows installed but did not install anything on the other 
>remote machine (the Linux-based NAS).

You have two choices:
1. Install git on your Linux machine, which you probably can't do if it's a 
pure NAS outside of your control.
2. Run everything off Windows as git in local mode. Mount the NAS as a windows 
drive. In a command terminal:
a. cd X:\Share\repo.git #you'll have to mkdir this
b. git init --bare #creates a new empty repo on your NAS
c. cd C:\MyStuff #where you keep your clones
d. git clone -l X:\Share\repo.git #clone the bare repository
e. Start adding stuff (git add, git commit)
f. git push   # to move commits to your NAS repo.

Then you have your relationship and can push/pull from your NAS entirely from 
within Windows executing objects. Change directories and drive letters 
accordingly. -l means local, so git won't be starting any git-upload-pack 
processes remotely. Variations on this should work.

Good luck.

Randall



RE: Re: Unify annotated and non-annotated tags

2017-11-24 Thread Randall S. Becker
On November 24, 2017 4:52 AM anatoly techtonik wrote:
>On Thu, Nov 23, 2017 at 6:08 PM, Randall S. Becker <rsbec...@nexbridge.com> 
>wrote:
>> On 2017-11-23 02:31 (GMT-05:00) anatoly techtonik wrote
>>>Subject: Re: Unify annotated and non-annotated tags On Sat, Nov 11, 
>>>2017 at 5:06 AM, Junio C Hamano <gits...@pobox.com> wrote:
>>>> Igor Djordjevic <igor.d.djordje...@gmail.com> writes:
>>>>
>>>>> If you would like to mimic output of "git show-ref", repeating 
>>>>> commits for each tag pointing to it and showing full tag name as 
>>>>> well, you could do something like this, for example:
>>>>>
>>>>>   for tag in $(git for-each-ref --format="%(refname)" refs/tags)
>>>>>   do
>>>>>   printf '%s %s\n' "$(git rev-parse $tag^0)" "$tag"
>>>>>   done
>>>>>
>>>>>
>>>>> Hope that helps a bit.
>>>>
>>>> If you use for-each-ref's --format option, you could do something 
>>>> like (pardon a long line):
>>>>
>>>> git for-each-ref --format='%(if)%(*objectname)%(then)%(*objectname)%(else)%(objectname)%(end) %(refname)' refs/tags
>>>>
>>>> without any loop, I would think.
>>>Thanks. That helps.
>>>So my proposal is to get rid of non-annotated tags, so to get all tags 
>>>with commits that they point to, one would use:
>>>git for-each-ref --format='%(*objectname) %(refname)' refs/tags
>>>For so-called non-annotated tags just leave the message empty.
>>>I don't see why anyone would need non-annotated tags though.
>>
>> I have seen non-annotated tags used in automations (not necessarily well 
>> written ones) that
>> create tags as a record of automation activity. I am not sure we should be 
>> writing off the
>> concept of unannotated tags entirely. This may cause breakage based on 
>> existing expectations
>> of how tags work at present. My take is that tags should include whodunnit, 
>> even if it's just the
>> version of the automation being used, but I don't always get to have my 
>> wishes fulfilled. In
>> essence, whatever behaviour a non-annotated tag has now may need to be 
>> emulated in
>> future even if reconciliation happens. An option to preserve empty tag 
>> compatibility with
>> pre-2.16 behaviour, perhaps? Sadly, I cannot supply examples of this usage 
>> based on a
>> human memory page-fault and NDAs.
>Are there any windows for backward-compatibility breaks, or is git doomed to 
>preserve it forever?
>Automation without support won't survive for long, and people who rely on that,
>like Chromium team, usually hard set the version used.

Just pointing out that changing the semantics of a basic data item in git may 
have unintended consequences.



RE: Re: Unify annotated and non-annotated tags

2017-11-23 Thread Randall S. Becker
On 2017-11-23 02:31 (GMT-05:00) anatoly techtonik wrote
>Subject: Re: Unify annotated and non-annotated tags 
>On Sat, Nov 11, 2017 at 5:06 AM, Junio C Hamano  wrote:
>> Igor Djordjevic  writes:
>>
>>> If you would like to mimic output of "git show-ref", repeating
>>> commits for each tag pointing to it and showing full tag name as
>>> well, you could do something like this, for example:
>>>
>>>   for tag in $(git for-each-ref --format="%(refname)" refs/tags)
>>>   do
>>>   printf '%s %s\n' "$(git rev-parse $tag^0)" "$tag"
>>>   done
>>>
>>>
>>> Hope that helps a bit.
>>
>> If you use for-each-ref's --format option, you could do something
>> like (pardon a long line):
>>
>> git for-each-ref 
>> --format='%(if)%(*objectname)%(then)%(*objectname)%(else)%(objectname)%(end) 
>> %(refname)' refs/tags
>>
>> without any loop, I would think.
>Thanks. That helps.
>So my proposal is to get rid of non-annotated tags, so to get all
>tags with commits that they point to, one would use:
>git for-each-ref --format='%(*objectname) %(refname)' refs/tags
>For so-called non-annotated tags just leave the message empty.
>I don't see why anyone would need non-annotated tags though.

I have seen non-annotated tags used in automations (not necessarily well 
written ones) that create tags as a record of automation activity. I am not 
sure we should be writing off the concept of unannotated tags entirely. This 
may cause breakage based on existing expectations of how tags work at present. 
My take is that tags should include whodunnit, even if it's just the version of 
the automation being used, but I don't always get to have my wishes fulfilled. 
In essence, whatever behaviour a non-annotated tag has now may need to be 
emulated in future even if reconciliation happens. An option to preserve empty 
tag compatibility with pre-2.16 behaviour, perhaps? Sadly, I cannot supply 
examples of this usage based on a human memory page-fault and NDAs.
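The two tag flavours under discussion are distinguishable by object type; a runnable sketch (temporary path, demo identity assumed):

```shell
set -e
d=$(mktemp -d)
cd "$d" && git init -q .
g() { git -c user.email=you@example.com -c user.name=Demo "$@"; }
g commit -q --allow-empty -m "base"
git tag light                     # non-annotated: the ref points at the commit
g tag -a -m "who and why" annot   # annotated: the ref points at a tag object
git for-each-ref --format='%(objecttype) %(refname:short)' refs/tags
```

Any unification proposal would have to decide what `%(objecttype)` (and tools keyed on it) should report for today's lightweight tags.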

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: [RFE] Inverted sparseness

2017-12-03 Thread Randall S. Becker
On December 3, 2017 6:14 PM, Philip Oakley wrote a nugget of wisdom: 
>From: "Randall S. Becker" <rsbec...@nexbridge.com>
>Sent: Friday, December 01, 2017 6:31 PM
>> On December 1, 2017 1:19 PM, Jeff Hostetler wrote:
>>>On 12/1/2017 12:21 PM, Randall S. Becker wrote:
>>>> I recently encountered a really strange use-case relating to sparse 
>>>> clone/fetch that is really backwards from the discussion that has 
>>>> been going on, and well, I'm a bit embarrassed to bring it up, but I 
>>>> have no good solution including building a separate data store that 
>>>> will end up inconsistent with repositories (a bad solution).  The 
>>>> use-case is as
>>>> follows:
>>>>
>>>> Given a backbone of multiple git repositories spread across an 
>>>> organization with a server farm and upstream vendors.
>>>> The vendor delivers code by having the client perform git pull into 
>>>> a specific branch.
>>>> The customer may take the code as is or merge in customizations.
>>>> The vendor wants to know exactly what commit of theirs is installed 
>>>> on each server, in near real time.
>>>> The customer is willing to push the commit-ish to the vendor's 
>>>> upstream repo but does not want, by default, to share the actual 
>>>> commit contents for security reasons.
>>>> Realistically, the vendor needs to know that their own commit id was 
>>>> put somewhere (process exists to track this, so not part of the 
>>>> use-case) and whether there is a subsequent commit contributed >by 
>>>> the customer, but the content is not relevant initially.
>>>>
>>>> After some time, the vendor may request the commit contents from the 
>>>> customer in order to satisfy support requirements - a.k.a. a defect 
>>>> was found but has to be resolved.
>>>> The customer would then perform a deeper push that looks a lot like 
>>>> a "slightly" symmetrical operation of a deep fetch following a prior 
>>>> sparse fetch to supply the vendor with the specific commit(s).
>>
>>>Perhaps I'm not understanding the subtleties of what you're 
>>>describing, but could you do this with stock git functionality.
>>
>>>Let the vendor publish a "well known branch" for the client.
>>>Let the client pull that and build.
>>>Let the client create a branch set to the same commit that they fetched.
>>>Let the client push that branch as a client-specific branch to the 
>>>vendor to indicate that that is the official release they are based on.
>>
>>>Then the vendor would know the official commit that the client was using.
>> This is the easy part, and it doesn't require anything sparse to exist.
>>
>>>If the client makes local changes, does the vendor really need the SHA 
>>>of those -- without the actual content?
>>>I mean any SHA would do right?  Perhaps let the client create a second 
>>>client-specific branch (set to  the same commit as the first) to 
>>>indicate they had mods.
>>>Later, when the vendor needs the actual client changes, the client 
>>>does a normal push to this 2nd client-specific branch at the vendor.
>>>This would send everything that the client has done to the code since 
>>>the official release.
>>
>> What I should have added to the use-case was that there is a strong 
>> audit requirement (regulatory, actually) involved that the SHA is 
>> exact, immutable, and cannot be substituted or forged (one of the 
>> reasons git is held in such high regard). So, no, I can't arrange a fake 
>> SHA to represent a SHA to be named later. The SHA of the installed commit 
>> is part of the official record of what happened on the specific server, so 
>> I'm stuck with it.
>>
>>>I'm not sure what you mean about "it is inside a tree".
>>
>> m---a---b---c---H1
>>  `---d---H2
>>
>> d would be at a head. b would be inside. Determining content of c is 
>> problematic if b is sparse, so I'm really unsure that any of this is 
>> possible.

>I think I get the jist of your use case. Would I be right that you don't have 
>a true working
>solution yet? i.e. that it's a problem that is almost sorted but falls down at 
>the last step.

>If one pretended that this was a single development shop, and the various 
>vendors, clients
>and customers as being independent devolopers, each of whom is over protective 
>of their
>code, it may give a better view that

RE: [RFE] Inverted sparseness

2017-12-01 Thread Randall S. Becker
On December 1, 2017 1:19 PM, Jeff Hostetler wrote:
>On 12/1/2017 12:21 PM, Randall S. Becker wrote:
>> I recently encountered a really strange use-case relating to sparse 
>> clone/fetch that is really backwards from the discussion that has been going 
>> on, and well, I'm a bit embarrassed to bring it up, but I have no good 
>> solution including building a separate data store that will end up 
>> inconsistent with repositories (a bad solution).  The use-case is as follows:
>> 
>> Given a backbone of multiple git repositories spread across an organization 
>> with a server farm and upstream vendors.
>> The vendor delivers code by having the client perform git pull into a 
>> specific branch.
>> The customer may take the code as is or merge in customizations.
>> The vendor wants to know exactly what commit of theirs is installed on each 
>> server, in near real time.
>> The customer is willing to push the commit-ish to the vendor's upstream repo 
>> but does not want, by default, to share the actual commit contents for 
>> security reasons.
>>  Realistically, the vendor needs to know that their own commit id was 
>> put somewhere (process exists to track this, so not part of the use-case) 
>> and whether there is a subsequent commit contributed >by the customer, but 
>> the content is not relevant initially.
>> 
>> After some time, the vendor may request the commit contents from the 
>> customer in order to satisfy support requirements - a.k.a. a defect was 
>> found but has to be resolved.
>> The customer would then perform a deeper push that looks a lot like a 
>> "slightly" symmetrical operation of a deep fetch following a prior sparse 
>> fetch to supply the vendor with the specific commit(s).

>Perhaps I'm not understanding the subtleties of what you're describing, but 
>could you do this with stock git functionality.

>Let the vendor publish a "well known branch" for the client.
>Let the client pull that and build.
>Let the client create a branch set to the same commit that they fetched.
>Let the client push that branch as a client-specific branch to the vendor to 
>indicate that that is the official release they are based on.

>Then the vendor would know the official commit that the client was using.
This is the easy part, and it doesn't require anything sparse to exist.

>If the client makes local changes, does the vendor really need the SHA of 
>those -- without the actual content?
>I mean any SHA would do right?  Perhaps let the client create a second 
>client-specific branch (set to
> the same commit as the first) to indicate they had mods.
>Later, when the vendor needs the actual client changes, the client does a 
>normal push to this 2nd client-specific branch at the vendor.
>This would send everything that the client has done to the code since the 
>official release.
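Jeff's suggested flow can be sketched with throwaway repositories standing in for the vendor and the client (all names here are illustrative, not part of any proposal):

```shell
#!/bin/sh
# Sketch of the branch-pointer workflow: the client records which vendor
# commit it runs by pushing a client-specific branch, without (initially)
# pushing any local changes.
set -e
G="git -c user.name=demo -c user.email=demo@example.com"
top=$(mktemp -d)

git init -q --bare "$top/vendor.git"
git clone -q "$top/vendor.git" "$top/seed"
(cd "$top/seed" && $G commit -q --allow-empty -m 'official release' &&
 git push -q origin HEAD:release)

# Client pulls the well-known branch and pushes a client-specific pointer.
git clone -q -b release "$top/vendor.git" "$top/client"
cd "$top/client"
git branch client-acme-installed release
git push -q origin client-acme-installed

# The vendor can now see exactly which commit the client installed.
git ls-remote origin refs/heads/client-acme-installed
```

The pointer branch costs the client nothing content-wise here because the commit it names is one the vendor already has; the audit problem Randall raises starts only once the pointer must name a commit whose content stays private.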

What I should have added to the use-case was that there is a strong audit 
requirement (regulatory, actually) involved that the SHA is exact, immutable, 
and cannot be substituted or forged (one of the reasons git is held in such 
high regard). So, no, I can't arrange a fake SHA to represent a SHA to be named 
later. The SHA of the installed commit is part of the official record of what 
happened on the specific server, so I'm stuck with it.

>I'm not sure what you mean about "it is inside a tree".

m---a---b---c---H1
  `---d---H2

d would be at a head. b would be inside. Determining content of c is 
problematic if b is sparse, so I'm really unsure that any of this is possible.
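The shape above can be reproduced with a few empty commits for experimentation; H1 and H2 are stand-in branch names, and the sketch assumes nothing beyond stock git:

```shell
#!/bin/sh
# Sketch: build m---a---b---c---H1 with the side line m---d---H2.
set -e
G="git -c user.name=demo -c user.email=demo@example.com"
dir=$(mktemp -d)
cd "$dir"
git init -q
git checkout -q -b H1
for c in m a b c; do $G commit -q --allow-empty -m "$c"; done

# branch off the root commit (m) and add d, giving the second head H2
git checkout -q -b H2 "$(git rev-list --max-parents=0 H1)"
$G commit -q --allow-empty -m d

git log --oneline --graph --all
```

Here d is a head (tip of H2) while b sits strictly inside H1's history, which is the distinction the sparseness discussion turns on.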

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.





RE: [PATCH v3] Makefile: replace perl/Makefile.PL with simple make rules

2017-12-12 Thread Randall S. Becker
-Original Message-
On December 10, 2017 4:14 PM, Ævar Arnfjörð Bjarmason wrote:
Subject: [PATCH v3] Makefile: replace perl/Makefile.PL with simple make rules

>Replace the perl/Makefile.PL and the fallback perl/Makefile used under 
>NO_PERL_MAKEMAKER=NoThanks with a much simpler implementation heavily inspired 
>by how the i18n infrastructure's build process works[1].
>The reason for having the Makefile.PL in the first place is that it was 
>initially[2] building a perl C binding to interface with libgit; this 
>functionality was removed[3] before Git.pm ever made it to the master 
>branch.


I would like to request that we be careful that the git builds do not 
introduce arbitrary dependencies on CPAN. Some platforms (I can think of one 
off the top, being NonStop) do not provide for arbitrary additions to the 
supplied perl implementation as of yet. The assumption about being able to add 
CPAN modules may apply on some platforms but is not a general capability. I am 
humbly requesting that caution be used when adding dependencies. Being 
non-$DAYJOB responsible for the git port for NonStop, this scares me a bit, but 
I and my group can help validate the available modules used for builds.

Note: we do not yet have CPAN's SCM, so we can't and don't use perl for access 
to git anyway - much as I've tried to change that.

Please keep build dependencies to a minimum.

Thanks from me and my whole team.

Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.





RE: [Proposed] Externalize man/html ref for quick-install-man and quick-install-html

2017-12-12 Thread Randall S. Becker
-Original Message-
On December 12, 2017 6:18 PM Junio C Hamano wrote:
Subject: Re: [Proposed] Externalize man/html ref for quick-install-man and 
quick-install-html
>"Randall S. Becker" <rsbec...@nexbridge.com> writes:
>> I can send you a pull request on github, if you want 
>I don't.  It's not that I can or cannot take a pull request.  I just do not 
>want to queue anything that is not reviwed on list.
No worries.

>I however could queue this (typed to mimic what I saw in your message), but 
>you'd need to say what you see here is OK (and preferably, you applied this 
>version and it tested fine); I may have made a typo or two, and I would really 
>prefer extra set of eyes.
Yes, needed. The lines wrapped in Documentation/Makefile - each change in 
quick-install-man/html should be exactly one line:

quick-install-man: require-manrepo
-	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO) $(DESTDIR)$(mandir)
+	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO) $(DESTDIR)$(mandir) $(GIT_MAN_REF)

And here

 quick-install-html: require-htmlrepo
-	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(HTML_REPO) $(DESTDIR)$(htmldir)
+	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(HTML_REPO) $(DESTDIR)$(htmldir) $(GIT_MAN_REF)

And otherwise please consider it signed off.

Signed-off-by: Randall S. Becker <randall.bec...@nexbridge.ca>

-- >8 --
From: "Randall S. Becker" <rsbec...@nexbridge.com>
Date: Sat, 9 Dec 2017 17:07:57 -0500
Subject: [PATCH] install-doc-quick: allow specifying what ref to install

We allow the builders, who want to install the preformatted manpages and html 
documents, to specify where in their filesystem these two repositories are 
stored.  Let them also specify which ref (or even a
revision) to grab the preformatted material from.
---
 Documentation/Makefile | 5 +++--
 Documentation/install-doc-quick.sh | 9 +
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/Documentation/Makefile b/Documentation/Makefile
index 2ab65561af..4ae9ba5c86 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -39,6 +39,7 @@ MAN7_TXT += gitworkflows.txt
 MAN_TXT = $(MAN1_TXT) $(MAN5_TXT) $(MAN7_TXT)
 MAN_XML = $(patsubst %.txt,%.xml,$(MAN_TXT))
 MAN_HTML = $(patsubst %.txt,%.html,$(MAN_TXT))
+GIT_MAN_REF = master
 
 OBSOLETE_HTML += everyday.html
 OBSOLETE_HTML += git-remote-helpers.html
@@ -437,14 +438,14 @@ require-manrepo::
 	then echo "git-manpages repository must exist at $(MAN_REPO)"; exit 1; \
 	fi
 
 quick-install-man: require-manrepo
-	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO) $(DESTDIR)$(mandir)
+	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO) $(DESTDIR)$(mandir) $(GIT_MAN_REF)
 
 require-htmlrepo::
 	@if test ! -d $(HTML_REPO); \
 	then echo "git-htmldocs repository must exist at $(HTML_REPO)"; exit 1; \
 	fi
 
 quick-install-html: require-htmlrepo
-	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(HTML_REPO) $(DESTDIR)$(htmldir)
+	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(HTML_REPO) $(DESTDIR)$(htmldir) $(GIT_MAN_REF)
 
 print-man1:
 	@for i in $(MAN1_TXT); do echo $$i; done
diff --git a/Documentation/install-doc-quick.sh b/Documentation/install-doc-quick.sh
index 327f69bcf5..17231d8e59 100755
--- a/Documentation/install-doc-quick.sh
+++ b/Documentation/install-doc-quick.sh
@@ -3,11 +3,12 @@
 
 repository=${1?repository}
 destdir=${2?destination}
+GIT_MAN_REF=${3?master}
 
-head=master GIT_DIR=
+GIT_DIR=
 for d in "$repository/.git" "$repository"
 do
-   if GIT_DIR="$d" git rev-parse refs/heads/master >/dev/null 2>&1
+   if GIT_DIR="$d" git rev-parse "$GIT_MAN_REF" >/dev/null 2>&1
then
GIT_DIR="$d"
export GIT_DIR
@@ -27,12 +28,12 @@ export GIT_INDEX_FILE GIT_WORK_TREE
 rm -f "$GIT_INDEX_FILE"
 trap 'rm -f "$GIT_INDEX_FILE"' 0
 
-git read-tree $head
+git read-tree "$GIT_MAN_REF"
 git checkout-index -a -f --prefix="$destdir"/
 
 if test -n "$GZ"
 then
-   git ls-tree -r --name-only $head |
+   git ls-tree -r --name-only "$GIT_MAN_REF" |
xargs printf "$destdir/%s\n" |
xargs gzip -f
 fi
--
2.15.1-525-g09180b8600
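For readers unfamiliar with the script being patched, its core mechanism is a temporary index plus checkout-index --prefix, which exports any ref of a repository into a destination directory; a reduced sketch against a stand-in manpages repository (paths and contents are illustrative):

```shell
#!/bin/sh
# Sketch: the export mechanism install-doc-quick.sh relies on.
set -e
G="git -c user.name=demo -c user.email=demo@example.com"
top=$(mktemp -d)

git init -q "$top/manpages"            # stand-in for the git-manpages repo
(cd "$top/manpages" &&
 echo 'man page text' > git.1 &&
 git add git.1 &&
 $G commit -q -m 'manpages drop')

GIT_MAN_REF=$(git -C "$top/manpages" symbolic-ref --short HEAD)  # the ref the patch makes configurable

GIT_DIR="$top/manpages/.git"
GIT_INDEX_FILE="$top/tmp-index"
GIT_WORK_TREE="$top"
export GIT_DIR GIT_INDEX_FILE GIT_WORK_TREE

git read-tree "$GIT_MAN_REF"           # populate the throwaway index
git checkout-index -a -f --prefix="$top/dest/"
ls "$top/dest"
```

The patch's only real change is that GIT_MAN_REF, rather than a hard-coded master, names the tree being exported.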



RE: [Proposed] Externalize man/html ref for quick-install-man and quick-install-html

2017-12-12 Thread Randall S. Becker
On December 12, 2017 6:40 PM Junio C Hamano wrote to my own embarrassment:
"Randall S. Becker" <rsbec...@nexbridge.com> writes:

>> Yes, needed. The lines wrapped in Documentation/Makefile - each change 
>> in quick-install-man/html should be exactly one line:
>>
>> quick-install-man: require-manrepo
>> -'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO) 
>> $(DESTDIR)$(mandir)
>> +'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO) 
>> +$(DESTDIR)$(mandir) $(GIT_MAN_REF)
>>  
>> And here
>>
>>  quick-install-html: require-htmlrepo
>> -'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(HTML_REPO) 
>> $(DESTDIR)$(htmldir)
>> +'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(HTML_REPO) 
>> +$(DESTDIR)$(htmldir) $(GIT_MAN_REF)>

>To everybody else who did not complain that what I sent was line-wrapped, the 
>message should be showing like this:
>https://public-inbox.org/git/xmqqtvwvy1rh@gitster.mtv.corp.google.com/
It is correct at the above link. My mailer is Outlook 2016... so... yeah.

>Perhaps the mail program on your receiving end is mangling what you got from 
>the mailing list, giving you a line-wrapped version.
Yes it is. It loves mangling. Nice to see it mangled it again ☹. Porting 
sendmail was on my list of things to do, but pretty far down.

>It also unfortunately makes me suspect that you didn't actually have a chance 
>to apply the patch mechanically and make sure it works for you due to mail 
>mangling at your end X-<.
I have no such capability on the system where the changes were made, nor even 
with Outlook on my own local Windows dev box. I've tried my mac and linux 
machines but can't connect up to my (bleep) mailer from those without creating 
more (bleep). It's either that or I'm too close to the holidays.

>> And otherwise please consider it signed off.
>Will do, thanks.





RE: Need help migrating workflow from svn to git.

2017-12-14 Thread Randall S. Becker
> On December 14, 2017 8:10 AM, Josef Wolf wrote:
> Subject: Need help migrating workflow from svn to git.
> 
> Hello folks,
> 
> I am wondering whether/how my mode of work for a specific project
> (currently based on SVN) could be transferred to git.
> 
> I have a repository for maintaining configuration of hosts. This
> repository contains several hundred scripts. Most of those scripts
> don't depend on each other.
> 
> Every machine has a working copy of the repository in a specific
> directory. A cron job (running every 15 minutes) executes "svn update"
> and executes the scripts which are contained in this working copy.
> 
> This way, I can commit changes to the main repository and all the hosts
> will "download" and adopt by executing the newest revision of those
> scripts. (The scripts need to be idempotent, but this is a different
> topic).
> 
> NORMALLY, there are no local modifications in the working copy. Thus,
> conflicts can not happen. Everything works fine.
> 
> Sometimes, I need to fix a problem on some host or need to implement a
> new feature. For this, I go to the working copy of a host where the
> change needs to be done and start hacking. With svn, I don't need to
> stop the cron job. "svn update" will happily merge any incoming changes
> and leave alone the files which were not modified upstream. Conflicts
> with my local modifications which I am currently hacking on are
> extremely rare, because the scripts are pretty much independent. So I'm
> pretty much happy with this mode of operation.
> 
> With git, by contrast, this won't work. Git will refuse to pull
> anything as long as there are ANY local modifications. The cron job
> would need to
> 
>git stash
>git pull
>git stash pop
> 
> But this will temporarily remove my local modifications. If I happen to
> do a test run at this time, the test run would NOT contain the local
> modifications which I was about to test. Even worse: if I happen to
> save one of the modified files while the modifications are in the
> stash, the "git stash pop" will definitely cause a conflict, although
> nothing really changed.
> 
> So, how would I get this workflow with git? Is it possible to emulate the
> behavior of "svn update"?
> 
> Any ideas?

You might want to consider a slight modification to your approach as
follows.
Instead of using git pull, use git fetch.
Have each system on its own branch (sys1 = my-sys1-branch, for example) so
you can track who has what.
In your scripts, consider:
    git fetch
    if nothing changed, done
    git status
    if no changes: git merge --ff master && git push origin my-sys1-branch && done
    if changes: send an email whining about the changes; your script could
    then (depending on your environment) git commit -a && git merge &&
    git push origin my-sys1-branch && done

This would allow you to track the condition of each system at your single
upstream repository. 
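The steps above could be sketched as a shell function for the cron job to call from inside the host's working copy; the branch layout (origin/master as the shared trunk, one branch per host) and the mail command are assumptions, and --ff-only is used rather than a plain merge so a host never creates a merge commit on its own:

```shell
#!/bin/sh
# Sketch: per-host update step, run from cron inside the working copy.
update_host() {
    branch=$(git symbolic-ref --short HEAD) || return 1

    git fetch -q origin || return 1
    if git merge-base --is-ancestor origin/master HEAD
    then
        return 0                       # nothing changed upstream; done
    fi

    if test -z "$(git status --porcelain)"
    then
        # clean working copy: fast-forward, then record this host's state
        git merge -q --ff-only origin/master &&
        git push -q origin "$branch"
    else
        # local hacking in progress: leave the tree alone and whine
        echo "local changes on $branch; skipping merge" >&2
        # mail -s "uncommitted changes on $(hostname)" admin@example.com ...
        return 1
    fi
}
```

Because the function refuses to touch a dirty tree, in-progress hacking on one host simply pauses that host's updates instead of fighting a stash, which is the svn-update behaviour the poster wants to keep.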

Just my $0.02

Cheers,
Randall
-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442)
-- In my real life, I talk too much.





RE: feature-request: git "cp" like there is git mv.

2017-12-13 Thread Randall S. Becker
-Original Message-
On December 13, 2017 11:40 AM Johannes Schindelin wrote:
>On Tue, 12 Dec 2017, Simon Doodkin wrote:
>> please develop a new feature, git "cp" like there is git mv 
>> tomovefile1 tofile2 (to save space).
>> there is a solution in https://stackoverflow.com/a/44036771/466363
>> however, it is not single easy command.
>This is not how this project works. The idea is that it is Open Source, so 
>that you can develop this feature yourself, and contribute a patch.

Agree with Johannes. Let's help though, to quantify the requirements so that
Simon can get this right. I'm putting my tyrannical repository manager hat
on here rather than developer so...

Are you looking to have git cp copy the entire history of tomovefile1 to
tofile2 or just copy the content of tomovefile1 to tofile2 and add and/or
commit the file?

In the latter, I see the convenience of this capability. Even so, a simple
cp would copy the content and then you can commit it fairly easily. In the
former, copying the entire history of a file inside the repository is going
to potentially cause tofile2 to appear in old commits where, prior to the git
cp command, the file was not present. In this situation, you are actually
rewriting history and potentially impacting signed commits (which would no
longer pass a signature check, I hope). Stitching repositories is sometimes
done when repairs or reorganization is required, but I'm concerned that this
is opening up a can of worms that breaks the atomicity of commits
(particularly signed ones). What I don't want, for my own teams, is for
members to think that git cp would be a harmless (unless it actually is)
command, rather than a repair/reorg mechanism used for splitting apart a
repository, or copying a file to a new project then splitting selectively.
So, I'm obviously a bit confused about the goal.
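To make the latter reading concrete, here is a throwaway sketch of what a content-only "git cp" amounts to today with plain commands; the file names follow the request, and nothing here is a proposed implementation:

```shell
#!/bin/sh
# Sketch: the "copy the content, then commit" reading of
# `git cp tomovefile1 tofile2`.
set -e
G="git -c user.name=demo -c user.email=demo@example.com"
dir=$(mktemp -d)
cd "$dir"
git init -q
echo 'original content' > tomovefile1
git add tomovefile1
$G commit -q -m 'add tomovefile1'

cp tomovefile1 tofile2           # plain working-tree copy
git add tofile2
$G commit -q -m 'copy tomovefile1 to tofile2'

# the copy's history starts at the copying commit; no old commit changes
git log --oneline -- tofile2
```

Note that no existing commit is rewritten here, which is exactly why this reading is harmless, in contrast to the history-grafting reading discussed above.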

Simon: the stackoverflow post provides a few options on this command. Can
you clarify which particular direction you are interested in?

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.





RE: [RFE] Inverted sparseness (amended)

2017-12-05 Thread Randall S. Becker
On December 3, 2017 6:14 PM, Philip Oakley wrote a nugget of wisdom: 
>From: "Randall S. Becker" <rsbec...@nexbridge.com>
>Sent: Friday, December 01, 2017 6:31 PM
>> On December 1, 2017 1:19 PM, Jeff Hostetler wrote:
>>>On 12/1/2017 12:21 PM, Randall S. Becker wrote:
>>>> I recently encountered a really strange use-case relating to sparse 
>>>> clone/fetch that is really backwards from the discussion that has 
>>>> been going on, and well, I'm a bit embarrassed to bring it up, but 
>>>> I have no good solution including building a separate data store 
>>>> that will end up inconsistent with repositories (a bad solution).  
>>>> The use-case is as
>>>> follows:
>>>>
>>>> Given a backbone of multiple git repositories spread across an 
>>>> organization with a server farm and upstream vendors.
>>>> The vendor delivers code by having the client perform git pull into 
>>>> a specific branch.
>>>> The customer may take the code as is or merge in customizations.
>>>> The vendor wants to know exactly what commit of theirs is installed 
>>>> on each server, in near real time.
>>>> The customer is willing to push the commit-ish to the vendor's 
>>>> upstream repo but does not want, by default, to share the actual 
>>>> commit contents for security reasons.
>>>> Realistically, the vendor needs to know that their own commit id 
>>>> was put somewhere (process exists to track this, so not part of the
>>>> use-case) and whether there is a subsequent commit contributed >by 
>>>> the customer, but the content is not relevant initially.
>>>>
>>>> After some time, the vendor may request the commit contents from 
>>>> the customer in order to satisfy support requirements - a.k.a. a 
>>>> defect was found but has to be resolved.
>>>> The customer would then perform a deeper push that looks a lot like 
>>>> a "slightly" symmetrical operation of a deep fetch following a 
>>>> prior sparse fetch to supply the vendor with the specific commit(s).
>>
>>>Perhaps I'm not understanding the subtleties of what you're 
>>>describing, but could you do this with stock git functionality.
>>
>>>Let the vendor publish a "well known branch" for the client.
>>>Let the client pull that and build.
>>>Let the client create a branch set to the same commit that they fetched.
>>>Let the client push that branch as a client-specific branch to the 
>>>vendor to indicate that that is the official release they are based on.
>>
>>>Then the vendor would know the official commit that the client was using.
>> This is the easy part, and it doesn't require anything sparse to exist.
>>
>>>If the client makes local changes, does the vendor really need the 
>>>SHA of those -- without the actual content?
>>>I mean any SHA would do right?  Perhaps let the client create a 
>>>second client-specific branch (set to  the same commit as the first) 
>>>to indicate they had mods.
>>>Later, when the vendor needs the actual client changes, the client 
>>>does a normal push to this 2nd client-specific branch at the vendor.
>>>This would send everything that the client has done to the code since 
>>>the official release.
>>
>> What I should have added to the use-case was that there is a strong 
>> audit requirement (regulatory, actually) involved that the SHA is 
>> exact, immutable, and cannot be substituted or forged (one of the 
>> reasons git is held in such high regard). So, no, I can't arrange a 
>> fake SHA to represent a SHA to be named later. The SHA of the installed 
>> commit is part of the official record of what happened on the specific 
>> server, so I'm stuck with it.
>>
>>>I'm not sure what you mean about "it is inside a tree".
>>
>> m---a---b---c---H1
>>  `---d---H2
>>
>> d would be at a head. b would be inside. Determining content of c is 
>> problematic if b is sparse, so I'm really unsure that any of this is 
>> possible.

>I think I get the jist of your use case. Would I be right that you 
>don't have a true working solution yet? i.e. that it's a problem that is 
>almost sorted but falls down at the last step.

>If one pretended that this was a single development shop, and the 
>various vendors, clients and customers as being independent devolopers, 
>each of whom is over protective of their code, it may give a better view that 
>maps onto 

Documentation Breakage at 2.5.6

2017-12-05 Thread Randall S. Becker
Hi All,

I'm trying to upgrade the NonStop port from 2.3.7 upward eventually to
2.15.1 and hit a snag on documentation. The xmlto component is a bit new to
me and I hit the following error:

XMLTO git-remote-testgit.1
xmlto: /home/git/git/Documentation/git-remote-testgit.xml does not validate
(status 3)
xmlto: Fix document syntax or use --skip-validation option
I/O error : Attempt to load network entity
http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd
/home/git/git/Documentation/git-remote-testgit.xml:2: warning: failed to
load external entity
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"
"-//OASIS//DTD DocBook XML V4.5//EN"
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"
                                                        ^
I/O error : Attempt to load network entity
http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd
warning: failed to load external entity
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"
validity error : Could not load the external subset
http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd

The --skip-validation option just takes me to a different problem, validating
via a SourceForge URL that appears not to exist anymore, although I had to
modify ./git/Documentation/Makefile, which is vexing.

XMLTO git-remote-testgit.1
I/O error : Attempt to load network entity
http://docbook.sourceforge.net/release/xsl/current/manpages/docbook.xsl
warning: failed to load external entity
"http://docbook.sourceforge.net/release/xsl/current/manpages/docbook.xsl"
compilation error: file /tmp/xmlto-xsl.ie6J8p line 4 element import
xsl:import : unable to load
http://docbook.sourceforge.net/release/xsl/current/manpages/docbook.xsl
Makefile:328: recipe for target 'git-remote-testgit.1' failed

Any advice on getting past this? It would be nice to get git help working
again. An answer of "you need to get past 2.5.6" is ok too as long as I know
where I'm going.

Thanks,
Randall

-- Brief whoami: NonStop developer since approximately
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.





RE: Documentation Breakage at 2.5.6

2017-12-08 Thread Randall S. Becker
-Original Message-
On December 8, 2017 5:29 PM Junio C Hamano wrote:
>"Randall S. Becker" <rsbec...@nexbridge.com> writes:
>> One request to Junio: Would it be possible to tag the commits to align 
>> with the tags in the main repo? That way, I can build a nice little 
>> Jenkins job to automatically fetch the correct commit for man pages 
>> when packaging up a release.
>I am not interested in doing anything more than absolute minimum in the 
>history that records generated cruft.  We already describe the mainline commit 
>object names in the messages; perhaps that is sufficient?
>commit daa88a54a985ed1ef258800c742223c2a8f0caaa
>   Author: Junio C Hamano <gits...@pobox.com>
>   Date:   Wed Dec 6 10:04:03 2017 -0800
>
>   Autogenerated manpages for v2.15.1-354-g95ec6
>The primary reason why I do not want to tag them is because the tree the 
>documentation sources were taken from is *not* the only thing that affects 
>these autogenerated cruft.  The AsciiDoc toolchain that happens to be 
>installed on the box the day I ran the documentation tools is an obvious 
>difference, and I do not want to make them appear any more definitive and 
>official.  "This is *the* manpage for release v2.15.1" is the message I do 
>not want to give.

No worries. I will push on with trying to get asciidoc to build so that I can 
generate the man pages. That probably won't happen soon, so I'll keep 
MacGyvering. I do get that generating is the better solution, but I would rather 
focus my own efforts on keeping up with git (which ports fairly easily and is 
essential to work life) than burning off $DAYJOB hours on asciidoc and/or xmlto.

Cheers,
Randall




RE: Shared clone from worktree directory

2017-12-11 Thread Randall S. Becker
On December 11, 2017 12:02 PM, Marc-André Lureau wrote:
>For better, or worse, I encountered a script doing a git clone --shared from 
>the working directory. However, if clone --shared is run from a worktree, it 
>fails with cryptic errors.
>elmarco@boraha:/tmp/test/wt (wt)$ git worktree list
>/tmp/test 4ae16a0 [master]
>/tmp/test/wt  4ae16a0 [wt]
>elmarco@boraha:/tmp/test/wt (wt)$ git clone --shared . clone-dir
>Cloning into 'clone-dir'...
>done.
>error: object directory /tmp/test/.git/worktrees/wt/objects does not exist; 
>check .git/objects/info/alternates.
>fatal: update_ref failed for ref 'HEAD': cannot update ref
>'refs/heads/wt': trying to write ref 'refs/heads/wt' with nonexistent object 
>4ae16a066ee088d40dbefeaaae7b5578d68b4b51
>fatal: The remote end hung up unexpectedly
>Is this a bug? If not, a nicer error message would be welcome, as well as man 
>page note.

The git worktree add documentation states: "Create <path> and checkout 
<branch> into it. The new working directory is linked to the ***current 
repository***, sharing everything except working directory specific files such 
as HEAD, index, etc. <branch> may also be specified as -; it is synonymous 
with @{-1}."

So I'm going to assume that clone from a worktree is not supported. This sounds 
like either a check is needed to prevent the operation from starting in the 
first place, or the semantics should be changed to allow it. The error message, 
while cryptic, is actually descriptive, because HEAD would not be available in 
a worktree as it is not propagated from the current repository.

If the idea is to support an add worktree from a worktree, I would suggest that 
a new branch go back to the main repository as normal, rather than being added 
to the worktree. I personally get a headache thinking about the prospect of 
having layers of worktrees.
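The underlying layout is easy to inspect; a throwaway sketch showing that a linked worktree has no object store of its own, which is why an alternates path computed from the worktree's git dir cannot exist (paths are illustrative):

```shell
#!/bin/sh
# Sketch: where a linked worktree's git dir and objects actually live.
set -e
G="git -c user.name=demo -c user.email=demo@example.com"
dir=$(mktemp -d)
cd "$dir"
git init -q main
cd main
$G commit -q --allow-empty -m 'first'
git worktree add ../wt -b wt >/dev/null

cd ../wt
git rev-parse --git-dir          # .../main/.git/worktrees/wt (no objects/)
git rev-parse --git-common-dir   # .../main/.git, where the objects live
```

A --shared clone that derives its alternates path from --git-dir instead of --git-common-dir ends up pointing at a nonexistent objects directory, matching the error Marc-André quotes.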

Cheers,
Randall

-- Brief whoami: NonStop developer since approximately 
UNIX(421664400)/NonStop(2112884442) 
-- In my real life, I talk too much.





RE: [RFE] install-doc-quick.sh should accept a commit-ish

2017-12-06 Thread Randall S. Becker
On December 6, 2017 11:40 AM, Junio C Hamano wrote:
>"Randall S. Becker" <rsbec...@nexbridge.com> writes:
>> Having the git-manpages repo available is fantastic for platforms that 
>> cannot easily build documentation on demand, for example, when there are 
>> too many dependencies that do not build properly.
>> It would be really nice to have a version of install-doc-quick.sh to 
>> either:
>> 1. Use whatever version is checked out in git-manpages; or
>> 2. Use the proper commit associated with the git commit being 
>> installed (0a8e923 for v2.6.0 , as an example); or
>> 3. Allow the commit to be passed through the Documentation Makefile on 
>> demand so that any version of documentation can be installed.

>Do you mean something like this so that you can say "not the tip of the 
>master branch but this one"?

> Documentation/install-doc-quick.sh | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)

>diff --git a/Documentation/install-doc-quick.sh b/Documentation/install-doc-quick.sh
>index 327f69bcf5..83764f7537 100755
>--- a/Documentation/install-doc-quick.sh
>+++ b/Documentation/install-doc-quick.sh
>@@ -3,8 +3,9 @@
 
> repository=${1?repository}
> destdir=${2?destination}
>+head=${3-master}
>+GIT_DIR=
 
>-head=master GIT_DIR=
> for d in "$repository/.git" "$repository"
> do
>   if GIT_DIR="$d" git rev-parse refs/heads/master >/dev/null 2>&1

Providing I can pass that through make via something like quick-install-man
head=commit-ish, that's what I'm hoping.
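As an aside for anyone following the `head=${3-master}` idiom above, the
dash and plus expansion forms behave quite differently; a quick sketch with
made-up positional values:

```shell
# ${3-master} uses $3 when it is given and falls back to "master" otherwise,
# which is what a default wants. ${3+master} is the opposite: it expands to
# "master" only when $3 IS set, and to nothing when it is not.
set -- repo dest v2.15.1
echo "${3-master}"   # v2.15.1
echo "${3+master}"   # master
set -- repo dest
echo "${3-master}"   # master
echo "${3+master}"   # (empty)
```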

Cheers,
Randall




[RFE] install-doc-quick.sh should accept a commit-ish

2017-12-06 Thread Randall S. Becker
Other thread attached as context.

Having the git-manpages repo available is fantastic for platforms that cannot 
easily build documentation on demand, for example, when there are too many 
dependencies that do not build properly. 

It would be really nice to have a version of install-doc-quick.sh to either:

1. Use whatever version is checked out in git-manpages; or

2. Use the proper commit associated with the git commit being installed 
(0a8e923 for v2.6.0, as an example); or

3. Allow the commit to be passed through the Documentation Makefile on demand 
so that any version of documentation can be installed.

Thanks,
Randall
P.S. If the idea is liked, I can try to make this happen.

-Original Message-
From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On Behalf Of 
Randall S. Becker
Sent: December 6, 2017 10:43 AM
To: 'Jeff King' <p...@peff.net>; 'Ævar Arnfjörð Bjarmason' <ava...@gmail.com>; 
'Junio C Hamano' <gits...@pobox.com>
Cc: git@vger.kernel.org
Subject: RE: Documentation Breakage at 2.5.6

-Original Message-
On December 6, 2017 3:49 AM, Jeff King wrote:
>On Wed, Dec 06, 2017 at 09:14:57AM +0100, Ævar Arnfjörð Bjarmason wrote:
>> > I'm trying to upgrade the NonStop port from 2.3.7 upward eventually 
>> > to
>> > 2.15.1 and hit a snag on documentation. The xmlto component is a 
>> > bit new to me and I hit the following error:
>Did it work before in v2.3.7? If so, can you bisect to the breakage?
It worked fine at 2.3.7. No seeming dependency on docbook at that point - it 
was never on my system.

>One alternative is to try to avoid docbook entirely. The only way to get 
>manpages with asciidoc is to generate docbook and then process it, but:
I have asciidoc installed, but using it via Make?

> - you can generate HTML directly (and "make -C Documentation html" 
> does  this). Perhaps not as nice, but you still at least have some
>   documentation.
Not an option. I need git help to work.

> - asciidoctor can generate manpages directly. I don't think our
>   Makefile supports that now, but it might not be too hard to hack in  
> (we already have some basic asciidoctor support). I'm not sure how
> hard it would be to get Ruby running on NonStop
Ruby runs fine. I'm a bit out of my configuration depth here.

>And of course one final option is to generate the manpages elsewhere and copy 
>them in, since they're platform-independent.
>In fact, that's what quick-install-man should do (you just have to clone 
>Junio's >git-manpages repository -- see the INSTALL file).

I've gone down this path and it works. Much cleaner in fact. Dependencies of 
docbook (jade) are too reliant on GCC C++ forms to port to the platform - not 
to mention being SVN, which is culturally uncomfortable.

One request to Junio: Would it be possible to tag the commits to align with the 
tags in the main repo? That way, I can build a nice little Jenkins job to 
automatically fetch the correct commit for man pages when packaging up a 
release.

-Peff



RE: SSH port ignored when ssh:// prefix isn't specified

2017-12-10 Thread Randall S. Becker
On December 10, 2017 3:24 PM Mahmoud wrote:

>It appears that for non-standard ports to be specified for ssh-based
>clones/checkouts, the leading "ssh://" prefix must
>be applied. I am unsure if there's a reason for this or if it is simply an
>overlooked idiosyncrasy in the parser.

>Basically, while `git clone ssh://g...@example.com:/path` works, the
>same without the `ssh://` prefix doesn't,
>and attempts to establish a connection to port 22 instead: `git clone
>g...@example.com:/path` (I'm not
>sure what it will do with the `:` should the connection actually
>succeed).

Git is attempting to resolve the repository "/path" as its default
behaviour, since the ssh method is not specified. This is a situation where
the default is ambiguous: while git is choosing the ssh method, it is also
choosing repository resolution after the ':', which is likewise default
behaviour.
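The ambiguity is easy to see with plain string splitting: in the scp-like
form there is no port field at all, so everything after the first ':' is
taken as the path (host and port below are made up):

```shell
# scp-like syntax: host before the first ':', path after it -- no port slot.
url='git@example.com:2222/srv/repo.git'   # intended port 2222
host=${url%%:*}
path=${url#*:}
echo "$host"   # git@example.com
echo "$path"   # 2222/srv/repo.git -- the "port" is swallowed into the path
# With an explicit scheme the port has its own slot in the URL:
echo 'ssh://git@example.com:2222/srv/repo.git'
```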

Cheers,
Randall






RE: [Proposed] Externalize man/html ref for quick-install-man and quick-install-html

2017-12-11 Thread Randall S. Becker
Sorry about the response positioning...

I can send you a pull request on github, if you want 

-Original Message-
From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On Behalf Of 
Junio C Hamano
Sent: December 11, 2017 6:27 PM
To: Randall S. Becker <rsbec...@nexbridge.com>
Cc: git@vger.kernel.org
Subject: Re: [Proposed] Externalize man/html ref for quick-install-man and 
quick-install-html

"Randall S. Becker" <rsbec...@nexbridge.com> writes:

> diff --git a/Documentation/Makefile b/Documentation/Makefile index 
> 3e39e28..4f1e6df 100644
> --- a/Documentation/Makefile
> +++ b/Documentation/Makefile
> @@ -39,6 +39,8 @@ MAN_TXT = $(MAN1_TXT) $(MAN5_TXT) $(MAN7_TXT)  
> MAN_XML = $(patsubst %.txt,%.xml,$(MAN_TXT))  MAN_HTML = $(patsubst 
> %.txt,%.html,$(MAN_TXT))
>
> +GIT_MAN_REF = master
> +
>  OBSOLETE_HTML += everyday.html
>  OBSOLETE_HTML += git-remote-helpers.html  DOC_HTML = $(MAN_HTML) 
> $(OBSOLETE_HTML) @@ -415,14 +417,14 @@ require-manrepo::
> then echo "git-manpages repository must exist at $(MAN_REPO)"; 
> exit 1; fi
>
>  quick-install-man: require-manrepo
> -   '$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO)
> $(DESTDIR)$(mandir)
> +   '$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO)
> $(DESTDIR)$(mandir) $(GIT_MAN_REF)

I suspect that this patch is line-wrapped and unusable for the final 
application, but I think what the change wants to do makes total sense---we are 
already letting the builder specify where the other repositories for docs are, 
and it is not such a big stretch to let them also say which branch or tag they 
want their documentation from.



[RFE] Inverted sparseness

2017-12-01 Thread Randall S. Becker
I recently encountered a really strange use-case relating to sparse clone/fetch 
that is really backwards from the discussion that has been going on, and well, 
I'm a bit embarrassed to bring it up, but I have no good solution including 
building a separate data store that will end up inconsistent with repositories 
(a bad solution).  The use-case is as follows:

- Given a backbone of multiple git repositories spread across an organization 
  with a server farm and upstream vendors.
- The vendor delivers code by having the client perform git pull into a 
  specific branch.
- The customer may take the code as is or merge in customizations.
- The vendor wants to know exactly what commit of theirs is installed on each 
  server, in near real time.
- The customer is willing to push the commit-ish to the vendor's upstream repo 
  but does not want, by default, to share the actual commit contents for 
  security reasons.
- Realistically, the vendor needs to know that their own commit id was 
  put somewhere (process exists to track this, so not part of the use-case) and 
  whether there is a subsequent commit contributed by the customer, but the 
  content is not relevant initially.

After some time, the vendor may request the commit contents from the customer 
in order to satisfy support requirements - a.k.a. a defect was found but has to 
be resolved.
The customer would then perform a deeper push that looks a lot like a 
"slightly" symmetrical operation of a deep fetch following a prior sparse fetch 
to supply the vendor with the specific commit(s).

This is not hard to realize if the sparse commit is HEAD on a branch, but if 
it's inside a tree, well, I don't even know where to start. To self-deprecate, 
this is likely a bad idea, but it has come up a few times.

Thoughts? Nasty Remarks?

Randall






[Proposed] Externalize man/html ref for quick-install-man and quick-install-html

2017-12-09 Thread Randall S. Becker
I am proposing the following trivial change to allow the external
specification of the reference used for quick-install-man. The justification
is that I cannot have my production team modifying scripts when non-master
versions of git are installed in production (that violates so many rules
that I would have trouble enumerating). This does not add any requirements
for changes to the automation for building either git-manpages or
git-htmldocs. What it does is allow the top-level make to pass GIT_MAN_REF
down to the underlying shell script, with a default of master if
unspecified. Where I am uncertain is what else would be required for this
change (documentation, unit tests).

I humbly submit this for consideration.
Sincerely,
Randall

From 6acc4a4238b3e3e62674bf8a5d0b9084258a0967 Mon Sep 17 00:00:00 2001
From: "Randall S. Becker" <rsbec...@nexbridge.com>
Date: Sat, 9 Dec 2017 15:52:44 -0600
Subject: Externalize man/html ref for quick-install-man and
quick-install-html

---
 Documentation/Makefile             | 6 ++++--
 Documentation/install-doc-quick.sh | 7 ++++---
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/Documentation/Makefile b/Documentation/Makefile
index 3e39e28..4f1e6df 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -39,6 +39,8 @@ MAN_TXT = $(MAN1_TXT) $(MAN5_TXT) $(MAN7_TXT)
 MAN_XML = $(patsubst %.txt,%.xml,$(MAN_TXT))
 MAN_HTML = $(patsubst %.txt,%.html,$(MAN_TXT))

+GIT_MAN_REF = master
+
 OBSOLETE_HTML += everyday.html
 OBSOLETE_HTML += git-remote-helpers.html
 DOC_HTML = $(MAN_HTML) $(OBSOLETE_HTML)
@@ -415,14 +417,14 @@ require-manrepo::
 	then echo "git-manpages repository must exist at $(MAN_REPO)"; exit 1; fi
 
 quick-install-man: require-manrepo
-	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO) $(DESTDIR)$(mandir)
+	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(MAN_REPO) $(DESTDIR)$(mandir) $(GIT_MAN_REF)
 
 require-htmlrepo::
 	@if test ! -d $(HTML_REPO); \
 	then echo "git-htmldocs repository must exist at $(HTML_REPO)"; exit 1; fi
 
 quick-install-html: require-htmlrepo
-	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(HTML_REPO) $(DESTDIR)$(htmldir)
+	'$(SHELL_PATH_SQ)' ./install-doc-quick.sh $(HTML_REPO) $(DESTDIR)$(htmldir) $(GIT_MAN_REF)

 print-man1:
@for i in $(MAN1_TXT); do echo $$i; done
diff --git a/Documentation/install-doc-quick.sh b/Documentation/install-doc-quick.sh
index 327f69b..a7715eb 100755
--- a/Documentation/install-doc-quick.sh
+++ b/Documentation/install-doc-quick.sh
@@ -4,7 +4,8 @@
 repository=${1?repository}
 destdir=${2?destination}

-head=master GIT_DIR=
+GIT_MAN_REF=${3-master}
+GIT_DIR=
 for d in "$repository/.git" "$repository"
 do
if GIT_DIR="$d" git rev-parse refs/heads/master >/dev/null 2>&1
@@ -27,12 +28,12 @@ export GIT_INDEX_FILE GIT_WORK_TREE
 rm -f "$GIT_INDEX_FILE"
 trap 'rm -f "$GIT_INDEX_FILE"' 0

-git read-tree $head
+git read-tree $GIT_MAN_REF
 git checkout-index -a -f --prefix="$destdir"/

 if test -n "$GZ"
 then
-   git ls-tree -r --name-only $head |
+   git ls-tree -r --name-only $GIT_MAN_REF |
xargs printf "$destdir/%s\n" |
xargs gzip -f
 fi
--
2.5.6.18.ga013bef
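For anyone wanting to sanity-check the script's core, the
read-tree/checkout-index sequence it performs can be re-enacted against a
throwaway stand-in for git-manpages (all paths and the tag name below are
made up):

```shell
set -e
# Build a tiny stand-in for the git-manpages repository
git init -q manpages
( cd manpages &&
  mkdir -p man1 &&
  printf '.TH GIT 1\n' > man1/git.1 &&
  git add . &&
  git -c user.email=you@example.com -c user.name=You commit -qm v1 &&
  git tag v1.0.0 )
mkdir -p destdir
GIT_DIR=$(pwd)/manpages/.git
GIT_INDEX_FILE=$(pwd)/tmp-index
GIT_WORK_TREE=$(pwd)
export GIT_DIR GIT_INDEX_FILE GIT_WORK_TREE
rm -f "$GIT_INDEX_FILE"
git read-tree v1.0.0                          # the ref GIT_MAN_REF would select
git checkout-index -a -f --prefix="$(pwd)/destdir/"
ls destdir/man1                               # git.1
```

This is exactly why parameterizing the ref matters: swapping `v1.0.0` for any
other tag or branch installs that snapshot of the pages instead.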




RE: Documentation Breakage at 2.5.6

2017-12-06 Thread Randall S. Becker
-Original Message-
On December 6, 2017 3:49 AM, Jeff King wrote:
>On Wed, Dec 06, 2017 at 09:14:57AM +0100, Ævar Arnfjörð Bjarmason wrote:
>> > I'm trying to upgrade the NonStop port from 2.3.7 upward eventually 
>> > to
>> > 2.15.1 and hit a snag on documentation. The xmlto component is a bit 
>> > new to me and I hit the following error:
>Did it work before in v2.3.7? If so, can you bisect to the breakage?
It worked fine at 2.3.7. No seeming dependency on docbook at that point - it 
was never on my system.

>One alternative is to try to avoid docbook entirely. The only way to get 
>manpages with asciidoc is to generate docbook and then process it, but:
I have asciidoc installed, but using it via Make?

> - you can generate HTML directly (and "make -C Documentation html" does
>  this). Perhaps not as nice, but you still at least have some
>   documentation.
Not an option. I need git help to work.

> - asciidoctor can generate manpages directly. I don't think our
>   Makefile supports that now, but it might not be too hard to hack in
>  (we already have some basic asciidoctor support). I'm not sure how
 > hard it would be to get Ruby running on NonStop
Ruby runs fine. I'm a bit out of my configuration depth here.

>And of course one final option is to generate the manpages elsewhere and copy 
>them in, since they're platform-independent.
>In fact, that's what quick-install-man should do (you just have to clone 
>Junio's >git-manpages repository -- see the INSTALL file).

I've gone down this path and it works. Much cleaner in fact. Dependencies of 
docbook (jade) are too reliant on GCC C++ forms to port to the platform - not 
to mention being SVN, which is culturally uncomfortable.

One request to Junio: Would it be possible to tag the commits to align with the 
tags in the main repo? That way, I can build a nice little Jenkins job to 
automatically fetch the correct commit for man pages when packaging up a 
release.

-Peff



RE: [RFE] Add minimal universal release management capabilities to GIT

2017-10-21 Thread Randall S. Becker
-Original Message-
From: git-ow...@vger.kernel.org [mailto:git-ow...@vger.kernel.org] On Behalf 
of.mail...@laposte.net
On October 20, 2017 6:41 AM, nicolas wrote:
To: git@vger.kernel.org
Subject: [RFE] Add minimal universal release management capabilities to GIT

>Git is a wonderful tool, which has transformed how software is created, and 
>made code sharing and reuse, a lot easier (both between human and software 
>tools).


> Please please please add release handling and versioning capabilities to Git 
> itself. Without it some enthusiastic
> Git adopters are on a fast trajectory to unmanageable hash soup states, even 
> if they are not realising it yet, because
> the deleterious side effects of giving up on releases only get clear with 
> time.
> Here is what such capabilities could look like (people on this list can 
> probably invent something better, I don't care as long as something exists).


Nicolas makes some interesting points, and I do suggest looking at the original 
post, but there are more factors to consider when dealing with production-grade 
releases in regulatory environments. And my sincere apologies for what, even in 
my eyes looks like a bit of a soap-box rant. No slight intended, Nicolas.

Possibly most importantly, there are serious distinctions between what is built 
via CI, what is released, and what is installed. Some of these can be 
addressed directly by git, but others require convention or a meta-system 
spanning platforms. I will abbreviate some of this:

Commits being used to initiate CI cycles are typically based on source commit 
ids (Jenkins, as an example, uses this as an initiator). In Open Source 
environments, where source is specifically released, this is a perfectly 
reasonable release point requiring no more than the commit id itself. 
Committers tend to add tags for convention to make identification convenient, 
and git describe is really helpful here for generating identifying information 
(I state the obvious here). This is the beginning of wisdom, not the end (to 
mis-paraphrase).

Release commits, which are not explicitly in a one-to-one relationship with 
source commits, are a different matter. Suppose the target of your Jenkins 
build creates a release of objects packaged in some useful form. The release 
and source commits are somehow related in your repository of record (loads of 
ways to do this). However, in multi-platform situations, you are in a 
many-to-one situation, obviously, since the chances of the release's hash 
matching between two platform builds approach zero. Nonetheless, the 
release's commit id is relevant to what gets installed, but it is not 
sufficient for human identification purposes. The tag comes in nicely here, and 
hopefully is propagated from the dependent source commit. This 
release-to-source commit derivation is implicitly required in some regulatory 
environments (financial institutions, FDA, FAA, as examples where this exists 
for some systems).

But once you are in a production (or QA) environment, the actual install 
package contains artifacts from a release and from the environment into which 
the release is being installed and activated. The artifacts themselves can be 
highly dynamic and changeable on a radically different and independent schedule 
from the code drop. I advocate keeping those in separate repositories and they 
make for hellacious merge/distribution rules - particularly if the environments 
are radically different in structure, platform, and capability. The 
relationship between commits here is if anything specifically mutable. In a 
specific way, source and release commits are required to be time reversible in 
production, whereby if an installation fails, there exist in many environments 
requirements to be able to fully undo the install action. This is often 
different from the environment artifacts which can be time-forward constrained 
and reversible only in extreme situations. This separation, at least in my 
experience, tends to drive how releases are managed in production shops.

> So nothing terribly complex, just a lot a small helpers to make releasing 
> easier, less tedious, and cheaper for developers,
> that formalize, automate, and make easier existing practices of mature 
> software projects, making them accessible to
> smaller projects. They would make releasing more predictable and reliable for 
> people deploying the code, and easier
> to consume by higher-level cross-project management tools. That would 
> transform the deployment stage of software
> just like Git already transformed early code writing and autotest stages.

Possibly, but primarily for source releases. Release management and the related 
practices are production functions that do not map particularly well (by 
assertion) to the git command set or functionality. As an underlying mechanism 
to manage the production artifacts, git does wonderfully. But installable 
packages (what they think of as 

RE: Is it possible to convert a Json file to xml file with Git

2017-10-31 Thread Randall S. Becker
> On October 31, 2017 5:23 PM, Kevin Daudt wrote:
> > On Tue, Oct 31, 2017 at 05:28:40PM +, Eyjolfur Eyjolfsson wrote:
> > I have a question.
> > Is it possible to convert a Json file to XML with Git
> 
> git is a version control system, which is mostly content agnostic. It knows
> nothing about json or xml, let alone how to convert them.
> 
> You might want to use some kind of programming language to do the
> conversion.

Speculating... one possible reason to do this is during a protocol
conversion effort, where definitions are moving from XML to JSON form. In
legacy VCS systems, keeping interface definitions in one file and converting
the content may be important. However, in git, with its concept of atomicity
(multiple files are committed in a single version across the whole
repository), dropping one file (e.g., XML) and adding another (e.g., JSON),
can be done in one commit, and never lost or confused as to what is
intended. This makes git ideal for modernization and evolutionary projects.

If, however, there is an application or systemic requirement to change the
content of a file from XML to JSON without changing the name - I've seen it
happen - you may want to consider building a smudge filter that understands
the difference and maps between the two, to allow git diff operations
between old and new formats. I would not recommend using this approach
except as a last possible resort. Make a new file as Kevin intimated.
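If someone did go the smudge-filter route despite that advice, the wiring
would look roughly like this sketch (the filter name and the converter
commands are hypothetical; here the repository stores JSON and the work tree
sees XML):

```
# .gitattributes (committed):
api-def.json filter=json2xml

# .git/config (per clone, since filter commands are never propagated):
[filter "json2xml"]
	clean = xml-to-json    # on add/checkin: working-tree XML -> JSON blob
	smudge = json-to-xml   # on checkout: JSON blob -> working-tree XML
```

Both commands receive file content on stdin and must write the converted
content to stdout; a lossy or non-idempotent conversion would make every
checkout look like a modification, which is part of why this is a last
resort.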

Just Musing on the Topic,
Randall






[Best Practices Request] clean/smudge configuration

2018-05-09 Thread Randall S. Becker
Hi Git Team,

I'm trying to work out some best practices for managing clean/smudge filters
and hit a bump. The situation is that I have an environment where the
possible clean/smudge filter configuration can change over time and needs to
be versioned with the product being managed in a git repository. Part of the
configuration is no problem in .gitattributes, but the other bits are in
.git/config. I get that the runnable part of the filters need to be strictly
platform independent in principle, but I can abstract that part in this
situation.

The question: what is the best practice for versioning the parts of
clean/smudge filters that are in .git/config given that only some users in
my environment will be cloning the repository in question and that I really
can't put the entries in /etc/gitconfig or ~/.gitconfig because of potential
conflicts with other repositories that might also have clean/smudge
definitions.

Thanks,
Randall 






RE: [Best Practices Request] clean/smudge configuration

2018-05-09 Thread Randall S. Becker
On May 9, 2018 6:39 PM, Bryan Turner wrote:
> 
> On Wed, May 9, 2018 at 3:09 PM Randall S. Becker
> <rsbec...@nexbridge.com>
> wrote:
> 
> > The question: what is the best practice for versioning the parts of
> > clean/smudge filters that are in .git/config given that only some
> > users in my environment will be cloning the repository in question and
> > that I
> really
> > can't put the entries in /etc/gitconfig or ~/.gitconfig because of
> potential
> > conflicts with other repositories that might also have clean/smudge
> > definitions.
> 
> Depending on level of trust, one approach might be to use an [include] in
> .git/config to include a file that's in the repository. Something like:
> 
> [include]
>  path = ../path/to/config

It's a possibility, but I don't like the implications. Files that are subject 
to the clean/smudge would need to be reprocessed manually. In the scenario:

1. A checkout is done, changing ../path/to/config.
2. The clean/smudge configuration changes in ../path/to/config, but the files 
impacted by it do not.
3. git does not look like it would be aware of the change until after the 
checkout, which is too late.
4. The work tree is now inconsistent with the idempotency of the clean/smudge 
rules, basically because nothing happened (not blaming git here, just timing).

As far as I understand, this is a bit of a chicken-and-egg problem because the 
clean/smudge config needs to be there before the checkout. Correct?

Cheers,
Randall



RE: [Best Practices Request] clean/smudge configuration - Avoiding the chicken and egg

2018-05-12 Thread Randall S. Becker
On May 11, 2018 3:26 PM, I wrote:
> On May 10, 2018 10:27 PM, Junio C Hamano wrote:
> > "Randall S. Becker" <rsbec...@nexbridge.com> writes:
> >
> > > What if we create a ../.gitconfig like ../.gitattributes, that is
> > > loaded before .git/config?
> >
> > You should not forget one of the two reasons why clean/smudge/diff etc.
> > drivers must be given with a step of redirection, split between
> > attributes and config.  "This path holds text from ancient macs" at
> > the logical level may be expressed in .gitattributes, and then "when
> > checking out text from ancient macs, process the blob data with this
> > program to turn it into a form suitable for working tree" is given as
> > a smudge filter program, that is (1) suitable for the platform _you_
> > as the writer of the config file is using *AND* (2) something you are
> confortable running on behalf of you.
> >
> > We *deliberately* lack a mechanism to pay attention to in-tree config
> > that gets propagated to those who get them via "git clone", exactly
> > because otherwise your upstream may not just specify a "smudge"
> > program that your platform would be unable to run, but can specify a
> > "smudge" program that pretends to be a smudger, but is actually a
> > program that installs a keylogger and posts your login password and
> > bank details to this mailing list ;-)
> >
> > So, no, I do not think in-tree configuration will fly.
> 
> I agree, which is why I asked the original question instead of posting it as a
> formal patch. We would probably get a brand new CVE implementing the
> proposed proto-changes even if the original intent was internal trusted
> repositories only. That's why I asked this as a "Best Practices" type 
> question,
> which I think I have a better idea on now. The actual situation is rather cool
> from a DevOps point of view, and whatever the ultimate solution is, might
> make for a nice presentation at some future conference.

Here's where I ended up, from a solution standpoint:

0. Make sure the git scripts you use are always trusted, using your favourite 
technique.
1. Wrap the clone in such a script to do the next two steps, to avoid the 
usual problems of forgetting things.
2. The clone script should use "git -c name=value clone repo" for all 
clean/smudge values needed that would otherwise be in .git/config, if we had 
one, which we don't yet.
3. Have the script create/update .git/config using "git config --local name 
value" with all of the same clean/smudge values for subsequent operations.

From there, it seems that the contents of the smudged files are always 
correct, assuming the filter works of course. It was the use of -c that makes 
this work.

Sound about right?

Cheers,
Randall



RE: [Best Practices Request] clean/smudge configuration

2018-05-11 Thread Randall S. Becker


On May 10, 2018 10:27 PM, Junio C Hamano wrote:
> "Randall S. Becker" <rsbec...@nexbridge.com> writes:
> 
> > What if we create a ../.gitconfig like ../.gitattributes, that is
> > loaded before .git/config?
> 
> You should not forget one of the two reasons why clean/smudge/diff etc.
> drivers must be given with a step of redirection, split between attributes and
> config.  "This path holds text from ancient macs" at the logical level may be
> expressed in .gitattributes, and then "when checking out text from ancient
> macs, process the blob data with this program to turn it into a form suitable
> for working tree" is given as a smudge filter program, that is (1) suitable 
> for
> the platform _you_ as the writer of the config file is using *AND* (2)
> something you are confortable running on behalf of you.
> 
> We *deliberately* lack a mechanism to pay attention to in-tree config that
> gets propagated to those who get them via "git clone", exactly because
> otherwise your upstream may not just specify a "smudge" program that your
> platform would be unable to run, but can specify a "smudge" program that
> pretends to be a smudger, but is actually a program that installs a keylogger
> and posts your login password and bank details to this mailing list ;-)
> 
> So, no, I do not think in-tree configuration will fly.

I agree, which is why I asked the original question instead of posting it as a 
formal patch. We would probably get a brand new CVE implementing the proposed 
proto-changes even if the original intent was internal trusted repositories 
only. That's why I asked this as a "Best Practices" type question, which I 
think I have a better idea on now. The actual situation is rather cool from a 
DevOps point of view, and whatever the ultimate solution is, might make for a 
nice presentation at some future conference.

Cheers and thanks,
Randall



RE: [Best Practices Request] clean/smudge configuration

2018-05-10 Thread Randall S. Becker


On May 9, 2018 6:39 PM, Bryan Turner wrote:
> On Wed, May 9, 2018 at 3:09 PM Randall S. Becker
> <rsbec...@nexbridge.com>
> wrote:
> 
> > The question: what is the best practice for versioning the parts of
> > clean/smudge filters that are in .git/config given that only some
> > users in my environment will be cloning the repository in question and
> > that I
> really
> > can't put the entries in /etc/gitconfig or ~/.gitconfig because of
> potential
> > conflicts with other repositories that might also have clean/smudge
> > definitions.
> 
> Depending on level of trust, one approach might be to use an [include] in
> .git/config to include a file that's in the repository. Something like:
> 
> [include]
>  path = ../path/to/config
> 

What if we create a ../.gitconfig like ../.gitattributes, that is loaded
before .git/config? With loads of warnings in the documentation about what
*NOT* to put in here: no platform specifics, and use at your own risk. The
code in config.c would look like the following, with obvious updates to
documentation and the test suite, so it's not fully baked yet. So far, I
don't have a solution to the chicken-and-egg problem, other than this.
However, if I'm barking up the wrong ballpark...

diff --git a/config.c b/config.c
index b0c20e6cb..75d5288ff 100644
--- a/config.c
+++ b/config.c
@@ -1555,11 +1555,15 @@ static int do_git_config_sequence(const struct config_options *opts,
char *xdg_config = xdg_config_home("config");
char *user_config = expand_user_path("~/.gitconfig", 0);
char *repo_config;
+   char *repo_config_versioned;

-	if (opts->commondir)
+	if (opts->commondir) {
 		repo_config = mkpathdup("%s/config", opts->commondir);
-	else
+		repo_config_versioned = mkpathdup("%s/../.gitconfig", opts->commondir);
+	} else {
 		repo_config = NULL;
+		repo_config_versioned = NULL;
+	}
+   }

current_parsing_scope = CONFIG_SCOPE_SYSTEM;
if (git_config_system() && !access_or_die(git_etc_gitconfig(), R_OK,
0))
@@ -1574,6 +1578,8 @@ static int do_git_config_sequence(const struct config_options *opts,
ret += git_config_from_file(fn, user_config, data);

current_parsing_scope = CONFIG_SCOPE_REPO;
+	if (repo_config_versioned && !access_or_die(repo_config_versioned, R_OK, 0))
+		ret += git_config_from_file(fn, repo_config_versioned, data);
if (repo_config && !access_or_die(repo_config, R_OK, 0))
ret += git_config_from_file(fn, repo_config, data);

@@ -1585,6 +1591,7 @@ static int do_git_config_sequence(const struct config_options *opts,
free(xdg_config);
free(user_config);
free(repo_config);
+   free(repo_config_versioned);
return ret;
 }




RE: Add option to git to ignore binary files unless force added

2018-05-17 Thread Randall S. Becker
On May 16, 2018 11:18 PM, Jacob Keller
> On Wed, May 16, 2018 at 5:45 PM, Anmol Sethi  wrote:
> > I think it’d be great to have an option to have git ignore binary files. My
> repositories are always source only, committing a binary is always a mistake.
> At the moment, I have to configure the .gitignore to ignore every binary file
> and that gets tedious. Having git ignore all binary files would be great.
> >
> > This could be achieved via an option in .gitconfig or maybe a special line 
> > in
> .gitignore.
> >
> > I just want to never accidentally commit a binary again.
> 
> I believe you can do a couple things. There should be a hook which you can
> modify to validate that there are no binary files on pre-commit[1], or pre-
> push[2] to verify that you never push commits with binaries in them.
> 
> You could also implement the update hook on the server if you control it, to
> allow it to block pushes which contain binary files.

What about configuring ${HOME}/.config/git/ignore instead (described at 
https://git-scm.com/docs/gitignore). Inside, put:

*.o
*.exe
*.bin
*.dat
Etc
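To confirm the global ignore file is actually being honoured, `git check-ignore` can be pointed at a candidate path (a sketch; the patterns and the `build/main.o` path are just examples):

```shell
# Assumes the patterns above go into the per-user ignore file.
mkdir -p "${HOME}/.config/git"
printf '*.o\n*.exe\n*.bin\n*.dat\n' >> "${HOME}/.config/git/ignore"

# -v reports which file and pattern matched; exit status 0 means "ignored".
git check-ignore -v build/main.o
```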

Cheers,
Randall




RE: which files are "known to git"?

2018-05-21 Thread Randall S. Becker
On May 21, 2018 7:19 AM, Robert P. J. Day:
>   updating my git courseware and, since some man pages refer to files
> "known to git", i just want to make sure i understand precisely which
files
> those are. AIUI, they would include:
> 
>   * tracked files
>   * ignored files
>   * new files which have been staged but not yet committed

You might want to consider git metadata/config/attribute files, hooks,
filters, etc., that may not be not formally part of a repository, but can be
required to ensure the content is complete.

Cheers,
Randall

-- Brief whoami:
 NonStop developer since approximately 2112884442
 UNIX developer since approximately 421664400
-- In my real life, I talk too much.





RE: Add option to git to ignore binary files unless force added

2018-05-18 Thread Randall S. Becker
On May 18, 2018 7:31 AM, Anmol Sethi <m...@anmol.io>
> That works, but most binaries do not have a file extension. It's just not
> standard on Linux.
> 
> > On May 17, 2018, at 8:37 AM, Randall S. Becker <rsbec...@nexbridge.com>
> wrote:
> >
> > On May 16, 2018 11:18 PM, Jacob Keller
> >> On Wed, May 16, 2018 at 5:45 PM, Anmol Sethi <m...@anmol.io> wrote:
> >>> I think it’d be great to have an option to have git ignore binary files.
> >>> My repositories are always source only, committing a binary is always a
> >>> mistake. At the moment, I have to configure the .gitignore to ignore
> >>> every binary file and that gets tedious. Having git ignore all binary
> >>> files would be great.
> >>>
> >>> This could be achieved via an option in .gitconfig or maybe a special
> >>> line in .gitignore.
> >>>
> >>> I just want to never accidentally commit a binary again.
> >>
> >> I believe you can do a couple things. There should be a hook which
> >> you can modify to validate that there are no binary files on
> >> pre-commit[1], or pre-push[2] to verify that you never push commits
> >> with binaries in them.
> >>
> >> You could also implement the update hook on the server if you control
> >> it, to allow it to block pushes which contain binary files.
> >
> > What about configuring ${HOME}/.config/git/ignore instead (described at
> > https://git-scm.com/docs/gitignore). Inside, put:
> >
> > *.o
> > *.exe
> > *.bin
> > *.dat
> > Etc

I have a similar problem on my platform, with a different solution. My builds 
involve GCC binaries, NonStop L-series binaries (x86), and NonStop J-series 
binaries (Itanium). To keep myself sane, I have all build targets going to 
separate directories, like Build/GCC, Build/Lbin, Build/Jbin, away from the 
sources. This allows me to ignore Build/ regardless of extension, and also to 
build different targets without link collisions. This is similar to how Java 
works (a.k.a. bin/). Much more workable, IMHO, than trying to manage individual 
binaries name by name or even by extension. I also have a mix of jpg and UTF-16 
HTML that would end up as false positives on a pure binary match, and I do want 
to manage those.
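The layout above collapses all the ignore rules into a single directory pattern; a sketch of the matching .gitignore entry (the directory name is from my setup, adjust to taste):

```
# all build output, regardless of extension or target
Build/
```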

What helps me is that I do most of my work in ECLIPSE, so derived resources 
(objects, generated sources) get auto-ignored by EGit, if you can make your 
compiler arrange that - but that's an ECLIPSE thing not a file system thing.

Cheers,
Randall

-- Brief whoami:
 NonStop developer since approximately 2112884442
 UNIX developer since approximately 421664400
-- In my real life, I talk too much.





RE: OAuth2 support in git?

2018-06-14 Thread Randall S. Becker
On June 14, 2018 11:15 AM, Jeff King wrote:
> On Thu, Jun 14, 2018 at 10:13:42AM +, brian m. carlson wrote:
> 
> > > I know that other git server environments like github support that
> > > on client side by allowing tokens to be used as usernames in a BASIC
> > > authentication flow. We could do the same but I am asking whether
> > > there is also a way to transport tokens in a standard conform
> > > "Authorization: Bearer ..." Header field.
> >
> > There isn't any support for Bearer authentication in Git.  For HTTP,
> > we use libcurl, which doesn't provide this natively.  While it could
> > in theory be added, it would require some reworking of the auth code.
> >
> > You are, of course, welcome to send a patch.
> 
> If it's just a custom Authorization header, we should be able to support it
> with existing curl versions without _too_ much effort.
> 
> I think there are probably two possible directions:
> 
>  1. add a special "bearer" command line option, etc, as a string
> 
>  2. add a boolean option to send the existing "password" field as a
> "bearer" header
> 
> I suspect (2) would fit in with the existing code better, as the special case
> would mostly be limited to the manner in which we feed the credential to
> curl. And you could probably just set a config option for "this url's auth
> will be oauth2", and use the existing mechanisms for providing the password.
> 
> We'd maybe also want to allow credential helpers to say "by the way, this
> password should be treated as a bearer token", for cases where you might
> sometimes use oauth2 and sometimes a real password.

Be aware that there are 4 (ish) flavours of OAuth2 the last time I checked. It 
is important to know which one (or all) to implement. The embedded form is 
probably the easiest to comprehend - and the least implemented, from my 
research. More common OAuth2 instances use a third-party website to hold session 
keys and authorization. That may be problematic for a whole bunch of us who do 
not play in that world.
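Until something like Peff's option (2) exists, current git can already smuggle a bearer token in via http.extraHeader (available since git 2.9). A sketch, where example.com and $TOKEN are stand-ins, not anything from this thread:

```shell
# One-off: inject the header for a single command.
git -c http.extraHeader="Authorization: Bearer $TOKEN" \
    ls-remote https://example.com/project.git

# Or persist it for one host only. Note the token then sits in the
# config file in plain text, so protect that file accordingly.
git config --global http.https://example.com/.extraHeader \
    "Authorization: Bearer $TOKEN"
```

This bypasses the credential-helper machinery entirely, which is exactly the gap the proposed config option would close.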

Cheers,
Randall

-- Brief whoami:
  NonStop developer since approximately NonStop(2112884442)
  UNIX developer since approximately 421664400
-- In my real life, I talk too much.





RE: Git Vulnerability Announced?

2018-05-31 Thread Randall S. Becker
On May 31, 2018 11:57 AM, Erika Voss wrote:
> There was an article I came across yesterday identifying a vulnerability to
> patch our Git environments.  I don’t see one that is available for our Mac
> Clients - is there a more recent one that I can download that is available to
> patch the 2.17.0 version?

Do you have a reference, CVE number, or other information about this 
vulnerability?

Cheers,
Randall

-- Brief whoami:
 NonStop developer since approximately 2112884442
 UNIX developer since approximately 421664400
-- In my real life, I talk too much.





RE: how exactly can git config section names contain periods?

2018-06-01 Thread Randall S. Becker
> -Original Message-
> From: git-ow...@vger.kernel.org  On Behalf
> Of Robert P. J. Day
> Sent: June 1, 2018 4:14 PM
> To: Git Mailing list 
> Subject: how exactly can git config section names contain periods?
> 
> 
>   more oddities in my travels, this from Doc.../config.txt:
> 
> "The file consists of sections and variables.  A section begins with the
name
> of the section in square brackets and continues until the next section
begins.
> Section names are case-insensitive.  Only alphanumeric characters, `-` and
`.`
> are allowed in section names.
>   ^ ?
> 
>   what? how can section names contain periods? reading further,
> 
> "Sections can be further divided into subsections.  To begin a subsection
put
> its name in double quotes, separated by space from the section name, in
the
> section header, like in the example below:
> 
> 
> [section "subsection"]
> 
> 
>   ok, so how on earth would i use "git config" at the command line to set a
> config variable with some arbitrary level of subsections? let's try this:
> 
>   $ git config --global a.b.c.d.e rday
> 
> huh ... seemed to work fine, and added this to my ~/.gitconfig:
> 
>   [a "b.c.d"]
>   e = rday
> 
> as i see it, the first component is interpreted as the section name, the last
> component is the variable/key(?) name, and everything in between is treated
> as subsection(s), which is not at all obvious from that Doc file, or from
> "man git-config".
> 
>   and if a section name can contain periods, how would you specify that at
> the command line?

I'm with Robert on this. I would have thought that the interpretation should
have been:

["a.b.c.d"]
 e = rday
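For reference, the splitting Robert observed (first component = section, last component = key, everything in between = one subsection) is easy to reproduce against a scratch file; a sketch:

```shell
cfg=$(mktemp)
git config -f "$cfg" a.b.c.d.e rday
cat "$cfg"
# section "a", subsection "b.c.d", key "e":
# [a "b.c.d"]
# 	e = rday
git config -f "$cfg" a.b.c.d.e   # prints: rday
```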

Confused as well,

Randall

-- Brief whoami:
 NonStop developer since approximately 2112884442
 UNIX developer since approximately 421664400
-- In my real life, I talk too much.





RE: git question from a newbie

2018-06-05 Thread Randall S. Becker
On June 5, 2018 5:24 PM, Steve Heinz wrote:
> I am new to Git and have read quite a few articles on it.
> I am planning on setting up a remote repository on a windows 2012 R2
server
> and will access it via HTTPS.
> I am setting up a local repository on my desk top (others in my group will
do
> the same).
> On "server1":  I install Git and create a repository "repos".
> On "server1":  I create a dummy webpage "default.htm" and place it in the
> repo folder.
> On "server1":  I create a web application in IIS pointing to Git
> On Server1":   change permissions so IIS_User  has access to the folders.
> On "server1":  inside the "repos" folder and right click and choose "bash
> here"
> On "server1":   $ git init  -bare(it's really 2 hyphens)
> 
> On Desktop:  open Chrome and type in URL to make sure we can access it
> https://xyz/repos/default.htm
>   ** make sure you have access, no certificate issues or firewall
issues.  The
> pages shows up fine
> 
> On Desktop:  install Git and create repository "repos".
> On Desktop:  right click in the "repos" folder and choose "bash here"
> On Desktop:  $ git init
> On Desktop:  add a folder "testProject" under the "repos" folder and add
> some files to the folder
> On Desktop:  $ git add .     (will add files and folders to the working tree)
> On Desktop:  $ git status    (shows it recognizes the files were added)
> On Desktop:  $ git commit -m "test project commit"   (will commit staged changes)
> On Desktop:  $ git push https://xyz.domainname.com/repos master
> 
> ** this is the error I get, I have tried many different things.  I am sure
> I am doing something stupid
> ** I have tried a bunch of variations but I always get the same error.  It
> looks like some type of network/permission thing but everything seems OK.
>    Fatal: repository 'https://xyz.domainname.com/repos/' not found
> 
> *** this is where I get the error trying to push staged items to the remote
> repository.
> *** I have tried to clone the empty remote repository, still same error
> *** I checked port 443 is open and being used for https
> *** tried to set origin to https://xyz.domainname.com/repos and then
> $ git push origin master   (same error)
> *** I tried passing credentials to the remote server as well

Missing glue - git remote

git remote add origin https://xyz.domainname.com/repos
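The full client-side sequence then looks like this. A sketch using a local bare repository as a stand-in for the IIS-hosted one (the HTTPS URL from the thread would work the same way once the server side serves the repository correctly):

```shell
# Server side (stand-in for the IIS-hosted repo): a bare repository.
git init --bare /tmp/repos.git

# Client side: register the remote once, then push and set the upstream.
git init work && cd work
echo test > default.htm
git add . && git commit -m "test project commit"
git remote add origin /tmp/repos.git   # with HTTPS: https://xyz.domainname.com/repos
git push -u origin master
```

After the `-u` push, later `git push` and `git pull` invocations need no arguments at all.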

Cheers,
Randall

-- Brief whoami:
 NonStop developer since approximately 2112884442
 UNIX developer since approximately 421664400
-- In my real life, I talk too much.




