Re: obsolete index in wt_status_print after pre-commit hook runs
On 04.08.2016 at 12:45 PM, Junio C Hamano <gits...@pobox.com> wrote:

> Andrew Keller <and...@kellerfarm.com> writes:
>
>> In summary, I think I prefer #2 from a usability point of view,
>> however I’m having trouble proving that #1 is actually *bad* and
>> should be disallowed.
>
> Yeah, I agree with your argument from the usability and safety point
> of view.
>
>> Any thoughts? Would it be better for the pre-commit hook to be
>> officially allowed to edit the index [1], or would it be better
>> for the pre-commit hook to explicitly *not* be allowed to edit the
>> index [2], or would it be yet even better to simply leave it as it
>> is?
>
> It is clear that our stance has been the third one so far.
>
> Another thing I did not see in your analysis is what happens if the
> user is doing a partial commit, and how the changes made by the
> pre-commit hook are propagated back to the main index and the
> working tree.
>
> The HEAD may have a file with contents in the "original" state, the
> index may have the file with "update 1", and the working tree file
> may have it with "update 2". After the commit is made, the user
> will continue working from a state where the HEAD and the index have
> "update 1", and the working tree has "update 2". "git diff file"
> output before and after the commit will be identical (i.e. the
> difference between "update 1" and "update 2") as expected.

Excellent point, and one I had discovered myself but neglected to include in my email. In my pre-commit hook, I have logic in both versions of my experiment that disallows [1] fixing up diffs that are partially staged. Both scripts then update both the index and the working copy. (Sort of like how rebase works: a clean working directory is required, and then it updates both the index and the work tree.)

[1] In version #1, if any files it wants to change are partially staged, the hook prints a detailed error message and aborts the commit outright.
In version #2, the pre-commit hook sees the change it _wants_ to make, informs the user that he/she should run the fixup command, and aborts the commit. When the user then runs the fixup command, it sees the partially staged file, prints the same detailed error message, and dies.

Thanks for your help on this. It's really been interesting. I'll leave it as-is for now.

Thanks,
- Andrew Keller

--
To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: obsolete index in wt_status_print after pre-commit hook runs
On 15.07.2016 at 6:03 PM, Junio C Hamano <gits...@pobox.com> wrote:

> Ahh, I misremembered. 2888605c (builtin-commit: fix partial-commit
> support, 2007-11-18) does consider the possibility that pre-commit
> may have modified the index contents after we take control back from
> that hook, so that is probably a good place to enumerate what got
> changed. Getting the list before running the hook can give an
> out-of-date list, as you said.

I’ve been experimenting with two different workflows recently:

(1) Identify problem files during the pre-commit hook; when found, fix them automatically in the index and let the commit continue.

(2) Identify problem files during the pre-commit hook; when found, provide instructions to fix the problem (and possibly set up a helpful Git alias to do it in one command), and abort the commit. Require that the user fix up the index and try the commit again.

And here are my thoughts:

#1 seems quick and simple for the user, and it plays (mostly) nicely with scripts and IDEs that commit autonomously, but I’m having trouble trusting that my pre-commit hook made the *correct* changes (even though it has worked nicely so far). That is, I keep looking at the new HEAD commit to make sure it looks right, where normally I would just look at the index.

#2 is slightly more difficult to implement because it has more moving parts. However, because I can interrogate the index after I manually run the command that makes the required changes, and *before* I commit again, I feel much more confident that I know what is going to be in my commit. On the other hand, this approach doesn’t play well with automated scripts that assume a commit operation will always work.

In summary, I think I prefer #2 from a usability point of view, however I’m having trouble proving that #1 is actually *bad* and should be disallowed.

Any thoughts?
Would it be better for the pre-commit hook to be officially allowed to edit the index [1], would it be better for the pre-commit hook to explicitly *not* be allowed to edit the index [2], or would it be better to simply leave it as it is?

[1] and possibly create a patch that teaches builtin/commit.c to reread the index after the pre-commit hook runs and before rendering the commit message template

[2] and possibly create a patch that teaches builtin/commit.c to detect changes to the index after the pre-commit hook runs

Thanks,
- Andrew Keller
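As an illustration of the workflow #2 style described above, a minimal pre-commit hook might look like the sketch below. The whitespace check and the `git fix-whitespace` alias are hypothetical stand-ins for a project-specific checker and fixup command; real hooks would run whatever validation the project needs.

```shell
#!/bin/sh
# Hypothetical pre-commit hook (workflow #2): detect a problem in the
# staged content, tell the user how to fix it, and abort the commit.

# Collect the names of staged files with whitespace problems.
# `git diff --cached --check` prints "file:line: <problem>" lines plus
# the offending "+" content lines, which we filter out.
bad=$(git diff --cached --check | grep -v '^[+-]' | sed 's/:.*//' | sort -u)

if [ -n "$bad" ]; then
    echo "The following staged files have whitespace problems:" >&2
    echo "$bad" | sed 's/^/    /' >&2
    echo "Run 'git fix-whitespace' (a hypothetical alias) to fix them" >&2
    echo "in the index, then retry the commit." >&2
    exit 1   # a non-zero exit aborts the commit
fi

exit 0
```

Note that the hook only inspects the index and aborts; it never modifies the index, which is the property this thread settles on.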
Re: obsolete index in wt_status_print after pre-commit hook runs
On 15.07.2016 at 6:03 PM, Junio C Hamano <gits...@pobox.com> wrote:

> Junio C Hamano <gits...@pobox.com> writes:
>
>> On Fri, Jul 15, 2016 at 1:30 PM, Andrew Keller <and...@kellerfarm.com> wrote:
>>> On 15.07.2016 at 12:34 PM, Andrew Keller <and...@kellerfarm.com> wrote:
>>>
>>>> I pulled out the source for version 2.9.1 and briefly skimmed how
>>>> run_commit and prepare_to_commit work. It seems that Git already
>>>> understands that a pre-commit hook can change the index, and it
>>>> rereads the index before running the prepare-commit-msg hook:
>>>> https://github.com/git/git/blob/v2.9.1/builtin/commit.c#L941-L951
>>>
>>> Quick question: Why does Git reread the index after the pre-commit
>>> hook runs?
>>
>> Offhand I do not think of a good reason to do so; does something
>> break if you took it out?
>
> Ahh, I misremembered. 2888605c (builtin-commit: fix partial-commit
> support, 2007-11-18) does consider the possibility that pre-commit
> may have modified the index contents after we take control back from
> that hook, so that is probably a good place to enumerate what got
> changed. Getting the list before running the hook can give an
> out-of-date list, as you said.

Interesting. So, the implication is that disallowing the pre-commit hook from changing the index may cause some problems (491 problems, if my run of the tests was accurate). Does that mean it would be desirable to update the index before the commit message template is rendered?

Thanks,
- Andrew Keller
Re: obsolete index in wt_status_print after pre-commit hook runs
On 15.07.2016 at 5:19 PM, Junio C Hamano <gits...@pobox.com> wrote:

> On Fri, Jul 15, 2016 at 1:30 PM, Andrew Keller <and...@kellerfarm.com> wrote:
>> On 15.07.2016 at 12:34 PM, Andrew Keller <and...@kellerfarm.com> wrote:
>>
>>> I pulled out the source for version 2.9.1 and briefly skimmed how
>>> run_commit and prepare_to_commit work. It seems that Git already
>>> understands that a pre-commit hook can change the index, and it
>>> rereads the index before running the prepare-commit-msg hook:
>>> https://github.com/git/git/blob/v2.9.1/builtin/commit.c#L941-L951
>>
>> Quick question: Why does Git reread the index after the pre-commit
>> hook runs?
>
> Offhand I do not think of a good reason to do so; does something break
> if you took it out?

Judging by test failures alone, it seems that only the `update_main_cache_tree(0)` invocation is needed to avoid a torrent of test failures (490 failures across 102 tests). Removing lines 946, 947, 949, and 950 does not cause test breakages (although my computer is not set up to run all of the tests).

However, there seems to be an interaction between lines 946-947 and the `update_main_cache_tree(0)` call on line 948: although lines 946-947 can be removed by themselves without test breakages, when lines 946-948 are all disabled together (and, in turn, lines 949-950 never run), one additional test failure is registered (t2203.5).

Thanks,
- Andrew Keller
Re: obsolete index in wt_status_print after pre-commit hook runs
On 15.07.2016 at 12:34 PM, Andrew Keller <and...@kellerfarm.com> wrote:

> I pulled out the source for version 2.9.1 and briefly skimmed how
> run_commit and prepare_to_commit work. It seems that Git already
> understands that a pre-commit hook can change the index, and it
> rereads the index before running the prepare-commit-msg hook:
> https://github.com/git/git/blob/v2.9.1/builtin/commit.c#L941-L951

Quick question: Why does Git reread the index after the pre-commit hook runs?

Thanks,
- Andrew Keller
Re: obsolete index in wt_status_print after pre-commit hook runs
On 15.07.2016, at 1:28 PM, Junio C Hamano <gits...@pobox.com> wrote:

> Earlier you said you are working on a patch series. Since you have
> already looked at the codepath, perhaps you may want to try a patch
> series to add the missing error-return instead, if you are
> interested?

Definitely interested; it sounds like a great learning experience.

Thanks,
- Andrew Keller
Re: obsolete index in wt_status_print after pre-commit hook runs
On 15.07.2016, at 1:02 PM, Junio C Hamano <gits...@pobox.com> wrote:

> Expected outcome is an error saying "do not modify the index inside
> pre-commit hook", and a rejection. It was meant as a verification
> mechanism (hence it can be bypassed with --no-verify), not as a way
> to make changes that the user didn't tell "git commit" to make.

Ah! Good to know, then. I'll rewrite my hook to behave more correctly.

Thanks,
- Andrew Keller
obsolete index in wt_status_print after pre-commit hook runs
Hi everyone,

I have observed an interesting scenario. Here are example reproduction steps:

1. new repository
2. create a new pre-commit hook that invokes `git mv one two`
3. touch one
4. git add one
5. git commit

Expected outcome: In the commit message template, I expect to see "Changes to be committed: new file: two".

Found outcome: In the commit message template, I see "Changes to be committed: new file: one".

This behavior seems to be reproducible in versions 2.9.1, 2.8.1, 2.0.0, and 1.6.0.

Skip the next 3 paragraphs if you are in a hurry.

I pulled out the source for version 2.9.1 and briefly skimmed how run_commit and prepare_to_commit work. It seems that Git already understands that a pre-commit hook can change the index, and it rereads the index before running the prepare-commit-msg hook: https://github.com/git/git/blob/v2.9.1/builtin/commit.c#L941-L951

During the prepare-commit-msg hook, it seems that the index (according to Git commands) is correct and up-to-date, but the textual message inside the commit message template is out-of-date (it references the file `one` as a change to be committed). In builtin/commit.c, it seems that the commit message template is rendered immediately after the pre-commit hook is run, and immediately before the index is reread.

If I move the small block of code that rereads the index up, to just after the pre-commit hook is run, the commit message template seems to be as I would expect, both in .git/COMMIT_EDITMSG during the prepare-commit-msg hook and in the editor for the commit message itself.

I am putting together a 2-patch series that includes a failing test, and then this change (which fixes the test), but while I do that, I figure I may as well ping the community to make sure that this behavior is not intentional. I'd wager that this change is for the better, but since this behavior has been around so long (I stopped checking at 1.6.0), it doesn't hurt to make sure.

Any comments, concerns, or advice?

Thanks,
- Andrew Keller
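For convenience, the reproduction steps above can be scripted roughly as follows. The repository and file names are arbitrary, and GIT_EDITOR=cat is used only so the template is printed rather than opened in an editor (the commit itself then aborts on the empty message, which is fine for this test):

```shell
#!/bin/sh
# Reproduce the stale commit-message template after a pre-commit hook
# renames a staged file. Repository and file names are arbitrary.
set -e

git init repro
cd repro

# Step 2: a pre-commit hook that invokes `git mv one two`.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
git mv one two
EOF
chmod +x .git/hooks/pre-commit

# Steps 3-5.
touch one
git add one

# Print the template instead of editing it. In the affected versions,
# the template still lists "one" even though the index now holds "two".
GIT_EDITOR=cat git commit || true
```

After the script runs, `git status` agrees with the hook (the file is staged as `two`); only the rendered template was stale.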
Re: Handling empty directories in Git
On Apr 8, 2014, at 10:47 AM, Olivier LE ROY <olivier_le_...@yahoo.com> wrote:

> Hello,
>
> I have a project under SVN which contains empty directories. I would
> like to move this project to a Git server, still handling empty
> directories. The solution "put a .gitignore file in each empty
> directory to have them recognized by the Git database" cannot work,
> because some scripts in my projects test the actual emptiness of the
> directories.
>
> Is there any expert able to tell me: this cannot be done in Git, or
> this can be done by the following trick, or why there is no valuable
> reason to maintain empty directories under version control?

Git is designed to track files. The existence of folders is secondary to the notion that files have a relative path inside the repository, which is perceived by the user as folders.

Why can't your scripts create the folders on demand? Or, could your scripts interpret a missing folder as an empty folder?

Thanks,
Andrew Keller
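To illustrate the create-on-demand suggestion: a script that needs an empty directory can guarantee its existence just before use, instead of relying on the checkout to contain it. The directory name below is an arbitrary example:

```shell
#!/bin/sh
# Create a (possibly empty) directory on demand rather than storing it
# in version control. "build/cache" is an arbitrary example name.
dir="build/cache"

mkdir -p "$dir"   # creates it if missing; a no-op if it already exists

# From here on, the script can safely treat "$dir" as an existing,
# possibly empty, directory.
ls -A "$dir"
```

Because `mkdir -p` is idempotent, the same line works whether the directory was checked out, created by an earlier run, or absent.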
Re: Handling empty directories in Git
On Apr 8, 2014, at 1:02 PM, Andrew Keller <and...@kellerfarm.com> wrote:

> On Apr 8, 2014, at 10:47 AM, Olivier LE ROY <olivier_le_...@yahoo.com> wrote:
>
>> Hello,
>>
>> I have a project under SVN which contains empty directories. I would
>> like to move this project to a Git server, still handling empty
>> directories. The solution "put a .gitignore file in each empty
>> directory to have them recognized by the Git database" cannot work,
>> because some scripts in my projects test the actual emptiness of the
>> directories.
>>
>> Is there any expert able to tell me: this cannot be done in Git, or
>> this can be done by the following trick, or why there is no valuable
>> reason to maintain empty directories under version control?
>
> Git is designed to track files. The existence of folders is secondary
> to the notion that files have a relative path inside the repository,
> which is perceived by the user as folders.

To clarify: that's Git's personality from the point of view of the front end, and it is not the same as how the data is actually stored.

Thanks,
Andrew Keller
Re: Borrowing objects from nearby repositories
On Mar 26, 2014, at 1:29 PM, Junio C Hamano <gits...@pobox.com> wrote:

> Andrew Keller <and...@kellerfarm.com> writes:
>
>> On Mar 25, 2014, at 6:17 PM, Junio C Hamano <gits...@pobox.com> wrote:
>>> ... I think that the standard practice with the existing toolset is
>>> to "clone with reference and then repack". That is:
>>>
>>>     $ git clone --reference borrowee git://over/there mine
>>>     $ cd mine
>>>     $ git repack -a -d
>>>
>>> And then you can try this:
>>>
>>>     $ mv .git/objects/info/alternates .git/objects/info/alternates.disabled
>>>     $ git fsck
>>>
>>> to make sure that you are no longer borrowing anything from the
>>> borrowee. Once you are satisfied, you can remove the saved-away
>>> alternates.disabled file.
>>>
>>> Oh, I forgot to say that I am not opposed if somebody wants to teach
>>> "git clone" a new option to copy its objects from two places,
>>> (hopefully) the majority from a near-by reference repository and the
>>> remainder over the network, without permanently relying on the
>>> former via the alternates mechanism. The implementation of such a
>>> feature could even literally be "clone with reference first and then
>>> repack", at least initially, but perhaps even in the final version.
>
> [Administrivia: please wrap your lines to a reasonable length]
>
>> That was actually one of my first ideas - adding some sort of
>> '--auto-repack' option to git-clone. It's a relatively small change,
>> and would work. However, keeping in mind my end goal of automating
>> the feature to the point where you could simply run 'git clone url',
>> an '--auto-repack' option is more difficult to undo. You would need a
>> new parameter to disable the automatic adding of reference
>> repositories, and a new parameter to undo '--auto-repack', and you'd
>> have to remember to actually undo both of those settings. In
>> contrast, if the new feature was '--borrow', and the evolution of the
>> feature was a global configuration 'fetch.autoBorrow', then to turn
>> it off temporarily, one would only need a single new parameter,
>> '--no-auto-borrow'. I think this is a cleaner approach than the
>> former, although much more work.
> I think you may have misread me. With the new option, I was hinting
> that "clone --reference, repack, rm alternates" will be an acceptable
> internal implementation of the --borrow option that was mentioned in
> the thread. I am not sure where you got the "auto-repack" from.

Ah, yes - that is better than what I was thinking. I was thinking a bit too low-level, and using two arguments in the place of your one.

> One of the reasons you may have misread me may be because I made it
> sound as if "this may work, and when it works you will be happy, but
> if it does not work you did not lose very much" by mentioning "mv &&
> fsck". That wasn't what I meant. The "repack -a" procedure is to make
> the borrower repository no longer dependent on the borrowee, and it is
> supposed to always work.
>
> In fact, this behaviour was the whole reason why repack later learned
> its -l option to disable it, because people who cloned with
> --reference in order to reduce the disk footprint by sharing older and
> more common objects [*1*] were rightfully surprised to see that the
> borrowed objects were copied over to their borrower repository when
> they ran repack [*2*].
>
> Because this is "clone", there is nothing complex to undo. Either it
> succeeds, or you remove the whole new directory if anything fails.
>
> I said "even in the final version" for a simple reason: you cannot
> realistically do any better than the "clone --reference, repack -a -d,
> rm alternates" sequence.

Wow, that's very insightful - thanks! So, it sounds like I was right about the general areas of concern when trying to do this during a fetch, but I underestimated just how complicated it would be.

Okay, so to re-frame my idea: like you said, the goal is to find a user-friendly way for the user to tell git-clone to set up the alternates file (or perhaps just use the --alternates parameter), run a repack, and disconnect the alternate. And yet, we still want to be able to use --reference on its own, because there are existing use cases for that.

Thanks!
- Andrew Keller
Re: Borrowing objects from nearby repositories
On Mar 24, 2014, at 5:21 PM, Ævar Arnfjörð Bjarmason <ava...@gmail.com> wrote:

> On Wed, Mar 12, 2014 at 4:37 AM, Andrew Keller <and...@kellerfarm.com> wrote:
>> Hi all,
>>
>> I am considering developing a new feature, and I'd like to poll the
>> group for opinions.
>>
>> Background: A couple years ago, I wrote a set of scripts that speed
>> up cloning of frequently used repositories. The scripts utilize a
>> bare Git repository located at a known location, and automate
>> providing a --reference parameter to `git clone` and `git submodule
>> update`. Recently, some coworkers of mine expressed an interest in
>> using the scripts, so I published the current version of my scripts,
>> called `git repocache`, described at the bottom of
>> https://github.com/andrewkeller/ak-git-tools.
>>
>> Slowly, it has occurred to me that this feature, or something similar
>> to it, may be worth adding to Git, so I've been thinking about the
>> best approach. Here's my best idea so far:
>>
>> 1) Introduce '--borrow' to `git-fetch`. This would behave similarly
>> to '--reference', except that it operates on a temporary basis, and
>> does not assume that the reference repository will exist after the
>> operation completes, so any used objects are copied into the local
>> object database. In theory, this mechanism would be distinct from
>> '--reference', so if both are used, some objects would be copied, and
>> some objects would be accessible via a reference repository
>> referenced by the alternates file.
>
> Isn't this the same as `git clone --reference <path> --no-hardlinks <url>`?

'--reference' adds an entry to 'info/alternates' inside the objects folder. When an object is looked up, any objects folder listed in 'objects/info/alternates' is considered to be an extension of the local objects folder. So when, for example, fetch runs and it goes to decide whether or not it already has a blob locally, it may decide yes, and not download the blob at all, because it already exists in one of the reference repositories.
If I clone one of my 80 GB repositories over SSH using a reference repository, the resulting clone is only about 175 KB, because it assumes the reference repository will exist going forward, so it doesn't actually own any objects itself at all.

The '--no-hardlinks' option is only applicable when hard linking is available in the first place - i.e., when cloning from one local folder to another on the same filesystem (assuming the filesystem supports hard links).

Thanks,
- Andrew
Borrowing objects from nearby repositories
Hi all,

I am considering developing a new feature, and I'd like to poll the group for opinions.

Background: A couple years ago, I wrote a set of scripts that speed up cloning of frequently used repositories. The scripts utilize a bare Git repository located at a known location, and automate providing a --reference parameter to `git clone` and `git submodule update`. Recently, some coworkers of mine expressed an interest in using the scripts, so I published the current version of my scripts, called `git repocache`, described at the bottom of https://github.com/andrewkeller/ak-git-tools.

Slowly, it has occurred to me that this feature, or something similar to it, may be worth adding to Git, so I've been thinking about the best approach. Here's my best idea so far:

1) Introduce '--borrow' to `git-fetch`. This would behave similarly to '--reference', except that it operates on a temporary basis and does not assume that the reference repository will exist after the operation completes, so any used objects are copied into the local object database. In theory, this mechanism would be distinct from '--reference', so if both are used, some objects would be copied, and some objects would be accessible via a reference repository referenced by the alternates file.

2) Teach `git fetch` to read 'repocache.path' (or a better-named configuration), and use it to automatically activate borrowing.

3) For consistency, `git clone`, `git pull`, and `git submodule update` should probably all learn '--borrow' and forward it to `git fetch`.

4) In some scenarios, it may be necessary to temporarily not borrow automatically, so `git fetch` and everything that calls it may need an argument to do that.

Intended outcome: With 'repocache.path' set, and the cached repository properly updated, one could run `git clone url`, and the operation would complete much faster than it does now due to less load on the network.
Things I haven't figured out yet:

* What's the best approach to copying the needed objects? It's probably inefficient to copy individual objects out of pack files one at a time, but it could be wasteful to copy entire pack files just because you need one object. Hard-linking could help, but that won't always be available. One of my previous ideas was to add an '--auto-repack' option to `git-clone`, which solves this problem better but introduces some other front-end usability problems.

* To maintain optimal effectiveness, users would have to regularly run a fetch in the cache repository. Not all users know how to set up a scheduled task on their computer, so this might become a maintenance burden for the user. This kind of problem, I think, brings into question the viability of the underlying design, given that the ultimate goal is to clone faster with very little or no change in the use of git.

Thoughts?

Thanks,
Andrew Keller
Re: [PATCH/RFC] Documentation: Say that submodule clones use a separate gitdirs.
On Mar 7, 2014, at 7:50 PM, Henri GEIST wrote:

> On Friday, March 7, 2014 at 15:37 -0800, Junio C Hamano wrote:
>
>> Henri GEIST <geist.he...@laposte.net> writes:
>>
>>> This information is technical in nature but has some importance for
>>> general users. As this kind of clone has a separate gitdir, you will
>>> get a surprise if you copy-paste the worktree, as the gitdir will
>>> not come with it.
>>
>> I am not sure if I understand exactly what you are trying to say.
>> Are you saying that you had a submodule at sub/dir in your working
>> tree, and then
>>
>>     mkdir ../another
>>     cp -R sub/dir ../another
>>
>> did not result in a usable Git working tree in the ../another
>> directory? It is almost like complaining that
>>
>>     mkdir ../newone
>>     cp -R * ../newone/
>>
>> did not result in a usable git repository in the ../newone directory,
>> and honestly speaking, that sounds borderline insane, I'd have to
>> say.
>>
>> Yes, if a user knows what she is doing, she should be able to make
>> something like that work without running "git clone" (which is
>> probably the way most users would do it). And yes, it would be good
>> to let the user learn enough from the documentation so that she knows
>> what she is doing. But no, I do not think the end-user facing
>> documentation for the git-submodule subcommand is the way to do that.
>> That is why I suggested repository-layout as a potentially better
>> alternative location. But perhaps I am mis-reading your rationale.
>
> Let me rephrase my example:
>
> To give one of my projects to someone else, I copied it onto a USB key
> with a simple drag and drop of the mouse. And I am quite sure I am not
> alone in doing it this way. I have done this kind of thing many times
> without any problem. But that day, 'the_project' happened to be a
> submodule cloned by 'git submodule update', so on the USB key the
> $GIT_DIR of 'the_project' was missing.
>
> If 'man git-submodule' had made me aware of the particularities of
> submodule clones, I would have written in a terminal:
>
>     git clone the_project /media/usb/the_project
>
> Or at least I would have understood what happened more quickly.
> I have nothing against also adding something in repository-layout, but
> I am pretty sure normal users never read repository-layout, as it is
> not a command they use. And it is not mentioned in most tutorials.

How about something like this: "The git directory of a submodule lives inside the git directory of the parent repository instead of within the working directory." I'm not sure where to put it, though.

- Andrew Keller
Re: [PATCH/RFC] Documentation: Say that submodule clones use a separate gitdirs.
On Mar 7, 2014, at 2:53 AM, Henri GEIST <geist.he...@laposte.net> wrote:

> Adding a note in the submodule documentation signaling that the
> automatically cloned missing submodules are cloned with a separate
> gitdir. And where it is put.
>
> Signed-off-by: Henri GEIST <geist.he...@laposte.net>
> ---
>  Documentation/git-submodule.txt | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/Documentation/git-submodule.txt b/Documentation/git-submodule.txt
> index 21cb59a..ea837fd 100644
> --- a/Documentation/git-submodule.txt
> +++ b/Documentation/git-submodule.txt
> @@ -64,6 +64,11 @@ using the 'status' subcommand and get a detailed
>  overview of the difference between the index and checkouts using the
>  'summary' subcommand.
>
> +*NOTE*: when submodule add or submodule update commands clone a missing
> +submodule, the option --separate-git-dir is passed to the clone command
> +and the gitdir of the submodule is placed outside of its working
> +directory in the .git/module of the current repository.

The modules directory is 'modules'. And, the '.git' folder is not always called '.git' -- in a submodule, for example, the directory name is the name of the module. Also, this file contains mostly high-level documentation, and this addition feels technical in nature. Is there a location for more technical documentation? Or, perhaps it can be reworded to sound less technical?

> +
>  COMMANDS
> --
> 1.7.9.3.369.gd715.dirty

- Andrew Keller
Re: submodules: reuse .git/modules/... for multiple checkouts of same URL
On Mar 3, 2014, at 7:24 AM, stefan.li...@partner.bmw.de wrote:

> I have a git superproject with 3 submodules. The submodules are cloned
> from the same URL but use different branches. Git clones the repo
> three times and I have three entries in .git/modules. Is it possible
> to reuse the first clone for the next submodule clones?

Sort of - but I don't think you'll like the side effects.

If each of the submodules is checked out on a different branch, then HEAD is going to be different in each repository in .git/modules. So, each of the repositories in .git/modules is unique.

You could share the object database (see --shared or --reference), which would dramatically reduce the size of each repo on its own, but that comes with a side effect: because there is no communication between any of the repositories involved in sharing objects, git-gc will happily delete an object that may be unneeded locally, but another repository will suddenly think it's corrupt.

You could also customize the refspecs in each repository. For example, if one of the submodules has only 'origin/contrib' checked out, then you could change the refspec to '+refs/heads/contrib:refs/remotes/origin/contrib', which will cause only the objects pertaining to that branch to be downloaded. This has the most benefit when the commit graph is orphaned in some way. However, this approach requires manual labor every time you initialize a submodule.

- Andrew Keller
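The refspec customization described above might look something like this, run inside one submodule's repository. The branch name 'contrib' is an example:

```shell
#!/bin/sh
# Narrow a repository's fetch refspec so that only one branch (here,
# "contrib") is fetched from the remote. The branch name is an example.
git config remote.origin.fetch \
    '+refs/heads/contrib:refs/remotes/origin/contrib'

# Subsequent fetches now only download objects reachable from that
# branch (most beneficial when the other branches are largely disjoint).
git fetch origin
```

As noted above, this has to be repeated by hand in each submodule repository you initialize.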
Re: gitweb.cgi bug - XML Parsing Error: not well-formed
On Feb 18, 2014, at 6:41 AM, Dongsheng Song <dongsheng.s...@gmail.com> wrote:

> Here is gitweb generated XHTML fragment:
> …

You're going to have to be more specific.

- Andrew
[PATCH] gitweb: Avoid overflowing page body frame with large images
When displaying a blob in gitweb, if it's an image, specify constraints for maximum display width and height to prevent the image from overflowing the frame of the enclosing page_body div.

This change assumes that it is more desirable to see the whole image without scrolling (new behavior) than it is to see every pixel without zooming (previous behavior).

Signed-off-by: Andrew Keller <and...@kellerfarm.com>
---
This is an updated copy of this patch. Could I request a thumbs up, thumbs down, or thumbs sideways from those who develop gitweb?

Thanks,
Andrew Keller

 gitweb/gitweb.perl       | 2 +-
 gitweb/static/gitweb.css | 5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/gitweb/gitweb.perl b/gitweb/gitweb.perl
index 3bc0f0b..79057b7 100755
--- a/gitweb/gitweb.perl
+++ b/gitweb/gitweb.perl
@@ -7094,7 +7094,7 @@ sub git_blob {
 	git_print_page_path($file_name, "blob", $hash_base);
 	print "<div class=\"page_body\">\n";
 	if ($mimetype =~ m!^image/!) {
-		print qq!<img type="!.esc_attr($mimetype).qq!"!;
+		print qq!<img class="blob" type="!.esc_attr($mimetype).qq!"!;
 		if ($file_name) {
 			print qq! alt="!.esc_attr($file_name).qq! title="!.esc_attr($file_name).qq!"!;
 		}
diff --git a/gitweb/static/gitweb.css b/gitweb/static/gitweb.css
index 3b4d833..3212601 100644
--- a/gitweb/static/gitweb.css
+++ b/gitweb/static/gitweb.css
@@ -32,6 +32,11 @@ img.avatar {
 	vertical-align: middle;
 }
 
+img.blob {
+	max-height: 100%;
+	max-width: 100%;
+}
+
 a.list img.avatar {
 	border-style: none;
 }
-- 
1.7.9.6 (Apple Git-31.1)
Re: gitweb.cgi bug
On Feb 8, 2014, at 10:19 PM, Dongsheng Song wrote:

> On Sun, Feb 9, 2014 at 12:29 AM, Andrew Keller <and...@kellerfarm.com> wrote:
>
>> On Feb 8, 2014, at 8:37 AM, Dongsheng Song wrote:
>>
>>> I have a git repo PROJECT.git; the full path is /srv/repo/git/PROJECT.git.
>>> When I set git_base_url_list in gitweb.conf:
>>>
>>>   @git_base_url_list = qw(https://192.168.30.239/repo/git git@192.168.30.239:repo/git);
>>>
>>> I got the result:
>>>
>>>   https://192.168.30.239/repo/git/PROJECT.git
>>>   git@192.168.30.239:/PROJECT.git
>>>
>>> This is wrong; it should be (without the leading '/'):
>>>
>>>   git@192.168.30.239:PROJECT.git
>>
>> There is no way to generate a fetch URL of 'git@192.168.30.239:PROJECT.git'
>> in gitweb. If one of the base URLs was 'git@192.168.30.239:.', then you
>> could get a fetch URL of 'git@192.168.30.239:./PROJECT.git'.
>>
>> In general, though, I like to stay away from relative paths. Weird things
>> can happen, like HTTP working but SSH not, because the home directory for
>> SSH changed because you used a different user.
>
> What about the following translation rules?
>
>   git@192.168.30.239:        -> git@192.168.30.239:PROJECT.git
>   git@192.168.30.239:/       -> git@192.168.30.239:/PROJECT.git
>   git@192.168.30.239:/repo   -> git@192.168.30.239:/repo/PROJECT.git
>   git@192.168.30.239:/repo/  -> git@192.168.30.239:/repo/PROJECT.git

I think those translation rules are completely reasonable. However, that's not
what gitweb was originally designed to do. What you're describing is a desire
for a new feature, not the existence of a bug. Basically, gitweb does not
support relative paths when the base URL does not already contain part of the
path.
> I don't know Perl, but I think changing the following translation code is
> not hard work:
>
>   # use per project git URL list in $projectroot/$project/cloneurl
>   # or make project git URL from git base URL and project name
>   my $url_tag = "URL";
>   my @url_list = git_get_project_url_list($project);
>   @url_list = map { "$_/$project" } @git_base_url_list unless @url_list;
>   foreach my $git_url (@url_list) {
>       next unless $git_url;
>       print format_repo_url($url_tag, $git_url);
>       $url_tag = "";
>   }

You're right - that is where the change should be applied, and the change you
suggest is pretty simple. However, I'm not confident that the syntax for a
relative path is the same for all schemes. (Others on the list, feel free to
object.) Since gitweb blindly concatenates the base URL and the relative
project path, I'm worried that adding the proper functionality for one scheme
will yield incorrect behavior for another scheme.

Can you move your repository to a subfolder? Can you use absolute paths
instead of relative paths? Either of those approaches works around this
issue. I don't mean to coldly tell you that the solution is "don't do that",
but on the surface, this seems like a nasty problem.

- Andrew
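For what it's worth, the proposed rules can be sketched outside of Perl as a small shell function. `join_url` is a made-up helper for illustration, not anything in gitweb; gitweb itself effectively does the plain `"$base/$project"` join, which is exactly what produces the stray slash for scp-style base URLs:

```shell
# Hypothetical sketch of the translation rules proposed in this thread:
# scp-style bases ending in ":" and bases already ending in "/" get no
# extra separator; everything else gets a "/" inserted.
join_url() {
    base=$1 project=$2
    case $base in
        *: | */) printf '%s%s\n' "$base" "$project" ;;  # no separator needed
        *)       printf '%s/%s\n' "$base" "$project" ;; # default: insert "/"
    esac
}

join_url 'git@192.168.30.239:'      PROJECT.git  # git@192.168.30.239:PROJECT.git
join_url 'git@192.168.30.239:/repo' PROJECT.git  # git@192.168.30.239:/repo/PROJECT.git
```

This keeps today's concatenation behavior for ordinary URLs while special-casing the two endings that should not receive a slash.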
Re: gitweb.cgi bug
On Feb 8, 2014, at 8:37 AM, Dongsheng Song wrote:

> I have a git repo PROJECT.git; the full path is /srv/repo/git/PROJECT.git.
> When I set git_base_url_list in gitweb.conf:
>
>   @git_base_url_list = qw(https://192.168.30.239/repo/git git@192.168.30.239:repo/git);
>
> I got the result:
>
>   https://192.168.30.239/repo/git/PROJECT.git
>   git@192.168.30.239:/PROJECT.git
>
> This is wrong; it should be (without the leading '/'):
>
>   git@192.168.30.239:PROJECT.git

There is no way to generate a fetch URL of 'git@192.168.30.239:PROJECT.git'
in gitweb. If one of the base URLs was 'git@192.168.30.239:.', then you could
get a fetch URL of 'git@192.168.30.239:./PROJECT.git'.

In general, though, I like to stay away from relative paths. Weird things can
happen, like HTTP working but SSH not, because the home directory for SSH
changed because you used a different user.

- Andrew
Re: [PATCH] gitweb: Avoid overflowing page body frame with large images
On Feb 6, 2014, at 10:31 PM, Andrew Keller wrote:

> When displaying a blob in gitweb, if it's an image, specify constraints
> for maximum display width and height to prevent the image from overflowing
> the frame of the enclosing page_body div. This change assumes that it is
> more desirable to see the whole image without scrolling (new behavior)
> than it is to see every pixel without zooming (previous behavior).
>
> Signed-off-by: Andrew B Keller <and...@kellerfarm.com>
> ---
> I recently used Git to archive a set of scanned photos, and I used gitweb
> to provide access to them. Overall, everything worked well, but I found it
> undesirable that I had to zoom out in my browser on every photo to see the
> whole photo. In the spirit of making the default behavior the most likely
> correct behavior, this patch seems to be a good idea. However, I'm not an
> expert on the use cases of gitweb. In order for the maximum size
> constraints to take effect, the image would have to be at least the size
> of the web browser window (minus a handful of pixels), so the affected
> images are usually going to be pretty big. Are there any common use cases
> for displaying a large image without scaling (and hence, with scrolling)?
>
> Thanks,
> Andrew
>
>  gitweb/gitweb.perl       |    2 +-
>  gitweb/static/gitweb.css |    5 +++++
>  2 files changed, 6 insertions(+), 1 deletions(-)
>
> diff --git a/gitweb/gitweb.perl b/gitweb/gitweb.perl
> index 3bc0f0b..2c6a77f 100755
> --- a/gitweb/gitweb.perl
> +++ b/gitweb/gitweb.perl
> @@ -7094,7 +7094,7 @@ sub git_blob {
>  	git_print_page_path($file_name, "blob", $hash_base);
>  	print "<div class=\"page_body\">\n";
>  	if ($mimetype =~ m!^image/!) {
> -		print qq!<img type="!.esc_attr($mimetype).qq!"!;
> +		print qq!<img class="image_blob" type="!.esc_attr($mimetype).qq!"!;
>  		if ($file_name) {
>  			print qq! alt="!.esc_attr($file_name).qq!" title="!.esc_attr($file_name).qq!"!;
>  		}
> diff --git a/gitweb/static/gitweb.css b/gitweb/static/gitweb.css
> index 3b4d833..cd57c2f 100644
> --- a/gitweb/static/gitweb.css
> +++ b/gitweb/static/gitweb.css
> @@ -32,6 +32,11 @@ img.avatar {
>  	vertical-align: middle;
>  }
>  
> +img.image_blob {

I wonder if simply "blob" is a better style name here. "image_blob" stands
out a bit amongst the existing code, and "blob" appears to be specific enough
for the needs.

> +	max-height: 100%;
> +	max-width: 100%;
> +}
> +
>  a.list img.avatar {
>  	border-style: none;
>  }
> --
> 1.7.7.1

- Andrew
Re: [PATCH] gitweb: Avoid overflowing page body frame with large images
On Feb 7, 2014, at 7:35 AM, Vincent van Ravesteijn <v...@lyx.org> wrote:

> On Fri, Feb 7, 2014 at 4:31 AM, Andrew Keller <and...@kellerfarm.com> wrote:
>
>> I recently used Git to archive a set of scanned photos, and I used gitweb
>> to provide access to them. Overall, everything worked well, but I found it
>> undesirable that I had to zoom out in my browser on every photo to see the
>> whole photo. In the spirit of making the default behavior the most likely
>> correct behavior, this patch seems to be a good idea. However, I'm not an
>> expert on the use cases of gitweb. In order for the maximum size
>> constraints to take effect, the image would have to be at least the size
>> of the web browser window (minus a handful of pixels), so the affected
>> images are usually going to be pretty big. Are there any common use cases
>> for displaying a large image without scaling (and hence, with scrolling)?
>>
>> Thanks,
>> Andrew
>
> It sounds like your usecase is exactly what camlistore.org tries to achieve.

Yes. With that said, I don't think it's unreasonable for a software project
to contain images larger than a browser window. And when that happens, I'm
pretty confident that the default behavior should be to scale the image down
so the user can see the whole thing.

- Andrew
Re: Questions on local clone and push back
On Jan 30, 2014, at 12:43 AM, Arshavir Grigorian <grigor...@gmail.com> wrote:

> 1) is this a good approach to achieving what I need

If you do not intend to track the parent projects in Git, then yes - that is
a good approach. With that said, I recommend tracking each parent project in
its own Git repository, tracking the shared code in yet another Git
repository, and linking them using submodule references.

> 2) I was getting an error when I tried to run git push about the branch
> being checked out

What's the error when you use the more explicit syntax,
`git push <remote> <branch>`? Depending on configuration, simply `git push`
might not have all the information it needs to work.

> 3) how do I selectively push / merge only certain commits back to the
> source repository / branch?

You can't. Pushing, pulling, and merging deal with whole subsections of the
commit graph, not individually selected commits. With that said, you can
rebuild parts of the commit graph using selected commits, and that result can
then be pushed, pulled, or merged.

In my experience, you want to avoid picking and choosing commits in the
shared repository for each parent project. Maintaining the shared repository
is difficult enough. I advise that you find a way to make your shared code
configurable for each project, such that you can have one master branch for
all, and each project just uses or configures the code differently. With that
said, try to keep your configurations to a minimum (within reason) - in
general, the more configurations you have, the more difficult the shared
library will be to maintain.

Hope that helps,
- Andrew
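The "rebuild parts of the commit graph using selected commits" approach can be sketched with git cherry-pick. This is a self-contained demo in a throwaway repository; all file, branch, and commit names are made up for illustration:

```shell
set -e
cd "$(mktemp -d)"
git init -q shared
cd shared
git config user.email demo@example.com
git config user.name Demo

# Build a history of three commits: a shared base, a change we want to
# publish, and a change we want to keep local.
echo base > file.txt
git add file.txt
git commit -qm 'base'
echo wanted >> file.txt
git commit -qam 'wanted change'          # the commit we want to share
echo private >> file.txt
git commit -qam 'private change'         # the commit we want to keep

# Rebuild only the wanted commit on a branch that starts from the shared
# base; this branch is what you would then push upstream.
wanted=$(git rev-parse HEAD~1)
git checkout -q -b for-upstream HEAD~2
git cherry-pick "$wanted"
git log --format=%s                      # newest first: wanted change, base
```

The key point is that `for-upstream` contains only the base history plus the cherry-picked commit, so pushing it does not drag the private commit along.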
Re: Conceptual Question for git usage ...
On Jan 22, 2014, at 9:20 AM, John McIntyre <joh98@gmail.com> wrote:

> … So basically, what I'd like to do is this. I want to write code, write
> blog posts, write essays for university, whatever. And I want to use git to
> maintain revisions, but where do I store them? Do I make the Mac my hub? I
> have a git client on there. Do I make the server my 'hub'? If I make the
> server the 'hub', then won't rsync back-ups from the Mac to the server wipe
> them out? …

Git's flexibility in what counts as "the server" is valuable here. I advise
that you simply try a configuration and see how it works. It's easy to
change where origin points later.

With that said, like you, I have a small ad-hoc setup of automated rsync
backups between my various computers and servers, and I have found some
characteristics useful:

* I have rsync saving backups into dedicated backup folders on the remote
  machines. This eliminates ambiguity about what to back up (server A won't
  blow away server B's Documents folder, for example).

* Using a publicly accessible server has been useful. I set up port
  forwarding to the machine, and set up a domain name pointing to the
  server. In general, when I have Internet access, I can access the server
  that contains my repositories. I always use the same domain name, even if
  I'm in the same room as the server.

Hope that helps,
Andrew
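To illustrate how easy it is to change where origin points later, here is a small local sketch; the URLs and host names are placeholders and nothing here contacts a server:

```shell
cd "$(mktemp -d)"
git init -q notes
cd notes

# First guess at a hub: the Mac itself (placeholder URL).
git remote add origin ssh://mac.local/~/repos/notes.git

# Changed our mind: repoint origin at a dedicated server (also a placeholder).
git remote set-url origin ssh://server.example.com/srv/git/notes.git

git remote get-url origin   # ssh://server.example.com/srv/git/notes.git
```

Because the remote URL is just local configuration, trying one hub and switching to another later costs a single `git remote set-url`.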
Re: Installing GIT Server
On Aug 10, 2012, at 11:17 AM, Neeraj Mathawan wrote:

> We have decided to use GIT for a huge government implementation, and I am
> looking for some help with installation of a GIT server. A few questions:
>
> 1. What platform to choose - UNIX, MAC or Windows? We have a lot of
> Windows 2008 installations, and if there are no trade-offs we would love to
> use Windows 2008 server and install the GIT server component there.
>
> 2. Once that is done, will the client machines (mostly MAC OSX development
> machines) be able to connect using SSH or file share?
>
> Can someone help me with this?

Unix, Linux, and Mac OS X all work equally well. I haven't tried git-daemon
on Windows; however, I have noticed that Git in general is much slower on
Windows and the default memory limit is low, which can cause problems with
large repositories. I'd guess that git-daemon might have similar problems.

In the past, I have had problems with programs in general accessing shared
resources on file shares. I've had permission problems, update problems, and
corruption problems when the network fails. I have had much better success
with SSH.

~ Andrew Keller