From the output of ls-files, we remove all but the leftmost path
component and then eliminate duplicates. We do this in a while loop,
which becomes a performance bottleneck when the number of iterations is
large (e.g. for the 60000 files in linux.git).
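
For illustration only (the paths are made up), the loop reduces each path
to its top-level entry and the pipeline then drops duplicates:

$ printf '%s\n' arch/arm/Makefile arch/x86/Kconfig mm/slab.c |
        while read -r file; do
                case "$file" in
                ?*/*) echo "${file%%/*}" ;;
                *) echo "$file" ;;
                esac
        done | sort | uniq
arch
mm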

$ COMP_WORDS=(git status -- ar) COMP_CWORD=3; time _git

real    0m11.876s
user    0m4.685s
sys     0m6.808s

Using an equivalent sed script improves performance significantly:

$ COMP_WORDS=(git status -- ar) COMP_CWORD=3; time _git

real    0m1.372s
user    0m0.263s
sys     0m0.167s

The measurements were done with mingw64 bash, which is used by Git for
Windows.
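
The sed expression strips everything from the first '/' onwards, except on
lines that start with '/' (which the loop's ?*/* pattern likewise leaves
untouched). On the same made-up sample it produces identical output:

$ printf '%s\n' arch/arm/Makefile arch/x86/Kconfig mm/slab.c |
        sed -e '/^\//! s#/.*##' | sort | uniq
arch
mm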

Signed-off-by: Clemens Buchacher <dri...@gmx.net>
---
 contrib/completion/git-completion.bash | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/contrib/completion/git-completion.bash b/contrib/completion/git-completion.bash
index 6da95b8..e3ddf27 100644
--- a/contrib/completion/git-completion.bash
+++ b/contrib/completion/git-completion.bash
@@ -384,12 +384,7 @@ __git_index_files ()
        local root="${2-.}" file
 
        __git_ls_files_helper "$root" "$1" |
-       while read -r file; do
-               case "$file" in
-               ?*/*) echo "${file%%/*}" ;;
-               *) echo "$file" ;;
-               esac
-       done | sort | uniq
+       sed -e '/^\//! s#/.*##' | sort | uniq
 }
 
 # Lists branches from the local repository.
-- 
2.7.4
