Re: [Bash-completion-devel] merging related completions into a file for dynamic loading?

2014-12-29 Thread Ville Skyttä
On Sat, Dec 13, 2014 at 2:42 AM, Peter Cordes  wrote:
>  So, thoughts on replacing some of the many files in completions/*
> with symlinks to groups of related commands?

We already have some such cases, but I don't think we have set rules
for when to do it and when not. Note however that there is also
_xfunc, which can be used to call (common) functions from other
completion files that might not be loaded; that's kind of a middle
ground between grouped and completely separate completions.
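(A minimal sketch of the _xfunc idea, for anyone not familiar with it;
the real implementation lives in bash-completion itself, and the
COMPLETIONS_DIR variable and function names here are only illustrative.
It lazily sources another command's completion file, then calls one of
the functions that file defines:

```shell
_xfunc_sketch() {
    local srcfile=$1 func=$2
    shift 2
    # Source the other command's completion file only if the wanted
    # function hasn't been defined yet.
    declare -F "$func" >/dev/null 2>&1 || . "$COMPLETIONS_DIR/$srcfile"
    "$func" "$@"
}
```

So e.g. apt-get's completion could call a helper defined in the
apt-cache file without the whole file having been loaded up front.)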

> So I guess my
> thinking is that when we can bring COMPREPLY down from 100k to 50k
> items with sort -u, it's not a bad idea.

Agreed.

___
Bash-completion-devel mailing list
Bash-completion-devel@lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/bash-completion-devel


Re: [Bash-completion-devel] merging related completions into a file for dynamic loading?

2014-12-13 Thread Peter Cordes
On Sat, Dec 13, 2014 at 02:08:08PM -0300, Raphaël wrote:
> On Fri, Dec 12, 2014 at 08:42:27PM -0400, Peter Cordes wrote:
> > I guess my thinking is that when we can bring COMPREPLY down from 100k
> > to 50k items with sort -u, it's not a bad idea.
> 
> Worth noting that `sort` uses temporary files (= $TMPDIR filesystem
> accesses).
> Not sure about bash's compgen.

 sort doesn't use temp files if the input isn't THAT big.

for i in {0..1};do apt-cache pkgnames;done | strace -efile sort -u >/dev/null

I only see it open files in /tmp with more than 2 repeats of the
package list (each about 43k items on Ubuntu trusty).
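(For reference, GNU sort only spills to $TMPDIR once its in-memory
buffer fills, and the buffer size is tunable with -S if that's ever a
concern; a quick stand-in test, with seq playing the role of the
package list:

```shell
# GNU sort keeps everything in memory until its buffer fills; -S sets
# the buffer size explicitly.  seq stands in for apt-cache pkgnames.
seq 100000 | sort -u -S 64M | tail -n1   # prints 99999 (lexicographically largest line)
```

No /tmp access shows up under strace for input that fits the buffer.)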

 My thinking is that if bash allocates a lot of memory, then
allocates some small things that it will keep for a while, even
freeing the big chunk doesn't fully get you back to where you were
before.  You might keep memory fragmentation and bloat to a minimum by
not passing giant COMPREPLY arrays back to bash for it to sort and
uniq, when the list is likely to be big enough to warrant piping the
data through sort(1).
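A hypothetical sketch of that shape: dedupe in the external pipeline so
bash only ever allocates the final, smaller word list (the _pkg_words
function here is a made-up stand-in for a real pipeline like
"apt-cache dumpavail | grep ... | cut ..."):

```shell
# Stand-in for the real package-list pipeline.
_pkg_words() {
    printf 'foo\nbar\nfoo\nbaz\nbar\n'
}

# sort(1) does the dedup work in the pipe; the duplicate words never
# land in a bash array at all.
COMPREPLY=( $( _pkg_words | sort -u ) )
echo "${COMPREPLY[@]}"   # prints: bar baz foo
```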

 And like I said, I'm inclined to be cautious, because I once saw
bash die with an error message about memory (possibly corruption; I
can't remember).  My system doesn't randomly crash, so it's unlikely
to be a hardware error.  Ubuntu's bash 4.3.11(1)-release.  I guess I
should try to reproduce this sometime.

-- 
#define X(x,y) x##y
Peter Cordes ;  e-mail: X(peter@cor , des.ca)

"The gods confound the man who first found out how to distinguish the hours!
 Confound him, too, who in this place set up a sundial, to cut and hack
 my day so wretchedly into small pieces!" -- Plautus, 200 BC



Re: [Bash-completion-devel] merging related completions into a file for dynamic loading?

2014-12-13 Thread Raphaël
On Fri, Dec 12, 2014 at 08:42:27PM -0400, Peter Cordes wrote:
> I guess my thinking is that when we can bring COMPREPLY down from 100k
> to 50k items with sort -u, it's not a bad idea.

Worth noting that `sort` uses temporary files (= $TMPDIR filesystem
accesses).
Not sure about bash's compgen.




[Bash-completion-devel] merging related completions into a file for dynamic loading?

2014-12-12 Thread Peter Cordes
 apt-cache and apt-get completions use very similar commands, but
slightly different ones.  This is silly.  apt-get should just be a
symlink to apt-cache, with the contents living in the apt-cache file.

e.g.
_apt_cache_sources() {
    apt-cache dumpavail | command grep "^Source: $1" | cut -f2 -d" " | sort -u
}

vs. apt-get's:

    source)
        COMPREPLY=( $( apt-cache --no-generate pkgnames "$cur" \
            2> /dev/null ) $( apt-cache dumpavail | \
            command grep "^Source: $cur" | sort -u | cut -f2 -d" " ) )
        return 0
        ;;

 Note that in apt-get's case, sort -u happens before stripping off the
optional (ver-number) after the package name, which is silly.  bash
sorts and uniques COMPREPLY before using it, anyway.  (Is this a new
feature, or can we go around stripping sort out of pipelines all over
the place?)  I'm inclined to leave it in, in this case, to reduce the
amount of memory bash has to allocate.  I'm also thinking it wouldn't
be a bad idea to rearrange things so the sort -u happens on the
combined output of the available binary packages and the available
source packages, since apt-get source will accept binary-package
names and fetch the source for them.
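Something like this untested sketch, i.e. cut before a single combined
sort -u over both lists (the _binary_pkgs/_avail_dump functions are
stand-ins for the two apt-cache invocations so the sketch runs
anywhere; the real code would call apt-cache directly):

```shell
# Stand-ins for "apt-cache --no-generate pkgnames" and
# "apt-cache dumpavail" respectively:
_binary_pkgs() { printf 'vim\nvim-common\n'; }
_avail_dump()  { printf 'Source: vim (2:7.4)\nSource: vim-common\n'; }

cur=vim
# Strip the optional (ver-number) with cut first, then dedupe the
# combined binary + source package names in one pass:
COMPREPLY=( $( {
    _binary_pkgs "$cur"
    _avail_dump | command grep "^Source: $cur" | cut -f2 -d" "
} | sort -u ) )
echo "${COMPREPLY[@]}"   # prints: vim vim-common
```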

 So, thoughts on replacing some of the many files in completions/*
with symlinks to groups of related commands?
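For concreteness, since the dynamic loader sources completions/<command>,
the grouped layout might look like this (paths hypothetical; mktemp
stands in for the completions/ directory here):

```shell
# completions/apt-cache would register both commands, e.g.:
#   complete -F _apt_cache apt-cache
#   complete -F _apt_get  apt-get
# and completions/apt-get becomes a symlink, so dynamically loading
# either name sources the same file:
dir=$(mktemp -d)    # stand-in for the completions/ directory
touch "$dir/apt-cache"
ln -s apt-cache "$dir/apt-get"
readlink "$dir/apt-get"   # prints: apt-cache
```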

 And, thoughts on the performance of bash with very large arrays to
sort?  I did manage to get Ubuntu's GNU bash, version 4.3.11(1)-release
(x86_64-pc-linux-gnu) to print an error and exit once, while testing
apt-get completions.  I don't have the exact message, or details of
what I completed, because my screen(1) window only stayed open showing
the error message for about a second, and unfortunately I didn't write
it down while I still remembered it.  So I guess my thinking is that
when we can bring COMPREPLY down from 100k to 50k items with sort -u,
it's not a bad idea.


