Re: [PATCH v3] tags: much faster, parallel "make tags"
On Sun, May 10, 2015 at 09:58:12PM +0100, Pádraig Brady wrote:
> On 10/05/15 14:26, Alexey Dobriyan wrote:
> > On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> >> On 08/05/15 14:26, Alexey Dobriyan wrote:
> >
> >>> exuberant()
> >>> {
> >>> -	all_target_sources | xargs $1 -a\
> >>> +	rm -f .make-tags.*
> >>> +
> >>> +	all_target_sources >.make-tags.src
> >>> +	NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
> >>
> >> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
> >
> > nproc was discarded because getconf is standardized.
>
> Note getconf doesn't honor CPU affinity which may be fine here?
>
> $ taskset -c 0 getconf _NPROCESSORS_ONLN
> 4
> $ taskset -c 0 nproc
> 1

Why would anyone tag files under affinity?

> >>> +	NR_LINES=$(wc -l <.make-tags.src)
> >>> +	NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> >>> +
> >>> +	split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
> >>
> >> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8
> >> (2010-12-22)
> >
> > -nl/ can't count and always makes the first file somewhat bigger, which is
> > suspicious. What else can't it do right?
>
> It avoids the overhead of reading all data and counting the lines,
> by splitting the data into approx equal numbers of lines as detailed at:
> http://gnu.org/s/coreutils/split

~1 second -- statistical error.

> >>> +	sort .make-tags.* >>$2
> >>> +	rm -f .make-tags.*
> >>
> >> Using sort --merge would speed up significantly?
> >
> > By ~1 second, yes.
> >
> >> Even faster would be to get sort to skip the header lines, avoiding the
> >> need for sed.
> >> It's a bit awkward and was discussed at:
> >> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> >> Summarising that, if not using merge you can:
> >>
> >> 	tlines=$(($(wc -l < "$2") + 1))
> >> 	tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
> >>
> >> Or if merge is appropriate then:
> >>
> >> 	tlines=$(($(wc -l < "$2") + 1))
> >> 	eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
> >
> > Might as well teach ctags to do real parallel processing.
> > LC_* are set by top level Makefile.
> >
> >> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
> >
> > The real question is how to kill ctags reliably.
> > Naive
> >
> > 	trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
> >
> > doesn't work.
> >
> > Files are removed, but processes aren't.
>
> Is $(jobs -p) generating the correct list?

It looks like it does.

> On an interactive shell here it is.
> Perhaps you need to explicitly use #!/bin/sh -m
> at the start to enable job control like that?
> Another option would be to append each background $! pid
> to a list and kill that list.
> Note also you may want to `wait` after the kill too.

All of this doesn't work reliably.
I switched to "xargs -P" and Ctrl+C became reliable, immediate and free
for the programmer.

See updated patch.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
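The "xargs -P" approach mentioned above can be sketched as a minimal,
runnable illustration (the file names are made up, and `echo processing`
stands in for the real per-file ctags invocation):

```shell
#!/bin/sh
# With -P, xargs itself owns the worker processes: a single Ctrl+C
# (SIGINT) reaches the whole foreground process group, so xargs and
# all of its children terminate together -- no trap/kill/jobs
# bookkeeping is needed in the script.
NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)

# "echo processing" is a stand-in for the real ctags command.
printf '%s\n' file1.c file2.c file3.c file4.c |
	xargs -P "$NR_CPUS" -n 1 echo processing
```

One worker is started per input here (-n 1) purely for illustration; a
real run would batch many file names per worker.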
Re: [PATCH v3] tags: much faster, parallel "make tags"
On 10/05/15 14:26, Alexey Dobriyan wrote:
> On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
>> On 08/05/15 14:26, Alexey Dobriyan wrote:
>
>>> exuberant()
>>> {
>>> -	all_target_sources | xargs $1 -a\
>>> +	rm -f .make-tags.*
>>> +
>>> +	all_target_sources >.make-tags.src
>>> +	NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
>>
>> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
>
> nproc was discarded because getconf is standardized.

Note getconf doesn't honor CPU affinity which may be fine here?

	$ taskset -c 0 getconf _NPROCESSORS_ONLN
	4
	$ taskset -c 0 nproc
	1

>>> +	NR_LINES=$(wc -l <.make-tags.src)
>>> +	NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
>>> +
>>> +	split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
>>
>> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8
>> (2010-12-22)
>
> -nl/ can't count and always makes the first file somewhat bigger, which is
> suspicious. What else can't it do right?

It avoids the overhead of reading all data and counting the lines,
by splitting the data into approx equal numbers of lines as detailed at:
http://gnu.org/s/coreutils/split

>>> +	sort .make-tags.* >>$2
>>> +	rm -f .make-tags.*
>>
>> Using sort --merge would speed up significantly?
>
> By ~1 second, yes.
>
>> Even faster would be to get sort to skip the header lines, avoiding the
>> need for sed.
>> It's a bit awkward and was discussed at:
>> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
>> Summarising that, if not using merge you can:
>>
>> 	tlines=$(($(wc -l < "$2") + 1))
>> 	tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
>>
>> Or if merge is appropriate then:
>>
>> 	tlines=$(($(wc -l < "$2") + 1))
>> 	eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
>
> Might as well teach ctags to do real parallel processing.
> LC_* are set by top level Makefile.
>
>> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
>
> The real question is how to kill ctags reliably.
> Naive
>
> 	trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
>
> doesn't work.
>
> Files are removed, but processes aren't.

Is $(jobs -p) generating the correct list?
On an interactive shell here it is.
Perhaps you need to explicitly use #!/bin/sh -m
at the start to enable job control like that?
Another option would be to append each background $! pid
to a list and kill that list.
Note also you may want to `wait` after the kill too.

cheers,
Pádraig.
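The pid-list alternative suggested above would look roughly like this (a
sketch; `do_work` is a placeholder for one per-chunk ctags run, and the
chunk count is illustrative):

```shell
#!/bin/sh
# Sketch: record every background pid in a plain variable instead of
# relying on $(jobs -p), which needs job control enabled.  On INT/TERM,
# kill the recorded pids, wait for them to actually exit, then clean up.
do_work() { sleep 1; }   # placeholder for one per-chunk ctags invocation

pids=
trap 'kill $pids 2>/dev/null; wait; rm -f .make-tags.*; exit 130' INT TERM

for i in 1 2 3 4; do
	do_work &
	pids="$pids $!"       # append each background pid to the list
done
wait
```

As the thread concludes, even this bookkeeping proved less reliable in
practice than letting `xargs -P` own the worker processes.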
Re: [PATCH v3] tags: much faster, parallel "make tags"
[fix Andrew's email]

On Sun, May 10, 2015 at 04:26:34PM +0300, Alexey Dobriyan wrote:
> On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> > On 08/05/15 14:26, Alexey Dobriyan wrote:
> >
> > > exuberant()
> > > {
> > > -	all_target_sources | xargs $1 -a\
> > > +	rm -f .make-tags.*
> > > +
> > > +	all_target_sources >.make-tags.src
> > > +	NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
> >
> > `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
>
> nproc was discarded because getconf is standardized.
>
> > > +	NR_LINES=$(wc -l <.make-tags.src)
> > > +	NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> > > +
> > > +	split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
> >
> > `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8
> > (2010-12-22)
>
> -nl/ can't count and always makes the first file somewhat bigger, which is
> suspicious. What else can't it do right?
>
> > > +	sort .make-tags.* >>$2
> > > +	rm -f .make-tags.*
> >
> > Using sort --merge would speed up significantly?
>
> By ~1 second, yes.
>
> > Even faster would be to get sort to skip the header lines, avoiding the
> > need for sed.
> > It's a bit awkward and was discussed at:
> > http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> > Summarising that, if not using merge you can:
> >
> > 	tlines=$(($(wc -l < "$2") + 1))
> > 	tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
> >
> > Or if merge is appropriate then:
> >
> > 	tlines=$(($(wc -l < "$2") + 1))
> > 	eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
>
> Might as well teach ctags to do real parallel processing.
> LC_* are set by top level Makefile.
>
> > p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
>
> The real question is how to kill ctags reliably.
> Naive
>
> 	trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
>
> doesn't work.
>
> Files are removed, but processes aren't.
Re: [PATCH v3] tags: much faster, parallel "make tags"
On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> On 08/05/15 14:26, Alexey Dobriyan wrote:
> > exuberant()
> > {
> > -	all_target_sources | xargs $1 -a\
> > +	rm -f .make-tags.*
> > +
> > +	all_target_sources >.make-tags.src
> > +	NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
>
> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)

nproc was discarded because getconf is standardized.

> > +	NR_LINES=$(wc -l <.make-tags.src)
> > +	NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> > +
> > +	split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
>
> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8
> (2010-12-22)

-nl/ can't count and always makes the first file somewhat bigger, which is
suspicious. What else can't it do right?

> > +	sort .make-tags.* >>$2
> > +	rm -f .make-tags.*
>
> Using sort --merge would speed up significantly?

By ~1 second, yes.

> Even faster would be to get sort to skip the header lines, avoiding the
> need for sed.
> It's a bit awkward and was discussed at:
> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> Summarising that, if not using merge you can:
>
> 	tlines=$(($(wc -l < "$2") + 1))
> 	tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
>
> Or if merge is appropriate then:
>
> 	tlines=$(($(wc -l < "$2") + 1))
> 	eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2

Might as well teach ctags to do real parallel processing.
LC_* are set by top level Makefile.

> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*

The real question is how to kill ctags reliably.
Naive

	trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT

doesn't work.

Files are removed, but processes aren't.
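The NR_LINES arithmetic being discussed is a plain ceiling division; a
self-contained check of the chunking (with a made-up 10-line input split
across 4 CPUs):

```shell
#!/bin/sh
# ceil(NR_LINES / NR_CPUS): every chunk gets at most this many lines,
# so the last chunk may come up short but none is oversized.
seq 1 10 > .demo.src        # pretend list of 10 source files
NR_CPUS=4
NR_LINES=$(wc -l < .demo.src)
NR_LINES=$(( (NR_LINES + NR_CPUS - 1) / NR_CPUS ))   # 10 across 4 -> 3

# Same split invocation as the patch: numeric suffixes, 6 digits wide.
split -a 6 -d -l "$NR_LINES" .demo.src .demo.src.
wc -l .demo.src.*           # chunks of 3, 3, 3 and 1 lines
rm -f .demo.src .demo.src.*
```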
Re: [PATCH v3] tags: much faster, parallel "make tags"
On 08/05/15 14:26, Alexey Dobriyan wrote:
> ctags is a single-threaded program. Split the list of files to be tagged
> into equal parts, 1 part for each CPU, and then merge the results.
>
> Speedup on one 2-way box I have is ~143 s => ~99 s (-31%).
> On another 4-way box: ~120 s => ~65 s (-46%!).
>
> Resulting "tags" files aren't byte-for-byte identical because the ctags
> program numbers anon struct and enum declarations with "__anonNNN"
> symbols. If those lines are removed, the "tags" file becomes byte-for-byte
> identical with those generated with the current code.
>
> Signed-off-by: Alexey Dobriyan
> ---
>
>  scripts/tags.sh | 36 +++-
>  1 file changed, 31 insertions(+), 5 deletions(-)
>
> --- a/scripts/tags.sh
> +++ b/scripts/tags.sh
> @@ -152,7 +152,19 @@ dogtags()
>
>  exuberant()
>  {
> -	all_target_sources | xargs $1 -a\
> +	rm -f .make-tags.*
> +
> +	all_target_sources >.make-tags.src
> +	NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)

`nproc` is simpler and available since coreutils 8.1 (2009-11-18)

> +	NR_LINES=$(wc -l <.make-tags.src)
> +	NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> +
> +	split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.

`split -d -nl/$(nproc)` is simpler and available since coreutils 8.8
(2010-12-22)

> +
> +	for i in .make-tags.src.*; do
> +		N=$(echo $i | sed -e 's/.*\.//')
> +		# -u: don't sort now, sort later
> +		xargs <$i $1 -a -f .make-tags.$N -u \
>  		-I __initdata,__exitdata,__initconst,	\
>  		-I __cpuinitdata,__initdata_memblock	\
>  		-I __refdata,__attribute,__maybe_unused,__always_unused \
> @@ -211,7 +223,21 @@ exuberant()
>  		--regex-c='/DEFINE_PCI_DEVICE_TABLE\((\w*)/\1/v/'	\
>  		--regex-c='/(^\s)OFFSET\((\w*)/\2/v/'			\
>  		--regex-c='/(^\s)DEFINE\((\w*)/\2/v/'			\
> -		--regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'
> +		--regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'	\
> +		&
> +	done
> +	wait
> +	rm -f .make-tags.src .make-tags.src.*
> +
> +	# write header
> +	$1 -f $2 /dev/null
> +	# remove headers
> +	for i in .make-tags.*; do
> +		sed -i -e '/^!/d' $i &
> +	done
> +	wait
> +	sort .make-tags.* >>$2
> +	rm -f .make-tags.*

Using sort --merge would speed up significantly?

Even faster would be to get sort to skip the header lines, avoiding the
need for sed.
It's a bit awkward and was discussed at:
http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
Summarising that, if not using merge you can:

	tlines=$(($(wc -l < "$2") + 1))
	tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2

Or if merge is appropriate then:

	tlines=$(($(wc -l < "$2") + 1))
	eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2

Note eval is fine here as inputs are controlled within the script.

cheers,
Pádraig.

p.s. To avoid temp files altogether you could wire everything up through
fifos, though that's probably overkill here TBH

p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
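The header-skipping merge suggested above can be tried in isolation with
dummy data (`tags.out` plays the role of $2, the `!_H*` lines stand in
for real ctags pseudo-tag headers, and the chunk names are illustrative):

```shell
#!/bin/sh
# Each ctags run writes the same pseudo-tag header lines ("!_TAG_...")
# at the top of its chunk.  Count how many header lines the output file
# already has, then let tail drop exactly that many lines from every
# chunk -- no per-chunk sed pass is needed.
printf '!_H1\n!_H2\n' > tags.out                    # header, as from: $1 -f $2 /dev/null
printf '!_H1\n!_H2\nbbb\nddd\n' > .make-tags.000000
printf '!_H1\n!_H2\naaa\nccc\n' > .make-tags.000001

tlines=$(($(wc -l < tags.out) + 1))                 # first data line of each chunk
tail -q -n +"$tlines" .make-tags.* | LC_ALL=C sort >> tags.out
cat tags.out                                        # headers first, then aaa bbb ccc ddd
rm -f tags.out .make-tags.*
```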
[PATCH v3] tags: much faster, parallel "make tags"
ctags is a single-threaded program. Split the list of files to be tagged
into equal parts, 1 part for each CPU, and then merge the results.

Speedup on one 2-way box I have is ~143 s => ~99 s (-31%).
On another 4-way box: ~120 s => ~65 s (-46%!).

Resulting "tags" files aren't byte-for-byte identical because the ctags
program numbers anon struct and enum declarations with "__anonNNN"
symbols. If those lines are removed, the "tags" file becomes byte-for-byte
identical with those generated with the current code.

Signed-off-by: Alexey Dobriyan
---

 scripts/tags.sh | 36 +++-
 1 file changed, 31 insertions(+), 5 deletions(-)

--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -152,7 +152,19 @@ dogtags()
 
 exuberant()
 {
-	all_target_sources | xargs $1 -a\
+	rm -f .make-tags.*
+
+	all_target_sources >.make-tags.src
+	NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
+	NR_LINES=$(wc -l <.make-tags.src)
+	NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
+
+	split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
+
+	for i in .make-tags.src.*; do
+		N=$(echo $i | sed -e 's/.*\.//')
+		# -u: don't sort now, sort later
+		xargs <$i $1 -a -f .make-tags.$N -u \
 		-I __initdata,__exitdata,__initconst,	\
 		-I __cpuinitdata,__initdata_memblock	\
 		-I __refdata,__attribute,__maybe_unused,__always_unused \
@@ -211,7 +223,21 @@ exuberant()
 		--regex-c='/DEFINE_PCI_DEVICE_TABLE\((\w*)/\1/v/'	\
 		--regex-c='/(^\s)OFFSET\((\w*)/\2/v/'			\
 		--regex-c='/(^\s)DEFINE\((\w*)/\2/v/'			\
-		--regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'
+		--regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'	\
+		&
+	done
+	wait
+	rm -f .make-tags.src .make-tags.src.*
+
+	# write header
+	$1 -f $2 /dev/null
+	# remove headers
+	for i in .make-tags.*; do
+		sed -i -e '/^!/d' $i &
+	done
+	wait
+	sort .make-tags.* >>$2
+	rm -f .make-tags.*
 
 	all_kconfigs | xargs $1 -a \
 		--langdef=kconfig --language-force=kconfig \
@@ -276,7 +302,7 @@ emacs()
 xtags()
 {
 	if $1 --version 2>&1 | grep -iq exuberant; then
-		exuberant $1
+		exuberant $1 $2
 	elif $1 --version 2>&1 | grep -iq emacs; then
 		emacs $1
 	else
@@ -322,13 +348,13 @@ case "$1" in
 "tags")
 	rm -f tags
-	xtags ctags
+	xtags ctags tags
 	remove_structs=y
 	;;
 "TAGS")
 	rm -f TAGS
-	xtags etags
+	xtags etags TAGS
 	remove_structs=y
 	;;
 esac