On Mon, Apr 4, 2011 at 3:02 PM, Bjoern Michaelsen
<bjoern.michael...@canonical.com> wrote:
> Hi Norbert,
>
> On Mon, 4 Apr 2011 11:33:40 -0500
> Norbert Thiebaud <nthieb...@gmail.com> wrote:
>
>> iow a likely poor performing sort.
>
> Still: sorting and uniqing <1000 strings pales compared to the
> parsing and compiling of a huge amount of C++ source code (one file
> per string above). It will even pale compared to preprocessing the
> files (which is what ccache does). Even when using gawk hashtables
> for that.
>
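As an aside, the hash-based uniquing mentioned above can be done in a single
awk pass with no sort at all. A minimal sketch (illustrative sample input,
not the actual build-system code):

```shell
# De-duplicate input lines with an awk associative array (a hashtable):
# 'seen[$0]++' is 0 the first time a line appears, so '!seen[$0]++' prints
# each distinct line once, preserving input order. No sort pass is needed,
# so the work stays roughly O(n) instead of O(n log n).
printf 'foo\nbar\nfoo\nbaz\nbar\n' | awk '!seen[$0]++'
# prints:
# foo
# bar
# baz
```

The fork/exec and script-parsing overhead Norbert mentions below still
applies, of course; the hashtable only removes the sort itself.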
Sure, but I was not comparing the extra cost to the cost of a compile,
but to the intended savings.

On one hand, every time I run make I pay the cost of these redundant
stat()s; on the other hand, every time make has to compile something I
have to pay the cost of that extra sort (including the cost of the fork
and of loading and parsing the program, if awk or another interpreter
is used, and these are just fixed costs on top of the actual sort).

Note that the root cause of this evil has a lot to do with our somewhat
anarchic include strategy... maybe we should re-introduce include
guards ;->

Norbert
_______________________________________________
LibreOffice mailing list
LibreOffice@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/libreoffice