On Fri, 16 Sep 2016 07:29:28 -0400 Richard Hipp <d...@sqlite.org> wrote:
> The algorithm used for "ORDER BY ... LIMIT N" uses much less memory
> than a full-up "ORDER BY" because it only keeps track of the top N
> entries seen so far, discarding the rest. But it also only uses a
> single thread.

My immediate thought was that this is an optimization opportunity. As the OP alludes to, N is the sum of LIMIT and OFFSET. Would you have information on how these are typically used? My guess is that the LIMIT argument is typically small, less than 20, but that OFFSET marches on as the user pages through results, and can grow to a significant fraction of the table.

If LIMIT N is small and OFFSET is not used, a memory-efficient, non-locking parallel algorithm would reserve N slots for each thread and divide the table among the threads, each processing 1/threads of the rows. The per-thread top-N results could then be merge-sorted and truncated to N.

Humbly submitted,

--jkl
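P.S. A rough sketch of the idea in Python rather than C (purely illustrative; the function names, the thread count, and the in-memory row list are my own assumptions, not SQLite internals):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def top_n_of_slice(rows, n):
    # Each worker keeps only the N smallest keys from its slice,
    # so memory stays bounded at N entries per thread.
    return heapq.nsmallest(n, rows)

def parallel_top_n(rows, n, workers=4):
    # Divide the rows into `workers` contiguous slices, one per thread.
    chunk = (len(rows) + workers - 1) // workers
    slices = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(lambda s: top_n_of_slice(s, n), slices)
    # Each partial result is already sorted; merge-sort them and
    # keep the global top N.
    return list(heapq.merge(*partials))[:n]
```

The threads here only illustrate the partitioning; in a real engine each worker would scan its own range of the table, and no locking is needed because the per-thread heaps are private until the final merge.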