To perform a count you have to read through the entire result set regardless.  Why
not implement the paging logic in your own program: use prepare and step, and simply stop extracting column data once you hit your predefined limit?
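
Something along these lines, as a rough sketch against the C API (the table and column names are just borrowed from your example below, and error handling is trimmed):

#include <sqlite3.h>
#include <stdio.h>

/* Step through the full result set once: every row is counted, but column
   data is only extracted for rows that fall on the requested page. */
static int page_and_count(sqlite3 *db, int offset, int page_size, int *total)
{
    const char *sql =
        "SELECT field1, field2 FROM table1 "
        "WHERE field99 LIKE 'G%' AND field66 = 7 OR field18 <= 5.7";
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, 0);
    if (rc != SQLITE_OK) return rc;

    *total = 0;
    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
        if (*total >= offset && *total < offset + page_size) {
            /* Only rows on the current page are actually unpacked. */
            printf("%s | %s\n",
                   (const char *)sqlite3_column_text(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));
        }
        (*total)++;   /* every row still contributes to the total count */
    }
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}

You still pay for stepping over every matching row, but you avoid pulling column data for the thousands of rows you are about to discard, and you get the total count in the same pass.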
JS
Mikey C wrote:
Hi,

I think this has been discussed before, but I can't find a good solution so
I'll post it again to see what people think.

Here's the problem.  I have a large number of records in a table, which
contains many columns.  Hence a large amount of data.

I have a SQL query that filters the results by search criteria across many
of these columns.

I am using a web front end to display paged results.  I need to tell the
user how many records there are in total, how many pages and which page they
are viewing.

I would like to use the LIMIT keyword to restrict the result using the two
parameters (offset and limit count) so that I do not waste resources loading
up thousands of records just to discard the ones not on the current page.

However if LIMIT is added to the SQL, I do not get a count of the records
that the SQL select would have produced if I had not limited the query with
LIMIT.

I could do two selects with the same WHERE restriction, one with SELECT
COUNT(*) and the other with SELECT field1, field2, etc., but this seems
wasteful, especially if the query is expensive to run.
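i.e. something like this (reusing the names from the example below):

SELECT COUNT(*) FROM table1
WHERE field99 LIKE 'G%' AND field66 = 7 OR field18 <= 5.7;

SELECT field1, field2, field3, fieldN FROM table1
WHERE field99 LIKE 'G%' AND field66 = 7 OR field18 <= 5.7
LIMIT 341, 10;

Both queries evaluate the same (potentially expensive) WHERE clause, so the filtering work is done twice.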

Perhaps a new function could be added to SQLite that returns the record
count regardless of any LIMIT applied?
e.g.

SELECT field1, field2, field3, fieldN, count_rows()
FROM table1
WHERE field99 LIKE 'G%'
AND field66 = 7
OR field18 <= 5.7
LIMIT 341, 10

would return a maximum of 10 records, but count_rows() would return how
many rows the query would have produced if the LIMIT were not in place.

This would make paging of results very straightforward and efficient.

Anyone agree/disagree this would be useful?



