Sorry if this is a silly question - I don't have much experience with
databases.
Say I have a table with many (millions or more) rows and a query:
SELECT * FROM mytable WHERE some_condition ORDER BY rowid
First, I'm assuming that in addition to whatever time some_condition
takes, I'll see an overhead of O(N log N) for the sort in the worst
case, but probably much less (O(N), or even O(1)?) because the rows are
probably already stored in rowid order. Is that correct?
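(I'm guessing I could check whether a separate sort step actually
happens with EXPLAIN QUERY PLAN, assuming my SQLite version supports
it; the statement below is just how I imagine checking, not something
I've run:)

EXPLAIN QUERY PLAN
SELECT * FROM mytable WHERE some_condition ORDER BY rowid;
-- if the plan mentions a temporary sort step, the O(N log N) cost
-- applies; otherwise the rows presumably come back in rowid order
-- with no extra sort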
My real question is whether there is an efficient way to index into the
results of such a query. In other words, I want rows N through N+100
of the result set. Can I do much better than just executing the query
and throwing away the first N rows?
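(By "throwing away the first N rows" I mean something like the naive
LIMIT/OFFSET form below, which I assume still has to step through the
first N matching rows internally:)

SELECT * FROM mytable
 WHERE some_condition
 ORDER BY rowid
 LIMIT 100 OFFSET 1000;  -- skips the first 1000 matching rows,
                         -- then returns the next 100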
I thought of making an auxiliary table that maps rowid in the table to
the row number within the query result for large chunks of the table,
but the memory footprint could get large if some_condition changes
often.
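(Roughly what I have in mind; the names are made up, and the table
would have to be rebuilt whenever some_condition changes:)

CREATE TABLE rownum_map (
  result_row    INTEGER PRIMARY KEY,  -- position of the row in the query result
  mytable_rowid INTEGER               -- rowid of that row in mytable
);
-- after filling this in by walking the full result once, fetching a
-- window would be something like:
SELECT t.*
  FROM rownum_map m JOIN mytable t ON t.rowid = m.mytable_rowid
 WHERE m.result_row BETWEEN 1000 AND 1099
 ORDER BY m.result_row;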
Does anyone have any suggestions?
Thanks,
Jeff