Hi,

The most efficient way for H2 (and probably for other databases as well) is:

1) Use a PreparedStatement.
2) Limit the result (for example with LIMIT 1000 - I don't know what the
best value is, but 1000 is probably good).
3) Use 'WHERE primaryKey > ?' together with 'ORDER BY primaryKey'.
Begin with ?=-1.
4) Re-run the query, each time setting the parameter to the last value of
the previous result, until no rows are returned (a JDBC sketch follows the
SQL example below).

Example:

drop table test;
create table test(id identity, name varchar)
as select x, 'Hello' from system_range(1, 10000);
select * from test where id > 0 order by id asc limit 10;
select * from test where id > 10 order by id asc limit 10;
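
In Java/JDBC, the loop could look roughly like the sketch below (the JDBC
URL, credentials, and batch size of 1000 are just placeholders; the table
name 'copies' and key column 'id' are taken from your query):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BatchIterate {
    public static void main(String[] args) throws Exception {
        Class.forName("org.h2.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test", "sa", "");
             PreparedStatement prep = conn.prepareStatement(
                "SELECT * FROM copies WHERE id > ? ORDER BY id LIMIT 1000")) {
            long lastId = -1; // start below the smallest key, as in step 3
            while (true) {
                prep.setLong(1, lastId);
                int rows = 0;
                try (ResultSet rs = prep.executeQuery()) {
                    while (rs.next()) {
                        rows++;
                        lastId = rs.getLong("id"); // remember the last key seen
                        // process the row here
                    }
                }
                if (rows == 0) {
                    break; // no more rows: the scan is complete
                }
            }
        }
    }
}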

Regards,
Thomas


On Thu, Mar 26, 2009 at 6:55 PM, Peter Chan <pe...@oaktop.com> wrote:
>
> I have a rather large table, and I have a query that does what is
> essentially a table scan, but in batches of 100 rows:
>
> select limit 19200 10 * FROM copies  ORDER BY copies.id ASC
>
> I noticed that the performance degrades rapidly once the offset
> reaches a certain point. From inspecting the source code, it looks
> like once the offset is at MAX_ROW_COUNTS, the result is buffered to
> disk and becomes much slower, in the order of 100s of ms from the
> previous 10s of ms.
>
> The current table size is 160k, but this table has the potential to
> store tens of millions of rows, so buffering to disk is going to be
> very slow. Is there an efficient way to batch-iterate through all the
> items without having to buffer to disk for each query?
>
