On Nov 4, 12:27 pm, Michael Lang <[email protected]> wrote:
> Are there any known issues with Dataset's each iterator for large
> dataset results (100,000 records) on CentOS?

Dataset#each does iterate over the dataset one row at a time, but almost all
adapters load the entire result set into memory first.  So while only one
ruby hash/model object may be active per iteration, the full backend result
set is still sitting somewhere in memory.

One adapter that has a workaround for this is the native postgres
adapter, which offers a Dataset#use_cursor method that loads rows via a
server-side cursor.  With this, you can iterate over an arbitrarily large
dataset without running into memory issues.
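A sketch of that (the connection URL and :big_table are placeholders; this
requires the native postgres adapter, i.e. the pg gem):

```ruby
require "sequel"

DB = Sequel.connect("postgres://localhost/mydb") # placeholder URL

# use_cursor wraps the query in a server-side cursor, so rows are
# fetched from the server in batches (1000 per fetch by default,
# adjustable via :rows_per_fetch) instead of all at once.
DB[:big_table].use_cursor(rows_per_fetch: 500).each do |row|
  # process row; memory use stays bounded by the fetch size
end
```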

In most other cases, you'll need to use the pagination extension, which
fetches the result set a fixed-size page at a time using LIMIT/OFFSET.

Jeremy

-- 
You received this message because you are subscribed to the Google Groups 
"sequel-talk" group.
