I'm afraid limitby alone won't work, since it returns a fixed slice and I
guess it's not possible to change the limit dynamically. So I'll have to
either loop through some kind of subqueries, or use the original query
with a limited set of fields (takes 60 secs for 700k records; I'm not ready
to test on 2-3M records), insert those into the scheduler and then let the
scheduler deal with updating the full records in its own time :) Will try
cacheable=True now.
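
Here's a rough sketch of what I have in mind (db.mytable, the
'process_record' task name and the scheduler instance are just
placeholders; I haven't run this against the real data yet):

_last_id = 0
_chunk = 1000
while True:
    # fetch the next chunk of ids only; cacheable=True keeps the Rows light
    rows = db(db.mytable.id > _last_id).select(
        db.mytable.id,
        orderby=db.mytable.id,
        limitby=(0, _chunk),
        cacheable=True)
    if not rows:
        break
    for row in rows:
        # queue a task so the scheduler updates the full record in its own time
        scheduler.queue_task('process_record', pvars=dict(record_id=row.id))
        _last_id = row.id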

I know I sound like a kid in a candy store, and I don't mean to be
patronizing, but I truly love the scheduler... Reminds me of working with
FileNet Visual Workflow 8 years ago :)

On Fri, Oct 19, 2012 at 2:51 PM, Niphlod <niph...@gmail.com> wrote:

> the set returned by select is always a full result set, because it is
> extracted and parsed altogether.
> Slicing with limitby is good (and recommended, if possible). Just remember
> that you can save a lot of time and memory by passing cacheable=True to the
> select() function. References will be "stripped" from the normal Rows
> records, as will other convenience functions like update_record and
> delete_record. If you can live without those, for large queries I'd go with
> select(cacheable=True) all the time.
>
>
> On Friday, October 19, 2012 8:17:38 PM UTC+2, Adi wrote:
>>
>> Thank you Vasile for your quick response. This will be perfect.
>>
>> On Friday, October 19, 2012 2:02:41 PM UTC-4, Vasile Ermicioi wrote:
>>>
>>> _last_id = 0
>>> _items_per_page = 1000
>>> for row in db(db.table.id > _last_id).select(limitby=(0, _items_per_page),
>>>                                              orderby=db.table.id):
>>>     # do something
>>>     _last_id = row.id
>>>
>
>
>
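
One thing worth spelling out from Niphlod's note above: with
cacheable=True the rows come back without the per-record helpers, so any
writes have to go through an explicit update. A minimal sketch (db.mytable
and the status field are just placeholders):

rows = db(db.mytable.id > 0).select(cacheable=True)
for row in rows:
    # row.update_record / row.delete_record are not attached here,
    # so write back with a plain update on the table instead
    db(db.mytable.id == row.id).update(status='done')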
