Hi all,

I am working with a Document-Partitioned Index table whose index sections 
are accessed using ranges over the indexed properties (e.g. property A ∈ 
[500,000, 600,000], property B ∈ [0.1, 0.4], etc.). The iterator that 
handles this table works in two phases: first, it computes the complete 
result set from the index section of a single bin (performing intersections 
and unions across the different properties); second, it uses the ids 
retrieved from the index to walk over the data section of that same bin.
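
Conceptually, what the iterator does per bin is something like the sketch 
below (all class and method names here are made up for illustration, they 
are not the real API):

    // Simplified sketch of the two-phase per-bin scan described above.
    // IndexSection, DataSection and the method names are hypothetical.
    import java.util.Set;
    import java.util.TreeSet;

    interface IndexSection {
        // ids whose value for the given property falls inside [lo, hi]
        Set<Long> idsInRange(String property, double lo, double hi);
    }

    interface DataSection {
        Object read(long id);
    }

    class BinScan {
        // Phase 1: evaluate the range predicates against the bin's index
        // section and combine the per-property id sets (intersection
        // across properties, union within a property's ranges).
        static Set<Long> computeIndexResult(IndexSection index) {
            Set<Long> result =
                new TreeSet<>(index.idsInRange("A", 500_000, 600_000));
            result.retainAll(index.idsInRange("B", 0.1, 0.4)); // intersect with B
            return result;
        }

        // Phase 2: use the ids retrieved from the index to read the
        // matching entries from the bin's data section.
        static void scan(IndexSection index, DataSection data) {
            for (long id : computeIndexResult(index)) {
                Object entry = data.read(id);
                // ... hand 'entry' back to the caller ...
            }
        }
    }
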
This iterator has proved to incur a significant performance penalty whenever 
the amount of data retrieved from the index is orders of magnitude larger 
than table_scan_max_memory, i.e. the iterator is torn down tens of times 
for each bin, and the index result set is recomputed from scratch every 
time. Since there is no explicit way to save the state of an iterator, is 
there any other mechanism or approach I could use to avoid re-computing the 
index result set after each teardown?
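
For concreteness, something along these lines is what I am hoping to be 
able to do; the cache class below is purely hypothetical, not an existing 
API:

    // Hypothetical: keep the computed id list around across teardowns and
    // resume from an offset instead of re-running the index intersection.
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    class CachedIndexResults {
        // cache key = bin id + a canonical form of the range predicates
        private final Map<String, List<Long>> cache = new ConcurrentHashMap<>();

        List<Long> getOrCompute(String binId, String predicateKey,
                                Supplier<List<Long>> compute) {
            return cache.computeIfAbsent(binId + "|" + predicateKey,
                                         k -> compute.get());
        }

        // after a teardown, a new iterator instance could pick up at
        // 'offset' in the cached id list instead of recomputing it
        List<Long> resumeFrom(String binId, String predicateKey, int offset) {
            List<Long> ids = cache.get(binId + "|" + predicateKey);
            return ids == null ? null : ids.subList(offset, ids.size());
        }
    }
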
Thanks.


Regards,
Max

