You could start by just selecting and caching the ROWIDs.

A more sophisticated, and portable, approach is to query the primary
key columns, fetch the first page's worth of rows, and record the
highest pk value seen.  For the next page start just after that pk
value.  All queries have an ORDER BY that matches the pk columns, so
the DB can read the rows efficiently through the index.
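Here is a pure-Perl sketch of that keyset idea.  The in-memory @table and
the fetch_page name are made up for illustration; against a real database
the grep/sort inside fetch_page would instead be a DBI statement along the
lines of "SELECT id, name FROM t WHERE id > ? ORDER BY id", fetching at
most a page's worth of rows and then finishing the handle:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simulated table: 25 rows with a single-column primary key 'id'.
my @table = map { { id => $_, name => "row$_" } } 1 .. 25;

my $page_size = 10;

# Fetch one page: rows whose pk is greater than $last_pk, in pk order.
# (In real code this is the WHERE id > ? / ORDER BY id query.)
sub fetch_page {
    my ($last_pk) = @_;
    my @rows = grep { $_->{id} > $last_pk }
               sort { $a->{id} <=> $b->{id} } @table;
    return @rows > $page_size ? @rows[ 0 .. $page_size - 1 ] : @rows;
}

my $last_pk = 0;    # start before the smallest key
while ( my @page = fetch_page($last_pk) ) {
    printf "page of %d rows: pk %d .. %d\n",
        scalar @page, $page[0]{id}, $page[-1]{id};
    $last_pk = $page[-1]{id};    # remember the highest pk seen
}
# prints three pages: 1..10, 11..20, 21..25
```

Only one page of rows is ever held in memory, and because the starting
point is a key value rather than a row offset, each page costs one index
range scan instead of a re-scan from the beginning.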

I'm obviously being a little simplistic here - there are many design
and implementation issues to consider. It isn't easy to do well. And,
for the record, I've never had the need so I'm just spouting unproven
theory :)

Tim.

On Tue, Aug 07, 2001 at 08:36:02AM -0400, Stan Brown wrote:
> I'm writing a generic table editor in perltk for some operations types to
> use.
> 
> It's going pretty well, but I have run into a problem that I can't figure
> out how to solve. I'm running against Oracle 7 on HP-UX, and what I have
> works pretty well for small (< 10,000) row tables. But it does not work
> (for reasons that will become obvious in a moment) on tables with, say, a
> million rows.
> 
> Here is what I started out with:
> 
> $MW->Busy;
> while ( @extracted_data = $sths->fetchrow_array ) {
>     my $i = 0;
>     foreach $col (keys %cols) {
>         # print "Got record for column $col record $record_qty\n";
>         $records[$record_qty]{$col} = $extracted_data[$i];
>         ++$i;
>     }
>     ++$record_qty;
> }
> $MW->Unbusy();
> 
>  Then the operator can move around and display/edit each record by setting
>  the fields displayed to the appropriate row of the @records array.
> 
>  The problem with this is obvious on large data sets: I am making a copy
>  of the data, and I run out of virtual memory. So I decided to change to
>  fetchall_arrayref. However, I just ran a test with this, and even though
>  it did manage to acquire all the rows of data for a 750,000-row table, the
>  resident set size of the perl task exceeded 1GB. It also took several
>  minutes to get all the records.
> 
>  Presently the only way that I have provided to move around in the fetch's
>  result record set is "next" and "previous" buttons. I probably need to add
>  the ability to go directly to record n of the result record set also.
> 
>  So, here is the question. Given that there is (as far as I know) no way
>  of getting just the desired record from the result set, how can I rework
>  this to be:
> 
>  1. Faster
>  2. More robust (i.e., not likely to exceed system memory resources).
> 
>  Suggestions?
> 
> 
> 
> -- 
> Stan Brown     [EMAIL PROTECTED]                                    843-745-3154
> Charleston SC.
> -- 
> Windows 98: n.
>       useless extension to a minor patch release for 32-bit extensions and
>       a graphical shell for a 16-bit patch to an 8-bit operating system
>       originally coded for a 4-bit microprocessor, written by a 2-bit 
>       company that can't stand for 1 bit of competition.
> -
> (c) 2000 Stan Brown.  Redistribution via the Microsoft Network is prohibited.
