Hi all, see below
> That's not going to scale well to large tables/results.
> I'd go with caching the rowid's from the query somewhere (db or file).
>
> Tim.
>
> On Fri, Nov 30, 2001 at 10:52:34AM -0800, Brent Cowgill wrote:
> > I solved this problem just the other day after searching google. I
> > needed to have an order by and where clause preserved. Here's what I
> > came up with. Note, I haven't done any stress testing on this query
> > yet. If anyone comes up with an improvement I'd be interested in
> > getting a CC.
> >
> > SELECT
> >     fields
> > FROM
> >     table
> > WHERE
> >     rowid IN (
> >         SELECT
> >             rowid
> >         FROM
> >             (SELECT rowid
> >              FROM table
> >              WHERE criteria...
> >              ORDER BY order...)
> >         WHERE rownum <= last_row_num_wanted
> >         MINUS
> >         SELECT
> >             rowid
> >         FROM
> >             (SELECT rowid
> >              FROM table
> >              WHERE criteria...
> >              ORDER BY order...)
> >         WHERE rownum < first_row_num_wanted
> >     )
> > ORDER BY order...
> >
> > Siiskonen, Pekka wrote:
> >
> > > What is Oracle's equivalent to mySQL's "limit"?
> > >
> > > Sorry if considered off-topic! (I use DBD::Oracle and DBD::mysql)
> > >
> > > Pekka Siiskonen
> > >
> > > > On Fri, 30 Nov 2001 10:34:11 +0000, Tim Bunce wrote:
> > > >
> > > > > On Thu, Nov 29, 2001 at 03:08:32PM -0500, Charles Day wrote:
> > > > >
> > > > > > Might be slightly OT, sorry. I've been researching this
> > > > > > for some time now. Exactly like search engines. Past ideas
> > > > > > were to create a temporary db or use cursors. Anybody have
> > > > > > ideas?
> > > > >
> > > > > http://search.cpan.org/doc/TIMB/DBI_Talk5_2001/sld059.htm
> > > >
> > > > You've got some pretty good ideas up there... My favourite is
> > > > "Select and cache only the row keys, fetch full rows as needed".
> > > >
> > > > But you might not need the cache, then. When combined with the
> > > > first idea, "re-execute query and discard unneeded", this can
> > > > become:
> > > >
> > > >     Re-execute the full query, only fetching the row id's.
> > > >     Count and discard (this can be done in a perl array),
> > > >     and then fetch the full rows only for those rows you want.
> > > >
> > > > --
> > > > Bart.
>
> --
> Regards,
> Brent

What about a combined approach: cache the first, say, 20 result pages as
static temp HTML files, and re-perform the query from cached row ids only
if the user clicks a "more..." link. This way it should be possible to
avoid repeated queries based on identical search parameters and make the
whole thing fast.

Bodo
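For anyone wanting to see the "cache only the row keys, fetch full rows as needed" idea from the thread in running form, here is a minimal sketch. It is in Python with an in-memory sqlite3 database standing in for Oracle/DBD::Oracle, purely for illustration; the table name, columns, and page size are made up, and a real application would keep the cached key list in a session store, file, or db as Tim suggests rather than in a process-local variable.

```python
import sqlite3

# Stand-in for the real database: a small table of ordered items.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)",
                 [(i, "item%d" % i) for i in range(1, 101)])

# Step 1: run the full query once, fetching ONLY the key column, and
# cache the ordered keys (the cheap part of the query).
cached_ids = [row[0] for row in
              conn.execute("SELECT id FROM items ORDER BY name")]

def fetch_page(page, per_page=20):
    """Fetch full rows only for the keys on the requested page."""
    keys = cached_ids[page * per_page:(page + 1) * per_page]
    placeholders = ",".join("?" * len(keys))
    rows = conn.execute(
        "SELECT id, name FROM items WHERE id IN (%s)" % placeholders,
        keys).fetchall()
    # IN (...) does not preserve order, so restore the cached ordering.
    by_id = {r[0]: r for r in rows}
    return [by_id[k] for k in keys]
```

The point of the design is that the expensive criteria/ordering work runs once; each subsequent page costs only one small `IN`-list lookup on the primary key, e.g. `fetch_page(1)` for rows 21-40 of the cached ordering.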