Thanks.
One of the trade-offs we are considering, along the lines of what you
mentioned, is whether to cache the Hits. The benefit is that we'd avoid
re-running the search when requests for hits past the first page do come
in; the cost is that we'd have to keep around all the hits, most of which
are unlikely ever to be requested.
So far, we have been leaning toward a stateless approach: we re-run the
search with no prior knowledge of whether it has run before, and serve up
the hits at the requested offset. I could see caching a fixed-size list of
search results, and only re-running the search if a result set has been
evicted from that cache.
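As a rough sketch, the stateless approach boils down to re-running the search on every page request and slicing out hits [k, k+n). The `runSearch` method below is a hypothetical stand-in for `Searcher.search` returning a Hits object; names and the result shape are assumptions for illustration, not the real Lucene API.

```java
import java.util.ArrayList;
import java.util.List;

public class StatelessPager {
    // Hypothetical stand-in for re-running the search; in real code this
    // would be Searcher.search(query) returning a Hits collection.
    static List<String> runSearch() {
        List<String> hits = new ArrayList<>();
        for (int i = 0; i < 25; i++) hits.add("doc" + i);
        return hits;
    }

    // Serve n hits starting at offset k, re-running the search each time;
    // no state is kept between page requests.
    static List<String> page(int k, int n) {
        List<String> hits = runSearch();
        if (k >= hits.size()) return new ArrayList<>();
        int end = Math.min(k + n, hits.size());
        return new ArrayList<>(hits.subList(k, end));
    }
}
```

The last page may come back shorter than n, and an offset past the end yields an empty page rather than an exception.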
Jean 

-----Original Message-----
From: karl wettin [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 27, 2006 12:09 PM
To: java-user@lucene.apache.org
Subject: Re: Efficiently paginating results.


27 apr 2006 kl. 20.44 skrev Jean Sini:
> Our application presents search results in a paginated form.
>
> We were unable to find Searcher methods that would return, say, 'n'
> (typically 10) hits after a start offset 'k'.
>
> So we're currently using the Hits collection returned by
> Searcher.search, and its Hits.doc(i) method to get the ith hit, with
> i between k and k+n. Is that the most efficient way to do that? Is
> there a better way (e.g. some form of Filter on the query itself)?

You probably want to do it just the way you do.

But cache the Hits somehow. Perhaps in a session, perhaps globally
in /your/ searcher. Perhaps the session points at the global cache,
so that the results a session sees don't change when you flush the
cache on an index update.
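A bounded global cache along these lines could be sketched with a plain access-order LinkedHashMap, evicting the least recently used query's results and re-running the search only on a miss. The search function is again a stand-in for a real Searcher call; the class and names are hypothetical.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class ResultCache {
    private final int capacity;
    private final Function<String, List<String>> search; // stand-in for Searcher.search
    private final Map<String, List<String>> cache;

    ResultCache(int capacity, Function<String, List<String>> search) {
        this.capacity = capacity;
        this.search = search;
        // access-order LinkedHashMap: removeEldestEntry evicts the
        // least recently used query once the cache is over capacity
        this.cache = new LinkedHashMap<String, List<String>>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, List<String>> e) {
                return size() > ResultCache.this.capacity;
            }
        };
    }

    // Return hits [k, k+n) for the query, re-running the search only
    // when its results are missing from (or were evicted from) the cache.
    synchronized List<String> page(String query, int k, int n) {
        List<String> hits = cache.computeIfAbsent(query, search);
        if (k >= hits.size()) return new ArrayList<>();
        int end = Math.min(k + n, hits.size());
        return new ArrayList<>(hits.subList(k, end));
    }
}
```

Flushing the whole cache on index update then just means clearing the map; any session holding a direct reference to a Hits list would keep seeing its old snapshot until it asks for a page again.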

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


