Daniel Naber wrote:
After fixing this I can reproduce the problem with a local index that
contains about 220.000 documents (700MB). Fetching the first document
takes for example 30ms, fetching the last one takes >100ms. Of course I
tested this with a query that returns many results (about 50.000).
Actually it happens even with the default sorting, no
Stanislav Jordanov wrote:
startTs = System.currentTimeMillis();
dummyMethod(hits.doc(nHits - 1));  // access the last document
stopTs = System.currentTimeMillis();
System.out.println("Last doc accessed in " + (stopTs - startTs));
// The test source code (second attempt).
// Just in case the .txt attachment does not pass through
// I am pasting the code here:
package index_test;
import org.apache.lucene.search.*;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.
On Feb 28, 2005, at 10:39 AM, Stanislav Jordanov wrote:
> What did you do in your private investigation?
> 1. empirical tests with an index of nearly 75,000 docs (I am attaching
> the test source)
Only certain (.txt?) attachments are allowed to come through on the
mailing list.
> Sorted by descendin
> What did you do in your private investigation?
1. empirical tests with an index of nearly 75,000 docs (I am attaching
the test source)
2. reviewing and tracing the source code of Lucene
(I do not claim I have gained a deep understanding of it ;-)
> Sorted by descending relevance (the defa
Now comes the problem: it is a product requirement that the client is
allowed to quickly access (by scrolling) a random page of the result set.
Put in different words, the app must quickly (in less than a second) respond
to requests like: "Give me the results from No 567100 to No 567200"
(remember the results are sorted, thus ordered).
I took a look at Lucene's internals which only left m
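The requirement above amounts to slicing deep into the result set. A minimal sketch against the Lucene 1.4-era `Hits` API (the index path, field names, and query string here are placeholders, not from the thread):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class PagedSearch {
    public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("/path/to/index"); // hypothetical path
        Query query = QueryParser.parse("some terms", "contents", new StandardAnalyzer());
        Hits hits = searcher.search(query);

        int from = 567100, to = 567200;          // 1-based, inclusive, as in the request
        int last = Math.min(to, hits.length());
        // Hits fetches and caches documents lazily; asking for a document far
        // down the list on a fresh search is exactly the slow deep-access
        // case being measured in this thread.
        for (int i = from - 1; i < last; i++) {
            Document doc = hits.doc(i);
            System.out.println((i + 1) + ": " + doc.get("title"));
        }
        searcher.close();
    }
}
```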
markharw00d wrote:
Hi Pierre,
Here's the response I gave the last time this question was raised:
The highlighter uses a number of "pluggable" services, one of which is the
choice of "Fragmenter" implementation. This interface is for classes which
decide the boundaries where to cut the original text into snippets. The
default implementation used simply breaks up text into ev
Thanks for the reply, Daniel.
But is there anything that can be done to avoid this happening?
Regards
Daniel Naber wrote:
On Tuesday 15 February 2005 09:39, Pierre VANNIER wrote:
String fragment = highlighter.getBestFragment(stream, introduction);
The highlighter breaks up text into
On Tuesday 15 February 2005 09:39, Pierre VANNIER wrote:
> String fragment = highlighter.getBestFragment(stream,
> introduction);
The highlighter breaks up text into same-size chunks (100 characters by
default). If the matching term now appears just at the end or at the start of
such a
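Daniel's description maps onto the sandbox Highlighter API that Pierre's snippet already uses. A hedged sketch (the field name, query, and fragment size are my choices, not from the thread): enlarging the fragments with `SimpleFragmenter` makes it less likely that a match sits right at a fragment boundary.

```java
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleFragmenter;

public class FragmentDemo {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer();
        Query query = QueryParser.parse("lucene", "body", analyzer);
        String introduction = "... the text to highlight ...";

        Highlighter highlighter = new Highlighter(new QueryScorer(query));
        // Fragments default to 100 characters; a larger size keeps more
        // context around a term that falls near a chunk boundary.
        highlighter.setTextFragmenter(new SimpleFragmenter(250));

        TokenStream stream = analyzer.tokenStream("body", new StringReader(introduction));
        String fragment = highlighter.getBestFragment(stream, introduction);
        System.out.println(fragment);
    }
}
```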
Hi all,
I'm quite a newbie with Lucene, but I bought "Lucene in Action" and I'm
trying to customize a few examples taken from there.
I have this sample code in a JSP (bad JSP, 'cause I'm also a JSP newbie :-)):
Here's the code
Hi,
> From: mahaveer jain [mailto:[EMAIL PROTECTED]
>
> Thanks Pasha for the link, but right now I am interested in
> KWIC and not highlighting part.. can you help we start of with KWIC.
Does KWIC mean "Keyword In Context"? If so, the Highlighter package does it.
See the highlighter's sample.
Thanks Pasha for the link, but right now I am interested in KWIC and not
the highlighting part. Can you help me start off with KWIC?
Thanks
Pasha Bizhan <[EMAIL PROTECTED]> wrote:
Hi,
> From: mahaveer jain [mailto:[EMAIL PROTECTED]
> I am using lucene to index and search my app. Till date I am
Hi,
> From: mahaveer jain [mailto:[EMAIL PROTECTED]
> I am using lucene to index and search my app. Till date I am
> just showing file name or title based on my application. We
> want to show, pharse that contain the keyword searched.
> Has anybody tried this ? Can someone help me start this
Hi All,
I am using Lucene to index and search my app. To
date I am just showing the file name or title, based on my
application. We want to show the phrase that contains the
keyword searched.
Has anybody tried this? Can someone help me start
this?
Thanks
Mahaveer
Another question for the day:
How do I make sure that the results shown are only the ones containing the
keywords specified?
e.g. the result for the query Red AND HAT AND Linux
should be documents which have all three keywords, and not
documents that only have one or two of them.
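The same all-terms-required query can also be built programmatically. A sketch against the 1.4-era `BooleanQuery` API (the field name "contents" is an assumption):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class AllTermsRequired {
    public static void main(String[] args) {
        // In the 1.4-era API the two booleans mean (required, prohibited);
        // marking every clause required rejects documents matching only some terms.
        BooleanQuery query = new BooleanQuery();
        query.add(new TermQuery(new Term("contents", "red")), true, false);
        query.add(new TermQuery(new Term("contents", "hat")), true, false);
        query.add(new TermQuery(new Term("contents", "linux")), true, false);
        // Renders roughly as: +red +hat +linux
        System.out.println(query.toString("contents"));
    }
}
```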
On Feb 1, 2005, at 4:21 AM, Jingkang Zhang wrote:
Lucene support sort by score or docID.Now I want to
sort search results by score and docID or by two
fields at one time, like sql
command " order by score,docID" , how can I do it?
Sorting by multiple fields (including score and docum
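The multi-field sort Erik describes can be sketched with the 1.4-era `Sort` API (assumes an already-open `IndexSearcher`; this is an illustration, not the poster's code):

```java
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;

public class MultiSort {
    // Equivalent in spirit to SQL's "order by score, docID":
    // primary key is relevance, ties broken by document number.
    public static Hits searchByScoreThenDoc(IndexSearcher searcher, Query query)
            throws java.io.IOException {
        Sort sort = new Sort(new SortField[] {
            SortField.FIELD_SCORE,  // descending relevance
            SortField.FIELD_DOC     // ascending index order
        });
        return searcher.search(query, sort);
    }
}
```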
Lucene supports sorting by score or docID. Now I want to
sort search results by score and docID, or by two
fields at one time, like the SQL
command "order by score, docID". How can I do it?
Storing in the index has some performance benefits in the CVS
version of Lucene, as you can store term position offset information and
avoid having to re-analyze for highlighting.
Speaking of which, is there a planned release date for a version that
contains this feature?
To: "Lucene"
Sent: Friday, January 28, 2005 5:08 PM
Subject: Search results excerpt similar to Google
Hi
Is it hard to implement a function that displays the search results
excerpts similar to Google?
Is it just string manipulations or there are some logic
Hi
Is it hard to implement a function that displays search result
excerpts similar to Google's?
Is it just string manipulation, or is there some logic behind it? I
like their excerpts.
Thanks
Hi ALL,
I am using a java class to query an index and return sorted results.
The "author" and "title" fields are Indexed, Tokenized and Stored.
They are added in the following way:
doc.add(Field.Text("title"...
doc.add(Field.Text("author"...
...
On Mon, 2005-01-10 at 10:12 +0100, Daniel Cortes wrote:
> My question is: what do I do when showing results if documents don't have a
> summary? Do I show the first lines of the documents? Perhaps it is a silly
> question, but until now I don't have a solution.
Take a look at the highlighter
keys  Field.UnStored (tokenized, not stored, indexed)
date  Field.UnIndexed (not tokenized, stored, not indexed)
body  Field.Text (tokenized, not stored, indexed)
I want to show the results like
In web search, link information helps greatly. (This was Google's big
discovery.) There are lots more links that point to
http://www.slashdot.org/ than to http://www.slashdot.org/xxx/yyy, and
many (if not most) of these links have the term "slashdot", while links
to http://www.slashdot.org/xx
Perhaps look at Nutch to see whether (and if so, how) it deals with
this situation.
Determining the root seems to be a pretty tricky endeavor. Each of
these could be a root:
http://www.ehatchersolutions.com/JavaDevWithAnt
http://www.example.com/~username
And certainly lots of o
I do this to some extent... currently I apply a boost if, as best I
can tell, it's a root page. But I am mostly asking how to determine root
pages... content obviously isn't easy to use... the url is the main
key... but that can be tricky as well... Basically the pages are from
a crawl.. so their urls
My Lucene implementation works great; it's basically an index of many
web crawls. The main thing my users complain about is that, say, a search for
"slashdot" will return
http://www.slashdot.org/soem_dir/somepage.asp as the top result
because the factors I have scoring it determine it as so... but
obvi
Thanks a lot for the solution / explanation. Saved the day Erik.
Summary
Observation: Using a wildcarded search term with QueryParser and the
WhitespaceAnalyzer returned no hits when hits were expected.
Reason: This was caused by the default behaviour of QueryParser to lower
case wildcar
On Nov 17, 2004, at 7:44 AM, [EMAIL PROTECTED] wrote:
I then try a search using the term
ResponseHelper.writeNoCachingHeaders\(*\);
now I'm expecting this to be a wider search term and it should find at
least two, possibly more docs?
the query parser produces the query
+contents:responsehelper.writ
Try using a TermQuery instead of QueryParser to see if you get the
results you expect. Exact case matters.
Also, when troubleshooting issues with QueryParser, it is helpful to
see what the actual Query returned is - try displaying its toString
output.
Erik
On Nov 16, 2004, at 6:25
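Erik's two suggestions can be sketched as follows (Lucene 1.4-era API; the field name and query string echo the thread but are illustrative):

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class DebugQuery {
    public static void main(String[] args) throws Exception {
        // 1) See what QueryParser actually built from the input:
        Query parsed = QueryParser.parse(
            "ResponseHelper.writeNoCachingHeaders\\(*\\);", "contents",
            new WhitespaceAnalyzer());
        System.out.println(parsed.toString("contents"));

        // 2) Bypass the parser entirely; a TermQuery matches the exact
        //    indexed term, case and punctuation included.
        Query direct = new TermQuery(
            new Term("contents", "ResponseHelper.writeNoCachingHeaders(*);"));
        System.out.println(direct.toString("contents"));
    }
}
```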
Hi,
We have indexed a set of web files (jsp, js, xslt, java properties and
html) using the Lucene WhitespaceAnalyzer.
The purpose is to allow developers to find where code / functions are used
and defined across a large and disparate
content management repository. Hopefully to aid code re-use.
Rupinder Singh Mazara wrote:
I want to be able to draw a venn diagramm or something similar that shows
how results of different queries overlapp.
the idea is to draw a diagramm using applets or gif's or the like
which shows how different results match,
This is *exactly* what w
hi all
I want to be able to draw a Venn diagram or something similar that shows
how the results of different queries overlap.
The idea is to draw a diagram using applets or gifs or the like
which shows how different results match.
Grateful for any help in this m
ry, I determine the best small set of nodes from each taxonomy
to present to the user as drill down options, and provide the counts
regarding how many results fall under each of these nodes. At present I
only have about 25,000 indexed objects and usually no more than 1,000
results from the initial
It depends on how many results they're looking through, here are two
scenarios I see:
1] If you don't have that many records you can fetch all the results and
then do a post-parsing step to determine totals
2] If you have a lot of entries in each category and you're worried
I'd like to implement a search across several types of "entities",
let's say, classes, professors, and departments. I want the user to
be able to enter a simple, single query and not have to specify what
they're looking for. Then I want the search results to be som
Sent: Thursday, October 14, 2004 11:22 AM
> To: [EMAIL PROTECTED]
> Subject: RE: Filtering Results?
>
> Thanks Chuck.
> Meanwhile searching on net and found this link
> http://wiki.apache.org/jakarta-lucene/SearchNumericalFields
> Thanks again
>
>
> >From: "
<[EMAIL PROTECTED]>
Subject: RE: Filtering Results?
Date: Thu, 14 Oct 2004 09:55:07 -0700
Sam,
You can pick any encoding such that lexicographic order (alphabetic
order) is consistent with the numeric order you want. E.g., if a single
field can contain positive or negative integers or floats, the
sam s [mailto:[EMAIL PROTECTED]
> Sent: Thursday, October 14, 2004 6:40 AM
> To: [EMAIL PROTECTED]
> Subject: RE: Filtering Results?
>
> Thanks Chuck.
>
> What is the workaround for filtering (preferably using RangeQuery)
> following?
> 1. Float values. Do I have to pad those
From: "Lucene Users List" <[EMAIL PROTECTED]>
To: "Lucene Users List" <[EMAIL PROTECTED]>
Subject: RE: Filtering Results?
Date: Wed, 13 Oct 2004 21:49:30 -0700
RangeQuery is a good approach. Put fields on your documents like age.
The only tricky thing is that the comparisons are al
Use {} instead of [] for < queries.
Good luck,
Chuck
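Chuck's point is that the encoding only has to make lexicographic order agree with numeric order. A minimal sketch for non-negative integers (negatives and floats need a more elaborate scheme, as the thread notes; the class name and width are my choices):

```java
public class NumberPadder {
    private static final int WIDTH = 10;

    // Zero-pad so that String order == numeric order (non-negative ints only).
    public static String pad(long n) {
        if (n < 0) throw new IllegalArgumentException("handle negatives separately");
        StringBuffer buf = new StringBuffer(Long.toString(n));
        while (buf.length() < WIDTH) buf.insert(0, '0');
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println(pad(42));   // 0000000042
        System.out.println(pad(123));  // 0000000123
        // As raw strings "42" sorts after "123", but padded order is numeric:
        System.out.println(pad(42).compareTo(pad(123)) < 0); // true
    }
}
```

Index the padded string in the field you range-query, and `[0000000010 TO 0000000099]` behaves numerically.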
> -Original Message-
> From: sam s [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, October 13, 2004 12:55 PM
> To: [EMAIL PROTECTED]
> Subject: Filtering Results?
>
> Hi,
> I want to do filtering on matched
EMAIL PROTECTED]>
Subject: Re: Clustering lucene's results
Date: Thu, 07 Oct 2004 10:39:26 +0200
Hi William,
Ok, here is some demo code I've put together that shows how you can
achieve clustering of Lucene's results. I hope this will get you
started on your projects. If you ha
Thanks Dawid ! :)
From: Dawid Weiss <[EMAIL PROTECTED]>
Reply-To: "Lucene Users List" <[EMAIL PROTECTED]>
To: Lucene Users List <[EMAIL PROTECTED]>
Subject: Re: Clustering lucene's results
Date: Thu, 07 Oct 2004 10:39:26 +0200
Hi William,
Ok, here is some
Nope, because the example I showed is based on the "local interfaces"
pipeline and output-xsltrenderer is for remote components only.
Anyway, I don't think it makes much sense -- if you need xslt badly,
just modify the source code to output the results as XML and put an xslt
f
That's great, thanks Dawid.
Just a question: how can I modify your code in order to use the
carrot2-output-xsltrenderer to output the clustering results in an HTML page?
Can you provide an example?
Thanks
Dawid Weiss wrote:
Hi William,
Ok, here is some demo code I've put together that
Hi William,
Ok, here is some demo code I've put together that shows how you can
achieve clustering of Lucene's results. I hope this will get you started
on your projects. If you have questions, please don't hesitate to ask --
cross posts to carrot2-developers would be a good idea
Ulrich Mayring writes:
> Daniel Naber wrote:
> >
> > AND always refers to the terms on both sides, +/- only refers to the term
> > on the right. So "a AND b" -> "+a +b" is correct.
>
> *slap forehead* - you're right. Wasn't there something about operator
> precedence way back when ;-)
>
Ye
Daniel Naber wrote:
AND always refers to the terms on both sides, +/- only refers to the term
on the right. So "a AND b" -> "+a +b" is correct.
*slap forehead* - you're right. Wasn't there something about operator
precedence way back when ;-)
Anyway, thanks to my stupidity and the help on th
On Thursday 23 September 2004 17:53, Ulrich Mayring wrote:
> field1:foo field2:bar AND field3:true
>
> turns into
>
> field1:foo +field2:bar +field3:true
AND always refers to the terms on both sides, +/- only refers to the term
on the right. So "a AND b" -> "+a +b" is correct.
Regards
Daniel
yeah... I know there have to be demos... I tried to be lazy, you know :)
Anyway, as I told Andrzej -- I'll take a look at it (and with a
pleasure) after I come back. i don't think the delay will matter much.
And if it does, ask Andrzej -- he has excellent experience with both
projects -- he's ju
Hi Dawid,
The demos (under /src/demo) are very good. They have the basic usage
scenario.
Thanks Andrzej.
William.
Dawid Weiss wrote:
Hi William,
No, I don't have examples because I never used Lucene directly. If you
provide me with a sample index and an API that executes a query on this
index
Hi Andrzej :)
Yep, ok, I'll take a look at it. After I come back from abroad (next
week). I just wanted to save myself some time and have an already
written code that fetches the information we need for clustering; you
know what I mean, I'm sure. But I'll start from scratch when I get back.
D.
Dawid Weiss wrote:
Hi William,
No, I don't have examples because I never used Lucene directly. If you
provide me with a sample index and an API that executes a query on this
index (I need document titles, summaries, or snippets and an anchor
(identifier), can be an URL).
Hi Dawid :-)
I believe t
On Sep 23, 2004, at 11:00 AM, Ulrich Mayring wrote:
Erik Hatcher wrote:
Look at AnalysisDemo referred to here:
http://wiki.apache.org/jakarta-lucene/AnalysisParalysis
Keep in mind that phrase queries do not support wildcards - they are
analyzed and any wildcard characters are likely stripped a
Ulrich Mayring wrote:
If the user searches for "007001 handle", the MultiFieldQueryParser,
which searches in the fields "title" and "contents", changes that query to:
(title:007001 +title:handl) (contents:007001 +contents:handl)
Ok, I cleared this up, there was some invisible magic going on in th
Erik Hatcher wrote:
Look at AnalysisDemo referred to here:
http://wiki.apache.org/jakarta-lucene/AnalysisParalysis
Keep in mind that phrase queries do not support wildcards - they are
analyzed and any wildcard characters are likely stripped and cause
tokens to split.
Ok, I did all that and id
From: Dawid Weiss <[EMAIL PROTECTED]>
Reply-To: "Lucene Users List" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Clustering lucene's results
Date: Thu, 23 Sep 2004 13:36:03 +0200
Dear all,
I saw a post about an attempt to integrate Carrot2 with Lucene. It was
a while a
Hi Dawid,
I would like to use Carrot2 with lucene. Do you have examples ?
Thanks a lot,
William.
From: Dawid Weiss <[EMAIL PROTECTED]>
Reply-To: "Lucene Users List" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Clustering lucene's results
Date: Thu, 23 Sep 2004 13:
On Sep 23, 2004, at 5:49 AM, Morus Walter wrote:
Ulrich Mayring writes:
Will do, thank you very much. However, how do I get at the analyzed
form
of my terms?
Instantiate the analyzer, create a token stream feeding it your input,
loop over the tokens, and output the results.
Look at AnalysisDemo
Dear all,
I saw a post about an attempt to integrate Carrot2 with Lucene. It was a
while ago, so I'm curious if any outcome has been achieved.
Anyway, as the project coordinator I can offer my help with such
integration; if you're looking for some ready-to-use code then there is
a clustering pl
Ulrich Mayring writes:
>
> Will do, thank you very much. However, how do I get at the analyzed form
> of my terms?
>
Instantiate the analyzer, create a token stream feeding it your input,
loop over the tokens, and output the res
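The steps Morus lists can be sketched as (1.4-era analysis API; the field name and sample text are placeholders):

```java
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class ShowAnalysis {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer();
        // Feed the text through the same analyzer used at index time
        // to see exactly which terms end up in the index.
        TokenStream stream = analyzer.tokenStream(
            "contents", new StringReader("007001 Handle the Service"));
        for (Token t = stream.next(); t != null; t = stream.next()) {
            System.out.println(t.termText());
        }
        stream.close();
    }
}
```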
Morus Walter wrote:
Your number/handle samples look ok to me if the default operator is AND.
But it's OR ;-)
Using AND explicitly I get different results and using OR explicitly I
get the same results as documented.
Note that wildcard expressions are not analyzed so if service is
stemm
The above website is running Lucene 1.3rc3, but I was able to reproduce
this locally with 1.4.1. Here are my local results with "controlled
pseudo" documents, perhaps you can see a pattern:
searching for "00700*" gets two documents:
"007001 action" and "007002 handle"
searching for "handle"
Can anyone help me with code to get the topterms of a given field for a
query resultset?
Here is code modified from Luke to get the topterms for a field:
public TermInfo[] mostCommonTerms( String fieldName, int numberOfTerms )
{
//make sure min will get a positive number
i
public class LuceneSearchResults extends HitCollector implements Serializable,
SearchResults {
/*
* This class collects all non-zero search results
* and presents them in descending order. This class
* should be cached when possible to avoid having to re-run
* the s
AM
To: Lucene Users List
Subject: Re: displaying 'pages' of search results...
The way we do it is: Get all the document ids, cache them and then get the
first 50, second 50 documents etc. We wrote a light weight paging api on top
of lucene. We call searcher.search(query, hitCollector); Our
Praveen
- Original Message -
From: "Chris Fraschetti" <[EMAIL PROTECTED]>
To: "Lucene Users List" <[EMAIL PROTECTED]>
Sent: Tuesday, September 21, 2004 3:33 PM
Subject: displaying 'pages' of search results...
I was wondering was the best w
On Tuesday 21 September 2004 21:33, Chris Fraschetti wrote:
> I was wondering was the best way was to go about returning say
> 1,000,000 results, divided up into say 50 element sections and then
> accessing them via the first 50, second 50, etc etc.
>
> Is there a way to keep the
d consider caching Hits and your
IndexSearcher to re-use when paging.
Erik
I was wondering was the best way was to go about returning say
1,000,000 results, divided up into say 50 element sections and then
accessing them via the first 50, second 50, etc etc.
Is there a way to keep the query around so that lucene doesn't need to
search again, or would the sear
Redirecting to lucene-user list (more appropriate).
1. You could use IndexReader's termDocs() method and iterate through
results, filtered by the desired field.
2. You can't. You could add a special and constant field and value to
_every_ Document in your index, and then you could ge
Write your own HitCollector and pass it to IndexSearcher.search().
Have a look at the javadocs of the org.apache.lucene.search package,
it's quite straightforward. The PriorityQueue from the
util package is useful to collect results. For every distinct score you could
store an int[] of document nrs in
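A minimal custom HitCollector along the lines suggested above (1.4-era API; pre-generics collections, class name mine). Note that `collect()` is called in document-id order, so the list comes back id-sorted; score-sorting would need the `PriorityQueue` approach the post mentions:

```java
import java.util.ArrayList;
import org.apache.lucene.search.HitCollector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class IdCollector extends HitCollector {
    private final ArrayList ids = new ArrayList();

    // Called once per matching document, in increasing doc-id order.
    public void collect(int doc, float score) {
        if (score > 0.0f) ids.add(new Integer(doc));  // keep non-zero hits only
    }

    public ArrayList getIds() { return ids; }

    public static ArrayList collectAll(IndexSearcher searcher, Query query)
            throws java.io.IOException {
        IdCollector collector = new IdCollector();
        searcher.search(query, collector);  // no Hits object, no normalization
        return collector.getIds();
    }
}
```

Caching the returned id list is what makes cheap "second 50, third 50" paging possible without re-running the query.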
hi there,
i browsed through the list and had some different searches but i do not
find, what i'm looking for.
i got an index which is generated by a bot, collecting websites. there
are sites like www.domain.de/article/1 and www.domain.de/article/1?page=1
these different urls have the same conten
See the explain functionality in the Javadocs and previous threads. You can ask
Lucene to explain why it got the results it did for a given hit.
>>> [EMAIL PROTECTED] 07/12/04 04:52PM >>>
I search the index on multiple fields. Could the search results also
tell me which field m
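A sketch of the explain approach (1.4-era API; assumes an open `IndexSearcher` and a parsed multi-field query; the 10-hit cap is my choice):

```java
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class WhyDidItMatch {
    public static void printExplanations(IndexSearcher searcher, Query query)
            throws java.io.IOException {
        Hits hits = searcher.search(query);
        for (int i = 0; i < hits.length() && i < 10; i++) {
            // The explanation tree breaks the score down per field:term
            // clause, so you can read off which field(s) produced the match.
            Explanation exp = searcher.explain(query, hits.id(i));
            System.out.println(exp.toString());
        }
    }
}
```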
I search the index on multiple fields. Could the search results also
tell me which field matched so that the document was selected? From what
I can tell, only the document number and a score are returned, is there
a way to also find out what was the field(s) of the document matched the
query
Do you know:
http://websom.hut.fi/websom/comp.ai.neural-nets-new/html/root.html ?
Interesting - is there any code avail to draw the maps?
The algorithm is described here;
http://www.cis.hut.fi/research/som-research/book/
A short summary and some sample code is available here:
http://davis.wpi.edu/
Dave,
cool stuff, think about contributing that to Nutch.. ;-)!
Do you know:
http://websom.hut.fi/websom/comp.ai.neural-nets-new/html/root.html ?
Cheers,
Stefan
Am 01.07.2004 um 23:28 schrieb David Spencer:
Inspired by these guys who put results from Google into a treemap...
http
Inspired by these guys who put results from Google into a treemap...
http://google.hivegroup.com/
I did up my own version running against my index of OSS/javadoc trees.
This query for "thread pool" shows it off nicely:
http://www.searchmorph.com/kat/tsearch.jsp?s=thread%20pool&sid
.
Thanks a lot,
Polina
-Original Message-
From: Brisbart Franck [mailto:[EMAIL PROTECTED]
Sent: June 28, 2004 10:25 AM
To: Lucene Users List
Subject: Re: how to get all terms as search results (or "*" equivalent)
When you use wildcards like that, the parser builds 1 query for
When you use wildcards like that, the parser builds 1 query for each
term matching the wildcarded term. With this approach, it should have
created n queries (i.e. n boolean clauses) where n is the number of terms.
The number of clauses for a BooleanQuery is limited to 1024 by default.
You can change th
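The limit Franck mentions is a static setting; a sketch of raising it (1.4-era API; whether raising it is wise depends on your memory budget, since each expanded term becomes a clause):

```java
import org.apache.lucene.search.BooleanQuery;

public class RaiseClauseLimit {
    public static void main(String[] args) {
        // Default is 1024; wildcard/range queries that expand to more terms
        // than this throw BooleanQuery.TooManyClauses at search time.
        BooleanQuery.setMaxClauseCount(Integer.MAX_VALUE);
        System.out.println(BooleanQuery.getMaxClauseCount());
    }
}
```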
Since it is not allowed to use "*" or "?" symbols as the first character
of a search, I tried the following query as an alternative:
"Field_1: ([a* TO z*] OR [A* TO Z*] OR [0* TO 9*])"
but the QueryParser complains saying:
"org.apache.lucene.search.BooleanQuery$TooManyClauses".
Any idea why this
> p.s. This ought to go on the wiki :)
It's now included in a Lucene FAQ.
Otis
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
On Jun 4, 2004, at 3:07 AM, Antoine Brun wrote:
We are investigating the possibility to insert previous search results
to a new query.
Does anyone knows if it is possible or if such an evolution is under
development
I suppose you mean "search within search", so that the second
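One common way to express "search within search" in the 1.4-era API is to wrap the previous query in a `QueryFilter`; a hedged sketch (method and class names are mine):

```java
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryFilter;

public class SearchWithinSearch {
    // Restrict the refinement query to documents matched by the previous one.
    public static Hits refine(IndexSearcher searcher, Query previous, Query refinement)
            throws java.io.IOException {
        return searcher.search(refinement, new QueryFilter(previous));
    }
}
```

QueryFilter caches its result bits per index reader, so repeated refinements of the same base query stay cheap as long as the searcher is reused.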
Hello,
I am new to Lucene and more generally to search engines. As my company has decided to
base its new software on Lucene, I have a first question about Lucene's querying
functionality.
We are investigating the possibility of feeding previous search results into a new query.
Does
Does
On Tuesday 11 May 2004 15:58, Ryan Sonnek wrote:
> When performing a search with lucene, is it possible to only return a
> subset of the results? I need to be able to page through results, and it
Yes, http://www.nitwit.d
> Sent: Tuesday, May 11, 2004 11:05 AM
> To: Lucene Users List
> Subject: RE: pagable results
>
>
> I'd be curious what that 3rd party product is, if you are allowed to
> share that.
>
> Otis
>
> --- Ryan Sonnek <[EMAIL PROTECTED]> wrote:
> >
10 thousand). I've been
> working with a commercial search engine who's API had pagable results
> built in, and so I just assumed that it existed for lucene. I'm glad
> to hear that it's handled internally and think that's a much better
> route. I'd just like t
Great. Thanks Erik.
I haven't experienced any performance problems with Lucene, and our indexes are
relatively small (less than 10 thousand). I've been working with a commercial search
engine whose API had pagable results built in, and so I just assumed that it existed
for Lu