Hi,
I want to apologize to, and warn, everyone who has received an
invitation from me for quechup.com. I signed up through an invitation
that I in turn received from a trusted contact, without paying much
attention to the procedure and without really knowing what it was about. In
Hi all,
I need to find a couple of result sets at the same time from the same
search criteria. The two sets are sorted according to different sort criteria.
From both I need just the top N results, but, because of business
rules, I still need to process the entire hit set for the search.
/PriorityQueue.html
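The truncated link above points at Lucene's PriorityQueue javadoc, the usual tool for bounded top-N collection. Below is a sketch of the idea in plain java.util terms: one pass over the full hit set feeds two bounded min-heaps, one per sort order, so both top-N lists come out of a single traversal. The Hit class, its fields, and the comparators are illustrative assumptions, not Lucene API.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch: one pass over all hits keeps the top N under two independent
// sort orders at once, using plain java.util.PriorityQueue (not Lucene's).
public class DualTopN {
    public static final class Hit {
        public final int doc;
        public final float score;
        public final long date;
        public Hit(int doc, float score, long date) {
            this.doc = doc; this.score = score; this.date = date;
        }
    }

    // Keep the N largest elements under the heap's comparator: the head of
    // a min-heap is the weakest survivor, so evict it when a better hit comes.
    private static void offer(PriorityQueue<Hit> heap, int n, Hit hit) {
        if (heap.size() < n) {
            heap.add(hit);
        } else if (heap.comparator().compare(hit, heap.peek()) > 0) {
            heap.poll();
            heap.add(hit);
        }
    }

    /** Returns {topByScore, topByDate}, each sorted best-first. */
    public static List<List<Hit>> collect(Iterable<Hit> allHits, int n) {
        Comparator<Hit> byScore = Comparator.comparingDouble(h -> h.score);
        Comparator<Hit> byDate  = Comparator.comparingLong(h -> h.date);
        PriorityQueue<Hit> scoreHeap = new PriorityQueue<>(n, byScore);
        PriorityQueue<Hit> dateHeap  = new PriorityQueue<>(n, byDate);
        for (Hit h : allHits) {           // single pass over the full hit set
            offer(scoreHeap, n, h);
            offer(dateHeap, n, h);
        }
        List<List<Hit>> out = new ArrayList<>();
        for (PriorityQueue<Hit> heap : List.of(scoreHeap, dateHeap)) {
            List<Hit> top = new ArrayList<>(heap);
            top.sort(heap.comparator().reversed());  // best first
            out.add(top);
        }
        return out;
    }
}
```

Each extra sort order costs only one more O(log N) heap per hit, so the full hit set is still processed exactly once.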
On 6/1/07, Carlos Pita [EMAIL PROTECTED] wrote:
Hi all,
I need to find a couple of result sets at the same time from the same
search criteria. The two sets are sorted according to different sort criteria.
From both I need just the top N results, but, because of business
rules
Hi all,
I have a searcher and a writer: the writer writes N changes, then the
searcher is reopened to reflect them. Depending on whether autoCommit is
false or true for the writer, it may also have to be closed after the N-change
batch, just to make the flushed changes visible. But suppose for
Hi again,
On 5/24/07, Yonik Seeley [EMAIL PROTECTED] wrote:
Currently, a deleted doc is removed when the segment containing it is
involved in a segment merge. A merge could be triggered on any
addDocument(), making it difficult to incrementally update anything.
sorry but is the document
and at the same time will keep most of
the data in memory in its definitive format.
Thank you for your answer.
Cheers,
Carlos
On 5/25/07, Antony Bowesman [EMAIL PROTECTED] wrote:
Carlos Pita wrote:
Hi all,
Is there any guarantee that the maxDoc returned by a reader will be about the
total
behavior, I recommend using
a TopDocs/TopDocCollector.
But be aware that if you load the document for each one, you may incur
a significant penalty, although lazy loading helped me a lot; see
FieldSelector.
On 5/23/07, Carlos Pita [EMAIL PROTECTED] wrote:
Hi folks,
I need to collect some
Hi all,
Is there any guarantee that the maxDoc returned by a reader will be about the
total number of indexed documents?
The motivation of this question is that I want to associate some info to
each document in the index, and in order to access this additional data in
O(1) I would like to do
place...
Best
Erick
On 5/24/07, Carlos Pita [EMAIL PROTECTED] wrote:
Hi Erick,
thank you for your prompt answer. What do you mean by loading the
document?
Accessing one of the stored fields? In that case I'm afraid I would need
to
do it. For example, in the aforementioned case of a result
Why wouldn't numdocs serve?
Because the document id (which is the array index) would be in the range 0
... maxDoc and not 0 ... numDocs, wouldn't it?
Cheers,
Carlos
Best
Erick
The motivation of this question is that I want to associate some info to
each document in the index, and in
No. It will always be at least as large as the total number of documents. But
that will also count deleted documents.
Do you mean that deleted document ids won't be reused, so the index
maxDoc will keep growing over time? Isn't there any way to compress
the range? It seems strange to me,
be shorter than maxDoc.
That's what I get for reading quickly...
Best
Erick
On 5/24/07, Carlos Pita [EMAIL PROTECTED] wrote:
No. It will always be at least as large as the total number of documents.
But that will also count deleted documents.
Do you mean that deleted document ids won't
be
used to, e.g., determine how big to allocate an array which will have an
element for every document number in an index.
Isn't that what you're wondering about?
Erick
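Erick's point can be made concrete without any Lucene API: document numbers are not renumbered on delete, so a side array indexed by doc id needs maxDoc slots, not numDocs. The tiny stand-in below assumes a made-up list of live doc ids in place of a real reader.

```java
// Sketch of why per-document side arrays are sized by maxDoc(), not
// numDocs(): deleted doc ids leave gaps, so the highest live id can
// exceed the live-document count. The arguments are stand-ins, not
// Lucene API.
public class MaxDocSizing {
    public static String[] titleCache(int maxDoc, int[] liveDocIds, String[] titles) {
        String[] cache = new String[maxDoc];   // one slot per doc *number*
        for (int i = 0; i < liveDocIds.length; i++) {
            cache[liveDocIds[i]] = titles[i];  // O(1) lookup by doc id later
        }
        return cache;                          // deleted ids stay null
    }
}
```

With maxDoc = 5 and live ids {0, 1, 3, 4}, an array sized by numDocs (4) would throw on id 4; the maxDoc-sized array simply leaves slot 2 null.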
On 5/24/07, Carlos Pita [EMAIL PROTECTED] wrote:
That's no problem, I can regenerate my entire extra data structure upon
but
anyway, just to be sure, is there a way to make the index inform my
application of merging events?
Cheers,
Carlos
On 5/24/07, Yonik Seeley [EMAIL PROTECTED] wrote:
On 5/24/07, Carlos Pita [EMAIL PROTECTED] wrote:
Yes Erick, that's fine. But the fact is that I'm not sure whether the
next
added
Hmm, some of my fields are in fact multi-valued. But anyway, I could store
them as a single string and split after retrieval.
Will FieldCache work for the first search with some query, or only for
subsequent ones, for which the fields are already cached?
Cheers,
Carlos
On 5/24/07, Chris
Nice, I will write the ids into a byte array with a DataOutputStream and
then marshal that array into a String with a UTF-8 encoding. This way there
is no need for parsing or splitting, and the encoding is space efficient.
This marshaled String will be cached with a FieldCache. Thank you for your
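A sketch of that marshaling scheme with stdlib streams. One caveat: decoding arbitrary bytes as UTF-8 is lossy (invalid sequences become replacement characters), so this sketch swaps in ISO-8859-1, which maps every byte to a char and back losslessly. The method names are made up for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Sketch: pack an id array into a String with DataOutputStream, no
// delimiters and no parsing on the way back, as described above.
public class IdMarshal {
    public static String marshal(int[] ids) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(ids.length);             // length prefix
            for (int id : ids) out.writeInt(id);  // fixed 4 bytes per id
            out.flush();
            // ISO-8859-1: every byte maps to exactly one char, reversibly.
            return new String(bytes.toByteArray(), StandardCharsets.ISO_8859_1);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static int[] unmarshal(String s) {
        try {
            DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(s.getBytes(StandardCharsets.ISO_8859_1)));
            int[] ids = new int[in.readInt()];
            for (int i = 0; i < ids.length; i++) ids[i] = in.readInt();
            return ids;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```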
Hi folks,
I need to collect some global information from my first 1000 search results
in order to build up some search refining components containing only
relevant values (those which correspond to at least one of the first 1000
hits). For example, the results are products and there is a store
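The refinement step being described can be sketched independently of Lucene: walk only the first N hits and tally the values of a facet-like field (a "store" field here, per the example), so the refining component offers only values that actually occur among those hits. The per-hit value list below stands in for real field retrieval and is an assumption.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: count field values over only the first N hits, so refinement
// widgets show nothing that would yield zero results.
public class RefineValues {
    public static Map<String, Integer> storeCounts(List<String> storePerHit, int n) {
        Map<String, Integer> counts = new HashMap<>();
        int limit = Math.min(n, storePerHit.size());
        for (int i = 0; i < limit; i++) {      // only the first N hits matter
            counts.merge(storePerHit.get(i), 1, Integer::sum);
        }
        return counts;
    }
}
```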