: Thank you, this was exactly what I needed. So "tokenizing" really denotes a
: more general process that can involve normalizing the case or whatever else
: can be done with a filter. This is where I was confused.
When constructing a Document/Field, "TOKENIZED" really refers to the
broader sense ...
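To make that concrete, here is a minimal sketch (assuming the 2.3-era
analysis API) of an Analyzer in which the tokenizer splits the text and
filters then normalize it; the class name and stop words are made up for
illustration:

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;

// "Tokenized" in the broad sense: tokenize, then normalize with filters.
public class NormalizingAnalyzer extends Analyzer {
  public TokenStream tokenStream(String fieldName, Reader reader) {
    TokenStream stream = new StandardTokenizer(reader); // split into tokens
    stream = new LowerCaseFilter(stream);               // normalize case
    stream = new StopFilter(stream,                     // drop stop words
        StopFilter.makeStopSet(new String[] { "a", "the" }));
    return stream;
  }
}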
Hi all,
I'm using the new payloads feature to assign types to tokens as I
index. The type is based on the surrounding text in the document, and
I want to filter my searches based on this token type.
For example, I may index the token "house", which may be found in
different places with different ...
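A minimal sketch of the indexing side, assuming the 2.3 Token/Payload
API; the filter name and the one-byte type encoding are inventions for
illustration (the real type decision would come from the surrounding
text):

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.index.Payload;

// Hypothetical filter: stores a one-byte "type" code as each token's payload.
public class TypePayloadFilter extends TokenFilter {
  public TypePayloadFilter(TokenStream input) {
    super(input);
  }

  public Token next(Token reusableToken) throws IOException {
    Token t = input.next(reusableToken);
    if (t != null) {
      // Assumption: an upstream component recorded the category in type();
      // encode it as a single payload byte so searches can filter on it.
      byte code = "PLACE".equals(t.type()) ? (byte) 1 : (byte) 0;
      t.setPayload(new Payload(new byte[] { code }));
    }
    return t;
  }
}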
: Just out of interest, why does field:* go via getWildcardQuery instead of
: getPrefixQuery? It seems to me that it should be treated as a prefix of "",
: but am I missing something important?
I think it's just an artifact of the grammar ... the first character of
"the string" is a wildcard.
I am facing an Out Of Memory error while indexing my files. It doesn't
happen consistently. I read through some previous posts and
documentation and came up with a solution. I'd appreciate it if someone
could let me know whether it's the right approach.
My code goes as below; the text in bold is the code
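(The code itself did not survive in this archive. For reference, a
sketch of the usual memory-related IndexWriter settings in the 2.3 API;
the index path is a placeholder:)

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class IndexerSetup {
  public static void main(String[] args) throws Exception {
    IndexWriter writer =
        new IndexWriter("/path/to/index", new StandardAnalyzer(), true);
    // Flush by RAM usage rather than buffered-document count, so the
    // indexing buffer stays bounded regardless of document size.
    writer.setRAMBufferSizeMB(32.0);
    writer.setMergeFactor(10); // a modest merge factor keeps merges small
    // ... add documents ...
    writer.close();
  }
}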
You are running into one of the problems relating to "Field" being reused
by both the indexing code and the searching code.
Things like tokenStreamValue() and readerValue() only have meaning on
Fields that are about to be indexed ... Field objects returned from
searches will never return data from those methods.
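A small sketch of that asymmetry, using the 2.3 Field API (names and
values are placeholders):

import java.io.StringReader;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class FieldSides {
  public static void main(String[] args) {
    // Indexing side: a Field built from a Reader has a live readerValue().
    Document doc = new Document();
    doc.add(new Field("body", new StringReader("text to analyze")));
    System.out.println(doc.getField("body").readerValue()); // non-null

    // Search side: on a Document fetched via searcher.doc(id),
    // readerValue() and tokenStreamValue() come back null; only stored
    // string/binary values survive the round trip.
  }
}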
Thanks for your clarifications, Mark!
Jay
Mark Miller wrote:
5. Although currently IndexSearcher.close() does almost nothing except
to close the internal index reader, it might be safer to close the
searcher itself as well in closeCachedSearcher(), just in case the
searcher may have other ...
5. Although currently IndexSearcher.close() does almost nothing except
to close the internal index reader, it might be safer to close the
searcher itself as well in closeCachedSearcher(), just in case the
searcher may have other resources to release in a future version of
Lucene.
Didn't c
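The change being suggested would look roughly like this (a sketch only;
closeCachedSearcher and the field name are taken from the code under
discussion, but the body is not its actual implementation):

import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;

public class SearcherCloser {
  private IndexSearcher cachedSearcher;

  private void closeCachedSearcher() {
    try {
      cachedSearcher.getIndexReader().close(); // what happens today
    } catch (IOException e) {
      // log and continue
    }
    try {
      cachedSearcher.close(); // the suggested extra step, in case future
                              // versions hold other resources
    } catch (IOException e) {
      // log and continue
    }
  }
}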
Payloads are added to the Token during analysis. Have a look in the
contrib/analyzers module in 2.3 or the trunk version of Lucene. There
are a couple of examples in there that add payloads to tokens (see the
payloads package at
http://lucene.apache.org/java/2_3_0/api/contrib-analyzers/ or ...)
Thanks for the feedback, Jay. One at a time:
Jay wrote:
Great effort on the much improved IndexAccessor, Mark!
A couple questions and observations:
1. In release(Searcher), you removed the check from an earlier version
that the given searcher is the cached one. This could potentially cause
problems ...
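For illustration, the kind of check point 1 refers to might look like
this (the field names are assumed; this is not the actual IndexAccessor
code):

import java.io.IOException;
import org.apache.lucene.search.Searcher;

public class SearcherPool {
  private Searcher cachedSearcher;
  private int refCount;

  public synchronized void release(Searcher searcher) throws IOException {
    if (searcher != cachedSearcher) {
      // A searcher handed out before a reopen: no longer tracked, close it.
      searcher.close();
      return;
    }
    refCount--; // still the cached one: just drop a reference
  }
}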
Hi All,
I want to store information in a payload. How do I write a payload value to the index?
How do I sort results depending upon the payload?
I could not find any method in the Document class that takes a Payload as an argument.
Regards,
Allahbaksh
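As far as I know, payloads cannot be written through the Document class;
they are attached during analysis (see the TokenFilter example earlier in
the thread) and read back per position. A sketch of the reading side,
assuming the 2.3-era TermPositions API (field, term, and path are
placeholders):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermPositions;

public class PayloadReadBack {
  public static void main(String[] args) throws Exception {
    IndexReader reader = IndexReader.open("/path/to/index");
    TermPositions tp = reader.termPositions(new Term("body", "house"));
    while (tp.next()) {                     // each matching document
      for (int i = 0; i < tp.freq(); i++) { // each position in the doc
        tp.nextPosition();
        if (tp.isPayloadAvailable()) {
          byte[] payload =
              tp.getPayload(new byte[tp.getPayloadLength()], 0);
          // payload[0] holds whatever byte the analysis chain stored
        }
      }
    }
    reader.close();
  }
}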