On 5/25/07, karl wettin <[EMAIL PROTECTED]> wrote:
PerFieldAnalyzerWrapper
that was fast! thanks!
http://lucene.zones.apache.org:8080/hudson/job/Lucene-Nightly/javadoc/org/apache/lucene/analysis/PerFieldAnalyzerWrapper.html
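A minimal usage sketch (the "title" field name and the analyzer choices below are just an example):

import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

// StandardAnalyzer is the default for every field...
PerFieldAnalyzerWrapper analyzer =
    new PerFieldAnalyzerWrapper(new StandardAnalyzer());
// ...except "title", which gets SimpleAnalyzer.
analyzer.addAnalyzer("title", new SimpleAnalyzer());
// Pass this wrapper to the IndexWriter and the QueryParser as usual.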
--
karl
---
Hello!
I have a Document with two fields: for one I would like to use
SimpleAnalyzer, and for the other StandardAnalyzer. Is there a
simple way to do this?
thanks
--
Paulo E. A. Silveira
Caelum Ensino e Soluções em Java
http://www.caelum.com.br/
-
Ok, I just tested it.
So consider:
String string = "word -foo";
String[] fields = { "title", "body" };
For the MultiField version I have:
MultiFieldQueryParser qp = new MultiFieldQueryParser(fields, SearchEngine.ANALYZER);
Query fieldsQuery = qp.parse(string);
System.out.println(fieldsQuery);
Hey Doron, I solved the problem with:
BooleanQuery fieldsQuery = new BooleanQuery();
for (String field : fields) {
    QueryParser qp = new QueryParser(field, SearchEngine.ANALYZER);
    fieldsQuery.add(qp.parse(string), BooleanClause.Occur.SHOULD);
}
That seems to have exactly the same effect as your suggestion.
MultiFieldQuery
Hello
What can I use as a drop-in replacement? I mean, for the (String,
String[], Analyzer) one.
The 1.9.1 javadoc says to use QueryParser.parse, but I need to
construct the query first. Is there a util method, or do I need to do
the for loop? If this is the solution, maybe it is more elegant to use
the for loop directly.
Hello
What is the best way to search? Should I keep all the fields
separate, or create one big field that contains them all? Does this
impact performance dramatically?
With one big field I would not need to create a BooleanQuery...
Last time I did not get any clues; let's see if this time will be better.
Hello
An example: my document has 3 fields: field1, field2 and field3. I
have to make queries for each field, and sometimes across all the
fields. Should I use a BooleanQuery when searching for a string in the
3 fields, or should I create a redundant field4 (where field4 is the
concatenation of field1+field2+field3)?
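For reference, a sketch of the two alternatives (field names from the example above; the query term "foo" is made up):

// Option 1: one BooleanQuery spanning the three separate fields.
BooleanQuery bq = new BooleanQuery();
bq.add(new TermQuery(new Term("field1", "foo")), BooleanClause.Occur.SHOULD);
bq.add(new TermQuery(new Term("field2", "foo")), BooleanClause.Occur.SHOULD);
bq.add(new TermQuery(new Term("field3", "foo")), BooleanClause.Occur.SHOULD);

// Option 2: index a redundant catch-all field at write time...
doc.add(new Field("field4", field1 + " " + field2 + " " + field3,
        Field.Store.NO, Field.Index.TOKENIZED));
// ...so a single TermQuery on "field4" covers all three at search time.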
Chris,
I would really like to have only these extra files, but I have the same problem here.
If I interrupt my IndexWriter with a kill signal, most of the time I
will be left with a lock file AND corrupted index files (the searcher
will throw some IllegalStateExceptions after the lock file is
deleted).
Paulo
Nick,
It is a guess, but the only difference between my approach and yours
is that I am optimizing as soon as I open the writer, while you are
optimizing after the last (100th) document is written.
At the same time I am using:
writer.setUseCompoundFile(true);
writer.
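(The mail is cut off above. For context, these are the writer settings commonly combined to keep the open-file count down; the values and the directory/analyzer variables are illustrative:)

IndexWriter writer = new IndexWriter(directory, analyzer, false);
writer.setUseCompoundFile(true); // pack each segment into a single .cfs file
writer.setMergeFactor(10);       // limit how many segments accumulate before a merge
writer.optimize();               // merge everything down to one segment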
Nick!
I also had the same problem. Now in my SearchEngine class, when I
write a document to the index, I check whether the number of documents
mod 100 is 0; if it is, I call optimize().
optimize() reduces the number of segment files used by the index, so
the number of open files is also reduced.
Take a look:
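(The snippet itself was truncated in the archive; a minimal reconstruction of the pattern described, assuming writer is an open IndexWriter:)

writer.addDocument(doc);
// Optimize every 100 documents to merge segments and release file handles.
if (writer.docCount() % 100 == 0) {
    writer.optimize();
}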
I've just got a "docs out of order" error.
I have a database that is indexed every time an update occurs. The
index was fine for the last 3 weeks, and now, after the system threw
an exception because of a write lock that was not released (and I
deleted it), I am receiving this:
Can anyone help?
Full stack trace:
Hello,
IndexReader.delete receives a docNum.
How do I know the docNum for a given document? Will I always need to
get this number (sometimes called id in the javadocs) from Hits.id?
thanks
--
Paulo Silveira
http://www.paulo.com.br
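A sketch of the usual pattern, assuming each document carries a unique application-level key in a field (the "uid" field name and value are made up):

// Find the internal docNum via a search on the unique key...
Hits hits = searcher.search(new TermQuery(new Term("uid", "42")));
if (hits.length() > 0) {
    reader.delete(hits.id(0)); // hits.id(i) is the internal docNum
}
// ...or skip the docNum entirely: delete(Term) removes every matching doc.
reader.delete(new Term("uid", "42"));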
Would it be a lot better to batch updates? I could have a stack of
documents to be updated, and I would only write to the index when
documentsToBeUpdated.size() reaches a certain number.
thanks a lot.
--
Paulo Silveira
http://www.paulo.com.br
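A sketch of that batching idea; the threshold and the directory/analyzer variables are placeholders:

private final List<Document> documentsToBeUpdated = new ArrayList<Document>();
private static final int BATCH_SIZE = 100; // illustrative threshold

void queueUpdate(Document doc) throws IOException {
    documentsToBeUpdated.add(doc);
    if (documentsToBeUpdated.size() >= BATCH_SIZE) {
        IndexWriter writer = new IndexWriter(directory, analyzer, false);
        for (Document d : documentsToBeUpdated) {
            writer.addDocument(d);
        }
        writer.close();
        documentsToBeUpdated.clear();
    }
}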