Hi,
I'm dealing with a serious bug: my program doesn't work correctly. The problem most likely shows itself in this part of the code:

[code]
StandardAnalyzer stAnalyzer = new StandardAnalyzer(Version.LUCENE_31);
TokenStream stStream = stAnalyzer.tokenStream("analizedContent", new StringReader(handler.toString()));
SimpleAnalyzer siAnalyzer = new SimpleAnalyzer(Version.LUCENE_31);
TokenStream siStream = siAnalyzer.tokenStream("content", new StringReader(handler.toString()));

TermAttribute term = siStream.addAttribute(TermAttribute.class);
while (siStream.incrementToken()) {
    System.out.print(term.term() + ":");
}
System.out.println();

doc.add(new Field("analizedContent", stStream, TermVector.YES)); // 10 NO
doc.add(new Field("content", siStream, TermVector.YES)); // 10 NO

System.out.println("en string Value " + doc.getField("analizedContent").stringValue());
System.out.println("en to String " + doc.getField("analizedContent").toString());
System.out.println("en isStored " + doc.getField("analizedContent").isStored());
[/code]
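
For reference, these are the imports the snippet uses (plain Lucene 3.1, nothing exotic):

[code]
import java.io.StringReader;

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.Field.TermVector;
import org.apache.lucene.util.Version;
[/code]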

I get some text from Tika and feed it into two different analyzers. As you can see, thanks to the while loop I'm sure the TokenStreams themselves are fine. After that I add the two fields to the document. But when I then check the content of those fields (as in the code above, or later in the Lucene index), they are empty (null value). Can you help me with that? I have already tried other analyzers and I'm stuck. This is part of my final project and I'm out of time :(
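
In case it helps to see where I'm trying to get to, here is a stripped-down, self-contained sketch of the indexing path I have in mind (untested; the "rawContent" field name, the literal sample text, and the extra stored field are just placeholders I made up for this example, in my real code the text comes from Tika's handler):

[code]
import java.io.StringReader;

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.util.Version;

public class TokenStreamFieldSketch {
    public static void main(String[] args) throws Exception {
        // In the real code this string is handler.toString() from Tika;
        // a literal keeps the sketch self-contained.
        String text = "some text extracted by Tika";

        StandardAnalyzer stAnalyzer = new StandardAnalyzer(Version.LUCENE_31);
        SimpleAnalyzer siAnalyzer = new SimpleAnalyzer(Version.LUCENE_31);

        Document doc = new Document();

        // Each field gets its own fresh TokenStream that nothing has
        // consumed yet, so the IndexWriter would see all of the tokens.
        TokenStream stStream =
                stAnalyzer.tokenStream("analizedContent", new StringReader(text));
        doc.add(new Field("analizedContent", stStream, Field.TermVector.YES));

        TokenStream siStream =
                siAnalyzer.tokenStream("content", new StringReader(text));
        doc.add(new Field("content", siStream, Field.TermVector.YES));

        // Separate stored (not analyzed) copy of the raw text, so that
        // doc.getField("rawContent").stringValue() returns something.
        doc.add(new Field("rawContent", text, Field.Store.YES, Field.Index.NO));

        // The TokenStream-based fields still print null here, which is
        // exactly the part that confuses me.
        System.out.println(doc.getField("analizedContent").stringValue());
        System.out.println(doc.getField("rawContent").stringValue());
    }
}
[/code]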

help...
Andrew

PS.
I've already posted this on the Java forum, no replies so far:
http://www.java-forums.org/lucene/46766-adding-field-tokenstream-field-name-tokenstream-termvector-constructor.html
