Looks like it's because the query coming in is a ComplexPhraseQuery and
the Highlighter doesn't currently know how to handle that type.
It would need to be rewritten first, barring the special handling it
needs - but unfortunately, that will break multi-term query highlighting
unless you use boolean r
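For readers following the thread: "rewritten" here refers to expanding a multi-term query (wildcards, prefixes) into the concrete terms it matches, so a highlighter that only understands plain terms can mark them up. Below is a toy, Lucene-free sketch of that expansion idea - the names and the term-dictionary approach are illustrative, not Lucene's actual API.

```java
import java.util.ArrayList;
import java.util.List;

public class RewriteSketch {
    // Toy stand-in for query rewriting: expand a prefix pattern such as
    // "quer*" against a term dictionary into the concrete terms it matches.
    static List<String> rewritePrefix(String pattern, List<String> termDictionary) {
        String prefix = pattern.substring(0, pattern.length() - 1); // drop trailing '*'
        List<String> expanded = new ArrayList<>();
        for (String term : termDictionary) {
            if (term.startsWith(prefix)) {
                expanded.add(term);
            }
        }
        return expanded;
    }

    public static void main(String[] args) {
        List<String> dict = List.of("query", "queries", "parser", "highlight");
        // A highlighter that only understands concrete terms can now mark
        // up "query" and "queries" even though the user typed "quer*".
        System.out.println(rewritePrefix("quer*", dict)); // [query, queries]
    }
}
```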
I think there is a problem about attachment. I am re-sending it.
Thank you for your interest, Mark.
I am sending Java code (using Lucene 2.9.0) that simply demonstrates the
problem. When the same query string is parsed by Lucene's default QueryParser, highlighting works.
Yes - please share your test programs and I can investigate (ApacheCon
this week, so I'm not sure when).
And it's best to keep communications on the list - that allows others
with similar issues (now or in the future) to benefit from whatever goes
on. You will also reach a wider pool of people that
Thanks for pointing out this issue.
The bug was related to having a doc bigger than the maxNumDocsToAnalyze
setting. In that situation, the last fragment created was always sized
from the maxNumDocsToAnalyze position to the remainder of the doc (in your
case, quite large!).
I have fixed this in SVN.
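To make the bug described above concrete, here is a self-contained toy sketch (not the Highlighter's actual internals - the method and variable names are illustrative): fragments are cut every `fragSize` characters up to an analyze limit, but the buggy version let the final fragment run on to the end of the document instead of stopping at that limit.

```java
public class LastFragmentBug {
    // Return the final fragment of a doc. Fragments start every fragSize
    // chars up to analyzeLimit; the "buggy" path lets the last fragment run
    // to the end of the doc, while the fixed path stops at the limit.
    static String lastFragment(String doc, int fragSize, int analyzeLimit, boolean buggy) {
        int limit = Math.min(analyzeLimit, doc.length());
        int lastCut = (limit / fragSize) * fragSize; // start of the final fragment
        int end = buggy ? doc.length() : limit;      // the fix: stop at the limit
        return doc.substring(lastCut, end);
    }

    public static void main(String[] args) {
        String doc = "x".repeat(1000);
        // Buggy: 800 chars - everything from the last cut to the end of the doc.
        System.out.println(lastFragment(doc, 100, 250, true).length());
        // Fixed: 50 chars - bounded by the analyze limit.
        System.out.println(lastFragment(doc, 100, 250, false).length());
    }
}
```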
All looks OK with that bit.
At the risk of sounding obvious - are you mistaking the results from
multiple documents for the highlighted content of just one document?
eg the end of your "for" loop looks like this:
System.out.print(result);
}
and you assume the printed display is from just
Hi, Mark,
Please ignore my previous posting. I sent it by accident.
Sorry for the confusion. The complete code is here:
===
Analyzer analyzer = new StandardAnalyzer();
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
Ying - please properly subscribe to the java-user list - I've
moderated in each of your mails thus far.
Erik
On May 5, 2005, at 2:18 PM, [EMAIL PROTECTED] wrote:
Hi, Mark,
Sorry for the confusion. The complete code is here:
===
Analyzer analyzer = new StandardAnalyzer();
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
String line = in.readLine();
if (line.length() == -
As much of the example output as you have shown is
roughly what I would expect - using the default
SimpleFragmenter you get roughly 100-character
fragments, and you have shown 3 fragments sized 97, 100
and 105 chars long, separated by "...".
> Of course the result is far more than this.
So ar
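For context on the fragment sizes discussed above: a simple fragmenter cuts at the first token boundary at or after a target size (100 chars by default), which is why fragment lengths land near, but rarely exactly on, 100. A Lucene-free toy sketch of that behaviour (splitting on spaces rather than a real token stream):

```java
import java.util.ArrayList;
import java.util.List;

public class FragmenterSketch {
    // Cut the text into fragments at the first whitespace boundary at or
    // after fragSize characters, so fragments come out slightly over or
    // under the target size rather than exactly on it.
    static List<String> fragment(String text, int fragSize) {
        List<String> fragments = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String token : text.split(" ")) {
            if (current.length() > 0) current.append(' ');
            current.append(token);
            if (current.length() >= fragSize) {
                fragments.add(current.toString());
                current.setLength(0);
            }
        }
        if (current.length() > 0) fragments.add(current.toString());
        return fragments;
    }

    public static void main(String[] args) {
        String text = "lucene ".repeat(60).trim();
        for (String f : fragment(text, 100)) {
            // Each fragment is 104 chars here: the boundary falls just past 100.
            System.out.println(f.length());
        }
    }
}
```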
Quoting mark harwood <[EMAIL PROTECTED]>:
Hi, Mark,
I just used StandardAnalyzer, and the code is as follows:
=
Analyzer analyzer = new StandardAnalyzer();
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
>> One of my
>> search results from our
>> records contains far too much of the text
This is a problem I haven't seen before. I suspect it
may have something to do with your choice of analyzer.
Your paper will only ever be fragmented on "token gap"
boundaries, i.e. points in the token stream where t
Hi, All,
I use the Lucene highlight package to generate KWIC for our application.
Part of the code is as follows:
=
if (text != null) {
TokenStream tokenStream = analyzer.tokenStream("contents",
new StringReade
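KWIC (keyword in context) means showing each hit with a window of surrounding text. The real code above feeds a TokenStream into the Highlighter to do this; here is a minimal, Lucene-free sketch of just the idea, with illustrative names:

```java
public class KwicSketch {
    // Return the keyword with up to `window` characters of context on each
    // side, with "..." marking truncation; null if the keyword is absent.
    static String kwic(String text, String keyword, int window) {
        int at = text.indexOf(keyword);
        if (at < 0) return null;
        int start = Math.max(0, at - window);
        int end = Math.min(text.length(), at + keyword.length() + window);
        return (start > 0 ? "..." : "") + text.substring(start, end)
                + (end < text.length() ? "..." : "");
    }

    public static void main(String[] args) {
        String doc = "The Lucene highlight package can generate keyword-in-context snippets.";
        // Prints the match with ~15 chars of context on each side.
        System.out.println(kwic(doc, "highlight", 15));
    }
}
```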