[
https://issues.apache.org/jira/browse/LUCENE-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13002582#comment-13002582
]
Mark Miller commented on LUCENE-2939:
-------------------------------------
Grant: regarding the back-compat issue - I'm not really worried about it
myself, since this is contrib and we have changed these interfaces before with
no complaint.
But another temporary option is to special-case it: do an instanceof check on
the Scorer, and if it's our QueryScorer, cast it and set the max chars to
analyze. It's not as pretty, but it avoids the method signature change.
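
Roughly, that special case could look something like the following in the
Highlighter (this assumes QueryScorer grows a setMaxDocCharsToAnalyze setter
as the patch proposes; the applyCharLimit helper name is just for this sketch,
not part of the patch):

{code:java}
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.Scorer;

public final class MaxCharsWorkaroundSketch {

  // Only our own QueryScorer gets the hint; any third-party Scorer
  // implementation is left alone, so the Scorer interface stays unchanged.
  static void applyCharLimit(Scorer fragmentScorer, int maxDocCharsToAnalyze) {
    if (fragmentScorer instanceof QueryScorer) {
      // Cast and push the limit down so the extractor can stop analyzing
      // once maxDocCharsToAnalyze characters have been seen.
      ((QueryScorer) fragmentScorer).setMaxDocCharsToAnalyze(maxDocCharsToAnalyze);
    }
  }
}
{code}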
> Highlighter should try and use maxDocCharsToAnalyze in
> WeightedSpanTermExtractor when adding a new field to MemoryIndex as well as
> when using CachingTokenStream
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: LUCENE-2939
> URL: https://issues.apache.org/jira/browse/LUCENE-2939
> Project: Lucene - Java
> Issue Type: Bug
> Components: contrib/highlighter
> Reporter: Mark Miller
> Assignee: Mark Miller
> Priority: Minor
> Fix For: 3.1, 4.0
>
> Attachments: LUCENE-2939.patch, LUCENE-2939.patch, LUCENE-2939.patch
>
>
> Huge documents can be drastically slower than they need to be because the
> entire field is added to the MemoryIndex.
> This cost can be greatly reduced in many cases if we try to respect
> maxDocCharsToAnalyze.
> Things can be improved even further by respecting this setting with
> CachingTokenStream.
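
As a rough sketch of the idea above (not the filter from the attached
patches), a simple TokenFilter could cut the stream off once token offsets
pass maxDocCharsToAnalyze, so neither the MemoryIndex nor a caching token
stream ever sees the tail of a huge field:

{code:java}
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

// Sketch of a char-limiting filter: stops the wrapped stream once a token
// starts past maxDocCharsToAnalyze, so the rest of the field is never analyzed.
final class CharLimitSketchFilter extends TokenFilter {

  private final int maxDocCharsToAnalyze;
  private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);

  CharLimitSketchFilter(TokenStream in, int maxDocCharsToAnalyze) {
    super(in);
    this.maxDocCharsToAnalyze = maxDocCharsToAnalyze;
  }

  @Override
  public boolean incrementToken() throws IOException {
    // Keep returning tokens only while they begin before the char limit.
    return input.incrementToken() && offsetAtt.startOffset() < maxDocCharsToAnalyze;
  }
}
{code}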