[
https://issues.apache.org/jira/browse/LUCENE-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13002607#comment-13002607
]
Robert Muir commented on LUCENE-2939:
-------------------------------------
I think 3.2 is a good tradeoff, unless we introduced this slowdown in 3.1 (my
earlier question).
If we are introducing this slowdown in the 3.1 release, then I think it's much
more serious, and I would instead suggest we set the issue to blocker.
Regardless, I think there are some technical steps that can be taken to ease my
mind about the patch; for example, the TokenFilter here can be tested
independently with BaseTokenStreamTestCase (which is good at catching reuse
bugs like the one I hinted at).
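To illustrate the kind of reuse bug meant here, consider a minimal sketch (plain Java, not actual Lucene code; the class and method names are hypothetical): a tokenizer that stops once a character budget, analogous to maxDocCharsToAnalyze, is spent. If the consumed-character counter is not cleared in reset(), the second use of the same instance silently produces no tokens — exactly the class of stale-state bug that BaseTokenStreamTestCase's reuse checks are designed to catch.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical analogue of a char-limited TokenFilter. The field `consumed`
// is the per-document state that MUST be cleared on reuse.
class CharBudgetTokenizer {
    private final int maxChars; // analogous to maxDocCharsToAnalyze
    private int consumed;       // stale unless cleared in reset()

    CharBudgetTokenizer(int maxChars) { this.maxChars = maxChars; }

    // Lucene TokenStreams must be reset() before each use; forgetting to
    // clear per-document state here is the classic reuse bug.
    void reset() { consumed = 0; }

    List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        for (String tok : text.split("\\s+")) {
            if (consumed >= maxChars) break; // budget spent: stop analyzing
            tokens.add(tok);
            consumed += tok.length() + 1;    // +1 for the separator
        }
        return tokens;
    }
}
```

With reset() called between documents, both passes over "aaaa bbbb cccc" under a 10-char budget yield [aaaa, bbbb]; drop the reset() and the second pass returns an empty list.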
> Highlighter should try and use maxDocCharsToAnalyze in
> WeightedSpanTermExtractor when adding a new field to MemoryIndex as well as
> when using CachingTokenStream
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: LUCENE-2939
> URL: https://issues.apache.org/jira/browse/LUCENE-2939
> Project: Lucene - Java
> Issue Type: Bug
> Components: contrib/highlighter
> Reporter: Mark Miller
> Assignee: Mark Miller
> Priority: Minor
> Fix For: 3.1.1, 3.2, 4.0
>
> Attachments: LUCENE-2939.patch, LUCENE-2939.patch, LUCENE-2939.patch
>
>
> Huge documents can be drastically slower than they need to be because the
> entire field is added to the MemoryIndex.
> This cost can be greatly reduced in many cases if we try to respect
> maxDocCharsToAnalyze.
> Things can be improved even further by respecting this setting with
> CachingTokenStream.
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]