[
https://issues.apache.org/jira/browse/LUCENE-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13002562#comment-13002562
]
Robert Muir commented on LUCENE-2939:
-------------------------------------
Given the back compat breaks in the API, are we sure we should try to shove
this into 3.1?
I am sympathetic to performance bugs, BUT it seems that one could use
TermVectors and FastVectorHighlighter for these large documents; the user is
hardly left without options.
As a safer alternative, we can document the issue in CHANGES.txt, recommend
that users take that approach for large documents, and take our time to fix
this for 3.2.
> Highlighter should try and use maxDocCharsToAnalyze in
> WeightedSpanTermExtractor when adding a new field to MemoryIndex as well as
> when using CachingTokenStream
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: LUCENE-2939
> URL: https://issues.apache.org/jira/browse/LUCENE-2939
> Project: Lucene - Java
> Issue Type: Bug
> Components: contrib/highlighter
> Reporter: Mark Miller
> Assignee: Mark Miller
> Priority: Minor
> Fix For: 3.1, 4.0
>
> Attachments: LUCENE-2939.patch, LUCENE-2939.patch, LUCENE-2939.patch
>
>
> Huge documents can be drastically slower than they need to be because the
> entire field is added to the memory index.
> This cost can be greatly reduced in many cases if we try to respect
> maxDocCharsToAnalyze.
> Things can be improved even further by respecting this setting with
> CachingTokenStream.
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira