[ https://issues.apache.org/jira/browse/LUCENE-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mark Miller updated LUCENE-2939:
--------------------------------
Attachment: LUCENE-2939.patch
The other problem was that CachingTokenFilter was eagerly exhausting the entire
stream - which could mean spinning through a very large TokenStream - uselessly
if a user has set the maxDocCharsToAnalyze setting.
This, combined with adding the whole stream to the MemoryIndex, has been a very
large performance bug in the span highlighter for some time now.
In my test case, using Solr's DEFAULT_MAX_CHARS_TO_ANALYZE = 50*1024,
highlighting 10 very large PDF docs dropped from some 20 seconds to 300ms.
New patch with some fixes and cleanup. I no longer see the above error with a
more correct TokenFilter impl.
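For context, a minimal sketch of the kind of TokenFilter that avoids the eager
exhaustion problem: it ends the stream once token offsets pass the char limit,
so CachingTokenFilter never has to spin through the rest of a huge stream. The
class and field names here are illustrative, not taken from the patch.

{code:java}
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

/**
 * Sketch only: stops the stream once a token starts past a char limit, so
 * downstream consumers such as CachingTokenFilter never exhaust a huge stream.
 */
public final class CharLimitTokenFilter extends TokenFilter {

  private final int maxCharOffset;
  private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);

  public CharLimitTokenFilter(TokenStream input, int maxCharOffset) {
    super(input);
    this.maxCharOffset = maxCharOffset;
  }

  @Override
  public boolean incrementToken() throws IOException {
    // Pass tokens through until one would start at or beyond the limit,
    // then report end-of-stream instead of consuming the rest of the input.
    return input.incrementToken() && offsetAtt.startOffset() < maxCharOffset;
  }
}
{code}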
> Highlighter should try and use maxDocCharsToAnalyze in
> WeightedSpanTermExtractor when adding a new field to MemoryIndex
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: LUCENE-2939
> URL: https://issues.apache.org/jira/browse/LUCENE-2939
> Project: Lucene - Java
> Issue Type: Bug
> Components: contrib/highlighter
> Reporter: Mark Miller
> Assignee: Mark Miller
> Priority: Minor
> Attachments: LUCENE-2939.patch, LUCENE-2939.patch
>
>
> Huge documents can be drastically slower than they need to be because the entire
> field is added to the MemoryIndex.
> This cost can be greatly reduced in many cases if we try to respect
> maxDocCharsToAnalyze.
> The cost is still not fantastic, but it is at least improved in many situations
> and can be influenced by this change.
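Regarding the maxDocCharsToAnalyze knob mentioned in the description, here is a
minimal usage sketch from the consumer side, assuming the standard
Highlighter/QueryScorer API; the "content" field name and the wrapper class are
illustrative only.

{code:java}
import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.InvalidTokenOffsetsException;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleHTMLFormatter;

public class HighlightWithLimit {

  // Highlights "text" for "query", analyzing at most maxChars of the field.
  public static String highlight(Query query, Analyzer analyzer, String text, int maxChars)
      throws IOException, InvalidTokenOffsetsException {
    Highlighter highlighter =
        new Highlighter(new SimpleHTMLFormatter(), new QueryScorer(query));
    // Cap how many chars of a huge field get analyzed; with this issue's change
    // the span path (WeightedSpanTermExtractor/MemoryIndex) respects it too.
    highlighter.setMaxDocCharsToAnalyze(maxChars);
    // "content" is an illustrative field name.
    return highlighter.getBestFragment(analyzer, "content", text);
  }
}
{code}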