[ https://issues.apache.org/jira/browse/LUCENE-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13002716#comment-13002716 ]

Michael McCandless commented on LUCENE-2939:
--------------------------------------------

bq. In order to release more often we have to stop this cycle of shoving things 
in at the last minute

+1

Development is constant around here; if we keep holding back a release
for one more issue to get in, we will never release.

Our lack-of-release reflects badly on Lucene/Solr -- the outside world
uses this as the proxy for our health and we know we get bad marks.

Worse, this whole situation (people getting angry at the RM for doing
*precisely* what the RM is supposed to do) is a disincentive for
future RMs to volunteer doing releases, thus causing even less
frequent releases.  It's already hard enough for us to get a release
out as it is.

The RM is *supposed* to be an asshole (not that Robert has acted like
one, here, imho).  S/he has full authority to draw the line, crack the
whip, do whatever it takes to get the release out.  None of us should
question that unless we are willing to step up and be the RM ourselves,
because it is NOT an easy job.

I think this issue should wait for 3.2.


> Highlighter should try and use maxDocCharsToAnalyze in 
> WeightedSpanTermExtractor when adding a new field to MemoryIndex as well as 
> when using CachingTokenStream
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-2939
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2939
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: contrib/highlighter
>            Reporter: Mark Miller
>            Assignee: Mark Miller
>            Priority: Minor
>             Fix For: 3.1.1, 3.2, 4.0
>
>         Attachments: LUCENE-2939.patch, LUCENE-2939.patch, LUCENE-2939.patch
>
>
> huge documents can be drastically slower than need be because the entire 
> field is added to the memory index
> this cost can be greatly reduced in many cases if we try and respect 
> maxDocCharsToAnalyze
> things can be improved even further by respecting this setting with 
> CachingTokenStream

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
