[
https://issues.apache.org/jira/browse/LUCENE-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12520378
]
Michael McCandless commented on LUCENE-845:
-------------------------------------------
> I think this would be great. It's always been a pet peeve of mine
> that even in low pressure/activity environments, there is often a
> delay from write to read.
I'll open a new issue.
> Sounds like this would help take most of the work/risk off the
> developer.
Precisely! Out of the box we can get very low latency by having
IndexWriter flush single-document segments, without paying the O(N^2)
cost of merging those segments down so that the index is "at all
moments" ready for an IndexReader to open. Meanwhile IndexReader can
load such an index (or re-open it by loading only the "new" segments)
and very quickly reduce the number of segments itself, so that
searching is still fast.
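To make that flow concrete, here is a minimal sketch against the
Lucene 2.x API (the class name, field name, and RAMDirectory are just
for illustration; re-opening by loading only the "new" segments is the
proposed feature, so this sketch simply opens a fresh IndexReader
after each flush):

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class LowLatencyIndexing {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);

    for (int i = 0; i < 10; i++) {
      Document doc = new Document();
      doc.add(new Field("id", Integer.toString(i),
                        Field.Store.YES, Field.Index.UN_TOKENIZED));
      writer.addDocument(doc);
      writer.flush();  // each doc becomes its own tiny segment, visible right away

      // A reader opened now sees everything flushed so far; with the
      // proposed change it could instead re-open and load only the new
      // segments.
      IndexReader reader = IndexReader.open(dir);
      IndexSearcher searcher = new IndexSearcher(reader);
      // ... run searches against the near-real-time view ...
      searcher.close();
      reader.close();
    }
    writer.close();
  }
}
{code}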
> If you "flush by RAM usage" then IndexWriter may over-merge
> -----------------------------------------------------------
>
> Key: LUCENE-845
> URL: https://issues.apache.org/jira/browse/LUCENE-845
> Project: Lucene - Java
> Issue Type: Bug
> Components: Index
> Affects Versions: 2.1
> Reporter: Michael McCandless
> Assignee: Michael McCandless
> Priority: Minor
> Attachments: LUCENE-845.patch
>
>
> I think a good way to maximize performance of Lucene's indexing for a
> given amount of RAM is to flush (writer.flush()) the added documents
> whenever the RAM usage (writer.ramSizeInBytes()) has crossed the max
> RAM you can afford.
> But this can confuse the merge policy and cause over-merging unless
> you set maxBufferedDocs properly.
> This is because the merge policy looks at the current maxBufferedDocs
> to figure out which segments are level 0 (first flushed) or level 1
> (merged from <mergeFactor> level 0 segments).
> I'm not sure how to fix this. Maybe we could look at the net size (in
> bytes) of a segment and "infer" its level from that? Still, we would
> have to be resilient to the application suddenly increasing the RAM
> allowed.
> The good news is that to work around this bug, I think you just need
> to ensure that your maxBufferedDocs is less than mergeFactor *
> typical-number-of-docs-flushed.
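For reference, the "flush by RAM usage" pattern described above, with
that workaround applied, might look roughly like this (a sketch
against the Lucene 2.x API; the 32 MB budget and the maxBufferedDocs
and mergeFactor values are illustrative, not taken from the issue):

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class FlushByRamUsage {
  public static void main(String[] args) throws Exception {
    long maxRam = 32 * 1024 * 1024;     // illustrative 32 MB budget
    Directory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);

    // Workaround from the description: keep maxBufferedDocs below
    // mergeFactor * typical-number-of-docs-flushed so the merge
    // policy's level inference is not confused by RAM-triggered
    // flushes.
    writer.setMergeFactor(10);
    writer.setMaxBufferedDocs(1000);    // illustrative value only

    for (int i = 0; i < 100000; i++) {
      Document doc = new Document();
      doc.add(new Field("body", "text of document " + i,
                        Field.Store.NO, Field.Index.TOKENIZED));
      writer.addDocument(doc);

      // Flush whenever the buffered documents cross the RAM budget.
      if (writer.ramSizeInBytes() > maxRam) {
        writer.flush();
      }
    }
    writer.close();
  }
}
{code}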