[ https://issues.apache.org/jira/browse/LUCENE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978401#action_12978401 ]

Michael McCandless commented on LUCENE-2324:
--------------------------------------------

{quote}
We're going to great lengths it seems to emulate a producer consumer queue (eg,
ordering of calls with sequence ids, thread pooling) without actually
implementing one. A fixed size blocking queue would simply block threads as
needed and would probably look cleaner in code. We could still implement thread
affinities, though I can't see most applications requiring affinity, so perhaps
we can avoid it for now and add it back later?
{quote}

I'm not sure we should queue.  I wonder how much this'd slow down the 
single-threaded case?
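For comparison, here's a minimal sketch of the fixed-size blocking-queue approach under discussion (the class names and the StringBuilder stand-in for a per-thread writer state are hypothetical, not Lucene's actual DWPT API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a fixed pool of per-thread writer states handed out
// via a blocking queue; indexing threads block when all states are in use.
class WriterPool {
    private final BlockingQueue<StringBuilder> pool; // stand-in for a DWPT state

    WriterPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(new StringBuilder());
        }
    }

    void index(String doc) throws InterruptedException {
        StringBuilder state = pool.take();   // blocks if all states are busy
        try {
            state.append(doc).append('\n');  // "index" the doc into this state
        } finally {
            pool.put(state);                 // return the state to the pool
        }
    }

    int available() {
        return pool.size();                  // states currently checked in
    }
}
```

The blocking behavior comes for free with take()/put(), but note that every index call pays the queue's synchronization cost even when only one thread is indexing, which is exactly the single-threaded overhead in question.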

Also: I thought we no longer have sequence IDs?  (At least for landing DWPT; 
after that, for "true RT", we'll need something like sequence IDs.)

I think thread/doc-class affinity is fairly important.  Docs compress better 
when they are indexed alongside similar docs.
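A sticky doc-class-to-writer mapping could be as simple as this sketch (the names are made up for illustration; this is not Lucene code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of doc-class affinity: docs with the same "class"
// (e.g. source or schema) are routed to the same writer state, so similar
// docs land in the same in-RAM segment and compress better together.
class AffinityRouter {
    private final Map<String, Integer> affinity = new ConcurrentHashMap<>();
    private final int numStates;
    private int next = 0;

    AffinityRouter(int numStates) {
        this.numStates = numStates;
    }

    synchronized int stateFor(String docClass) {
        // Sticky assignment: the first doc of a class picks a state
        // round-robin; later docs of that class reuse the same state.
        return affinity.computeIfAbsent(docClass, k -> next++ % numStates);
    }
}
```

This also illustrates Mike's point that affinity needs no "revert" step: after a flush the mapping can simply keep pointing at the (now clean-slate) state, or be dropped and rebuilt.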

bq. I'm just not sure we still need FC's global waiting during flush, that'd 
seem to go away because the RAM usage tracking is in DW.

We shouldn't do global waiting anymore -- this is what's great about DWPT.

bq. However once the affinity DWPT flush completed, we'd need logic to revert 
back to the original?

I don't think so?  I mean a DWPT post-flush is a clean slate.  Some other 
thread/doc-class can stick to it.

{quote}
I think the 5% model of LUCENE-2573 may typically yield flushing that occurs in
near intervals of each other, ie, it's going to slow down the aggregate
indexing if they're flushing on top of each other. Maybe we should start at 60%,
then step by multiples of 40% divided by (maxThreadStates - 1)? Ideally we'd
statistically optimize the flush interval per machine, eg, SSDs and RAM disks
will likely require only a small flush percentage interval.
{quote}

Yeah we'll have to run tests to try to gauge the best "default" policy.  And 
you're right that it'll depend on the relative strength of IO vs CPU on the 
machine.  Fast IO system means we can flush "later".
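Under the 60% + 40% / (maxThreadStates - 1) proposal from the quote, the per-state thresholds would spread out like this (a hypothetical sketch; the class and method names are not from Lucene):

```java
// Hypothetical sketch of the staggered-flush idea: rather than every DWPT
// flushing at the same RAM threshold, thresholds are spread from 60% of the
// budget up to 100%, so flushes don't all land on top of each other.
class FlushThresholds {
    static double[] staggered(int maxThreadStates) {
        double[] t = new double[maxThreadStates];
        for (int i = 0; i < maxThreadStates; i++) {
            // 60% + i * 40% / (maxThreadStates - 1); the last state
            // flushes at 100% of its share of the RAM budget.
            t[i] = maxThreadStates == 1
                ? 1.0
                : 0.60 + i * 0.40 / (maxThreadStates - 1);
        }
        return t;
    }
}
```

For example, with 5 thread states this yields thresholds of 60%, 70%, 80%, 90%, and 100%; whether 60% is the right starting point is exactly what the benchmarking above would have to determine.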

> Per thread DocumentsWriters that write their own private segments
> -----------------------------------------------------------------
>
>                 Key: LUCENE-2324
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2324
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Michael Busch
>            Assignee: Michael Busch
>            Priority: Minor
>             Fix For: Realtime Branch
>
>         Attachments: LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, 
> LUCENE-2324-SMALL.patch, lucene-2324.patch, lucene-2324.patch, 
> LUCENE-2324.patch, test.out, test.out
>
>
> See LUCENE-2293 for motivation and more details.
> I'm copying here Mike's summary he posted on 2293:
> Change the approach for how we buffer in RAM to a more isolated
> approach, whereby IW has N fully independent RAM segments
> in-process and when a doc needs to be indexed it's added to one of
> them. Each segment would also write its own doc stores and
> "normal" segment merging (not the inefficient merge we now do on
> flush) would merge them. This should be a good simplification in
> the chain (eg maybe we can remove the *PerThread classes). The
> segments can flush independently, letting us make much better
> concurrent use of IO & CPU.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

