[ https://issues.apache.org/jira/browse/LUCENE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12984780#action_12984780 ]
Jason Rutherglen commented on LUCENE-2324:
------------------------------------------

bq. How would we handle wraparound (in a concurrent way)? Also, 16 fold increase in RAM usage is not cheap!

Instantiate a new array, and the next reader's seqid is set to 0, while a second value is incremented to guarantee uniqueness of the reader. A short's 2 bytes * 500,000 docs (in the RAM buffer, is that a lot?) = ~1 MB? Right, it'll eat into the RAM buffer, but it's not extreme (or is it?!). A rough sketch of the wraparound idea is appended below, after the quoted issue details.

bq. we drop the BV, let GC recycle it, allocate a new BV (same size), copy in nearly the same bits that we just discarded, set a few more bits.

Right, that's probably our best option for DF, BV, norms, and any other similar array. I did propose that a while back (LUCENE-1574), though I don't think you were a big fan of it, and I'm not sure why. Would this also be used for DW's deletes? Since the sequence-ids are 16 times the minimum size, the pooled approach would allow the equivalent of 16 BVs! I think the paged approach will have issues in a low reader-latency environment, i.e., it creates overhead from all the changes, whereas an array is fast to change and fast to copy.

bq. also tracked their "state" ie what "delete gen" they were at, and then when we need a new BV for that same segment, we pull a free one, catch it up

Couldn't we simply use System.arraycopy and be done? (A sketch of that is also appended below.)

> Per thread DocumentsWriters that write their own private segments
> -----------------------------------------------------------------
>
>                 Key: LUCENE-2324
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2324
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Michael Busch
>            Assignee: Michael Busch
>            Priority: Minor
>             Fix For: Realtime Branch
>
>         Attachments: LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch,
> LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch,
> LUCENE-2324.patch, LUCENE-2324.patch, LUCENE-2324.patch, lucene-2324.patch,
> lucene-2324.patch, LUCENE-2324.patch, test.out, test.out, test.out, test.out
>
>
> See LUCENE-2293 for motivation and more details.
> I'm copying here Mike's summary he posted on 2293:
> Change the approach for how we buffer in RAM to a more isolated
> approach, whereby IW has N fully independent RAM segments
> in-process and when a doc needs to be indexed it's added to one of
> them. Each segment would also write its own doc stores and
> "normal" segment merging (not the inefficient merge we now do on
> flush) would merge them. This should be a good simplification in
> the chain (eg maybe we can remove the *PerThread classes). The
> segments can flush independently, letting us make much better
> concurrent use of IO & CPU.
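
To make the wraparound idea above concrete, here's a rough, hypothetical sketch (not actual Lucene code; the class and method names are made up) of per-doc sequence ids stored as shorts, with a generation counter that's bumped on wraparound so readers stay unique:

{code:java}
// Hypothetical sketch only, assuming per-doc sequence ids stored as shorts
// (16x the RAM of a 1-bit-per-doc BV): 2 bytes * 500,000 buffered docs ~= 1 MB.
class SequenceIds {

  private short[] seqIds;   // one short per buffered doc; 0 means "not deleted"
  private long generation;  // bumped on wraparound so (generation, seqId) stays unique
  private short nextSeqId;  // last sequence id handed out

  SequenceIds(int maxBufferedDocs) {
    seqIds = new short[maxBufferedDocs];
  }

  // Hand out the next reader sequence id; on wraparound, instantiate a new
  // array, reset the counter to 0, and increment the generation.
  synchronized short nextReaderSeqId() {
    if (nextSeqId == Short.MAX_VALUE) {
      seqIds = new short[seqIds.length];
      nextSeqId = 0;
      generation++;
    }
    return ++nextSeqId;
  }

  // Record that docID was deleted as of the given sequence id.
  synchronized void markDeleted(int docID, short seqId) {
    seqIds[docID] = seqId;
  }

  // A reader opened at readerSeqId sees the delete if it happened at or
  // before its own sequence id (within the same generation).
  synchronized boolean isDeleted(int docID, short readerSeqId) {
    short s = seqIds[docID];
    return s != 0 && s <= readerSeqId;
  }
}
{code}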
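
And for the pooled bit-vector catch-up, a minimal sketch (again hypothetical names, operating on the raw byte[] that would back a per-segment deletes BV) of pulling a free vector and catching it up with a single System.arraycopy:

{code:java}
import java.util.ArrayDeque;

// Hypothetical sketch only: a pool of recycled byte[] arrays backing a
// segment's deletes bit vector.
class DeletesBitsPool {

  private final ArrayDeque<byte[]> free = new ArrayDeque<byte[]>();

  // Pull a free array (or allocate one of the right size) and catch it up to
  // the current deletes with one System.arraycopy instead of re-setting bits.
  synchronized byte[] checkOut(byte[] currentBits) {
    byte[] bits = free.poll();
    if (bits == null || bits.length != currentBits.length) {
      bits = new byte[currentBits.length];
    }
    System.arraycopy(currentBits, 0, bits, 0, currentBits.length);
    return bits;
  }

  // Return an array to the pool once no reader references it anymore.
  synchronized void release(byte[] bits) {
    free.offer(bits);
  }
}
{code}

That keeps the catch-up cost at one bulk copy per pooled vector rather than replaying individual deletes.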