The 100 items will be written separately. After all 100 are written, they will 
*all* be made visible to consumers with a single publish() operation... so the 
consumers suddenly see a bump of 100 new items in the Q.
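The one-publish visibility described above can be sketched with a minimal, hypothetical single-producer ring buffer (this is not the actual Disruptor API; the class and field names are illustrative). The producer fills 100 slots, then advances one "published" cursor once, so a consumer's view of available items jumps from nothing to 100 in a single step:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of batch publication: entries are written slot by
// slot, but become visible to consumers only when the single published
// cursor advances past them.
class BatchPublishSketch {
    static final int SIZE = 1024;                    // ring capacity (power of two)
    final long[] ring = new long[SIZE];
    final AtomicLong published = new AtomicLong(-1); // highest visible sequence

    void writeBatch(long[] items, long startSeq) {
        // Write each item into its own slot (100 separate entries).
        for (int i = 0; i < items.length; i++) {
            ring[(int) ((startSeq + i) & (SIZE - 1))] = items[i];
        }
        // One publish for the whole batch: consumers go from seeing
        // none of these items to seeing all of them at once.
        published.set(startSeq + items.length - 1);
    }

    long availableTo() { return published.get(); }
}
```

After `writeBatch` of 100 items starting at sequence 0, `availableTo()` moves from -1 straight to 99, which is the "bump of 100 new items" a consumer observes.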
-roshan


From: Adam Meyerowitz <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Friday, June 2, 2017 at 9:14 AM
To: "[email protected]" <[email protected]>
Subject: Re: LMAX queue batch size

Hi Roshan, thanks!

Maybe an example will clarify what I'm after.

Let's say that there is a batch of 100 items to be written to the disruptor 
queue and those are to be written to the queue in one go as you mentioned.  
Does the write to the queue result in one queue entry that is actually a list 
of those 100 items?  If we looked at the read and write positions in the queue 
after the write would they only differ by 1?  Or are the 100 items written to 
the queue separately such that there will be a 100 position difference between 
the read and write positions?

Appreciate the info.

On Fri, Jun 2, 2017 at 11:52 AM, Roshan Naik <[email protected]> wrote:
That’s the batch size on the writer side.
Inserts to the DisruptorQ from writer threads are buffered; once that many items 
are buffered (or TOPOLOGY_DISRUPTOR_BATCH_TIMEOUT_MILLIS elapses) the items are 
actually written to the Q in one go.
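The writer-side buffering described above can be sketched roughly as follows (a hypothetical illustration, not Storm's actual implementation; `batchSize` and `flushIntervalMillis` stand in for TOPOLOGY_DISRUPTOR_BATCH_SIZE and TOPOLOGY_DISRUPTOR_BATCH_TIMEOUT_MILLIS):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: inserts accumulate in a local buffer and are
// flushed to the queue in one go when the buffer reaches the batch
// size or the flush interval has elapsed.
class BufferedWriterSketch {
    final int batchSize;
    final long flushIntervalMillis;
    final List<Object> buffer = new ArrayList<>();
    long lastFlushAt = System.currentTimeMillis();
    // Stands in for the downstream queue; each element is one batched write.
    final List<List<Object>> flushedBatches = new ArrayList<>();

    BufferedWriterSketch(int batchSize, long flushIntervalMillis) {
        this.batchSize = batchSize;
        this.flushIntervalMillis = flushIntervalMillis;
    }

    void insert(Object item) {
        buffer.add(item);
        if (buffer.size() >= batchSize
                || System.currentTimeMillis() - lastFlushAt >= flushIntervalMillis) {
            flush();
        }
    }

    void flush() {
        if (buffer.isEmpty()) return;
        flushedBatches.add(new ArrayList<>(buffer)); // one write to the Q
        buffer.clear();
        lastFlushAt = System.currentTimeMillis();
    }
}
```

With a batch size of 100, inserting 250 items produces two batched writes of 100, with 50 items still sitting in the buffer waiting for either the size threshold or the timeout.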

roshan

From: Adam Meyerowitz <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Thursday, June 1, 2017 at 1:47 PM
To: "[email protected]" <[email protected]>
Subject: LMAX queue batch size

Hello, can someone clarify how the disruptor queue batch size setting, 
TOPOLOGY_DISRUPTOR_BATCH_SIZE,
impacts insertion?  More specifically, is each disruptor queue entry a single 
item or a list of items up to the configured batch size?

Thanks.
