I ran a benchmark, and writing a batch of 100 small (500-byte) entries with asyncAddEntry
takes about the same time as writing a single entry: roughly 200 ms.

I will solve my problem by avoiding single-entry writes.

Thanks

Enrico Olivelli

From: Ivan Kelly [mailto:[email protected]]
Sent: Monday, September 14, 2015 15:46
To: [email protected]
Subject: Re: Fastest way to write to bookkeeper

May I suggest, before making any code changes, you measure the difference in 
MB/s between large writes and small writes? I do recall there was some 
advantage to using large entries in the past, more than 1k, but I don't 
remember what it was, and it may not still be true. A lot of code has changed 
since then.
In theory, anything greater than the MTU shouldn't give too much of a boost.
-Ivan
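Ivan's suggestion can be checked with a small harness. The sketch below is hypothetical: it times a run of writes of a given entry size and reports MB/s, with the actual BookKeeper call hidden behind a stand-in `EntryWriter` interface so the harness compiles on its own. In real code you would back `EntryWriter` with a write against an open ledger handle.

```java
public class WriteBenchmark {
    // Hypothetical stand-in for the real write path (e.g. a synchronous ledger write)
    interface EntryWriter {
        void write(byte[] entry) throws Exception;
    }

    // Writes `count` entries of `entrySize` bytes and returns throughput in MB/s
    static double measureMbPerSec(EntryWriter writer, int count, int entrySize)
            throws Exception {
        byte[] entry = new byte[entrySize];
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            writer.write(entry);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double megabytes = (double) count * entrySize / (1024 * 1024);
        return megabytes / seconds;
    }

    public static void main(String[] args) throws Exception {
        // No-op writer just to show the harness runs; swap in a real ledger write
        EntryWriter noop = entry -> { };
        System.out.printf("500-byte entries: %.1f MB/s%n",
                measureMbPerSec(noop, 10_000, 500));
        System.out.printf("4 KB entries:     %.1f MB/s%n",
                measureMbPerSec(noop, 10_000, 4096));
    }
}
```

Comparing the two reported numbers for the same total byte count is what tells you whether large entries still carry an advantage.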

On Mon, Sep 14, 2015 at 3:35 PM Flavio Junqueira 
<[email protected]<mailto:[email protected]>> wrote:
Hi Enrico,

What's the size of each entry? If they are small, say just a few bytes, then 
you're indeed better off grouping them. If they are 1k or more, then the 
benefit of grouping shouldn't be much.

About extending the API, the only disadvantage I can see of grouping writes 
into an entry rather than writing a batch of entries is that a read request 
will have to read them all. Personally, I don't much like the idea of a batch 
call because it makes the code a bit messier. You need to start a batch, add a 
bunch of stuff, flush the batch, start a new batch, add a bunch of stuff, and 
so on. With addEntry, you just invoke it every time you have a new message.

-Flavio

On 14 Sep 2015, at 05:02, Enrico Olivelli - Diennea 
<[email protected]<mailto:[email protected]>> wrote:

Hi,
What is the fastest way to write to BookKeeper a batch of entries ?

I’m using a sequence of asyncAddEntry calls, something like the code below:

final int size = entries.size(); // entries: the List<byte[]> batch to write
// pre-sized so callbacks can set by index; holds entry ids on success
final List<Long> res = new ArrayList<>(Collections.nCopies(size, (Long) null));
final AtomicReference<BKException> exception = new AtomicReference<>();
final CountDownLatch latch = new CountDownLatch(size);
for (int i = 0; i < size; i++) {
    byte[] entry = entries.get(i);
    this.out.asyncAddEntry(entry, new AsyncCallback.AddCallback() {
        @Override
        public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) {
            int index = (Integer) ctx;
            if (rc != BKException.Code.OK) {
                exception.set(BKException.create(rc));
                res.set(index, null);
                // release all waiters so await() returns early on error;
                // extra countDown calls past zero are harmless
                for (int j = 0; j < size; j++) {
                    latch.countDown();
                }
            } else {
                res.set(index, entryId);
                latch.countDown();
            }
        }
    }, i);
}
latch.await();
if (exception.get() != null) {
    throw exception.get();
}

Would it be faster to group all the entries into one “large” entry? This would 
alter application semantics, but if it is faster I will do the refactoring.
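If the records are grouped into one large entry, the reader needs a framing scheme to split them apart again. A minimal sketch of one common choice (my own framing, not a BookKeeper API): a record count followed by length-prefixed records, packed with ByteBuffer.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class EntryPacker {
    // Pack several small records into one entry:
    // [count][len0][bytes0][len1][bytes1]...
    static byte[] pack(List<byte[]> records) {
        int total = 4;
        for (byte[] r : records) {
            total += 4 + r.length;
        }
        ByteBuffer buf = ByteBuffer.allocate(total);
        buf.putInt(records.size());
        for (byte[] r : records) {
            buf.putInt(r.length);
            buf.put(r);
        }
        return buf.array();
    }

    // Split a packed entry back into the original records
    static List<byte[]> unpack(byte[] entry) {
        ByteBuffer buf = ByteBuffer.wrap(entry);
        int count = buf.getInt();
        List<byte[]> records = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            byte[] r = new byte[buf.getInt()];
            buf.get(r);
            records.add(r);
        }
        return records;
    }
}
```

The trade-off Flavio mentions applies here: a reader that wants only one record still has to fetch and unpack the whole entry.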

Can I file an issue to implement a “batchAddEntries” call that writes a batch 
of entries within the native BookKeeper client?
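Until such an API exists, the same ergonomics can come from a client-side wrapper. The sketch below is hypothetical: `AsyncAdder` and its callback only mirror the shape of LedgerHandle.asyncAddEntry so the wrapper can be shown (and exercised) without a live cluster; in real code you would call the ledger handle directly and use BKException.Code.OK for the result codes.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class BatchAdd {
    // Hypothetical mirrors of the asyncAddEntry callback shape
    interface AddCallback {
        void addComplete(int rc, long entryId, Object ctx);
    }
    interface AsyncAdder {
        void asyncAddEntry(byte[] entry, AddCallback cb, Object ctx);
    }

    static final int OK = 0; // stands in for BKException.Code.OK

    // Submit all entries, wait for every callback, return entry ids in order
    static List<Long> batchAddEntries(AsyncAdder adder, List<byte[]> entries)
            throws InterruptedException {
        final int size = entries.size();
        final List<Long> ids = Collections.synchronizedList(
                new ArrayList<>(Collections.nCopies(size, (Long) null)));
        final CountDownLatch latch = new CountDownLatch(size);
        final AtomicInteger firstError = new AtomicInteger(OK);
        for (int i = 0; i < size; i++) {
            adder.asyncAddEntry(entries.get(i), (rc, entryId, ctx) -> {
                int index = (Integer) ctx;
                if (rc == OK) {
                    ids.set(index, entryId);
                } else {
                    firstError.compareAndSet(OK, rc); // remember the first failure
                }
                latch.countDown();
            }, i);
        }
        latch.await();
        if (firstError.get() != OK) {
            throw new IllegalStateException("add failed, rc=" + firstError.get());
        }
        return ids;
    }
}
```

This submits every write before waiting, so the bookies see the whole batch in flight at once, which is where the speedup in my benchmark came from.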





Enrico Olivelli
Software Development Manager @Diennea
Tel.: (+39) 0546 066100 - Int. 925
Viale G.Marconi 30/14 - 48018 Faenza (RA)

MagNews - E-mail Marketing Solutions
http://www.magnews.it
Diennea - Digital Marketing Solutions
http://www.diennea.com


________________________________
Subscribe to our newsletter to stay up to date on digital and email marketing! 
http://www.magnews.it/newsletter/

