Slowly trying to understand it; I have to ramp up on my Scala.

When the flush/sink occurs, does it pull items off the collection one by one,
or does it do this in bulk somehow while locking the collection?
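
For my own understanding, here is a minimal sketch (my own illustration, not
Kafka's actual code) of the two styles I mean: draining one message at a time
versus swapping the batch out and writing it in bulk.

import java.util.concurrent.ConcurrentLinkedQueue
import scala.collection.mutable.ArrayBuffer

object FlushSketch {
  private val buffer = new ConcurrentLinkedQueue[String]()

  def append(msg: String): Unit = buffer.add(msg)

  // Style 1: pull items off the collection one by one; no global lock,
  // producers can keep appending while the flush walks the queue.
  def flushOneByOne(write: String => Unit): Unit = {
    var msg = buffer.poll()
    while (msg != null) {
      write(msg)
      msg = buffer.poll()
    }
  }

  // Style 2: drain the whole batch first, then write it out in bulk.
  // Only the cheap drain loop touches the shared collection; the disk
  // write happens on the private batch, outside any contention.
  def flushInBulk(writeAll: Seq[String] => Unit): Unit = {
    val batch = ArrayBuffer.empty[String]
    var msg = buffer.poll()
    while (msg != null) {
      batch += msg
      msg = buffer.poll()
    }
    if (batch.nonEmpty) writeAll(batch.toSeq)
  }
}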

On Mon, May 7, 2012 at 3:14 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:

> Ahmed,
>
> The related code is in kafka.log.*. The message to file persistence is
> inside FileMessageSet.scala.
>
> Thanks,
> Neha
>
> On Mon, May 7, 2012 at 12:12 PM, S Ahmed <sahmed1...@gmail.com> wrote:
>
> > I can barely read Scala, but I'm curious where the application performs
> > the operation of taking the in-memory log and persisting it to the
> > database, all the while accepting new log messages and removing the keys
> > of the messages that have been persisted to disk.
> >
> > I'm guessing you have used a ConcurrentHashMap where the key is a topic,
> > and once the flush timeout has been reached a background thread will
> > somehow persist and remove the keys.
> >
>
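
For context, here is a rough illustration of the mechanism I was guessing at
above: per-topic buffers in a ConcurrentHashMap, drained by a scheduled
background thread that appends each batch to a log file and fsyncs. The names
and structure are invented for illustration only; the real logic lives in
kafka.log.* / FileMessageSet.scala as Neha points out.

import java.nio.ByteBuffer
import java.nio.channels.FileChannel
import java.nio.file.{Paths, StandardOpenOption}
import java.util.concurrent.{ConcurrentHashMap, ConcurrentLinkedQueue, Executors, TimeUnit}
import scala.jdk.CollectionConverters._

class BackgroundFlusher(flushIntervalMs: Long) {
  // One in-memory buffer per topic, keyed by topic name.
  private val buffers = new ConcurrentHashMap[String, ConcurrentLinkedQueue[Array[Byte]]]()
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  def append(topic: String, message: Array[Byte]): Unit =
    buffers.computeIfAbsent(topic, _ => new ConcurrentLinkedQueue[Array[Byte]]()).add(message)

  // Kick off the periodic flush, analogous to a flush-timeout setting.
  def start(): Unit =
    scheduler.scheduleAtFixedRate(() => flushAll(), flushIntervalMs, flushIntervalMs, TimeUnit.MILLISECONDS)

  // Drain each topic's buffer and append the batch to that topic's log file,
  // then force it to disk, roughly the shape of a channel-backed flush.
  private def flushAll(): Unit =
    buffers.asScala.foreach { case (topic, queue) =>
      val channel = FileChannel.open(
        Paths.get(s"$topic.log"),
        StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)
      try {
        var msg = queue.poll()
        while (msg != null) {
          channel.write(ByteBuffer.wrap(msg))
          msg = queue.poll()
        }
        channel.force(true)
      } finally channel.close()
    }
}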
