Hey David,

The commit always happens at a "safe point", when the local portion of the
processing topology has fully processed a set of inputs. The frequency is
controlled by the property commit.interval.ms.
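As a quick illustration, commit.interval.ms is set like any other config property when building the Streams app. A minimal sketch using plain java.util.Properties (the application id and broker address below are placeholders, not from this thread):

```java
import java.util.Properties;

public class CommitIntervalExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("application.id", "my-streams-app");      // hypothetical app id
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        // Commit offsets/state at safe points roughly every second
        props.put("commit.interval.ms", "1000");
        System.out.println(props.getProperty("commit.interval.ms"));
    }
}
```

The Properties object would then be passed to the Streams constructor; lowering the interval trades commit overhead for less reprocessing after a failure.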

-Jay

On Fri, Mar 11, 2016 at 9:28 AM, David Buschman <david.busch...@timeli.io>
wrote:

> @Jay, I currently use reactive-kafka for my Kafka sources and sinks in my
> stream processing apps. I was interested to see if this new stream API
> would make that setup easier/simpler/better in the future when it becomes
> available.
>
> How does the Streams API handle offset commits? Since you are
> processing "1-at-a-time", is commit handling automatic at the
> beginning/end of the processing, or can we specify where in the
> processing an offset commit happens?
>
> Thanks,
>     DaVe.
>
> David Buschman
> d...@timeli.io
>
>
>
> > On Mar 11, 2016, at 7:21 AM, Dick Davies <d...@hellooperator.net> wrote:
> >
> > Nice - I've read posts on the idea of a database as the 'now' view of
> > a stream of updates; it's a very powerful concept.
> >
> > Reminds me of Rich Hickey's talk on Datomic, if anyone's seen that.
> >
> >
> >
> > On 10 March 2016 at 21:26, Jay Kreps <j...@confluent.io> wrote:
> >> Hey all,
> >>
> >> Lots of people have probably seen the ongoing work on Kafka Streams.
> >> There is no real way to design a system like this in a vacuum, so we
> >> put up a blog, some snapshot docs, and something you can download
> >> and use easily to get feedback:
> >>
> >>
> >> http://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple
> >>
> >> We'd love comments or thoughts from anyone...
> >>
> >> -Jay
>
>