Re: Using Kafka Stream in the cluster of kafka on one or multiple docker-machine/s

2017-03-29 Thread Mina Aslani
Hi, do we have an example of a container with an instance of the jar file, by any chance? I am wondering whether I should have a container of headless Java or a container of Kafka. And after I have the container running, should I run java -cp ... in my container, same as

working of Kafka quota for clients and users

2017-03-29 Thread Archie
Hi, I am pretty new to Kafka and want to understand how the quota system works. So far I have been following the document here. I have been able to set the quotas (produce and consume) for new clients using the following command

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Jon Yeargers
I remain more than mystified by the workings of the StateStore. I tried making aggregations with a 1-minute window, 10-second advance and a _12 hour_ retention (which is longer than the retention.ms of the topic). I still couldn't get more than a 15% hit rate on the StateStore. Are there

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Matthias J. Sax
It's based on "stream time", i.e., the internally tracked progress based on the timestamps returned by the TimestampExtractor. -Matthias On 3/29/17 12:52 PM, Jon Yeargers wrote: > So '.until()' is based on clock time / elapsed time (i.e., record age) / > something else? > > The fact that I'm seeing lots of
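A rough sketch of the "stream time" idea Matthias describes, in plain Java with no Kafka dependencies (the class and method names here are illustrative, not Kafka's API): stream time only moves forward with the maximum observed record timestamp, and a window becomes droppable once stream time passes the window's end plus the retention set via TimeWindows.until(...).

```java
// Illustrative model of "stream time" and window retention; not Kafka's code.
public class StreamTimeSketch {
    long streamTime = Long.MIN_VALUE;

    // Advance stream time as records arrive (mimics timestamps coming from a
    // TimestampExtractor). Late records do NOT move stream time backwards.
    void observe(long recordTimestamp) {
        streamTime = Math.max(streamTime, recordTimestamp);
    }

    // A window ending at windowEnd is considered expired once stream time has
    // passed windowEnd + retention - at that point fetch() can miss it.
    boolean isExpired(long windowEnd, long retentionMs) {
        return streamTime > windowEnd + retentionMs;
    }

    public static void main(String[] args) {
        StreamTimeSketch st = new StreamTimeSketch();
        st.observe(1_000);
        st.observe(5_000);
        st.observe(3_000); // late record: stream time stays at 5000
        System.out.println(st.streamTime);              // 5000
        System.out.println(st.isExpired(1_000, 2_000)); // true: 5000 > 3000
        System.out.println(st.isExpired(4_000, 2_000)); // false: 5000 <= 6000
    }
}
```

This is also why wall-clock waiting is irrelevant: if no new records arrive, stream time does not advance and nothing expires.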

weird SerializationException when consumer is fetching and parsing record in streams application

2017-03-29 Thread Sachin Mittal
Hi, this is the first time we are getting this weird exception. After this the streams caches. The only workaround is to manually seek and commit the offset to a greater number, and we need this manual intervention again and again. Any idea what is causing it and how we can circumvent it? Note

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Jon Yeargers
So '.until()' is based on clock time / elapsed time (i.e., record age) / something else? The fact that I'm seeing lots of records come through that can't be found in the Store - are these 'old' and already expired? Going forward - it would be useful to have different forms of '.until()' so one could

Re: ThoughtWorks Tech Radar: Assess Kafka Streams

2017-03-29 Thread Eno Thereska
Thanks for the heads up, Jan! Eno > On 29 Mar 2017, at 19:08, Jan Filipiak wrote: > > Regardless of how useful you find the tech radar. > > Well deserved! Even though we all here agree that trial or adopt is within reach > >

ThoughtWorks Tech Radar: Assess Kafka Streams

2017-03-29 Thread Jan Filipiak
Regardless of how useful you find the tech radar: well deserved! Even though we all here agree that trial or adopt is within reach. https://www.thoughtworks.com/radar/platforms/kafka-streams Best, Jan

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Damian Guy
Jon, You should be able to query anything that has not expired, i.e., based on TimeWindows.until(..). Thanks, Damian On Wed, 29 Mar 2017 at 17:24 Jon Yeargers wrote: > To be a bit more specific: > > If I call this: KTable kt = >

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Jon Yeargers
To be a bit more specific: if I call this: KTable kt = sourceStream.groupByKey().reduce(..., "somekeystore"); and then call this: kt.forEach() -> ... can I assume that everything that comes out will be available in "somekeystore"? If not, what subset should I expect to
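A minimal sketch of the KTable semantics being asked about, modelled with a plain Java map rather than Kafka's API (all names here are illustrative): the store backing a reduce() holds the latest reduced value per key, and every update emitted downstream - what a forEach would observe - has a corresponding entry in the store. What this toy model deliberately ignores is windowing and retention, which is exactly where real lookups can start to miss (expired windows).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of sourceStream.groupByKey().reduce(...) with a named store.
// Not Kafka's API - just the per-key "latest reduced value" contract.
public class ReduceStoreSketch {
    final Map<String, Long> store = new HashMap<>();                 // "somekeystore"
    final List<Map.Entry<String, Long>> updates = new ArrayList<>(); // what forEach sees

    // Each incoming record updates the reduced value and emits an update.
    void process(String key, long value) {
        long reduced = store.merge(key, value, Long::sum); // reducer = sum
        updates.add(Map.entry(key, reduced));
    }

    public static void main(String[] args) {
        ReduceStoreSketch kt = new ReduceStoreSketch();
        kt.process("a", 1);
        kt.process("a", 2);
        kt.process("b", 5);
        // Every key seen in an update is queryable in the store...
        for (Map.Entry<String, Long> u : kt.updates) {
            System.out.println(u.getKey() + " in store: " + kt.store.containsKey(u.getKey()));
        }
        // ...but the store returns only the LATEST value, not each update.
        System.out.println(kt.store.get("a")); // 3
    }
}
```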

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Jon Yeargers
But if a key shows up in a KTable->forEach should it be available in the StateStore (from the KTable)? On Wed, Mar 29, 2017 at 6:31 AM, Michael Noll wrote: > Jon, > > there's a related example, using a window store and a key-value store, at >

Kafka running but not listening to port 9092

2017-03-29 Thread Rafael Telles
Hello there! I have two clusters of Kafka brokers; one of them (with 15 brokers + 3 ZooKeeper servers) became sick (a lot of under-replicated partitions, throwing a lot of NotEnoughReplicasExceptions). I logged in to some of the brokers that others couldn't connect to, and I found out that they were

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Michael Noll
Jon, there's a related example, using a window store and a key-value store, at https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/test/java/io/confluent/examples/streams/ValidateStateWithInteractiveQueriesLambdaIntegrationTest.java (this is for Confluent 3.2 / Kafka 0.10.2).

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Jon Yeargers
I'm only running one instance (locally) to keep things simple. Reduction: KTable hourAggStore = sourceStream.groupByKey().reduce(rowReducer, TimeWindows.of(65 * 60 * 1000L).advanceBy(5 * 60 * 1000).until(70 * 60 * 1000L), "HourAggStore");
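With a hopping window like the one above (65-minute size, 5-minute advance), every record lands in several overlapping windows. A plain-Java sketch of that assignment rule - mirroring the semantics of TimeWindows.of(size).advanceBy(advance), but not Kafka's actual code - shows each record belongs to size/advance = 13 windows:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative hopping-window assignment: a timestamp ts falls into every
// window whose start is a multiple of the advance and satisfies
// start <= ts < start + size. Not Kafka's implementation.
public class HoppingWindows {
    static List<Long> windowStartsFor(long ts, long sizeMs, long advanceMs) {
        List<Long> starts = new ArrayList<>();
        long last = (ts / advanceMs) * advanceMs; // latest window containing ts
        for (long s = last; s > ts - sizeMs && s >= 0; s -= advanceMs) {
            starts.add(s);
        }
        return starts;
    }

    public static void main(String[] args) {
        long size = 65 * 60 * 1000L;    // 65-minute window
        long advance = 5 * 60 * 1000L;  // 5-minute advance
        long ts = 2 * 60 * 60 * 1000L;  // a record two hours into the stream
        System.out.println(windowStartsFor(ts, size, advance).size()); // 13
    }
}
```

One hedged observation on the config above: until(70 * 60 * 1000L) keeps each window for only about 5 minutes past the point where the 65-minute window closes (retention is barely larger than the window size), which could plausibly contribute to lookups missing recently-closed windows.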

Re: Kafka broker went down with "No space left on device" when there is a lot more

2017-03-29 Thread Manikumar
The Apache mailing list doesn't allow attachments. Can you paste the error message here? Also check for free inodes on the disk: https://www.ivankuznetsov.com/2010/02/no-space-left-on-device-running-out-of-inodes.html On Wed, Mar 29, 2017 at 5:48 PM, Nomar Morado wrote: > Two of

Kafka broker went down with "No space left on device" when there is a lot more

2017-03-29 Thread Nomar Morado
Two of my brokers went down today with the same error - see attachment for details. The device, though, is 55% free, which is over 100 GB of space. The entire set of Kafka logs is only 1.3 GB. Any thoughts on what might be tripping this one? I am using Kafka 0.9.0.1. Thanks, Nomar

Re: Understanding ReadOnlyWindowStore.fetch

2017-03-29 Thread Damian Guy
Hi Jon, If you are able to get a handle on the store, i.e., via KafkaStreams.store(...) and call fetch without any exceptions, then the store is available. The time params to fetch are the boundaries to search for windows for the given key. They relate to the start time of the window, so if you
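Damian's point about the fetch parameters can be sketched in plain Java (illustrative only, not Kafka's implementation): the two time arguments to fetch(key, timeFrom, timeTo) bound the window *start* times, so a window that starts inside the range is returned even if it ends outside it.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of WindowStore.fetch(key, timeFrom, timeTo) selection:
// the bounds apply to window START times, inclusive on both ends.
public class FetchBoundsSketch {
    static List<Long> fetch(List<Long> windowStarts, long timeFrom, long timeTo) {
        List<Long> hits = new ArrayList<>();
        for (long start : windowStarts) {
            if (start >= timeFrom && start <= timeTo) {
                hits.add(start);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Windows starting every 5 minutes (ms timestamps).
        List<Long> starts = List.of(0L, 300_000L, 600_000L, 900_000L);
        System.out.println(fetch(starts, 300_000L, 600_000L)); // [300000, 600000]
    }
}
```

So a fetch over a narrow time range can legitimately return a window whose data extends well past timeTo - the range selects which windows, not which records.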