RE: A question about kafka

2016-10-23 Thread ZHU Hua B
Hi,


Could anybody help answer the question below: can the compression type be 
modified through the command "bin/kafka-console-producer.sh --producer.config 
..."? Thanks!





Best Regards

Johnny


-Original Message-
From: ZHU Hua B 
Sent: October 17, 2016 14:52
To: users@kafka.apache.org; Radoslaw Gruchalski
Subject: RE: A question about kafka

Hi,


Thanks for your reply!

OK, I got it. Also, there is a parameter named compression.type in 
config/producer.properties, which I think has the same usage as 
"--compression-codec". I modified compression.type in 
config/producer.properties first, then ran the console producer with the 
"--producer.config" option and sent a message, but the compression codec did 
not change according to the modification. Do you know the reason? Thanks!


# bin/kafka-console-producer.sh
Read data from standard input and publish it to Kafka.
Option                                   Description
------                                   -----------
--producer.config                        Producer config properties file. Note
                                           that [producer-property] takes
                                           precedence over this config.
# bin/kafka-console-producer.sh --producer.config config/producer.properties 
--broker-list localhost:9092 --topic test
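For reference: reading ConsoleProducer.scala for 0.10.0 suggests the tool loads the --producer.config file first and then unconditionally sets compression.type from --compression-codec (which defaults to 'none' when the flag is absent), so the value in the file is overwritten. A sketch, with illustrative paths, of a config file plus an invocation where the codec does take effect:

```shell
# Write a producer config that requests snappy (path is illustrative).
cat > /tmp/producer-snappy.properties <<'EOF'
compression.type=snappy
EOF
# Because ConsoleProducer overwrites compression.type from --compression-codec,
# pass the codec on the command line too (command shown, not executed here):
#   bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test \
#     --compression-codec snappy --producer.config /tmp/producer-snappy.properties
cat /tmp/producer-snappy.properties
```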



Best Regards

Johnny


-Original Message-
From: Hans Jespersen [mailto:h...@confluent.io]
Sent: October 17, 2016 14:29
To: users@kafka.apache.org; Radoslaw Gruchalski
Subject: RE: A question about kafka

Because the producer-property option is used to set producer properties other 
than the compression type.
//h...@confluent.io

-------- Original message --------
From: ZHU Hua B
Date: 10/16/16 11:20 PM (GMT-08:00)
To: Radoslaw Gruchalski, users@kafka.apache.org
Subject: RE: A question about kafka

Hi,


Thanks for your reply!

If the console producer only allows the compression codec via its own 
argument, why can we find the option --producer-property defined in 
ConsoleProducer.scala? And why does it appear in the usage output when running 
the console producer? The version we use is Kafka 0.10.0.0. Thanks!


# ./kafka-console-producer.sh
Read data from standard input and publish it to Kafka.
Option                                   Description
------                                   -----------
--compression-codec [compression-codec]  The compression codec: either 'none',
                                           'gzip', 'snappy', or 'lz4'. If
                                           specified without value, then it
                                           defaults to 'gzip'.
--producer-property                      A mechanism to pass user-defined
                                           properties in the form key=value to
                                           the producer.
--producer.config                        Producer config properties file. Note
                                           that [producer-property] takes
                                           precedence over this config.
--property                               A mechanism to pass user-defined
                                           properties in the form key=value to
                                           the message reader. This allows
                                           custom configuration for a user-
                                           defined message reader.



Best Regards

Johnny

From: Radoslaw Gruchalski [mailto:ra...@gruchalski.com]
Sent: October 17, 2016 14:02
To: ZHU Hua B; users@kafka.apache.org
Subject: RE: A question about kafka

Hi,

I believe the answer is in the code. This is where the --compression-codec is 
processed:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsoleProducer.scala#L143
and this is --producer-property:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsoleProducer.scala#L234

The usage is here:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsoleProducer.scala#L114

The answer is: the console producer allows setting the compression codec only 
via the --compression-codec argument.

–
Best regards,

Radek Gruchalski

ra...@gruchalski.com


On October 17, 2016 at 7:46:41 AM, ZHU Hua B 
(hua.b@alcatel-lucent.com) wrote:
Hi,


Anybody could help to answer this question? Thanks!






Best Regards

Johnny

-Original Message-
From: ZHU Hua B
Sent: October 14, 2016 16:41
To: users@kafka.apache.org
Subject: [COMMERCIAL] A question about kafka

Hi,


I have a question about kafka, could you please help to have a look?

I want to send a message from the producer with the snappy compression codec. 
So I ran the command "bin/kafka-console-producer.sh --compression-codec snappy 
--broker-list localhost:9092 --topic test", and after that I checked the data 
log; compresscodec was SnappyCompressionCodec, as expected.

Then I tried another command "bin/kafka-console-producer.sh --producer-prop

RE: Mirror multi-embedded consumer's configuration

2016-10-23 Thread ZHU Hua B
Hi,


In some Kafka wiki I saw "At minimum, the mirror maker takes one or more 
consumer configurations, a producer configuration and either a whitelist or a 
blacklist", but the test failed, so I want to know whether Kafka MirrorMaker 
really supports more than one consumer configuration. Thanks!
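For what it's worth, the joptsimple error quoted below in this thread ("Found multiple arguments for option consumer.config, but you asked for only one") suggests this MirrorMaker version accepts --consumer.config exactly once; parallelism comes from --num.streams over that single config rather than from multiple config files. A sketch of such an invocation, with illustrative paths and property values:

```shell
# One consumer config shared by all mirror streams (contents are illustrative).
cat > /tmp/mm-consumer.properties <<'EOF'
bootstrap.servers=source-cluster:9092
group.id=mirror-maker-group
EOF
# Pass --consumer.config once; --num.streams 2 starts two consumer threads
# from this single config (command shown, not executed here):
#   bin/kafka-mirror-maker.sh --consumer.config /tmp/mm-consumer.properties \
#     --num.streams 2 --producer.config config/producer.properties --whitelist '.*'
grep '^group.id' /tmp/mm-consumer.properties
```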




Best Regards

Johnny


-Original Message-
From: Manikumar [mailto:manikumar.re...@gmail.com] 
Sent: October 24, 2016 13:48
To: users@kafka.apache.org
Subject: Re: Mirror multi-embedded consumer's configuration

why are you passing "consumer.config" twice?

On Mon, Oct 24, 2016 at 11:07 AM, ZHU Hua B 
wrote:

> Hi,
>
>
> The version of Kafka I used is 0.10.0.0. Thanks!
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
> -Original Message-
> From: Guozhang Wang [mailto:wangg...@gmail.com]
> Sent: October 24, 2016 12:22
> To: users@kafka.apache.org
> Subject: Re: Mirror multi-embedded consumer's configuration
>
> Which version are you using for the MM?
>
> Guozhang
>
> On Thu, Oct 20, 2016 at 10:13 PM, ZHU Hua B 
> 
> wrote:
>
> > Hi,
> >
> >
> > Anybody could help to answer below question? Thanks!
> >
> >
> >
> >
> >
> > Best Regards
> >
> > Johnny
> >
> > From: ZHU Hua B
> > Sent: October 19, 2016 16:22
> > To: 'users@kafka.apache.org'
> > Subject: Mirror multi-embedded consumer's configuration
> >
> > Hi,
> >
> >
> > I launch Kafka mirror maker with multi-embedded consumer's 
> > configuration but it failed as below. What is the meaning of "you asked 
> > for only one"? Is there an option controlling it? Thanks!
> >
> > # bin/kafka-mirror-maker.sh --consumer.config 
> > config/consumer-1.properties --consumer.config 
> > config/consumer-2.properties --num.streams 2 --producer.config
> config/producer.properties --whitelist '.*'
> > [2016-10-19 16:00:14,183] ERROR Exception when starting mirror maker.
> > (kafka.tools.MirrorMaker$)
> > joptsimple.MultipleArgumentsForOptionException: Found multiple 
> > arguments for option consumer.config, but you asked for only one
> > at joptsimple.OptionSet.valueOf(OptionSet.java:179)
> > at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:235)
> > at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
> > Exception in thread "main" java.lang.NullPointerException
> > at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:286)
> > at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
> >
> >
> >
> >
> >
> >
> > Best Regards
> >
> > Johnny
> >
> >
>
>
> --
> -- Guozhang
>


Re: Mirror multi-embedded consumer's configuration

2016-10-23 Thread Manikumar
why are you passing "consumer.config" twice?

On Mon, Oct 24, 2016 at 11:07 AM, ZHU Hua B 
wrote:

> Hi,
>
>
> The version of Kafka I used is 0.10.0.0. Thanks!
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
> -Original Message-
> From: Guozhang Wang [mailto:wangg...@gmail.com]
> Sent: October 24, 2016 12:22
> To: users@kafka.apache.org
> Subject: Re: Mirror multi-embedded consumer's configuration
>
> Which version are you using for the MM?
>
> Guozhang
>
> On Thu, Oct 20, 2016 at 10:13 PM, ZHU Hua B 
> wrote:
>
> > Hi,
> >
> >
> > Anybody could help to answer below question? Thanks!
> >
> >
> >
> >
> >
> > Best Regards
> >
> > Johnny
> >
> > From: ZHU Hua B
> > Sent: October 19, 2016 16:22
> > To: 'users@kafka.apache.org'
> > Subject: Mirror multi-embedded consumer's configuration
> >
> > Hi,
> >
> >
> > I launch Kafka mirror maker with multi-embedded consumer's
> > configuration but it failed as below. What is the meaning of "you asked
> > for only one"? Is there an option controlling it? Thanks!
> >
> > # bin/kafka-mirror-maker.sh --consumer.config
> > config/consumer-1.properties --consumer.config
> > config/consumer-2.properties --num.streams 2 --producer.config
> config/producer.properties --whitelist '.*'
> > [2016-10-19 16:00:14,183] ERROR Exception when starting mirror maker.
> > (kafka.tools.MirrorMaker$)
> > joptsimple.MultipleArgumentsForOptionException: Found multiple
> > arguments for option consumer.config, but you asked for only one
> > at joptsimple.OptionSet.valueOf(OptionSet.java:179)
> > at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:235)
> > at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
> > Exception in thread "main" java.lang.NullPointerException
> > at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:286)
> > at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
> >
> >
> >
> >
> >
> >
> > Best Regards
> >
> > Johnny
> >
> >
>
>
> --
> -- Guozhang
>


RE: Mirror multi-embedded consumer's configuration

2016-10-23 Thread ZHU Hua B
Hi,


The version of Kafka I used is 0.10.0.0. Thanks!






Best Regards

Johnny

-Original Message-
From: Guozhang Wang [mailto:wangg...@gmail.com] 
Sent: October 24, 2016 12:22
To: users@kafka.apache.org
Subject: Re: Mirror multi-embedded consumer's configuration

Which version are you using for the MM?

Guozhang

On Thu, Oct 20, 2016 at 10:13 PM, ZHU Hua B 
wrote:

> Hi,
>
>
> Anybody could help to answer below question? Thanks!
>
>
>
>
>
> Best Regards
>
> Johnny
>
> From: ZHU Hua B
> Sent: October 19, 2016 16:22
> To: 'users@kafka.apache.org'
> Subject: Mirror multi-embedded consumer's configuration
>
> Hi,
>
>
> I launch Kafka mirror maker with multi-embedded consumer's 
> configuration but it failed as below. What is the meaning of "you asked 
> for only one"? Is there an option controlling it? Thanks!
>
> # bin/kafka-mirror-maker.sh --consumer.config 
> config/consumer-1.properties --consumer.config 
> config/consumer-2.properties --num.streams 2 --producer.config 
> config/producer.properties --whitelist '.*'
> [2016-10-19 16:00:14,183] ERROR Exception when starting mirror maker.
> (kafka.tools.MirrorMaker$)
> joptsimple.MultipleArgumentsForOptionException: Found multiple 
> arguments for option consumer.config, but you asked for only one
> at joptsimple.OptionSet.valueOf(OptionSet.java:179)
> at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:235)
> at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
> Exception in thread "main" java.lang.NullPointerException
> at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:286)
> at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
>


--
-- Guozhang


Re: Exception when accessing partition, offset and timestamp in processor class

2016-10-23 Thread saiprasad mishra
Sorry for the repeated email.

I was expecting it to always work when accessed from the process() method, as
this corresponds to processing each Kafka message/record.
I understand the IllegalStateException by the time punctuate() is called, as
punctuation is already batched by time interval.

Regards
Sai

On Sun, Oct 23, 2016 at 9:18 PM, saiprasad mishra  wrote:

> Hi
>
> This is with my streams app on Kafka 0.10.1.0.
>
> My flow looks something like below
>
> source topic stream -> filter for null value ->map to make it keyed by id
> ->custom processor to mystore -> to another topic -> ktable
>
> I am hitting the below type of exception in a custom processor class if I
> try to access offset() or partition() or timestamp() from the
> ProcessorContext in the process() method. I was hoping it would return the
> partition and offset for the enclosing topic (in this case the source
> topic) it is consuming from, or -1, based on the API docs.
>
> It looks like it is only accessible in certain cases. Is it getting lost in
> the transformation phases?
>
> The same issue happens if I try to access them in the punctuate() method,
> but somewhere I saw that it might not work in punctuate(). Any reason for
> this, or any link describing it, would be helpful.
>
>
> 
>
> java.lang.IllegalStateException: This should not happen as offset()
> should only be called while a record is processed
> at org.apache.kafka.streams.processor.internals.
> ProcessorContextImpl.offset(ProcessorContextImpl.java:181)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at com.sai.repo.MyStore.process(MyStore.java:72) ~[classes!/:?]
> at com.sai.repo.MyStore.process(MyStore.java:39) ~[classes!/:?]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at org.apache.kafka.streams.processor.internals.
> ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at org.apache.kafka.streams.kstream.internals.KStreamMap$
> KStreamMapProcessor.process(KStreamMap.java:43)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at org.apache.kafka.streams.processor.internals.
> ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at org.apache.kafka.streams.kstream.internals.KStreamFilter$
> KStreamFilterProcessor.process(KStreamFilter.java:44)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at org.apache.kafka.streams.processor.internals.
> ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at org.apache.kafka.streams.processor.internals.
> SourceNode.process(SourceNode.java:66) ~[kafka-streams-0.10.1.0.jar!/:?]
> at org.apache.kafka.streams.processor.internals.
> StreamTask.process(StreamTask.java:181) ~[kafka-streams-0.10.1.0.jar!/:?]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436)
> ~[kafka-streams-0.10.1.0.jar!/:?]
> at org.apache.kafka.streams.processor.internals.
> StreamThread.run(StreamThread.java:242) [kafka-streams-0.10.1.0.jar!/:?]
> =
>
>
> Regards
> Sai
>


Re: Kafka Streaming

2016-10-23 Thread Sachin Mittal
You can build librocksdbjni locally to fix it.
I did that in my case.

It is a bit tricky and you need MS Visual Studio 2015.

I suggest use the following link:
http://mail-archives.apache.org/mod_mbox/kafka-users/201608.mbox/%3CCAHoiPjweo-xSj3TiodcDVf4wNnnJ8u6PcwWDPF7LT5ps%2BxQ3eA%40mail.gmail.com%3E

A few points to remember:
1. download CMake for Windows
2. make sure to check out the 4.8.fb branch
3. make sure to merge this PR into your local branch

https://github.com/facebook/rocksdb/pull/1223/files

Follow the instructions in the mail and this

https://github.com/facebook/rocksdb/blob/v4.8/CMakeLists.txt

You are good to go. I suggest the Kafka binary include this DLL in future
releases.
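The steps above can be sketched, pseudocode-style, as the following command sequence (the branch, PR number, and generator name are as described in this thread; the exact build invocation is not verified here):

```
git clone https://github.com/facebook/rocksdb.git
cd rocksdb
git checkout 4.8.fb
# fetch and merge https://github.com/facebook/rocksdb/pull/1223 locally
mkdir build && cd build
cmake -G "Visual Studio 14 2015 Win64" ..
# then build the generated solution with Visual Studio 2015 / MSBuild
```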

Thanks
Sachin



On Fri, Oct 21, 2016 at 3:40 AM, Michael Noll  wrote:

> I suspect you are running Kafka 0.10.0.x on Windows?  If so, this is a
> known issue that is fixed in Kafka 0.10.1 that was just released today.
>
> Also: which examples are you referring to?  And, to confirm: which git
> branch / Kafka version / OS in case my guess above was wrong.
>
>
> On Thursday, October 20, 2016, Mohit Anchlia 
> wrote:
>
> > I am trying to run the examples from git. While running the wordcount
> > example I see this error:
> >
> > Caused by: *java.lang.RuntimeException*: librocksdbjni-win64.dll was not
> > found inside JAR.
> >
> >
> > Am I expected to include this jar locally?
> >
>
>
> --
> *Michael G. Noll*
> Product Manager | Confluent
> +1 650 453 5860 | @miguno 
> Follow us: Twitter  | Blog
> 
>


Re: Mirror multi-embedded consumer's configuration

2016-10-23 Thread Guozhang Wang
Which version are you using for the MM?

Guozhang

On Thu, Oct 20, 2016 at 10:13 PM, ZHU Hua B 
wrote:

> Hi,
>
>
> Anybody could help to answer below question? Thanks!
>
>
>
>
>
> Best Regards
>
> Johnny
>
> From: ZHU Hua B
> Sent: October 19, 2016 16:22
> To: 'users@kafka.apache.org'
> Subject: Mirror multi-embedded consumer's configuration
>
> Hi,
>
>
> I launch Kafka mirror maker with multi-embedded consumer's configuration
> but it failed as below. What is the meaning of "you asked for only one"? Is
> there an option controlling it? Thanks!
>
> # bin/kafka-mirror-maker.sh --consumer.config config/consumer-1.properties
> --consumer.config config/consumer-2.properties --num.streams 2
> --producer.config config/producer.properties --whitelist '.*'
> [2016-10-19 16:00:14,183] ERROR Exception when starting mirror maker.
> (kafka.tools.MirrorMaker$)
> joptsimple.MultipleArgumentsForOptionException: Found multiple arguments
> for option consumer.config, but you asked for only one
> at joptsimple.OptionSet.valueOf(OptionSet.java:179)
> at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:235)
> at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
> Exception in thread "main" java.lang.NullPointerException
> at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:286)
> at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
>
>
>
>
>
>
> Best Regards
>
> Johnny
>
>


-- 
-- Guozhang


Exception when accessing partition, offset and timestamp in processor class

2016-10-23 Thread saiprasad mishra
Hi

This is with my streams app on Kafka 0.10.1.0.

My flow looks something like below

source topic stream -> filter for null value ->map to make it keyed by id
->custom processor to mystore -> to another topic -> ktable

I am hitting the below type of exception in a custom processor class if I
try to access offset() or partition() or timestamp() from the
ProcessorContext in the process() method. I was hoping it would return the
partition and offset for the enclosing topic (in this case the source topic)
it is consuming from, or -1, based on the API docs.

It looks like it is only accessible in certain cases. Is it getting lost in
the transformation phases?

The same issue happens if I try to access them in the punctuate() method, but
somewhere I saw that it might not work in punctuate(). Any reason for this,
or any link describing it, would be helpful.
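The restriction at play here is that ProcessorContext record metadata is only defined while a record is actually being processed. The following is a sketch of a custom Processor against the 0.10.1 Processor API (the class name is hypothetical; this needs kafka-streams on the classpath and is not runnable standalone):

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

// Sketch: read offset()/partition()/timestamp() only inside process(),
// never from punctuate(), where no single record is "current".
public class MetadataAwareProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        context.schedule(1000); // request punctuate() roughly every second
    }

    @Override
    public void process(String key, String value) {
        // Valid here: a record is currently being processed.
        System.out.printf("partition=%d offset=%d ts=%d%n",
                context.partition(), context.offset(), context.timestamp());
    }

    @Override
    public void punctuate(long streamTime) {
        // Do NOT call context.offset() etc. here: punctuation is driven by
        // stream time, not by one record, hence the IllegalStateException.
    }

    @Override
    public void close() {}
}
```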




java.lang.IllegalStateException: This should not happen as offset() should
only be called while a record is processed
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.offset(ProcessorContextImpl.java:181)
~[kafka-streams-0.10.1.0.jar!/:?]
at com.sai.repo.MyStore.process(MyStore.java:72) ~[classes!/:?]
at com.sai.repo.MyStore.process(MyStore.java:39) ~[classes!/:?]
at
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:43)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:204)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:66)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:181)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436)
~[kafka-streams-0.10.1.0.jar!/:?]
at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
[kafka-streams-0.10.1.0.jar!/:?]
=


Regards
Sai


Re: kafka streaming rocks db lock bug?

2016-10-23 Thread Ara Ebrahimi
And then this on a different node:

2016-10-23 13:43:57 INFO  StreamThread:286 - stream-thread [StreamThread-3] 
Stream thread shutdown complete
2016-10-23 13:43:57 ERROR StreamPipeline:169 - An exception has occurred
org.apache.kafka.streams.errors.StreamsException: stream-thread 
[StreamThread-3] Failed to rebalance
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:401)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:235)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while 
creating the state manager
at 
org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:72)
at 
org.apache.kafka.streams.processor.internals.StreamTask.(StreamTask.java:90)
at 
org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:622)
at 
org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:649)
at 
org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:69)
at 
org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:120)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:398)
... 1 more
Caused by: java.io.IOException: task [7_1] Failed to lock the state directory: 
/tmp/kafka-streams/argyle-streams/7_1
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.(ProcessorStateManager.java:98)
at 
org.apache.kafka.streams.processor.internals.AbstractTask.(AbstractTask.java:69)
... 13 more

Ara.

On Oct 23, 2016, at 1:24 PM, Ara Ebrahimi <ara.ebrah...@argyledata.com> wrote:

Hi,

This happens when I hammer our 5 kafka streaming nodes (each with 4 streaming 
threads) hard enough for an hour or so:

2016-10-23 13:04:17 ERROR StreamThread:324 - stream-thread [StreamThread-2] 
Failed to flush state for StreamTask 3_8:
org.apache.kafka.streams.errors.ProcessorStateException: task [3_8] Failed to 
flush state store streams-data-record-stats-avro-br-store
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:322)
at 
org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:181)
at 
org.apache.kafka.streams.processor.internals.StreamThread$4.apply(StreamThread.java:360)
at 
org.apache.kafka.streams.processor.internals.StreamThread.performOnAllTasks(StreamThread.java:322)
at 
org.apache.kafka.streams.processor.internals.StreamThread.flushAllState(StreamThread.java:357)
at 
org.apache.kafka.streams.processor.internals.StreamThread.shutdownTasksAndState(StreamThread.java:295)
at 
org.apache.kafka.streams.processor.internals.StreamThread.shutdown(StreamThread.java:262)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:245)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error 
opening store streams-data-record-stats-avro-br-store-20150516 at location 
/tmp/kafka-streams/argyle-streams/3_8/streams-data-record-stats-avro-br-store/streams-data-record-stats-avro-br-store-20150516
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:196)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:158)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore$Segment.openDB(RocksDBWindowStore.java:72)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore.getOrCreateSegment(RocksDBWindowStore.java:402)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore.putAndReturnInternalKey(RocksDBWindowStore.java:310)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore.put(RocksDBWindowStore.java:292)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:101)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:87)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:117)
at 
org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:100)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.flush(CachingWindowStore.java:118)
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:320)
... 7 more
Caused by: org.rocksdb.RocksDBExcepti

kafka streaming rocks db lock bug?

2016-10-23 Thread Ara Ebrahimi
Hi,

This happens when I hammer our 5 kafka streaming nodes (each with 4 streaming 
threads) hard enough for an hour or so:

2016-10-23 13:04:17 ERROR StreamThread:324 - stream-thread [StreamThread-2] 
Failed to flush state for StreamTask 3_8:
org.apache.kafka.streams.errors.ProcessorStateException: task [3_8] Failed to 
flush state store streams-data-record-stats-avro-br-store
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:322)
at 
org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:181)
at 
org.apache.kafka.streams.processor.internals.StreamThread$4.apply(StreamThread.java:360)
at 
org.apache.kafka.streams.processor.internals.StreamThread.performOnAllTasks(StreamThread.java:322)
at 
org.apache.kafka.streams.processor.internals.StreamThread.flushAllState(StreamThread.java:357)
at 
org.apache.kafka.streams.processor.internals.StreamThread.shutdownTasksAndState(StreamThread.java:295)
at 
org.apache.kafka.streams.processor.internals.StreamThread.shutdown(StreamThread.java:262)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:245)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error 
opening store streams-data-record-stats-avro-br-store-20150516 at location 
/tmp/kafka-streams/argyle-streams/3_8/streams-data-record-stats-avro-br-store/streams-data-record-stats-avro-br-store-20150516
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:196)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:158)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore$Segment.openDB(RocksDBWindowStore.java:72)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore.getOrCreateSegment(RocksDBWindowStore.java:402)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore.putAndReturnInternalKey(RocksDBWindowStore.java:310)
at 
org.apache.kafka.streams.state.internals.RocksDBWindowStore.put(RocksDBWindowStore.java:292)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:101)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:87)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:117)
at 
org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:100)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.flush(CachingWindowStore.java:118)
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:320)
... 7 more
Caused by: org.rocksdb.RocksDBException: IO error: lock 
/tmp/kafka-streams/argyle-streams/3_8/streams-data-record-stats-avro-br-store/streams-data-record-stats-avro-br-store-20150516/LOCK:
 No locks available
at org.rocksdb.RocksDB.open(Native Method)
at org.rocksdb.RocksDB.open(RocksDB.java:184)
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:189)
... 18 more

Some sort of a locking bug?

Note that when this happens the node stops processing anything, and the other 
nodes seem to want to pick up the load, which brings the whole streaming 
cluster to a standstill. That's very worrying. Is there a document somewhere 
describing *in detail* how failover for streaming works?

Ara.
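One mitigation worth noting for the "Failed to lock the state directory" error above, if several Streams instances (or successive runs) share a host: give each instance its own state directory instead of the shared /tmp default, so stale LOCK files cannot collide. A sketch of the relevant Streams property (the path is illustrative):

```properties
# StreamsConfig state.dir; the default is /tmp/kafka-streams
state.dir=/var/lib/kafka-streams/instance-1
```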









Re: Kafka Streaming

2016-10-23 Thread Mohit Anchlia
So if I understand correctly, I will not have this fix for another four months?
Should I just create my own example with the next version of Kafka?

On Sat, Oct 22, 2016 at 9:04 PM, Matthias J. Sax 
wrote:

>
> The current version is CP 3.0.1.
> CP 3.1 should be released in the next weeks.
>
> So CP 3.2 should be there in about 4 months (Kafka follows a time-based
> release cycle of 4 months, and CP usually aligns with Kafka releases).
>
> -Matthias
>
>
> On 10/20/16 5:10 PM, Mohit Anchlia wrote:
> > Any idea of when 3.2 is coming?
> >
> > On Thu, Oct 20, 2016 at 4:53 PM, Matthias J. Sax
> >  wrote:
> >
> > No problem. Asking questions is the purpose of mailing lists. :)
> >
> > The issue will be fixed in next version of examples branch.
> >
> > Examples branch is build with CP dependency and not with Kafka
> > dependency. CP-3.2 is not available yet; only Kafka 0.10.1.0.
> > Nevertheless, they should work with Kafka dependency, too. I never
> > tried it, but you should give it a shot...
> >
> > But you should use example master branch because of API changes
> > from 0.10.0.x to 0.10.1 (and thus, changing CP-3.1 to 0.10.1.0 will
> > not be compatible and not compile, while changing CP-3.2-SNAPSHOT
> > to 0.10.1.0 should work -- hopefully ;) )
> >
> >
> > -Matthias
> >
> > On 10/20/16 4:02 PM, Mohit Anchlia wrote:
>  So the issue I am seeing is fixed in the next version of the
>  examples branch? Can I change my pom to point to the higher
>  version of Kafka if that is the issue? Or do I need to wait
>  until the new branch is made available? Sorry, lots of questions
>  :)
> 
>  On Thu, Oct 20, 2016 at 3:56 PM, Matthias J. Sax
>   wrote:
> 
>  The branch is 0.10.0.1 and not 0.10.1.0 (sorry for so many
>  zeros and ones -- super easy to mix up)
> 
>  However, examples master branch uses CP-3.1-SNAPSHOT (ie,
>  Kafka 0.10.1.0) -- there will be a 0.10.1 examples branch,
>  after CP-3.1 was released
> 
> 
>  -Matthias
> 
>  On 10/20/16 3:48 PM, Mohit Anchlia wrote:
> >>> I just now cloned this repo. It seems to be using 10.1
> >>>
> >>> https://github.com/confluentinc/examples and running
> >>> examples in
> >>> https://github.com/confluentinc/examples/tree/kafka-0.10.0.1-cp-3.0.1/kafka-streams
> >>>
> >>> On Thu, Oct 20, 2016 at 3:10 PM, Michael Noll
> >>>  wrote:
> >>>
>  I suspect you are running Kafka 0.10.0.x on Windows?
>  If so, this is a known issue that is fixed in Kafka
>  0.10.1 that was just released today.
> 
>  Also: which examples are you referring to?  And, to
>  confirm: which git branch / Kafka version / OS in
>  case my guess above was wrong.
> 
> 
>  On Thursday, October 20, 2016, Mohit Anchlia
>   wrote:
> 
> > I am trying to run the examples from git. While
> > running the wordcount example I see this error:
> >
> > Caused by: *java.lang.RuntimeException*:
> > librocksdbjni-win64.dll was not found inside JAR.
> >
> >
> > Am I expected to include this jar locally?
> >
> 
> 
>  -- *Michael G. Noll* Product Manager | Confluent +1
>  650 453 5860 | @miguno 
>  Follow us: Twitter 
>  | Blog 
> 
> >>>
> >
> 
> >>
> >
>


Consumer error : This consumer has already been closed

2016-10-23 Thread Koen Vantomme
Hello,

I'm creating a simple consumer in Java; the first time I run the consumer
it works fine.
I stop the application. When I want to rerun the consumer, I get the error
message "This consumer has already been closed".

Any suggestions ?
Regards,
Koen

2016-10-23 17:17:34,261 [main] INFO   AppInfoParser - Kafka commitId :
23c69d62a0cabf06
Exception in thread "main" java.lang.IllegalStateException: This consumer
has already been closed.
at
org.apache.kafka.clients.consumer.KafkaConsumer.ensureNotClosed(KafkaConsumer.java:1310)
at
org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1321)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:844)


The code :

String topic ="testmetrics";
String group ="cg1";

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", group);
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("session.timeout.ms", "3");
props.put("key.deserializer",
"org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
"org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);


consumer.subscribe(Arrays.asList(topic));
System.out.println("Subscribed to topic " + topic);
int i = 0;

while (true) {
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records)
{
System.out.printf("offset = %d, key = %s, value = %s\n",
record.offset(), record.key(), record.value());
}
c
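The listing above is cut off in the archive, but the exception itself means poll() was invoked on a KafkaConsumer instance after close() had already run. A sketch of the usual remedy, assuming the same props and topic as above (one consumer instance per run, closed via try-with-resources; needs kafka-clients on the classpath and is not runnable standalone):

```java
// Sketch: create a fresh KafkaConsumer per run; never reuse a closed one.
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Arrays.asList(topic));
    while (running) {              // flip 'running' to stop the loop cleanly
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset = %d, key = %s, value = %s%n",
                    record.offset(), record.key(), record.value());
        }
    }
} // close() runs exactly once here; a rerun must construct a new consumer
```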