Kafka Error while doing SSL authentication on 0.9.0

2016-03-09 Thread Ranjan Baral
I am getting the warning below while producing from a client that connects to a broker configured for SSL-based authentication. [2016-03-10 07:09:13,018] WARN The configuration ssl.keystore.location = /etc/pki/tls/certs/keystore-hpfs.jks was supplied but isn't a known config.
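For context, a minimal 0.9 producer SSL setup looks roughly like the sketch below. The broker address, truststore path, and passwords are placeholders (only the keystore path is taken from the log line above), and one thing worth double-checking when this warning appears is that security.protocol is actually set to SSL.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9093");             // placeholder SSL listener
    props.put("security.protocol", "SSL");                      // enables the SSL channel
    props.put("ssl.truststore.location", "/etc/pki/tls/certs/truststore.jks"); // placeholder
    props.put("ssl.truststore.password", "changeit");           // placeholder
    props.put("ssl.keystore.location", "/etc/pki/tls/certs/keystore-hpfs.jks"); // from the log above
    props.put("ssl.keystore.password", "changeit");             // placeholder
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    Producer<String, String> producer = new KafkaProducer<>(props);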

Re: Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Rajiv Kurian
Thanks! That worked. So I see that the host I am not getting messages for has a massive lag on the 0th partition (the only one I send messages on). The other 19 groups are all caught up, which explains why they have no issues. The lag is just increasing with time, which confirms my

Re: Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Manikumar Reddy
We need to pass the "--new-consumer" option to the kafka-consumer-groups.sh command to use the new consumer. sh kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list --new-consumer On Thu, Mar 10, 2016 at 12:02 PM, Rajiv Kurian wrote: > Hi Guozhang, > > I tried using
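For completeness, the same 0.9 tool can also describe a single group to show per-partition offsets and lag; the group name below is a placeholder and the exact output columns may vary between builds.

    sh kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --list
    sh kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --describe --group my-group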

Re: Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Rajiv Kurian
Hi Guozhang, I tried using the kafka-consumer-groups.sh --list command and it says I have no consumer groups set up at all. Yet I am receiving data on 19 out of 20 consumer processes (each with their own topic and consumer group). Here is my full kafka config as printed when my process started

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Guozhang Wang
In order to do anything meaningful with the consumer itself in the rebalance callback (e.g. commit offsets), you would need to hold on to the consumer reference; admittedly it sounds a bit awkward, but by design we chose not to enforce it in the interface itself. Guozhang On Wed, Mar 9, 2016 at 3:39

Re: Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Guozhang Wang
Rajiv, In the new Java consumer you used, the ZK dependency has been removed, and hence you won't see any data under the ZK path. To check the group metadata you can use the ConsumerGroupCommand, wrapped in bin/kafka-consumer-groups.sh. Guozhang On Wed, Mar 9, 2016 at 5:48 PM, Rajiv Kurian

Re: Connect bug in 0.9.0.1 client

2016-03-09 Thread Ismael Juma
Well spotted Larkin. Please file an issue as we definitely want to fix this before the next release. Ismael On Wed, Mar 9, 2016 at 10:46 PM, Christian Posta wrote: > Open a JIRA here: https://issues.apache.org/jira/browse/KAFKA > and open a github.com pull request

Re: Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Rajiv Kurian
Don't think I made my questions clear: On a Kafka 0.9.0.1 broker with the 0.9 consumer, how do I tell what my consumer groups are? Can I still get this information in ZK? I don't see anything in the consumers folder, which is alarming, especially because I do see that 8 partitions

Re: Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Rajiv Kurian
Also forgot to mention that when I do consume with the console consumer I do see data coming through. On Wed, Mar 9, 2016 at 3:44 PM, Rajiv Kurian wrote: > I am running the 0.9.0.1 broker with the 0.9 consumer. I am using the > subscribe feature on the consumer to subscribe

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Cody Koeninger
So what about my comments regarding the consumer rebalance listener interface not providing access to a consumer? I can probably work around it, but it seems odd. On Mar 9, 2016 5:04 PM, "Guozhang Wang" wrote: > One thing proposed by Jason: > > If you want to only reset

Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Rajiv Kurian
I am running the 0.9.0.1 broker with the 0.9 consumer. I am using the subscribe feature on the consumer to subscribe to a topic with 8 partitions. consumer.subscribe(Arrays.asList(myTopic)); I have a single consumer group for said topic and a single process subscribed with 8 partitions. When I

Re: Connect bug in 0.9.0.1 client

2016-03-09 Thread Christian Posta
Open a JIRA here: https://issues.apache.org/jira/browse/KAFKA and open a github.com pull request here: https://github.com/apache/kafka May wish to peek at this too: https://github.com/apache/kafka/blob/trunk/CONTRIBUTING.md I think you need an Apache ICLA too

Connect bug in 0.9.0.1 client

2016-03-09 Thread Larkin Lowrey
There is a bug in the 0.9.0.1 client which causes consumers to get stuck waiting for a connection to finish being established. The root cause is in the connect(...) method of clients/src/main/java/org/apache/kafka/common/network/Selector.java Here's the trouble item: try {

Re: unsubscribe from this mailing list

2016-03-09 Thread Imre Nagi
Just send an email to users-unsubscr...@kafka.apache.org Thanks, Imre On Thu, Mar 10, 2016 at 5:18 AM, Kudumula, Surender < surender.kudum...@hpe.com> wrote: > Hi All, > Can anyone help me to unsubscribe from this mailing list please. Thank you. > > Regards > > Surender >

unsubscribe from this mailing list

2016-03-09 Thread Kudumula, Surender
Hi All, Can anyone help me to unsubscribe from this mailing list please. Thank you. Regards Surender

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Guozhang Wang
Filed https://issues.apache.org/jira/browse/KAFKA-3370. On Wed, Mar 9, 2016 at 1:11 PM, Cody Koeninger wrote: > That sounds like an interesting way of addressing the problem, can > continue further discussions on the JIRA > > > > On Wed, Mar 9, 2016 at 2:59 PM, Guozhang Wang

Re: Retry Message Consumption On Database Failure

2016-03-09 Thread Christian Posta
So you have to decide how long you're willing to "wait" for the Mongo DB to come back, and what you'd like to do with that message. For example, do you just retry inserting into Mongo for a predefined period of time? Do you retry forever? If you retry forever, are you okay with the consumer
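One common way to get at-least-once behaviour for this case, assuming auto-commit is disabled and offsets are committed only after a successful write, is sketched below; insertIntoMongo is a hypothetical helper, and the topic, group id, and back-off are placeholders. Note that blocking between poll() calls for longer than the session timeout will get the consumer kicked out of the group and rebalanced, which is exactly the trade-off being discussed here.

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class MongoWriter {
        public static void main(String[] args) throws InterruptedException {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "mongo-writer");               // placeholder group
            props.put("enable.auto.commit", "false");            // commit only after Mongo succeeds
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("events"));          // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    boolean written = false;
                    while (!written) {
                        try {
                            insertIntoMongo(record.value());      // hypothetical helper
                            written = true;
                        } catch (RuntimeException e) {
                            Thread.sleep(5000);                   // back off and retry until Mongo is back
                        }
                    }
                }
                consumer.commitSync();                            // offsets advance only after all writes succeeded
            }
        }

        static void insertIntoMongo(String value) { /* hypothetical Mongo write */ }
    }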

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Cody Koeninger
That sounds like an interesting way of addressing the problem, can continue further discussions on the JIRA On Wed, Mar 9, 2016 at 2:59 PM, Guozhang Wang wrote: > Cody: > > More specifically, you do not need the "listTopics" function if you already > know your subscribed

Retry Message Consumption On Database Failure

2016-03-09 Thread Michael Freeman
Hey, My team is new to Kafka and we are using the examples found at. http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client We process messages from kafka and persist them to Mongo. If Mongo is unavailable we are wondering how we can re-consume

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Guozhang Wang
Cody: More specifically, you do not need the "listTopics" function if you already know your subscribed topics; just using "partitionsFor" is sufficient. About the fix, I'm thinking of adding two more options to auto.offset.reset, namely "earliest-on-start" and "latest-on-start", which sets
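For the manual-assignment route this hints at, a sketch that avoids auto.offset.reset entirely might look like the following; it assumes an already-constructed KafkaConsumer named consumer, a placeholder topic name, and the 0.9 signatures in which assign() takes a List and seekToBeginning() takes varargs.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.TopicPartition;

    List<TopicPartition> partitions = new ArrayList<>();
    for (PartitionInfo info : consumer.partitionsFor("my-topic")) {       // placeholder topic
        partitions.add(new TopicPartition(info.topic(), info.partition()));
    }
    consumer.assign(partitions);                                          // manual assignment, no group rebalance
    consumer.seekToBeginning(partitions.toArray(new TopicPartition[0]));  // rewind before the first poll()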

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Cody Koeninger
Yeah, I think I understood what you were saying. What I'm saying is that if there were a way to just fetch metadata without doing the rest of the work poll() does, it wouldn't be necessary. I guess I can do listTopics to get all metadata for all topics and then parse it. Regarding running a

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Guozhang Wang
Cody, What I meant for a special case of `seekToXX` is that, today, when the function is called with no partition parameters, it will try to execute the logic on all "assigned" partitions for the consumer. And once that is done, the subsequent poll() will not throw the exception since it knows

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Cody Koeninger
Another unfortunate thing about ConsumerRebalanceListener is that in order to do meaningful work in the callback, you need a reference to the consumer that called it. But that reference isn't provided to the callback, which means the listener implementation needs to hold a reference to the

RE: Mirror maker Configs 0.9.0

2016-03-09 Thread Prabhu.V
Thanks a lot Stephen.. It worked:) -Original Message- From: Stephen Powis [mailto:spo...@salesforce.com] Sent: Wednesday, March 09, 2016 6:58 PM To: users@kafka.apache.org Cc: Kishore N R Subject: Re: Mirror maker Configs 0.9.0 I've attached my two configs here. Pay close attention

Re: 0.9.0.1 Kafka assign partition to new Consumer error

2016-03-09 Thread Ken Cheng
Hi Jason, In my test case, I set enable.auto.commit=false, and then I need to call commitSync() in onPartitionsAssigned() to avoid problems when the consumer is rebalanced. Actually, I place it inside for (TopicPartition partition : partitions), so that commitSync() will not execute when partitions are first assigned.

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Cody Koeninger
I thought about ConsumerRebalanceListener, but seeking to the beginning any time there's a rebalance for whatever reason is not necessarily the same thing as seeking to the beginning before first starting the consumer. On Wed, Mar 9, 2016 at 2:24 AM, Kamal C wrote: > Cody,

Re: Mirror maker Configs 0.9.0

2016-03-09 Thread Stephen Powis
I've attached my two configs here. Pay close attention to the --num-streams argument to mirror-maker. I have a lot of throughput on my topics so I ended up matching the number of streams to the number of partitions for each of my topics. A stream is essentially just a consumer and producer thread. If

Kafka topics with infinite retention?

2016-03-09 Thread Daniel Schierbeck
I'm considering an architecture where Kafka acts as the primary datastore, with infinite retention of messages. The messages in this case will be domain events that must not be lost. Different downstream consumers would ingest the events and build up various views on them, e.g. aggregated stats,
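On the configuration side, retention is controlled per topic; a sketch of pushing the limits out is below, with the topic name as a placeholder and the caveat (an assumption to verify on 0.9) that -1 is treated as "unlimited", otherwise a very large value serves the same purpose.

    kafka-topics.sh --zookeeper localhost:2181 --alter --topic domain-events \
        --config retention.ms=-1 --config retention.bytes=-1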

Re: Kafka 0.9.0.1 new consumer - when no longer considered beta?

2016-03-09 Thread Sean Morris (semorris)
Anyone have any idea? From: semorris Date: Wednesday, March 2, 2016 at 8:36 AM To: "users@kafka.apache.org" Subject: Kafka 0.9.0.1 new consumer - when no longer

Re: Mirror maker Configs 0.9.0

2016-03-09 Thread prabhu v
Thanks for the reply. I will remove the bootstrap.servers property, add zookeeper.connect to the consumer properties, and let you know. Also, is there any way we can check how far the target data center is lagging behind the source DC? On Wed, Mar 9, 2016 at 3:41 PM, Gerard Klijs

Re: Mirror maker Configs 0.9.0

2016-03-09 Thread Gerard Klijs
What do you see in the logs? It could be it goes wrong because you have the bootstrap.servers property, which is not supported by the old consumer. On Wed, Mar 9, 2016 at 11:05 AM Gerard Klijs wrote: > Don't know the actual question, it matters what you want to do. >

Re: Mirror maker Configs 0.9.0

2016-03-09 Thread Gerard Klijs
Don't know the actual question, it matters what you want to do. Just watch out trying to copy every topic using a new consumer, because then internal topics are copied, leading to errors. Here is a template start script we used: #!/usr/bin/env bash export

Re: Mirror maker Configs 0.9.0

2016-03-09 Thread prabhu v
Hi Experts, I am trying to replicate data between different data centers using mirror maker tool. kafka-run-class.bat kafka.tools.MirrorMaker --consumer.config consumer.properties --producer.config producer.properties --whitelist * Can someone provide the sample consumer.properties and
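For reference, a minimal pair of config files along the lines suggested elsewhere in this thread (old consumer pointed at the source cluster's ZooKeeper, new producer pointed at the target brokers) might look like the sketch below; the hosts and group id are placeholders.

    # consumer.properties (source data center, old consumer)
    zookeeper.connect=source-zk:2181        # placeholder host
    group.id=mirror-maker-group             # placeholder group
    auto.offset.reset=smallest              # old-consumer spelling of "earliest"

    # producer.properties (target data center)
    bootstrap.servers=target-broker:9092    # placeholder host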

Mirror maker Configs 0.9.0

2016-03-09 Thread prabhu v
Hi Experts, I am trying to mirror -- Regards, Prabhu.V

Re: seekToBeginning doesn't work without auto.offset.reset

2016-03-09 Thread Kamal C
Cody, Use ConsumerRebalanceListener to achieve that, ConsumerRebalanceListener listener = new ConsumerRebalanceListener() { @Override public void onPartitionsRevoked(Collection partitions) { } @Override public void
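Completing that idea, and folding in the point raised elsewhere in this thread that the callback is not handed a consumer so you must keep the reference yourself, a sketch might look like this (topic handling and error handling omitted):

    import java.util.Collection;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    public class SeekToBeginningListener implements ConsumerRebalanceListener {
        private final Consumer<?, ?> consumer;   // held explicitly, since the callback is not given one

        public SeekToBeginningListener(Consumer<?, ?> consumer) {
            this.consumer = consumer;
        }

        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // nothing to do for this use case
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // rewind every newly assigned partition to the earliest offset
            consumer.seekToBeginning(partitions.toArray(new TopicPartition[0]));
        }
    }

It would then be registered via consumer.subscribe(Arrays.asList("my-topic"), new SeekToBeginningListener(consumer)), keeping in mind Cody's caveat above that this rewinds on every rebalance, not only on initial startup.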

Re: Poll Interval for Kafka Connect Source

2016-03-09 Thread Ewen Cheslack-Postava
Shiti, There's not a built-in parameter because not all sources are based on polling -- some may use Selector-like APIs which simply block indefinitely while waiting for new data (and support some sort of wakeup mechanism to support prompt shutdown). -Ewen On Tue, Mar 8, 2016 at 9:37 PM, Shiti
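For sources that are polling-based rather than push-based, the usual workaround is simply to sleep inside SourceTask.poll(); a rough sketch is below, with the interval, topic, and record contents as hypothetical placeholders.

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.source.SourceRecord;
    import org.apache.kafka.connect.source.SourceTask;

    public class PollingSourceTask extends SourceTask {
        private static final long POLL_INTERVAL_MS = 5000;    // hypothetical interval

        @Override
        public String version() { return "0.1"; }

        @Override
        public void start(Map<String, String> props) { }

        @Override
        public List<SourceRecord> poll() throws InterruptedException {
            Thread.sleep(POLL_INTERVAL_MS);                   // the framework calls poll() again after we return
            String value = "hello";                           // placeholder for data fetched from the source
            return Collections.singletonList(new SourceRecord(
                    null, null,                               // source partition / offset omitted in this sketch
                    "connect-test",                           // placeholder topic
                    Schema.STRING_SCHEMA, value));
        }

        @Override
        public void stop() { }
    }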