Re: KStreams.reduceByKey passing nulls to my Deserializer?

2016-06-10 Thread Guozhang Wang
Hello Avi, Yes, this is possible: we do check for nullable keys when doing reduce / aggregations: https://github.com/apache/kafka/blob/0.10.0/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamReduce.java#L67 We do not check whether there are any values returned from the
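The distinction Guozhang draws — keys are guarded, values coming back out of the store are not — can be sketched as a toy model. This is illustrative Python, not the actual Kafka Streams Java code; the function and variable names are made up for the example.

```python
# Rough model of a reduce-by-key step with a null-key guard (as in
# KStreamReduce) but no corresponding guard on stored values.
def reduce_by_key(records, reducer):
    """records: iterable of (key, value) pairs; returns key -> reduced value."""
    store = {}  # stands in for the local state store (RocksDB in Kafka Streams)
    for key, value in records:
        if key is None:
            continue  # null keys are filtered before reaching the reducer
        old = store.get(key)  # None the first time a key is seen
        store[key] = value if old is None else reducer(old, value)
    return store

print(reduce_by_key([("a", 1), (None, 5), ("a", 2)], lambda x, y: x + y))  # -> {'a': 3}
```

The null-valued record for key `"a"` would still flow through a deserializer on the value path, which is consistent with the behavior Avi observed.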

Manual update offset for new consumer

2016-06-10 Thread Henry Cai
When we were using the old consumer, we could use the ZooKeeper client tool to manually set the offset for a consumer group. For the new consumer, where the offsets are stored in the broker, is there a tool to do the same thing?
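At the time of this thread (0.10.0-era) there was, to my knowledge, no CLI to overwrite broker-stored offsets (`kafka-consumer-groups.sh --reset-offsets` arrived later); the usual workaround was a short program that joins the group, seeks, and commits (in the Java client: `assign`, `seek`, `commitSync`). A minimal in-memory model of what that commit does, with purely illustrative names:

```python
# Toy model of broker-stored offsets: the last commit for a
# (group, topic, partition) wins, which is why "seek then commit"
# works as a manual reset. Not a real client; names are made up.
class OffsetStore:
    """Stands in for the __consumer_offsets topic on the broker."""
    def __init__(self):
        self.committed = {}  # (group, topic, partition) -> offset

    def commit(self, group, topic, partition, offset):
        self.committed[(group, topic, partition)] = offset

    def fetch(self, group, topic, partition):
        return self.committed.get((group, topic, partition))

store = OffsetStore()
store.commit("my-group", "events", 0, 4200)  # consumer's normal progress
store.commit("my-group", "events", 0, 1000)  # manual "rewind": last commit wins
print(store.fetch("my-group", "events", 0))  # -> 1000
```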

Re: Question about heterogeneous brokers in a cluster

2016-06-10 Thread Kevin A
Thanks Alex and Todd, I really appreciate the insights. Based on what you've shared: spending more time up-front to homogenize the nodes reduces some cognitive load for day-to-day support. Keeping things simple generally wins me over. Thanks again. -Kevin

On Thu, Jun 9, 2016 at 10:37 PM, Todd

Re: KStreams.reduceByKey passing nulls to my Deserializer?

2016-06-10 Thread Avi Flax
> On Jun 10, 2016, at 14:24, Avi Flax wrote:
> Hi, I’m using Kafka Streams (0.10.0) with JRuby, most of my scripts/nodes are working well at this point, except for one which is using reduceByKey.

Whoops, I should probably share my code as well! Here’s the

KStreams.reduceByKey passing nulls to my Deserializer?

2016-06-10 Thread Avi Flax
Hi, I’m using Kafka Streams (0.10.0) with JRuby, most of my scripts/nodes are working well at this point, except for one which is using reduceByKey. This is the first time I’m trying to use the local state store so it’s possible there’s something misconfigured, I’m not sure. My config is pretty

ELB for Kafka

2016-06-10 Thread Ram Chander
Hi, Is it possible to have Kafka brokers behind an ELB, with producers and consumers talking only to the ELB? If not, should we directly expose all brokers to all producers/consumers? Please advise. Regards, Ram

Re: dynamically changing peer role from voting to observer and vice versa

2016-06-10 Thread Nomar Morado
Anyone? Printing e-mails wastes valuable natural resources. Please don't print this message unless it is absolutely necessary. Thank you for thinking green! Sent from my iPhone > On Jun 8, 2016, at 2:31 PM, Nomar Morado wrote: > > Is there any way in the current API

RE: error: ... protocols are incompatible with those of existing members ??

2016-06-10 Thread Martin Gainty
> Date: Fri, 10 Jun 2016 16:54:47 +0530
> Subject: Re: error: ... protocols are incompatible with those of existing members ??
> From: bkap...@memelet.com
> To: users@kafka.apache.org
>
> I deleted the group using kafka-consumer-groups.sh --delete and I still get the error.

KafkaHighLevel consumer in java returns topics which are removed before?

2016-06-10 Thread shahab
Hi, I have a strange problem. I have a 3-node Kafka 0.9 cluster. I removed all topics from ZooKeeper, but when I query topics using the Java high-level consumer it still returns all three removed topics. How is this possible when I removed them manually, by going into the zookeeper shell and removing

Re: error: ... protocols are incompatible with those of existing members ??

2016-06-10 Thread Dana Powers
Barry - I believe the error refers to the consumer group "protocol" that is used to decide which partitions get assigned to which consumers. The way it works is that each consumer says it wants to join group X and that it can support protocols (1, 2, 3...). The broker looks at all consumers in group X
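Dana's description of the join-group check can be sketched as a small model: a joining member's supported protocols are intersected with those of the members already in the group, and an empty intersection produces the "incompatible protocols" error quoted in this thread. This is a toy Python model of the broker-side logic, not the real implementation.

```python
# Sketch: the broker accepts a join only if the joining member shares
# at least one assignment protocol with every existing group member.
def select_protocol(existing_members, joining_protocols):
    candidates = set(joining_protocols)
    for member_protocols in existing_members:
        candidates &= set(member_protocols)
    if not candidates:
        raise ValueError("protocols are incompatible with those of existing members")
    # The real broker selects by the members' preference order; here we
    # just pick a common protocol deterministically for the example.
    return sorted(candidates)[0]

print(select_protocol([["range", "roundrobin"]], ["roundrobin"]))  # -> roundrobin
```

This is why changing a client's `partition.assignment.strategy` (or switching client libraries) while old members are still alive in the group can trigger the error even though the brokers never changed.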

Re: ELB for Kafka

2016-06-10 Thread Tom Crayford
Kafka itself handles distribution among brokers, and which broker consumers and producers connect to. There's no need for an ELB, and you have to directly expose all brokers to producers and consumers.

On Friday, 10 June 2016, Ram Chander wrote:
> Hi, I am trying to
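The reason a load balancer does not fit: clients only use the configured `bootstrap.servers` for the initial connection, then fetch cluster metadata and dial the *leader broker for each partition* directly, so every broker must be individually reachable. A toy model of that routing step (hostnames are illustrative):

```python
# Sketch of metadata-driven routing: each produce/fetch goes to the
# partition's leader as named in the metadata, not through a middlebox.
metadata = {
    # (topic, partition) -> leader broker address
    ("orders", 0): "broker-1.internal:9092",
    ("orders", 1): "broker-2.internal:9092",
}

def broker_for(topic, partition):
    return metadata[(topic, partition)]  # the client dials this host directly

print(broker_for("orders", 1))  # -> broker-2.internal:9092
```

An ELB in front of the cluster would hand the client an address that does not match the leader named in the metadata, which is why all brokers are exposed directly instead.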

ELB for Kafka

2016-06-10 Thread Ram Chander
Hi, I am trying to set up a Kafka cluster in AWS. Is it possible to have Kafka brokers behind an ELB, with producers and consumers talking only to the ELB? If not, should we directly expose all brokers to all producers/consumers? Please advise. Regards, Ram

Re: error: ... protocols are incompatible with those of existing members ??

2016-06-10 Thread Barry Kaplan
I deleted the group using kafka-consumer-groups.sh --delete and I still get the error.

Re: error: ... protocols are incompatible with those of existing members ??

2016-06-10 Thread Barry Kaplan
I didn't really expect this to help, but still, I tried deleting /all/ topics and recreating them. But still my connect app will no longer run due to this error. Does this error even have anything to do with persisted state, or is the broker complaining about live client connections?

Re: JVM Optimizations

2016-06-10 Thread Barry Kaplan
Tom, Thanks, that's very good to know. What kind of EC2 instances are you using for your brokers? -barry

On Fri, Jun 10, 2016 at 4:17 PM, Tom Crayford wrote:
> Barry,
> No, because Kafka also relies heavily on the OS page cache, which uses memory. You'd

Re: JVM Optimizations

2016-06-10 Thread Tom Crayford
Barry, No, because Kafka also relies heavily on the OS page cache, which uses memory. You'd roughly want to allocate enough page cache to hold all the messages for your consumers for, say, 30s. Kafka also (in our experience on EC2) tends to run out of network far before it runs out of memory or
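Tom's rule of thumb (enough page cache to hold roughly 30 seconds of messages) is easy to turn into arithmetic. The throughput figure below is an illustrative assumption, not a measurement:

```python
# Rough page-cache sizing per Tom's heuristic: keep ~30s of incoming
# data in OS cache so consumers are served from memory, not disk.
ingest_mb_per_sec = 50    # assumed aggregate producer throughput into this broker
window_seconds = 30       # how far behind a consumer can lag and still hit cache
page_cache_mb = ingest_mb_per_sec * window_seconds
print(page_cache_mb)      # -> 1500, i.e. ~1.5 GB of RAM on top of the JVM heap
```

The point of the thread stands: that memory must be left *outside* the heap, so a huge `-Xmx` actively hurts.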

error: ... protocols are incompatible with those of existing members ??

2016-06-10 Thread Barry Kaplan
I am getting this error:

> Attempt to join group connect-elasticsearch-indexer failed due to: The group member's supported protocols are incompatible with those of existing members.

This is a single kafka-connect process consuming two topics. The brokers have never changed, and the version of

Fwd: Monitor the lag for the consumers that are assigned to partitions topic

2016-06-10 Thread Spico Florin
Hello! I'm working with the Kafka 0.9.1 new consumer API. The consumer is manually assigned to a partition. For this consumer I would like to see its progress (meaning the lag). Since I added the group id consumer-tutorial as a property, I assumed that I can use the command
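Whatever tool reports it, the lag figure itself is simply the log-end offset minus the group's committed offset for that partition. A minimal sketch of that calculation (offsets are illustrative):

```python
# Per-partition consumer lag: how far the committed position trails
# the end of the log. This is the number a monitoring tool reports.
def lag(log_end_offset, committed_offset):
    # No committed offset yet (None) means the whole log is unread.
    if committed_offset is None:
        return log_end_offset
    return log_end_offset - committed_offset

print(lag(1200, 950))   # -> 250
print(lag(1200, None))  # -> 1200
```

One caveat from this era of the clients: with manual `assign()` the consumer does not go through the group coordinator's join protocol, so group-oriented tooling may not list the member even though committed offsets (and therefore lag) exist under the group id.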

Skipping assignment for topic * since no metadata is available

2016-06-10 Thread Patrick Kaufmann
Hello, recently we’ve run into a problem when starting our application for the first time. At the moment all our topics are auto-created. Now, at the first start there are no topics, so naturally some consumers try to connect to topics which don’t exist. Those consumers now fail quite

Re: JVM Optimizations

2016-06-10 Thread Barry Kaplan
If too much heap causes problems, would it make sense to run multiple brokers on a box with lots of memory? For example, an EC2 d2 instance type has way more RAM than Kafka could ever use, but it has fast locally attached disks. Would running a broker per disk make sense in this case? -barry
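For completeness, a broker-per-disk layout is mostly a configuration exercise: each broker process needs a distinct `broker.id`, port, and `log.dirs`. The paths and ids below are a hypothetical sketch, not a recommendation from the thread:

```properties
# server-1.properties (first broker, first disk)
broker.id=1
port=9092
log.dirs=/mnt/disk1/kafka-logs

# server-2.properties (second broker, second disk)
broker.id=2
port=9093
log.dirs=/mnt/disk2/kafka-logs
```

Note that a single broker can also spread partitions across disks by giving `log.dirs` a comma-separated list of directories, which avoids running multiple JVMs while still leaving most of the machine's RAM to the page cache.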