Re: Mirrormaker schema exception

2016-05-16 Thread Gwen Shapira
It looks like you are using MirrorMaker from 0.9.0.1 while the source broker is older. MirrorMaker needs to be the same version as or older than the oldest broker involved in the replication. Gwen On Mon, May 16, 2016 at 2:12 PM, Meghana Narasimhan wrote: > Hi, > I came across the following mirrormaker issue today whi
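As a sketch, the fix is to run the MirrorMaker that ships with the oldest broker's release. The config file names and stream count below are illustrative, not from the thread:

```shell
# Run MirrorMaker from the release matching the OLDEST broker involved.
# consumer.properties points at the source cluster,
# producer.properties at the target cluster.
bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist '.*' \
  --num.streams 2
```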

[VOTE] 0.10.0.0 RC5

2016-05-16 Thread Gwen Shapira
Hello Kafka users, developers and client-developers, This is the sixth (!) candidate for release of Apache Kafka 0.10.0.0. This is a major release that includes: (1) New message format including timestamps (2) client interceptor API (3) Kafka Streams. Since this is a major release, we will give pe

Mirrormaker schema exception

2016-05-16 Thread Meghana Narasimhan
Hi, I came across the following mirrormaker issue today which caused it to shutdown. Don't see any indications of any issues in any of the other logs. Any insight on this error will be much appreciated. Kafka version is 0.9.0.1 (confluent platform 2.0.1). FATAL [mirrormaker-thread-0] Mirror maker

Re: Increase number of topic in Kafka leads zookeeper fail

2016-05-16 Thread Abhaya P
I was reading a nice summary article by Jun Rao on the implications of # of topics/partitions. http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/ There are many trade-offs to be considered, as it looks. Finding the partition for a key: Can 'custom partit

Re: mbeans missing in 0.9.0.1?

2016-05-16 Thread Tom Crayford
Hi there, The Kafka producer and consumer are libraries you run inside your application. As such, the beans from them do not exist on the brokers. Thanks Tom Crayford, Heroku Kafka On Mon, May 16, 2016 at 8:29 PM, Russ Lavoie wrote: > I am using JMX to gather kafka metrics. It states here > h
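To see the kafka.producer / kafka.consumer mbeans, the JMX client has to attach to the application JVM that embeds the client, not to a broker. One common way to do that (a sketch — the port and jar name are illustrative, and remote JMX should normally be authenticated and encrypted in production):

```shell
# Start YOUR application (the process embedding the producer/consumer)
# with remote JMX enabled -- the client mbeans live in this JVM,
# not in the broker's JVM.
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -jar my-producer-app.jar

# Then point a JMX client at it, e.g.:
jconsole localhost:9999
```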

mbeans missing in 0.9.0.1?

2016-05-16 Thread Russ Lavoie
I am using JMX to gather kafka metrics. It states here http://docs.confluent.io/1.0/kafka/monitoring.html that they should be there. But when I run a jmx client and show beans kafka.consumer and kafka.producer do not exist. Is there something special I have to do to get these metrics? Thanks

Re: ISR shrinking/expanding problem

2016-05-16 Thread Russ Lavoie
They did not. Even after 1.5 days of waiting... I had to drop everything and start over because the entire Kafka cluster was in an ISR shrink/expand loop, with larger hardware and lower replica threads. On Mon, May 16, 2016 at 1:05 PM, Alex Loddengaard wrote: > Hi Russ, > > They should eventual
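For reference, the broker settings usually involved in this kind of replica catch-up problem are the ones below. The values are illustrative only, not recommendations for this cluster:

```
# server.properties (illustrative values)
num.replica.fetchers=4          # fetcher threads per source broker; more threads can speed catch-up
replica.lag.time.max.ms=30000   # how long a follower may lag before being dropped from the ISR
replica.fetch.max.bytes=1048576 # max bytes fetched per partition per request
```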

Re: Kafka Consumer rebalancing frequently

2016-05-16 Thread Jason Gustafson
To pile on a little bit, the API is designed to ensure consumer liveness so that partitions cannot be held indefinitely by a defunct process. Since heartbeating and message processing are done in the same thread, the consumer needs to demonstrate "progress" by calling poll() often enough not to get
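The pattern Jason describes comes down to keeping the poll loop hot: process records quickly between poll() calls so the consumer demonstrates liveness before the session times out. A minimal sketch against the 0.9/0.10 consumer API follows — the topic name and config values are illustrative:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoopSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                // poll() must be called often enough that the group coordinator
                // sees this consumer as alive; long per-record processing here
                // is exactly what triggers the frequent rebalances described above.
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // keep this fast, or hand off to another thread
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}
```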

Re: Kafka Connect tasks consumers issue

2016-05-16 Thread Liquan Pei
Hi Matteo, There was a bug in 0.9.1 such that task.close() could be invoked in both the Worker thread and the Herder thread. This can lead to a race condition in which consumer.close() is invoked from multiple threads at the same time. As the consumer is designed to be used from a single thread, the concurre

Re: Producer offset commit API

2016-05-16 Thread Kanagha
Thanks for providing the links. I 'll test it out using offsetStorageReader. Kanagha On Mon, May 16, 2016 at 10:09 AM, Christian Posta wrote: > If you're using KafkaConnect, it does it for you! > > basically you set the sourceRecord's "sourcePartition" and "sourceOffset" > fields ( > > https://

Re: ISR shrinking/expanding problem

2016-05-16 Thread Alex Loddengaard
Hi Russ, They should eventually catch back up and rejoin the ISR. Did they not? Alex On Fri, May 13, 2016 at 6:33 PM, Russ Lavoie wrote: > Hello, > > I moved an entire topic from one set of brokers to another set of brokers. > The network throughput was so high that they fell behind the leade

Re: Producer offset commit API

2016-05-16 Thread Christian Posta
If you're using KafkaConnect, it does it for you! basically you set the sourceRecord's "sourcePartition" and "sourceOffset" fields ( https://github.com/christian-posta/kafka/blob/8db55618d5d5d5de97feab2bf8da4dc45387a76a/connect/api/src/main/java/org/apache/kafka/connect/source/SourceRecord.java#L5
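To make those two fields concrete, here is a hedged sketch of what a SourceTask's poll() might emit. The file name and position are hypothetical; the point is that Connect persists the sourcePartition/sourceOffset maps for you and hands them back through context.offsetStorageReader() after a restart:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;

// Inside a hypothetical file-reading SourceTask:
public List<SourceRecord> poll() {
    // "Partition" of the source system: which file we are reading.
    Map<String, String> sourcePartition = Collections.singletonMap("filename", "events.log");
    // "Offset" within that partition: how far we have read so far.
    Map<String, Long> sourceOffset = Collections.singletonMap("position", 4096L);

    SourceRecord record = new SourceRecord(
            sourcePartition, sourceOffset,
            "destination-topic",            // Kafka topic to write to
            Schema.STRING_SCHEMA,           // value schema
            "a line read from events.log"); // value

    return Collections.singletonList(record);
}
```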

Re: Producer offset commit API

2016-05-16 Thread Tom Crayford
Hi, Producers don't track offsets in the same way, so there is no producer offset API. Thanks Tom Crayford Heroku Kafka On Mon, May 16, 2016 at 5:25 PM, Kanagha wrote: > Hi, > > I am trying to find out the API for committing producer offset for Kafka > > I found this example: > > https://cwik

Producer offset commit API

2016-05-16 Thread Kanagha
Hi, I am trying to find out the API for committing producer offsets in Kafka. I found this example: https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka which would work for committing consumer offsets. Is there a separate API for committing producer

RE: Increase number of topic in Kafka leads zookeeper fail

2016-05-16 Thread Paolo Patierno
I agree with Tom but ... ... to reply to Christian's question, I guess that Anas thought about this kind of solution in relation to the simplicity of reading data from a specific device from a backend service point of view. Using one topic per device means that on the backend side you know exactly wh

Re: Increase number of topic in Kafka leads zookeeper fail

2016-05-16 Thread Christian Posta
+1 what Tom said. Curious though, Anas: what motivated you to try a topic per device? Was there something regarding management or security that you believe you can achieve with topic per device? On Mon, May 16, 2016 at 4:11 AM, Tom Crayford wrote: > Hi there, > > Generally you don't use a single

Re: Increase number of topic in Kafka leads zookeeper fail

2016-05-16 Thread Tom Crayford
Hi there, Generally you don't use a single topic per device in this use case, but one topic with some number of partitions and the key distribution based on device id. Kafka isn't designed for millions of low volume topics, but a few high volume ones. Thanks Tom Crayford Heroku Kafka On Mon, Ma
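The keyed approach Tom describes can be sketched in plain Java. This is a simplified stand-in — the real producer's default partitioner hashes the key with murmur2, not hashCode — but the idea is the same: the device id is the record key, and the key hash picks the partition, so each device's events stay ordered within one partition without a topic per device:

```java
import java.util.HashMap;
import java.util.Map;

public class DevicePartitionSketch {
    // Simplified illustration of key-based partitioning: hash the key and
    // take it modulo the partition count. (Kafka's producer uses murmur2.)
    static int partitionFor(String deviceId, int numPartitions) {
        return (deviceId.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 12;
        Map<String, Integer> assignment = new HashMap<>();
        for (String device : new String[] {"device-001", "device-002", "device-003"}) {
            assignment.put(device, partitionFor(device, numPartitions));
        }
        // Each device id always maps to the same partition, preserving
        // per-device ordering inside a single shared topic.
        System.out.println(assignment);
    }
}
```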