RE: FW: Zookeeper Configuration Question
Apologies for not responding sooner. My mail client must have been malfunctioning at the time, as I never saw your responses until today. As for the error, it looks like a bug on my part that just didn't click until I read Jim's responses: I specify the options in a config file, and I copied them from a Configuration object to a Properties object, as the producer/consumer requires. I didn't realize until this morning that the copy was not working as expected.

On a different note: does anyone know how to create a namespace in Zookeeper? We're having some issues I'm trying to debug, so I want to isolate some of our brokers, but searching for documentation on this has been fruitless. Thanks!

From: Neha Narkhede [neha.narkh...@gmail.com]
Sent: Thursday, November 29, 2012 4:39 PM
To: users@kafka.apache.org
Subject: Re: FW: Zookeeper Configuration Question

Can you please send around the log that shows the zookeeper connection error? I would like to see whether it fails at connection establishment or session establishment.

Thanks,
Neha

On Thu, Nov 29, 2012 at 1:19 PM, James A. Robinson jim.robin...@stanford.edu wrote:

> On Thu, Nov 29, 2012 at 1:15 PM, James A. Robinson jim.robin...@stanford.edu wrote:
>> For my kafka startup I point to the zookeeper cluster like so:
>> --kafka-zk-connect logproc-dev-03:2181,logproc-dev-03:2182,logproc-dev-03:2183
>
> Sorry, wrong copy and paste! For the kafka startup I point to the zookeeper cluster like so (in the properties file):
> zk.connect=logproc-dev-03.highwire.org:2181,logproc-dev-03.highwire.org:2182,logproc-dev-03.highwire.org:2183
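For anyone hitting the same Configuration-to-Properties pitfall: a minimal sketch of the bug, assuming Commons Configuration-style behavior where comma-separated values are parsed into lists, so a naive toString() copy turns "a,b,c" into "[a, b, c]" and the producer rejects it. The class and method names (ConfigCopy, toProperties) are hypothetical, and a plain Map stands in for the Configuration object:

```java
import java.util.*;

public class ConfigCopy {
    // Hypothetical helper: flatten a parsed config map into java.util.Properties.
    // List-valued entries are re-joined with commas instead of relying on
    // List.toString(), which would produce "[a, b, c]".
    static Properties toProperties(Map<String, Object> config) {
        Properties props = new Properties();
        for (Map.Entry<String, Object> e : config.entrySet()) {
            Object v = e.getValue();
            if (v instanceof List) {
                StringJoiner sj = new StringJoiner(",");
                for (Object item : (List<?>) v) sj.add(item.toString());
                props.setProperty(e.getKey(), sj.toString());
            } else {
                props.setProperty(e.getKey(), v.toString());
            }
        }
        return props;
    }

    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        // A comma-separated value the config library has split into a list:
        config.put("zk.connect", Arrays.asList("host1:2181", "host2:2181"));
        config.put("serializer.class", "kafka.serializer.StringEncoder");
        System.out.println(toProperties(config).getProperty("zk.connect"));
        // prints "host1:2181,host2:2181"
    }
}
```

If the library in play is Apache Commons Configuration, it also ships a ConfigurationConverter.getProperties helper that performs this re-joining for you.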
RE: Logging which broker a message was sent to
I'll try that out. Thanks!

From: Jun Rao [jun...@gmail.com]
Sent: Monday, December 10, 2012 1:04 PM
To: users@kafka.apache.org
Subject: Re: Logging which broker a message was sent to

So you are using Producer, not SyncProducer. Assuming that you are using DefaultEventHandler, there is trace-level logging that tells you which broker a request is sent to.

Thanks,
Jun

On Mon, Dec 10, 2012 at 8:10 AM, Sybrandy, Casey casey.sybra...@six3systems.com wrote:

> Is it at least possible to see which broker a message is sent to? I'm using a Zookeeper-based producer and we have multiple brokers in our environment. If I can tell which broker a message is sent to, that would be a big help.
>
> From: Jun Rao [jun...@gmail.com]
> Sent: Monday, December 10, 2012 11:07 AM
> To: users@kafka.apache.org
> Subject: Re: Logging which broker a message was sent to
>
> If you use -1 (i.e., a random partition) as the partition #, there is no easy way to know which partition the broker picks. However, you can explicitly specify the partition # in the request itself.
>
> Thanks,
> Jun
>
> On Mon, Dec 10, 2012 at 7:26 AM, Sybrandy, Casey casey.sybra...@six3systems.com wrote:
>> Is it possible to log/see which broker, and perhaps partition, a producer sent a message to? I'm using the SyncProducer if that matters.
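Jun's suggestion can be enabled with a log4j override in the producer application. A sketch; the logger name below is an assumption based on the package of the 0.7-era DefaultEventHandler, so verify it against the Kafka version you are running:

```properties
# log4j.properties for the producer application.
# Turn on trace logging for the async event handler so each send
# logs which broker the request went to.
log4j.logger.kafka.producer.async.DefaultEventHandler=TRACE
```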
Re: Upgrade from 0.8 trunk to 0.8 GA?
Hi,

Thanks Neha.

> We made protocol changes up until last Friday, so I would wait for another few weeks before deploying 0.8 in production, if you'd like to avoid incompatible releases. However, upgrading from 0.7.2 to 0.8 is also backwards incompatible and will require careful deployment planning to correctly release 0.8 without service interruptions.

Do you plan on writing this up?

Thanks,
Otis
Performance Monitoring for Solr / ElasticSearch / HBase - http://sematext.com/spm

From: Neha Narkhede neha.narkh...@gmail.com
To: users@kafka.apache.org; Otis Gospodnetic otis_gospodne...@yahoo.com
Sent: Sunday, December 9, 2012 12:57 PM
Subject: Re: Upgrade from 0.8 trunk to 0.8 GA?

Otis,

I wouldn't call 0.8 HEAD production ready right now; there are still some performance issues that we are working on. At LinkedIn, we are at least 2 months away from deploying 0.8 in production.

> Questions:
> * Would it be wiser/simpler for us to switch our dependency from 0.7.2 to 0.8, code to 0.8 APIs, and deploy a version of 0.8 from HEAD or a nightly build?

Not yet. 0.7.2 is very stable and has been deployed in production for several months without problems.

> * How stable or buggy is Kafka 0.8 HEAD?

It is feature complete, and like I said, there are performance bugs, like memory leaks, that we are working on.

> * If we switch to using 0.8 now, what are the chances of incompatibilities in terms of how data in Kafka is stored? (I would like to avoid having to do data conversions or anything that can interrupt our service)

We made protocol changes up until last Friday, so I would wait for another few weeks before deploying 0.8 in production, if you'd like to avoid incompatible releases. However, upgrading from 0.7.2 to 0.8 is also backwards incompatible and will require careful deployment planning to correctly release 0.8 without service interruptions.

HTH,
Neha

On Sat, Dec 8, 2012 at 5:49 AM, Otis Gospodnetic otis_gospodne...@yahoo.com wrote:

> Hello,
>
> How production-ready and how GA-compatible is 0.8 HEAD now?
>
> Context: We're starting to use Kafka in a few projects at Sematext (one of them is in my sig below) and are using Kafka 0.7.2. One of them will go live in about 10 days.
>
> Questions:
> * Would it be wiser/simpler for us to switch our dependency from 0.7.2 to 0.8, code to 0.8 APIs, and deploy a version of 0.8 from HEAD or a nightly build?
> * How stable or buggy is Kafka 0.8 HEAD?
> * If we switch to using 0.8 now, what are the chances of incompatibilities in terms of how data in Kafka is stored? (I would like to avoid having to do data conversions or anything that can interrupt our service)
>
> Is anyone running a recent Kafka 0.8 checkout in high-volume production?
>
> Thanks,
> Otis
> Performance Monitoring for Solr / ElasticSearch / HBase - http://sematext.com/spm
Re: first steps with the codebase
You can take a look at one of the producer tests and attach breakpoints in the code. Ensure you pick the Debug Test option instead of Run Test.

Thanks,
Neha

On Mon, Dec 10, 2012 at 7:31 PM, S Ahmed sahmed1...@gmail.com wrote:

> Hi,
>
> So I followed the instructions from here: https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
>
> I pulled down the latest from github, ran `sbt update idea`, opened it up in IDEA (version 12), and it builds fine there too. Everything is fine so far.
>
> Questions: From just using the IDE, how can I start the necessary services so I can debug a producer call, tracing the code line by line as it executes to create a message, and then set a breakpoint on the kafka server side of things to see how it goes about processing an inbound message? Is this possible, or is the general workflow to first start the services using some .sh scripts?
>
> My goal here is to be able to set breakpoints on both the producer and broker side of things. Much appreciated!
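If you do start the services from the .sh scripts rather than from a test, you can still hit broker-side breakpoints by attaching a remote debugger. A sketch, assuming the standard kafka-server-start.sh script (which passes KAFKA_OPTS through to the JVM via kafka-run-class.sh); the debug port 5005 is an arbitrary choice:

```shell
# Start the broker with a JDWP debug agent so an IDE can attach remotely.
# suspend=y instead of suspend=n would make the broker wait for the
# debugger before starting.
export KAFKA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
bin/kafka-server-start.sh config/server.properties
```

Then create a Remote run configuration in IDEA pointing at localhost:5005 and start it in debug mode; breakpoints in the broker code will be hit as requests arrive.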
Re: first steps with the codebase
Neha,

But what do I need to start before running the tests? I tried to run the test testAsyncSendCanCorrectlyFailWithTimeout but I got this:

[2012-12-11 00:01:08,974] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0002, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
[2012-12-11 00:01:11,231] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0001, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
[2012-12-11 00:01:26,561] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0003, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
[2012-12-11 00:01:26,563] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0004, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
[2012-12-11 00:01:30,661] ERROR [TopicChangeListener on Controller 1]: Error while handling new topic (kafka.controller.PartitionStateMachine$TopicChangeListener:102)
java.lang.NullPointerException
        at scala.collection.JavaConversions$JListWrapper.iterator(JavaConversions.scala:524)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
        at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:521)
        at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:176)
        at scala.collection.JavaConversions$JListWrapper.foldLeft(JavaConversions.scala:521)
        at scala.collection.TraversableOnce$class.$div$colon(TraversableOnce.scala:139)
        at scala.collection.JavaConversions$JListWrapper.$div$colon(JavaConversions.scala:521)
        at scala.collection.generic.Addable$class.$plus$plus(Addable.scala:54)
        at scala.collection.immutable.Set$EmptySet$.$plus$plus(Set.scala:47)
        at scala.collection.TraversableOnce$class.toSet(TraversableOnce.scala:436)
        at scala.collection.JavaConversions$JListWrapper.toSet(JavaConversions.scala:521)
        at kafka.controller.PartitionStateMachine$TopicChangeListener.liftedTree1$1(PartitionStateMachine.scala:337)
        at kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:335)
        at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:570)
        at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
Disconnected from the target VM, address: '127.0.0.1:64026', transport: 'socket'

On Mon, Dec 10, 2012 at 11:54 PM, Neha Narkhede neha.narkh...@gmail.com wrote:

> You can take a look at one of the producer tests and attach breakpoints in the code. Ensure you pick the Debug Test option instead of Run Test.
>
> Thanks,
> Neha
Re: first steps with the codebase
BTW, where exactly will the broker be writing these messages? Is it in a /tmp folder?

On Tue, Dec 11, 2012 at 12:02 AM, S Ahmed sahmed1...@gmail.com wrote:

> Neha,
>
> But what do I need to start before running the tests? I tried to run the test testAsyncSendCanCorrectlyFailWithTimeout but I got this:
>
> [2012-12-11 00:01:08,974] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0002, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
> [... same ZooKeeper warnings and TopicChangeListener NullPointerException stack trace as in the previous message ...]
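On the /tmp question: the broker's data directory is set by the log directory property in server.properties (log.dir in the 0.7 config, log.dirs in later releases), and the unit-test harness creates its own temporary directories under java.io.tmpdir. A sketch of the stock setting; the exact property name should be checked against the example config shipped with your Kafka version:

```properties
# server.properties -- where the broker persists topic partition segments.
# /tmp/kafka-logs is the default in the example config; /tmp is typically
# cleared on reboot, so point this elsewhere for anything beyond local testing.
log.dir=/tmp/kafka-logs
```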