Re: first steps with the codebase
On Wed, Dec 12, 2012 at 12:37 PM, S Ahmed wrote:
> but still wanted to know where folder it writes too).

It picks a random string as the kafka log directory in /tmp. You might have
to turn on some logging to see the name.

Thanks,
Neha
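The randomly named directory Neha describes can be sketched like this. This is illustrative Java, not Kafka's actual TestUtils code; the class and method names here are hypothetical:

```java
// Hypothetical sketch of how a test harness can pick a fresh, randomly
// named log directory under /tmp for each run — which is why the broker's
// data directory in the unit tests has a random-string name.
import java.io.File;
import java.util.Random;

public class TempLogDir {
    private static final Random RANDOM = new Random();

    // Create a kafka-<random> directory under the system temp dir.
    static File tempDir() {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                            "kafka-" + RANDOM.nextInt(1000000));
        dir.mkdirs();
        dir.deleteOnExit(); // removed when the JVM exits, so the test "cleans up after itself"
        return dir;
    }

    public static void main(String[] args) {
        File dir = tempDir();
        System.out.println("a test broker would write its logs under: "
                           + dir.getAbsolutePath());
    }
}
```

Because the directory is registered with `deleteOnExit()`, it is gone by the time you go looking for it after the test run — hence the suggestion to log the name while the test is still running.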
Re: first steps with the codebase
Thanks, it looks like the test passed, so it was as you suggested. I
couldn't find the location where the files were written to (I'm guessing it
cleans up after itself, but I still wanted to know which folder it writes
to).

On Tue, Dec 11, 2012 at 6:45 PM, Jay Kreps wrote:
> All of the unit tests start and stop all their dependencies. You shouldn't
> have to do anything prior to running ./sbt test.
Re: first steps with the codebase
All of the unit tests start and stop all their dependencies. You shouldn't
have to do anything prior to running ./sbt test. Did the test fail, or did
it just print an exception? The sbt test runner will print exceptions and
other logging to the console. Some of the tests specifically invoke error
conditions, and some do their cleanup by interrupting I/O, both of which may
produce exceptions. But provided they are just logged, that is fine.

-Jay

On Tue, Dec 11, 2012 at 2:16 PM, S Ahmed wrote:
> And my question before that regarding what services do I need to start and
> how do I do this?
>
> Please see my previous post, I wrote how I tried to run the unit test
> "testAsyncSendCanCorrectlyFailWithTimeout"
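The pattern Jay describes — each unit test starting and stopping its own dependencies — looks roughly like this. This is a generic JUnit-style sketch, not Kafka's actual test harness; `EmbeddedService` is a hypothetical stand-in for the embedded ZooKeeper/broker the real tests launch:

```java
// Hypothetical stand-in for an embedded ZooKeeper/broker instance.
// A real harness would bind ports and create temp dirs in start(),
// and tear them down in stop().
class EmbeddedService {
    private boolean running;
    void start() { running = true; }
    void stop()  { running = false; }
    boolean isRunning() { return running; }
}

// A self-contained test fixture: nothing needs to be running beforehand,
// because setUp() launches the dependency and tearDown() shuts it down.
class SelfContainedTest {
    private EmbeddedService service;

    void setUp()    { service = new EmbeddedService(); service.start(); }
    void tearDown() { service.stop(); } // runs even if the test threw

    void testSomething() {
        if (!service.isRunning())
            throw new AssertionError("dependency was not started");
    }

    // Drive the fixture lifecycle the way a test runner would.
    void run() {
        setUp();
        try { testSomething(); } finally { tearDown(); }
    }
}
```

This is why no services have to be started by hand before `./sbt test`: the fixture owns the whole lifecycle, and shutdown in the `finally`/teardown path is also where interrupted-I/O exceptions can get logged without failing the test.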
Re: first steps with the codebase
And my question before that regarding what services do I need to start and
how do I do this?

Please see my previous post, where I wrote how I tried to run the unit test
"testAsyncSendCanCorrectlyFailWithTimeout".

On Tue, Dec 11, 2012 at 4:24 PM, Jain Rahul wrote:
> You can check in config/server.properties. By default it writes in
> /tmp/kafka-logs/ .
RE: first steps with the codebase
You can check in config/server.properties. By default it writes in
/tmp/kafka-logs/.

-----Original Message-----
From: S Ahmed [mailto:sahmed1...@gmail.com]
Sent: 12 December 2012 02:51
To: users@kafka.apache.org
Subject: Re: first steps with the codebase

help anyone? :)

Much much appreciated!
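For reference, the default Rahul mentions corresponds to a line like the following in config/server.properties. This is a sketch: the exact key name varies by Kafka version (newer releases use `log.dirs`, older ones `log.dir`), so check the file shipped with your checkout:

```properties
# Data directory for a broker started from the shipped shell scripts.
# Note this applies to bin/kafka-server-start.sh config/server.properties;
# the unit tests ignore this file and use their own randomized /tmp dirs.
log.dirs=/tmp/kafka-logs
```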
Re: first steps with the codebase
help anyone? :)

Much much appreciated!

On Tue, Dec 11, 2012 at 12:03 AM, S Ahmed wrote:
> BTW, where exactly will the broker be writing these messages? Is it in a
> /tmp folder?
Re: first steps with the codebase
BTW, where exactly will the broker be writing these messages? Is it in a
/tmp folder?

On Tue, Dec 11, 2012 at 12:02 AM, S Ahmed wrote:
> Neha,
>
> But what do I need to start before running the tests? I tried to run the
> test "testAsyncSendCanCorrectlyFailWithTimeout" but I got this:
Re: first steps with the codebase
Neha,

But what do I need to start before running the tests? I tried to run the
test "testAsyncSendCanCorrectlyFailWithTimeout" but I got this:

[2012-12-11 00:01:08,974] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0002, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
[2012-12-11 00:01:11,231] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0001, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
[2012-12-11 00:01:26,561] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0003, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
[2012-12-11 00:01:26,563] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0004, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
[2012-12-11 00:01:30,661] ERROR [TopicChangeListener on Controller 1]: Error while handling new topic (kafka.controller.PartitionStateMachine$TopicChangeListener:102)
java.lang.NullPointerException
    at scala.collection.JavaConversions$JListWrapper.iterator(JavaConversions.scala:524)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
    at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:521)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:176)
    at scala.collection.JavaConversions$JListWrapper.foldLeft(JavaConversions.scala:521)
    at scala.collection.TraversableOnce$class.$div$colon(TraversableOnce.scala:139)
    at scala.collection.JavaConversions$JListWrapper.$div$colon(JavaConversions.scala:521)
    at scala.collection.generic.Addable$class.$plus$plus(Addable.scala:54)
    at scala.collection.immutable.Set$EmptySet$.$plus$plus(Set.scala:47)
    at scala.collection.TraversableOnce$class.toSet(TraversableOnce.scala:436)
    at scala.collection.JavaConversions$JListWrapper.toSet(JavaConversions.scala:521)
    at kafka.controller.PartitionStateMachine$TopicChangeListener.liftedTree1$1(PartitionStateMachine.scala:337)
    at kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:335)
    at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:570)
    at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
Disconnected from the target VM, address: '127.0.0.1:64026', transport: 'socket'

On Mon, Dec 10, 2012 at 11:54 PM, Neha Narkhede wrote:
> You can take a look at one of the producer tests and attach breakpoints
> in the code. Ensure you pick the Debug Test instead of Run Test option.
Re: first steps with the codebase
You can take a look at one of the producer tests and attach breakpoints in
the code. Ensure you pick the Debug Test option instead of Run Test.

Thanks,
Neha

On Mon, Dec 10, 2012 at 7:31 PM, S Ahmed wrote:
> Hi,
>
> So I followed the instructions from here:
> https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
>
> So I pulled down the latest from GitHub and ran:
>
>     sbt
>     > update
>     > idea
>
> opened it up in IDEA, and it builds fine in IDEA too (version 12).
>
> Everything is fine so far.
>
> Questions:
>
> From just using the IDE, how can I start the necessary services so I can
> debug a producer call — so I can trace the code line by line as it
> executes to create a message, and then set a breakpoint on the kafka
> server side of things to see how it goes about processing an inbound
> message?
>
> Is this possible, or is the general workflow first starting the services
> using some .sh scripts?
>
> My goal here is to be able to set breakpoints on both the producer and
> broker side of things.
>
> Much appreciated!
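For the .sh-script workflow asked about in the original post, one option (a sketch not discussed in this thread) is to start the broker JVM with the standard remote-debug agent and attach IDEA to it. This assumes the start scripts pass `KAFKA_OPTS` through to the JVM; if your version's scripts ignore that variable, add the flag to the script's java invocation directly:

```
# Sketch: start the broker listening for a debugger on port 5005
export KAFKA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
bin/kafka-server-start.sh config/server.properties
```

Then create a "Remote" debug run configuration in IDEA pointing at localhost:5005, set breakpoints on the broker side, and run the producer code with Debug — giving breakpoints on both ends at once.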