You can check in config/server.properties. By default it writes to /tmp/kafka-logs/.
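
For reference, the relevant entry in config/server.properties looks roughly like this (the exact property name depends on the version you checked out: older releases use log.dir, the 0.8 line uses log.dirs):

    # Directory (or comma-separated list of directories) under which the
    # broker stores its log segments
    log.dirs=/tmp/kafka-logs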

-----Original Message-----
From: S Ahmed [mailto:sahmed1...@gmail.com]
Sent: 12 December 2012 02:51
To: users@kafka.apache.org
Subject: Re: first steps with the codebase

help anyone? :)

Much much appreciated!


On Tue, Dec 11, 2012 at 12:03 AM, S Ahmed <sahmed1...@gmail.com> wrote:

> BTW, where exactly will the broker be writing these messages?  Is it
> in a /tmp folder?
>
>
> On Tue, Dec 11, 2012 at 12:02 AM, S Ahmed <sahmed1...@gmail.com> wrote:
>
>> Neha,
>>
>> But what do I need to start before running the tests? I tried to run
>> the test "testAsyncSendCanCorrectlyFailWithTimeout", but I got this:
>>
>> [2012-12-11 00:01:08,974] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0002, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
>> [2012-12-11 00:01:11,231] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0001, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
>> [2012-12-11 00:01:26,561] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0003, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
>> [2012-12-11 00:01:26,563] WARN EndOfStreamException: Unable to read additional data from client sessionid 0x13b8856456a0004, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn:634)
>> [2012-12-11 00:01:30,661] ERROR [TopicChangeListener on Controller 1]: Error while handling new topic (kafka.controller.PartitionStateMachine$TopicChangeListener:102)
>> java.lang.NullPointerException
>>         at scala.collection.JavaConversions$JListWrapper.iterator(JavaConversions.scala:524)
>>         at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
>>         at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:521)
>>         at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:176)
>>         at scala.collection.JavaConversions$JListWrapper.foldLeft(JavaConversions.scala:521)
>>         at scala.collection.TraversableOnce$class.$div$colon(TraversableOnce.scala:139)
>>         at scala.collection.JavaConversions$JListWrapper.$div$colon(JavaConversions.scala:521)
>>         at scala.collection.generic.Addable$class.$plus$plus(Addable.scala:54)
>>         at scala.collection.immutable.Set$EmptySet$.$plus$plus(Set.scala:47)
>>         at scala.collection.TraversableOnce$class.toSet(TraversableOnce.scala:436)
>>         at scala.collection.JavaConversions$JListWrapper.toSet(JavaConversions.scala:521)
>>         at kafka.controller.PartitionStateMachine$TopicChangeListener.liftedTree1$1(PartitionStateMachine.scala:337)
>>         at kafka.controller.PartitionStateMachine$TopicChangeListener.handleChildChange(PartitionStateMachine.scala:335)
>>         at org.I0Itec.zkclient.ZkClient$7.run(ZkClient.java:570)
>>         at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
>> Disconnected from the target VM, address: '127.0.0.1:64026', transport: 'socket'
>>
>>
>>
>>
>> On Mon, Dec 10, 2012 at 11:54 PM, Neha Narkhede
>> <neha.narkh...@gmail.com> wrote:
>>
>>> You can take a look at one of the producer tests and attach
>>> breakpoints in the code. Ensure you pick the Debug Test instead of
>>> Run Test option.
>>>
>>> Thanks,
>>> Neha
>>>
>>> On Mon, Dec 10, 2012 at 7:31 PM, S Ahmed <sahmed1...@gmail.com> wrote:
>>> > Hi,
>>> >
>>> > So I followed the instructions from here:
>>> > https://cwiki.apache.org/confluence/display/KAFKA/Developer+Setup
>>> >
>>> > So I pulled down the latest from github, ran sbt, and at the sbt prompt:
>>> > > update
>>> > > idea
>>> >
>>> > I opened it up in IDEA (version 12), and it builds fine there too.
>>> >
>>> > Everything is fine so far.
>>> >
>>> > Questions:
>>> >
>>> > From just using the IDE, how can I start the necessary services so I
>>> > can debug a producer call? I want to trace the code line by line as
>>> > it executes to create a message, and then set a breakpoint on the
>>> > Kafka server side of things to see how it goes about processing an
>>> > inbound message.
>>> >
>>> > Is this possible, or is the general workflow to first start the
>>> > services using some .sh scripts?
>>> >
>>> > My goal here is to be able to set breakpoints on both the producer
>>> > and broker side of things.
>>> >
>>> > Much appreciated!
>>>
>>
>>
>
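
For anyone following along, here is a minimal sketch of the kind of producer call you could step through from the IDE, per Neha's suggestion above. It assumes the 0.8-era Scala producer API and a broker already running locally; the object name, topic name, and broker address are placeholders rather than anything taken from the Kafka codebase or the thread:

    import java.util.Properties
    import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

    object DebugProducer {
      def main(args: Array[String]) {
        val props = new Properties()
        // Broker to bootstrap metadata from (assumes a broker on localhost:9092)
        props.put("metadata.broker.list", "localhost:9092")
        props.put("serializer.class", "kafka.serializer.StringEncoder")

        val producer = new Producer[String, String](new ProducerConfig(props))
        // Set a breakpoint on the next line and step into send() to follow the
        // message from the client into the broker's request handling.
        producer.send(new KeyedMessage[String, String]("test-topic", "hello"))
        producer.close()
      }
    }

Running this under Debug (rather than Run) lets you step into send() on the producer side, while breakpoints set in the broker code hit once the request arrives, which is essentially what the Debug Test option gives you for the existing producer tests.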
