Kafka 0.8 delete log failed

2013-12-10 Thread CuiLiang
Hi All,

I'm using the Kafka 0.8 release build with 1 partition, 1 replica. My OS is
Windows Server 2012 and my JDK is 1.7. I got the error below when Kafka deletes
logs. Any guidance would be of great help.

[2013-12-09 04:00:10,525] ERROR error in loggedRunnable (kafka.utils.Utils$)
kafka.common.KafkaStorageException: Deleting log segment 140332200 failed.
	at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:613)
	at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:608)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
	at kafka.log.Log.deleteSegments(Log.scala:608)
	at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:242)
	at kafka.log.LogManager$$anonfun$cleanupLogs$2.apply(LogManager.scala:277)
	at kafka.log.LogManager$$anonfun$cleanupLogs$2.apply(LogManager.scala:275)
	at scala.collection.Iterator$class.foreach(Iterator.scala:631)
	at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
	at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:521)
	at kafka.log.LogManager.cleanupLogs(LogManager.scala:275)
	at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:141)
	at kafka.utils.Utils$$anon$2.run(Utils.scala:68)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)
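The trace above fails inside the log-cleanup path, which removes expired segment files. On Windows, a file that is still open or memory-mapped cannot be deleted, which is a common cause of this error. A minimal Python sketch of the rename-then-delete pattern (roughly the workaround later Kafka versions adopted; this is an illustration, not Kafka's actual code):

```python
import os
import tempfile


def delete_segment(path):
    """Illustrative sketch: rename the segment out of the way first, then
    delete it, so a failed delete leaves a clearly marked .deleted file
    that can be retried later, rather than a half-removed log."""
    deleted_path = path + ".deleted"
    os.rename(path, deleted_path)  # atomic on the same volume
    try:
        os.remove(deleted_path)
    except OSError:
        # On Windows the remove fails while another handle is still open;
        # the .deleted file can be cleaned up on a later pass.
        return False
    return True


# usage: create a fake segment file and delete it
segment = os.path.join(tempfile.mkdtemp(), "00000000000140332200.log")
open(segment, "wb").close()
ok = delete_segment(segment)
```

The key point is that the rename succeeds even when the final delete cannot, so the log's view of its segments stays consistent.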



-- 
Thanks,
Liang Cui


Re: Anyone working on a Kafka book?

2013-12-10 Thread S Ahmed
Is there a book or this was just an idea?


On Mon, Mar 25, 2013 at 12:42 PM, Chris Curtin curtin.ch...@gmail.comwrote:

 Thanks Jun,

 I've updated the example with this information.

 I've also removed some of the unnecessary newlines.

 Thanks,

 Chris


 On Mon, Mar 25, 2013 at 12:04 PM, Jun Rao jun...@gmail.com wrote:

  Chris,
 
  This looks good. One thing about partitioning. Currently, if a message
  doesn't have a key, we always use the random partitioner (regardless of
  what partitioner.class is set to).
 
  Thanks,
 
  Jun
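Jun's point above (a message without a key always goes through the random partitioner, regardless of partitioner.class) can be modeled with a toy sketch; this is an illustration of the described behavior, not Kafka's source:

```python
import random


def choose_partition(key, num_partitions, custom_partitioner=None):
    """Sketch of the 0.8 producer behavior Jun describes: a null key
    bypasses partitioner.class entirely and picks a random partition."""
    if key is None:
        return random.randrange(num_partitions)  # custom class ignored
    if custom_partitioner is not None:
        return custom_partitioner(key, num_partitions)
    return hash(key) % num_partitions  # default key-hash behavior


p = choose_partition(None, 4)                        # random partition
q = choose_partition("user-42", 4, lambda k, n: 0)   # custom partitioner used
```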
 
 
 



error from adding a partition

2013-12-10 Thread Yu, Libo
Hi folks,

I got this error when I tried to test the partition addition tool.
bin/kafka-add-partitions.sh --partition 1 --topic libotesttopic --zookeeper 
xx.xxx.xxx.xx:
adding partitions failed because of
kafka.admin.AdminUtils$.assignReplicasToBrokers(Lscala/collection/Seq;)Lscala/collection/Map;
java.lang.NoSuchMethodError: kafka.admin.AdminUtils$.assignReplicasToBrokers(Lscala/collection/Seq;)Lscala/collection/Map;
	at kafka.admin.AddPartitionsCommand$.addPartitions(AddPartitionsCommand.scala:90)
	at kafka.admin.AddPartitionsCommand$.main(AddPartitionsCommand.scala:68)
	at kafka.admin.AddPartitionsCommand.main(AddPartitionsCommand.scala)

Did I miss anything here? Thanks.

Regards,

Libo



Re: Anyone working on a Kafka book?

2013-12-10 Thread Steve Morin
I forget, but I think Chetan was with O'Reilly.

 On Dec 10, 2013, at 7:01, S Ahmed sahmed1...@gmail.com wrote:
 
 Is there a book or this was just an idea?
 
 


Data generator loses some data if Kafka is restarted

2013-12-10 Thread Nishant Kumar
Hi All,

I am using kafka 0.8.


My producer configuration is as follows:

kafka8.bytearray.producer.type=sync

kafka8.producer.batch.num.messages=100

kafka8.producer.topic.metadata.refresh.interval.ms=60

kafka8.producer.retry.backoff.ms=100

kafka8.producer.message.send.max.retries=3

My Kafka server.properties settings are:

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=500

# The maximum amount of time a message can sit in a log before we
force a flush
log.flush.interval.ms=100

# Per-topic overrides for log.flush.interval.ms
#log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000


The sync property is specified in the producer.properties file:

# specifies whether the messages are sent asynchronously (async)
or synchronously (sync)
producer.type=sync



My consumer runs in a separate jar. The consumer config is:

zookeeper.connect=IP
group.id=consumerGroup
fetch.message.max.bytes=10
zookeeper.session.timeout.ms=6
auto.offset.reset=smallest
zookeeper.sync.time.ms=200
auto.commit.enable=false

If my data generator and consumer are running in parallel and Kafka is
suddenly restarted, fewer records are consumed than expected.

E.g., if I set the number of records to produce to 3000, after that it
throws an exception. My consumer runs in parallel to that; meanwhile, if I
restart Kafka, my consumer is only able to get approximately 2400 records.
Approximately 600 records are missing even though I am running Kafka in
sync mode.

I am not able to figure out why this data loss is happening. If you have
any idea regarding this, please help me understand what I am missing here.

Regards,

Nishant Kumar


Re: error from adding a partition

2013-12-10 Thread Jun Rao
Tried this on the 0.8.0 release and it works for me. Could you make sure
there are no duplicated kafka jars?

Thanks,

Jun


On Tue, Dec 10, 2013 at 7:08 AM, Yu, Libo libo...@citi.com wrote:





Re: Data generator loses some data if Kafka is restarted

2013-12-10 Thread Jun Rao
You will need to configure request.required.acks properly. See
http://kafka.apache.org/documentation.html#producerconfigs for details.
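For reference, the 0.8 producer settings Jun is pointing at look roughly like this (the values below are illustrative; -1 waits for all in-sync replicas to acknowledge, which gives the strongest durability):

```
# producer.properties (illustrative values)
producer.type=sync
# 0 = no ack, 1 = leader ack only, -1 = wait for all in-sync replicas
request.required.acks=-1
message.send.max.retries=3
retry.backoff.ms=100
```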

Thanks,

Jun


On Tue, Dec 10, 2013 at 1:55 AM, Nishant Kumar nish.a...@gmail.com wrote:




Re: Anyone working on a Kafka book?

2013-12-10 Thread Steve Morin
I'll let chetan comment if he's up for it.
-Steve


On Tue, Dec 10, 2013 at 8:40 AM, David Arthur mum...@gmail.com wrote:

 There was some talk a few months ago, not sure what the current status is.


 On 12/10/13 10:01 AM, S Ahmed wrote:

 Is there a book or this was just an idea?







Re: Anyone working on a Kafka book?

2013-12-10 Thread chetan conikee
Hey Guys

Yes, Ben Lorica (O'Reilly) and I are planning to pen a "Beginning Kafka"
book. We only finalized this in late October and are hoping to start
mid-month.

Chetan


On Tue, Dec 10, 2013 at 8:45 AM, Steve Morin st...@stevemorin.com wrote:

 I'll let chetan comment if he's up for it.
 -Steve






Re: Anyone working on a Kafka book?

2013-12-10 Thread S Ahmed
Great, so it's not even at the MEAP stage then :(. Let me guess: it is going
to take 6 months to decide what animal to put on the cover! :)

Looking forward to it though!


On Tue, Dec 10, 2013 at 12:15 PM, chetan conikee coni...@gmail.com wrote:

 Hey Guys

 Yes, Ben Lorica (O'Reilly) and I are planning to pen a "Beginning Kafka"
 book. We only finalized this in late October and are hoping to start
 mid-month.

 Chetan





Re: Anyone working on a Kafka book?

2013-12-10 Thread Shafaq
Hey Guys,
   I would love to contribute to the book, especially the portion on
Kafka-Spark integration, or parts of Kafka in general.
   I'm building a Kafka-Spark real-time framework here at Gree Intl Inc,
processing on the order of MBs of data per second.

My profile:
   www.linkedin.com/in/shafaqabdullah/


Let me know; I'm open to helping in whatever way.


Regards,
S.Abdullah



On Tue, Dec 10, 2013 at 9:15 AM, chetan conikee coni...@gmail.com wrote:

 Hey Guys

 Yes, Ben Lorica (Oreilly) and I are planning to pen a Beginning Kafka
 book.
 We only finalized this late October are hoping to start this mid-month

 Chetan





-- 
Kind Regards,
Shafaq


Re: Anyone working on a Kafka book?

2013-12-10 Thread Steve Morin
Shafaq,
  What does the architecture of what you're building look like?
-Steve


On Tue, Dec 10, 2013 at 10:19 AM, Shafaq s.abdullah...@gmail.com wrote:

 Hey Guys,
I would love to contribute to the book specially in the portion of
 Kafka-Spark integration or parts of kafka in general.
Am building a Kafka-Spark Real-time framework here at Gree Intl Inc
 processing order of MBs of data per second.

 My profile:
www.linkedin.com/in/shafaqabdullah/


 Let me know, Im open in whatever ways.


 Regards,
 S.Abdullah







Re: Kafka 0.8 delete log failed

2013-12-10 Thread Jay Kreps
What is your configuration for data.dirs (the path where data is) and what
is the set of disks/volumes on the machine?

-Jay


On Tue, Dec 10, 2013 at 12:50 AM, CuiLiang cuilian...@gmail.com wrote:

 Hi All,

 I'm using the Kafka 0.8 release build with 1 partition, 1 replica. My OS is
 Windows Server 2012 and my JDK is 1.7. I got an error when Kafka deletes
 logs. Any guidance would be of great help.




Re: Partial Message Read by Consumer

2013-12-10 Thread Tom Brown
Having a partial message transfer over the network is by design in Kafka
0.7.x (I can't speak to 0.8.x, though it may still be the case).

When the request is made, you tell the server the partition number, the
byte offset into that partition, and the size of the response that you want.
The server finds that offset in the partition and sends N bytes back
(where N is the maximum response size specified). The server does not
inspect the contents of the reply to ensure that message boundaries line up
with the response size. This is by design, and the simplicity allows for
high throughput at the cost of higher client complexity. In practice it
means that the response often includes a partial message at the end, which
the client drops. It also means that if a single message is larger than
your maximum response size, you will not be able to process that message or
continue to the next one: each time you request it, the server will only
send the partial message, and the Kafka client will send the request again.

If I understand the high-level consumer configuration, the fetch.size
parameter should be what you need to adjust. Its default is 300K, but I
see you have it set to roughly 50MB. Is there any chance your message is
larger than that?
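Tom's description of the fetch contract can be modeled with a small sketch. Length-prefixed messages stand in for Kafka's wire format here purely for illustration: the broker returns a raw byte window of at most fetch_size, and the client keeps only complete messages, discarding the trailing partial one. A message longer than fetch_size therefore never parses completely and the consumer cannot make progress.

```python
import struct


def encode_log(messages):
    """Length-prefix each message: 4-byte big-endian size + payload."""
    return b"".join(struct.pack(">I", len(m)) + m for m in messages)


def fetch(log, offset, fetch_size):
    """Broker side: a dumb byte window, no message-boundary inspection."""
    return log[offset:offset + fetch_size]


def parse_complete(chunk):
    """Client side: keep complete messages, drop the trailing partial."""
    msgs, pos = [], 0
    while pos + 4 <= len(chunk):
        (size,) = struct.unpack(">I", chunk[pos:pos + 4])
        if pos + 4 + size > len(chunk):
            break  # partial message at the end: discard, re-fetch later
        msgs.append(chunk[pos + 4:pos + 4 + size])
        pos += 4 + size
    return msgs


log = encode_log([b"alpha", b"beta", b"a-much-longer-message"])
small = parse_complete(fetch(log, 0, 14))  # window cuts into "beta"
stuck = parse_complete(fetch(log, 0, 4))   # fetch_size < first message: stuck
```

With a 14-byte window only "alpha" survives; with a 4-byte window nothing parses, which is exactly the no-progress case Tom describes.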

--Tom


On Tue, Dec 10, 2013 at 1:52 PM, Guozhang Wang wangg...@gmail.com wrote:

 Hello Casey,

 What do you mean by part of a message is being read? Could you upload the
 output and also the log of the consumer here?

 Guozhang


 On Tue, Dec 10, 2013 at 12:26 PM, Sybrandy, Casey 
 casey.sybra...@six3systems.com wrote:

  Hello,
 
  First, I'm using version 0.7.2.
 
  I'm trying to read some messages from a broker, but looking at wireshark,
  it appears that only part of a message is being read by the consumer.
   After that, no other data is read and I can verify that there are 10
  messages on the broker.  I have the consumer configured as follows:
 
  kafka.zk.connectinfo=127.0.0.1
  kafka.zk.groupid=foo3
  kafka.topic=...
  fetch.size=52428800
  socket.buffersize=524288
 
  I only set socket.buffersize today to see if it helps.  Any help would be
  great because this is baffling, especially since this only started
  happening yesterday.
 
  Casey Sybrandy MSWE
  Six3Systems
  Cyber and Enterprise Systems Group
  www.six3systems.com
  301-206-6000 (Office)
  301-206-6020 (Fax)
  11820 West Market Place
  Suites N-P
  Fulton, MD. 20759
 



 --
 -- Guozhang



Re: Anyone working on a Kafka book?

2013-12-10 Thread Shafaq
Hi Steve,

   The first phase would be pretty simple: essentially hooking up the
Kafka-DStream consumer to perform KPI aggregation over the data streamed
from the Kafka broker cluster in real time.

  We would like to maximize throughput by choosing the right message
payload size, mapping Kafka topics/partitions correctly to Spark
RDDs/DStreams, and minimizing I/O.

Next, we would featurize the stream to develop machine learning models
using SVMs (Support Vector Machines), etc., to provide rich insights.

I'll be giving a talk on this soon, so stay tuned.

Regards,
S.Abdullah


On Tue, Dec 10, 2013 at 10:22 AM, Steve Morin st...@stevemorin.com wrote:

 Shafaq,
   What does the architecture of what you're building look like?
 -Steve







-- 
Kind Regards,
Shafaq


Re: How to set kafka path in zk

2013-12-10 Thread CuiLiang
Please try 10.237.0.1:2181,10.237.0.2:2181,10.237.0.3:2181/kafka.
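In other words, the chroot suffix is appended once, after the last host:port pair, not after every host (illustrative fragment):

```
# wrong: 10.237.0.1:2181/kafka,10.237.0.2:2181/kafka,10.237.0.3:2181/kafka
# right: the chroot path appears once, at the end of the host list
zookeeper.connect=10.237.0.1:2181,10.237.0.2:2181,10.237.0.3:2181/kafka
```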

Thanks,
Liang Cui


2013/12/6 Yonghui Zhao zhaoyong...@gmail.com

 Hi,

 If I don't want to register Kafka under the ZK root and instead want to put
 it under a namespace, for example kafka1:

 If I set only one host in the zk property, something like
 10.237.0.1:2181/kafka, it works.
 But if I set the zk property to 3 zk hosts, something like
 10.237.0.1:2181/kafka,10.237.0.2:2181/kafka,10.237.0.3:2181/kafka,

 Kafka won't start successfully. How do I solve it?




-- 
cuiliang
MSN : bypp1...@hotmail.com