[jira] [Created] (KAFKA-717) scala 2.10 build support

2013-01-21 Thread Viktor Taranenko (JIRA)
Viktor Taranenko created KAFKA-717:
--

 Summary: scala 2.10 build support
 Key: KAFKA-717
 URL: https://issues.apache.org/jira/browse/KAFKA-717
 Project: Kafka
  Issue Type: Improvement
  Components: packaging
Affects Versions: 0.8
Reporter: Viktor Taranenko






[jira] [Commented] (KAFKA-708) ISR becomes empty while marking a partition offline

2013-01-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559334#comment-13559334
 ] 

Jun Rao commented on KAFKA-708:
---

Thanks for the patch. Overall, looks good. A couple of minor comments:

1. PartitionStateMachine: The patch sets the ISR in ZK to empty every time the 
partition goes offline. This means that, in the most common case where we can 
elect another broker as the leader, we will need to update the ISR in ZK twice 
when the leader is gone: the first time to set it to empty and the second time 
to set it to the new leader. I am wondering if we should instead set the ISR in 
ZK to empty in OfflinePartitionLeaderSelector.selectLeader() just before we 
throw a PartitionOfflineException. This way, in the common case, we avoid an 
extra ZK write.

2. UtilTest.testCsvList(): assertTrue(emptyStringList!=null) should 
probably be assertTrue(emptyList!=null).
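
To make the suggestion in 1 concrete, here is a minimal sketch of that shape, 
with the empty-ISR write moved onto the no-live-replica path; the class and 
helper names are stand-ins, not the actual 0.8 code:

    case class LeaderAndIsr(leader: Int, isr: List[Int])
    class PartitionOfflineException(msg: String) extends RuntimeException(msg)

    object OfflineLeaderSelectorSketch {
      // Stand-in for the ISR update in ZooKeeper.
      def writeIsrToZk(topic: String, partition: Int, isr: List[Int]): Unit =
        println("ZK write: [%s,%d] isr=%s".format(topic, partition, isr))

      def selectLeader(topic: String, partition: Int, currentIsr: List[Int],
                       liveBrokers: Set[Int]): LeaderAndIsr =
        currentIsr.find(liveBrokers.contains) match {
          case Some(newLeader) =>
            // Common case: the caller persists the new leader and ISR in a
            // single ZK write.
            LeaderAndIsr(newLeader, currentIsr.filter(liveBrokers.contains))
          case None =>
            // Only the offline path pays the extra ZK write emptying the ISR.
            writeIsrToZk(topic, partition, Nil)
            throw new PartitionOfflineException(
              "No live broker in ISR for [%s,%d]".format(topic, partition))
        }
    }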


> ISR becomes empty while marking a partition offline
> ---
>
> Key: KAFKA-708
> URL: https://issues.apache.org/jira/browse/KAFKA-708
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8, 0.8.1
>Reporter: Swapnil Ghike
>Assignee: Neha Narkhede
>Priority: Blocker
>  Labels: bugs, p1
> Fix For: 0.8
>
> Attachments: kafka-708-v1.patch, kafka-request.log.2013-01-16-15
>
>
> Attached state change log shows that ISR becomes empty when a partition is 
> being marked as offline.



[jira] [Commented] (KAFKA-139) cross-compile multiple Scala versions and upgrade to SBT 0.12.1

2013-01-21 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559309#comment-13559309
 ] 

Joe Stein commented on KAFKA-139:
-

I committed the changes so that the start script looks in the right ~/.ivy2 
directories, along with related changes so that the jars match up.

I tested the quick start script and it looks good now on the 0.8 branch.

> cross-compile multiple Scala versions and upgrade to SBT 0.12.1
> ---
>
> Key: KAFKA-139
> URL: https://issues.apache.org/jira/browse/KAFKA-139
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8
>Reporter: Chris Burroughs
>  Labels: build
> Fix For: 0.8
>
> Attachments: kafka-sbt0-11-3-0.8.patch, kafka-sbt0-11-3-0.8-v2.patch, 
> kafka-sbt0-11-3-0.8-v3.patch, kafka-sbt0-11-3-0.8-v4.patch, 
> kafka-sbt0-11-3-0.8-v5-smeder.patch, kafka-sbt0-11-3-0.8-v6-smeder.patch, 
> kafka-sbt0-11-3.patch
>
>
> Since Scala does not maintain binary compatibility between versions, 
> organizations tend to have to move all of their code at the same time. It 
> would thus be very helpful if we could cross-build multiple Scala versions.
> http://code.google.com/p/simple-build-tool/wiki/CrossBuild
> Unclear if this would require KAFKA-134 or just work.
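
For reference, the cross-build itself reduces to a couple of settings in an 
SBT build definition. A minimal sketch in SBT 0.12 style, with an illustrative 
version list rather than Kafka's actual matrix:

    // build.sbt fragment; the version numbers are illustrative
    scalaVersion := "2.9.2"

    crossScalaVersions := Seq("2.8.0", "2.9.2", "2.10.0")

    // "sbt +package" then builds one artifact per listed Scala version,
    // each under its own per-version target directory.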



[jira] [Comment Edited] (KAFKA-139) cross-compile multiple Scala versions and upgrade to SBT 0.12.1

2013-01-21 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559282#comment-13559282
 ] 

Joe Stein edited comment on KAFKA-139 at 1/22/13 1:53 AM:
--

There are two issues I am fixing now in kafka-run-class.sh to get this working 
again on branch 0.8.

One is minor: one of the directory names now uses "-" instead of "_".

The other issue is that, since SBT 0.10.0, all of the project/boot jars now 
live under ~/.ivy2 and are no longer localized to the project.

I am changing the bash script to look in the ~/.ivy2 directories. We could 
also consider using assembly to build a standalone jar.

The first step is getting it working again; then we have to decide whether the 
fix I am making right now, pointing to ~/.ivy2, is what we want, or whether we 
want to start using assembly for a standalone jar instead of looping through 
the directories in ~/.ivy2.

  was (Author: charmalloc):
There are two issues I am fixing now in kafka-run-class.sh to get this 
working again on branch 0.8.

One is minor: one of the directory names now uses "-" instead of "_".

The other issue is that, since SBT 0.10.0, all of the project/build jars now 
live under ~/.ivy2 and are no longer localized to the project.

I am changing the bash script to look in the ~/.ivy2 directories. We could 
also consider using assembly to build a standalone jar.

The first step is getting it working again; then we have to decide whether the 
fix I am making right now, pointing to ~/.ivy2, is what we want, or whether we 
want to start using assembly for a standalone jar instead of looping through 
the directories in ~/.ivy2.
  
> cross-compile multiple Scala versions and upgrade to SBT 0.12.1
> ---
>
> Key: KAFKA-139
> URL: https://issues.apache.org/jira/browse/KAFKA-139
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8
>Reporter: Chris Burroughs
>  Labels: build
> Fix For: 0.8
>
> Attachments: kafka-sbt0-11-3-0.8.patch, kafka-sbt0-11-3-0.8-v2.patch, 
> kafka-sbt0-11-3-0.8-v3.patch, kafka-sbt0-11-3-0.8-v4.patch, 
> kafka-sbt0-11-3-0.8-v5-smeder.patch, kafka-sbt0-11-3-0.8-v6-smeder.patch, 
> kafka-sbt0-11-3.patch
>
>
> Since Scala does not maintain binary compatibility between versions, 
> organizations tend to have to move all of their code at the same time. It 
> would thus be very helpful if we could cross-build multiple Scala versions.
> http://code.google.com/p/simple-build-tool/wiki/CrossBuild
> Unclear if this would require KAFKA-134 or just work.



[jira] [Commented] (KAFKA-139) cross-compile multiple Scala versions and upgrade to SBT 0.12.1

2013-01-21 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559282#comment-13559282
 ] 

Joe Stein commented on KAFKA-139:
-

There are two issues I am fixing now in kafka-run-class.sh to get this working 
again on branch 0.8.

One is minor: one of the directory names now uses "-" instead of "_".

The other issue is that, since SBT 0.10.0, all of the project/build jars now 
live under ~/.ivy2 and are no longer localized to the project.

I am changing the bash script to look in the ~/.ivy2 directories. We could 
also consider using assembly to build a standalone jar.

The first step is getting it working again; then we have to decide whether the 
fix I am making right now, pointing to ~/.ivy2, is what we want, or whether we 
want to start using assembly for a standalone jar instead of looping through 
the directories in ~/.ivy2.

> cross-compile multiple Scala versions and upgrade to SBT 0.12.1
> ---
>
> Key: KAFKA-139
> URL: https://issues.apache.org/jira/browse/KAFKA-139
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8
>Reporter: Chris Burroughs
>  Labels: build
> Fix For: 0.8
>
> Attachments: kafka-sbt0-11-3-0.8.patch, kafka-sbt0-11-3-0.8-v2.patch, 
> kafka-sbt0-11-3-0.8-v3.patch, kafka-sbt0-11-3-0.8-v4.patch, 
> kafka-sbt0-11-3-0.8-v5-smeder.patch, kafka-sbt0-11-3-0.8-v6-smeder.patch, 
> kafka-sbt0-11-3.patch
>
>
> Since Scala does not maintain binary compatibility between versions, 
> organizations tend to have to move all of their code at the same time. It 
> would thus be very helpful if we could cross-build multiple Scala versions.
> http://code.google.com/p/simple-build-tool/wiki/CrossBuild
> Unclear if this would require KAFKA-134 or just work.



[jira] [Updated] (KAFKA-696) Fix toString() API for all requests to make logging easier to read

2013-01-21 Thread Sriram Subramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriram Subramanian updated KAFKA-696:
-

Attachment: cleanup-v2.patch

1. The reason I did not do that is that ReplicaRequest does not extend 
RequestResponse. I kept what was there as-is, but moved the logging of the 
request out to KafkaApis.
2. ProducerRequest now logs the size of each message set.
3.1 It was an old patch, and it looks like there were conflicts.
3.2 Each request now also prints the request name, so I moved these lines to 
the parent method.
4. Removed the deserializations and just logged the request object.
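
For illustration, the logging style this aims for amounts to a hand-written 
toString that prints field names and message-set sizes instead of dumping raw 
ByteBuffers. A sketch with made-up field names, not the actual request classes:

    import java.nio.ByteBuffer

    case class ProducerRequestSketch(correlationId: Int, clientId: String,
                                     data: Map[(String, Int), ByteBuffer]) {
      // Print the size of each message set rather than the buffer contents.
      override def toString: String = {
        val sizes = data.map { case ((topic, partition), buffer) =>
          "[%s,%d] -> %d bytes".format(topic, partition, buffer.remaining)
        }.mkString("; ")
        "Name: ProducerRequest; CorrelationId: %d; ClientId: %s; MessageSetSizes: %s"
          .format(correlationId, clientId, sizes)
      }
    }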

> Fix toString() API for all requests to make logging easier to read
> --
>
> Key: KAFKA-696
> URL: https://issues.apache.org/jira/browse/KAFKA-696
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Sriram Subramanian
> Attachments: cleanup-v1.patch, cleanup-v2.patch
>
>
> It will be useful to have consistent logging styles for all requests. Right 
> now, we depend on the default toString implementation and the problem is that 
> it is very hard to read and prints out unnecessary information like the 
> ByteBuffer.



[jira] [Created] (KAFKA-716) SimpleConsumerPerformance does not consume all available messages

2013-01-21 Thread John Fung (JIRA)
John Fung created KAFKA-716:
---

 Summary: SimpleConsumerPerformance does not consume all available 
messages
 Key: KAFKA-716
 URL: https://issues.apache.org/jira/browse/KAFKA-716
 Project: Kafka
  Issue Type: Bug
Reporter: John Fung


To reproduce the issue:

1. Start 1 zookeeper

2. Start 1 broker

3. Send some messages

4. Start SimpleConsumerPerformance to consume messages. The only way to consume 
all messages is to set the fetch-size to be greater than the log segment file 
size.

5. This output shows that SimpleConsumerPerformance consumes only 6 messages:

$ bin/kafka-run-class.sh kafka.perf.SimpleConsumerPerformance --server 
kafka://host1:9092 --topic topic_001 --fetch-size 2048 --partition 0
start.time, end.time, fetch.size, data.consumed.in.MB, MB.sec, 
data.consumed.in.nMsg, nMsg.sec
2013-01-21 15:09:21:124, 2013-01-21 15:09:21:165, 2048, 0.0059, 0.1429, 6, 
146.3415

6. This output shows that ConsoleConsumer consumes all 5500 messages (same test 
as above):

$ bin/kafka-run-class.sh kafka.consumer.ConsoleConsumer --zookeeper host2:2181 
--topic topic_001 --consumer-timeout-ms 5000   --formatter 
kafka.consumer.ChecksumMessageFormatter  --from-beginning | grep ^checksum | wc 
-l
Consumed 5500 messages
5500
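
For comparison, a consumer can drain a partition with a small fetch size by 
re-issuing fetches and advancing the offset until a fetch comes back empty. A 
rough sketch of that loop, based on the 0.8-era SimpleConsumer and 
FetchRequestBuilder API (exact signatures on the current branch may differ):

    import kafka.api.FetchRequestBuilder
    import kafka.consumer.SimpleConsumer

    object DrainPartitionSketch {
      def main(args: Array[String]) {
        val consumer = new SimpleConsumer("host1", 9092, 100000, 64 * 1024, "perf-sketch")
        val (topic, partition, fetchSize) = ("topic_001", 0, 2048)
        var offset = 0L
        var total = 0L
        var done = false
        while (!done) {
          val request = new FetchRequestBuilder()
            .clientId("perf-sketch")
            .addFetch(topic, partition, offset, fetchSize)
            .build()
          val messages = consumer.fetch(request).messageSet(topic, partition)
          if (messages.isEmpty)
            done = true  // empty fetch: caught up (or the next message exceeds fetchSize)
          else
            for (messageAndOffset <- messages) {
              total += 1
              offset = messageAndOffset.nextOffset  // advance past what we have read
            }
        }
        consumer.close()
        println("consumed %d messages".format(total))
      }
    }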




[jira] [Updated] (KAFKA-631) Implement log compaction

2013-01-21 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-631:


Attachment: KAFKA-631-v4.patch

Did some testing with multithreading, resulting in...

Patch v4:
1. Bug: log selection wasn't eliminating logs already being cleaned, which 
could lead to two simultaneous cleaner threads both cleaning the same log.
2. Improved logging to always include the thread number.
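
In sketch form, the fix for 1 means the selection step must exclude 
in-progress logs and mark its pick inside a single lock; the names here are 
illustrative, not the patch's actual code:

    import scala.collection.mutable

    object CleanerSelectionSketch {
      private val lock = new Object
      private val inProgress = mutable.Set[String]()

      // Atomically pick the dirtiest log that no other cleaner thread holds.
      def grabFilthiestLog(dirtyRatios: Map[String, Double]): Option[String] =
        lock.synchronized {
          val candidates = dirtyRatios.filter { case (log, _) => !inProgress(log) }
          val pick = if (candidates.isEmpty) None else Some(candidates.maxBy(_._2)._1)
          pick.foreach(inProgress += _)  // mark in-progress before releasing the lock
          pick
        }

      def doneCleaning(log: String): Unit =
        lock.synchronized { inProgress -= log }
    }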

> Implement log compaction
> 
>
> Key: KAFKA-631
> URL: https://issues.apache.org/jira/browse/KAFKA-631
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-631-v1.patch, KAFKA-631-v2.patch, 
> KAFKA-631-v3.patch, KAFKA-631-v4.patch
>
>
> Currently Kafka has only one way to bound the space of the log, namely by 
> deleting old segments. The policy that controls which segments are deleted 
> can be configured based either on the number of bytes to retain or the age of 
> the messages. This makes sense for event or log data which has no notion of 
> primary key. However lots of data has a primary key and consists of updates 
> by primary key. For this data it would be nice to be able to ensure that the 
> log contained at least the last version of every key.
> As an example, say that the Kafka topic contains a sequence of User Account 
> messages, each capturing the current state of a given user account. Rather 
> than simply discarding old segments, since the set of user accounts is 
> finite, it might make more sense to delete individual records that have been 
> made obsolete by a more recent update for the same key. This would ensure 
> that the topic contained at least the current state of each record.



[jira] [Updated] (KAFKA-570) Kafka should not need snappy jar at runtime

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-570:


Labels: bugs p4  (was: bugs)

> Kafka should not need snappy jar at runtime
> ---
>
> Key: KAFKA-570
> URL: https://issues.apache.org/jira/browse/KAFKA-570
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Swapnil Ghike
>Priority: Blocker
>  Labels: bugs, p4
> Fix For: 0.8.1
>
>
> CompressionFactory imports the snappy jar in a pattern match. The purpose of 
> importing it this way seems to be avoiding the import unless snappy 
> compression is actually required. However, Kafka throws a 
> ClassNotFoundException if the snappy jar is removed from lib_managed at 
> runtime. This exception can easily be seen by producing some data with the 
> console producer.
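
One way to get the behavior the description asks for is to defer class loading 
with reflection, so a missing jar fails only the snappy code path on first 
use. A sketch of that pattern (the stream class name follows snappy-java; the 
rest is illustrative, not Kafka's CompressionFactory):

    import java.io.OutputStream
    import java.util.zip.GZIPOutputStream

    object CompressionFactorySketch {
      def wrap(codec: String, out: OutputStream): OutputStream = codec match {
        case "gzip" => new GZIPOutputStream(out)
        case "snappy" =>
          // Reflection defers loading to this branch: without the snappy jar,
          // we fail here on first snappy use, not at class-load time.
          Class.forName("org.xerial.snappy.SnappyOutputStream")
            .getConstructor(classOf[OutputStream])
            .newInstance(out)
            .asInstanceOf[OutputStream]
        case other => throw new IllegalArgumentException("Unknown codec: " + other)
      }
    }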



[jira] [Updated] (KAFKA-598) decouple fetch size from max message size

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-598:


Labels: p4  (was: )

> decouple fetch size from max message size
> -
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8
>Reporter: Jun Rao
>Assignee: Joel Koshy
>Priority: Blocker
>  Labels: p4
> Attachments: KAFKA-598-v1.patch, KAFKA-598-v2.patch, 
> KAFKA-598-v3.patch
>
>
> Currently, a consumer has to set the fetch size larger than the max message 
> size. This increases the memory footprint on the consumer, especially when a 
> large number of topics/partitions are subscribed. By decoupling the fetch 
> size from the max message size, we can use a smaller fetch size for normal 
> consumption, and when hitting a large message (hopefully rare), we 
> automatically increase the fetch size to the max message size temporarily.
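
The proposed behavior in miniature: consume with the small default fetch size, 
and only on an oversized message retry that one fetch at the max. A 
self-contained toy, with a simulated log standing in for real fetches:

    object AdaptiveFetchSketch {
      // Simulated log: each entry is a message's size in bytes.
      val messageSizes = Vector(100, 120, 5000, 90)

      // Stand-in for one fetch: None when the message at `offset` does not fit
      // in `fetchSize` (the oversized case); Some(next offset) otherwise.
      def fetchOnce(offset: Int, fetchSize: Int): Option[Int] =
        messageSizes.lift(offset).filter(_ <= fetchSize).map(_ => offset + 1)

      def main(args: Array[String]) {
        val (defaultFetchSize, maxMessageSize) = (1024, 10000)
        var offset = 0
        while (offset < messageSizes.length) {
          offset = fetchOnce(offset, defaultFetchSize)
            .orElse(fetchOnce(offset, maxMessageSize))  // rare path: escalate once
            .getOrElse(sys.error("message at %d exceeds max message size".format(offset)))
        }
        println("consumed %d messages".format(messageSizes.length))
      }
    }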



[jira] [Updated] (KAFKA-513) Add state change log to Kafka brokers

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-513:


Labels: p2 replication tools  (was: replication tools)

> Add state change log to Kafka brokers
> -
>
> Key: KAFKA-513
> URL: https://issues.apache.org/jira/browse/KAFKA-513
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Swapnil Ghike
>Priority: Blocker
>  Labels: p2, replication, tools
> Fix For: 0.8
>
> Attachments: kafka-513-v1.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Once KAFKA-499 is checked in, every controller-to-broker communication can be 
> modelled as a state change for one or more partitions. Every state change 
> request will carry the controller epoch. If there is a problem with the state 
> of some partitions, it will be good to have a tool that can create a timeline 
> of requested and completed state changes. This will require each broker to 
> output a state change log that has entries like
> [2012-09-10 10:06:17,280] broker 1 received request LeaderAndIsr() for 
> partition [foo, 0] from controller 2, epoch 1
> [2012-09-10 10:06:17,350] broker 1 completed request LeaderAndIsr() for 
> partition [foo, 0] from controller 2, epoch 1
> On the controller, this will look like -
> [2012-09-10 10:06:17,198] controller 2, epoch 1, initiated state change 
> request LeaderAndIsr() for partition [foo, 0]
> We need a tool that can collect the state change log from all brokers and 
> create a per-partition timeline of state changes -
> [foo, 0]
> [2012-09-10 10:06:17,198] controller 2, epoch 1 initiated state change 
> request LeaderAndIsr() 
> [2012-09-10 10:06:17,280] broker 1 received request LeaderAndIsr() from 
> controller 2, epoch 1
> [2012-09-10 10:06:17,350] broker 1 completed request LeaderAndIsr() from 
> controller 2, epoch 1
> This JIRA involves adding the state change log to each broker and adding the 
> tool to create the timeline.
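
The core of such a tool is small: parse the timestamp and partition out of 
each collected line and group the lines into per-partition timelines. A 
sketch, with a regex that assumes exactly the line format shown above:

    object StateChangeTimelineSketch {
      // Matches lines like:
      // [2012-09-10 10:06:17,280] broker 1 received request LeaderAndIsr() for partition [foo, 0] from controller 2, epoch 1
      private val Line = """\[([^\]]+)\] (.*partition \[([^,]+), (\d+)\].*)""".r

      def timelines(lines: Seq[String]): Map[(String, Int), Seq[String]] =
        lines.collect {
          case Line(ts, rest, topic, part) =>
            ((topic, part.toInt), "[%s] %s".format(ts, rest))
        }.groupBy(_._1)
         // this timestamp format sorts correctly as a plain string
         .map { case (partition, events) => (partition, events.map(_._2).sorted) }
    }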



[jira] [Updated] (KAFKA-695) Broker shuts down due to attempt to read a closed index file

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-695:


Labels: p2  (was: )

> Broker shuts down due to attempt to read a closed index file
> 
>
> Key: KAFKA-695
> URL: https://issues.apache.org/jira/browse/KAFKA-695
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Priority: Blocker
>  Labels: p2
>
> Broker shuts down with the following error message -
> 2013/01/11 01:43:51.320 ERROR [KafkaApis] [request-expiration-task] [kafka] [] 
>  [KafkaApi-277] error when processing request 
> (service_metrics,2,39192,200)
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:613)
> at kafka.log.FileMessageSet.searchFor(FileMessageSet.scala:82)
> at kafka.log.LogSegment.translateOffset(LogSegment.scala:76)
> at kafka.log.LogSegment.read(LogSegment.scala:106)
> at kafka.log.Log.read(Log.scala:386)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:369)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:327)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:323)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:105)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> at scala.collection.immutable.Map$Map1.map(Map.scala:93)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:323)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:519)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:501)
> at 
> kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:222)
> at java.lang.Thread.run(Thread.java:619)
> 2013/01/11 01:43:52.815 INFO [Processor] [kafka-processor-10251-2] [kafka] [] 
>  Closing socket connection to /172.20.72.244.
> 2013/01/11 01:43:54.286 INFO [Processor] [kafka-processor-10251-3] [kafka] [] 
>  Closing socket connection to /172.20.72.243.
> 2013/01/11 01:43:54.385 ERROR [LogManager] [kafka-logflusher-1] [kafka] []  
> [Log Manager on Broker 277] Error flushing topic service_metrics
> java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
> at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:349)
> at 
> kafka.log.FileMessageSet$$anonfun$flush$1.apply$mcV$sp(FileMessageSet.scala:154)
> at 
> kafka.log.FileMessageSet$$anonfun$flush$1.apply(FileMessageSet.scala:154)
> at 
> kafka.log.FileMessageSet$$anonfun$flush$1.apply(FileMessageSet.scala:154)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
> at kafka.log.FileMessageSet.flush(FileMessageSet.scala:153)
> at kafka.log.LogSegment.flush(LogSegment.scala:151)
> at kafka.log.Log.flush(Log.scala:493)
> at 
> kafka.log.LogManager$$anonfun$kafka$log$LogManager$$flushDirtyLogs$2.apply(LogManager.scala:319)
> at 
> kafka.log.LogManager$$anonfun$kafka$log$LogManager$$flushDirtyLogs$2.apply(LogManager.scala:310)
> at scala.collection.Iterator$class.foreach(Iterator.scala:631)
> at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
> at 
> scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:495)
> at 
> kafka.log.LogManager.kafka$log$LogManager$$flushDirtyLogs(LogManager.scala:310)
> at 
> kafka.log.LogManager$$anonfun$startup$2.apply$mcV$sp(LogManager.scala:144)
> at kafka.utils.Utils$$anon$2.run(Utils.scala:66)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
> at 
> java.util

[jira] [Updated] (KAFKA-714) ConsoleConsumer throws SocketTimeoutException when fetching topic metadata

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-714:


Labels: bugs p1  (was: )

> ConsoleConsumer throws SocketTimeoutException when fetching topic metadata
> --
>
> Key: KAFKA-714
> URL: https://issues.apache.org/jira/browse/KAFKA-714
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8
>Reporter: John Fung
>Assignee: Jun Rao
>Priority: Blocker
>  Labels: bugs, p1
> Attachments: kafka-714.patch, kafka-714-reproduce-issue.patch
>
>
> Test Description:
> 1. 1 zookeeper
> 2. 3 brokers
> 3. Replication factor = 3
> 4. Partitions = 2
> 5. No. of topics = 250
> There is no problem sending messages to brokers. But ConsoleConsumer is 
> throwing a SocketTimeoutException when fetching topic metadata. Currently, 
> ConsoleConsumer doesn't provide a command-line argument to configure the 
> socket timeout.
> Exception:
> [2013-01-21 10:10:08,915] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(topic_0219, topic_0026, topic_0160, topic_0056, topic_0100, 
> topic_0146, topic_0103, topic_0179, topic_0078, topic_0098, topic_0102, 
> topic_0028, topic_0060, topic_0218, topic_0210, topic_0161, topic_0144, 
> topic_0101, topic_0104, topic_0186, topic_0040, topic_0027, topic_0093, 
> topic_0147, topic_0080, topic_0211, topic_0089, topic_0177, topic_0220, 
> topic_0097, topic_0079, topic_0187, topic_0105, topic_0178, topic_0096, 
> topic_0108, topic_0095, topic_0065, topic_0066, topic_0021, topic_0023, 
> topic_0109, topic_0058, topic_0092, topic_0149, topic_0150, topic_0250, 
> topic_0022, topic_0227, topic_0145, topic_0063, topic_0094, topic_0216, 
> topic_0185, topic_0057, topic_0141, topic_0215, topic_0184, topic_0024, 
> topic_0214, topic_0140, topic_0217, topic_0228, topic_0025, topic_0064, 
> topic_0044, topic_0043, topic_0152, topic_0009, topic_0029, topic_0151, 
> topic_0142, topic_0041, topic_0164, topic_0077, topic_0062, topic_0163, 
> topic_0046, topic_0061, topic_0190, topic_0162, topic_0143, topic_0165, 
> topic_0148, topic_0042, topic_0087, topic_0223, topic_0182, topic_0008, 
> topic_0132, topic_0204, topic_0007, topic_0067, topic_0181, topic_0169, 
> topic_0203, topic_0180, topic_0224, topic_0183, topic_0048, topic_0107, 
> topic_0069, topic_0130, topic_0106, topic_0047, topic_0068, topic_0222, 
> topic_0189, topic_0221, topic_0131, topic_0134, topic_0156, topic_0111, 
> topic_0246, topic_0110, topic_0245, topic_0171, topic_0240, topic_0010, 
> topic_0122, topic_0201, topic_0135, topic_0196, topic_0034, topic_0241, 
> topic_0012, topic_0230, topic_0082, topic_0188, topic_0195, topic_0166, 
> topic_0088, topic_0036, topic_0099, topic_0172, topic_0112, topic_0085, 
> topic_0202, topic_0123, topic_0011, topic_0115, topic_0084, topic_0121, 
> topic_0243, topic_0086, topic_0192, topic_0035, topic_0191, topic_0200, 
> topic_0242, topic_0231, topic_0133, topic_0229, topic_0116, topic_0167, 
> topic_0244, topic_0032, topic_0168, topic_0157, topic_0118, topic_0209, 
> topic_0045, topic_0226, topic_0119, topic_0076, topic_0117, topic_0006, 
> topic_0129, topic_0225, topic_0033, topic_0159, topic_0037, topic_0197, 
> topic_0030, topic_0049, topic_0205, topic_0238, topic_0004, topic_0153, 
> topic_0074, topic_0127, topic_0083, topic_0003, topic_0126, topic_0249, 
> topic_0158, topic_0005, topic_0081, topic_0155, topic_0031, topic_0198, 
> topic_0206, topic_0020, topic_0154, topic_0075, topic_0239, topic_0128, 
> topic_0212, topic_0017, topic_0054, topic_0174, topic_0073, topic_0072, 
> topic_0173, topic_0039, topic_0213, topic_0138, topic_0059, topic_0015, 
> topic_0055, topic_0052, topic_0237, topic_0038, topic_0091, topic_0236, 
> topic_0053, topic_0234, topic_0070, topic_0193, topic_0051, topic_0090, 
> topic_0248, topic_0125, topic_0002, topic_0050, topic_0247, topic_0137, 
> topic_0124, topic_0014, topic_0001, topic_0071, topic_0235, topic_0194, 
> topic_0120, topic_0232, topic_0175, topic_0208, topic_0170, topic_0114, 
> topic_0016, topic_0139, topic_0013, topic_0136, topic_0113, topic_0018, 
> topic_0233, topic_0019, topic_0176, topic_0199, topic_0207)] from broker 
> [id:1,host:esv4-app19.corp.linkedin.com,port:9091] failed 
> (kafka.client.ClientUtils$)
> java.net.SocketTimeoutException
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
> at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
> at kafka.utils.Utils$.read(Utils.scala:393)
> at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
> at kafka.network.Receive$class.readComplet

[jira] [Updated] (KAFKA-700) log client ip when we log each request on the broker

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-700:


Labels: p1  (was: )

> log client ip when we log each request on the broker
> 
>
> Key: KAFKA-700
> URL: https://issues.apache.org/jira/browse/KAFKA-700
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8
>Reporter: Jun Rao
>Assignee: Sriram Subramanian
>Priority: Blocker
>  Labels: p1
>
> For debugging purposes, it's useful to know the client IP from which a 
> request comes.



[jira] [Updated] (KAFKA-671) DelayedProduce requests should not hold full producer request data

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-671:


Labels: bugs p1  (was: )

> DelayedProduce requests should not hold full producer request data
> --
>
> Key: KAFKA-671
> URL: https://issues.apache.org/jira/browse/KAFKA-671
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Joel Koshy
>Assignee: Sriram Subramanian
>Priority: Blocker
>  Labels: bugs, p1
> Fix For: 0.8.1
>
> Attachments: outOfMemFix-v1.patch
>
>
> Per summary, this leads to unnecessary memory usage.



[jira] [Updated] (KAFKA-649) Cleanup log4j logging

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-649:


Labels: p4  (was: )

> Cleanup log4j logging
> -
>
> Key: KAFKA-649
> URL: https://issues.apache.org/jira/browse/KAFKA-649
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8
>Reporter: Jay Kreps
>Assignee: Jun Rao
>Priority: Blocker
>  Labels: p4
>
> Review the logs and do the following:
> 1. Fix confusing or duplicative messages
> 2. Assess that messages are at the right level (TRACE/DEBUG/INFO/WARN/ERROR)
> It would also be nice to add a log4j logger for the request logging (i.e. the 
> access log) and another for the controller state change log, since these 
> really have their own use cases.



[jira] [Updated] (KAFKA-708) ISR becomes empty while marking a partition offline

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-708:


Labels: bugs p1  (was: bugs)

> ISR becomes empty while marking a partition offline
> ---
>
> Key: KAFKA-708
> URL: https://issues.apache.org/jira/browse/KAFKA-708
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8, 0.8.1
>Reporter: Swapnil Ghike
>Assignee: Neha Narkhede
>Priority: Blocker
>  Labels: bugs, p1
> Fix For: 0.8
>
> Attachments: kafka-708-v1.patch, kafka-request.log.2013-01-16-15
>
>
> Attached state change log shows that ISR becomes empty when a partition is 
> being marked as offline.



[jira] [Updated] (KAFKA-708) ISR becomes empty while marking a partition offline

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-708:


Attachment: kafka-708-v1.patch

Changes in this patch include -

1. Allowing the ISR list to be empty. This can happen if all the brokers in the 
ISR fall out, which happens when all the replicas that host the partition are 
down. In other words, the partition is offline. The problem was that if the 
controller moves while the ISR is empty, or you restart the entire cluster, the 
code relied on a non-empty ISR.

2. Marking the partition offline by setting the leader to -1 in zookeeper. This 
is because, today, there is no way for an external tool to figure out the list 
of all offline partitions. If we were to build a Kafka cluster dashboard and 
list partitions and their metadata, we would want to know the leader for each 
partition. Until a new leader is elected, we continue to store the previous 
leader in zookeeper. If the partition goes offline and no new leader will ever 
come up, we still store the previous leader. This is not ideal, and it might be 
worth storing some value like -1 to denote an offline partition.

3. Cleaned up logging for a partition. There were several places in the code 
that used a custom string like "[%s, %d]" or "(%s, %d)" to print a partition. 
This makes it very hard to trace the state changes on a partition while 
troubleshooting. I changed everything in kafka.controller to standardize on the 
toString() API of TopicAndPartition. I'm assuming the rest of the code will get 
cleaned up as part of KAFKA-649.
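
In sketch form, changes 1 and 2 amount to letting the persisted leader-and-ISR 
record carry an empty ISR and a sentinel leader; the names and the ZooKeeper 
helper below are stand-ins for the real code:

    object OfflinePartitionSketch {
      val OfflineLeader = -1

      case class LeaderIsr(leader: Int, isr: List[Int])  // isr == Nil is now legal

      // Stand-in for the ZooKeeper write of the partition state.
      def writeToZk(topic: String, partition: Int, state: LeaderIsr): Unit =
        println("ZK write [%s,%d]: %s".format(topic, partition, state))

      // Mark a partition offline once every replica hosting it is down,
      // instead of leaving the stale previous leader in ZooKeeper.
      def markOffline(topic: String, partition: Int): Unit =
        writeToZk(topic, partition, LeaderIsr(OfflineLeader, Nil))

      // An external tool or dashboard can now detect offline partitions directly.
      def isOffline(state: LeaderIsr): Boolean = state.leader == OfflineLeader
    }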

> ISR becomes empty while marking a partition offline
> ---
>
> Key: KAFKA-708
> URL: https://issues.apache.org/jira/browse/KAFKA-708
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8, 0.8.1
>Reporter: Swapnil Ghike
>Assignee: Neha Narkhede
>Priority: Blocker
>  Labels: bugs
> Fix For: 0.8
>
> Attachments: kafka-708-v1.patch, kafka-request.log.2013-01-16-15
>
>
> Attached state change log shows that ISR becomes empty when a partition is 
> being marked as offline.



[jira] [Work started] (KAFKA-708) ISR becomes empty while marking a partition offline

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-708 started by Neha Narkhede.

> ISR becomes empty while marking a partition offline
> ---
>
> Key: KAFKA-708
> URL: https://issues.apache.org/jira/browse/KAFKA-708
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8, 0.8.1
>Reporter: Swapnil Ghike
>Assignee: Neha Narkhede
>Priority: Blocker
>  Labels: bugs
> Fix For: 0.8
>
> Attachments: kafka-request.log.2013-01-16-15
>
>
> Attached state change log shows that ISR becomes empty when a partition is 
> being marked as offline.



[jira] [Commented] (KAFKA-714) ConsoleConsumer throws SocketTimeoutException when fetching topic metadata

2013-01-21 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559142#comment-13559142
 ] 

Neha Narkhede commented on KAFKA-714:
-

+1 on Jun's patch

> ConsoleConsumer throws SocketTimeoutException when fetching topic metadata
> --
>
> Key: KAFKA-714
> URL: https://issues.apache.org/jira/browse/KAFKA-714
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8
>Reporter: John Fung
>Assignee: Jun Rao
>Priority: Blocker
> Attachments: kafka-714.patch, kafka-714-reproduce-issue.patch
>
>
> Test Description:
> 1. 1 zookeeper
> 2. 3 brokers
> 3. Replication factor = 3
> 4. Partitions = 2
> 5. No. of topics = 250
> There is no problem sending messages to brokers. But ConsoleConsumer is 
> throwing a SocketTimeoutException when fetching topic metadata. Currently, 
> ConsoleConsumer doesn't provide a command-line argument to configure the 
> socket timeout.
> Exception:
> [2013-01-21 10:10:08,915] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(topic_0219, topic_0026, topic_0160, topic_0056, topic_0100, 
> topic_0146, topic_0103, topic_0179, topic_0078, topic_0098, topic_0102, 
> topic_0028, topic_0060, topic_0218, topic_0210, topic_0161, topic_0144, 
> topic_0101, topic_0104, topic_0186, topic_0040, topic_0027, topic_0093, 
> topic_0147, topic_0080, topic_0211, topic_0089, topic_0177, topic_0220, 
> topic_0097, topic_0079, topic_0187, topic_0105, topic_0178, topic_0096, 
> topic_0108, topic_0095, topic_0065, topic_0066, topic_0021, topic_0023, 
> topic_0109, topic_0058, topic_0092, topic_0149, topic_0150, topic_0250, 
> topic_0022, topic_0227, topic_0145, topic_0063, topic_0094, topic_0216, 
> topic_0185, topic_0057, topic_0141, topic_0215, topic_0184, topic_0024, 
> topic_0214, topic_0140, topic_0217, topic_0228, topic_0025, topic_0064, 
> topic_0044, topic_0043, topic_0152, topic_0009, topic_0029, topic_0151, 
> topic_0142, topic_0041, topic_0164, topic_0077, topic_0062, topic_0163, 
> topic_0046, topic_0061, topic_0190, topic_0162, topic_0143, topic_0165, 
> topic_0148, topic_0042, topic_0087, topic_0223, topic_0182, topic_0008, 
> topic_0132, topic_0204, topic_0007, topic_0067, topic_0181, topic_0169, 
> topic_0203, topic_0180, topic_0224, topic_0183, topic_0048, topic_0107, 
> topic_0069, topic_0130, topic_0106, topic_0047, topic_0068, topic_0222, 
> topic_0189, topic_0221, topic_0131, topic_0134, topic_0156, topic_0111, 
> topic_0246, topic_0110, topic_0245, topic_0171, topic_0240, topic_0010, 
> topic_0122, topic_0201, topic_0135, topic_0196, topic_0034, topic_0241, 
> topic_0012, topic_0230, topic_0082, topic_0188, topic_0195, topic_0166, 
> topic_0088, topic_0036, topic_0099, topic_0172, topic_0112, topic_0085, 
> topic_0202, topic_0123, topic_0011, topic_0115, topic_0084, topic_0121, 
> topic_0243, topic_0086, topic_0192, topic_0035, topic_0191, topic_0200, 
> topic_0242, topic_0231, topic_0133, topic_0229, topic_0116, topic_0167, 
> topic_0244, topic_0032, topic_0168, topic_0157, topic_0118, topic_0209, 
> topic_0045, topic_0226, topic_0119, topic_0076, topic_0117, topic_0006, 
> topic_0129, topic_0225, topic_0033, topic_0159, topic_0037, topic_0197, 
> topic_0030, topic_0049, topic_0205, topic_0238, topic_0004, topic_0153, 
> topic_0074, topic_0127, topic_0083, topic_0003, topic_0126, topic_0249, 
> topic_0158, topic_0005, topic_0081, topic_0155, topic_0031, topic_0198, 
> topic_0206, topic_0020, topic_0154, topic_0075, topic_0239, topic_0128, 
> topic_0212, topic_0017, topic_0054, topic_0174, topic_0073, topic_0072, 
> topic_0173, topic_0039, topic_0213, topic_0138, topic_0059, topic_0015, 
> topic_0055, topic_0052, topic_0237, topic_0038, topic_0091, topic_0236, 
> topic_0053, topic_0234, topic_0070, topic_0193, topic_0051, topic_0090, 
> topic_0248, topic_0125, topic_0002, topic_0050, topic_0247, topic_0137, 
> topic_0124, topic_0014, topic_0001, topic_0071, topic_0235, topic_0194, 
> topic_0120, topic_0232, topic_0175, topic_0208, topic_0170, topic_0114, 
> topic_0016, topic_0139, topic_0013, topic_0136, topic_0113, topic_0018, 
> topic_0233, topic_0019, topic_0176, topic_0199, topic_0207)] from broker 
> [id:1,host:esv4-app19.corp.linkedin.com,port:9091] failed 
> (kafka.client.ClientUtils$)
> java.net.SocketTimeoutException
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
> at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
> at kafka.utils.Utils$.read(Utils.scala:393)
> at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
> at kafka.network.Receive$class.readCompletely(Transmission.scala:56)

[jira] [Updated] (KAFKA-714) ConsoleConsumer throws SocketTimeoutException when fetching topic metadata

2013-01-21 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-714:
--

Status: Patch Available  (was: Open)

> ConsoleConsumer throws SocketTimeoutException when fetching topic metadata
> --
>
> Key: KAFKA-714
> URL: https://issues.apache.org/jira/browse/KAFKA-714
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8
>Reporter: John Fung
>Assignee: Jun Rao
>Priority: Blocker
> Attachments: kafka-714.patch, kafka-714-reproduce-issue.patch
>
>
> Test Description:
> 1. 1 zookeeper
> 2. 3 brokers
> 3. Replication factor = 3
> 4. Partitions = 2
> 5. No. of topics = 250
> There is no problem sending messages to brokers. But ConsoleConsumer is 
> throwing a SocketTimeoutException when fetching topic metadata. Currently, 
> ConsoleConsumer doesn't provide a command-line argument to configure the 
> socket timeout.
> Exception:
> [2013-01-21 10:10:08,915] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(topic_0219, topic_0026, topic_0160, topic_0056, topic_0100, 
> topic_0146, topic_0103, topic_0179, topic_0078, topic_0098, topic_0102, 
> topic_0028, topic_0060, topic_0218, topic_0210, topic_0161, topic_0144, 
> topic_0101, topic_0104, topic_0186, topic_0040, topic_0027, topic_0093, 
> topic_0147, topic_0080, topic_0211, topic_0089, topic_0177, topic_0220, 
> topic_0097, topic_0079, topic_0187, topic_0105, topic_0178, topic_0096, 
> topic_0108, topic_0095, topic_0065, topic_0066, topic_0021, topic_0023, 
> topic_0109, topic_0058, topic_0092, topic_0149, topic_0150, topic_0250, 
> topic_0022, topic_0227, topic_0145, topic_0063, topic_0094, topic_0216, 
> topic_0185, topic_0057, topic_0141, topic_0215, topic_0184, topic_0024, 
> topic_0214, topic_0140, topic_0217, topic_0228, topic_0025, topic_0064, 
> topic_0044, topic_0043, topic_0152, topic_0009, topic_0029, topic_0151, 
> topic_0142, topic_0041, topic_0164, topic_0077, topic_0062, topic_0163, 
> topic_0046, topic_0061, topic_0190, topic_0162, topic_0143, topic_0165, 
> topic_0148, topic_0042, topic_0087, topic_0223, topic_0182, topic_0008, 
> topic_0132, topic_0204, topic_0007, topic_0067, topic_0181, topic_0169, 
> topic_0203, topic_0180, topic_0224, topic_0183, topic_0048, topic_0107, 
> topic_0069, topic_0130, topic_0106, topic_0047, topic_0068, topic_0222, 
> topic_0189, topic_0221, topic_0131, topic_0134, topic_0156, topic_0111, 
> topic_0246, topic_0110, topic_0245, topic_0171, topic_0240, topic_0010, 
> topic_0122, topic_0201, topic_0135, topic_0196, topic_0034, topic_0241, 
> topic_0012, topic_0230, topic_0082, topic_0188, topic_0195, topic_0166, 
> topic_0088, topic_0036, topic_0099, topic_0172, topic_0112, topic_0085, 
> topic_0202, topic_0123, topic_0011, topic_0115, topic_0084, topic_0121, 
> topic_0243, topic_0086, topic_0192, topic_0035, topic_0191, topic_0200, 
> topic_0242, topic_0231, topic_0133, topic_0229, topic_0116, topic_0167, 
> topic_0244, topic_0032, topic_0168, topic_0157, topic_0118, topic_0209, 
> topic_0045, topic_0226, topic_0119, topic_0076, topic_0117, topic_0006, 
> topic_0129, topic_0225, topic_0033, topic_0159, topic_0037, topic_0197, 
> topic_0030, topic_0049, topic_0205, topic_0238, topic_0004, topic_0153, 
> topic_0074, topic_0127, topic_0083, topic_0003, topic_0126, topic_0249, 
> topic_0158, topic_0005, topic_0081, topic_0155, topic_0031, topic_0198, 
> topic_0206, topic_0020, topic_0154, topic_0075, topic_0239, topic_0128, 
> topic_0212, topic_0017, topic_0054, topic_0174, topic_0073, topic_0072, 
> topic_0173, topic_0039, topic_0213, topic_0138, topic_0059, topic_0015, 
> topic_0055, topic_0052, topic_0237, topic_0038, topic_0091, topic_0236, 
> topic_0053, topic_0234, topic_0070, topic_0193, topic_0051, topic_0090, 
> topic_0248, topic_0125, topic_0002, topic_0050, topic_0247, topic_0137, 
> topic_0124, topic_0014, topic_0001, topic_0071, topic_0235, topic_0194, 
> topic_0120, topic_0232, topic_0175, topic_0208, topic_0170, topic_0114, 
> topic_0016, topic_0139, topic_0013, topic_0136, topic_0113, topic_0018, 
> topic_0233, topic_0019, topic_0176, topic_0199, topic_0207)] from broker 
> [id:1,host:esv4-app19.corp.linkedin.com,port:9091] failed 
> (kafka.client.ClientUtils$)
> java.net.SocketTimeoutException
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
> at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
> at kafka.utils.Utils$.read(Utils.scala:393)
> at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
> at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
>

[jira] [Updated] (KAFKA-714) ConsoleConsumer throws SocketTimeoutException when fetching topic metadata

2013-01-21 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-714:
--

  Component/s: core
 Priority: Blocker  (was: Critical)
Affects Version/s: 0.8
 Assignee: Jun Rao

Marking it as a 0.8 blocker since it affects mirrorMaker.

> ConsoleConsumer throws SocketTimeoutException when fetching topic metadata
> --
>
> Key: KAFKA-714
> URL: https://issues.apache.org/jira/browse/KAFKA-714
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8
>Reporter: John Fung
>Assignee: Jun Rao
>Priority: Blocker
> Attachments: kafka-714.patch, kafka-714-reproduce-issue.patch
>
>
> Test Description:
> 1. 1 zookeeper
> 2. 3 brokers
> 3. Replication factor = 3
> 4. Partitions = 2
> 5. No. of topics = 250
> There is no problem sending messages to brokers. But ConsoleConsumer is 
> throwing a SocketTimeoutException when fetching topic metadata. Currently, 
> ConsoleConsumer doesn't provide a command-line argument to configure the 
> socket timeout.
> Exception:
> [2013-01-21 10:10:08,915] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(topic_0219, topic_0026, topic_0160, topic_0056, topic_0100, 
> topic_0146, topic_0103, topic_0179, topic_0078, topic_0098, topic_0102, 
> topic_0028, topic_0060, topic_0218, topic_0210, topic_0161, topic_0144, 
> topic_0101, topic_0104, topic_0186, topic_0040, topic_0027, topic_0093, 
> topic_0147, topic_0080, topic_0211, topic_0089, topic_0177, topic_0220, 
> topic_0097, topic_0079, topic_0187, topic_0105, topic_0178, topic_0096, 
> topic_0108, topic_0095, topic_0065, topic_0066, topic_0021, topic_0023, 
> topic_0109, topic_0058, topic_0092, topic_0149, topic_0150, topic_0250, 
> topic_0022, topic_0227, topic_0145, topic_0063, topic_0094, topic_0216, 
> topic_0185, topic_0057, topic_0141, topic_0215, topic_0184, topic_0024, 
> topic_0214, topic_0140, topic_0217, topic_0228, topic_0025, topic_0064, 
> topic_0044, topic_0043, topic_0152, topic_0009, topic_0029, topic_0151, 
> topic_0142, topic_0041, topic_0164, topic_0077, topic_0062, topic_0163, 
> topic_0046, topic_0061, topic_0190, topic_0162, topic_0143, topic_0165, 
> topic_0148, topic_0042, topic_0087, topic_0223, topic_0182, topic_0008, 
> topic_0132, topic_0204, topic_0007, topic_0067, topic_0181, topic_0169, 
> topic_0203, topic_0180, topic_0224, topic_0183, topic_0048, topic_0107, 
> topic_0069, topic_0130, topic_0106, topic_0047, topic_0068, topic_0222, 
> topic_0189, topic_0221, topic_0131, topic_0134, topic_0156, topic_0111, 
> topic_0246, topic_0110, topic_0245, topic_0171, topic_0240, topic_0010, 
> topic_0122, topic_0201, topic_0135, topic_0196, topic_0034, topic_0241, 
> topic_0012, topic_0230, topic_0082, topic_0188, topic_0195, topic_0166, 
> topic_0088, topic_0036, topic_0099, topic_0172, topic_0112, topic_0085, 
> topic_0202, topic_0123, topic_0011, topic_0115, topic_0084, topic_0121, 
> topic_0243, topic_0086, topic_0192, topic_0035, topic_0191, topic_0200, 
> topic_0242, topic_0231, topic_0133, topic_0229, topic_0116, topic_0167, 
> topic_0244, topic_0032, topic_0168, topic_0157, topic_0118, topic_0209, 
> topic_0045, topic_0226, topic_0119, topic_0076, topic_0117, topic_0006, 
> topic_0129, topic_0225, topic_0033, topic_0159, topic_0037, topic_0197, 
> topic_0030, topic_0049, topic_0205, topic_0238, topic_0004, topic_0153, 
> topic_0074, topic_0127, topic_0083, topic_0003, topic_0126, topic_0249, 
> topic_0158, topic_0005, topic_0081, topic_0155, topic_0031, topic_0198, 
> topic_0206, topic_0020, topic_0154, topic_0075, topic_0239, topic_0128, 
> topic_0212, topic_0017, topic_0054, topic_0174, topic_0073, topic_0072, 
> topic_0173, topic_0039, topic_0213, topic_0138, topic_0059, topic_0015, 
> topic_0055, topic_0052, topic_0237, topic_0038, topic_0091, topic_0236, 
> topic_0053, topic_0234, topic_0070, topic_0193, topic_0051, topic_0090, 
> topic_0248, topic_0125, topic_0002, topic_0050, topic_0247, topic_0137, 
> topic_0124, topic_0014, topic_0001, topic_0071, topic_0235, topic_0194, 
> topic_0120, topic_0232, topic_0175, topic_0208, topic_0170, topic_0114, 
> topic_0016, topic_0139, topic_0013, topic_0136, topic_0113, topic_0018, 
> topic_0233, topic_0019, topic_0176, topic_0199, topic_0207)] from broker 
> [id:1,host:esv4-app19.corp.linkedin.com,port:9091] failed 
> (kafka.client.ClientUtils$)
> java.net.SocketTimeoutException
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
> at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
> at kafka.utils.Utils$.read(Utils.scala:393)
> at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)

[jira] [Updated] (KAFKA-714) ConsoleConsumer throws SocketTimeoutException when fetching topic metadata

2013-01-21 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-714:
--

Attachment: kafka-714.patch

Thanks for opening the jira. Attached a patch that allows a socket timeout to 
be passed to getMetadata requests on the consumer side.

> ConsoleConsumer throws SocketTimeoutException when fetching topic metadata
> --
>
> Key: KAFKA-714
> URL: https://issues.apache.org/jira/browse/KAFKA-714
> Project: Kafka
>  Issue Type: Bug
>Reporter: John Fung
>Priority: Critical
> Attachments: kafka-714.patch, kafka-714-reproduce-issue.patch
>
>
> Test Description:
> 1. 1 zookeeper
> 2. 3 brokers
> 3. Replication factor = 3
> 4. Partitions = 2
> 5. No. of topics = 250
> There is no problem sending messages to brokers. But ConsoleConsumer is 
> throwing a SocketTimeoutException when fetching topic metadata. Currently, 
> ConsoleConsumer doesn't provide a command-line argument to configure the 
> socket timeout.
> Exception:
> [2013-01-21 10:10:08,915] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(topic_0219, topic_0026, topic_0160, topic_0056, topic_0100, 
> topic_0146, topic_0103, topic_0179, topic_0078, topic_0098, topic_0102, 
> topic_0028, topic_0060, topic_0218, topic_0210, topic_0161, topic_0144, 
> topic_0101, topic_0104, topic_0186, topic_0040, topic_0027, topic_0093, 
> topic_0147, topic_0080, topic_0211, topic_0089, topic_0177, topic_0220, 
> topic_0097, topic_0079, topic_0187, topic_0105, topic_0178, topic_0096, 
> topic_0108, topic_0095, topic_0065, topic_0066, topic_0021, topic_0023, 
> topic_0109, topic_0058, topic_0092, topic_0149, topic_0150, topic_0250, 
> topic_0022, topic_0227, topic_0145, topic_0063, topic_0094, topic_0216, 
> topic_0185, topic_0057, topic_0141, topic_0215, topic_0184, topic_0024, 
> topic_0214, topic_0140, topic_0217, topic_0228, topic_0025, topic_0064, 
> topic_0044, topic_0043, topic_0152, topic_0009, topic_0029, topic_0151, 
> topic_0142, topic_0041, topic_0164, topic_0077, topic_0062, topic_0163, 
> topic_0046, topic_0061, topic_0190, topic_0162, topic_0143, topic_0165, 
> topic_0148, topic_0042, topic_0087, topic_0223, topic_0182, topic_0008, 
> topic_0132, topic_0204, topic_0007, topic_0067, topic_0181, topic_0169, 
> topic_0203, topic_0180, topic_0224, topic_0183, topic_0048, topic_0107, 
> topic_0069, topic_0130, topic_0106, topic_0047, topic_0068, topic_0222, 
> topic_0189, topic_0221, topic_0131, topic_0134, topic_0156, topic_0111, 
> topic_0246, topic_0110, topic_0245, topic_0171, topic_0240, topic_0010, 
> topic_0122, topic_0201, topic_0135, topic_0196, topic_0034, topic_0241, 
> topic_0012, topic_0230, topic_0082, topic_0188, topic_0195, topic_0166, 
> topic_0088, topic_0036, topic_0099, topic_0172, topic_0112, topic_0085, 
> topic_0202, topic_0123, topic_0011, topic_0115, topic_0084, topic_0121, 
> topic_0243, topic_0086, topic_0192, topic_0035, topic_0191, topic_0200, 
> topic_0242, topic_0231, topic_0133, topic_0229, topic_0116, topic_0167, 
> topic_0244, topic_0032, topic_0168, topic_0157, topic_0118, topic_0209, 
> topic_0045, topic_0226, topic_0119, topic_0076, topic_0117, topic_0006, 
> topic_0129, topic_0225, topic_0033, topic_0159, topic_0037, topic_0197, 
> topic_0030, topic_0049, topic_0205, topic_0238, topic_0004, topic_0153, 
> topic_0074, topic_0127, topic_0083, topic_0003, topic_0126, topic_0249, 
> topic_0158, topic_0005, topic_0081, topic_0155, topic_0031, topic_0198, 
> topic_0206, topic_0020, topic_0154, topic_0075, topic_0239, topic_0128, 
> topic_0212, topic_0017, topic_0054, topic_0174, topic_0073, topic_0072, 
> topic_0173, topic_0039, topic_0213, topic_0138, topic_0059, topic_0015, 
> topic_0055, topic_0052, topic_0237, topic_0038, topic_0091, topic_0236, 
> topic_0053, topic_0234, topic_0070, topic_0193, topic_0051, topic_0090, 
> topic_0248, topic_0125, topic_0002, topic_0050, topic_0247, topic_0137, 
> topic_0124, topic_0014, topic_0001, topic_0071, topic_0235, topic_0194, 
> topic_0120, topic_0232, topic_0175, topic_0208, topic_0170, topic_0114, 
> topic_0016, topic_0139, topic_0013, topic_0136, topic_0113, topic_0018, 
> topic_0233, topic_0019, topic_0176, topic_0199, topic_0207)] from broker 
> [id:1,host:esv4-app19.corp.linkedin.com,port:9091] failed 
> (kafka.client.ClientUtils$)
> java.net.SocketTimeoutException
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
> at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
> at kafka.utils.Utils$.read(Utils.scala:393)
> at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
> at kafka.network.Receive$class.readCompletely(Transmission.scala:56)

[jira] [Updated] (KAFKA-705) Controlled shutdown doesn't seem to work on more than one broker in a cluster

2013-01-21 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-705:
-

Attachment: kafka-705-incremental-v2.patch

Here is what I meant in my last comment.

> Controlled shutdown doesn't seem to work on more than one broker in a cluster
> -
>
> Key: KAFKA-705
> URL: https://issues.apache.org/jira/browse/KAFKA-705
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Joel Koshy
>Priority: Critical
>  Labels: bugs
> Attachments: kafka-705-incremental-v2.patch, kafka-705-v1.patch, 
> shutdown_brokers_eat.py, shutdown-command
>
>
> I wrote a script (attached here) to basically round-robin through the brokers 
> in a cluster, doing the following 2 operations on each of them -
> 1. Send the controlled shutdown admin command. If it succeeds,
> 2. Restart the broker.
> What I've observed is that only one broker is able to finish the above 
> successfully the first time around. For the rest of the iterations, no broker 
> is able to shut down using the admin command, and every single time it fails 
> with an error message stating the same number of leaders on every broker.



[jira] [Commented] (KAFKA-705) Controlled shutdown doesn't seem to work on more than one broker in a cluster

2013-01-21 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559121#comment-13559121
 ] 

Joel Koshy commented on KAFKA-705:
--

I committed the fix to 0.8 with a small edit: used the 
liveOrShuttingDownBrokers field.

Another small issue is that we send a request to stop the replica fetchers on 
the shutting-down broker even if controlled shutdown did not complete. This 
"prematurely" forces the broker out of the ISR of those partitions. I think it 
should be safe to avoid sending the stop-replica request if controlled shutdown 
has not completely moved leadership of the partitions off the shutting-down 
broker.
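
A minimal sketch of that guard, assuming hypothetical helpers 
(remainingLeaderCount, sendStopReplica) rather than the actual controller API:

    // Hypothetical sketch: only tell the shutting-down broker to stop its
    // replica fetchers once it no longer leads any partitions, so we do
    // not prematurely force it out of the ISR.
    class ControlledShutdownSketch(remainingLeaderCount: Int => Int,
                                   sendStopReplica: Int => Unit) {
      def maybeStopReplicas(brokerId: Int): Unit = {
        if (remainingLeaderCount(brokerId) == 0)
          sendStopReplica(brokerId)  // safe: leadership fully moved
        else
          println("Skipping stop-replica for broker " + brokerId +
                  ": controlled shutdown incomplete")
      }
    }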


> Controlled shutdown doesn't seem to work on more than one broker in a cluster
> -
>
> Key: KAFKA-705
> URL: https://issues.apache.org/jira/browse/KAFKA-705
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Joel Koshy
>Priority: Critical
>  Labels: bugs
> Attachments: kafka-705-v1.patch, shutdown_brokers_eat.py, 
> shutdown-command
>
>
> I wrote a script (attached here) to round-robin through the brokers in a 
> cluster, doing the following 2 operations on each of them -
> 1. Send the controlled shutdown admin command. If it succeeds,
> 2. Restart the broker
> What I've observed is that only one broker is able to finish the above 
> successfully the first time around. For the rest of the iterations, no broker 
> is able to shut down using the admin command, and every single time it fails 
> with an error message stating the same number of leaders on every broker. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-139) cross-compile multiple Scala versions and upgrade to SBT 0.12.1

2013-01-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559073#comment-13559073
 ] 

Jun Rao commented on KAFKA-139:
---

Joe, after your check-in I saw a couple of changes: (1) ./sbt package emitted 
a bunch of warnings during compilation; (2) bin/kafka-server-start.sh 
config/server.properties stopped working with the following error:

bin/kafka-server-start.sh config/server.properties 
Exception in thread "main" java.lang.NoClassDefFoundError: kafka/Kafka
Caused by: java.lang.ClassNotFoundException: kafka.Kafka
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: kafka.Kafka.  Program will exit.

It seems that kafka/Kafka.class is no longer in the Kafka jar.
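
For reference, the stack trace means the jar on the script's classpath no 
longer contains the server entry point. A minimal sketch of what such an entry 
point looks like (only the kafka.Kafka name comes from the trace; the body 
here is illustrative, and the real object also starts the broker):

    package kafka

    import java.io.FileInputStream
    import java.util.Properties

    // Sketch of the main class the start script invokes; if the jar on the
    // script's classpath lacks kafka/Kafka.class, the JVM fails exactly as
    // shown above.
    object Kafka {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.load(new FileInputStream(args(0)))  // e.g. server.properties
        println("Loaded broker config with %d entries".format(props.size))
      }
    }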

> cross-compile multiple Scala versions and upgrade to SBT 0.12.1
> ---
>
> Key: KAFKA-139
> URL: https://issues.apache.org/jira/browse/KAFKA-139
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8
>Reporter: Chris Burroughs
>  Labels: build
> Fix For: 0.8
>
> Attachments: kafka-sbt0-11-3-0.8.patch, kafka-sbt0-11-3-0.8-v2.patch, 
> kafka-sbt0-11-3-0.8-v3.patch, kafka-sbt0-11-3-0.8-v4.patch, 
> kafka-sbt0-11-3-0.8-v5-smeder.patch, kafka-sbt0-11-3-0.8-v6-smeder.patch, 
> kafka-sbt0-11-3.patch
>
>
> Since Scala does not maintain binary compatibility between versions, 
> organizations tend to have to move all of their code at the same time.  It 
> would thus be very helpful if we could cross-build multiple Scala versions.
> http://code.google.com/p/simple-build-tool/wiki/CrossBuild
> Unclear if this would require KAFKA-134 or just work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-139) cross-compile multiple Scala versions and upgrade to SBT 0.12.1

2013-01-21 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-139:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

added 2.8.2 and committed changes to 0.8 branch

> cross-compile multiple Scala versions and upgrade to SBT 0.12.1
> ---
>
> Key: KAFKA-139
> URL: https://issues.apache.org/jira/browse/KAFKA-139
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8
>Reporter: Chris Burroughs
>  Labels: build
> Fix For: 0.8
>
> Attachments: kafka-sbt0-11-3-0.8.patch, kafka-sbt0-11-3-0.8-v2.patch, 
> kafka-sbt0-11-3-0.8-v3.patch, kafka-sbt0-11-3-0.8-v4.patch, 
> kafka-sbt0-11-3-0.8-v5-smeder.patch, kafka-sbt0-11-3-0.8-v6-smeder.patch, 
> kafka-sbt0-11-3.patch
>
>
> Since Scala does not maintain binary compatibility between versions, 
> organizations tend to have to move all of their code at the same time.  It 
> would thus be very helpful if we could cross-build multiple Scala versions.
> http://code.google.com/p/simple-build-tool/wiki/CrossBuild
> Unclear if this would require KAFKA-134 or just work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-703) A fetch request in Fetch Purgatory can double count the bytes from the same delayed produce request

2013-01-21 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-703:


Fix Version/s: (was: 0.8)
   0.8.1

> A fetch request in Fetch Purgatory can double count the bytes from the same 
> delayed produce request
> ---
>
> Key: KAFKA-703
> URL: https://issues.apache.org/jira/browse/KAFKA-703
> Project: Kafka
>  Issue Type: Bug
>  Components: purgatory
>Affects Versions: 0.8
>Reporter: Sriram Subramanian
>Assignee: Sriram Subramanian
>Priority: Blocker
> Fix For: 0.8.1
>
>
> When a producer request is handled, the fetch purgatory is checked to ensure 
> any fetch requests are satisfied. When the produce request is satisfied, we do 
> the check again, and if the same fetch request is still in the fetch 
> purgatory, it ends up double counting the bytes received.
> Possible Solutions
> 1. In the delayed produce request case, do the check only after the produce 
> request is satisfied. This could potentially delay the fetch request from 
> being satisfied.
> 2. Remove the dependency of fetch requests on produce requests and just look 
> at the last logical log offset (which should mostly be cached). This would 
> need replica.fetch.min.bytes to be a number of messages rather than bytes. 
> This also helps KAFKA-671 in that we would no longer need to pass the 
> ProduceRequest object to the producer purgatory and hence not have to consume 
> any memory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
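
As an aside on the two solutions listed in the KAFKA-703 description above, a 
minimal Scala sketch of solution 2, with assumed names (not the actual 
purgatory code): if satisfaction is computed from the log's end offset rather 
than by accumulating bytes from each produce request, the same produce request 
can never be counted twice.

    // Sketch of "possible solution 2": decide whether a delayed fetch is
    // satisfied purely from log state, so re-checking it when a produce
    // request arrives and again when that request completes cannot double
    // count anything.
    class DelayedFetchSketch(fetchOffset: Long, minMessages: Long) {
      def isSatisfied(logEndOffset: Long): Boolean =
        logEndOffset - fetchOffset >= minMessages
    }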


[jira] [Commented] (KAFKA-715) NumberFormatException in PartitionStateInfo

2013-01-21 Thread Chris Riccomini (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559021#comment-13559021
 ] 

Chris Riccomini commented on KAFKA-715:
---

Indeed, it does!

> NumberFormatException in PartitionStateInfo
> ---
>
> Key: KAFKA-715
> URL: https://issues.apache.org/jira/browse/KAFKA-715
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8
>Reporter: Chris Riccomini
>Assignee: Neha Narkhede
>
> Hey Guys,
> During a broker restart, I got this exception:
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:host.name=eat1-qa466.corp.linkedin.com
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.version=1.6.0_21
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.vendor=Sun Microsystems Inc.
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.home=/export/apps/jdk/JDK-1_6_0_21/jre
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.class.path=/export/apps/jdk/JDK-1_6_0_21/lib/tools.jar:lib/activation-1.0.2.jar:lib/ant-1.6.5.jar:lib/aopalliance-1.0.jar:lib/cfg-2.8.0.jar:lib/cfg-api-6.6.6.jar:lib/cfg-impl-6.6.6.jar:lib/com.linkedin.customlibrary.j2ee-1.0.jar:lib/com.linkedin.customlibrary.mx4j-3.0.2.jar:lib/com.linkedin.customlibrary.xmsg-0.6.jar:lib/commons-beanutils-1.7.0.jar:lib/commons-cli-1.0.jar:lib/commons-lang-2.4.jar:lib/commons-logging-1.1.jar:lib/configuration-api-1.4.8.jar:lib/configuration-repository-impl-1.4.8.jar:lib/container-management-impl-1.1.15.jar:lib/container-server-1.1.15.jar:lib/emweb-impl-1.1.15.jar:lib/jaxen-1.1.1.jar:lib/jdom-1.0.jar:lib/jetty-6.1.26.jar:lib/jetty-management-6.1.26.jar:lib/jetty-naming-6.1.26.jar:lib/jetty-plus-6.1.26.jar:lib/jetty-util5-6.1.26.jar:lib/jetty-util-6.1.26.jar:lib/jmx-impl-1.4.8.jar:lib/json-simple-1.1.jar:lib/jsp-2.1-6.1.1.jar:lib/jsp-api-2.1-6.1.1.jar:lib/lispring-lispring-core-1.4.8.jar:lib/lispring-lispring-servlet-1.4.8.jar:lib/log4j-1.2.15.jar:lib/mail-1.3.0.jar:lib/mx4j-tools-3.0.2.jar:lib/servlet-api-2.5.jar:lib/spring-aop-3.0.3.jar:lib/spring-asm-3.0.3.jar:lib/spring-aspects-3.0.3.jar:lib/spring-beans-3.0.3.jar:lib/spring-context-3.0.3.jar:lib/spring-context-support-3.0.3.jar:lib/spring-core-3.0.3.jar:lib/spring-expression-3.0.3.jar:lib/spring-jdbc-3.0.3.jar:lib/spring-jms-3.0.3.jar:lib/spring-orm-3.0.3.jar:lib/spring-transaction-3.0.3.jar:lib/spring-web-3.0.3.jar:lib/spring-web-servlet-3.0.3.jar:lib/util-core-4.0.40.jar:lib/util-i18n-4.0.40.jar:lib/util-jmx-4.0.22.jar:lib/util-log-4.0.40.jar:lib/util-servlet-4.0.40.jar:lib/util-xmsg-4.0.40.jar:lib/xml-apis-1.3.04.jar
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.library.path=/export/apps/jdk/JDK-1_6_0_21/jre/lib/amd64/server:/export/apps/jdk/JDK-1_6_0_21/jre/lib/amd64:/export/apps/jdk/JDK-1_6_0_21/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.io.tmpdir=/export/content/glu/apps/kafka/i001/tmp
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.compiler=<NA>
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:os.name=Linux
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:os.arch=amd64
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:os.version=2.6.32-220.13.1.el6.x86_64
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:user.name=app
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:user.home=/home/app
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:user.dir=/export/content/glu/apps/kafka/i001
> 2013/01/21 19:21:10.919 INFO [ZooKeeper] [main] [kafka] []  Initiating client 
> connection, 
> connectString=eat1-app309.corp.linkedin.com:12913,eat1-app310.corp.linkedin.com:12913,eat1-app311.corp.linkedin.com:12913,eat1-app312.corp.linkedin.com:12913,eat1-app313.corp.linkedin.com:12913/kafka-samsa
>  sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@1bfdbab5
> 2013/01/21 19:21:10.932 INFO [ClientCnxn] [main-SendThread()] [kafka] []  
> Opening socket connection to server 
> eat1-app313.corp.linkedin.com/172.20.72.73:12913
> 2013/01/21 19:21:10.933 INFO [ClientCnxn] 
> [main-SendThread(eat1-app313.corp.linkedin.com:12913)] [kafka] []  Socket 
> connection established to eat1-app313.corp.linkedin.com

[jira] [Commented] (KAFKA-703) A fetch request in Fetch Purgatory can double count the bytes from the same delayed produce request

2013-01-21 Thread Sriram Subramanian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559018#comment-13559018
 ] 

Sriram Subramanian commented on KAFKA-703:
--

Can we move this jira to the next version, since we have decided to punt on 
this?

> A fetch request in Fetch Purgatory can double count the bytes from the same 
> delayed produce request
> ---
>
> Key: KAFKA-703
> URL: https://issues.apache.org/jira/browse/KAFKA-703
> Project: Kafka
>  Issue Type: Bug
>  Components: purgatory
>Affects Versions: 0.8
>Reporter: Sriram Subramanian
>Assignee: Sriram Subramanian
>Priority: Blocker
> Fix For: 0.8
>
>
> When a producer request is handled, the fetch purgatory is checked to ensure 
> any fetch requests are satisfied. When the produce request is satisfied, we do 
> the check again, and if the same fetch request is still in the fetch 
> purgatory, it ends up double counting the bytes received.
> Possible Solutions
> 1. In the delayed produce request case, do the check only after the produce 
> request is satisfied. This could potentially delay the fetch request from 
> being satisfied.
> 2. Remove the dependency of fetch requests on produce requests and just look 
> at the last logical log offset (which should mostly be cached). This would 
> need replica.fetch.min.bytes to be a number of messages rather than bytes. 
> This also helps KAFKA-671 in that we would no longer need to pass the 
> ProduceRequest object to the producer purgatory and hence not have to consume 
> any memory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-715) NumberFormatException in PartitionStateInfo

2013-01-21 Thread Swapnil Ghike (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559016#comment-13559016
 ] 

Swapnil Ghike commented on KAFKA-715:
-

Looks like a duplicate of KAFKA-708

> NumberFormatException in PartitionStateInfo
> ---
>
> Key: KAFKA-715
> URL: https://issues.apache.org/jira/browse/KAFKA-715
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8
>Reporter: Chris Riccomini
>Assignee: Neha Narkhede
>
> Hey Guys,
> During a broker restart, I got this exception:
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:host.name=eat1-qa466.corp.linkedin.com
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.version=1.6.0_21
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.vendor=Sun Microsystems Inc.
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.home=/export/apps/jdk/JDK-1_6_0_21/jre
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.class.path=/export/apps/jdk/JDK-1_6_0_21/lib/tools.jar:lib/activation-1.0.2.jar:lib/ant-1.6.5.jar:lib/aopalliance-1.0.jar:lib/cfg-2.8.0.jar:lib/cfg-api-6.6.6.jar:lib/cfg-impl-6.6.6.jar:lib/com.linkedin.customlibrary.j2ee-1.0.jar:lib/com.linkedin.customlibrary.mx4j-3.0.2.jar:lib/com.linkedin.customlibrary.xmsg-0.6.jar:lib/commons-beanutils-1.7.0.jar:lib/commons-cli-1.0.jar:lib/commons-lang-2.4.jar:lib/commons-logging-1.1.jar:lib/configuration-api-1.4.8.jar:lib/configuration-repository-impl-1.4.8.jar:lib/container-management-impl-1.1.15.jar:lib/container-server-1.1.15.jar:lib/emweb-impl-1.1.15.jar:lib/jaxen-1.1.1.jar:lib/jdom-1.0.jar:lib/jetty-6.1.26.jar:lib/jetty-management-6.1.26.jar:lib/jetty-naming-6.1.26.jar:lib/jetty-plus-6.1.26.jar:lib/jetty-util5-6.1.26.jar:lib/jetty-util-6.1.26.jar:lib/jmx-impl-1.4.8.jar:lib/json-simple-1.1.jar:lib/jsp-2.1-6.1.1.jar:lib/jsp-api-2.1-6.1.1.jar:lib/lispring-lispring-core-1.4.8.jar:lib/lispring-lispring-servlet-1.4.8.jar:lib/log4j-1.2.15.jar:lib/mail-1.3.0.jar:lib/mx4j-tools-3.0.2.jar:lib/servlet-api-2.5.jar:lib/spring-aop-3.0.3.jar:lib/spring-asm-3.0.3.jar:lib/spring-aspects-3.0.3.jar:lib/spring-beans-3.0.3.jar:lib/spring-context-3.0.3.jar:lib/spring-context-support-3.0.3.jar:lib/spring-core-3.0.3.jar:lib/spring-expression-3.0.3.jar:lib/spring-jdbc-3.0.3.jar:lib/spring-jms-3.0.3.jar:lib/spring-orm-3.0.3.jar:lib/spring-transaction-3.0.3.jar:lib/spring-web-3.0.3.jar:lib/spring-web-servlet-3.0.3.jar:lib/util-core-4.0.40.jar:lib/util-i18n-4.0.40.jar:lib/util-jmx-4.0.22.jar:lib/util-log-4.0.40.jar:lib/util-servlet-4.0.40.jar:lib/util-xmsg-4.0.40.jar:lib/xml-apis-1.3.04.jar
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.library.path=/export/apps/jdk/JDK-1_6_0_21/jre/lib/amd64/server:/export/apps/jdk/JDK-1_6_0_21/jre/lib/amd64:/export/apps/jdk/JDK-1_6_0_21/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.io.tmpdir=/export/content/glu/apps/kafka/i001/tmp
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:java.compiler=<NA>
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:os.name=Linux
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:os.arch=amd64
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:os.version=2.6.32-220.13.1.el6.x86_64
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:user.name=app
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:user.home=/home/app
> 2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
> environment:user.dir=/export/content/glu/apps/kafka/i001
> 2013/01/21 19:21:10.919 INFO [ZooKeeper] [main] [kafka] []  Initiating client 
> connection, 
> connectString=eat1-app309.corp.linkedin.com:12913,eat1-app310.corp.linkedin.com:12913,eat1-app311.corp.linkedin.com:12913,eat1-app312.corp.linkedin.com:12913,eat1-app313.corp.linkedin.com:12913/kafka-samsa
>  sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@1bfdbab5
> 2013/01/21 19:21:10.932 INFO [ClientCnxn] [main-SendThread()] [kafka] []  
> Opening socket connection to server 
> eat1-app313.corp.linkedin.com/172.20.72.73:12913
> 2013/01/21 19:21:10.933 INFO [ClientCnxn] 
> [main-SendThread(eat1-app313.corp.linkedin.com:12913)] [kafka] []  Socket 
> connection established to eat1-app313.co

[jira] [Created] (KAFKA-715) NumberFormatException in PartitionStateInfo

2013-01-21 Thread Chris Riccomini (JIRA)
Chris Riccomini created KAFKA-715:
-

 Summary: NumberFormatException in PartitionStateInfo
 Key: KAFKA-715
 URL: https://issues.apache.org/jira/browse/KAFKA-715
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 0.8
Reporter: Chris Riccomini
Assignee: Neha Narkhede


Hey Guys,

During a broker restart, I got this exception:

2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:host.name=eat1-qa466.corp.linkedin.com
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:java.version=1.6.0_21
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:java.vendor=Sun Microsystems Inc.
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:java.home=/export/apps/jdk/JDK-1_6_0_21/jre
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:java.class.path=/export/apps/jdk/JDK-1_6_0_21/lib/tools.jar:lib/activation-1.0.2.jar:lib/ant-1.6.5.jar:lib/aopalliance-1.0.jar:lib/cfg-2.8.0.jar:lib/cfg-api-6.6.6.jar:lib/cfg-impl-6.6.6.jar:lib/com.linkedin.customlibrary.j2ee-1.0.jar:lib/com.linkedin.customlibrary.mx4j-3.0.2.jar:lib/com.linkedin.customlibrary.xmsg-0.6.jar:lib/commons-beanutils-1.7.0.jar:lib/commons-cli-1.0.jar:lib/commons-lang-2.4.jar:lib/commons-logging-1.1.jar:lib/configuration-api-1.4.8.jar:lib/configuration-repository-impl-1.4.8.jar:lib/container-management-impl-1.1.15.jar:lib/container-server-1.1.15.jar:lib/emweb-impl-1.1.15.jar:lib/jaxen-1.1.1.jar:lib/jdom-1.0.jar:lib/jetty-6.1.26.jar:lib/jetty-management-6.1.26.jar:lib/jetty-naming-6.1.26.jar:lib/jetty-plus-6.1.26.jar:lib/jetty-util5-6.1.26.jar:lib/jetty-util-6.1.26.jar:lib/jmx-impl-1.4.8.jar:lib/json-simple-1.1.jar:lib/jsp-2.1-6.1.1.jar:lib/jsp-api-2.1-6.1.1.jar:lib/lispring-lispring-core-1.4.8.jar:lib/lispring-lispring-servlet-1.4.8.jar:lib/log4j-1.2.15.jar:lib/mail-1.3.0.jar:lib/mx4j-tools-3.0.2.jar:lib/servlet-api-2.5.jar:lib/spring-aop-3.0.3.jar:lib/spring-asm-3.0.3.jar:lib/spring-aspects-3.0.3.jar:lib/spring-beans-3.0.3.jar:lib/spring-context-3.0.3.jar:lib/spring-context-support-3.0.3.jar:lib/spring-core-3.0.3.jar:lib/spring-expression-3.0.3.jar:lib/spring-jdbc-3.0.3.jar:lib/spring-jms-3.0.3.jar:lib/spring-orm-3.0.3.jar:lib/spring-transaction-3.0.3.jar:lib/spring-web-3.0.3.jar:lib/spring-web-servlet-3.0.3.jar:lib/util-core-4.0.40.jar:lib/util-i18n-4.0.40.jar:lib/util-jmx-4.0.22.jar:lib/util-log-4.0.40.jar:lib/util-servlet-4.0.40.jar:lib/util-xmsg-4.0.40.jar:lib/xml-apis-1.3.04.jar
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:java.library.path=/export/apps/jdk/JDK-1_6_0_21/jre/lib/amd64/server:/export/apps/jdk/JDK-1_6_0_21/jre/lib/amd64:/export/apps/jdk/JDK-1_6_0_21/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:java.io.tmpdir=/export/content/glu/apps/kafka/i001/tmp
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:java.compiler=<NA>
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:os.name=Linux
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:os.arch=amd64
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:os.version=2.6.32-220.13.1.el6.x86_64
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:user.name=app
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:user.home=/home/app
2013/01/21 19:21:10.918 INFO [ZooKeeper] [main] [kafka] []  Client 
environment:user.dir=/export/content/glu/apps/kafka/i001
2013/01/21 19:21:10.919 INFO [ZooKeeper] [main] [kafka] []  Initiating client 
connection, 
connectString=eat1-app309.corp.linkedin.com:12913,eat1-app310.corp.linkedin.com:12913,eat1-app311.corp.linkedin.com:12913,eat1-app312.corp.linkedin.com:12913,eat1-app313.corp.linkedin.com:12913/kafka-samsa
 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@1bfdbab5
2013/01/21 19:21:10.932 INFO [ClientCnxn] [main-SendThread()] [kafka] []  
Opening socket connection to server 
eat1-app313.corp.linkedin.com/172.20.72.73:12913
2013/01/21 19:21:10.933 INFO [ClientCnxn] 
[main-SendThread(eat1-app313.corp.linkedin.com:12913)] [kafka] []  Socket 
connection established to eat1-app313.corp.linkedin.com/172.20.72.73:12913, 
initiating session
2013/01/21 19:21:10.963 INFO [ClientCnxn] 
[main-SendThread(eat1-app313.corp.linkedin.com:12913)] [kafka] []  Session 
establishment complete on server 
eat1-app313.corp.linkedin.com/172.20.72.73:12913, sessionid = 
0x53afd073784059c, negotiated timeout = 6000
2013/01/21 19:21:10.964 INFO [Z

[jira] Subscription: outstanding kafka patches

2013-01-21 Thread jira
Issue Subscription
Filter: outstanding kafka patches (60 issues)
The list of outstanding kafka patches
Subscriber: kafka-mailing-list

Key Summary
KAFKA-714   ConsoleConsumer throws SocketTimeoutException when fetching topic 
metadata
https://issues.apache.org/jira/browse/KAFKA-714
KAFKA-713   Update Hadoop producer for Kafka 0.8 changes
https://issues.apache.org/jira/browse/KAFKA-713
KAFKA-708   ISR becomes empty while marking a partition offline
https://issues.apache.org/jira/browse/KAFKA-708
KAFKA-705   Controlled shutdown doesn't seem to work on more than one broker in 
a cluster
https://issues.apache.org/jira/browse/KAFKA-705
KAFKA-696   Fix toString() API for all requests to make logging easier to read
https://issues.apache.org/jira/browse/KAFKA-696
KAFKA-682   java.lang.OutOfMemoryError: Java heap space
https://issues.apache.org/jira/browse/KAFKA-682
KAFKA-677   Retention process gives exception if an empty segment is chosen for 
collection
https://issues.apache.org/jira/browse/KAFKA-677
KAFKA-674   Clean Shutdown Testing - Log segments checksums mismatch
https://issues.apache.org/jira/browse/KAFKA-674
KAFKA-671   DelayedProduce requests should not hold full producer request data
https://issues.apache.org/jira/browse/KAFKA-671
KAFKA-652   Create testcases for clean shut-down
https://issues.apache.org/jira/browse/KAFKA-652
KAFKA-651   Create testcases on auto create topics
https://issues.apache.org/jira/browse/KAFKA-651
KAFKA-645   Create a shell script to run System Test with DEBUG details and 
"tee" console output to a file
https://issues.apache.org/jira/browse/KAFKA-645
KAFKA-637   Separate log4j environment variable from KAFKA_OPTS in 
kafka-run-class.sh
https://issues.apache.org/jira/browse/KAFKA-637
KAFKA-631   Implement log compaction
https://issues.apache.org/jira/browse/KAFKA-631
KAFKA-621   System Test 9051 : ConsoleConsumer doesn't receives any data for 20 
topics but works for 10
https://issues.apache.org/jira/browse/KAFKA-621
KAFKA-607   System Test Transient Failure (case 4011 Log Retention) - 
ConsoleConsumer receives less data
https://issues.apache.org/jira/browse/KAFKA-607
KAFKA-606   System Test Transient Failure (case 0302 GC Pause) - Log segments 
mismatched across replicas
https://issues.apache.org/jira/browse/KAFKA-606
KAFKA-604   Add missing metrics in 0.8
https://issues.apache.org/jira/browse/KAFKA-604
KAFKA-598   decouple fetch size from max message size
https://issues.apache.org/jira/browse/KAFKA-598
KAFKA-583   SimpleConsumerShell may receive less data inconsistently
https://issues.apache.org/jira/browse/KAFKA-583
KAFKA-552   No error messages logged for those failing-to-send messages from 
Producer
https://issues.apache.org/jira/browse/KAFKA-552
KAFKA-547   The ConsumerStats MBean name should include the groupid
https://issues.apache.org/jira/browse/KAFKA-547
KAFKA-530   kafka.server.KafkaApis: kafka.common.OffsetOutOfRangeException
https://issues.apache.org/jira/browse/KAFKA-530
KAFKA-493   High CPU usage on inactive server
https://issues.apache.org/jira/browse/KAFKA-493
KAFKA-479   ZK EPoll taking 100% CPU usage with Kafka Client
https://issues.apache.org/jira/browse/KAFKA-479
KAFKA-465   Performance test scripts - refactoring leftovers from tools to perf 
package
https://issues.apache.org/jira/browse/KAFKA-465
KAFKA-438   Code cleanup in MessageTest
https://issues.apache.org/jira/browse/KAFKA-438
KAFKA-419   Updated PHP client library to support kafka 0.7+
https://issues.apache.org/jira/browse/KAFKA-419
KAFKA-414   Evaluate mmap-based writes for Log implementation
https://issues.apache.org/jira/browse/KAFKA-414
KAFKA-411   Message Error in high cocurrent environment
https://issues.apache.org/jira/browse/KAFKA-411
KAFKA-404   When using chroot path, create chroot on startup if it doesn't exist
https://issues.apache.org/jira/browse/KAFKA-404
KAFKA-399   0.7.1 seems to show less performance than 0.7.0
https://issues.apache.org/jira/browse/KAFKA-399
KAFKA-398   Enhance SocketServer to Enable Sending Requests
https://issues.apache.org/jira/browse/KAFKA-398
KAFKA-397   kafka.common.InvalidMessageSizeException: null
https://issues.apache.org/jira/browse/KAFKA-397
KAFKA-388   Add a highly available consumer co-ordinator to a Kafka cluster
https://issues.apache.org/jira/browse/KAFKA-388
KAFKA-346   Don't call commitOffsets() during rebalance
https://issues.apache.org/jira/browse/KAFKA-346
KAFKA-345   Add a listener to ZookeeperConsumerConnector to get notified on 
rebalance events
https://issues.apache.org/jira/browse/KAFKA-345

[jira] [Updated] (KAFKA-631) Implement log compaction

2013-01-21 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps updated KAFKA-631:


Attachment: KAFKA-631-v3.patch

Attached patch v3. Two small changes:
1. Make memory usage more intuitive now that there is a read and write buffer 
for each cleaner thread. These are both fixed at 1MB per thread and taken out 
of the total buffer size given to cleaning.
2. Ensure that each new log segment is flushed before it is swapped into the 
log. Without this we can swap in a segment that is not on disk at all, delete 
the old segment, and then lose both in the event of a crash.
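
The ordering in change 2 matters; here is a minimal sketch of the safe 
sequence, with hypothetical types standing in for the patch's actual classes:

    trait CleanedSegment { def flush(): Unit }
    trait LogSketch {
      def replaceSegment(segment: CleanedSegment): Unit
      def deleteOldSegment(): Unit
    }

    // Sketch: make the cleaned segment durable before it is swapped in.
    // Swapping first leaves a window where a crash loses both the old
    // segment (already deleted) and the new one (never flushed).
    object SegmentSwapSketch {
      def swapInCleanedSegment(cleaned: CleanedSegment, log: LogSketch): Unit = {
        cleaned.flush()              // 1. force the new data to disk
        log.replaceSegment(cleaned)  // 2. only then swap it into the log
        log.deleteOldSegment()       // 3. the old segment can now go away
      }
    }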

> Implement log compaction
> 
>
> Key: KAFKA-631
> URL: https://issues.apache.org/jira/browse/KAFKA-631
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-631-v1.patch, KAFKA-631-v2.patch, 
> KAFKA-631-v3.patch
>
>
> Currently Kafka has only one way to bound the space of the log, namely by 
> deleting old segments. The policy that controls which segments are deleted 
> can be configured based either on the number of bytes to retain or the age of 
> the messages. This makes sense for event or log data which has no notion of 
> primary key. However lots of data has a primary key and consists of updates 
> by primary key. For this data it would be nice to be able to ensure that the 
> log contained at least the last version of every key.
> As an example, say that the Kafka topic contains a sequence of User Account 
> messages, each capturing the current state of a given user account. Rather 
> than simply discarding old segments, since the set of user accounts is 
> finite, it might make more sense to delete individual records that have been 
> made obsolete by a more recent update for the same key. This would ensure 
> that the topic contained at least the current state of each record.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
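
As a side note on the KAFKA-631 description above, a tiny Scala sketch 
(illustrative only, not the patch) of the proposed retention semantics: keep 
the most recent value for each key.

    import scala.collection.mutable.LinkedHashMap

    // Sketch: compact a stream of (key, value) updates down to the most
    // recent value per key, the way a compacted keyed topic (e.g. user
    // account state) would retain data.
    object CompactionSketch {
      def compact(records: Seq[(String, String)]): Seq[(String, String)] = {
        val latest = LinkedHashMap[String, String]()
        for ((key, value) <- records) latest(key) = value  // newer wins
        latest.toSeq
      }
    }

For example, compact(Seq(("u1", "a"), ("u2", "b"), ("u1", "c"))) yields 
Seq(("u1", "c"), ("u2", "b")) -- at least the current state of each key.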


[jira] [Updated] (KAFKA-714) ConsoleConsumer throws SocketTimeoutException when fetching topic metadata

2013-01-21 Thread John Fung (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Fung updated KAFKA-714:


Attachment: kafka-714-reproduce-issue.patch

Uploaded kafka-714-reproduce-issue.patch to reproduce the issue.

Please note this issue should be reproduced in a distributed environment. 
Therefore, system_test/cluster_config.json should be updated so that each 
entity's hostname points to the corresponding remote broker machine.

> ConsoleConsumer throws SocketTimeoutException when fetching topic metadata
> --
>
> Key: KAFKA-714
> URL: https://issues.apache.org/jira/browse/KAFKA-714
> Project: Kafka
>  Issue Type: Bug
>Reporter: John Fung
>Priority: Critical
> Attachments: kafka-714-reproduce-issue.patch
>
>
> Test Description:
> 1. 1 zookeeper
> 2. 3 brokers
> 3. Replication factor = 3
> 4. Partitions = 2
> 5. No. of topics = 250
> There is no problem sending messages to brokers. But ConsoleConsumer is 
> throwing SocketTimeoutException when fetching topic metadata. Currently, 
> ConsoleConsumer doesn't provide a command line argument to configure the 
> socket timeout.
> Exception:
> [2013-01-21 10:10:08,915] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(topic_0219, topic_0026, topic_0160, topic_0056, topic_0100, 
> topic_0146, topic_0103, topic_0179, topic_0078, topic_0098, topic_0102, 
> topic_0028, topic_0060, topic_0218, topic_0210, topic_0161, topic_0144, 
> topic_0101, topic_0104, topic_0186, topic_0040, topic_0027, topic_0093, 
> topic_0147, topic_0080, topic_0211, topic_0089, topic_0177, topic_0220, 
> topic_0097, topic_0079, topic_0187, topic_0105, topic_0178, topic_0096, 
> topic_0108, topic_0095, topic_0065, topic_0066, topic_0021, topic_0023, 
> topic_0109, topic_0058, topic_0092, topic_0149, topic_0150, topic_0250, 
> topic_0022, topic_0227, topic_0145, topic_0063, topic_0094, topic_0216, 
> topic_0185, topic_0057, topic_0141, topic_0215, topic_0184, topic_0024, 
> topic_0214, topic_0140, topic_0217, topic_0228, topic_0025, topic_0064, 
> topic_0044, topic_0043, topic_0152, topic_0009, topic_0029, topic_0151, 
> topic_0142, topic_0041, topic_0164, topic_0077, topic_0062, topic_0163, 
> topic_0046, topic_0061, topic_0190, topic_0162, topic_0143, topic_0165, 
> topic_0148, topic_0042, topic_0087, topic_0223, topic_0182, topic_0008, 
> topic_0132, topic_0204, topic_0007, topic_0067, topic_0181, topic_0169, 
> topic_0203, topic_0180, topic_0224, topic_0183, topic_0048, topic_0107, 
> topic_0069, topic_0130, topic_0106, topic_0047, topic_0068, topic_0222, 
> topic_0189, topic_0221, topic_0131, topic_0134, topic_0156, topic_0111, 
> topic_0246, topic_0110, topic_0245, topic_0171, topic_0240, topic_0010, 
> topic_0122, topic_0201, topic_0135, topic_0196, topic_0034, topic_0241, 
> topic_0012, topic_0230, topic_0082, topic_0188, topic_0195, topic_0166, 
> topic_0088, topic_0036, topic_0099, topic_0172, topic_0112, topic_0085, 
> topic_0202, topic_0123, topic_0011, topic_0115, topic_0084, topic_0121, 
> topic_0243, topic_0086, topic_0192, topic_0035, topic_0191, topic_0200, 
> topic_0242, topic_0231, topic_0133, topic_0229, topic_0116, topic_0167, 
> topic_0244, topic_0032, topic_0168, topic_0157, topic_0118, topic_0209, 
> topic_0045, topic_0226, topic_0119, topic_0076, topic_0117, topic_0006, 
> topic_0129, topic_0225, topic_0033, topic_0159, topic_0037, topic_0197, 
> topic_0030, topic_0049, topic_0205, topic_0238, topic_0004, topic_0153, 
> topic_0074, topic_0127, topic_0083, topic_0003, topic_0126, topic_0249, 
> topic_0158, topic_0005, topic_0081, topic_0155, topic_0031, topic_0198, 
> topic_0206, topic_0020, topic_0154, topic_0075, topic_0239, topic_0128, 
> topic_0212, topic_0017, topic_0054, topic_0174, topic_0073, topic_0072, 
> topic_0173, topic_0039, topic_0213, topic_0138, topic_0059, topic_0015, 
> topic_0055, topic_0052, topic_0237, topic_0038, topic_0091, topic_0236, 
> topic_0053, topic_0234, topic_0070, topic_0193, topic_0051, topic_0090, 
> topic_0248, topic_0125, topic_0002, topic_0050, topic_0247, topic_0137, 
> topic_0124, topic_0014, topic_0001, topic_0071, topic_0235, topic_0194, 
> topic_0120, topic_0232, topic_0175, topic_0208, topic_0170, topic_0114, 
> topic_0016, topic_0139, topic_0013, topic_0136, topic_0113, topic_0018, 
> topic_0233, topic_0019, topic_0176, topic_0199, topic_0207)] from broker 
> [id:1,host:esv4-app19.corp.linkedin.com,port:9091] failed 
> (kafka.client.ClientUtils$)
> java.net.SocketTimeoutException
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
> at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
> at kafka.utils.Utils

[jira] [Created] (KAFKA-714) ConsoleConsumer throws SocketTimeoutException when fetching topic metadata

2013-01-21 Thread John Fung (JIRA)
John Fung created KAFKA-714:
---

 Summary: ConsoleConsumer throws SocketTimeoutException when 
fetching topic metadata
 Key: KAFKA-714
 URL: https://issues.apache.org/jira/browse/KAFKA-714
 Project: Kafka
  Issue Type: Bug
Reporter: John Fung
Priority: Critical


Test Description:

1. 1 zookeeper
2. 3 brokers
3. Replication factor = 3
4. Partitions = 2
5. No. of topics = 250

There is no problem sending messages to brokers. But ConsoleConsumer is 
throwing SocketTimeoutException when fetching topic metadata. Currently, 
ConsoleConsumer doesn't provide a command line argument to configure the 
socket timeout.
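
A sketch of the missing knob, assuming the tool keeps using joptsimple for 
argument parsing as the existing console tools do; the --socket-timeout-ms 
option shown here is hypothetical and does not exist in 0.8:

    import joptsimple.OptionParser

    // Hypothetical sketch: expose the consumer's socket.timeout.ms setting
    // as a command line option instead of hard-coding the default.
    object SocketTimeoutOptionSketch {
      def consumerProps(args: Array[String]): java.util.Properties = {
        val parser = new OptionParser
        val socketTimeoutOpt = parser.accepts("socket-timeout-ms",
            "Socket timeout for fetch and metadata requests.")
          .withRequiredArg()
          .ofType(classOf[java.lang.Integer])
          .defaultsTo(30000)
        val options = parser.parse(args: _*)
        val props = new java.util.Properties()
        props.put("socket.timeout.ms", options.valueOf(socketTimeoutOpt).toString)
        props
      }
    }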

Exception:

[2013-01-21 10:10:08,915] WARN Fetching topic metadata with correlation id 0 
for topics [Set(topic_0219, topic_0026, topic_0160, topic_0056, topic_0100, 
topic_0146, topic_0103, topic_0179, topic_0078, topic_0098, topic_0102, 
topic_0028, topic_0060, topic_0218, topic_0210, topic_0161, topic_0144, 
topic_0101, topic_0104, topic_0186, topic_0040, topic_0027, topic_0093, 
topic_0147, topic_0080, topic_0211, topic_0089, topic_0177, topic_0220, 
topic_0097, topic_0079, topic_0187, topic_0105, topic_0178, topic_0096, 
topic_0108, topic_0095, topic_0065, topic_0066, topic_0021, topic_0023, 
topic_0109, topic_0058, topic_0092, topic_0149, topic_0150, topic_0250, 
topic_0022, topic_0227, topic_0145, topic_0063, topic_0094, topic_0216, 
topic_0185, topic_0057, topic_0141, topic_0215, topic_0184, topic_0024, 
topic_0214, topic_0140, topic_0217, topic_0228, topic_0025, topic_0064, 
topic_0044, topic_0043, topic_0152, topic_0009, topic_0029, topic_0151, 
topic_0142, topic_0041, topic_0164, topic_0077, topic_0062, topic_0163, 
topic_0046, topic_0061, topic_0190, topic_0162, topic_0143, topic_0165, 
topic_0148, topic_0042, topic_0087, topic_0223, topic_0182, topic_0008, 
topic_0132, topic_0204, topic_0007, topic_0067, topic_0181, topic_0169, 
topic_0203, topic_0180, topic_0224, topic_0183, topic_0048, topic_0107, 
topic_0069, topic_0130, topic_0106, topic_0047, topic_0068, topic_0222, 
topic_0189, topic_0221, topic_0131, topic_0134, topic_0156, topic_0111, 
topic_0246, topic_0110, topic_0245, topic_0171, topic_0240, topic_0010, 
topic_0122, topic_0201, topic_0135, topic_0196, topic_0034, topic_0241, 
topic_0012, topic_0230, topic_0082, topic_0188, topic_0195, topic_0166, 
topic_0088, topic_0036, topic_0099, topic_0172, topic_0112, topic_0085, 
topic_0202, topic_0123, topic_0011, topic_0115, topic_0084, topic_0121, 
topic_0243, topic_0086, topic_0192, topic_0035, topic_0191, topic_0200, 
topic_0242, topic_0231, topic_0133, topic_0229, topic_0116, topic_0167, 
topic_0244, topic_0032, topic_0168, topic_0157, topic_0118, topic_0209, 
topic_0045, topic_0226, topic_0119, topic_0076, topic_0117, topic_0006, 
topic_0129, topic_0225, topic_0033, topic_0159, topic_0037, topic_0197, 
topic_0030, topic_0049, topic_0205, topic_0238, topic_0004, topic_0153, 
topic_0074, topic_0127, topic_0083, topic_0003, topic_0126, topic_0249, 
topic_0158, topic_0005, topic_0081, topic_0155, topic_0031, topic_0198, 
topic_0206, topic_0020, topic_0154, topic_0075, topic_0239, topic_0128, 
topic_0212, topic_0017, topic_0054, topic_0174, topic_0073, topic_0072, 
topic_0173, topic_0039, topic_0213, topic_0138, topic_0059, topic_0015, 
topic_0055, topic_0052, topic_0237, topic_0038, topic_0091, topic_0236, 
topic_0053, topic_0234, topic_0070, topic_0193, topic_0051, topic_0090, 
topic_0248, topic_0125, topic_0002, topic_0050, topic_0247, topic_0137, 
topic_0124, topic_0014, topic_0001, topic_0071, topic_0235, topic_0194, 
topic_0120, topic_0232, topic_0175, topic_0208, topic_0170, topic_0114, 
topic_0016, topic_0139, topic_0013, topic_0136, topic_0113, topic_0018, 
topic_0233, topic_0019, topic_0176, topic_0199, topic_0207)] from broker 
[id:1,host:esv4-app19.corp.linkedin.com,port:9091] failed 
(kafka.client.ClientUtils$)
java.net.SocketTimeoutException
at 
sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:201)
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:86)
at 
java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:221)
at kafka.utils.Utils$.read(Utils.scala:393)
at 
kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
at 
kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at 
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:71)
at kafka.producer.SyncProducer.send(SyncProducer.scala:105)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:33)
at kafka.client.

[jira] [Commented] (KAFKA-631) Implement log compaction

2013-01-21 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558971#comment-13558971
 ] 

Jay Kreps commented on KAFKA-631:
-

I did some testing on the I/O throttling and verified that this does indeed 
maintain the expected I/O rate. Two gotchas in this: first, you can't look at 
iostat, because the OS will batch up writes and then asynchronously flush them 
out at a rate greater than what we requested. Second, since the limit is on 
read and write combined, a limit of 5MB/sec will lead to the offset map 
building happening at exactly 5MB/sec, but the cleaning will be closer to 
2.5MB/sec, because cleaning involves first reading in messages and then 
writing them back out, so 1MB of cleaning does 2MB of I/O (assuming 100% 
retention).
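
To make the arithmetic concrete, a small sketch under the stated assumptions 
(a single budget charged for both reads and writes; names are illustrative):

    // Sketch: one combined read+write budget means a read-only phase
    // (offset map building) runs at the full cap, while cleaning, which
    // reads and then rewrites what it keeps, runs at cap / (1 + retention).
    object CleanerRateSketch {
      val capBytesPerSec = 5.0 * 1024 * 1024                   // 5MB/sec budget
      val mapBuildRate   = capBytesPerSec                      // read-only phase
      val retention      = 1.0                                 // fraction kept
      val cleaningRate   = capBytesPerSec / (1.0 + retention)  // 2.5MB/sec here
    }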

> Implement log compaction
> 
>
> Key: KAFKA-631
> URL: https://issues.apache.org/jira/browse/KAFKA-631
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-631-v1.patch, KAFKA-631-v2.patch
>
>
> Currently Kafka has only one way to bound the space of the log, namely by 
> deleting old segments. The policy that controls which segments are deleted 
> can be configured based either on the number of bytes to retain or the age of 
> the messages. This makes sense for event or log data which has no notion of 
> primary key. However lots of data has a primary key and consists of updates 
> by primary key. For this data it would be nice to be able to ensure that the 
> log contained at least the last version of every key.
> As an example, say that the Kafka topic contains a sequence of User Account 
> messages, each capturing the current state of a given user account. Rather 
> than simply discarding old segments, since the set of user accounts is 
> finite, it might make more sense to delete individual records that have been 
> made obsolete by a more recent update for the same key. This would ensure 
> that the topic contained at least the current state of each record.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-671) DelayedProduce requests should not hold full producer request data

2013-01-21 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558969#comment-13558969
 ] 

Neha Narkhede commented on KAFKA-671:
-

Thinking about this a little more, the real problem seems to be that we hang 
onto the request object in DelayedProduce until we send out the response. There 
are 2 reasons for this -
1. The request latency metrics are part of the request object. These need to be 
updated when the response is created.
2. To send out the response, we need the selector key, which is inside the 
request object.

To handle delayed produce requests without hanging onto the produce request 
data, we will need to -
1. Remove the request object from DelayedProduce 
2. Pass the selector key into DelayedProduce
3. Define the request metrics in a separate object and remove them from the 
Request object. Pass the new RequestMetrics object into DelayedProduce

Since this requires changing the DelayedRequest object as well, it will affect 
all requests. My guess is that this refactoring is not that big of a change, 
but I could be wrong.
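
A rough Scala sketch of the shape proposed in steps 1-3 above; all class and 
field names are hypothetical, not the actual 0.8 request classes:

    import java.nio.channels.SelectionKey

    // Hypothetical sketch of the refactor: DelayedProduce keeps only the
    // selector key needed to send the response plus a separate metrics
    // object, so the produce payload can be garbage collected as soon as
    // the messages are appended.
    class RequestMetricsSketch(val arrivalTimeMs: Long) {
      def latencyMs(nowMs: Long): Long = nowMs - arrivalTimeMs
    }

    class DelayedProduceSketch(val responseKey: SelectionKey,
                               val metrics: RequestMetricsSketch)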

> DelayedProduce requests should not hold full producer request data
> --
>
> Key: KAFKA-671
> URL: https://issues.apache.org/jira/browse/KAFKA-671
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Joel Koshy
>Assignee: Sriram Subramanian
>Priority: Blocker
> Fix For: 0.8.1
>
> Attachments: outOfMemFix-v1.patch
>
>
> Per summary, this leads to unnecessary memory usage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-671) DelayedProduce requests should not hold full producer request data

2013-01-21 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558946#comment-13558946
 ] 

Neha Narkhede commented on KAFKA-671:
-

Nullifying the request object seems like a bigger change. I'm wondering about 
the precise impact on GC that these changes will introduce. If the impact is 
just that the objects get garbage collected within 1 iteration of the young 
gen collector vs 2-3 iterations, I would say the performance upside of this 
change is not worth the risk. But if it significantly reduces garbage 
collection overhead, it might be worth looking further into. Even if we have 
to do this, I agree with Sriram that his earlier change has a smaller impact 
than nullifying the request object and caching a bunch of things to get 
around it. 

> DelayedProduce requests should not hold full producer request data
> --
>
> Key: KAFKA-671
> URL: https://issues.apache.org/jira/browse/KAFKA-671
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8
>Reporter: Joel Koshy
>Assignee: Sriram Subramanian
>Priority: Blocker
> Fix For: 0.8.1
>
> Attachments: outOfMemFix-v1.patch
>
>
> Per summary, this leads to unnecessary memory usage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-710) Some arguments are always set to default in ProducerPerformance

2013-01-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558867#comment-13558867
 ] 

Jun Rao commented on KAFKA-710:
---

Just to clarify: the problem is that there are duplicated command line 
options. After this patch, those options are still available, but they work 
correctly.

> Some arguments are always set to default in ProducerPerformance
> ---
>
> Key: KAFKA-710
> URL: https://issues.apache.org/jira/browse/KAFKA-710
> Project: Kafka
>  Issue Type: Bug
>Reporter: John Fung
>Assignee: John Fung
> Fix For: 0.8
>
> Attachments: kafka-710-v1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (KAFKA-139) cross-compile multiple Scala versions and upgrade to SBT 0.12.1

2013-01-21 Thread derek (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13558843#comment-13558843
 ] 

derek commented on KAFKA-139:
-

OK, 2.10.0 is going to be a non-starter at this point due to a number of 
removed methods in the Scala collections libs (they've been deprecated since 
2.8.1/2.9.0), but 2.8.2 seems fine.

> cross-compile multiple Scala versions and upgrade to SBT 0.12.1
> ---
>
> Key: KAFKA-139
> URL: https://issues.apache.org/jira/browse/KAFKA-139
> Project: Kafka
>  Issue Type: Improvement
>  Components: packaging
>Affects Versions: 0.8
>Reporter: Chris Burroughs
>  Labels: build
> Fix For: 0.8
>
> Attachments: kafka-sbt0-11-3-0.8.patch, kafka-sbt0-11-3-0.8-v2.patch, 
> kafka-sbt0-11-3-0.8-v3.patch, kafka-sbt0-11-3-0.8-v4.patch, 
> kafka-sbt0-11-3-0.8-v5-smeder.patch, kafka-sbt0-11-3-0.8-v6-smeder.patch, 
> kafka-sbt0-11-3.patch
>
>
> Since Scala does not maintain binary compatibility between versions, 
> organizations tend to have to move all of their code at the same time.  It 
> would thus be very helpful if we could cross-build multiple Scala versions.
> http://code.google.com/p/simple-build-tool/wiki/CrossBuild
> Unclear if this would require KAFKA-134 or just work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-707) Improve error message in the producer when sending data to a partition without an active leader

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-707:


Status: Patch Available  (was: Open)

> Improve error message in the producer when sending data to a partition 
> without an active leader
> ---
>
> Key: KAFKA-707
> URL: https://issues.apache.org/jira/browse/KAFKA-707
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: kafka-707.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We log a very cryptic message when the producer tries to send data to a 
> partition that doesn't have a leader at that moment -
> Failed to send to broker -1 with data Map([PageViewEventByGroupJson,8] 
> Let's improve this to log a better error message

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (KAFKA-707) Improve error message in the producer when sending data to a partition without an active leader

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede closed KAFKA-707.
---


> Improve error message in the producer when sending data to a partition 
> without an active leader
> ---
>
> Key: KAFKA-707
> URL: https://issues.apache.org/jira/browse/KAFKA-707
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: kafka-707.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We log a very cryptic message when the producer tries to send data to a 
> partition that doesn't have a leader at that moment -
> Failed to send to broker -1 with data Map([PageViewEventByGroupJson,8] 
> Let's improve this to log a better error message

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (KAFKA-707) Improve error message in the producer when sending data to a partition without an active leader

2013-01-21 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-707:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you for the review

> Improve error message in the producer when sending data to a partition 
> without an active leader
> ---
>
> Key: KAFKA-707
> URL: https://issues.apache.org/jira/browse/KAFKA-707
> Project: Kafka
>  Issue Type: Improvement
>  Components: producer 
>Affects Versions: 0.8
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
> Attachments: kafka-707.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> We log a very cryptic message when the producer tries to send data to a 
> partition that doesn't have a leader at that moment -
> Failed to send to broker -1 with data Map([PageViewEventByGroupJson,8] 
> Let's improve this to log a better error message

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira