[jira] [Commented] (KAFKA-1461) Replica fetcher thread does not implement any back-off behavior

2015-03-10 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356344#comment-14356344
 ] 

Sriharsha Chintalapani commented on KAFKA-1461:
---

[~junrao] [~guozhang] Please take a look at the above patch and let me know if 
that's what you have in mind. I also added "replica.fetch.backoff.ms" and 
"controller.socket.timeout.ms" to TestUtils.createBrokerConfig, which reduced the 
total test run time from 15 minutes to under 10 minutes on my machine.
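
For reference, the override amounts to setting those two properties on the 
generated broker config, along these lines (a sketch only; the concrete values 
below are illustrative assumptions, not necessarily the ones in the patch):

{code}
// Hypothetical sketch of the TestUtils.createBrokerConfig tweak; the values
// are assumptions chosen to keep tests fast.
val props = TestUtils.createBrokerConfig(nodeId, port)
props.put("replica.fetch.backoff.ms", "100")       // back off briefly instead of spinning
props.put("controller.socket.timeout.ms", "1000")  // fail fast on dead sockets in tests
{code}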


> Replica fetcher thread does not implement any back-off behavior
> ---
>
> Key: KAFKA-1461
> URL: https://issues.apache.org/jira/browse/KAFKA-1461
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.1.1
>Reporter: Sam Meder
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1461.patch, KAFKA-1461.patch
>
>
> The current replica fetcher thread will retry in a tight loop if any error 
> occurs during the fetch call. For example, we've seen cases where the fetch 
> continuously throws a connection refused exception leading to several replica 
> fetcher threads that spin in a pretty tight loop.
> To a much lesser degree this is also an issue in the consumer fetcher thread, 
> although the fact that erroring partitions are removed so a leader can be 
> re-discovered helps some.
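
A minimal illustration of the requested behavior is to sleep for a configurable 
interval after a failed fetch instead of retrying immediately. The sketch below 
is hedged: the method and field names are assumptions, not the actual patch.

{code}
// Hedged sketch: back off after a fetch error rather than spinning.
// fetchBackoffMs would come from a config such as replica.fetch.backoff.ms;
// buildFetchRequest() is a hypothetical helper.
override def doWork() {
  try {
    processFetchRequest(buildFetchRequest())
  } catch {
    case t: Throwable =>
      warn("Fetch failed; backing off before retrying", t)
      Thread.sleep(fetchBackoffMs)
  }
}
{code}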



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1461) Replica fetcher thread does not implement any back-off behavior

2015-03-10 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356341#comment-14356341
 ] 

Sriharsha Chintalapani commented on KAFKA-1461:
---

Created reviewboard https://reviews.apache.org/r/31927/diff/
 against branch origin/trunk

> Replica fetcher thread does not implement any back-off behavior
> ---
>
> Key: KAFKA-1461
> URL: https://issues.apache.org/jira/browse/KAFKA-1461
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.1.1
>Reporter: Sam Meder
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1461.patch, KAFKA-1461.patch
>
>
> The current replica fetcher thread will retry in a tight loop if any error 
> occurs during the fetch call. For example, we've seen cases where the fetch 
> continuously throws a connection refused exception leading to several replica 
> fetcher threads that spin in a pretty tight loop.
> To a much lesser degree this is also an issue in the consumer fetcher thread, 
> although the fact that erroring partitions are removed so a leader can be 
> re-discovered helps some.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1461) Replica fetcher thread does not implement any back-off behavior

2015-03-10 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1461:
--
Attachment: KAFKA-1461.patch

> Replica fetcher thread does not implement any back-off behavior
> ---
>
> Key: KAFKA-1461
> URL: https://issues.apache.org/jira/browse/KAFKA-1461
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.1.1
>Reporter: Sam Meder
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1461.patch, KAFKA-1461.patch
>
>
> The current replica fetcher thread will retry in a tight loop if any error 
> occurs during the fetch call. For example, we've seen cases where the fetch 
> continuously throws a connection refused exception leading to several replica 
> fetcher threads that spin in a pretty tight loop.
> To a much lesser degree this is also an issue in the consumer fetcher thread, 
> although the fact that erroring partitions are removed so a leader can be 
> re-discovered helps some.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 31927: Patch for KAFKA-1461

2015-03-10 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31927/
---

Review request for kafka.


Bugs: KAFKA-1461
https://issues.apache.org/jira/browse/KAFKA-1461


Repository: kafka


Description
---

KAFKA-1461. Replica fetcher thread does not implement any back-off behavior.


Diffs
-

  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
8c281d4668f92eff95a4a5df3c03c4b5b20e7095 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
48e33626695ad8a28b0018362ac225f11df94973 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
d6d14fbd167fb8b085729cda5158898b1a3ee314 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
c124c8df5b5079e5ffbd0c4ea359562a66aaf317 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
52c79201af7c60f9b44a0aaa09cdf968d89a7b87 

Diff: https://reviews.apache.org/r/31927/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



[jira] [Commented] (KAFKA-1716) hang during shutdown of ZookeeperConsumerConnector

2015-03-10 Thread Ashwin Jayaprakash (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356326#comment-14356326
 ] 

Ashwin Jayaprakash commented on KAFKA-1716:
---

I'm beginning to wonder if the {{shutdown()}} method on the consumer is safe to 
call from a different thread than the one that is currently using it.
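
For context, the usage in question looks roughly like this (an illustrative 
sketch against the high-level consumer API; consumeStreams is a hypothetical 
placeholder for the consuming loop):

{code}
// One thread consumes from the connector while a different thread later
// calls shutdown() on it -- the pattern whose thread-safety is in question.
val connector = Consumer.create(new ConsumerConfig(props))
val worker = new Thread(new Runnable {
  def run() { consumeStreams(connector) }  // hypothetical consuming loop
})
worker.start()
// ...later, from a separate shutdown thread:
connector.shutdown()
{code}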

> hang during shutdown of ZookeeperConsumerConnector
> --
>
> Key: KAFKA-1716
> URL: https://issues.apache.org/jira/browse/KAFKA-1716
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Sean Fay
>Assignee: Neha Narkhede
> Attachments: kafka-shutdown-stuck.log
>
>
> It appears to be possible for {{ZookeeperConsumerConnector.shutdown()}} to 
> wedge in the case that some consumer fetcher threads receive messages during 
> the shutdown process.
> Shutdown thread:
> {code}-- Parking to wait for: 
> java/util/concurrent/CountDownLatch$Sync@0x2aaaf3ef06d0
> at jrockit/vm/Locks.park0(J)V(Native Method)
> at jrockit/vm/Locks.park(Locks.java:2230)
> at sun/misc/Unsafe.park(ZJ)V(Native Method)
> at java/util/concurrent/locks/LockSupport.park(LockSupport.java:156)
> at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:969)
> at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1281)
> at java/util/concurrent/CountDownLatch.await(CountDownLatch.java:207)
> at kafka/utils/ShutdownableThread.shutdown(ShutdownableThread.scala:36)
> at 
> kafka/server/AbstractFetcherThread.shutdown(AbstractFetcherThread.scala:71)
> at 
> kafka/server/AbstractFetcherManager$$anonfun$closeAllFetchers$2.apply(AbstractFetcherManager.scala:121)
> at 
> kafka/server/AbstractFetcherManager$$anonfun$closeAllFetchers$2.apply(AbstractFetcherManager.scala:120)
> at 
> scala/collection/TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at 
> scala/collection/mutable/HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala/collection/mutable/HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala/collection/mutable/HashTable$class.foreachEntry(HashTable.scala:226)
> at scala/collection/mutable/HashMap.foreachEntry(HashMap.scala:39)
> at scala/collection/mutable/HashMap.foreach(HashMap.scala:98)
> at 
> scala/collection/TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka/server/AbstractFetcherManager.closeAllFetchers(AbstractFetcherManager.scala:120)
> ^-- Holding lock: java/lang/Object@0x2aaaebcc7318[thin lock]
> at 
> kafka/consumer/ConsumerFetcherManager.stopConnections(ConsumerFetcherManager.scala:148)
> at 
> kafka/consumer/ZookeeperConsumerConnector.liftedTree1$1(ZookeeperConsumerConnector.scala:171)
> at 
> kafka/consumer/ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:167){code}
> ConsumerFetcherThread:
> {code}-- Parking to wait for: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0x2aaaebcc7568
> at jrockit/vm/Locks.park0(J)V(Native Method)
> at jrockit/vm/Locks.park(Locks.java:2230)
> at sun/misc/Unsafe.park(ZJ)V(Native Method)
> at java/util/concurrent/locks/LockSupport.park(LockSupport.java:156)
> at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
> at 
> java/util/concurrent/LinkedBlockingQueue.put(LinkedBlockingQueue.java:306)
> at kafka/consumer/PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at 
> kafka/consumer/ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:130)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:111)
> at scala/collection/immutable/HashMap$HashMap1.foreach(HashMap.scala:224)
> at 
> scala/collection/immutable/HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:111)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:111)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:111)
> at kafka/utils/Utils$.inLock(Utils.scala:538)
> at 
> kafka/server/AbstractFetche

[jira] [Comment Edited] (KAFKA-1054) Eliminate Compilation Warnings for 0.8 Final Release

2015-03-10 Thread Blake Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356280#comment-14356280
 ] 

Blake Smith edited comment on KAFKA-1054 at 3/11/15 4:51 AM:
-

I attached an [updated patch|https://reviews.apache.org/r/31925/] that is 
rebased on the current trunk and squashes 2 more compiler warnings.

Also: there are some suppressed Scala "feature" warnings (postfix operators and 
implicit conversions). It looks like these warnings were added in 2.10 as part 
of 
[SIP-18|http://docs.scala-lang.org/sips/completed/modularizing-language-features.html].
 I'm not sure what importing these language "features" will do in Scala 2.9, 
given that they were considered standard Scala there, but are they worth 
addressing?
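
Concretely, the fix the compiler suggests is just a pair of imports at the top 
of each affected file, taken verbatim from the warning text below:

{code}
import scala.language.postfixOps
import scala.language.implicitConversions
{code}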

Feature warnings below:

{code}
/Users/blake/src/kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:978:
 postfix operator contains should be enabled
by making the implicit value scala.language.postfixOps visible.
This can be achieved by adding the import clause 'import 
scala.language.postfixOps'
or by setting the compiler option -language:postfixOps.
See the Scala docs for value scala.language.postfixOps for a discussion
why the feature should be explicitly enabled.
  val addedTopics = updatedTopics filterNot (wildcardTopics contains)
^
/Users/blake/src/kafka/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala:988:
 postfix operator contains should be enabled
by making the implicit value scala.language.postfixOps visible.
  val deletedTopics = wildcardTopics filterNot (updatedTopics contains)
  ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/Implicits.scala:23: 
implicit conversion method scalaMessageSetToJavaMessageSet should be enabled
by making the implicit value scala.language.implicitConversions visible.
This can be achieved by adding the import clause 'import 
scala.language.implicitConversions'
or by setting the compiler option -language:implicitConversions.
See the Scala docs for value scala.language.implicitConversions for a discussion
why the feature should be explicitly enabled.
  implicit def scalaMessageSetToJavaMessageSet(messageSet: 
kafka.message.ByteBufferMessageSet):
   ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/Implicits.scala:28: 
implicit conversion method toJavaFetchResponse should be enabled
by making the implicit value scala.language.implicitConversions visible.
  implicit def toJavaFetchResponse(response: kafka.api.FetchResponse): 
kafka.javaapi.FetchResponse =
   ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/Implicits.scala:31: 
implicit conversion method toJavaTopicMetadataResponse should be enabled
by making the implicit value scala.language.implicitConversions visible.
  implicit def toJavaTopicMetadataResponse(response: 
kafka.api.TopicMetadataResponse): kafka.javaapi.TopicMetadataResponse =
   ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/Implicits.scala:34: 
implicit conversion method toJavaOffsetResponse should be enabled
by making the implicit value scala.language.implicitConversions visible.
  implicit def toJavaOffsetResponse(response: kafka.api.OffsetResponse): 
kafka.javaapi.OffsetResponse =
   ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/Implicits.scala:37: 
implicit conversion method toJavaOffsetFetchResponse should be enabled
by making the implicit value scala.language.implicitConversions visible.
  implicit def toJavaOffsetFetchResponse(response: 
kafka.api.OffsetFetchResponse): kafka.javaapi.OffsetFetchResponse =
   ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/Implicits.scala:40: 
implicit conversion method toJavaOffsetCommitResponse should be enabled
by making the implicit value scala.language.implicitConversions visible.
  implicit def toJavaOffsetCommitResponse(response: 
kafka.api.OffsetCommitResponse): kafka.javaapi.OffsetCommitResponse =
   ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/Implicits.scala:43: 
implicit conversion method optionToJavaRef should be enabled
by making the implicit value scala.language.implicitConversions visible.
  implicit def optionToJavaRef[T](opt: Option[T]): T = {
   ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/Implicits.scala:51: 
implicit conversion method javaListToScalaBuffer should be enabled
by making the implicit value scala.language.implicitConversions visible.
  implicit def javaListToScalaBuffer[A](l: java.util.List[A]) = {
   ^
/Users/blake/src/kafka/core/src/main/scala/kafka/javaapi/TopicMetadata.scala:23:
 implicit conversion method toJavaTopicMetadataList should be enabled
by making the implicit value scala.language.implicitConversions visible.
{code}


[jira] [Updated] (KAFKA-1716) hang during shutdown of ZookeeperConsumerConnector

2015-03-10 Thread Ashwin Jayaprakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwin Jayaprakash updated KAFKA-1716:
--
Attachment: kafka-shutdown-stuck.log

> hang during shutdown of ZookeeperConsumerConnector
> --
>
> Key: KAFKA-1716
> URL: https://issues.apache.org/jira/browse/KAFKA-1716
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Sean Fay
>Assignee: Neha Narkhede
> Attachments: kafka-shutdown-stuck.log
>
>
> It appears to be possible for {{ZookeeperConsumerConnector.shutdown()}} to 
> wedge in the case that some consumer fetcher threads receive messages during 
> the shutdown process.
> Shutdown thread:
> {code}-- Parking to wait for: 
> java/util/concurrent/CountDownLatch$Sync@0x2aaaf3ef06d0
> at jrockit/vm/Locks.park0(J)V(Native Method)
> at jrockit/vm/Locks.park(Locks.java:2230)
> at sun/misc/Unsafe.park(ZJ)V(Native Method)
> at java/util/concurrent/locks/LockSupport.park(LockSupport.java:156)
> at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
> at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:969)
> at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1281)
> at java/util/concurrent/CountDownLatch.await(CountDownLatch.java:207)
> at kafka/utils/ShutdownableThread.shutdown(ShutdownableThread.scala:36)
> at 
> kafka/server/AbstractFetcherThread.shutdown(AbstractFetcherThread.scala:71)
> at 
> kafka/server/AbstractFetcherManager$$anonfun$closeAllFetchers$2.apply(AbstractFetcherManager.scala:121)
> at 
> kafka/server/AbstractFetcherManager$$anonfun$closeAllFetchers$2.apply(AbstractFetcherManager.scala:120)
> at 
> scala/collection/TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at 
> scala/collection/mutable/HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala/collection/mutable/HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala/collection/mutable/HashTable$class.foreachEntry(HashTable.scala:226)
> at scala/collection/mutable/HashMap.foreachEntry(HashMap.scala:39)
> at scala/collection/mutable/HashMap.foreach(HashMap.scala:98)
> at 
> scala/collection/TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka/server/AbstractFetcherManager.closeAllFetchers(AbstractFetcherManager.scala:120)
> ^-- Holding lock: java/lang/Object@0x2aaaebcc7318[thin lock]
> at 
> kafka/consumer/ConsumerFetcherManager.stopConnections(ConsumerFetcherManager.scala:148)
> at 
> kafka/consumer/ZookeeperConsumerConnector.liftedTree1$1(ZookeeperConsumerConnector.scala:171)
> at 
> kafka/consumer/ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:167){code}
> ConsumerFetcherThread:
> {code}-- Parking to wait for: 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject@0x2aaaebcc7568
> at jrockit/vm/Locks.park0(J)V(Native Method)
> at jrockit/vm/Locks.park(Locks.java:2230)
> at sun/misc/Unsafe.park(ZJ)V(Native Method)
> at java/util/concurrent/locks/LockSupport.park(LockSupport.java:156)
> at 
> java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
> at 
> java/util/concurrent/LinkedBlockingQueue.put(LinkedBlockingQueue.java:306)
> at kafka/consumer/PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:60)
> at 
> kafka/consumer/ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:49)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:130)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1$$anonfun$apply$mcV$sp$2.apply(AbstractFetcherThread.scala:111)
> at scala/collection/immutable/HashMap$HashMap1.foreach(HashMap.scala:224)
> at 
> scala/collection/immutable/HashMap$HashTrieMap.foreach(HashMap.scala:403)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply$mcV$sp(AbstractFetcherThread.scala:111)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:111)
> at 
> kafka/server/AbstractFetcherThread$$anonfun$processFetchRequest$1.apply(AbstractFetcherThread.scala:111)
> at kafka/utils/Utils$.inLock(Utils.scala:538)
> at 
> kafka/server/AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:110)
> at 
> kafka/server/AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
> at kafka/utils/

[jira] [Commented] (KAFKA-1716) hang during shutdown of ZookeeperConsumerConnector

2015-03-10 Thread Ashwin Jayaprakash (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356289#comment-14356289
 ] 

Ashwin Jayaprakash commented on KAFKA-1716:
---

We upgraded to Kafka 0.8.2 last week and now we can reproduce this issue every 
time on our Kafka consumer JVMs.

Our setup is like this. We start {{ConsumerConnector}} instances dynamically 
based on a configurable property. Each of those {{ConsumerConnector}} instances 
creates a {{ConsumerIterator}}. Right now we have 4 such instances in each JVM. 
Naturally we have 4 separate threads consuming from those 4 iterators in 
parallel. 

All this worked OK until recently, when we faced some issues with consumer 
rebalancing and an overloaded ZK subtree (see 
http://markmail.org/thread/gnodacjjya6r573m). While trying to address that, we 
changed the defaults to {{rebalance.max.retries 16}} and 
{{rebalance.backoff.ms 1}}. Note that we also upgraded to 0.8.2.

Every time we shut down the JVM, we first try to shut down the consumers one by 
one before exiting. With these recent changes, the JVM exit gets stuck because:
# The shutdown thread is different from the 4 consumer threads (in addition to 
the background threads that ZK and Kafka create)
# The shutdown thread shuts down the first consumer and so that consumer exits 
quickly and gracefully
# In the meanwhile the second, third and fourth consumers are trying to 
rebalance the partitions
# Shutdown thread proceeds to call shutdown on the second consumer
## The shutdown thread appears to make some progress in shutting down the 
second consumer but then gets stuck on a monitor that has been acquired by the 
{{xx_watcher_executor}}
## This appears to be a deadlock because the {{xx_watcher_executor}} thread has 
acquired the monitor lock and gone to sleep
# The shutdown then takes a long time because all 3 remaining consumers retry 
{{16}} times and then give up

The thread dumps here should make it clear.

{code}
"Thread-13@8222" prio=5 tid=0x53 nid=NA waiting for monitor entry
  java.lang.Thread.State: BLOCKED
 waiting for 
indexer-group_on-localhost-pid-58357-kafka-message-source-id-820_watcher_executor@8160
 to release lock on <0x2b5e> (a java.lang.Object)
  at 
kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:191)
  at 
kafka.javaapi.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:119)
  at 
xx.yy.zz.processor.kafka.consumer.KafkaMessageSource.close(KafkaMessageSource.java:239)
  at 
xx.yy.zz.pipeline.source.MessageSourceStage.stop(MessageSourceStage.java:162)
  at 
xx.yy.common.util.lifecycle.LifecycleHelper.stopAll(LifecycleHelper.java:53)
  at xx.yy.zz.pipeline.framework.Pipeline.stop(Pipeline.java:205)
  at 
xx.yy.common.util.lifecycle.LifecycleHelper.stopAll(LifecycleHelper.java:53)
  at xx.yy.zz.pipeline.framework.Pipelines.stop(Pipelines.java:225)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-1)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at xx.yy.AppLifecycle.callAnnotatedMethods(AppLifecycle.java:163)
  at xx.yy.AppLifecycle.stop(AppLifecycle.java:144)
  - locked <0x2b31> (a xx.yy.AppLifecycle)
  at xx.yy.AppLifecycle$6.stop(AppLifecycle.java:247)
  at io.dropwizard.lifecycle.JettyManaged.doStop(JettyManaged.java:32)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:90)
  - locked <0x2b83> (a java.lang.Object)
  at 
org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:129)
  at 
org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:148)
  at 
org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:71)
  at org.eclipse.jetty.server.Server.doStop(Server.java:410)
  at 
org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:90)
  - locked <0x2b84> (a java.lang.Object)
  at 
org.eclipse.jetty.util.thread.ShutdownThread.run(ShutdownThread.java:133)

"indexer-group_on-localhost-pid-58357-kafka-message-source-id-820_watcher_executor@8160"
 daemon prio=5 tid=0x74 nid=NA sleeping
  java.lang.Thread.State: TIMED_WAITING
 blocks Thread-13@8222
  at java.lang.Thread.sleep(Thread.java:-1)
  at 
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1$$anonfun$apply$mcV$sp$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:627)
  at scala.collection.immutable.Range.foreach$mVc$sp(Rang


[jira] [Updated] (KAFKA-1054) Eliminate Compilation Warnings for 0.8 Final Release

2015-03-10 Thread Blake Smith (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Smith updated KAFKA-1054:
---
Attachment: KAFKA-1054_Mar_10_2015.patch

Updated to the latest trunk and squashed 2 more compiler warnings.

> Eliminate Compilation Warnings for 0.8 Final Release
> 
>
> Key: KAFKA-1054
> URL: https://issues.apache.org/jira/browse/KAFKA-1054
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Guozhang Wang
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: KAFKA-1054.patch, KAFKA-1054_Mar_10_2015.patch
>
>
> Currently we have a total of 38 warnings for source code compilation of 0.8:
> 1) 3 from "Unchecked type pattern"
> 2) 6 from "Unchecked conversion"
> 3) 29 from "Deprecated Hadoop API functions"
> We had better finish these before the final release of 0.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 31925: KAFKA-1054: Fix remaining compiler warnings

2015-03-10 Thread Blake Smith

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31925/
---

Review request for kafka.


Bugs: KAFKA-1054
https://issues.apache.org/jira/browse/KAFKA-1054


Repository: kafka


Description
---

Brought the patch outlined [here](https://reviews.apache.org/r/25461/diff/) up 
to date with the latest trunk and fixed 2 more lingering compiler warnings.


Diffs
-

  core/src/main/scala/kafka/admin/AdminUtils.scala b700110 
  core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala df60cbc 
  core/src/main/scala/kafka/server/KafkaServer.scala dddef93 
  core/src/main/scala/kafka/utils/Utils.scala 738c1af 
  core/src/test/scala/unit/kafka/server/ServerShutdownTest.scala b46daa4 

Diff: https://reviews.apache.org/r/31925/diff/


Testing
---

Ran full test suite.


Thanks,

Blake Smith



Re: [DISCUSSION] KIP-15 close(timeout) for producer

2015-03-10 Thread Jiangjie Qin
The KIP page has been updated per Jay's comments.
I'd like to initiate the voting process if no further comments are
received by tomorrow.

Jiangjie (Becket) Qin

On 3/8/15, 9:45 AM, "Jay Kreps"  wrote:

>Hey Jiangjie,
>
>Can you capture the full motivation and use cases for the feature? This
>mentions your interest in having a way of aborting from inside the
>Callback. But it doesn't really explain that usage or why other people
>would want to do that. It also doesn't list the primary use case for
>having
>close with a bounded timeout which was to avoid blocking too long on
>shutdown.
>
>-Jay
>
>
>
>On Sat, Mar 7, 2015 at 12:25 PM, Jiangjie Qin 
>wrote:
>
>> Hi,
>>
>> I just created a KIP for adding a close(timeout) to new producer. Most
>>of
>> the previous discussions are in KAFKA-1660 where Parth Brahmbhatt has
>> already done a lot of work.
>> Since this is an interface change so we are going through the KIP
>>process.
>> Here is the KIP link:
>> 
>>https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=53739782
>>
>> Thanks.
>>
>> Jiangjie (Becket) Qin
>>



[jira] [Commented] (KAFKA-2011) Rebalance with auto.leader.rebalance.enable=false

2015-03-10 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356192#comment-14356192
 ] 

Jiangjie Qin commented on KAFKA-2011:
-

I took a look at the log; it seems the network was not stable and the brokers 
were disconnected all the time. There are also a lot of offline partitions 
shown in the log. I'm not sure whether this is down to your network quality; 
another possibility is that you are producing such a large amount of data that 
it effectively DDoSes the broker.

> Rebalance with auto.leader.rebalance.enable=false 
> --
>
> Key: KAFKA-2011
> URL: https://issues.apache.org/jira/browse/KAFKA-2011
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
> Environment: 5 Hosts of below config:
> "x86_64" "32-bit, 64-bit" "Little Endian" "24 GenuineIntel CPUs Model 44 
> 1600.000MHz" "RAM 189 GB" GNU/Linux
>Reporter: K Zakee
>Priority: Blocker
> Attachments: controller-logs-1.zip, controller-logs-2.zip
>
>
> Started with clean cluster 0.8.2 with 5 brokers. Setting the properties as 
> below:
> auto.leader.rebalance.enable=false
> controlled.shutdown.enable=true
> controlled.shutdown.max.retries=1
> controlled.shutdown.retry.backoff.ms=5000
> default.replication.factor=3
> log.cleaner.enable=true
> log.cleaner.threads=5
> log.cleanup.policy=delete
> log.flush.scheduler.interval.ms=3000
> log.retention.minutes=1440
> log.segment.bytes=1073741824
> message.max.bytes=100
> num.io.threads=14
> num.network.threads=14
> num.partitions=10
> queued.max.requests=500
> num.replica.fetchers=4
> replica.fetch.max.bytes=1048576
> replica.fetch.min.bytes=51200
> replica.lag.max.messages=5000
> replica.lag.time.max.ms=3
> replica.fetch.wait.max.ms=1000
> fetch.purgatory.purge.interval.requests=5000
> producer.purgatory.purge.interval.requests=5000
> delete.topic.enable=true
> Logs show rebalance happening well up to 24 hours after the start.
> [2015-03-07 16:52:48,969] INFO [Controller 2]: Resuming preferred replica 
> election for partitions:  (kafka.controller.KafkaController)
> [2015-03-07 16:52:48,969] INFO [Controller 2]: Partitions that completed 
> preferred replica election:  (kafka.controller.KafkaController)
> …
> [2015-03-07 12:07:06,783] INFO [Controller 4]: Resuming preferred replica 
> election for partitions:  (kafka.controller.KafkaController)
> ...
> [2015-03-07 09:10:41,850] INFO [Controller 3]: Resuming preferred replica 
> election for partitions:  (kafka.controller.KafkaController)
> ...
> [2015-03-07 08:26:56,396] INFO [Controller 1]: Starting preferred replica 
> leader election for partitions  (kafka.controller.KafkaController)
> ...
> [2015-03-06 16:52:59,506] INFO [Controller 2]: Partitions undergoing 
> preferred replica election:  (kafka.controller.KafkaController)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-03-10 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356079#comment-14356079
 ] 

Jiangjie Qin commented on KAFKA-1997:
-

Updated reviewboard https://reviews.apache.org/r/31706/diff/
 against branch origin/trunk

> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1997.patch, KAFKA-1997_2015-03-03_16:28:46.patch, 
> KAFKA-1997_2015-03-04_15:07:46.patch, KAFKA-1997_2015-03-04_15:42:45.patch, 
> KAFKA-1997_2015-03-05_20:14:58.patch, KAFKA-1997_2015-03-09_18:55:54.patch, 
> KAFKA-1997_2015-03-10_18:31:34.patch
>
>
> Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1997) Refactor Mirror Maker

2015-03-10 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1997:

Attachment: KAFKA-1997_2015-03-10_18:31:34.patch

> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1997.patch, KAFKA-1997_2015-03-03_16:28:46.patch, 
> KAFKA-1997_2015-03-04_15:07:46.patch, KAFKA-1997_2015-03-04_15:42:45.patch, 
> KAFKA-1997_2015-03-05_20:14:58.patch, KAFKA-1997_2015-03-09_18:55:54.patch, 
> KAFKA-1997_2015-03-10_18:31:34.patch
>
>
> Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31706: Patch for KAFKA-1997

2015-03-10 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31706/
---

(Updated March 11, 2015, 1:31 a.m.)


Review request for kafka.


Bugs: KAFKA-1997
https://issues.apache.org/jira/browse/KAFKA-1997


Repository: kafka


Description (updated)
---

Addressed Guozhang's comments.


Changed the exit behavior on send failure because close(0) is not ready yet. 
Will submit followup patch after KAFKA-1660 is checked in.


Expanded imports from _ and * to full class path


Incorporated Joel's comments.


Addressed Joel's comments.


Addressed Guozhang's comments.


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
 d5c79e2481d5e9a2524ac2ef6a6879f61cb7cb5f 
  core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
e6ff7683a0df4a7d221e949767e57c34703d5aad 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
5487259751ebe19f137948249aa1fd2637d2deb4 
  core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
7f45a90ba6676290172b7da54c15ee5dc1a42a2e 
  core/src/main/scala/kafka/tools/MirrorMaker.scala 
bafa379ff57bc46458ea8409406f5046dc9c973e 
  core/src/test/scala/unit/kafka/consumer/PartitionAssignorTest.scala 
543070f4fd3e96f3183cae9ee2ccbe843409ee58 
  core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
19640cc55b5baa0a26a808d708b7f4caf491c9f0 

Diff: https://reviews.apache.org/r/31706/diff/


Testing
---


Thanks,

Jiangjie Qin



Re: Review Request 31706: Patch for KAFKA-1997

2015-03-10 Thread Jiangjie Qin


> On March 10, 2015, 7:10 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/tools/MirrorMaker.scala, line 45
> > 
> >
> > I am still a bit concerned about bounding ourselves to one producer per 
> > MM, because scaling MMs for the sake of scaling the producer is kind of a 
> > waste of JVM containers / memory / etc. On the other hand, having one 
> > producer does give us the benefit of simple partition ordering semantics 
> > for the destination cluster.
> > 
> > Maybe we can keep it as is but stay open to increasing the producer number 
> > if we bump into cases where a single ioThread for sending is not 
> > sufficient.

I was worried about this before, too. I think the main reason for having more 
than one producer is to ensure we can reach the best consumer-producer ratio. 
Given that a consumer thread is almost guaranteed to be slower than a producer 
thread, wanting more producers really means one producer is serving too many 
consumer threads, so we could instead run two MMs, each with fewer consumer 
threads. It might cost more resources to run another JVM, but as long as we can 
achieve an appropriate consumer-producer ratio, it should be OK. That said, I 
agree that we can keep one producer for now and bump up the producer count 
later if we need it.


> On March 10, 2015, 7:10 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/tools/MirrorMaker.scala, lines 510-512
> > 
> >
> > Could you use !exitingOnSendFailure only in the outer loop and, in the 
> > onComplete callback, set both booleans to true upon exception?

Ah, this is actually a bug.
We need to check exitingOnSendFailure and shuttingDown in both the inner loop 
and the outer loop.
Normally the program runs only in the inner loop, so the inner loop needs to 
check exitingOnSendFailure, otherwise it will keep sending messages even after 
something has gone wrong; it also needs to check shuttingDown to stop promptly. 
The outer loop needs both for the same reason, except that it does not throw 
ConsumerTimeoutException.
I'm not sure it is a good idea to set the shuttingDown flag in the callback 
directly.
Anyway, this part of the code will change once close with timeout is available 
for the producer. We will then depend only on the shuttingDown flag. In 
onComplete, we should just call cleanShutdown(), and cleanShutdown will check 
exitingOnSendFailure to decide whether to call producer.close() or 
producer.close(-1).
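
In code form, the planned shutdown path would look roughly like this (a hedged 
sketch; the flag name follows the discussion above, and the mapping of the two 
close variants to the branches is an assumption, not committed code):

{code}
// Once producer close with timeout (KAFKA-1660) is available, onComplete
// just calls cleanShutdown(), which picks the close variant:
def cleanShutdown() {
  if (exitingOnSendFailure)
    producer.close(0)    // assumed: abort promptly, dropping queued records
  else
    producer.close(-1)   // assumed: wait as long as needed to drain the queue
}
{code}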


> On March 10, 2015, 7:10 p.m., Guozhang Wang wrote:
> > core/src/main/scala/kafka/tools/MirrorMaker.scala, lines 541-542
> > 
> >
> > Is there a race condition, such that a thread could call 
> > producer.flush() while some other threads are also calling send() to the 
> > producer (in the current implementation flush does not block concurrent 
> > send), such that when it calls commitOffsets there are some offsets that 
> > get committed but not acked yet?

This is a good catch! In the design with the new consumer we have one consumer 
per consumer thread, so the offsets are not shared. But for now the consumer 
threads share one consumer instance, so it has this problem.
I'll just let each mirror maker thread have its own connector, which will be 
our final design as well.


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31706/#review75921
---


On March 10, 2015, 1:55 a.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31706/
> ---
> 
> (Updated March 10, 2015, 1:55 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1997
> https://issues.apache.org/jira/browse/KAFKA-1997
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressed Guozhang's comments.
> 
> 
> Changed the exit behavior on send failure because close(0) is not ready yet. 
> Will submit followup patch after KAFKA-1660 is checked in.
> 
> 
> Expanded imports from _ and * to full class path
> 
> 
> Incorporated Joel's comments.
> 
> 
> Addressed Joel's comments.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  d5c79e2481d5e9a2524ac2ef6a6879f61cb7cb5f 
>   core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
> e6ff7683a0df4a7d221e949767e57c34703d5aad 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> 5487259751ebe19f137948249aa1fd2637d2deb4 
>   core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
> 7f45a90ba6676290172b

Build failed in Jenkins: Kafka-trunk #422

2015-03-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1910; missed follow-up changes

--
[...truncated 1726 lines...]
at kafka.integration.PrimitiveApiTest.setUp(PrimitiveApiTest.scala:40)

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.PrimitiveApiTest.kafka$integration$KafkaServerTestHarness$$super$setUp(PrimitiveApiTest.scala:40)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.PrimitiveApiTest.kafka$integration$ProducerConsumerTestHarness$$super$setUp(PrimitiveApiTest.scala:40)
at 
kafka.integration.ProducerConsumerTestHarness$class.setUp(ProducerConsumerTestHarness.scala:33)
at kafka.integration.PrimitiveApiTest.setUp(PrimitiveApiTest.scala:40)

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZ

Build failed in Jenkins: KafkaPreCommit #35

2015-03-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1910; missed follow-up changes

--
[...truncated 1922 lines...]
at kafka.integration.PrimitiveApiTest.setUp(PrimitiveApiTest.scala:40)

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.PrimitiveApiTest.kafka$integration$KafkaServerTestHarness$$super$setUp(PrimitiveApiTest.scala:40)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.PrimitiveApiTest.kafka$integration$ProducerConsumerTestHarness$$super$setUp(PrimitiveApiTest.scala:40)
at 
kafka.integration.ProducerConsumerTestHarness$class.setUp(ProducerConsumerTestHarness.scala:33)
at kafka.integration.PrimitiveApiTest.setUp(PrimitiveApiTest.scala:40)

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.EmbeddedZookeeper.(EmbeddedZookeeper.scala:33)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:33)
at 
kafka.integration.AutoOffsetResetTest.kafka$integration$KafkaServerTestHarness$$super$setUp(AutoOffsetResetTest.scala:32)
at 
kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:44)
at 
kafka.integration.AutoOffsetResetTest.setUp(AutoOffsetResetTest.scala:46)

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow FAILED
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:95)
at kafka.zk.Embedde

Build failed in Jenkins: Kafka-trunk #421

2015-03-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1910; follow-up on fixing buffer.flip on produce requests

--
[...truncated 466 lines...]

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testBasicTypes PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultRange PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidators PASSED

org.apache.kafka.common.serialization.SerializationTest > testStringSerializer 
PASSED

org.apache.kafka.common.network.SelectorTest > testServerDisconnect PASSED

org.apache.kafka.common.network.SelectorTest > testClientDisconnect PASSED

org.apache.kafka.common.network.SelectorTest > testCantSendWithInProgress PASSED

org.apache.kafka.common.network.SelectorTest > testCantSendWithoutConnecting 
PASSED

org.apache.kafka.common.network.SelectorTest > testNoRouteToHost PASSED

org.apache.kafka.common.network.SelectorTest > testConnectionRefused PASSED

org.apache.kafka.common.network.SelectorTest > testNormalOperation PASSED

org.apache.kafka.common.network.SelectorTest > testSendLargeRequest PASSED

org.apache.kafka.common.network.SelectorTest > testEmptyRequest PASSED

org.apache.kafka.common.network.SelectorTest > testExistingConnectionId PASSED

org.apache.kafka.common.network.SelectorTest > testMute PASSED

org.apache.kafka.common.utils.AbstractIteratorTest > testIterator PASSED

org.apache.kafka.common.utils.AbstractIteratorTest > testEmptyIterator PASSED

org.apache.kafka.common.utils.UtilsTest > testGetHost PASSED

org.apache.kafka.common.utils.UtilsTest > testGetPort PASSED

org.apache.kafka.common.utils.UtilsTest > testFormatAddress PASSED

org.apache.kafka.common.utils.UtilsTest > testJoin PASSED

org.apache.kafka.common.utils.UtilsTest > testAbs PASSED

org.apache.kafka.common.utils.CrcTest > testUpdate PASSED

org.apache.kafka.common.utils.CrcTest > testUpdateInt PASSED

org.apache.kafka.common.metrics.MetricsTest > testMetricName PASSED

org.apache.kafka.common.metrics.MetricsTest > testSimpleStats PASSED

org.apache.kafka.common.metrics.MetricsTest > testHierarchicalSensors PASSED

org.apache.kafka.common.metrics.MetricsTest > testBadSensorHiearchy PASSED

org.apache.kafka.common.metrics.MetricsTest > testEventWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testTimeWindowing PASSED

org.apache.kafka.common.metrics.MetricsTest > testOldDataHasNoEffect PASSED

org.apache.kafka.common.metrics.MetricsTest > testDuplicateMetricName PASSED

org.apache.kafka.common.metrics.MetricsTest > testQuotas PASSED

org.apache.kafka.common.metrics.MetricsTest > testPercentiles PASSED

org.apache.kafka.common.metrics.JmxReporterTest > testJmxRegistration PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testHistogram PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testConstantBinScheme 
PASSED

org.apache.kafka.common.metrics.stats.HistogramTest > testLinearBinScheme PASSED

org.apache.kafka.common.requests.RequestResponseTest > testSerialization PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testSimple 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testNulls 
PASSED

org.apache.kafka.common.protocol.types.ProtocolSerializationTest > testDefault 
PASSED

org.apache.kafka.clients.NetworkClientTest > testReadyAndDisconnect PASSED

org.apache.kafka.clients.NetworkClientTest > testSendToUnreadyNode PASSED

org.apache.kafka.clients.NetworkClientTest > testSimpleRequestResponse PASSED

org.apache.kafka.clients.MetadataTest > testMetadata PASSED

org.apache.kafka.clients.MetadataTest > testMetadataUpdateWaitTime PASSED

org.apache.kafka.clients.ClientUtilsTest > testParseAndValidateAddresses PASSED

org.apache.kafka.clients.ClientUtilsTest > testNoPort PASSED

org.apache.kafka.clients.producer.ProducerRecordTest > testEqualsAndHashCode 
PASSED

org.apache.kafka.clients.producer.MockProducerTest > testAutoCompleteMock PASSED

org.apache.kafka.clients.producer.MockProducerTest > testManualCompletion PASSED

org.apache.kafka.clients.producer.RecordSendTest > testTimeout PASSED

org.apache.kafka.clients.producer.RecordSendTest > testError PASSED

org.apache.kafka.clients.producer.RecordSendTest > testBlocking PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > testSimple PASSED

org.apache.kafka.clients.producer.internals.BufferPoolTest > 
testCantAllocateMoreMem

[jira] [Commented] (KAFKA-1910) Refactor KafkaConsumer

2015-03-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355969#comment-14355969
 ] 

Guozhang Wang commented on KAFKA-1910:
--

Some changes were reverted by accident in the last-pass check; re-committing 
them.

> Refactor KafkaConsumer
> --
>
> Key: KAFKA-1910
> URL: https://issues.apache.org/jira/browse/KAFKA-1910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Attachments: KAFKA-1910.patch, KAFKA-1910_2015-03-05_14:55:33.patch
>
>
> KafkaConsumer now contains all the logic on the consumer side, making it a 
> very huge class file, better re-factoring it to have multiple layers on top 
> of KafkaClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Create a space template for KIP

2015-03-10 Thread Gwen Shapira
Thanks!

On Tue, Mar 10, 2015 at 4:37 PM, Guozhang Wang  wrote:
> Got it.
>
> I have created the space template and modified the instructions on the KIP
> wiki:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
> Guozhang
>
> On Sat, Mar 7, 2015 at 11:05 PM, Jiangjie Qin 
> wrote:
>
>> This is the template I am thinking about:
>> https://confluence.atlassian.com/display/DOC/Working+with+Templates
>>
>> Jiangjie (Becket) Qin
>>
>> On 3/7/15, 1:35 PM, "Guozhang Wang"  wrote:
>>
>> >Jiangjie,
>> >
>> >You can find the template here:
>> >
>> >https://cwiki.apache.org/confluence/display/KAFKA/KIP-Template
>> >
>> >Guozhang
>> >
>> >On Sat, Mar 7, 2015 at 12:35 PM, Jiangjie Qin 
>> >wrote:
>> >
>> >> I am not sure how others are creating KIPs. I just copy/paste the
>> >>template
>> >> page to a blank page.  I think it would be good to add a space template
>> >>under
>> >> Apache Kafka so when creating the child page, people can just use that.
>> >>Is
>> >> it worth doing? I tried but it requires space administrator permission.
>> >>
>> >> Jiangjie (Becket) Qin
>> >>
>> >>
>> >
>> >
>> >--
>> >-- Guozhang
>>
>>
>
>
> --
> -- Guozhang


Re: Review Request 30809: Patch for KAFKA-1888

2015-03-10 Thread Guozhang Wang


> On March 10, 2015, 11:44 p.m., Guozhang Wang wrote:
> >

These are from some old review comments; I will upload the rest of them soon.


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30809/#review73382
---


On March 9, 2015, 11:55 p.m., Abhishek Nigam wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30809/
> ---
> 
> (Updated March 9, 2015, 11:55 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1888
> https://issues.apache.org/jira/browse/KAFKA-1888
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fixing the tests based on Mayuresh comments, code cleanup after proper IDE 
> setup
> 
> 
> Diffs
> -
> 
>   build.gradle 0f0fe60a74542efa91a0e727146e896edcaa38af 
>   core/src/main/scala/kafka/tools/ContinuousValidationTest.java PRE-CREATION 
>   system_test/broker_upgrade/bin/kafka-run-class.sh PRE-CREATION 
>   system_test/broker_upgrade/bin/test.sh PRE-CREATION 
>   system_test/broker_upgrade/configs/server1.properties PRE-CREATION 
>   system_test/broker_upgrade/configs/server2.properties PRE-CREATION 
>   system_test/broker_upgrade/configs/zookeeper_source.properties PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/30809/diff/
> 
> 
> Testing
> ---
> 
> Scripted it to run 20 times without any failures.
> Command-line: broker-upgrade/bin/test.sh  
> 
> 
> Thanks,
> 
> Abhishek Nigam
> 
>



Re: Review Request 30809: Patch for KAFKA-1888

2015-03-10 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30809/#review73382
---



build.gradle


Additional dependency:



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


Better to have specific imports than wildcards.



core/src/main/scala/kafka/tools/ContinuousValidationTest.java


"..Test-" + time?


- Guozhang Wang


On March 9, 2015, 11:55 p.m., Abhishek Nigam wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30809/
> ---
> 
> (Updated March 9, 2015, 11:55 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1888
> https://issues.apache.org/jira/browse/KAFKA-1888
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Fixing the tests based on Mayuresh comments, code cleanup after proper IDE 
> setup
> 
> 
> Diffs
> -
> 
>   build.gradle 0f0fe60a74542efa91a0e727146e896edcaa38af 
>   core/src/main/scala/kafka/tools/ContinuousValidationTest.java PRE-CREATION 
>   system_test/broker_upgrade/bin/kafka-run-class.sh PRE-CREATION 
>   system_test/broker_upgrade/bin/test.sh PRE-CREATION 
>   system_test/broker_upgrade/configs/server1.properties PRE-CREATION 
>   system_test/broker_upgrade/configs/server2.properties PRE-CREATION 
>   system_test/broker_upgrade/configs/zookeeper_source.properties PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/30809/diff/
> 
> 
> Testing
> ---
> 
> Scripted it to run 20 times without any failures.
> Command-line: broker-upgrade/bin/test.sh  
> 
> 
> Thanks,
> 
> Abhishek Nigam
> 
>



Re: Create a space template for KIP

2015-03-10 Thread Guozhang Wang
Got it.

I have created the space template and modified the instructions on the KIP
wiki:

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals

Guozhang

On Sat, Mar 7, 2015 at 11:05 PM, Jiangjie Qin 
wrote:

> This is the template I am thinking about:
> https://confluence.atlassian.com/display/DOC/Working+with+Templates
>
> Jiangjie (Becket) Qin
>
> On 3/7/15, 1:35 PM, "Guozhang Wang"  wrote:
>
> >Jiangjie,
> >
> >You can find the template here:
> >
> >https://cwiki.apache.org/confluence/display/KAFKA/KIP-Template
> >
> >Guozhang
> >
> >On Sat, Mar 7, 2015 at 12:35 PM, Jiangjie Qin 
> >wrote:
> >
> >> I am not sure how others are creating KIPs. I just copy/paste the
> >>template
> >> page to a blank page.  I think it would be good to add a space template
> >>under
> >> Apache Kafka so when creating the child page, people can just use that.
> >>Is
> >> it worth doing? I tried but it requires space administrator permission.
> >>
> >> Jiangjie (Becket) Qin
> >>
> >>
> >
> >
> >--
> >-- Guozhang
>
>


-- 
-- Guozhang


Build failed in Jenkins: KafkaPreCommit #34

2015-03-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1910; follow-up on fixing buffer.flip on produce requests

--
[...truncated 466 lines...]
org.apache.kafka.common.record.RecordTest > testChecksum[25] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[25] PASSED

org.apache.kafka.common.record.RecordTest > testFields[26] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[26] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[26] PASSED

org.apache.kafka.common.record.RecordTest > testFields[27] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[27] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[27] PASSED

org.apache.kafka.common.record.RecordTest > testFields[28] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[28] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[28] PASSED

org.apache.kafka.common.record.RecordTest > testFields[29] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[29] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[29] PASSED

org.apache.kafka.common.record.RecordTest > testFields[30] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[30] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[30] PASSED

org.apache.kafka.common.record.RecordTest > testFields[31] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[31] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[31] PASSED

org.apache.kafka.common.record.RecordTest > testFields[32] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[32] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[32] PASSED

org.apache.kafka.common.record.RecordTest > testFields[33] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[33] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[33] PASSED

org.apache.kafka.common.record.RecordTest > testFields[34] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[34] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[34] PASSED

org.apache.kafka.common.record.RecordTest > testFields[35] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[35] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[35] PASSED

org.apache.kafka.common.record.RecordTest > testFields[36] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[36] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[36] PASSED

org.apache.kafka.common.record.RecordTest > testFields[37] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[37] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[37] PASSED

org.apache.kafka.common.record.RecordTest > testFields[38] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[38] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[38] PASSED

org.apache.kafka.common.record.RecordTest > testFields[39] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[39] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[39] PASSED

org.apache.kafka.common.record.RecordTest > testFields[40] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[40] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[40] PASSED

org.apache.kafka.common.record.RecordTest > testFields[41] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[41] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[41] PASSED

org.apache.kafka.common.record.RecordTest > testFields[42] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[42] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[42] PASSED

org.apache.kafka.common.record.RecordTest > testFields[43] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[43] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[43] PASSED

org.apache.kafka.common.record.RecordTest > testFields[44] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[44] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[44] PASSED

org.apache.kafka.common.record.RecordTest > testFields[45] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[45] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[45] PASSED

org.apache.kafka.common.record.RecordTest > testFields[46] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[46] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[46] PASSED

org.apache.kafka.common.record.RecordTest > testFields[47] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[47] PASSED

org.apache.kafka.common.record.RecordTest > testEquality[47] PASSED

org.apache.kafka.common.record.RecordTest > testFields[48] PASSED

org.apache.kafka.common.record.RecordTest > testChecksum[48] PAS

[jira] [Commented] (KAFKA-2011) Rebalance with auto.leader.rebalance.enable=false

2015-03-10 Thread K Zakee (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355936#comment-14355936
 ] 

K Zakee commented on KAFKA-2011:


The preferred replica leader election occurred again today; below are the logs 
I see during that time. 
---
[2015-03-10 13:51:08,834] INFO Client session timed out, have not heard from 
server in 16526ms for sessionid 0x24bf1b6f531006b, closing socket connection 
and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2015-03-10 13:51:09,414] INFO Socket connection established to {host/ip}:2181, 
initiating session (org.apache.zookeeper.ClientCnxn)
[2015-03-10 13:51:09,415] INFO Unable to reconnect to ZooKeeper service, 
session 0x24bf1b6f531006b has expired, closing socket connection 
(org.apache.zookeeper.ClientCnxn)
[2015-03-10 13:51:09,458] INFO Socket connection established to {host/ip}:2181, 
initiating session (org.apache.zookeeper.ClientCnxn)
[2015-03-10 13:51:09,730] INFO Session establishment complete on server 
{host/ip}:2181, sessionid = 0x14bf1b6f53301b7, negotiated timeout = 6000 
(org.apache.zookeeper.ClientCnxn)
---

Not sure why a ZK session timeout should trigger controller migration. 

> Rebalance with auto.leader.rebalance.enable=false 
> --
>
> Key: KAFKA-2011
> URL: https://issues.apache.org/jira/browse/KAFKA-2011
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
> Environment: 5 Hosts of below config:
> "x86_64" "32-bit, 64-bit" "Little Endian" "24 GenuineIntel CPUs Model 44 
> 1600.000MHz" "RAM 189 GB" GNU/Linux
>Reporter: K Zakee
>Priority: Blocker
> Attachments: controller-logs-1.zip, controller-logs-2.zip
>
>
> Started with clean cluster 0.8.2 with 5 brokers. Setting the properties as 
> below:
> auto.leader.rebalance.enable=false
> controlled.shutdown.enable=true
> controlled.shutdown.max.retries=1
> controlled.shutdown.retry.backoff.ms=5000
> default.replication.factor=3
> log.cleaner.enable=true
> log.cleaner.threads=5
> log.cleanup.policy=delete
> log.flush.scheduler.interval.ms=3000
> log.retention.minutes=1440
> log.segment.bytes=1073741824
> message.max.bytes=100
> num.io.threads=14
> num.network.threads=14
> num.partitions=10
> queued.max.requests=500
> num.replica.fetchers=4
> replica.fetch.max.bytes=1048576
> replica.fetch.min.bytes=51200
> replica.lag.max.messages=5000
> replica.lag.time.max.ms=3
> replica.fetch.wait.max.ms=1000
> fetch.purgatory.purge.interval.requests=5000
> producer.purgatory.purge.interval.requests=5000
> delete.topic.enable=true
> Logs show rebalance happening well up to 24 hours after the start.
> [2015-03-07 16:52:48,969] INFO [Controller 2]: Resuming preferred replica 
> election for partitions:  (kafka.controller.KafkaController)
> [2015-03-07 16:52:48,969] INFO [Controller 2]: Partitions that completed 
> preferred replica election:  (kafka.controller.KafkaController)
> …
> [2015-03-07 12:07:06,783] INFO [Controller 4]: Resuming preferred replica 
> election for partitions:  (kafka.controller.KafkaController)
> ...
> [2015-03-07 09:10:41,850] INFO [Controller 3]: Resuming preferred replica 
> election for partitions:  (kafka.controller.KafkaController)
> ...
> [2015-03-07 08:26:56,396] INFO [Controller 1]: Starting preferred replica 
> leader election for partitions  (kafka.controller.KafkaController)
> ...
> [2015-03-06 16:52:59,506] INFO [Controller 2]: Partitions undergoing 
> preferred replica election:  (kafka.controller.KafkaController)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1888) Add a "rolling upgrade" system test

2015-03-10 Thread Abhishek Nigam (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355921#comment-14355921
 ] 

Abhishek Nigam commented on KAFKA-1888:
---

Hi Gwen,
I am not sure why the link is not showing up. Here you go:
https://reviews.apache.org/r/30809/

> Add a "rolling upgrade" system test
> ---
>
> Key: KAFKA-1888
> URL: https://issues.apache.org/jira/browse/KAFKA-1888
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Gwen Shapira
>Assignee: Abhishek Nigam
> Fix For: 0.9.0
>
>
> To help test upgrades and compatibility between versions, it will be cool to 
> add a rolling-upgrade test to system tests:
> Given two versions (just a path to the jars?), check that you can do a
> rolling upgrade of the brokers from one version to another (using clients 
> from the old version) without losing data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1910) Refactor KafkaConsumer

2015-03-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355916#comment-14355916
 ] 

Guozhang Wang commented on KAFKA-1910:
--

Just checked in the fix, which resolves the root cause of these failures.

> Refactor KafkaConsumer
> --
>
> Key: KAFKA-1910
> URL: https://issues.apache.org/jira/browse/KAFKA-1910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Attachments: KAFKA-1910.patch, KAFKA-1910_2015-03-05_14:55:33.patch
>
>
> KafkaConsumer now contains all the logic on the consumer side, making it a 
> very huge class file, better re-factoring it to have multiple layers on top 
> of KafkaClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1910) Refactor KafkaConsumer

2015-03-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355739#comment-14355739
 ] 

Guozhang Wang commented on KAFKA-1910:
--

Jun, I will take a look at it now.

> Refactor KafkaConsumer
> --
>
> Key: KAFKA-1910
> URL: https://issues.apache.org/jira/browse/KAFKA-1910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Attachments: KAFKA-1910.patch, KAFKA-1910_2015-03-05_14:55:33.patch
>
>
> KafkaConsumer now contains all the logic on the consumer side, making it a 
> very huge class file, better re-factoring it to have multiple layers on top 
> of KafkaClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31850: Patch for KAFKA-1660

2015-03-10 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31850/#review75968
---

Ship it!


LGTM. Just a minor comment on one of your replies.

- Guozhang Wang


On March 9, 2015, 7:56 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31850/
> ---
> 
> (Updated March 9, 2015, 7:56 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1660
> https://issues.apache.org/jira/browse/KAFKA-1660
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> A minor fix.
> 
> 
> Incorporated Guozhang's comments.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 7397e565fd865214529ffccadd4222d835ac8110 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 6913090af03a455452b0b5c3df78f266126b3854 
>   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
> 5b3e75ed83a940bc922f9eca10d4008d67aa37c9 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  d5c79e2481d5e9a2524ac2ef6a6879f61cb7cb5f 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> ed9c63a6679e3aaf83d19fde19268553a4c107c2 
>   
> clients/src/main/java/org/apache/kafka/common/errors/InterruptException.java 
> fee322fa0dd9704374db4a6964246a7d2918d3e4 
>   clients/src/main/java/org/apache/kafka/common/serialization/Serializer.java 
> c2fdc23239bd2196cd912c3d121b591f21393eab 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/RecordAccumulatorTest.java
>  c1bc40648479d4c2ae4ac52f40dadc070a6bcf6f 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> 3df450784592b894008e7507b2737f9bb07f7bd2 
> 
> Diff: https://reviews.apache.org/r/31850/diff/
> 
> 
> Testing
> ---
> 
> Unit tests passed.
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



[jira] [Commented] (KAFKA-1910) Refactor KafkaConsumer

2015-03-10 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355640#comment-14355640
 ] 

Jun Rao commented on KAFKA-1910:


The most recent Jenkins run has the following two unit test failures. Are they 
due to this commit?

kafka.api.ProducerFailureHandlingTest > testBrokerFailure FAILED
java.lang.AssertionError: Should have fetched 128000 unique messages 
expected:<128000> but was:<127915>
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.failNotEquals(Assert.java:689)
at org.junit.Assert.assertEquals(Assert.java:127)
at org.junit.Assert.assertEquals(Assert.java:514)
at 
kafka.api.ProducerFailureHandlingTest.testBrokerFailure(ProducerFailureHandlingTest.scala:301)

kafka.api.ConsumerTest > testConsumptionWithBrokerFailures FAILED
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
server does not host this topic-partition.


> Refactor KafkaConsumer
> --
>
> Key: KAFKA-1910
> URL: https://issues.apache.org/jira/browse/KAFKA-1910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Attachments: KAFKA-1910.patch, KAFKA-1910_2015-03-05_14:55:33.patch
>
>
> KafkaConsumer now contains all the logic on the consumer side, making it a 
> very huge class file, better re-factoring it to have multiple layers on top 
> of KafkaClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 31850: Patch for KAFKA-1660

2015-03-10 Thread Guozhang Wang


> On March 9, 2015, 6:37 p.m., Guozhang Wang wrote:
> > clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java, 
> > line 418
> > 
> >
> > This is not related to this ticket, but I think we can just throw e 
> > here as well.
> 
> Jiangjie Qin wrote:
> The InterruptException is a RuntimeException wrapper around 
> InterruptedException (a little bit confusing...). I think it was 
> intentionally thrown InterruptException instead of InterruptedException here.

This is very confusing; I am wondering why we have to throw a checked exception 
here. Anyway, if we stick with InterruptException, I think it is better to just 
throw the exception with e.message if it is null, without wrapping the 
original exception.
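
For reference, a sketch of the wrapper pattern in question; this mirrors the 
shape of org.apache.kafka.common.errors.InterruptException but is illustrative, 
not the actual source:

class InterruptExceptionSketch(cause: InterruptedException)
    extends RuntimeException(cause) {
  // Restore the interrupt status that the checked InterruptedException
  // carried, since the unchecked wrapper lets callers skip handling it.
  Thread.currentThread().interrupt()
}

// Typical use at a blocking API boundary:
// try { latch.await() }
// catch { case e: InterruptedException => throw new InterruptExceptionSketch(e) }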


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31850/#review75731
---


On March 9, 2015, 7:56 p.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31850/
> ---
> 
> (Updated March 9, 2015, 7:56 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1660
> https://issues.apache.org/jira/browse/KAFKA-1660
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> A minor fix.
> 
> 
> Incorporated Guozhang's comments.
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 7397e565fd865214529ffccadd4222d835ac8110 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> 6913090af03a455452b0b5c3df78f266126b3854 
>   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
> 5b3e75ed83a940bc922f9eca10d4008d67aa37c9 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  d5c79e2481d5e9a2524ac2ef6a6879f61cb7cb5f 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Sender.java 
> ed9c63a6679e3aaf83d19fde19268553a4c107c2 
>   
> clients/src/main/java/org/apache/kafka/common/errors/InterruptException.java 
> fee322fa0dd9704374db4a6964246a7d2918d3e4 
>   clients/src/main/java/org/apache/kafka/common/serialization/Serializer.java 
> c2fdc23239bd2196cd912c3d121b591f21393eab 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/internals/RecordAccumulatorTest.java
>  c1bc40648479d4c2ae4ac52f40dadc070a6bcf6f 
>   core/src/test/scala/integration/kafka/api/ProducerSendTest.scala 
> 3df450784592b894008e7507b2737f9bb07f7bd2 
> 
> Diff: https://reviews.apache.org/r/31850/diff/
> 
> 
> Testing
> ---
> 
> Unit tests passed.
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



Build failed in Jenkins: Kafka-trunk #420

2015-03-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1910; Refactor new consumer and fixed a bunch of corner cases 
/ unit tests; reviewed by Onur Karaman and Jay Kreps

--
[...truncated 1120 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegm

Re: Review Request 31706: Patch for KAFKA-1997

2015-03-10 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31706/#review75921
---



core/src/main/scala/kafka/consumer/PartitionAssignor.scala


Could we just import collection.mutable.HashMap, since there seem to be no Java 
HashMap conflicts here?



core/src/main/scala/kafka/consumer/PartitionAssignor.scala


Ditto.



core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala


Ditto.



core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala


Could we change it to "Partition assignor returns the global partition 
assignment organized as a map of [TopicPartition, ThreadId] per consumer, and 
we need to re-organize it to a map of [Partition, ThreadId] per topic before 
passing to the rebalance callback".
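
For example, an illustrative sketch of that re-organization, using a stand-in 
case class rather than the real kafka.common.TopicAndPartition:

case class TopicAndPartitionSketch(topic: String, partition: Int)

// Regroup the global [topic-partition -> threadId] assignment into a
// per-topic [partition -> threadId] map for the rebalance callback.
def regroupByTopic(assignment: Map[TopicAndPartitionSketch, String])
    : Map[String, Map[Int, String]] =
  assignment.groupBy { case (tp, _) => tp.topic }.map {
    case (topic, perTopic) =>
      topic -> perTopic.map { case (tp, threadId) => tp.partition -> threadId }
  }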



core/src/main/scala/kafka/tools/MirrorMaker.scala


I am still a bit concerned about bounding ourselves with one producer per 
MM, because scaling MMs for the sake of scaling the producer is kind of a waste 
of JVM containers / memory / etc. On the other hand, having one producer does 
give us the benefit of simple partition ordering semantics for the destination 
cluster.

Maybe we can keep it as is but stay open to increasing the producer number if 
we bump into cases where a single ioThread for sending is not sufficient.



core/src/main/scala/kafka/tools/MirrorMaker.scala


There is no offset commit thread anymore, right?



core/src/main/scala/kafka/tools/MirrorMaker.scala


Capitalized "C"



core/src/main/scala/kafka/tools/MirrorMaker.scala


Why add ";"?



core/src/main/scala/kafka/tools/MirrorMaker.scala


Could you use !exitingOnSendFailure only in the outer loop, and in the 
onComplete callback, setting both boolean to true upon exception?



core/src/main/scala/kafka/tools/MirrorMaker.scala


Is there a race condition, such that a thread could call producer.flush() 
while some other threads are also calling send() to the producer (in the 
current implementation flush does not block concurrent send), such that when it 
calls commitOffsets there are some offsets that get committed but not acked yet?



core/src/main/scala/kafka/tools/MirrorMaker.scala


Is there another potential race condition, as concurrent threads could call 
send() while flush() is in progress?



core/src/main/scala/kafka/tools/MirrorMaker.scala


Could we use Collections.singletonList here?


- Guozhang Wang


On March 10, 2015, 1:55 a.m., Jiangjie Qin wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31706/
> ---
> 
> (Updated March 10, 2015, 1:55 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1997
> https://issues.apache.org/jira/browse/KAFKA-1997
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressed Guozhang's comments.
> 
> 
> Changed the exit behavior on send failure because close(0) is not ready yet. 
> Will submit followup patch after KAFKA-1660 is checked in.
> 
> 
> Expanded imports from _ and * to full class path
> 
> 
> Incorporated Joel's comments.
> 
> 
> Addressed Joel's comments.
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java
>  d5c79e2481d5e9a2524ac2ef6a6879f61cb7cb5f 
>   core/src/main/scala/kafka/consumer/PartitionAssignor.scala 
> e6ff7683a0df4a7d221e949767e57c34703d5aad 
>   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
> 5487259751ebe19f137948249aa1fd2637d2deb4 
>   core/src/main/scala/kafka/javaapi/consumer/ConsumerRebalanceListener.java 
> 7f45a90ba6676290172b7da54c15ee5dc1a42a2e 
>   core/src/main/scala/kafka/tools/MirrorMaker.scala 
> bafa379ff57bc46458ea8409406f5046dc9c973e 
>   core/src/test/scala/unit/kafka/consumer/PartitionAssignorTest.scala 
> 543070f4fd3e96f3183cae9ee2ccbe843409ee58 
>   
> core/src/test/scala/unit/kafka/consumer/ZookeeperConsumerConnectorTest.scala 
> 19640cc55b5baa0a26a808d708b7f4caf491c9f0 
> 
> Diff: https://reviews.apache.org/r/31706/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jiangjie Qin
> 
>



Build failed in Jenkins: KafkaPreCommit #33

2015-03-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-1910; Refactor new consumer and fixed a bunch of corner cases 
/ unit tests; reviewed by Onur Karaman and Jay Kreps

--
[...truncated 1193 lines...]
kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.LogConfigTest > testFromPropsDefaults PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testFromPropsToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testAppendAndReadWithNonSequentialOffsets PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testCompressedMessages PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testLogRecoversToCorrectOffset PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testBogusIndexSegmentsAreRemoved PASSED

kafka.log.LogTest > testReopenThenTruncate PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testParseTopicPartitionNameForNull PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingSeparator PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentG

[jira] [Resolved] (KAFKA-1975) testGroupConsumption occasionally hang

2015-03-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-1975.
--
Resolution: Fixed

> testGroupConsumption occasionally hang
> --
>
> Key: KAFKA-1975
> URL: https://issues.apache.org/jira/browse/KAFKA-1975
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.3
>Reporter: Jun Rao
> Attachments: stack.out
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1910) Refactor KafkaConsumer

2015-03-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355388#comment-14355388
 ] 

Guozhang Wang commented on KAFKA-1910:
--

Thanks for the reviews, committed to trunk. Also closing relevant tickets on 
unit tests.

> Refactor KafkaConsumer
> --
>
> Key: KAFKA-1910
> URL: https://issues.apache.org/jira/browse/KAFKA-1910
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Attachments: KAFKA-1910.patch, KAFKA-1910_2015-03-05_14:55:33.patch
>
>
> KafkaConsumer now contains all the logic on the consumer side, making it a 
> very huge class file, better re-factoring it to have multiple layers on top 
> of KafkaClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1948) kafka.api.consumerTests are hanging

2015-03-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1948:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> kafka.api.consumerTests are hanging
> ---
>
> Key: KAFKA-1948
> URL: https://issues.apache.org/jira/browse/KAFKA-1948
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Guozhang Wang
> Attachments: KAFKA-1948.patch
>
>
> Noticed today that very often when I run the full test suite, it hangs on 
> kafka.api.consumerTest (not always the same test though). It doesn't reproduce 
> 100% of the time, but enough to be very annoying.
> I also saw it happening on trunk after KAFKA-1333:
> https://builds.apache.org/view/All/job/Kafka-trunk/389/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1964) testPartitionReassignmentCallback hangs occasionally

2015-03-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-1964.
--
Resolution: Fixed
  Assignee: Guozhang Wang

>  testPartitionReassignmentCallback hangs occasionally
> -
>
> Key: KAFKA-1964
> URL: https://issues.apache.org/jira/browse/KAFKA-1964
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.3
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>  Labels: newbie++
> Attachments: stack.out
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1930) Move server over to new metrics library

2015-03-10 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355335#comment-14355335
 ] 

Joe Stein commented on KAFKA-1930:
--

+1 to KIP. I think one thing that is going to be important for folks is "how do 
I use my existing trusted reporter library with the new version" or "how do I 
change it so it works". I don't think the project should have to develop 
Riemann, statsd, graphite, ganglia, etc. reporters, but we should make it as 
easy as possible to ramp for folks doing this today.
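 
One low-friction path is a small adapter: the new client-side metrics package 
exposes a MetricsReporter plug-in interface that is told about every metric. A 
minimal sketch in Java, assuming only that interface; the printf is a stand-in 
for whatever graphite/statsd/etc. library you already trust, and 
"metrics.prefix" is a made-up config key:

import java.util.List;
import java.util.Map;

import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

// Forwards Kafka metrics to an existing reporting pipeline.
public class ForwardingReporter implements MetricsReporter {

    private String prefix;

    @Override
    public void configure(Map<String, ?> configs) {
        // "metrics.prefix" is hypothetical, not a real Kafka config key
        Object p = configs.get("metrics.prefix");
        this.prefix = (p == null) ? "kafka" : p.toString();
    }

    @Override
    public void init(List<KafkaMetric> metrics) {
        for (KafkaMetric metric : metrics)
            forward(metric);
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        forward(metric); // called whenever a metric is added or updated
    }

    private void forward(KafkaMetric metric) {
        // swap this for a call into your trusted reporter library
        System.out.printf("%s.%s.%s = %f%n", prefix,
                metric.metricName().group(), metric.metricName().name(),
                metric.value());
    }

    @Override
    public void close() {
        // release any resources held by the underlying reporter here
    }
}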

> Move server over to new metrics library
> ---
>
> Key: KAFKA-1930
> URL: https://issues.apache.org/jira/browse/KAFKA-1930
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>
> We are using org.apache.kafka.common.metrics on the clients, but using Coda 
> Hale metrics on the server. We should move the server over to the new metrics 
> package as well. This will help to make all our metrics self-documenting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1930) Move server over to new metrics library

2015-03-10 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355329#comment-14355329
 ] 

Gwen Shapira commented on KAFKA-1930:
-

Regardless, if we decide to switch the server to Kafka-Metrics, I think we need 
a KIP. Metrics are a public API, and we would potentially be changing lots of 
them.


> Move server over to new metrics library
> ---
>
> Key: KAFKA-1930
> URL: https://issues.apache.org/jira/browse/KAFKA-1930
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>
> We are using org.apache.kafka.common.metrics on the clients, but using Coda 
> Hale metrics on the server. We should move the server over to the new metrics 
> package as well. This will help to make all our metrics self-documenting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1461) Replica fetcher thread does not implement any back-off behavior

2015-03-10 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355301#comment-14355301
 ] 

Jun Rao commented on KAFKA-1461:


[~guozhang], my concern is with the implementation of the DelayedItem. If you 
create a bunch of DelayedItems with the same timeout, they may time out slightly 
differently since the calculation depends on the current time, which can 
change. In the second case, when the leaders are moved one at a time, what's 
going to happen is that the controller will tell the broker to move to the 
right leader right away. This typically happens within a few milliseconds. We 
could optimize this case, but I am not sure it's worth the extra complexity in 
the code. In the first case, the remaining shutdown process could take seconds 
after the socket server is shut down, so backing off will definitely help.

Perhaps we can just do a simple experiment with controlled shutdown and see how 
serious the issue is w/o backing off.
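 
For context, the sensitivity Jun describes falls out of fixing the due time at 
construction. A minimal Java sketch of the pattern (Kafka's actual DelayedItem 
is a Scala class; this is only an illustration):

import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Two items built with the same delayMs a few milliseconds apart get
// different absolute due times, so a DelayQueue releases them at
// slightly different moments -- the behavior discussed above.
public class DelayedItem implements Delayed {

    private final long dueMs;

    public DelayedItem(long delayMs) {
        this.dueMs = System.currentTimeMillis() + delayMs; // depends on "now"
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(dueMs - System.currentTimeMillis(),
                            TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                            other.getDelay(TimeUnit.MILLISECONDS));
    }
}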

> Replica fetcher thread does not implement any back-off behavior
> ---
>
> Key: KAFKA-1461
> URL: https://issues.apache.org/jira/browse/KAFKA-1461
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.1.1
>Reporter: Sam Meder
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1461.patch
>
>
> The current replica fetcher thread will retry in a tight loop if any error 
> occurs during the fetch call. For example, we've seen cases where the fetch 
> continuously throws a connection refused exception leading to several replica 
> fetcher threads that spin in a pretty tight loop.
> To a much lesser degree this is also an issue in the consumer fetcher thread, 
> although the fact that erroring partitions are removed so a leader can be 
> re-discovered helps some.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [KIP-DISCUSSION] KIP-13 Quotas

2015-03-10 Thread Todd Palino
Thanks, Jay. On the interface, I agree with Aditya (and you, I believe)
that we don't need to expose the public API contract at this time, but
structuring the internal logic to allow for it later with low cost is a
good idea.

Glad you explained the thoughts on where to hold requests. While my gut
reaction is to not like processing a produce request that is over quota, it
makes sense to do it that way if you are going to have your quota action be
a delay.

On the delay, I see your point on the bootstrap cases. However, one of the
places I differ, and part of the reason that I prefer the error, is that I
would never allow a producer who is over quota to resend a produce request.
A producer should identify itself at the start of its connection, and at
that point, if it is over quota, the broker would return an error and close
the connection. The same goes for a consumer. I'm a fan, in general, of
pushing all error cases and their handling down to the client and doing as
little special work to accommodate those cases on the broker side as possible.

A case to consider here is what does this mean for REST endpoints to Kafka?
Are you going to hold the HTTP connection open as well? Is the endpoint
going to queue and hold requests?

I think the point that we can only delay as long as the producer's timeout
is a valid one, especially given that we do not have any means for the
broker and client to negotiate settings, whether that is timeouts or
message sizes or anything else. There are a lot of things that you have to
know when setting up a Kafka client about what your settings should be, much
of which should be provided for in the protocol handshake. It's
not as critical in an environment like ours, where we have central
configuration for most clients, but we still see issues with it. I think
being able to have the client and broker negotiate a minimum timeout
allowed would make the delay more palatable.

I'm still not sure it's the right solution, or that we're not just going
with what's fast and cheap as opposed to what is good (or right). But given
the details of where to hold the request, I have less of a concern with the
burden on the broker.

-Todd


On Mon, Mar 9, 2015 at 5:01 PM, Jay Kreps  wrote:

> Hey Todd,
>
> Nice points, let me try to respond:
>
> Plugins
>
> Yeah let me explain what I mean about plugins. The contracts that matter
> for us are public contracts, i.e. the protocol, the binary format, stuff in
> zk, and the various plug-in apis we expose. Adding an internal interface
> later will not be hard--the quota check is going to be done in 2-6 places
> in the code which would need to be updated, all internal to the broker.
>
> The challenge with making things pluggable up front is that the policy is
> usually fairly trivial to plug in but each policy requires different
> inputs--the number of partitions, different metrics, etc. Once we give a
> public api it is hard to add to it without breaking the original contract
> and hence breaking everyone's plugins. So if we do this we want to get it
> right early if possible. In any case I think whether we want to design a
> pluggable api or just improve a single implementation, the work we need to
> do is the same: brainstorm the set of use cases the feature has and then
> figure out the gap in our proposed implementation that leaves some use case
> uncovered. Once we have these specific cases we can try to figure out if
> that could be solved with a plugin or by improving our default proposal.
>
> Enforcement
>
> I started out arguing your side (immediate error), but I have switched to
> preferring delay. Here is what convinced me, let's see if it moves you.
>
> First, the delay quota need not hold onto any request data. The produce
> response can be delayed after the request is completed and the fetch can be
> delayed prior to the fetch being executed. So no state needs to be
> maintained in memory, other than a single per-connection token. This is a
> really important implementation detail for large scale usage that I didn't
> realize at first. I would agree that maintaining a request per connection
> in memory is a non-starter for an environment with 10s of thousands of
> connections.
>
> The second argument is that I think this really expands the use cases where
> the quotas can be applicable.
>
> The use case I have heard people talk about is event collection from apps.
> In this use case the app is directly sending data at a more or less steady
> state and never really has a performance spike unless the app has a bug or
> the application itself experiences more traffic. So in this case you should
> actually never hit the quota, and if you do, the data is going to be
> dropped, whether it is dropped by the server with an error or by the client.
> These use cases will never block the app (which would be dangerous) since
> the client is always non-blocking and drops data when its buffer is full
> rather than blocking. I agree that for this use case e
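 
The arithmetic behind the delay option is small enough to sketch. A hedged 
Java illustration, assuming a per-window byte-rate measurement; this is just 
the shape of the idea, not the final KIP-13 formula: hold the response long 
enough that the bytes already accepted, spread over the window plus the delay, 
fall back to the quota rate.

public class ThrottleMath {

    // observedRate and quota in bytes/sec; windowMs is the measurement window.
    static long throttleTimeMs(double observedRate, double quota, long windowMs) {
        if (observedRate <= quota)
            return 0L;
        // bytes in window = observedRate * window; stretch the window so that
        // observedRate * window / (window + delay) == quota
        return (long) ((observedRate / quota - 1.0) * windowMs);
    }

    public static void main(String[] args) {
        // e.g. 50% over quota in a 10s window -> hold the response ~5s
        System.out.println(throttleTimeMs(15000000, 10000000, 10000));
    }
}

Note that nothing is buffered while waiting: the delay is applied to the 
response (produce) or before execution (fetch), so the only per-connection 
state is the computed wait.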

Re: adding replicas to existing topic partitions - KAFKA-1313

2015-03-10 Thread Neha Narkhede
As Ewen suggests, you might want to take a look at KIP-6. The only thing to be
careful about is not moving data for partitions other than those belonging to
the topics whose replication factor is being changed by the tool.

It seems logical to expose this via the kafka-topics --alter option.

Thanks,
Neha





On Mon, Mar 9, 2015 at 9:55 PM, Ewen Cheslack-Postava 
wrote:

> Geoff,
>
> First, if you haven't already, take a look at
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-6+-+New+reassignment+partition+logic+for+rebalancing
> and the associated JIRA.
>
> My opinion is that we shouldn't do anything to the existing partitions --
> it would probably violate the principle of least surprise if you started
> moving data around when all I asked you to do was add new partitions.
>
> I think this aligns well with the basic approach of KIP 6 (which is not
> approved yet, but I think folks are generally in favor of once it is fully
> specified). The goal is to move as little data as possible, but achieve the
> best balance. I think if the user wanted any *other* partitions rebalanced,
> they should use the reassignment tool, which KIP 6 will hopefully make a
> lot easier soon. But at a minimum, adding the new partitions should try not
> to make the balance any worse than it already is.
>
> Not sure what this means for fixing the JIRA though -- reusing what KIP 6
> is doing but applying it to the new partitions seems like the ideal
> solution. The current createTopic code doesn't take into account existing
> assignments, but I don't think it makes sense to duplicate effort between
> the two patches. I'm not sure what the timeframe of KIP 6 is, but if it's not
> too far away you might just mark it as blocking KAFKA-1313, prepare a patch
> against KAFKA-1792's latest version, and maybe update KAFKA-1313's fix
> version to 0.8.3 to match KAFKA-1792's.
>
> -Ewen
>
> On Mon, Mar 9, 2015 at 3:14 PM, Geoffrey Anderson 
> wrote:
>
> > Hi dev list, I have a few questions regarding KAFKA-1313 (
> > https://issues.apache.org/jira/browse/KAFKA-1313)
> >
> > It is currently possible, albeit in a somewhat hacky way, to increase the
> > replication factor on a topic by using the partition reassignment tool with
> > a hand-crafted reassignment plan.
> >
> > The preferred approach to making this better is to add support for
> > replication-factor in the alter-topic command. For example,
> >
> > bin/kafka-topics.sh --zookeeper localhost:2181 --topic test --alter
> > --replication-factor 2
> >
> >
> > Another approach is to use the partition reassignment tool to auto-generate
> > a reassignment plan which results in increased replication.
> >
> > The advantage of the alter topics approach is that it seems logical to
> > group this functionality with the other alter topic stuff. However, a
> > couple questions:
> > - Should partitions automatically be reassigned to brokers and shuffled
> > when the replication factor is changed?
> > - If so, it would be best to be able to update the replication factor of
> > multiple topics at once so that the automatic reassignment would take place
> > only once for the whole batch of changes.
> >
> > One advantage of the partition reassignment approach is that it's easier
> > to generate a reassignment plan which updates multiple topics at once, and
> > you have a chance to review the reassignment plan before putting it into
> > effect with the --execute flag.
> >
> > Thanks,
> > Geoff
> >
>
>
>
> --
> Thanks,
> Ewen
>



-- 
Thanks,
Neha
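 
For reference, the "hand-crafted reassignment plan" mentioned in this thread 
is a JSON file handed to the reassignment tool; listing more replicas than a 
partition currently has is what raises its replication factor. The topic name 
and broker ids below are made up:

{"version": 1,
 "partitions": [
   {"topic": "test", "partition": 0, "replicas": [1, 2]},
   {"topic": "test", "partition": 1, "replicas": [2, 3]}
 ]}

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file increase-replication.json --execute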


[jira] [Comment Edited] (KAFKA-2013) benchmark test for the purgatory

2015-03-10 Thread Yasuhiro Matsuda (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355189#comment-14355189
 ] 

Yasuhiro Matsuda edited comment on KAFKA-2013 at 3/10/15 4:53 PM:
--

This benchmark test measures the enqueue rate.

Parameters
- the number of requests
- the target enqueue rate (requests per second)
  Request intervals follow an exponential distribution.
- the timeout (milliseconds)
- the distribution of request completion times (75th and 50th percentiles)
  Completion times follow a log-normal distribution.
- the data size per request

Each request has three keys, and all requests share the same set of keys.

After a run it shows the actual target rate and the actual enqueue rate.



was (Author: yasuhiro.matsuda):
This benchmark test measures the enqueue rate.

Parameters
- the number of requests
- the target enqueue rate (request per second)
  A request interval follows an exponential distribution. 
- the timeout (milliseconds)
- a distribution of time to request completion (75th percentile and 50th 
percentile)
  It follows a log-normal distribution
- a data size per request

Each request has three keys. All requests have the identical set of keys.

After a run it shows the actual target rate and the actual enqueue rate.


> benchmark test for the purgatory
> 
>
> Key: KAFKA-2013
> URL: https://issues.apache.org/jira/browse/KAFKA-2013
> Project: Kafka
>  Issue Type: Test
>  Components: purgatory
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Trivial
> Attachments: KAFKA-2013.patch
>
>
> We need a micro benchmark test for measuring the purgatory performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2013) benchmark test for the purgatory

2015-03-10 Thread Yasuhiro Matsuda (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355189#comment-14355189
 ] 

Yasuhiro Matsuda commented on KAFKA-2013:
-

This benchmark test measures the enqueue rate.

Parameters
- the number of requests
- the target enqueue rate (request per second)
  A request interval follows an exponential distribution. 
- the timeout (milliseconds)
- a distribution of time to completion (75th percentile and 50th percentile)
  It follows a log-normal distribution
- a data size per request

Each request has three keys. All requests have the identical set of keys.

After a run it shows the actual target rate and the actual enqueue rate.
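 
The two distributions above are easy to pin down concretely. A hedged Java 
sketch of the sampling; the percentile-to-parameter fit is standard log-normal 
math, not necessarily the exact code in TestPurgatoryPerformance.scala:

import java.util.Random;

public class LatencySampler {

    private static final double Z75 = 0.6744897501960817; // 75th pct of N(0,1)

    private final Random random = new Random();
    private final double mu;       // log-normal location: median = e^mu
    private final double sigma;    // log-normal scale, fit from the 75th pct
    private final double ratePerMs;

    public LatencySampler(double p50Ms, double p75Ms, double requestsPerSec) {
        this.mu = Math.log(p50Ms);                 // 50th percentile fixes mu
        this.sigma = (Math.log(p75Ms) - mu) / Z75; // 75th percentile fixes sigma
        this.ratePerMs = requestsPerSec / 1000.0;
    }

    // Inter-arrival gap of a Poisson process: exponential at the target rate.
    public double nextIntervalMs() {
        return -Math.log(1.0 - random.nextDouble()) / ratePerMs; // 1-U avoids log(0)
    }

    // Time until the request completes, log-normally distributed.
    public double nextCompletionMs() {
        return Math.exp(mu + sigma * random.nextGaussian());
    }
}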


> benchmark test for the purgatory
> 
>
> Key: KAFKA-2013
> URL: https://issues.apache.org/jira/browse/KAFKA-2013
> Project: Kafka
>  Issue Type: Test
>  Components: purgatory
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Trivial
> Attachments: KAFKA-2013.patch
>
>
> We need a micro benchmark test for measuring the purgatory performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2013) benchmark test for the purgatory

2015-03-10 Thread Yasuhiro Matsuda (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355189#comment-14355189
 ] 

Yasuhiro Matsuda edited comment on KAFKA-2013 at 3/10/15 4:53 PM:
--

This benchmark test measures the enqueue rate.

Parameters
- the number of requests
- the target enqueue rate (request per second)
  A request interval follows an exponential distribution. 
- the timeout (milliseconds)
- a distribution of time to request completion (75th percentile and 50th 
percentile)
  It follows a log-normal distribution
- a data size per request

Each request has three keys. All requests have the identical set of keys.

After a run it shows the actual target rate and the actual enqueue rate.



was (Author: yasuhiro.matsuda):
This benchmark test measures the enqueue rate.

Parameters
- the number of requests
- the target enqueue rate (request per second)
  A request interval follows an exponential distribution. 
- the timeout (milliseconds)
- a distribution of time to completion (75th percentile and 50th percentile)
  It follows a log-normal distribution
- a data size per request

Each request has three keys. All requests have the identical set of keys.

After a run it shows the actual target rate and the actual enqueue rate.


> benchmark test for the purgatory
> 
>
> Key: KAFKA-2013
> URL: https://issues.apache.org/jira/browse/KAFKA-2013
> Project: Kafka
>  Issue Type: Test
>  Components: purgatory
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Trivial
> Attachments: KAFKA-2013.patch
>
>
> We need a micro benchmark test for measuring the purgatory performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2013) benchmark test for the purgatory

2015-03-10 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-2013:

Status: Patch Available  (was: Open)

> benchmark test for the purgatory
> 
>
> Key: KAFKA-2013
> URL: https://issues.apache.org/jira/browse/KAFKA-2013
> Project: Kafka
>  Issue Type: Test
>  Components: purgatory
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Trivial
> Attachments: KAFKA-2013.patch
>
>
> We need a micro benchmark test for measuring the purgatory performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2013) benchmark test for the purgatory

2015-03-10 Thread Yasuhiro Matsuda (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355174#comment-14355174
 ] 

Yasuhiro Matsuda commented on KAFKA-2013:
-

Created reviewboard https://reviews.apache.org/r/31893/diff/
 against branch origin/trunk

> benchmark test for the purgatory
> 
>
> Key: KAFKA-2013
> URL: https://issues.apache.org/jira/browse/KAFKA-2013
> Project: Kafka
>  Issue Type: Test
>  Components: purgatory
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Trivial
> Attachments: KAFKA-2013.patch
>
>
> We need a micro benchmark test for measuring the purgatory performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2013) benchmark test for the purgatory

2015-03-10 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-2013:

Attachment: KAFKA-2013.patch

> benchmark test for the purgatory
> 
>
> Key: KAFKA-2013
> URL: https://issues.apache.org/jira/browse/KAFKA-2013
> Project: Kafka
>  Issue Type: Test
>  Components: purgatory
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>Priority: Trivial
> Attachments: KAFKA-2013.patch
>
>
> We need a micro benchmark test for measuring the purgatory performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 31893: Patch for KAFKA-2013

2015-03-10 Thread Yasuhiro Matsuda

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31893/
---

Review request for kafka.


Bugs: KAFKA-2013
https://issues.apache.org/jira/browse/KAFKA-2013


Repository: kafka


Description
---

purgatory micro benchmark


Diffs
-

  core/src/test/scala/other/kafka/TestPurgatoryPerformance.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/31893/diff/


Testing
---


Thanks,

Yasuhiro Matsuda



[jira] [Commented] (KAFKA-1461) Replica fetcher thread does not implement any back-off behavior

2015-03-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355161#comment-14355161
 ] 

Guozhang Wang commented on KAFKA-1461:
--

[~junrao] Could you elaborate a bit on "different partitions become active at 
slightly different times and the fetcher doesn't actually back off"? I am not 
sure I understand why the fetcher does not actually back off.

I agree that upon an IOException thrown in SimpleConsumer.fetch, we should back 
off the thread as a whole for the common case #1 you mentioned above; but at 
the same time we should still consider backing off for partition-specific error 
codes, since otherwise the broker logs will be polluted with the error messages 
from the continuous retries we have seen before. Do you agree?
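 
To make the two levels concrete, here is a hedged sketch of the loop structure 
under discussion; every name in it is illustrative, not Kafka's actual 
AbstractFetcherThread code:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public abstract class BackoffFetcher implements Runnable {

    private final long fetchBackoffMs;
    // partition -> wake-up time for individually delayed partitions
    private final Map<Integer, Long> delayedUntil = new HashMap<Integer, Long>();

    protected BackoffFetcher(long fetchBackoffMs) {
        this.fetchBackoffMs = fetchBackoffMs;
    }

    // Supplied by a concrete fetcher: one fetch, mapping partition -> had-error.
    protected abstract Map<Integer, Boolean> fetchOnce() throws IOException;

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                for (Map.Entry<Integer, Boolean> result : fetchOnce().entrySet()) {
                    if (result.getValue()) // partition-level error code:
                        delayedUntil.put(result.getKey(), // back off this partition only
                                System.currentTimeMillis() + fetchBackoffMs);
                }
            } catch (IOException e) {
                // broker-level failure (case #1): back off the whole thread
                sleepQuietly(fetchBackoffMs);
            }
        }
    }

    // A concrete fetcher would skip delayed partitions when building a request.
    protected boolean isDelayed(int partition) {
        Long until = delayedUntil.get(partition);
        return until != null && until > System.currentTimeMillis();
    }

    private static void sleepQuietly(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}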

> Replica fetcher thread does not implement any back-off behavior
> ---
>
> Key: KAFKA-1461
> URL: https://issues.apache.org/jira/browse/KAFKA-1461
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.8.1.1
>Reporter: Sam Meder
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1461.patch
>
>
> The current replica fetcher thread will retry in a tight loop if any error 
> occurs during the fetch call. For example, we've seen cases where the fetch 
> continuously throws a connection refused exception leading to several replica 
> fetcher threads that spin in a pretty tight loop.
> To a much lesser degree this is also an issue in the consumer fetcher thread, 
> although the fact that erroring partitions are removed so a leader can be 
> re-discovered helps some.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-527) Compression support does numerous byte copies

2015-03-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14355133#comment-14355133
 ] 

Guozhang Wang commented on KAFKA-527:
-

Hi Yasuhiro,

I thought that for compressed writes, the linked-list buffers in 
BufferingOutputStream still need to be copied to a newly allocated buffer (in 
lines 54/55 of ByteBufferMessageSet), whereas MemoryRecords appends messages to 
the compressed stream in place, so no extra copy is required at the end of the 
writes. But I may have misunderstood Scala's function-parameter syntax, so 
please let me know if I did.

As for the migration plan, I agree that the ByteBufferMessageSet replacement 
will not come in the near future, and we can definitely commit the patches now, 
as compression/decompression has been a pain for us.
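 
The copy being discussed is roughly the following. A self-contained Java 
illustration (not Kafka's actual code) of appending size-prefixed messages 
straight through the compressed stream, so only one final materialization 
happens when the buffer is handed off:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.zip.GZIPOutputStream;

public class InPlaceCompressDemo {

    public static ByteBuffer compress(byte[][] messages) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream compressed =
                new DataOutputStream(new GZIPOutputStream(out));
        for (byte[] message : messages) {
            compressed.writeInt(message.length); // 4-byte size prefix, in place
            compressed.write(message);           // no per-message staging buffer
        }
        compressed.close(); // flushes the gzip trailer
        return ByteBuffer.wrap(out.toByteArray()); // the single copy at the end
    }
}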

> Compression support does numerous byte copies
> -
>
> Key: KAFKA-527
> URL: https://issues.apache.org/jira/browse/KAFKA-527
> Project: Kafka
>  Issue Type: Bug
>  Components: compression
>Reporter: Jay Kreps
>Assignee: Yasuhiro Matsuda
>Priority: Critical
> Attachments: KAFKA-527.message-copy.history, KAFKA-527.patch, 
> java.hprof.no-compression.txt, java.hprof.snappy.text
>
>
> The data path for compressing or decompressing messages is extremely 
> inefficient. We do something like 7 (?) complete copies of the data, often 
> for simple things like adding a 4 byte size to the front. I am not sure how 
> this went by unnoticed.
> This is likely the root cause of the performance issues we saw in doing bulk 
> recompression of data in mirror maker.
> The mismatch between the InputStream and OutputStream interfaces and the 
> Message/MessageSet interfaces which are based on byte buffers is the cause of 
> many of these.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




Re: [kafka-clients] Re: [VOTE] 0.8.2.1 Candidate 2

2015-03-10 Thread Joe Stein
Thanks Jun for getting this release out the door, and everyone that
contributed to the work in 0.8.2.1. Awesome!

~ Joe Stein
- - - - - - - - - - - - - - - - -

  http://www.stealth.ly
- - - - - - - - - - - - - - - - -

On Mon, Mar 9, 2015 at 2:12 PM, Jun Rao  wrote:

> The following are the results of the votes.
>
> +1 binding = 3 votes
> +1 non-binding = 2 votes
> -1 = 0 votes
> 0 = 0 votes
>
> The vote passes.
>
> I will release artifacts to maven central, update the dist svn and download
> site. Will send out an announce after that.
>
> Thanks everyone that contributed to the work in 0.8.2.1!
>
> Jun
>
> On Thu, Feb 26, 2015 at 2:59 PM, Jun Rao  wrote:
>
>> This is the second candidate for release of Apache Kafka 0.8.2.1. This
>> fixes 4 critical issues in 0.8.2.0.
>>
>> Release Notes for the 0.8.2.1 release
>>
>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate2/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Monday, Mar 2, 3pm PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS in addition to the md5, sha1
>> and sha2 (SHA256) checksums.
>>
>> * Release artifacts to be voted upon (source and binary):
>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate2/
>>
>> * Maven artifacts to be voted upon prior to release:
>> https://repository.apache.org/content/groups/staging/
>>
>> * scala-doc
>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate2/scaladoc/
>>
>> * java-doc
>> https://people.apache.org/~junrao/kafka-0.8.2.1-candidate2/javadoc/
>>
>> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.1 tag
>>
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=bd1bfb63ec73c10d08432ac893a23f28281ea021
>> (git commit ee1267b127f3081db491fa1bf9a287084c324e36)
>>
>> /***
>>
>> Thanks,
>>
>> Jun
>>
>>
>  --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at http://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAFc58G9w_bhrPq1wqEJe-R7-_DecvMNCTuOuLtrA%3DUzXXs%2Bt%3Dg%40mail.gmail.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>


Re: Can I be added as a contributor?

2015-03-10 Thread Grant Henke
Thanks Joe. I added a Confluence account.

On Tue, Mar 10, 2015 at 12:04 AM, Joe Stein  wrote:

> Grant, I added you.
>
> Grayson and Grant, you should both please set up Confluence accounts as
> well, and we can grant perms in Confluence for your usernames too.
>
> ~ Joe Stein
> - - - - - - - - - - - - - - - - -
>
>   http://www.stealth.ly
> - - - - - - - - - - - - - - - - -
>
> On Tue, Mar 10, 2015 at 12:54 AM, Grant Henke  wrote:
>
> > I am also starting to work with the Kafka codebase with plans to
> contribute
> > more significantly in the near future. Could I also be added to the
> > contributor list so that I can assign myself tickets?
> >
> > Thank you,
> > Grant
> >
> > On Mon, Mar 9, 2015 at 1:39 PM, Guozhang Wang 
> wrote:
> >
> > > Added grayson.c...@gmail.com to the list.
> > >
> > > On Mon, Mar 9, 2015 at 10:41 AM, Grayson Chao
>  > >
> > > wrote:
> > >
> > > > Hello Kafka devs,
> > > >
> > > > I'm working on the ops side of Kafka at LinkedIn (embedded SRE on the
> > > > Kafka team) and would like to start familiarizing myself with the
> > > codebase
> > > > with a view to eventually making substantial contributions. Could you
> > > > please add me as a contributor to the Kafka JIRA so that I can assign
> > > > myself a newbie ticket?
> > > >
> > > > Thanks!
> > > > Grayson
> > > > --
> > > > Grayson Chao
> > > > Data Infra Streaming SRE
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Solutions Consultant | Cloudera
> > ghe...@cloudera.com | 920-980-8979
> > twitter.com/ghenke  |
> > linkedin.com/in/granthenke
> >
>



-- 
Grant Henke
Solutions Consultant | Cloudera
ghe...@cloudera.com | 920-980-8979
twitter.com/ghenke  | linkedin.com/in/granthenke