[jira] [Commented] (KAFKA-1581) Log cleaner should have an option to ignore messages without keys

2015-04-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501228#comment-14501228
 ] 

Manikumar Reddy commented on KAFKA-1581:


[~jjkoshy]  I think this can be closed now. KAFKA-1755 fixed this issue.

> Log cleaner should have an option to ignore messages without keys
> -
>
> Key: KAFKA-1581
> URL: https://issues.apache.org/jira/browse/KAFKA-1581
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
>
> Right now, there is a strict requirement that compacted topics contain only 
> messages with keys. This makes sense, but the issue with a hard requirement 
> is that if it fails, the cleaner quits. We should probably allow ignoring 
> these messages (with a warning). Alternatively, we can catch this scenario 
> (instead of enforcing the hard requirement) and just skip compaction for that partition.
> This came up because I saw an invalid message (compressed and without a key) 
> in the offsets topic which broke both log compaction and the offset load 
> process. I filed KAFKA-1580 to prevent that from happening in the first place 
> but KAFKA-1580 is only for internal topics. In the general case (compacted 
> non-internal topics) we would not want the cleaners to exit permanently due 
> to an invalid (key-less) message in one of the partitions since that would 
> prevent compaction for other partitions as well.
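The "skip with a warning" option discussed above could look roughly like the following. This is a minimal sketch in plain Java, not Kafka's cleaner code; the names (CleanerRecord, retainKeyed) are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class CleanerSketch {
    static final class CleanerRecord {
        final byte[] key;   // null models a message without a key
        final byte[] value;
        CleanerRecord(byte[] key, byte[] value) { this.key = key; this.value = value; }
    }

    /** Returns only the keyed records; key-less ones are skipped with a warning. */
    static List<CleanerRecord> retainKeyed(List<CleanerRecord> records) {
        List<CleanerRecord> keyed = new ArrayList<>();
        for (CleanerRecord r : records) {
            if (r.key == null) {
                // instead of throwing and killing the cleaner thread, warn and move on
                System.err.println("WARN: skipping key-less record during compaction");
            } else {
                keyed.add(r);
            }
        }
        return keyed;
    }
}
```

The point of the sketch is only that an invalid record in one partition no longer stops compaction for every other partition.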



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1758) corrupt recovery file prevents startup

2015-04-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501295#comment-14501295
 ] 

Manikumar Reddy commented on KAFKA-1758:


[~jkreps] Can I get a review for this simple patch?

> corrupt recovery file prevents startup
> --
>
> Key: KAFKA-1758
> URL: https://issues.apache.org/jira/browse/KAFKA-1758
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Jason Rosenberg
>Assignee: Manikumar Reddy
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: KAFKA-1758.patch
>
>
> Hi,
> We recently had a kafka node go down suddenly. When it came back up, it 
> apparently had a corrupt recovery file, and refused to startup:
> {code}
> 2014-11-06 08:17:19,299  WARN [main] server.KafkaServer - Error starting up 
> KafkaServer
> java.lang.NumberFormatException: For input string: 
> "^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
> ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:481)
> at java.lang.Integer.parseInt(Integer.java:527)
> at 
> scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
> at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
> at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:76)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:106)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
> at kafka.log.LogManager.loadLogs(LogManager.scala:105)
> at kafka.log.LogManager.<init>(LogManager.scala:57)
> at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
> {code}
> And the app is under a monitor (so it was repeatedly restarting and failing 
> with this error for several minutes before we got to it)…
> We moved the ‘recovery-point-offset-checkpoint’ file out of the way, and it 
> then restarted cleanly (but of course re-synced all its data from replicas, 
> so we had no data loss).
> Anyway, I’m wondering if that’s the expected behavior? Or should it not 
> declare it corrupt and then proceed automatically to an unclean restart?
> Should this NumberFormatException be handled a bit more gracefully?
> We saved the corrupt file if it’s worth inspecting (although I doubt it will 
> be useful!)….
> The corrupt files appeared to be all zeroes.
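A minimal sketch of the "handle it more gracefully" idea raised above: catch the NumberFormatException when reading the checkpoint and treat the file as absent, falling back to a full recovery instead of refusing to start. The method name is illustrative, not Kafka's actual OffsetCheckpoint API.

```java
import java.util.Optional;

public class CheckpointSketch {
    /** Parses the checkpoint version line; empty means "corrupt, recover from scratch". */
    static Optional<Integer> parseVersion(String firstLine) {
        try {
            return Optional.of(Integer.parseInt(firstLine.trim()));
        } catch (NumberFormatException e) {
            // an all-zeroes (or otherwise unparsable) file is treated as missing
            System.err.println("WARN: corrupt recovery checkpoint, ignoring");
            return Optional.empty();
        }
    }
}
```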





[jira] [Updated] (KAFKA-2088) kafka-console-consumer.sh should not create zookeeper path when no brokers found and chroot was set in zookeeper.connect

2015-04-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2088:
---
   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

Thanks for the explanation. That makes sense. +1 on the patch and committed to 
trunk.

> kafka-console-consumer.sh should not create zookeeper path when no brokers 
> found and chroot was set in zookeeper.connect
> 
>
> Key: KAFKA-2088
> URL: https://issues.apache.org/jira/browse/KAFKA-2088
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.1
>Reporter: Zhiqiang He
>Assignee: Zhiqiang He
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: kafka-2088.1.patch
>
>
> 1. Set server.properties:
> server.properties:
> zookeeper.connect = 
> 192.168.0.10:2181,192.168.0.10:2181,192.168.0.10:2181/kafka
> 2. Default zookeeper paths:
> [zk: 192.168.0.10:2181(CONNECTED) 10] ls /
> [zookeeper, kafka, storm]
> 3. Start the console consumer with a non-existent topic and a zookeeper 
> address without the chroot:
> [root@stream client_0402]# kafka-console-consumer.sh --zookeeper 
> 192.168.0.10:2181 --topic test --from-beginning
> [2015-04-03 18:15:28,599] WARN 
> [console-consumer-63060_stream-1428056127990-d35ca648], no brokers found when 
> trying to rebalance. (kafka.consumer.ZookeeperConsumerConnector)
> 4. Then the "/consumers" and "/brokers" paths were created in zookeeper:
> [zk: 192.168.0.10:2181(CONNECTED) 2] ls /
> [zookeeper, consumers, kafka, storm, brokers]
> So it is a bug: the consumer should not create the "/consumers" and "/brokers" paths.





Re: Review Request 31369: Patch for KAFKA-1982

2015-04-18 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31369/#review80607
---


There seems to be a bunch of checkstyle failures. Could you address those?

./gradlew -Dtest.single=SerializationTest cleanTest client:test
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:17:8:
 Unused import - java.nio.ByteBuffer.
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:27:
 if at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:28:
 if child at indentation level 8 not at correct indentation, 12
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:29:
 if at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:30:
 if child at indentation level 8 not at correct indentation, 12
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:32:
 if rcurly at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:34:
 method def child at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:35:
 for at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:36:
 for child at indentation level 8 not at correct indentation, 12
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:37:
 for child at indentation level 8 not at correct indentation, 12
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:37:15:
 Unnecessary parentheses around assignment right-hand side.
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:38:
 for rcurly at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerDeserializer.java:39:
 method def child at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java:24:
 if at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java:25:
 if child at indentation level 8 not at correct indentation, 12
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java:27:
 method def child at indentation level 6 not at correct indentation, 8
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java:28:
 array initialization child at indentation level 8 not at correct indentation, 
10
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java:29:
 array initialization child at indentation level 8 not at correct indentation, 
10
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java:30:
 array initialization child at indentation level 8 not at correct indentation, 
10
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java:31:
 method call child at indentation level 8 not at correct indentation, 10
[ant:checkstyle] 
/Users/junrao/intellij/kafka/clients/src/main/java/org/apache/kafka/common/serialization/IntegerSerializer.java:35:
 method def modifier at indentation level 2 not at correct indentation, 4
:clients:checkstyleMain FAILED
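The failures above are all 4-space indentation violations, plus one unused import and one unnecessary-parentheses warning. A small sketch, not the actual Kafka IntegerDeserializer, of the layout the checks expect: class members at column 4, statements at 8, nested bodies at 12, and no redundant parentheses on an assignment's right-hand side.

```java
public class IndentSketch {
    // method def modifier at 4, as the rules above require
    public static int toInt(byte[] data) {
        if (data == null) {                 // if at 8
            return 0;                       // if child at 12
        }
        int value = 0;
        for (byte b : data) {               // for at 8
            value = value << 8 | b & 0xFF;  // for child at 12; no extra parens on the RHS
        }
        return value;
    }
}
```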

- Jun Rao


On April 18, 2015, midnight, Ashish Singh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/31369/
> ---
> 
> (Updated April 18, 2015, midnight)
> 
> 
> Review request for kafka.
> 

[jira] [Updated] (KAFKA-2036) Consumer and broker have different networks

2015-04-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2036:
---
Status: In Progress  (was: Patch Available)

Thanks for the patch. The new consumer/producer client will be using the 
network code in org.apache.kafka.common.network. Does it have the same problem? 
It's more important to fix the issue there since BlockingChannel will go 
away in the future.

> Consumer and broker have different networks
> ---
>
> Key: KAFKA-2036
> URL: https://issues.apache.org/jira/browse/KAFKA-2036
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.8.2.1
> Environment: oracle java {7,8}, ipv6 only consumer, ipv4 + ipv6 broker
>Reporter: Arsenii Krasikov
>Assignee: Jun Rao
> Attachments: patch
>
>
> If the broker is connected to several networks (for example ipv6 and ipv4) and 
> not all of them are reachable by the consumer, then 
> {{kafka.network.BlockingChannel}} gives up after the first 
> "Network is unreachable" error, without trying the remaining networks.
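One direction for a fix, sketched under stated assumptions: try every address the hostname resolves to instead of stopping at the first failure. The connect attempt is abstracted here as a predicate so the sketch stays self-contained; real code would attempt a socket connect per address returned by InetAddress.getAllByName. The names (connectAny) are hypothetical.

```java
import java.util.List;
import java.util.function.Predicate;

public class ConnectSketch {
    /** Returns the first address that connects, or null if none do. */
    static String connectAny(List<String> addresses, Predicate<String> tryConnect) {
        for (String addr : addresses) {
            if (tryConnect.test(addr)) {
                return addr;    // success: stop trying further addresses
            }
            System.err.println("WARN: unreachable, trying next address: " + addr);
        }
        return null;            // every resolved address failed
    }
}
```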





[jira] [Updated] (KAFKA-1994) Evaluate performance effect of chroot check on Topic creation

2015-04-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1994:
---
   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

Thanks for the latest patch. +1 and committed to trunk.

> Evaluate performance effect of chroot check on Topic creation
> -
>
> Key: KAFKA-1994
> URL: https://issues.apache.org/jira/browse/KAFKA-1994
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
> Fix For: 0.8.3
>
> Attachments: KAFKA-1994.patch, KAFKA-1994_2015-03-03_18:19:45.patch
>
>
> KAFKA-1664 adds check for chroot while creating a node in ZK. ZkPath checks 
> if namespace exists before trying to create a path in ZK. This raises a 
> concern that checking namespace for each path creation might be unnecessary 
> and can potentially make creations expensive.
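One hedged way to address the concern in the description: check the namespace once and cache the result, so only the first path creation pays the ZooKeeper round trip. The ZK call is abstracted as a BooleanSupplier; the class and method names are hypothetical, not the actual ZkPath code.

```java
import java.util.function.BooleanSupplier;

public class ChrootCheckSketch {
    private Boolean namespaceExists;    // null until the first check
    private int zkCalls;                // counts round trips, for the demo

    boolean ensureNamespace(BooleanSupplier checkInZk) {
        if (namespaceExists == null) {  // only the first call hits ZK
            zkCalls++;
            namespaceExists = checkInZk.getAsBoolean();
        }
        return namespaceExists;
    }

    int zkCalls() { return zkCalls; }
}
```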





Jenkins build is back to normal : Kafka-trunk #466

2015-04-18 Thread Apache Jenkins Server
See 



Re: Review Request 33088: add heartbeat to coordinator

2015-04-18 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33088/
---

(Updated April 18, 2015, 5:16 p.m.)


Review request for kafka.


Bugs: KAFKA-1334
https://issues.apache.org/jira/browse/KAFKA-1334


Repository: kafka


Description (updated)
---

add heartbeat to coordinator

todo:
- see how it performs under real load
- add error code handling on the consumer side
- implement the partition assignors


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 e55ab11df4db0b0084f841a74cbcf819caf780d5 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
36aa412404ff1458c7bef0feecaaa8bc45bed9c7 
  core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
456b602245e111880e1b8b361319cabff38ee0e9 
  core/src/main/scala/kafka/coordinator/ConsumerRegistry.scala 
2f5797064d4131ecfc9d2750d9345a9fa3972a9a 
  core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
6a6bc7bc4ceb648b67332e789c2c33de88e4cd86 
  core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
df60cbc35d09937b4e9c737c67229889c69d8698 
  core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
8defa2e41c92f1ebe255177679d275c70dae5b3e 
  core/src/main/scala/kafka/coordinator/Group.scala PRE-CREATION 
  core/src/main/scala/kafka/coordinator/GroupRegistry.scala 
94ef5829b3a616c90018af1db7627bfe42e259e5 
  core/src/main/scala/kafka/coordinator/HeartbeatBucket.scala 
821e26e97eaa97b5f4520474fff0fedbf406c82a 
  core/src/main/scala/kafka/coordinator/PartitionAssignor.scala PRE-CREATION 
  core/src/main/scala/kafka/server/DelayedOperationKey.scala 
b673e43b0ba401b2e22f27aef550e3ab0ef4323c 
  core/src/main/scala/kafka/server/KafkaApis.scala 
b4004aa3a1456d337199aa1245fb0ae61f6add46 
  core/src/main/scala/kafka/server/KafkaServer.scala 
c63f4ba9d622817ea8636d4e6135fba917ce085a 
  core/src/main/scala/kafka/server/OffsetManager.scala 
420e2c3535e722c503f13d093849469983f6f08d 
  core/src/test/scala/unit/kafka/coordinator/GroupTest.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/33088/diff/


Testing
---


Thanks,

Onur Karaman



[jira] [Commented] (KAFKA-1334) Add failure detection capability to the coordinator / consumer

2015-04-18 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501479#comment-14501479
 ] 

Onur Karaman commented on KAFKA-1334:
-

Updated reviewboard https://reviews.apache.org/r/33088/diff/
 against branch origin/trunk

> Add failure detection capability to the coordinator / consumer
> --
>
> Key: KAFKA-1334
> URL: https://issues.apache.org/jira/browse/KAFKA-1334
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0
>Reporter: Neha Narkhede
>Assignee: Onur Karaman
> Attachments: KAFKA-1334.patch, KAFKA-1334_2015-04-11_22:47:27.patch, 
> KAFKA-1334_2015-04-13_11:55:06.patch, KAFKA-1334_2015-04-13_11:58:53.patch, 
> KAFKA-1334_2015-04-18_10:16:23.patch
>
>
> 1) Add coordinator discovery and failure detection to the consumer.
> 2) Add failure detection capability to the coordinator when group management 
> is used.
> This will not include any rebalancing logic, just the logic to detect 
> consumer failures using session.timeout.ms. 
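The session.timeout.ms rule described above reduces to a simple deadline check; a hypothetical sketch, not the coordinator's actual code:

```java
public class SessionSketch {
    /** A consumer is alive while its last heartbeat is within the session timeout. */
    static boolean isAlive(long lastHeartbeatMs, long nowMs, long sessionTimeoutMs) {
        return nowMs - lastHeartbeatMs <= sessionTimeoutMs;
    }
}
```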





[jira] [Updated] (KAFKA-1334) Add failure detection capability to the coordinator / consumer

2015-04-18 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-1334:

Attachment: KAFKA-1334_2015-04-18_10:16:23.patch

> Add failure detection capability to the coordinator / consumer
> --
>
> Key: KAFKA-1334
> URL: https://issues.apache.org/jira/browse/KAFKA-1334
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0
>Reporter: Neha Narkhede
>Assignee: Onur Karaman
> Attachments: KAFKA-1334.patch, KAFKA-1334_2015-04-11_22:47:27.patch, 
> KAFKA-1334_2015-04-13_11:55:06.patch, KAFKA-1334_2015-04-13_11:58:53.patch, 
> KAFKA-1334_2015-04-18_10:16:23.patch
>
>
> 1) Add coordinator discovery and failure detection to the consumer.
> 2) Add failure detection capability to the coordinator when group management 
> is used.
> This will not include any rebalancing logic, just the logic to detect 
> consumer failures using session.timeout.ms. 





Build failed in Jenkins: Kafka-trunk #467

2015-04-18 Thread Apache Jenkins Server
See 

Changes:

[junrao] kafka-1994; Evaluate performance effect of chroot check on Topic 
creation; patched by Ashish Singh; reviewed by Gwen Shapira and Jun Rao

--
[...truncated 1551 lines...]
kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNoResponse PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.ConsumerTest > testSimpleConsumption PASSED

kafka.api.ConsumerTest > testAutoOffsetReset PASSED

kafka.api.ConsumerTest > testSeek PASSED

kafka.api.ConsumerTest > testGroupConsumption PASSED

kafka.api.ConsumerTest > testPositionAndCommit PASSED

kafka.api.ConsumerTest > testPartitionsFor PASSED

kafka.api.ConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures FAILED
java.lang.AssertionError: expected:<1000> but was:<957>
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.failNotEquals(Assert.java:689)
at org.junit.Assert.assertEquals(Assert.java:127)
at org.junit.Assert.assertEquals(Assert.java:514)
at org.junit.Assert.assertEquals(Assert.java:498)
at 
kafka.api.ConsumerBounceTest.seekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:117)
at 
kafka.api.ConsumerBounceTest.testSeekAndCommitWithBrokerFailures(ConsumerBounceTest.scala:98)

kafka.api.RequestResponseSerializationTest > 
testSerializationAndDeserialization PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.cluster.BrokerEndPointTest > testSerDe PASSED

kafka.cluster.BrokerEndPointTest > testHashAndEquals PASSED

kafka.cluster.BrokerEndPointTest > testFromJSON PASSED

kafka.cluster.BrokerEndPointTest > testFromOldJSON PASSED

kafka.cluster.BrokerEndPointTest > testBrokerEndpointFromURI PASSED

kafka.cluster.BrokerEndPointTest > testEndpointFromURI PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMet

Re: Review Request 33088: add heartbeat to coordinator

2015-04-18 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33088/
---

(Updated April 18, 2015, 7:16 p.m.)


Review request for kafka.


Bugs: KAFKA-1334
https://issues.apache.org/jira/browse/KAFKA-1334


Repository: kafka


Description
---

add heartbeat to coordinator

todo:
- see how it performs under real load
- add error code handling on the consumer side
- implement the partition assignors


Diffs (updated)
-

  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 e55ab11df4db0b0084f841a74cbcf819caf780d5 
  clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
36aa412404ff1458c7bef0feecaaa8bc45bed9c7 
  core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
456b602245e111880e1b8b361319cabff38ee0e9 
  core/src/main/scala/kafka/coordinator/ConsumerRegistry.scala 
2f5797064d4131ecfc9d2750d9345a9fa3972a9a 
  core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
6a6bc7bc4ceb648b67332e789c2c33de88e4cd86 
  core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
df60cbc35d09937b4e9c737c67229889c69d8698 
  core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
8defa2e41c92f1ebe255177679d275c70dae5b3e 
  core/src/main/scala/kafka/coordinator/Group.scala PRE-CREATION 
  core/src/main/scala/kafka/coordinator/GroupRegistry.scala 
94ef5829b3a616c90018af1db7627bfe42e259e5 
  core/src/main/scala/kafka/coordinator/HeartbeatBucket.scala 
821e26e97eaa97b5f4520474fff0fedbf406c82a 
  core/src/main/scala/kafka/coordinator/PartitionAssignor.scala PRE-CREATION 
  core/src/main/scala/kafka/server/DelayedOperationKey.scala 
b673e43b0ba401b2e22f27aef550e3ab0ef4323c 
  core/src/main/scala/kafka/server/KafkaApis.scala 
b4004aa3a1456d337199aa1245fb0ae61f6add46 
  core/src/main/scala/kafka/server/KafkaServer.scala 
c63f4ba9d622817ea8636d4e6135fba917ce085a 
  core/src/main/scala/kafka/server/OffsetManager.scala 
18680ce100f10035175cc0263ba7787ab0f6a17a 
  core/src/test/scala/unit/kafka/coordinator/GroupTest.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/33088/diff/


Testing
---


Thanks,

Onur Karaman



[jira] [Commented] (KAFKA-1334) Add failure detection capability to the coordinator / consumer

2015-04-18 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501538#comment-14501538
 ] 

Onur Karaman commented on KAFKA-1334:
-

Updated reviewboard https://reviews.apache.org/r/33088/diff/
 against branch origin/trunk

> Add failure detection capability to the coordinator / consumer
> --
>
> Key: KAFKA-1334
> URL: https://issues.apache.org/jira/browse/KAFKA-1334
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0
>Reporter: Neha Narkhede
>Assignee: Onur Karaman
> Attachments: KAFKA-1334.patch, KAFKA-1334_2015-04-11_22:47:27.patch, 
> KAFKA-1334_2015-04-13_11:55:06.patch, KAFKA-1334_2015-04-13_11:58:53.patch, 
> KAFKA-1334_2015-04-18_10:16:23.patch, KAFKA-1334_2015-04-18_12:16:39.patch
>
>
> 1) Add coordinator discovery and failure detection to the consumer.
> 2) Add failure detection capability to the coordinator when group management 
> is used.
> This will not include any rebalancing logic, just the logic to detect 
> consumer failures using session.timeout.ms. 





[jira] [Updated] (KAFKA-1334) Add failure detection capability to the coordinator / consumer

2015-04-18 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-1334:

Attachment: KAFKA-1334_2015-04-18_12:16:39.patch

> Add failure detection capability to the coordinator / consumer
> --
>
> Key: KAFKA-1334
> URL: https://issues.apache.org/jira/browse/KAFKA-1334
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0
>Reporter: Neha Narkhede
>Assignee: Onur Karaman
> Attachments: KAFKA-1334.patch, KAFKA-1334_2015-04-11_22:47:27.patch, 
> KAFKA-1334_2015-04-13_11:55:06.patch, KAFKA-1334_2015-04-13_11:58:53.patch, 
> KAFKA-1334_2015-04-18_10:16:23.patch, KAFKA-1334_2015-04-18_12:16:39.patch
>
>
> 1) Add coordinator discovery and failure detection to the consumer.
> 2) Add failure detection capability to the coordinator when group management 
> is used.
> This will not include any rebalancing logic, just the logic to detect 
> consumer failures using session.timeout.ms. 





Re: Review Request 33088: add heartbeat to coordinator

2015-04-18 Thread Jay Kreps

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33088/#review80619
---



core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala


Is it possible to get rid of this kind of ad hoc synchronization? You seem 
to be passing around the group object between parts of the code to use as a 
lock. It is virtually impossible to keep this kind of usage working over time. 
If there is any way to put all the synchronized methods in one class and pass 
that around, that tends to stay working much longer...
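The structure being suggested could be sketched like this: the group's mutable state and its lock live inside one class whose public methods are synchronized, so callers never synchronize on the object themselves. The names here are illustrative, not the actual coordinator classes.

```java
import java.util.HashSet;
import java.util.Set;

public class GroupSketch {
    // all mutable state is private; the lock (this) never escapes to callers
    private final Set<String> members = new HashSet<>();

    public synchronized boolean add(String consumerId) { return members.add(consumerId); }
    public synchronized boolean remove(String consumerId) { return members.remove(consumerId); }
    public synchronized int size() { return members.size(); }
}
```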


- Jay Kreps


On April 18, 2015, 7:16 p.m., Onur Karaman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33088/
> ---
> 
> (Updated April 18, 2015, 7:16 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1334
> https://issues.apache.org/jira/browse/KAFKA-1334
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> add heartbeat to coordinator
> 
> todo:
> - see how it performs under real load
> - add error code handling on the consumer side
> - implement the partition assignors
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
>  e55ab11df4db0b0084f841a74cbcf819caf780d5 
>   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
> 36aa412404ff1458c7bef0feecaaa8bc45bed9c7 
>   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
> 456b602245e111880e1b8b361319cabff38ee0e9 
>   core/src/main/scala/kafka/coordinator/ConsumerRegistry.scala 
> 2f5797064d4131ecfc9d2750d9345a9fa3972a9a 
>   core/src/main/scala/kafka/coordinator/DelayedHeartbeat.scala 
> 6a6bc7bc4ceb648b67332e789c2c33de88e4cd86 
>   core/src/main/scala/kafka/coordinator/DelayedJoinGroup.scala 
> df60cbc35d09937b4e9c737c67229889c69d8698 
>   core/src/main/scala/kafka/coordinator/DelayedRebalance.scala 
> 8defa2e41c92f1ebe255177679d275c70dae5b3e 
>   core/src/main/scala/kafka/coordinator/Group.scala PRE-CREATION 
>   core/src/main/scala/kafka/coordinator/GroupRegistry.scala 
> 94ef5829b3a616c90018af1db7627bfe42e259e5 
>   core/src/main/scala/kafka/coordinator/HeartbeatBucket.scala 
> 821e26e97eaa97b5f4520474fff0fedbf406c82a 
>   core/src/main/scala/kafka/coordinator/PartitionAssignor.scala PRE-CREATION 
>   core/src/main/scala/kafka/server/DelayedOperationKey.scala 
> b673e43b0ba401b2e22f27aef550e3ab0ef4323c 
>   core/src/main/scala/kafka/server/KafkaApis.scala 
> b4004aa3a1456d337199aa1245fb0ae61f6add46 
>   core/src/main/scala/kafka/server/KafkaServer.scala 
> c63f4ba9d622817ea8636d4e6135fba917ce085a 
>   core/src/main/scala/kafka/server/OffsetManager.scala 
> 18680ce100f10035175cc0263ba7787ab0f6a17a 
>   core/src/test/scala/unit/kafka/coordinator/GroupTest.scala PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33088/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Onur Karaman
> 
>



[jira] [Created] (KAFKA-2132) Move Log4J appender to clients module

2015-04-18 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2132:
---

 Summary: Move Log4J appender to clients module
 Key: KAFKA-2132
 URL: https://issues.apache.org/jira/browse/KAFKA-2132
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Gwen Shapira


The Log4j appender is just a producer.
Since we have a new producer in the clients module, there is no need to keep the Log4J 
appender in "core" and force people to package all of Kafka with their apps.

Let's move the Log4jAppender to the clients module.





[jira] [Updated] (KAFKA-1554) Corrupt index found on clean startup

2015-04-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1554:
---
Status: In Progress  (was: Patch Available)

> Corrupt index found on clean startup
> 
>
> Key: KAFKA-1554
> URL: https://issues.apache.org/jira/browse/KAFKA-1554
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
> Environment: ubuntu 12.04, oracle jdk 1.7
>Reporter: Alexis Midon
>Assignee: Mayuresh Gharat
>Priority: Critical
> Fix For: 0.9.0
>
> Attachments: KAFKA-1554.patch
>
>
> On a clean start up, corrupted index files are found.
> After investigations, it appears that some pre-allocated index files are not 
> "compacted" correctly and the end of the file is full of zeroes.
> As a result, on start up, the last relative offset is zero which yields an 
> offset equal to the base offset.
> The workaround is to delete all index files of size 10MB (the size of the 
> pre-allocated files), and restart. Index files will be re-created.
> {code}
> find $your_data_directory -size 10485760c -name '*.index' # -delete
> {code}
> This issue might be related/similar to 
> https://issues.apache.org/jira/browse/KAFKA-1112
> {code}
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,696 
> INFO main kafka.server.KafkaServer.info - [Kafka Server 847605514], starting
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,698 
> INFO main kafka.server.KafkaServer.info - [Kafka Server 847605514], 
> Connecting to zookeeper on 
> zk-main0.XXX:2181,zk-main1.XXX:2181,zk-main2.:2181/production/kafka/main
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,708 
> INFO 
> ZkClient-EventThread-14-zk-main0.XXX.com:2181,zk-main1.XXX.com:2181,zk-main2.XXX.com:2181,zk-main3.XXX.com:2181,zk-main4.XXX.com:2181/production/kafka/main
>  org.I0Itec.zkclient.ZkEventThread.run - Starting ZkClient event thread.
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:zookeeper.version=3.3.3-1203054, built on 11/17/2011 05:47 GMT
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:host.name=i-6b948138.inst.aws.airbnb.com
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,714 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.version=1.7.0_55
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.vendor=Oracle Corporation
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.home=/usr/lib/jvm/jre-7-oracle-x64/jre
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.class.path=libs/snappy-java-1.0.5.jar:libs/scala-library-2.10.1.jar:libs/slf4j-api-1.7.2.jar:libs/jopt-simple-3.2.jar:libs/metrics-annotation-2.2.0.jar:libs/log4j-1.2.15.jar:libs/kafka_2.10-0.8.1.jar:libs/zkclient-0.3.jar:libs/zookeeper-3.3.4.jar:libs/metrics-core-2.2.0.jar
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,715 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.io.tmpdir=/tmp
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:java.compiler=
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:os.name=Linux
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,716 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:os.arch=amd64
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:os.version=3.2.0-61-virtual
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:user.name=kafka
> 2014-07-11T00:53:17+00:00 i-6b948138 local3.info 2014-07-11 - 00:53:17,717 
> INFO main org.apache.zookeeper.ZooKeeper.logEnv - Client 
> environment:user.home=/srv/kafka
> 2014-07-11T00:53:17+00:00 i-6

[jira] [Updated] (KAFKA-1420) Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests

2015-04-18 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1420:
---
Status: In Progress  (was: Patch Available)

> Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with 
> TestUtils.createTopic in unit tests
> --
>
> Key: KAFKA-1420
> URL: https://issues.apache.org/jira/browse/KAFKA-1420
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Jonathan Natkins
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1420.patch, KAFKA-1420_2014-07-30_11:18:26.patch, 
> KAFKA-1420_2014-07-30_11:24:55.patch, KAFKA-1420_2014-08-02_11:04:15.patch, 
> KAFKA-1420_2014-08-10_14:12:05.patch, KAFKA-1420_2014-08-10_23:03:46.patch
>
>
> This is a follow-up JIRA from KAFKA-1389.
> There are a bunch of places in the unit tests where we misuse 
> AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK to create topics, 
> where TestUtils.createTopic needs to be used instead.





Re: Review Request 24006: Patch for KAFKA-1420

2015-04-18 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24006/#review80620
---


Thanks for the patch. Sorry for the late review. A few comments below.


core/src/test/scala/unit/kafka/admin/AdminTest.scala


This is a case where we probably want to explicitly specify the replica 
assignment. We don't want to depend on the current assignment strategy since it 
may change in the future.



core/src/test/scala/unit/kafka/admin/AdminTest.scala


Ditto as the above.



core/src/test/scala/unit/kafka/admin/AdminTest.scala


Ditto as the above.



core/src/test/scala/unit/kafka/admin/AdminTest.scala


This may not be a reliable way to wait for replica 0 to be in isr. Perhaps, 
we can explicitly check the isr set in Partition through ReplicaManager in the 
leader broker.



core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala


Should we use replicaIdsForPartition instead of Seq(0,1,2,3)?


- Jun Rao


On Aug. 11, 2014, 6:03 a.m., Jonathan Natkins wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24006/
> ---
> 
> (Updated Aug. 11, 2014, 6:03 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1420
> https://issues.apache.org/jira/browse/KAFKA-1420
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1420 Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK 
> with TestUtils.createTopic in unit tests
> 
> 
> Diffs
> -
> 
>   core/src/test/scala/unit/kafka/admin/AdminTest.scala 
> e28979827110dfbbb92fe5b152e7f1cc973de400 
>   core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
> 29cc01bcef9cacd8dec1f5d662644fc6fe4994bc 
>   core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala 
> f44568cb25edf25db857415119018fd4c9922f61 
>   core/src/test/scala/unit/kafka/utils/TestUtils.scala 
> c4e13c5240c8303853d08cc3b40088f8c7dae460 
> 
> Diff: https://reviews.apache.org/r/24006/diff/
> 
> 
> Testing
> ---
> 
> Automated
> 
> 
> Thanks,
> 
> Jonathan Natkins
> 
>



[jira] [Resolved] (KAFKA-2116) Add code that uses new consumer to system_tests

2015-04-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2116.
-
Resolution: Duplicate

> Add code that uses new consumer to system_tests
> ---
>
> Key: KAFKA-2116
> URL: https://issues.apache.org/jira/browse/KAFKA-2116
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gwen Shapira
>
> It takes us longer to catch obvious new bugs because our automated test-suite 
> doesn't test the new consumer yet.





[jira] [Commented] (KAFKA-1974) NPE in SelectorTest on trunk

2015-04-18 Thread Eddie Hao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501667#comment-14501667
 ] 

Eddie Hao commented on KAFKA-1974:
--

Hi, I just ran this test from HEAD and everything passed. Does this still need 
to be open?

> NPE in SelectorTest on trunk
> 
>
> Key: KAFKA-1974
> URL: https://issues.apache.org/jira/browse/KAFKA-1974
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Neha Narkhede
>  Labels: newbie
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.common.network.SelectorTest.teardown(SelectorTest.java:57)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:86)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:48)
>   at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:105)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
>   at 
> org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Updated] (KAFKA-2121) error handling in KafkaProducer constructor

2015-04-18 Thread Steven Zhen Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Zhen Wu updated KAFKA-2121:
--
Attachment: KAFKA-2121_2015-04-18_20:09:20.patch

> error handling in KafkaProducer constructor
> ---
>
> Key: KAFKA-2121
> URL: https://issues.apache.org/jira/browse/KAFKA-2121
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Steven Zhen Wu
>Assignee: Jun Rao
> Attachments: KAFKA-2121.patch, KAFKA-2121_2015-04-16_09:55:14.patch, 
> KAFKA-2121_2015-04-16_10:43:55.patch, KAFKA-2121_2015-04-18_20:09:20.patch
>
>
> On Mon, Apr 13, 2015 at 7:17 PM, Guozhang Wang  wrote:
> It is a valid problem and we should correct it as soon as possible, I'm
> with Ewen regarding the solution.
> On Mon, Apr 13, 2015 at 5:05 PM, Ewen Cheslack-Postava 
> wrote:
> > Steven,
> >
> > Looks like there is even more that could potentially be leaked -- since key
> > and value serializers are created and configured at the end, even the IO
> > thread allocated by the producer could leak. Given that, I think 1 isn't a
> > great option since, as you said, it doesn't really address the underlying
> > issue.
> >
> > 3 strikes me as bad from a user experience perspective. It's true we might
> > want to introduce additional constructors to make testing easier, but the
> > more components I need to allocate myself and inject into the producer's
> > constructor, the worse the default experience is. And since you would have
> > to inject the dependencies to get correct, non-leaking behavior, it will
> > always be more code than previously (and a backwards incompatible change).
> > Additionally, the code creating the producer would have to be more
> > complicated since it would have to deal with the cleanup carefully whereas
> > it previously just had to deal with the exception. Besides, for testing
> > specifically, you can avoid exposing more constructors just for testing by
> > using something like PowerMock that let you mock private methods. That
> > requires a bit of code reorganization, but doesn't affect the public
> > interface at all.
> >
> > So my take is that a variant of 2 is probably best. I'd probably do two
> > things. First, make close() safe to call even if some fields haven't been
> > initialized, which presumably just means checking for null fields. (You
> > might also want to figure out if all the methods close() calls are
> > idempotent and decide whether some fields should be marked non-final and
> > cleared to null when close() is called). Second, add the try/catch as you
> > suggested, but just use close().
> >
> > -Ewen
> >
> >
> > On Mon, Apr 13, 2015 at 3:53 PM, Steven Wu  wrote:
> >
> > > Here is the resource leak problem that we have encountered when 0.8.2
> > java
> > > KafkaProducer failed in constructor. here is the code snippet of
> > > KafkaProducer to illustrate the problem.
> > >
> > > ---
> > > public KafkaProducer(ProducerConfig config, Serializer<K> keySerializer,
> > > Serializer<V> valueSerializer) {
> > >
> > > // create metrics reporters via reflection
> > > List<MetricsReporter> reporters =
> > > config.getConfiguredInstances(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
> > > MetricsReporter.class);
> > >
> > > // validate bootstrap servers
> > > List<InetSocketAddress> addresses =
> > > ClientUtils.parseAndValidateAddresses(config.getList(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG));
> > >
> > > }
> > > ---
> > >
> > > let's say MyMetricsReporter creates a thread in its constructor. if hostname
> > > validation throws an exception, the constructor won't call the close method of
> > > MyMetricsReporter to clean up the resource. as a result, we created a
> > > thread leak issue. this becomes worse when we try to auto-recover (i.e. keep
> > > creating KafkaProducer again -> failing again -> more thread leaks).
> > >
> > > there are multiple options of fixing this.
> > >
> > > 1) just move the hostname validation to the beginning. but this only fixes
> > > one symptom. it doesn't fix the fundamental problem. what if some other
> > > lines throw an exception?
> > >
> > > 2) use try-catch. in the catch section, try to call close methods for any
> > > non-null objects constructed so far.
> > >
> > > 3) explicitly declare the dependencies in the constructor. this way, when
> > > KafkaProducer throws an exception, I can call the close method of the metrics
> > > reporters to release resources.
> > > KafkaProducer(..., List<MetricsReporter> reporters)
> > > we don't need a dependency injection framework. but generally hiding
> > > dependencies is a bad coding practice. it is also hard to plug in mocks for
> > > dependencies. this is probably the most intrusive change.
> > >
> > > I am willing to submit a p
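The try/catch cleanup ("option 2") discussed in the thread above can be sketched in a few lines. This is a hedged illustration only: `LeakyResource` and `Widget` are hypothetical stand-ins, not Kafka's actual `MetricsReporter` or `KafkaProducer` classes.

```java
import java.io.Closeable;

// Hypothetical resource that must be released, e.g. something that
// starts a background thread in its constructor.
class LeakyResource implements Closeable {
    static int openCount = 0;

    LeakyResource() { openCount++; }          // e.g. starts a background thread

    @Override
    public void close() { openCount--; }      // e.g. stops that thread
}

class Widget {
    private final LeakyResource resource;

    Widget(boolean failValidation) {
        LeakyResource r = null;
        try {
            r = new LeakyResource();          // allocated before validation runs
            if (failValidation)               // stands in for e.g. bootstrap-server validation
                throw new IllegalArgumentException("validation failed");
            this.resource = r;
        } catch (RuntimeException e) {
            if (r != null)
                r.close();                    // release partial state before propagating
            throw e;
        }
    }
}

public class Main {
    public static void main(String[] args) {
        try {
            new Widget(true);
        } catch (IllegalArgumentException expected) {
            // construction failed, but the resource was released
        }
        System.out.println("open resources after failed construction: "
                + LeakyResource.openCount);   // prints 0
    }
}
```

The key point is that the catch block only cleans up what was already allocated (`r != null`) and then rethrows, so repeated failed constructions no longer accumulate leaked threads.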

[jira] [Commented] (KAFKA-2121) error handling in KafkaProducer constructor

2015-04-18 Thread Steven Zhen Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501669#comment-14501669
 ] 

Steven Zhen Wu commented on KAFKA-2121:
---

Updated reviewboard https://reviews.apache.org/r/33242/diff/
 against branch apache/trunk

> error handling in KafkaProducer constructor
> ---
>
> Key: KAFKA-2121
> URL: https://issues.apache.org/jira/browse/KAFKA-2121
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>Reporter: Steven Zhen Wu
>Assignee: Jun Rao
> Attachments: KAFKA-2121.patch, KAFKA-2121_2015-04-16_09:55:14.patch, 
> KAFKA-2121_2015-04-16_10:43:55.patch, KAFKA-2121_2015-04-18_20:09:20.patch
>
>
> On Mon, Apr 13, 2015 at 7:17 PM, Guozhang Wang  wrote:
> It is a valid problem and we should correct it as soon as possible, I'm
> with Ewen regarding the solution.
> On Mon, Apr 13, 2015 at 5:05 PM, Ewen Cheslack-Postava 
> wrote:
> > Steven,
> >
> > Looks like there is even more that could potentially be leaked -- since key
> > and value serializers are created and configured at the end, even the IO
> > thread allocated by the producer could leak. Given that, I think 1 isn't a
> > great option since, as you said, it doesn't really address the underlying
> > issue.
> >
> > 3 strikes me as bad from a user experience perspective. It's true we might
> > want to introduce additional constructors to make testing easier, but the
> > more components I need to allocate myself and inject into the producer's
> > constructor, the worse the default experience is. And since you would have
> > to inject the dependencies to get correct, non-leaking behavior, it will
> > always be more code than previously (and a backwards incompatible change).
> > Additionally, the code creating the producer would have to be more
> > complicated since it would have to deal with the cleanup carefully whereas
> > it previously just had to deal with the exception. Besides, for testing
> > specifically, you can avoid exposing more constructors just for testing by
> > using something like PowerMock that let you mock private methods. That
> > requires a bit of code reorganization, but doesn't affect the public
> > interface at all.
> >
> > So my take is that a variant of 2 is probably best. I'd probably do two
> > things. First, make close() safe to call even if some fields haven't been
> > initialized, which presumably just means checking for null fields. (You
> > might also want to figure out if all the methods close() calls are
> > idempotent and decide whether some fields should be marked non-final and
> > cleared to null when close() is called). Second, add the try/catch as you
> > suggested, but just use close().
> >
> > -Ewen
> >
> >
> > On Mon, Apr 13, 2015 at 3:53 PM, Steven Wu  wrote:
> >
> > > Here is the resource leak problem that we have encountered when 0.8.2
> > java
> > > KafkaProducer failed in constructor. here is the code snippet of
> > > KafkaProducer to illustrate the problem.
> > >
> > > ---
> > > public KafkaProducer(ProducerConfig config, Serializer<K> keySerializer,
> > > Serializer<V> valueSerializer) {
> > >
> > > // create metrics reporters via reflection
> > > List<MetricsReporter> reporters =
> > > config.getConfiguredInstances(ProducerConfig.METRIC_REPORTER_CLASSES_CONFIG,
> > > MetricsReporter.class);
> > >
> > > // validate bootstrap servers
> > > List<InetSocketAddress> addresses =
> > > ClientUtils.parseAndValidateAddresses(config.getList(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG));
> > >
> > > }
> > > ---
> > >
> > > let's say MyMetricsReporter creates a thread in its constructor. if hostname
> > > validation throws an exception, the constructor won't call the close method of
> > > MyMetricsReporter to clean up the resource. as a result, we created a
> > > thread leak issue. this becomes worse when we try to auto-recover (i.e. keep
> > > creating KafkaProducer again -> failing again -> more thread leaks).
> > >
> > > there are multiple options of fixing this.
> > >
> > > 1) just move the hostname validation to the beginning. but this only fixes
> > > one symptom. it doesn't fix the fundamental problem. what if some other
> > > lines throw an exception?
> > >
> > > 2) use try-catch. in the catch section, try to call close methods for any
> > > non-null objects constructed so far.
> > >
> > > 3) explicitly declare the dependencies in the constructor. this way, when
> > > KafkaProducer throws an exception, I can call the close method of the metrics
> > > reporters to release resources.
> > > KafkaProducer(..., List<MetricsReporter> reporters)
> > > we don't need a dependency injection framework. but generally hiding
> > > dependencies is a bad coding practice. it is also hard to plug in mocks for
> > > dependencies. this is probably the most intrusive change.

Re: Review Request 33242: Patch for KAFKA-2121

2015-04-18 Thread Steven Wu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33242/
---

(Updated April 19, 2015, 3:09 a.m.)


Review request for kafka.


Bugs: KAFKA-2121
https://issues.apache.org/jira/browse/KAFKA-2121


Repository: kafka


Description (updated)
---

fix potential resource leak when KafkaProducer constructor fails in the middle


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
b91e2c52ed0acb1faa85915097d97bafa28c413a 
  clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java 
b3d3d7c56acb445be16a3fbe00f05eaba659be46 
  clients/src/main/java/org/apache/kafka/common/serialization/Serializer.java 
c2fdc23239bd2196cd912c3d121b591f21393eab 
  
clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/33242/diff/


Testing
---


Thanks,

Steven Wu



Re: Review Request 33242: Patch for KAFKA-2121

2015-04-18 Thread Steven Wu


> On April 16, 2015, 5:29 p.m., Ewen Cheslack-Postava wrote:
> > clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java, 
> > line 548
> > 
> >
> > One idea for making this less verbose and redundant: make all of these 
> > classes implement Closeable so we can just write one utility method for 
> > trying to close something and catching the exception.
> 
> Steven Wu wrote:
> yes. I thought about it. it may break binary compatibility, e.g. 
> Serializer. Sender and Metrics classes are probably only used internally. let 
> me know your thoughts.
> 
> Ewen Cheslack-Postava wrote:
> I'm pretty sure it's fine, based on this
> 
> "Changing the direct superclass or the set of direct superinterfaces of a 
> class type will not break compatibility with pre-existing binaries, provided 
> that the total set of superclasses or superinterfaces, respectively, of the 
> class type loses no members."
> 
> from https://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html

ok. you could be correct. here is another reference (easier to understand than 
the jls doc)

"Expand superinterface set (direct or inherited)-   Binary 
compatible"

from https://wiki.eclipse.org/Evolving_Java-based_APIs_2#Evolving_API_Interfaces


- Steven
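The single utility method Ewen proposes above (once the producer's fields implement Closeable) could look roughly like this. This is a hedged sketch: `ClosingUtils` and `closeQuietly` are illustrative names, not Kafka's actual implementation.

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical helper: attempt a close() and log failures, so one failing
// close does not prevent cleanup of the remaining resources.
class ClosingUtils {
    static void closeQuietly(Closeable c, String name) {
        if (c == null)
            return;                           // safe on fields never initialized
        try {
            c.close();
        } catch (IOException | RuntimeException e) {
            // swallow so cleanup of the other resources still runs
            System.err.println("Failed to close " + name + ": " + e);
        }
    }
}

public class Main {
    public static void main(String[] args) {
        // a failing constructor's catch block could then simply do:
        Closeable metrics = () -> System.out.println("metrics closed");
        ClosingUtils.closeQuietly(metrics, "metrics");
        ClosingUtils.closeQuietly(null, "key serializer");   // not yet created: no-op
    }
}
```

This removes the per-field try/catch repetition the review comment is objecting to: the catch block becomes a short sequence of `closeQuietly(...)` calls, one per field, each null-safe.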


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33242/#review80346
---


On April 19, 2015, 3:09 a.m., Steven Wu wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/33242/
> ---
> 
> (Updated April 19, 2015, 3:09 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2121
> https://issues.apache.org/jira/browse/KAFKA-2121
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> fix potential resource leak when KafkaProducer constructor fails in the middle
> 
> 
> Diffs
> -
> 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> b91e2c52ed0acb1faa85915097d97bafa28c413a 
>   clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java 
> b3d3d7c56acb445be16a3fbe00f05eaba659be46 
>   clients/src/main/java/org/apache/kafka/common/serialization/Serializer.java 
> c2fdc23239bd2196cd912c3d121b591f21393eab 
>   
> clients/src/test/java/org/apache/kafka/clients/producer/KafkaProducerTest.java
>  PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/33242/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Steven Wu
> 
>