[jira] [Commented] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2014-09-23 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144434#comment-14144434
 ] 

Joel Koshy commented on KAFKA-1367:
---

I definitely agree with [~edenhill] that if such a field exists in the response 
then the information populated in the field should be accurate (or we may as 
well not include the field) - so we should fix this.

 Broker topic metadata not kept in sync with ZooKeeper
 -

 Key: KAFKA-1367
 URL: https://issues.apache.org/jira/browse/KAFKA-1367
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1
Reporter: Ryan Berdeen
  Labels: newbie++
 Attachments: KAFKA-1367.txt


 When a broker is restarted, the topic metadata responses from the brokers 
 will be incorrect (different from ZooKeeper) until a preferred replica leader 
 election.
 In the metadata, it looks like leaders are correctly removed from the ISR 
 when a broker disappears, but followers are not. Then, when a broker 
 reappears, the ISR is never updated.
 I used a variation of the Vagrant setup created by Joe Stein to reproduce 
 this with latest from the 0.8.1 branch: 
 https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c
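The reported symptom can be modeled as a metadata cache that prunes a failed broker from partitions it led but leaves follower ISR entries untouched, and never restores them. This is purely illustrative (invented topic and broker names, not Kafka's actual data structures):

```python
# Illustrative model of the bug described above (NOT Kafka's real structures):
# when broker 2 dies, it is removed from the ISR of partitions it led, but
# partitions where it was only a follower keep it listed, and nothing updates
# the ISR when it comes back.
metadata = {
    "t-0": {"leader": 1, "isr": [1, 2]},  # broker 2 is a follower here
    "t-1": {"leader": 2, "isr": [2, 3]},  # broker 2 leads here
}

def on_broker_failure(broker):
    for partition in metadata.values():
        if partition["leader"] == broker:
            # leadership moves and the ISR shrinks for partitions it led...
            partition["isr"] = [b for b in partition["isr"] if b != broker]
            partition["leader"] = partition["isr"][0]
        # ...but follower ISR entries are left stale (the bug described above)

on_broker_failure(2)
# t-1 was failed over, but t-0 still reports broker 2 in its ISR.
```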



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144507#comment-14144507
 ] 

hongyu bi commented on KAFKA-1019:
--

Thanks @Guozhang.
Yes, I followed the same process to reproduce this issue.
Sorry, I don't quite understand why KAFKA-1558 leads to this issue.

From the method below I don't see any handling of topic deletion in
PartitionStateMachine$TopicChangeListener.handleChildChange:

val newTopics = currentChildren -- controllerContext.allTopics
val deletedTopics = controllerContext.allTopics -- currentChildren
controllerContext.allTopics = currentChildren
val addedPartitionReplicaAssignment =
  ZkUtils.getReplicaAssignmentForTopics(zkClient, newTopics.toSeq)
controllerContext.partitionReplicaAssignment =
  controllerContext.partitionReplicaAssignment.filter(p =>
    !deletedTopics.contains(p._1.topic))
controllerContext.partitionReplicaAssignment.++=(addedPartitionReplicaAssignment)
if (newTopics.size > 0)
  controller.onNewTopicCreation(newTopics, addedPartitionReplicaAssignment.keySet.toSet)
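The diff logic above boils down to two set differences against the controller's cache. A minimal model of just that step (topic names invented for illustration; this is not Kafka code):

```python
# Model of the TopicChangeListener diff: new topics are ZK children not yet in
# the controller cache; deleted topics are cached topics no longer in ZK.
all_topics = {"orders", "clicks"}          # controllerContext.allTopics (cache)
current_children = {"clicks", "payments"}  # children of /brokers/topics in ZK

new_topics = current_children - all_topics      # topics to create
deleted_topics = all_topics - current_children  # topics to drop from the cache

# The cache is then replaced wholesale; the replica-assignment map is filtered
# to remove entries for deleted topics before the new assignments are merged.
all_topics = current_children
```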



 kafka-preferred-replica-election.sh will fail without clear error message if 
 /brokers/topics/[topic]/partitions does not exist
 --

 Key: KAFKA-1019
 URL: https://issues.apache.org/jira/browse/KAFKA-1019
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Guozhang Wang
  Labels: newbie
 Fix For: 0.9.0


 From Libo Yu:
 I tried to run kafka-preferred-replica-election.sh on our kafka cluster.
 But I got this exception:
 Failed to start preferred replica election
 org.I0Itec.zkclient.exception.ZkNoNodeException: 
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for /brokers/topics/uattoqaaa.default/partitions
 I checked zookeeper and there is no 
 /brokers/topics/uattoqaaa.default/partitions. All I found is
 /brokers/topics/uattoqaaa.default.





[jira] [Comment Edited] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144507#comment-14144507
 ] 

hongyu bi edited comment on KAFKA-1019 at 9/23/14 8:42 AM:
---

Thanks @Guozhang.
Yes, I followed the same process to reproduce this issue.
Sorry, I don't quite understand why KAFKA-1558 leads to this issue.

From the method below I don't see any handling of topic deletion in
PartitionStateMachine$TopicChangeListener.handleChildChange:

val newTopics = currentChildren -- controllerContext.allTopics
val deletedTopics = controllerContext.allTopics -- currentChildren
controllerContext.allTopics = currentChildren
val addedPartitionReplicaAssignment =
  ZkUtils.getReplicaAssignmentForTopics(zkClient, newTopics.toSeq)
controllerContext.partitionReplicaAssignment =
  controllerContext.partitionReplicaAssignment.filter(p =>
    !deletedTopics.contains(p._1.topic))
controllerContext.partitionReplicaAssignment.++=(addedPartitionReplicaAssignment)
if (newTopics.size > 0)
  controller.onNewTopicCreation(newTopics, addedPartitionReplicaAssignment.keySet.toSet)

plus: From the state-change.log:
When I create the same topic after deleting it, PartitionStateMachine reports:

ERROR Controller 20225 epoch 1 initiated state change for partition [hobi1,1]
from OnlinePartition to NewPartition failed (state.change.logger)
java.lang.IllegalStateException: Partition [hobi1,1] should be in the
NonExistentPartition states before moving to NewPartition state. Instead it is
in OnlinePartition state
    at kafka.controller.PartitionStateMachine.assertValidPreviousStates(PartitionStateMachine.scala:243)

It seems that when deleting a topic, Kafka doesn't sync some internal data structure?





[jira] [Comment Edited] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144507#comment-14144507
 ] 

hongyu bi edited comment on KAFKA-1019 at 9/23/14 9:10 AM:
---

Thanks @Guozhang.
Yes, I followed the same process to reproduce this issue.
Sorry, I don't quite understand why KAFKA-1558 leads to this issue.

From the method below I don't see any handling of topic deletion in
PartitionStateMachine$TopicChangeListener.handleChildChange:

val newTopics = currentChildren -- controllerContext.allTopics
val deletedTopics = controllerContext.allTopics -- currentChildren
controllerContext.allTopics = currentChildren
val addedPartitionReplicaAssignment =
  ZkUtils.getReplicaAssignmentForTopics(zkClient, newTopics.toSeq)
controllerContext.partitionReplicaAssignment =
  controllerContext.partitionReplicaAssignment.filter(p =>
    !deletedTopics.contains(p._1.topic))
controllerContext.partitionReplicaAssignment.++=(addedPartitionReplicaAssignment)
if (newTopics.size > 0)
  controller.onNewTopicCreation(newTopics, addedPartitionReplicaAssignment.keySet.toSet)

plus1: From the state-change.log:
When I create the same topic after deleting it, PartitionStateMachine reports:

ERROR Controller 20225 epoch 1 initiated state change for partition [hobi1,1]
from OnlinePartition to NewPartition failed (state.change.logger)
java.lang.IllegalStateException: Partition [hobi1,1] should be in the
NonExistentPartition states before moving to NewPartition state. Instead it is
in OnlinePartition state
    at kafka.controller.PartitionStateMachine.assertValidPreviousStates(PartitionStateMachine.scala:243)

It seems that when deleting a topic, Kafka doesn't sync some internal data structure?

plus2:
From TopicDeletionManager, it seems deleting a topic doesn't enter this block:

if (controller.replicaStateMachine.areAllReplicasForTopicDeleted(topic)) {
  // clear up all state for this topic from controller cache and zookeeper
  completeDeleteTopic(topic)
  info("Deletion of topic %s successfully completed".format(topic))
}
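The guard in that block can be modeled as a check that every replica of the topic has reached the deleted state; only then does cleanup run. A minimal sketch (invented state names and topic, not Kafka code):

```python
# Model of areAllReplicasForTopicDeleted: deletion completes only once every
# replica of the topic reports a successful-deletion state.
replica_states = {
    ("hobi1", 0): "ReplicaDeletionSuccessful",
    ("hobi1", 1): "OnlineReplica",  # still alive, so deletion cannot complete
}

def all_replicas_deleted(topic):
    states = [s for (t, _), s in replica_states.items() if t == topic]
    return bool(states) and all(s == "ReplicaDeletionSuccessful" for s in states)
```

With one replica still online the guard stays false, which matches the observed behavior of never reaching completeDeleteTopic.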



Re: Review Request 24704: Patch for KAFKA-1499

2014-09-23 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24704/
---

(Updated Sept. 23, 2014, 9:17 a.m.)


Review request for kafka.


Bugs: KAFKA-1499
https://issues.apache.org/jira/browse/KAFKA-1499


Repository: kafka


Description
---

Addressing Joel's comments


Diffs (updated)
-

  core/src/main/scala/kafka/log/Log.scala 
0ddf97bd30311b6039e19abade41d2fbbad2f59b 
  core/src/main/scala/kafka/log/LogConfig.scala 
5746ad4767589594f904aa085131dd95e56d72bb 
  core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
788c7864bc881b935975ab4a4e877b690e65f1f1 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
165c816a9f4c925f6e46560e7e2ff9cf7591946b 
  core/src/main/scala/kafka/server/KafkaServer.scala 
390fef500d7e0027e698c259d777454ba5a0f5e8 
  core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala 
4e45d965bc423192ac704883ee75e9727006f89b 
  core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
2377abe4933e065d037828a214c3a87e1773a8ef 

Diff: https://reviews.apache.org/r/24704/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Commented] (KAFKA-1499) Broker-side compression configuration

2014-09-23 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144587#comment-14144587
 ] 

Manikumar Reddy commented on KAFKA-1499:


Updated reviewboard https://reviews.apache.org/r/24704/diff/
 against branch origin/trunk

 Broker-side compression configuration
 -

 Key: KAFKA-1499
 URL: https://issues.apache.org/jira/browse/KAFKA-1499
 Project: Kafka
  Issue Type: New Feature
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1499.patch, KAFKA-1499.patch, 
 KAFKA-1499_2014-08-15_14:20:27.patch, KAFKA-1499_2014-08-21_21:44:27.patch, 
 KAFKA-1499_2014-09-21_15:57:23.patch, KAFKA-1499_2014-09-23_14:45:38.patch

   Original Estimate: 72h
  Remaining Estimate: 72h

 A given topic can have messages in mixed compression codecs. i.e., it can
 also have a mix of uncompressed/compressed messages.
 It will be useful to support a broker-side configuration to recompress
 messages to a specific compression codec. i.e., all messages (for all
 topics) on the broker will be compressed to this codec. We could have
 per-topic overrides as well.
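 The proposal amounts to resolving a target codec per topic with a broker-wide fallback. A minimal sketch of that resolution (codec and topic names are illustrative, not the final Kafka config keys):

```python
# Broker-wide target codec with optional per-topic overrides (illustrative).
broker_default = "gzip"
topic_overrides = {"metrics": "snappy", "raw-logs": "uncompressed"}

def target_codec(topic):
    # A topic-level override wins; otherwise fall back to the broker default.
    return topic_overrides.get(topic, broker_default)
```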





Re: Review Request 24704: Patch for KAFKA-1499

2014-09-23 Thread Manikumar Reddy O


 On Sept. 23, 2014, 6:45 a.m., Joel Koshy wrote:
  core/src/main/scala/kafka/message/ByteBufferMessageSet.scala, line 205
  https://reviews.apache.org/r/24704/diff/7/?file=699221#file699221line205
 
  If the message set is uncompressed and the broker-side config is set to 
  enable broker-side compression but the compression type is uncompressed it 
  will still make a copy - right?
  
  Would it be clearer to just change the (existing) second parameter to 
  targetCodec and the new parameter to sourceCodec? So the condition would be 
  (if (sourceCodec == NoCompressionCodec && targetCodec == 
  NoCompressionCodec) // do in-place conversion)

OK, agreed. Done the required changes.
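The condition under discussion can be modeled as follows (a sketch of the review point, not the patch itself): the in-place append is only safe when both the incoming message set and the broker-side target are uncompressed; in every other case the set must be rewritten.

```python
# Model of the reviewed condition: sourceCodec is the codec of the incoming
# message set, targetCodec the broker-side config. Only the all-uncompressed
# case avoids a copy.
NO_COMPRESSION = "none"

def needs_recompression(source_codec, target_codec):
    return not (source_codec == NO_COMPRESSION and target_codec == NO_COMPRESSION)
```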


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24704/#review54255
---





[jira] [Updated] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Ivan Lyutov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Lyutov updated KAFKA-1490:
---
Attachment: rb25703.patch

Rebased to the latest trunk. Reviewboard has been updated accordingly.

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contains lots of stuff in the gradle folder we do 
 not need





[jira] [Comment Edited] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144507#comment-14144507
 ] 

hongyu bi edited comment on KAFKA-1019 at 9/23/14 9:57 AM:
---

Thanks @Guozhang.
Yes, I followed the same process to reproduce this issue.
Sorry, I don't quite understand why KAFKA-1558 leads to this issue.

From the method below I don't see any handling of topic deletion in
PartitionStateMachine$TopicChangeListener.handleChildChange:

val newTopics = currentChildren -- controllerContext.allTopics
val deletedTopics = controllerContext.allTopics -- currentChildren
controllerContext.allTopics = currentChildren
val addedPartitionReplicaAssignment =
  ZkUtils.getReplicaAssignmentForTopics(zkClient, newTopics.toSeq)
controllerContext.partitionReplicaAssignment =
  controllerContext.partitionReplicaAssignment.filter(p =>
    !deletedTopics.contains(p._1.topic))
controllerContext.partitionReplicaAssignment.++=(addedPartitionReplicaAssignment)
if (newTopics.size > 0)
  controller.onNewTopicCreation(newTopics, addedPartitionReplicaAssignment.keySet.toSet)

plus1: From the state-change.log:
When I create the same topic after deleting it, PartitionStateMachine reports:

ERROR Controller 20225 epoch 1 initiated state change for partition [hobi1,1]
from OnlinePartition to NewPartition failed (state.change.logger)
java.lang.IllegalStateException: Partition [hobi1,1] should be in the
NonExistentPartition states before moving to NewPartition state. Instead it is
in OnlinePartition state
    at kafka.controller.PartitionStateMachine.assertValidPreviousStates(PartitionStateMachine.scala:243)

It seems that when deleting a topic, Kafka doesn't sync some internal data structure?

plus2:
From TopicDeletionManager, it seems deleting a topic doesn't enter this block:

if (controller.replicaStateMachine.areAllReplicasForTopicDeleted(topic)) {
  // clear up all state for this topic from controller cache and zookeeper
  completeDeleteTopic(topic)
  info("Deletion of topic %s successfully completed".format(topic))
}

plus3:
Finally, I found that I didn't enable delete.topic. My fault.



[jira] [Comment Edited] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144507#comment-14144507
 ] 

hongyu bi edited comment on KAFKA-1019 at 9/23/14 10:15 AM:


Thanks @Guozhang.
Yes, I followed the same process to reproduce this issue.
Sorry, I don't quite understand why KAFKA-1558 leads to this issue.

From the method below I don't see any handling of topic deletion in
PartitionStateMachine$TopicChangeListener.handleChildChange:

val newTopics = currentChildren -- controllerContext.allTopics
val deletedTopics = controllerContext.allTopics -- currentChildren
controllerContext.allTopics = currentChildren
val addedPartitionReplicaAssignment =
  ZkUtils.getReplicaAssignmentForTopics(zkClient, newTopics.toSeq)
controllerContext.partitionReplicaAssignment =
  controllerContext.partitionReplicaAssignment.filter(p =>
    !deletedTopics.contains(p._1.topic))
controllerContext.partitionReplicaAssignment.++=(addedPartitionReplicaAssignment)
if (newTopics.size > 0)
  controller.onNewTopicCreation(newTopics, addedPartitionReplicaAssignment.keySet.toSet)

plus1: From the state-change.log:
When I create the same topic after deleting it, PartitionStateMachine reports:

ERROR Controller 20225 epoch 1 initiated state change for partition [hobi1,1]
from OnlinePartition to NewPartition failed (state.change.logger)
java.lang.IllegalStateException: Partition [hobi1,1] should be in the
NonExistentPartition states before moving to NewPartition state. Instead it is
in OnlinePartition state
    at kafka.controller.PartitionStateMachine.assertValidPreviousStates(PartitionStateMachine.scala:243)

It seems that when deleting a topic, Kafka doesn't sync some internal data structure?

plus2:
From TopicDeletionManager, it seems deleting a topic doesn't enter this block:

if (controller.replicaStateMachine.areAllReplicasForTopicDeleted(topic)) {
  // clear up all state for this topic from controller cache and zookeeper
  completeDeleteTopic(topic)
  info("Deletion of topic %s successfully completed".format(topic))
}

plus3:
I enabled the delete topic thread, but the issue didn't disappear.



[jira] [Comment Edited] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144507#comment-14144507
 ] 

hongyu bi edited comment on KAFKA-1019 at 9/23/14 10:38 AM:


Thanks @Guozhang.
Yes, I followed the same process to reproduce this issue.
Sorry, I don't quite understand why KAFKA-1558 leads to this issue.

PS:
I enabled the topic delete thread; from the controller.log I saw that
TopicDeleteThread is hanging on awaitTopicDeletionNotification(), which means
TopicDeletionManager didn't signal the thread to doWork.
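The hang described here is the classic condition-variable pattern: the deletion thread blocks until it is signaled that a topic has been queued, and without the signal it waits forever. A minimal model (names and topic are illustrative, not Kafka code):

```python
# Model of awaitTopicDeletionNotification: a worker waits on a condition
# variable; the controller must call notify() after queueing a deletion,
# or the worker never wakes up.
import threading

cond = threading.Condition()
pending = []
deleted = []

def delete_worker():
    with cond:
        while not pending:      # without a signal, this wait never returns
            cond.wait()
        deleted.append(pending.pop())

def enqueue_topic_deletion(topic):
    with cond:
        pending.append(topic)
        cond.notify()           # the signal the controller was not sending

worker = threading.Thread(target=delete_worker)
worker.start()
enqueue_topic_deletion("hobi1")
worker.join()
```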




Re: Review Request 22131: Patch for KAFKA-1477

2014-09-23 Thread Ivan Lyutov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22131/
---

(Updated Sept. 23, 2014, 11:11 a.m.)


Review request for kafka.


Bugs: KAFKA-1477
https://issues.apache.org/jira/browse/KAFKA-1477


Repository: kafka


Description (updated)
---

Minor fixes, cleanup

Refactoring

Fixed tests compilation error.

Updated according to requested changes: refactoring, minor edits.


Added basic functionality for new producer.


Bug fixes after rebase


Bug fix after rebase


Updated tests after rebase


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
f58b8508d3f813a51015abed772c704390887d7e 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
f9de4af426449cceca12a8de9a9f54a6241d28d8 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/SSLSocketChannel.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/errors/UnknownKeyStoreException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
4dd2cdf773f7eb01a93d7f994383088960303dfc 
  
clients/src/main/java/org/apache/kafka/common/network/security/AuthConfig.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/security/KeyStores.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/security/SecureAuth.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/security/StoreInitializer.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/security/store/JKSInitializer.java
 PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
5c5e3d40819e41cab7b52a0eeaee5f2e7317b7b3 
  config/client.keystore PRE-CREATION 
  config/client.public-key PRE-CREATION 
  config/client.security.properties PRE-CREATION 
  config/consumer.properties 83847de30d10b6e78bb8de28e0bb925d7c0e6ca2 
  config/producer.properties 39d65d7c6c21f4fccd7af89be6ca12a088d5dd98 
  config/server.keystore PRE-CREATION 
  config/server.properties 5c0905a572b1f0d8b07bfca967a09cb856a6b09f 
  config/server.public-key PRE-CREATION 
  config/server.security.properties PRE-CREATION 
  core/src/main/scala/kafka/api/FetchRequest.scala 
59c09155dd25fad7bed07d3d00039e3dc66db95c 
  core/src/main/scala/kafka/client/ClientUtils.scala 
ebba87f0566684c796c26cb76c64b4640a5ccfde 
  core/src/main/scala/kafka/cluster/Broker.scala 
0060add008bb3bc4b0092f2173c469fce0120be6 
  core/src/main/scala/kafka/common/UnknownKeyStoreException.scala PRE-CREATION 
  core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
9ebbee6c16dc83767297c729d2d74ebbd063a993 
  core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
d349a3000feb9ccd57d1f3cb163548d5bf432186 
  core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
ecbfa0f328ba6a652a758ab20cacef324a8b2fb8 
  core/src/main/scala/kafka/network/BlockingChannel.scala 
eb7bb14d94cb3648c06d4de36a3b34aacbde4556 
  core/src/main/scala/kafka/network/SocketServer.scala 
3a6f8d121e822e7b6ec32c9147829e91f40e9038 
  core/src/main/scala/kafka/network/security/AuthConfig.scala PRE-CREATION 
  core/src/main/scala/kafka/network/security/KeyStores.scala PRE-CREATION 
  core/src/main/scala/kafka/network/security/SSLSocketChannel.scala 
PRE-CREATION 
  core/src/main/scala/kafka/network/security/SecureAuth.scala PRE-CREATION 
  core/src/main/scala/kafka/network/security/store/JKSInitializer.scala 
PRE-CREATION 
  core/src/main/scala/kafka/producer/ProducerConfig.scala 
3cdf23dce3407f1770b9c6543e3a8ae8ab3ff255 
  core/src/main/scala/kafka/producer/ProducerPool.scala 
43df70bb461dd3e385e6b20396adef3c4016a3fc 
  core/src/main/scala/kafka/producer/SyncProducer.scala 
42c950375098b51f45c79c6a4a99a36f387bf02b 
  core/src/main/scala/kafka/producer/SyncProducerConfig.scala 
69b2d0c11bb1412ce76d566f285333c806be301a 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
2e9532e820b5b5c63dfd55f5454b32866d084a37 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
165c816a9f4c925f6e46560e7e2ff9cf7591946b 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
4acdd70fe9c1ee78d6510741006c2ece65450671 
  core/src/main/scala/kafka/server/KafkaServer.scala 
390fef500d7e0027e698c259d777454ba5a0f5e8 
  core/src/main/scala/kafka/tools/ConsoleConsumer.scala 
323fc8566d974acc4e5c7d7c2a065794f3b5df4a 
  core/src/main/scala/kafka/tools/ConsoleProducer.scala 
da4dad405c8d8f26a64cda78a292e1f5bfbdcc22 
  core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala 
d1e7c434e77859d746b8dc68dd5d5a3740425e79 
  core/src/main/scala/kafka/tools/GetOffsetShell.scala 
9c6064e201eebbcd5b276a0dedd02937439edc94 
  core/src/main/scala/kafka/tools/ReplicaVerificationTool.scala 

Re: Review Request 22131: Patch for KAFKA-1477

2014-09-23 Thread Ivan Lyutov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22131/
---

(Updated Sept. 23, 2014, 11:14 a.m.)


Review request for kafka.


Bugs: KAFKA-1477
https://issues.apache.org/jira/browse/KAFKA-1477


Repository: kafka


Description (updated)
---

Minor fixes, cleanup

Refactoring

Fixed tests compilation error.

Updated according to requested changes: refactoring, minor edits.


Added basic functionality for new producer.


bug fixes after rebase


bug fix after rebase


updated tests after rebase


changed default security to false


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
f58b8508d3f813a51015abed772c704390887d7e 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
f9de4af426449cceca12a8de9a9f54a6241d28d8 
  
clients/src/main/java/org/apache/kafka/clients/producer/internals/SSLSocketChannel.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/errors/UnknownKeyStoreException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
4dd2cdf773f7eb01a93d7f994383088960303dfc 
  
clients/src/main/java/org/apache/kafka/common/network/security/AuthConfig.java 
PRE-CREATION 
  clients/src/main/java/org/apache/kafka/common/network/security/KeyStores.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/security/SecureAuth.java 
PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/security/StoreInitializer.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/common/network/security/store/JKSInitializer.java
 PRE-CREATION 
  clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
5c5e3d40819e41cab7b52a0eeaee5f2e7317b7b3 
  config/client.keystore PRE-CREATION 
  config/client.public-key PRE-CREATION 
  config/client.security.properties PRE-CREATION 
  config/consumer.properties 83847de30d10b6e78bb8de28e0bb925d7c0e6ca2 
  config/producer.properties 39d65d7c6c21f4fccd7af89be6ca12a088d5dd98 
  config/server.keystore PRE-CREATION 
  config/server.properties 5c0905a572b1f0d8b07bfca967a09cb856a6b09f 
  config/server.public-key PRE-CREATION 
  config/server.security.properties PRE-CREATION 
  core/src/main/scala/kafka/api/FetchRequest.scala 
59c09155dd25fad7bed07d3d00039e3dc66db95c 
  core/src/main/scala/kafka/client/ClientUtils.scala 
ebba87f0566684c796c26cb76c64b4640a5ccfde 
  core/src/main/scala/kafka/cluster/Broker.scala 
0060add008bb3bc4b0092f2173c469fce0120be6 
  core/src/main/scala/kafka/common/UnknownKeyStoreException.scala PRE-CREATION 
  core/src/main/scala/kafka/consumer/ConsumerConfig.scala 
9ebbee6c16dc83767297c729d2d74ebbd063a993 
  core/src/main/scala/kafka/consumer/ConsumerFetcherManager.scala 
b9e2bea7b442a19bcebd1b350d39541a8c9dd068 
  core/src/main/scala/kafka/consumer/SimpleConsumer.scala 
d349a3000feb9ccd57d1f3cb163548d5bf432186 
  core/src/main/scala/kafka/controller/ControllerChannelManager.scala 
ecbfa0f328ba6a652a758ab20cacef324a8b2fb8 
  core/src/main/scala/kafka/network/BlockingChannel.scala 
eb7bb14d94cb3648c06d4de36a3b34aacbde4556 
  core/src/main/scala/kafka/network/SocketServer.scala 
3a6f8d121e822e7b6ec32c9147829e91f40e9038 
  core/src/main/scala/kafka/network/security/AuthConfig.scala PRE-CREATION 
  core/src/main/scala/kafka/network/security/KeyStores.scala PRE-CREATION 
  core/src/main/scala/kafka/network/security/SSLSocketChannel.scala 
PRE-CREATION 
  core/src/main/scala/kafka/network/security/SecureAuth.scala PRE-CREATION 
  core/src/main/scala/kafka/network/security/store/JKSInitializer.scala 
PRE-CREATION 
  core/src/main/scala/kafka/producer/ProducerConfig.scala 
3cdf23dce3407f1770b9c6543e3a8ae8ab3ff255 
  core/src/main/scala/kafka/producer/ProducerPool.scala 
43df70bb461dd3e385e6b20396adef3c4016a3fc 
  core/src/main/scala/kafka/producer/SyncProducer.scala 
42c950375098b51f45c79c6a4a99a36f387bf02b 
  core/src/main/scala/kafka/producer/SyncProducerConfig.scala 
69b2d0c11bb1412ce76d566f285333c806be301a 
  core/src/main/scala/kafka/server/AbstractFetcherThread.scala 
2e9532e820b5b5c63dfd55f5454b32866d084a37 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
165c816a9f4c925f6e46560e7e2ff9cf7591946b 
  core/src/main/scala/kafka/server/KafkaHealthcheck.scala 
4acdd70fe9c1ee78d6510741006c2ece65450671 
  core/src/main/scala/kafka/server/KafkaServer.scala 
390fef500d7e0027e698c259d777454ba5a0f5e8 
  core/src/main/scala/kafka/tools/ConsoleConsumer.scala 
323fc8566d974acc4e5c7d7c2a065794f3b5df4a 
  core/src/main/scala/kafka/tools/ConsoleProducer.scala 
da4dad405c8d8f26a64cda78a292e1f5bfbdcc22 
  core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala 
d1e7c434e77859d746b8dc68dd5d5a3740425e79 
  core/src/main/scala/kafka/tools/GetOffsetShell.scala 
9c6064e201eebbcd5b276a0dedd02937439edc94 
  

[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-09-23 Thread Ivan Lyutov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14144682#comment-14144682
 ] 

Ivan Lyutov commented on KAFKA-1477:


I have updated the reviewboard https://reviews.apache.org/r/22131/

 add authentication layer and initial JKS x509 implementation for brokers, 
 producers and consumer for network communication
 --

 Key: KAFKA-1477
 URL: https://issues.apache.org/jira/browse/KAFKA-1477
 Project: Kafka
  Issue Type: New Feature
Reporter: Joe Stein
Assignee: Ivan Lyutov
 Fix For: 0.9.0

 Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
 KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
 KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch








[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-09-23 Thread Ivan Lyutov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14144683#comment-14144683
 ] 

Ivan Lyutov commented on KAFKA-1477:


Rajasekar Elango, the ssl branch is located here: 
http://github.com/edgefox/kafka/tree/kafka-ssl

 add authentication layer and initial JKS x509 implementation for brokers, 
 producers and consumer for network communication
 --

 Key: KAFKA-1477
 URL: https://issues.apache.org/jira/browse/KAFKA-1477
 Project: Kafka
  Issue Type: New Feature
Reporter: Joe Stein
Assignee: Ivan Lyutov
 Fix For: 0.9.0

 Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
 KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
 KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch








[jira] [Comment Edited] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread hongyu bi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14144507#comment-14144507
 ] 

hongyu bi edited comment on KAFKA-1019 at 9/23/14 12:20 PM:


Thanks, @Guozhang.
After diving into the source code I got it.


was (Author: hongyu.bi):
Thanks, @Guozhang.
Yes, I followed the same process to reproduce this issue.
Sorry, I don't quite understand why KAFKA-1558 led to this issue.

PS:
I enabled the topic delete thread; from the controller.log I saw that 
TopicDeleteThread is hanging on awaitTopicDeletionNotification(), which means 
TopicDeletionManager didn't signal the thread to doWork.
After diving into the source code I suspect that DeleteTopicsListener didn't take 
effect.

 kafka-preferred-replica-election.sh will fail without clear error message if 
 /brokers/topics/[topic]/partitions does not exist
 --

 Key: KAFKA-1019
 URL: https://issues.apache.org/jira/browse/KAFKA-1019
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Guozhang Wang
  Labels: newbie
 Fix For: 0.9.0


 From Libo Yu:
 I tried to run kafka-preferred-replica-election.sh on our kafka cluster.
 But I got this expection:
 Failed to start preferred replica election
 org.I0Itec.zkclient.exception.ZkNoNodeException: 
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for /brokers/topics/uattoqaaa.default/partitions
 I checked zookeeper and there is no 
 /brokers/topics/uattoqaaa.default/partitions. All I found is
 /brokers/topics/uattoqaaa.default.





[jira] [Updated] (KAFKA-1644) Inherit FetchResponse from RequestOrResponse

2014-09-23 Thread Anton Karamanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Karamanov updated KAFKA-1644:
---
Status: Patch Available  (was: Open)

 Inherit FetchResponse from RequestOrResponse
 

 Key: KAFKA-1644
 URL: https://issues.apache.org/jira/browse/KAFKA-1644
 Project: Kafka
  Issue Type: Bug
Reporter: Anton Karamanov
Assignee: Anton Karamanov
 Attachments: 
 0001-KAFKA-1644-Inherit-FetchResponse-from-RequestOrRespo.patch


 Unlike all other Kafka API responses {{FetchResponse}} is not a subclass of 
 RequestOrResponse, which requires handling it as a special case while 
 processing responses.





[jira] [Resolved] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein resolved KAFKA-1490.
--
Resolution: Fixed

+1 committed to trunk, also minor update to readme 

{code}

diff --git a/README.md b/README.md
index 18a65b1..8e50945 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,13 @@ Apache Kafka
 =
 See our [web site](http://kafka.apache.org) for details on the project.
 
+You need to have [gradle](http://www.gradle.org/installation) installed.
+
+### First bootstrap and download the wrapper ###
+gradle
+
+Now everything else will work
+
 ### Building a jar and running it ###
 ./gradlew jar  
 

{code}

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contains lots of stuff in the gradle folder we do 
 not need





[jira] [Comment Edited] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14144953#comment-14144953
 ] 

Joe Stein edited comment on KAFKA-1490 at 9/23/14 4:06 PM:
---

+1 committed to trunk, also minor update to readme 

{code}

diff --git a/README.md b/README.md
index 18a65b1..8e50945 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,13 @@ Apache Kafka
 =
 See our [web site](http://kafka.apache.org) for details on the project.
 
+You need to have [gradle](http://www.gradle.org/installation) installed.
+
+### First bootstrap and download the wrapper ###
+gradle
+
+Now everything else will work
+
 ### Building a jar and running it ###
 ./gradlew jar  
 

{code}


was (Author: joestein):
+1 committed to trunk, also update minor updates to readme 

{code}

diff --git a/README.md b/README.md
index 18a65b1..8e50945 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,13 @@ Apache Kafka
 =
 See our [web site](http://kafka.apache.org) for details on the project.
 
+You need to have [gradle](http://www.gradle.org/installation) installed.
+
+### First bootstrap and download the wrapper ###
+gradle
+
+Now everything else will work
+
 ### Building a jar and running it ###
 ./gradlew jar  
 

{code}

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contains lots of stuff in the gradle folder we do 
 not need





Build failed in Jenkins: Kafka-trunk #272

2014-09-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/272/changes

Changes:

[joe.stein] KAFKA-1490 remove gradlew initial setup output from source 
distribution patch by Ivan Lyutov reviewed by Joe Stein

--
Started by an SCM change
Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
https://builds.apache.org/job/Kafka-trunk/ws/
  git rev-parse --is-inside-work-tree
Fetching changes from the remote Git repository
  git config remote.origin.url 
  https://git-wip-us.apache.org/repos/asf/kafka.git
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
  git --version
  git fetch --tags --progress 
  https://git-wip-us.apache.org/repos/asf/kafka.git 
  +refs/heads/*:refs/remotes/origin/*
  git rev-parse origin/trunk^{commit}
Checking out Revision d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011 (origin/trunk)
  git config core.sparsecheckout
  git checkout -f d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
  git rev-list 6d7057566f0c3a872f625957aa086b993e76071f
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson229233895233879711.sh
+ ./gradlew -PscalaVersion=2.10.1 test
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/gradle/wrapper/GradleWrapperMain
Caused by: java.lang.ClassNotFoundException: 
org.gradle.wrapper.GradleWrapperMain
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.gradle.wrapper.GradleWrapperMain.  Program 
will exit.
Build step 'Execute shell' marked build as failure


jenkins changes required

2014-09-23 Thread Joe Stein
Hey, so it looks like the gradlew change (I just removed the wrapper jar) is
going to require something to be done on the Jenkins side, since gradle has to
be installed and run once first to download the wrapper... then everything
else works.

I haven't updated Jenkins @ apache before and am not sure how to do that
(I probably have permission).

Anyone else familiar with this who can either fix it or point me in the right
direction, please?

This is what folks will see if they don't do the two steps I added to the
README: 1) install gradle; 2) run gradle (the default task downloads the
wrapper to bootstrap):

[joe.stein] KAFKA-1490 remove gradlew initial setup output from source
distribution patch by Ivan Lyutov reviewed by Joe Stein

--
Started by an SCM change
Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
https://builds.apache.org/job/Kafka-trunk/ws/
  git rev-parse --is-inside-work-tree
Fetching changes from the remote Git repository
  git config remote.origin.url
https://git-wip-us.apache.org/repos/asf/kafka.git
Fetching upstream changes from
https://git-wip-us.apache.org/repos/asf/kafka.git
  git --version
  git fetch --tags --progress
https://git-wip-us.apache.org/repos/asf/kafka.git
 +refs/heads/*:refs/remotes/origin/*
  git rev-parse origin/trunk^{commit}
Checking out Revision d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
(origin/trunk)
  git config core.sparsecheckout
  git checkout -f d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
  git rev-list 6d7057566f0c3a872f625957aa086b993e76071f
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson229233895233879711.sh
+ ./gradlew -PscalaVersion=2.10.1 test
Exception in thread "main" java.lang.NoClassDefFoundError:
org/gradle/wrapper/GradleWrapperMain
Caused by: java.lang.ClassNotFoundException: org.gradle.wrapper.
GradleWrapperMain
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.gradle.wrapper.GradleWrapperMain.
Program will exit.
Build step 'Execute shell' marked build as failure

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


Re: Review Request 24214: Patch for KAFKA-1374

2014-09-23 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/
---

(Updated Sept. 23, 2014, 4:20 p.m.)


Review request for kafka.


Bugs: KAFKA-1374
https://issues.apache.org/jira/browse/KAFKA-1374


Repository: kafka


Description (updated)
---

Addressing Jun's comments


Diffs (updated)
-

  core/src/main/scala/kafka/log/LogCleaner.scala 
c20de4ad4734c0bd83c5954fdb29464a27b91dff 
  core/src/main/scala/kafka/tools/TestLogCleaning.scala 
1d4ea93f2ba8d4d4d47a307cd47f54a15d3d30dd 
  core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
5bfa764638e92f217d0ff7108ec8f53193c22978 

Diff: https://reviews.apache.org/r/24214/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Updated] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2014-09-23 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1374:
---
Attachment: KAFKA-1374_2014-09-23_21:47:12.patch

 LogCleaner (compaction) does not support compressed topics
 --

 Key: KAFKA-1374
 URL: https://issues.apache.org/jira/browse/KAFKA-1374
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
 KAFKA-1374_2014-08-12_22:23:06.patch, KAFKA-1374_2014-09-23_21:47:12.patch


 This is a known issue, but opening a ticket to track.
 If you try to compact a topic that has compressed messages you will run into
 various exceptions - typically because during iteration we advance the
 position based on the decompressed size of the message. I have a bunch of
 stack traces, but it should be straightforward to reproduce.
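The iteration bug described above can be illustrated with a toy model (names are invented, not Kafka's actual classes): a reader that advances its file position by the decompressed size of each compressed batch quickly points past the real on-disk data.

```java
import java.util.List;

public class PositionBugDemo {
    // Each batch: bytes occupied on disk vs. size after decompression.
    record Batch(int onDiskSize, int decompressedSize) {}

    // Correct behavior advances by on-disk size; the buggy path advances
    // by decompressed size, desynchronizing the reader from the file.
    static int endPosition(List<Batch> batches, boolean buggy) {
        int pos = 0;
        for (Batch b : batches) {
            pos += buggy ? b.decompressedSize() : b.onDiskSize();
        }
        return pos;
    }

    public static void main(String[] args) {
        List<Batch> log = List.of(new Batch(100, 250), new Batch(80, 300));
        int fileSize = 180; // real bytes on disk
        System.out.println(endPosition(log, false) == fileSize); // true
        System.out.println(endPosition(log, true)); // 550: points past the file
    }
}
```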





[jira] [Updated] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2014-09-23 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1374:
---
Status: Patch Available  (was: Open)

 LogCleaner (compaction) does not support compressed topics
 --

 Key: KAFKA-1374
 URL: https://issues.apache.org/jira/browse/KAFKA-1374
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy
Assignee: Manikumar Reddy
  Labels: newbie++
 Fix For: 0.8.2

 Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
 KAFKA-1374_2014-08-12_22:23:06.patch, KAFKA-1374_2014-09-23_21:47:12.patch


 This is a known issue, but opening a ticket to track.
 If you try to compact a topic that has compressed messages you will run into
 various exceptions - typically because during iteration we advance the
 position based on the decompressed size of the message. I have a bunch of
 stack traces, but it should be straightforward to reproduce.





Re: Review Request 24214: Patch for KAFKA-1374

2014-09-23 Thread Manikumar Reddy O


 On Aug. 18, 2014, 5:21 p.m., Jun Rao wrote:
  core/src/main/scala/kafka/log/LogCleaner.scala, lines 436-438
  https://reviews.apache.org/r/24214/diff/3-5/?file=657031#file657031line436
 
  Hmm, I think the original approach of throwing an exception is probably 
  better. When handling the produce requests, we can reject messages w/o a 
  key, if the topic is configured with compaction. Once we do that, there 
  should be no messages with null key during compaction. If that happens, we 
  should just fail the broker.

Ok.. I reverted the changes. We will revisit the solution in KAFKA-1581
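Jun's suggestion above, rejecting keyless messages at produce time for compacted topics, might look roughly like this (hypothetical names and shape, not the actual broker code):

```java
import java.util.Map;

public class KeyCheck {
    // Reject a null key when the topic is configured for log compaction, so
    // the log cleaner never encounters a message it cannot compact.
    static void validateKey(byte[] key, Map<String, String> topicConfig) {
        boolean compacted = "compact".equals(topicConfig.get("cleanup.policy"));
        if (compacted && key == null) {
            throw new IllegalArgumentException(
                "Compacted topics require a non-null message key");
        }
    }

    public static void main(String[] args) {
        Map<String, String> config = Map.of("cleanup.policy", "compact");
        validateKey("user-42".getBytes(), config); // accepted
        try {
            validateKey(null, config);             // rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```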


 On Aug. 18, 2014, 5:21 p.m., Jun Rao wrote:
  core/src/main/scala/kafka/log/LogCleaner.scala, lines 479-481
  https://reviews.apache.org/r/24214/diff/5/?file=658590#file658590line479
 
  Could we use MemoryRecords.RecordsIterator to iterate compressed 
  messages?

This change would require some complicated work, so I am dropping this issue.


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/#review50893
---


On Sept. 23, 2014, 4:20 p.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/24214/
 ---
 
 (Updated Sept. 23, 2014, 4:20 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1374
 https://issues.apache.org/jira/browse/KAFKA-1374
 
 
 Repository: kafka
 
 
 Description
 ---
 
  Addressing Jun's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/log/LogCleaner.scala 
 c20de4ad4734c0bd83c5954fdb29464a27b91dff 
   core/src/main/scala/kafka/tools/TestLogCleaning.scala 
 1d4ea93f2ba8d4d4d47a307cd47f54a15d3d30dd 
   core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
 5bfa764638e92f217d0ff7108ec8f53193c22978 
 
 Diff: https://reviews.apache.org/r/24214/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Manikumar Reddy O
 




Re: Review Request 24214: Patch for KAFKA-1374

2014-09-23 Thread Manikumar Reddy O


 On Aug. 18, 2014, 5:32 p.m., Joel Koshy wrote:
  I should be able to review this later today. However, as Jun also mentioned 
  can you please run the stress test? When I was working on the original 
  (WIP) patch it worked but eventually failed (due to various reasons such as 
  corrupt message sizes, etc) on a stress test after several segments had 
  rolled and after several log cleaner runs. Although I didn't get time to 
  look into it, your patch should hopefully have addressed these issues.

I tested the patch with my own test code and it is working fine.

I ran the TestLogCleaning stress test. Sometimes this test fails, 
but I am not getting any broker-side errors or corrupt messages.  

I also ran TestLogCleaning on trunk (without my patch). The test fails there 
as well for multiple topics.
I am looking into the TestLogCleaning code and will fix any issue I find.

I will keep you updated on the testing status.


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/#review50901
---


On Sept. 23, 2014, 4:20 p.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/24214/
 ---
 
 (Updated Sept. 23, 2014, 4:20 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1374
 https://issues.apache.org/jira/browse/KAFKA-1374
 
 
 Repository: kafka
 
 
 Description
 ---
 
  Addressing Jun's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/log/LogCleaner.scala 
 c20de4ad4734c0bd83c5954fdb29464a27b91dff 
   core/src/main/scala/kafka/tools/TestLogCleaning.scala 
 1d4ea93f2ba8d4d4d47a307cd47f54a15d3d30dd 
   core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
 5bfa764638e92f217d0ff7108ec8f53193c22978 
 
 Diff: https://reviews.apache.org/r/24214/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Manikumar Reddy O
 




[jira] [Commented] (KAFKA-1555) provide strong consistency with reasonable availability

2014-09-23 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14144999#comment-14144999
 ] 

Gwen Shapira commented on KAFKA-1555:
-

[~joestein] can you share your producer configuration? 
NotEnoughReplicasException is expected only when request.required.acks=-1. Can 
you validate that this is the case?



 provide strong consistency with reasonable availability
 ---

 Key: KAFKA-1555
 URL: https://issues.apache.org/jira/browse/KAFKA-1555
 Project: Kafka
  Issue Type: Improvement
  Components: controller
Affects Versions: 0.8.1.1
Reporter: Jiang Wu
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1555.0.patch, KAFKA-1555.1.patch, 
 KAFKA-1555.2.patch, KAFKA-1555.3.patch


 In a mission critical application, we expect a kafka cluster with 3 brokers 
 can satisfy two requirements:
 1. When 1 broker is down, no message loss or service blocking happens.
 2. In worse cases, such as when two brokers are down, service can be blocked, but 
 no message loss happens.
 We found that the current Kafka version (0.8.1.1) cannot achieve these requirements 
 due to three of its behaviors:
 1. when choosing a new leader from 2 followers in ISR, the one with less 
 messages may be chosen as the leader.
 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it 
 has less messages than the leader.
 3. The ISR can contain only 1 broker; therefore acknowledged messages may be 
 stored in only 1 broker.
 The following is an analytical proof. 
 We consider a cluster with 3 brokers and a topic with 3 replicas, and assume 
 that at the beginning, all 3 replicas, leader A, followers B and C, are in 
 sync, i.e., they have the same messages and are all in ISR.
 According to the value of request.required.acks (acks for short), there are 
 the following cases.
 1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement.
 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this 
 time, although C hasn't received m, C is still in ISR. If A is killed, C can 
 be elected as the new leader, and consumers will miss m.
 3. acks=-1. B and C restart and are removed from ISR. Producer sends a 
 message m to A, and receives an acknowledgement. Disk failure happens in A 
 before B and C replicate m. Message m is lost.
 In summary, any existing configuration cannot satisfy the requirements.
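The acks=2 case in the analysis above can be sketched as a toy model (purely illustrative; real replication is asynchronous and goes through follower fetches): the leader stops waiting after two acknowledgements, yet the lagging third replica remains eligible for leadership.

```java
import java.util.*;

public class AcksDemo {
    // Append msg to replicas until requiredAcks of them have it; the rest
    // still lag but (per behavior 2 above) remain in the ISR.
    static List<String> replicate(List<String> isr, Map<String, List<String>> logs,
                                  String msg, int requiredAcks) {
        int acks = 0;
        List<String> acked = new ArrayList<>();
        for (String replica : isr) {
            if (acks == requiredAcks) break; // stop once enough replicas ack
            logs.get(replica).add(msg);
            acked.add(replica);
            acks++;
        }
        return acked;
    }

    public static void main(String[] args) {
        Map<String, List<String>> logs = new HashMap<>();
        for (String r : List.of("A", "B", "C")) logs.put(r, new ArrayList<>());
        replicate(List.of("A", "B", "C"), logs, "m", 2); // acks=2: A and B only
        // A dies; C is still in the ISR and can become leader without m.
        System.out.println(logs.get("C").contains("m")); // false: m is lost
    }
}
```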





[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Chris Cope (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145006#comment-14145006
 ] 

Chris Cope commented on KAFKA-1490:
---

OK, so the wrapper.gradle was in the attached patch on this issue, just not in 
the GitHub commit. However, there may be another change that didn't make it 
into the commit?
{noformat}
ubuntu@ip-10-183-61-60:~/kafka$ gradle

FAILURE: Build failed with an exception.

* Where:
Script '/home/ubuntu/kafka/gradle/license.gradle' line: 2

* What went wrong:
A problem occurred evaluating script.
 Could not find method create() for arguments [downloadLicenses, class 
 nl.javadude.gradle.plugins.license.DownloadLicenses] on task set.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 3.303 secs
{noformat}

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contains lots of stuff in the gradle folder we do 
 not need





Review Request 25942: Patch for KAFKA-1013

2014-09-23 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25942/
---

Review request for kafka.


Bugs: KAFKA-1013
https://issues.apache.org/jira/browse/KAFKA-1013


Repository: kafka


Description
---

SRE Offset tool + Offset Client


Diffs
-

  config/consumer.properties 83847de30d10b6e78bb8de28e0bb925d7c0e6ca2 
  core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/SreOffsetTool.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/25942/diff/


Testing
---


Thanks,

Mayuresh Gharat



[jira] [Commented] (KAFKA-1013) Modify existing tools as per the changes in KAFKA-1000

2014-09-23 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145012#comment-14145012
 ] 

Mayuresh Gharat commented on KAFKA-1013:


Created reviewboard https://reviews.apache.org/r/25942/diff/
 against branch origin/trunk

 Modify existing tools as per the changes in KAFKA-1000
 --

 Key: KAFKA-1013
 URL: https://issues.apache.org/jira/browse/KAFKA-1013
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Tejas Patil
Assignee: Mayuresh Gharat
Priority: Minor
 Attachments: KAFKA-1013.patch


 Modify existing tools as per the changes in KAFKA-1000. AFAIK, the tools 
 below would be affected:
 - ConsumerOffsetChecker
 - ExportZkOffsets
 - ImportZkOffsets
 - UpdateOffsetsInZK





[jira] [Updated] (KAFKA-1013) Modify existing tools as per the changes in KAFKA-1000

2014-09-23 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-1013:
---
Attachment: KAFKA-1013.patch

 Modify existing tools as per the changes in KAFKA-1000
 --

 Key: KAFKA-1013
 URL: https://issues.apache.org/jira/browse/KAFKA-1013
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Tejas Patil
Assignee: Mayuresh Gharat
Priority: Minor
 Attachments: KAFKA-1013.patch


 Modify existing tools as per the changes in KAFKA-1000. AFAIK, the tools 
 below would be affected:
 - ConsumerOffsetChecker
 - ExportZkOffsets
 - ImportZkOffsets
 - UpdateOffsetsInZK





[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145016#comment-14145016
 ] 

Joe Stein commented on KAFKA-1490:
--

Can you clone the latest trunk now please? I did a fresh clone and it is working for me.

{code}

new-host:apache_kafka joestein$ git clone 
https://git-wip-us.apache.org/repos/asf/kafka.git trunk
Cloning into 'trunk'...
remote: Counting objects: 20685, done.
remote: Compressing objects: 100% (10682/10682), done.
Receiving objects: 100% (20685/20685), 14.96 MiB | 620 KiB/s, done.
remote: Total 20685 (delta 12343), reused 11450 (delta 7041)
Resolving deltas: 100% (12343/12343), done.
new-host:apache_kafka joestein$ cd trunk/
new-host:trunk joestein$ gradle
Building project 'core' with Scala version 2.10.1
:downloadWrapper

BUILD SUCCESSFUL

Total time: 11.1 secs
new-host:trunk joestein$ ./gradlew jar
Building project 'core' with Scala version 2.10.1
:clients:compileJava
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
:clients:processResources UP-TO-DATE
:clients:classes
:clients:jar
:contrib:compileJava UP-TO-DATE
:contrib:processResources UP-TO-DATE
:contrib:classes UP-TO-DATE
:contrib:jar
:core:compileJava UP-TO-DATE
:core:compileScala
/opt/apache_kafka/trunk/core/src/main/scala/kafka/admin/AdminUtils.scala:259: 
non-variable type argument String in type pattern 
scala.collection.Map[String,_] is unchecked since it is eliminated by erasure
case Some(map: Map[String, _]) =>
   ^
/opt/apache_kafka/trunk/core/src/main/scala/kafka/admin/AdminUtils.scala:262: 
non-variable type argument String in type pattern 
scala.collection.Map[String,String] is unchecked since it is eliminated by 
erasure
case Some(config: Map[String, String]) =>
  ^
/opt/apache_kafka/trunk/core/src/main/scala/kafka/server/KafkaServer.scala:142: 
a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
/opt/apache_kafka/trunk/core/src/main/scala/kafka/server/KafkaServer.scala:143: 
a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
/opt/apache_kafka/trunk/core/src/main/scala/kafka/utils/Utils.scala:81: a pure 
expression does nothing in statement position; you may be omitting necessary 
parentheses
daemonThread(name, runnable(fun))
^
/opt/apache_kafka/trunk/core/src/main/scala/kafka/network/SocketServer.scala:359:
 Visited SCOPE_EXIT before visiting corresponding SCOPE_ENTER. SI-6049
  maybeCloseOldestConnection
  ^
/opt/apache_kafka/trunk/core/src/main/scala/kafka/network/SocketServer.scala:379:
 Visited SCOPE_EXIT before visiting corresponding SCOPE_ENTER. SI-6049
  try {
  ^
there were 12 feature warning(s); re-run with -feature for details
8 warnings found
:core:processResources UP-TO-DATE
:core:classes
:core:copyDependantLibs
:core:jar
:examples:compileJava
:examples:processResources UP-TO-DATE
:examples:classes
:examples:jar
:contrib:hadoop-consumer:compileJava
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
:contrib:hadoop-consumer:processResources UP-TO-DATE
:contrib:hadoop-consumer:classes
:contrib:hadoop-consumer:jar
:contrib:hadoop-producer:compileJava
:contrib:hadoop-producer:processResources UP-TO-DATE
:contrib:hadoop-producer:classes
:contrib:hadoop-producer:jar

BUILD SUCCESSFUL

Total time: 2 mins 3.497 secs

{code}

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contains lots of stuff in the gradle folder we do 
 not need





Re: jenkins changes required

2014-09-23 Thread Steve Morin
Why did you add the steps to have to download gradle and not just use the 
wrapper embedded?

 On Sep 23, 2014, at 9:12, Joe Stein joe.st...@stealth.ly wrote:
 
 Hey, so it looks like the gradlew changes I just removed the jar is going
 to require something to be done on the jenkins side since gradle has to be
 installed and run gradle first to download the wrapper... then everything
 else works.
 
 I haven't updated jenkins @ apache not sure (probably have permission) how
 to-do that?
 
 Anyone else familiar with this and can either fix or point me in the right
 direction please.
 
 This is what folks will see if they don't do the two steps I added to the
 README (1) install gradle 2) gradle //default task is to download wrapper
 to bootstrap)
 
 [joe.stein] KAFKA-1490 remove gradlew initial setup output from source
 distribution patch by Ivan Lyutov reviewed by Joe Stein
 
 --
 Started by an SCM change
 Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
 https://builds.apache.org/job/Kafka-trunk/ws/
 git rev-parse --is-inside-work-tree
 Fetching changes from the remote Git repository
 git config remote.origin.url
 https://git-wip-us.apache.org/repos/asf/kafka.git
 Fetching upstream changes from
 https://git-wip-us.apache.org/repos/asf/kafka.git
 git --version
 git fetch --tags --progress
 https://git-wip-us.apache.org/repos/asf/kafka.git
 +refs/heads/*:refs/remotes/origin/*
 git rev-parse origin/trunk^{commit}
 Checking out Revision d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
 (origin/trunk)
 git config core.sparsecheckout
 git checkout -f d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
 git rev-list 6d7057566f0c3a872f625957aa086b993e76071f
 [Kafka-trunk] $ /bin/bash -xe /tmp/hudson229233895233879711.sh
 + ./gradlew -PscalaVersion=2.10.1 test
 Exception in thread main java.lang.NoClassDefFoundError:
 org/gradle/wrapper/GradleWrapperMain
 Caused by: java.lang.ClassNotFoundException: org.gradle.wrapper.
 GradleWrapperMain
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
 Could not find the main class: org.gradle.wrapper.GradleWrapperMain.
 Program will exit.
 Build step 'Execute shell' marked build as failure
 
 /***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
 /


Re: jenkins changes required

2014-09-23 Thread Joe Stein
details here
http://mail-archives.apache.org/mod_mbox/incubator-general/201406.mbox/%3CCADiKvVs%3DtKDbp3TWRnxds5dVepqcX4kWeYbj7xUx%2BZoDNM_Lyg%40mail.gmail.com%3E

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/

On Tue, Sep 23, 2014 at 1:04 PM, Steve Morin steve.mo...@gmail.com wrote:

 Why did you add the steps to have to download gradle and not just use the
 wrapper embedded?

  On Sep 23, 2014, at 9:12, Joe Stein joe.st...@stealth.ly wrote:
 
  Hey, so it looks like the gradlew changes I just removed the jar is going
  to require something to be done on the jenkins side since gradle has to
 be
  installed and run gradle first to download the wrapper... then everything
  else works.
 
  I haven't updated jenkins @ apache not sure (probably have permission)
 how
  to-do that?
 
  Anyone else familiar with this and can either fix or point me in the
 right
  direction please.
 
  This is what folks will see if they don't do the two steps I added to the
  README (1) install gradle 2) gradle //default task is to download wrapper
  to bootstrap)
 
  [joe.stein] KAFKA-1490 remove gradlew initial setup output from source
  distribution patch by Ivan Lyutov reviewed by Joe Stein
 
  --
  Started by an SCM change
  Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
  https://builds.apache.org/job/Kafka-trunk/ws/
  git rev-parse --is-inside-work-tree
  Fetching changes from the remote Git repository
  git config remote.origin.url
  https://git-wip-us.apache.org/repos/asf/kafka.git
  Fetching upstream changes from
  https://git-wip-us.apache.org/repos/asf/kafka.git
  git --version
  git fetch --tags --progress
  https://git-wip-us.apache.org/repos/asf/kafka.git
  +refs/heads/*:refs/remotes/origin/*
  git rev-parse origin/trunk^{commit}
  Checking out Revision d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
  (origin/trunk)
  git config core.sparsecheckout
  git checkout -f d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
  git rev-list 6d7057566f0c3a872f625957aa086b993e76071f
  [Kafka-trunk] $ /bin/bash -xe /tmp/hudson229233895233879711.sh
  + ./gradlew -PscalaVersion=2.10.1 test
  Exception in thread main java.lang.NoClassDefFoundError:
  org/gradle/wrapper/GradleWrapperMain
  Caused by: java.lang.ClassNotFoundException: org.gradle.wrapper.
  GradleWrapperMain
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
  Could not find the main class: org.gradle.wrapper.GradleWrapperMain.
  Program will exit.
  Build step 'Execute shell' marked build as failure
 
  /***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
  /



Build failed in Jenkins: Kafka-trunk #273

2014-09-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/273/changes

Changes:

[joe.stein] KAFKA-1490 remove gradlew initial setup output from source 
distribution patch by Ivan Lyutov reviewed by Joe Stein

--
Started by an SCM change
Building remotely on H11 (Ubuntu ubuntu) in workspace 
https://builds.apache.org/job/Kafka-trunk/ws/
  git rev-parse --is-inside-work-tree
Fetching changes from the remote Git repository
  git config remote.origin.url 
  https://git-wip-us.apache.org/repos/asf/kafka.git
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
  git --version
  git fetch --tags --progress 
  https://git-wip-us.apache.org/repos/asf/kafka.git 
  +refs/heads/*:refs/remotes/origin/*
  git rev-parse origin/trunk^{commit}
Checking out Revision db0b0b8509ce904a0eab021a0453a80cecf3efb7 (origin/trunk)
  git config core.sparsecheckout
  git checkout -f db0b0b8509ce904a0eab021a0453a80cecf3efb7
  git rev-list d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson7293020236838749194.sh
+ ./gradlew -PscalaVersion=2.10.1 test
Exception in thread main java.lang.NoClassDefFoundError: 
org/gradle/wrapper/GradleWrapperMain
Caused by: java.lang.ClassNotFoundException: 
org.gradle.wrapper.GradleWrapperMain
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.gradle.wrapper.GradleWrapperMain.  Program 
will exit.
Build step 'Execute shell' marked build as failure


Review Request 25944: Patch for KAFKA-1013

2014-09-23 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25944/
---

Review request for kafka.


Bugs: KAFKA-1013
https://issues.apache.org/jira/browse/KAFKA-1013


Repository: kafka


Description
---

OffsetClient Tool API. ImportZkOffsets and ExportZkOffsets replaced by 
ImportOffsets and ExportOffsets


Diffs
-

  config/consumer.properties 83847de30d10b6e78bb8de28e0bb925d7c0e6ca2 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
fbc680fde21b02f11285a4f4b442987356abd17b 
  core/src/main/scala/kafka/tools/ConfigConstants.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/ExportOffsets.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/ImportOffsets.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/SreOffsetTool.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/25944/diff/


Testing
---


Thanks,

Mayuresh Gharat



Re: Review Request 25944: Patch for KAFKA-1013

2014-09-23 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25944/#review54286
---



config/consumer.properties
https://reviews.apache.org/r/25944/#comment94288

Please discard this



core/src/main/scala/kafka/tools/SreOffsetTool.scala
https://reviews.apache.org/r/25944/#comment94289

Please discard this!!


- Mayuresh Gharat


On Sept. 23, 2014, 5:18 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/25944/
 ---
 
 (Updated Sept. 23, 2014, 5:18 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1013
 https://issues.apache.org/jira/browse/KAFKA-1013
 
 
 Repository: kafka
 
 
 Description
 ---
 
 OffsetClient Tool API. ImportZkOffsets and ExportZkOffsets replaced by 
 ImportOffsets and ExportOffsets
 
 
 Diffs
 -
 
   config/consumer.properties 83847de30d10b6e78bb8de28e0bb925d7c0e6ca2 
   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
 fbc680fde21b02f11285a4f4b442987356abd17b 
   core/src/main/scala/kafka/tools/ConfigConstants.scala PRE-CREATION 
   core/src/main/scala/kafka/tools/ExportOffsets.scala PRE-CREATION 
   core/src/main/scala/kafka/tools/ImportOffsets.scala PRE-CREATION 
   core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 
   core/src/main/scala/kafka/tools/SreOffsetTool.scala PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/25944/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Chris Cope (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145072#comment-14145072
 ] 

Chris Cope commented on KAFKA-1490:
---

I'm still getting that DownloadLicenses error. By default it's building with 
Scala 2.9.1, but trying Scala 2.10.1 like you also gets the same error. Do you 
mind listing your dependency versions? There may be something out of date on my 
end.

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contains lots of stuff in the gradle folder we do 
 not need





[jira] [Commented] (KAFKA-1013) Modify existing tools as per the changes in KAFKA-1000

2014-09-23 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145060#comment-14145060
 ] 

Mayuresh Gharat commented on KAFKA-1013:


Created reviewboard https://reviews.apache.org/r/25944/diff/
 against branch origin/trunk

 Modify existing tools as per the changes in KAFKA-1000
 --

 Key: KAFKA-1013
 URL: https://issues.apache.org/jira/browse/KAFKA-1013
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Tejas Patil
Assignee: Mayuresh Gharat
Priority: Minor
 Attachments: KAFKA-1013.patch, KAFKA-1013.patch


 Modify existing tools as per the changes in KAFKA-1000. AFAIK, the tools 
 below would be affected:
 - ConsumerOffsetChecker
 - ExportZkOffsets
 - ImportZkOffsets
 - UpdateOffsetsInZK





[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-09-23 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145077#comment-14145077
 ] 

Gwen Shapira commented on KAFKA-1477:
-

Quick question regarding the patch:
In the wiki we mention using a separate port for SSL. From what I've seen this 
patch doesn't add an SSL port, it simply allows making the existing port 
secure. Is that correct?



 add authentication layer and initial JKS x509 implementation for brokers, 
 producers and consumer for network communication
 --

 Key: KAFKA-1477
 URL: https://issues.apache.org/jira/browse/KAFKA-1477
 Project: Kafka
  Issue Type: New Feature
Reporter: Joe Stein
Assignee: Ivan Lyutov
 Fix For: 0.9.0

 Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
 KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
 KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch








[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-09-23 Thread Ivan Lyutov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145087#comment-14145087
 ] 

Ivan Lyutov commented on KAFKA-1477:


Gwen Shapira, this patch is for securing the existing port and nothing more. Of 
course, we plan to add support for simultaneous usage of secure and non-secure 
ports. 

 add authentication layer and initial JKS x509 implementation for brokers, 
 producers and consumer for network communication
 --

 Key: KAFKA-1477
 URL: https://issues.apache.org/jira/browse/KAFKA-1477
 Project: Kafka
  Issue Type: New Feature
Reporter: Joe Stein
Assignee: Ivan Lyutov
 Fix For: 0.9.0

 Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
 KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
 KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch








[jira] [Comment Edited] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-09-23 Thread Ivan Lyutov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145087#comment-14145087
 ] 

Ivan Lyutov edited comment on KAFKA-1477 at 9/23/14 5:33 PM:
-

Gwen Shapira, this patch is for securing the existing port and nothing more. Of 
course, we plan to add support for simultaneous usage of secure and non-secure 
ports in the future. 


was (Author: edgefox):
Gwen Shapira, this patch is for securing existing port and nothing more. Of 
course, we plan to add support for simultaneous usage of secure and non-secure 
ports. 

 add authentication layer and initial JKS x509 implementation for brokers, 
 producers and consumer for network communication
 --

 Key: KAFKA-1477
 URL: https://issues.apache.org/jira/browse/KAFKA-1477
 Project: Kafka
  Issue Type: New Feature
Reporter: Joe Stein
Assignee: Ivan Lyutov
 Fix For: 0.9.0

 Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
 KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
 KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch








[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-09-23 Thread Rajasekar Elango (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145089#comment-14145089
 ] 

Rajasekar Elango commented on KAFKA-1477:
-

This is great [~edgefox]. I did a quick test and was able to get it working in 
both secure and non-secure mode. But I noticed that we originally used the 
format broker:port:secureflag to specify the broker list for the producer; now 
it looks like a new secure property is added to the producer config and 
specified separately. Do you have the usage documented anywhere? I also noticed 
a change to the security.config.file option in the console producer and left a 
comment in the review.

 add authentication layer and initial JKS x509 implementation for brokers, 
 producers and consumer for network communication
 --

 Key: KAFKA-1477
 URL: https://issues.apache.org/jira/browse/KAFKA-1477
 Project: Kafka
  Issue Type: New Feature
Reporter: Joe Stein
Assignee: Ivan Lyutov
 Fix For: 0.9.0

 Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
 KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
 KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch








[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145093#comment-14145093
 ] 

Joe Stein commented on KAFKA-1490:
--

I think your trunk is out of date; we bumped the Scala version back in August

{code}

4d075971 (Ivan Lyutov 2014-08-10 21:20:30 -0700 18) scalaVersion=2.10.1

{code}

here are my dependencies for core from a fresh clone of trunk

{code}

new-host:trunk joestein$ ./gradlew core:dependencies
Building project 'core' with Scala version 2.10.1
:core:dependencies


Project :core


archives - Configuration for archive artifacts.
No dependencies

compile - Compile classpath for source set 'main'.
+--- project :clients
|    +--- org.slf4j:slf4j-api:1.7.6
|    +--- org.xerial.snappy:snappy-java:1.1.1.3
|    \--- net.jpountz.lz4:lz4:1.2.0
+--- org.scala-lang:scala-library:2.10.1
+--- org.apache.zookeeper:zookeeper:3.4.6
|    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    +--- org.slf4j:slf4j-log4j12:1.6.1
|    |    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    |    \--- log4j:log4j:1.2.16
|    +--- log4j:log4j:1.2.16
|    \--- io.netty:netty:3.7.0.Final
+--- com.101tec:zkclient:0.3
|    +--- org.apache.zookeeper:zookeeper:3.3.1 -> 3.4.6 (*)
|    \--- log4j:log4j:1.2.14 -> 1.2.16
+--- com.yammer.metrics:metrics-core:2.2.0
|    \--- org.slf4j:slf4j-api:1.7.2 -> 1.7.6
\--- net.sf.jopt-simple:jopt-simple:3.2

default - Configuration for default artifacts.
+--- project :clients
|    +--- org.slf4j:slf4j-api:1.7.6
|    +--- org.xerial.snappy:snappy-java:1.1.1.3
|    \--- net.jpountz.lz4:lz4:1.2.0
+--- org.scala-lang:scala-library:2.10.1
+--- org.apache.zookeeper:zookeeper:3.4.6
|    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    +--- org.slf4j:slf4j-log4j12:1.6.1
|    |    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    |    \--- log4j:log4j:1.2.16
|    +--- log4j:log4j:1.2.16
|    \--- io.netty:netty:3.7.0.Final
+--- com.101tec:zkclient:0.3
|    +--- org.apache.zookeeper:zookeeper:3.3.1 -> 3.4.6 (*)
|    \--- log4j:log4j:1.2.14 -> 1.2.16
+--- com.yammer.metrics:metrics-core:2.2.0
|    \--- org.slf4j:slf4j-api:1.7.2 -> 1.7.6
\--- net.sf.jopt-simple:jopt-simple:3.2

runtime - Runtime classpath for source set 'main'.
+--- project :clients
|    +--- org.slf4j:slf4j-api:1.7.6
|    +--- org.xerial.snappy:snappy-java:1.1.1.3
|    \--- net.jpountz.lz4:lz4:1.2.0
+--- org.scala-lang:scala-library:2.10.1
+--- org.apache.zookeeper:zookeeper:3.4.6
|    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    +--- org.slf4j:slf4j-log4j12:1.6.1
|    |    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    |    \--- log4j:log4j:1.2.16
|    +--- log4j:log4j:1.2.16
|    \--- io.netty:netty:3.7.0.Final
+--- com.101tec:zkclient:0.3
|    +--- org.apache.zookeeper:zookeeper:3.3.1 -> 3.4.6 (*)
|    \--- log4j:log4j:1.2.14 -> 1.2.16
+--- com.yammer.metrics:metrics-core:2.2.0
|    \--- org.slf4j:slf4j-api:1.7.2 -> 1.7.6
\--- net.sf.jopt-simple:jopt-simple:3.2

signatures
No dependencies

testCompile - Compile classpath for source set 'test'.
+--- project :clients
|    +--- org.slf4j:slf4j-api:1.7.6
|    +--- org.xerial.snappy:snappy-java:1.1.1.3
|    \--- net.jpountz.lz4:lz4:1.2.0
+--- org.scala-lang:scala-library:2.10.1
+--- org.apache.zookeeper:zookeeper:3.4.6
|    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    +--- org.slf4j:slf4j-log4j12:1.6.1
|    |    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    |    \--- log4j:log4j:1.2.16
|    +--- log4j:log4j:1.2.16
|    \--- io.netty:netty:3.7.0.Final
+--- com.101tec:zkclient:0.3
|    +--- org.apache.zookeeper:zookeeper:3.3.1 -> 3.4.6 (*)
|    \--- log4j:log4j:1.2.14 -> 1.2.16
+--- com.yammer.metrics:metrics-core:2.2.0
|    \--- org.slf4j:slf4j-api:1.7.2 -> 1.7.6
+--- net.sf.jopt-simple:jopt-simple:3.2
+--- junit:junit:4.1
+--- org.easymock:easymock:3.0
|    +--- cglib:cglib-nodep:2.2
|    \--- org.objenesis:objenesis:1.2
+--- org.objenesis:objenesis:1.2
\--- org.scalatest:scalatest_2.10:1.9.1
     +--- org.scala-lang:scala-library:2.10.0 -> 2.10.1
     +--- org.scala-lang:scala-actors:2.10.0
     |    \--- org.scala-lang:scala-library:2.10.0 -> 2.10.1
     \--- org.scala-lang:scala-reflect:2.10.0
          \--- org.scala-lang:scala-library:2.10.0 -> 2.10.1

testRuntime - Runtime classpath for source set 'test'.
+--- project :clients
|    +--- org.slf4j:slf4j-api:1.7.6
|    +--- org.xerial.snappy:snappy-java:1.1.1.3
|    \--- net.jpountz.lz4:lz4:1.2.0
+--- org.scala-lang:scala-library:2.10.1
+--- org.apache.zookeeper:zookeeper:3.4.6
|    +--- org.slf4j:slf4j-api:1.6.1 -> 1.7.6
|    +--- org.slf4j:slf4j-log4j12:1.6.1 -> 1.7.6
|    |    +--- org.slf4j:slf4j-api:1.7.6
|    |    \--- log4j:log4j:1.2.17
|    +--- log4j:log4j:1.2.16 -> 1.2.17
|    \--- io.netty:netty:3.7.0.Final
+--- com.101tec:zkclient:0.3
|    +--- org.apache.zookeeper:zookeeper:3.3.1 -> 3.4.6 (*)
|    \--- log4j:log4j:1.2.14 -> 1.2.17

[jira] [Created] (KAFKA-1647) Replication offset checkpoints (high water marks) can be lost on hard kills and restarts

2014-09-23 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-1647:
-

 Summary: Replication offset checkpoints (high water marks) can be 
lost on hard kills and restarts
 Key: KAFKA-1647
 URL: https://issues.apache.org/jira/browse/KAFKA-1647
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy


We ran into this scenario recently in a production environment. This can happen 
when enough brokers in a cluster are taken down. i.e., a rolling bounce done 
properly should not cause this issue. It can occur if all replicas for any 
partition are taken down.

Here is a sample scenario:

* Cluster of three brokers: b0, b1, b2
* Two partitions (of some topic) with replication factor two: p0, p1
* Initial state:
** p0: leader = b0, ISR = {b0, b1}
** p1: leader = b1, ISR = {b0, b1}
* Do a parallel hard-kill of all brokers
* Bring up b2, so it is the new controller
* b2 initializes its controller context and populates its leader/ISR cache 
(i.e., controllerContext.partitionLeadershipInfo) from zookeeper. The last 
known leaders are b0 (for p0) and b1 (for p1)
* Bring up b1
* The controller's onBrokerStartup procedure initiates a replica state change 
for all replicas on b1 to become online. As part of this replica state change 
it gets the last known leader and ISR and sends a LeaderAndIsrRequest to b1 
(for p0 and p1). This LeaderAndIsr request contains: {p0: leader=b0; p1: 
leader=b1; leaders={b1}}. b0 is indicated as the leader of p0 but it is not 
included in the leaders field because b0 is down.
* On receiving the LeaderAndIsrRequest, b1's replica manager will successfully 
make b1 the leader for p1 (and create the local replica object corresponding to 
p1). It will however abort the become-follower transition for p0 because the 
designated leader b0 is offline. So it will not create the local replica object 
for p0.
* It will then start the high water mark checkpoint thread. Since only p1 has a 
local replica object, only p1's high water mark will be checkpointed to disk. 
p0's previously written checkpoint, if any, will be lost.

So in summary it seems we should always create the local replica object even if 
the online transition does not happen.
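
The checkpoint-overwrite behavior described above can be sketched roughly as 
follows. This is a minimal simulation with made-up partition names and values, 
not Kafka's actual ReplicaManager or OffsetCheckpoint code: the point is only 
that the flush rewrites the whole file from the in-memory replica map, so any 
partition without a local replica object silently drops out of the file.

```scala
import scala.collection.mutable

// Simulated on-disk checkpoint file: partition -> high water mark.
// The broker rewrites this file wholesale on every flush.
var checkpointFile: Map[String, Long] = Map("p0" -> 100L, "p1" -> 200L)

// After the aborted become-follower transition, only p1 got a local
// replica object; p0 is absent from the in-memory map.
val localReplicaHWMs = mutable.Map("p1" -> 200L)

// The checkpoint thread persists only the partitions it holds replica
// objects for, clobbering entries it no longer knows about.
def flushCheckpoint(): Unit = {
  checkpointFile = localReplicaHWMs.toMap
}

flushCheckpoint()
// p0's previously written high water mark is now gone from disk.
assert(!checkpointFile.contains("p0"))
```

Always creating the local replica object, as suggested above, would keep p0 in 
the in-memory map and so preserve its entry across flushes.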

Possible symptoms of the above bug could be one or more of the following (we 
saw 2 and 3):
# Data loss: yes, on a hard kill data loss is expected, but this can actually 
cause loss of nearly all data if the broker becomes follower, truncates, and 
soon after happens to become leader.
# High IO on brokers that lose their high water mark and then subsequently (on 
a successful become-follower transition) truncate their log to zero and start 
catching up from the beginning.
# If the offsets topic is affected, then offsets can get reset. This is because 
during an offset load we don't read past the high water mark. So if a high 
water mark is missing then we don't load anything (even if the offsets are 
there in the log).






[jira] [Commented] (KAFKA-1555) provide strong consistency with reasonable availability

2014-09-23 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145096#comment-14145096
 ] 

Joe Stein commented on KAFKA-1555:
--

[~gwenshap] yup, in that case the issue was between the chair and the keyboard 
(me)

{code}

root@precise64:/opt/apache/kafka# bin/kafka-console-producer.sh --broker-list 
localhost:9092 --topic testNew --sync --request-required-acks -1
A
[2014-09-23 17:36:37,127] WARN Produce request with correlation id 2 failed due 
to [testNew,1]: kafka.common.NotEnoughReplicasException 
(kafka.producer.async.DefaultEventHandler)
[2014-09-23 17:36:37,248] WARN Produce request with correlation id 5 failed due 
to [testNew,2]: kafka.common.NotEnoughReplicasException 
(kafka.producer.async.DefaultEventHandler)
[2014-09-23 17:36:37,364] WARN Produce request with correlation id 8 failed due 
to [testNew,2]: kafka.common.NotEnoughReplicasException 
(kafka.producer.async.DefaultEventHandler)
[2014-09-23 17:36:37,480] WARN Produce request with correlation id 11 failed 
due to [testNew,1]: kafka.common.NotEnoughReplicasException 
(kafka.producer.async.DefaultEventHandler)
[2014-09-23 17:36:37,591] ERROR Failed to send requests for topics testNew with 
correlation ids in [0,12] (kafka.producer.async.DefaultEventHandler)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 
tries.
at 
kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:76)
at kafka.producer.OldProducer.send(BaseProducer.scala:62)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:95)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)

{code}

awesome!
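
For reference, the settings exercised in the run above can be captured as plain 
properties; min.insync.replicas is the topic/broker-level knob this ticket 
introduces (a sketch of the configuration only, not a complete client):

```java
import java.util.Properties;

public class AcksConfigSketch {
    public static void main(String[] args) {
        // Old (0.8.x) sync-producer settings matching the console-producer
        // invocation above.
        Properties producer = new Properties();
        producer.setProperty("metadata.broker.list", "localhost:9092");
        producer.setProperty("producer.type", "sync");
        producer.setProperty("request.required.acks", "-1"); // wait for full ISR
        producer.setProperty("message.send.max.retries", "3");

        // Topic/broker side: refuse acks=-1 writes when the ISR shrinks below
        // this size, which surfaces as NotEnoughReplicasException above.
        Properties topic = new Properties();
        topic.setProperty("min.insync.replicas", "2");

        System.out.println(producer.getProperty("request.required.acks"));
        System.out.println(topic.getProperty("min.insync.replicas"));
    }
}
```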

 provide strong consistency with reasonable availability
 ---

 Key: KAFKA-1555
 URL: https://issues.apache.org/jira/browse/KAFKA-1555
 Project: Kafka
  Issue Type: Improvement
  Components: controller
Affects Versions: 0.8.1.1
Reporter: Jiang Wu
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1555.0.patch, KAFKA-1555.1.patch, 
 KAFKA-1555.2.patch, KAFKA-1555.3.patch


 In a mission-critical application, we expect a Kafka cluster with 3 brokers 
 to satisfy two requirements:
 1. When 1 broker is down, no message loss or service blocking happens.
 2. In worse cases, such as when two brokers are down, service can be blocked, 
 but no message loss happens.
 We found that the current Kafka version (0.8.1.1) cannot achieve these 
 requirements due to three behaviors:
 1. when choosing a new leader from 2 followers in ISR, the one with fewer 
 messages may be chosen as the leader.
 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it 
 has fewer messages than the leader.
 3. the ISR can contain only 1 broker, therefore acknowledged messages may be 
 stored in only 1 broker.
 The following is an analytical proof. 
 We consider a cluster with 3 brokers and a topic with 3 replicas, and assume 
 that at the beginning, all 3 replicas, leader A, followers B and C, are in 
 sync, i.e., they have the same messages and are all in ISR.
 According to the value of request.required.acks (acks for short), there are 
 the following cases.
 1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement.
 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this 
 time, although C hasn't received m, C is still in ISR. If A is killed, C can 
 be elected as the new leader, and consumers will miss m.
 3. acks=-1. B and C restart and are removed from ISR. Producer sends a 
 message m to A, and receives an acknowledgement. Disk failure happens in A 
 before B and C replicate m. Message m is lost.
 In summary, any existing configuration cannot satisfy the requirements.
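
Case 2 of the proof can be replayed as a toy simulation (plain Java, not Kafka 
code) showing how an acknowledged acks=2 write can vanish:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Toy replay of the acks=2 scenario with replicas A (leader), B, C. */
public class AcksTwoScenario {
    public static void main(String[] args) {
        Map<String, List<String>> logs = new HashMap<>();
        for (String r : List.of("A", "B", "C")) logs.put(r, new ArrayList<>());

        // Producer sends m; A and B have it -> 2 acks, the send succeeds.
        logs.get("A").add("m");
        logs.get("B").add("m");
        // C lags but is still in the ISR (lag thresholds have not kicked in).
        List<String> isr = new ArrayList<>(List.of("A", "B", "C"));

        // A is killed; any remaining ISR member may be elected, including C.
        isr.remove("A");
        String newLeader = "C";
        // C never replicated m, so consumers reading from the new leader miss it.
        System.out.println(logs.get(newLeader).contains("m")); // false: m is lost
    }
}
```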





[jira] [Comment Edited] (KAFKA-1555) provide strong consistency with reasonable availability

2014-09-23 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145096#comment-14145096
 ] 

Joe Stein edited comment on KAFKA-1555 at 9/23/14 5:39 PM:
---

[~gwenshap] yup, in that case the issue was between the chair and the keyboard 
(me)

{code}

root@precise64:/opt/apache/kafka# bin/kafka-console-producer.sh --broker-list 
localhost:9092 --topic testNew --sync --request-required-acks -1
A
[2014-09-23 17:36:37,127] WARN Produce request with correlation id 2 failed due 
to [testNew,1]: kafka.common.NotEnoughReplicasException 
(kafka.producer.async.DefaultEventHandler)
[2014-09-23 17:36:37,248] WARN Produce request with correlation id 5 failed due 
to [testNew,2]: kafka.common.NotEnoughReplicasException 
(kafka.producer.async.DefaultEventHandler)
[2014-09-23 17:36:37,364] WARN Produce request with correlation id 8 failed due 
to [testNew,2]: kafka.common.NotEnoughReplicasException 
(kafka.producer.async.DefaultEventHandler)
[2014-09-23 17:36:37,480] WARN Produce request with correlation id 11 failed 
due to [testNew,1]: kafka.common.NotEnoughReplicasException 
(kafka.producer.async.DefaultEventHandler)
[2014-09-23 17:36:37,591] ERROR Failed to send requests for topics testNew with 
correlation ids in [0,12] (kafka.producer.async.DefaultEventHandler)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 
tries.
at 
kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:76)
at kafka.producer.OldProducer.send(BaseProducer.scala:62)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:95)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)

{code}

It is working, awesome!



 provide strong consistency with reasonable availability
 ---

 Key: KAFKA-1555
 URL: https://issues.apache.org/jira/browse/KAFKA-1555
 Project: Kafka
  Issue Type: Improvement
  Components: controller
Affects Versions: 0.8.1.1
Reporter: Jiang Wu
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1555.0.patch, KAFKA-1555.1.patch, 
 KAFKA-1555.2.patch, KAFKA-1555.3.patch


 In a mission-critical application, we expect a Kafka cluster with 3 brokers 
 to satisfy two requirements:
 1. When 1 broker is down, no message loss or service blocking happens.
 2. In worse cases, such as when two brokers are down, service can be blocked, 
 but no message loss happens.
 We found that the current Kafka version (0.8.1.1) cannot achieve these 
 requirements due to three behaviors:
 1. when choosing a new leader from 2 followers in ISR, the one with fewer 
 messages may be chosen as the leader.
 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it 
 has fewer messages than the leader.
 3. the ISR can contain only 1 broker, therefore acknowledged messages may be 
 stored in only 1 broker.
 The following is an analytical proof. 
 We consider a cluster with 3 brokers and a topic with 3 replicas, and assume 
 that at the beginning, all 3 replicas, leader A, followers B and C, are in 
 sync, i.e., they have the same messages and are all in ISR.

[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-09-23 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145090#comment-14145090
 ] 

Gwen Shapira commented on KAFKA-1477:
-

Another question: 
The change mentions that there will be a config/server.security.properties file.
I did not see it in the patch, so I'm wondering what goes in there.

One of the options I'd like to see is whether client certificates are required 
or not (i.e. choosing whether we want to authenticate or just encrypt).
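
For what it's worth, the authenticate-vs-encrypt choice maps onto the standard 
JSSE toggle; a minimal sketch using the stock javax.net.ssl API (illustrative 
only, not the patch's code):

```java
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;

public class ClientAuthDemo {
    public static void main(String[] args) throws Exception {
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        // Bind to an ephemeral port; we only inspect the auth mode here.
        try (SSLServerSocket server =
                     (SSLServerSocket) factory.createServerSocket(0)) {
            server.setNeedClientAuth(true);   // mutual TLS: client cert required
            System.out.println(server.getNeedClientAuth());
            server.setNeedClientAuth(false);  // encryption only, no client auth
            System.out.println(server.getNeedClientAuth());
        }
    }
}
```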



 add authentication layer and initial JKS x509 implementation for brokers, 
 producers and consumer for network communication
 --

 Key: KAFKA-1477
 URL: https://issues.apache.org/jira/browse/KAFKA-1477
 Project: Kafka
  Issue Type: New Feature
Reporter: Joe Stein
Assignee: Ivan Lyutov
 Fix For: 0.9.0

 Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
 KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
 KAFKA-1477_2014-06-03_13:46:17.patch, KAFKA-1477_trunk.patch








Re: Review Request 25944: Patch for KAFKA-1013

2014-09-23 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25944/
---

(Updated Sept. 23, 2014, 5:45 p.m.)


Review request for kafka.


Bugs: KAFKA-1013
https://issues.apache.org/jira/browse/KAFKA-1013


Repository: kafka


Description (updated)
---

OffsetClient Tool API. ImportZkOffsets and ExportZkOffsets are replaced by 
ImportOffsets and ExportOffsets


Modified the comments in the headers


Diffs (updated)
-

  config/consumer.properties 83847de30d10b6e78bb8de28e0bb925d7c0e6ca2 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
fbc680fde21b02f11285a4f4b442987356abd17b 
  core/src/main/scala/kafka/tools/ConfigConstants.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/ExportOffsets.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/ImportOffsets.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/25944/diff/


Testing
---


Thanks,

Mayuresh Gharat



[jira] [Updated] (KAFKA-1013) Modify existing tools as per the changes in KAFKA-1000

2014-09-23 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-1013:
---
Attachment: KAFKA-1013_2014-09-23_10:45:59.patch

 Modify existing tools as per the changes in KAFKA-1000
 --

 Key: KAFKA-1013
 URL: https://issues.apache.org/jira/browse/KAFKA-1013
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Tejas Patil
Assignee: Mayuresh Gharat
Priority: Minor
 Attachments: KAFKA-1013.patch, KAFKA-1013.patch, 
 KAFKA-1013_2014-09-23_10:45:59.patch


 Modify existing tools as per the changes in KAFKA-1000. AFAIK, the tools 
 below would be affected:
 - ConsumerOffsetChecker
 - ExportZkOffsets
 - ImportZkOffsets
 - UpdateOffsetsInZK





[jira] [Commented] (KAFKA-1013) Modify existing tools as per the changes in KAFKA-1000

2014-09-23 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145110#comment-14145110
 ] 

Mayuresh Gharat commented on KAFKA-1013:


Updated reviewboard https://reviews.apache.org/r/25944/diff/
 against branch origin/trunk

 Modify existing tools as per the changes in KAFKA-1000
 --

 Key: KAFKA-1013
 URL: https://issues.apache.org/jira/browse/KAFKA-1013
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Tejas Patil
Assignee: Mayuresh Gharat
Priority: Minor
 Attachments: KAFKA-1013.patch, KAFKA-1013.patch, 
 KAFKA-1013_2014-09-23_10:45:59.patch


 Modify existing tools as per the changes in KAFKA-1000. AFAIK, the tools 
 below would be affected:
 - ConsumerOffsetChecker
 - ExportZkOffsets
 - ImportZkOffsets
 - UpdateOffsetsInZK





Re: Review Request 25944: Patch for KAFKA-1013

2014-09-23 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25944/
---

(Updated Sept. 23, 2014, 5:48 p.m.)


Review request for kafka.


Bugs: KAFKA-1013
https://issues.apache.org/jira/browse/KAFKA-1013


Repository: kafka


Description (updated)
---

OffsetClient Tool API. ImportZkOffsets and ExportZkOffsets are replaced by 
ImportOffsets and ExportOffsets


Modified the comments in the headers


Corrected a value


Diffs (updated)
-

  config/consumer.properties 83847de30d10b6e78bb8de28e0bb925d7c0e6ca2 
  core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
fbc680fde21b02f11285a4f4b442987356abd17b 
  core/src/main/scala/kafka/tools/ConfigConstants.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/ExportOffsets.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/ImportOffsets.scala PRE-CREATION 
  core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/25944/diff/


Testing
---


Thanks,

Mayuresh Gharat



[jira] [Updated] (KAFKA-1013) Modify existing tools as per the changes in KAFKA-1000

2014-09-23 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-1013:
---
Attachment: KAFKA-1013_2014-09-23_10:48:07.patch

 Modify existing tools as per the changes in KAFKA-1000
 --

 Key: KAFKA-1013
 URL: https://issues.apache.org/jira/browse/KAFKA-1013
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Tejas Patil
Assignee: Mayuresh Gharat
Priority: Minor
 Attachments: KAFKA-1013.patch, KAFKA-1013.patch, 
 KAFKA-1013_2014-09-23_10:45:59.patch, KAFKA-1013_2014-09-23_10:48:07.patch


 Modify existing tools as per the changes in KAFKA-1000. AFAIK, the tools 
 below would be affected:
 - ConsumerOffsetChecker
 - ExportZkOffsets
 - ImportZkOffsets
 - UpdateOffsetsInZK





Re: Review Request 25944: Patch for KAFKA-1013

2014-09-23 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25944/#review54288
---



config/consumer.properties
https://reviews.apache.org/r/25944/#comment94290

Please discard this file for review


- Mayuresh Gharat


On Sept. 23, 2014, 5:48 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/25944/
 ---
 
 (Updated Sept. 23, 2014, 5:48 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1013
 https://issues.apache.org/jira/browse/KAFKA-1013
 
 
 Repository: kafka
 
 
 Description
 ---
 
 OffsetClient Tool API. ImportZkOffsets and ExportZkOffsets are replaced by 
 ImportOffsets and ExportOffsets
 
 
 Modified the comments in the headers
 
 
 Corrected a value
 
 
 Diffs
 -
 
   config/consumer.properties 83847de30d10b6e78bb8de28e0bb925d7c0e6ca2 
   core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala 
 fbc680fde21b02f11285a4f4b442987356abd17b 
   core/src/main/scala/kafka/tools/ConfigConstants.scala PRE-CREATION 
   core/src/main/scala/kafka/tools/ExportOffsets.scala PRE-CREATION 
   core/src/main/scala/kafka/tools/ImportOffsets.scala PRE-CREATION 
   core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/25944/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




[jira] [Commented] (KAFKA-1013) Modify existing tools as per the changes in KAFKA-1000

2014-09-23 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145116#comment-14145116
 ] 

Mayuresh Gharat commented on KAFKA-1013:


Updated reviewboard https://reviews.apache.org/r/25944/diff/
 against branch origin/trunk

 Modify existing tools as per the changes in KAFKA-1000
 --

 Key: KAFKA-1013
 URL: https://issues.apache.org/jira/browse/KAFKA-1013
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Tejas Patil
Assignee: Mayuresh Gharat
Priority: Minor
 Attachments: KAFKA-1013.patch, KAFKA-1013.patch, 
 KAFKA-1013_2014-09-23_10:45:59.patch, KAFKA-1013_2014-09-23_10:48:07.patch


 Modify existing tools as per the changes in KAFKA-1000. AFAIK, the tools 
 below would be affected:
 - ConsumerOffsetChecker
 - ExportZkOffsets
 - ImportZkOffsets
 - UpdateOffsetsInZK





[jira] [Commented] (KAFKA-1611) Improve system test configuration

2014-09-23 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145137#comment-14145137
 ] 

Gwen Shapira commented on KAFKA-1611:
-

Updated the wiki: mentioned JAVA_HOME in the quickstart guide and explained the 
full behavior in the "How does it work" section.

 Improve system test configuration
 -

 Key: KAFKA-1611
 URL: https://issues.apache.org/jira/browse/KAFKA-1611
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.9.0

 Attachments: KAFKA-1611.0.patch


 I'd like to make the config a bit more out of the box for the common case 
 of a local cluster. This will include:
 1. Fix cluster_config.json of the migration test suite; it has a hardcoded 
 path that prevents it from working out of the box at all. 
 2. Use the JAVA_HOME environment variable if default is specified and if 
 JAVA_HOME is defined. The current guessing method is a bit broken, and using 
 JAVA_HOME will allow devs to configure their default Java dir without editing 
 multiple cluster_config.json files in multiple places. 
 3. (if feasible without too much headache): Configure remote hosts only for 
 test packages that will not be skipped. This will reduce some overhead in the 
 common use cases.
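
Item 2's fallback might look roughly like this (hypothetical helper, not the 
actual patch):

```java
import java.util.Map;

/**
 * Hypothetical helper for item 2: prefer $JAVA_HOME/bin/java when the
 * configured path is "default" and JAVA_HOME is set; otherwise keep
 * whatever the cluster_config.json specifies.
 */
public class JavaHomeSketch {
    static String resolveJava(String configured, Map<String, String> env) {
        if ("default".equals(configured) && env.containsKey("JAVA_HOME")) {
            return env.get("JAVA_HOME") + "/bin/java";
        }
        return configured;
    }

    public static void main(String[] args) {
        // "default" plus JAVA_HOME -> derived path
        System.out.println(resolveJava("default",
                Map.of("JAVA_HOME", "/usr/lib/jvm/java-7")));
        // Explicit path wins regardless of the environment.
        System.out.println(resolveJava("/opt/jdk/bin/java", Map.of()));
    }
}
```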





[jira] [Updated] (KAFKA-1611) Improve system test configuration

2014-09-23 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1611:
-
Fix Version/s: (was: 0.9.0)
   0.8.2

 Improve system test configuration
 -

 Key: KAFKA-1611
 URL: https://issues.apache.org/jira/browse/KAFKA-1611
 Project: Kafka
  Issue Type: Sub-task
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1611.0.patch


 I'd like to make the config a bit more out of the box for the common case 
 of a local cluster. This will include:
 1. Fix cluster_config.json of the migration test suite; it has a hardcoded 
 path that prevents it from working out of the box at all. 
 2. Use the JAVA_HOME environment variable if default is specified and if 
 JAVA_HOME is defined. The current guessing method is a bit broken, and using 
 JAVA_HOME will allow devs to configure their default Java dir without editing 
 multiple cluster_config.json files in multiple places. 
 3. (if feasible without too much headache): Configure remote hosts only for 
 test packages that will not be skipped. This will reduce some overhead in the 
 common use cases.





[jira] [Created] (KAFKA-1648) Round robin consumer balance throws an NPE when there are no topics

2014-09-23 Thread Todd Palino (JIRA)
Todd Palino created KAFKA-1648:
--

 Summary: Round robin consumer balance throws an NPE when there are 
no topics
 Key: KAFKA-1648
 URL: https://issues.apache.org/jira/browse/KAFKA-1648
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Todd Palino
Assignee: Neha Narkhede


If you use the roundrobin rebalance method with a wildcard consumer, and there 
are no topics in the cluster, rebalance throws a NullPointerException in the 
consumer and fails. It retries the rebalance, but will continue to throw the 
NPE.

2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
[kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared all relevant 
queues for this fetcher
2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
[kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared the data chunks 
in all the consumer message iterators
2014/09/23 17:51:16.148 [ZookeeperConsumerConnector] 
[kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Committing all offsets 
after clearing the fetcher queues
2014/09/23 17:51:46.148 [ZookeeperConsumerConnector] 
[kafka-audit_lva1-app0007.corp-1411494404908-4e620544], begin rebalancing 
consumer kafka-audit_lva1-app0007.corp-1411494404908-4e620544 try #0
2014/09/23 17:51:46.148 ERROR [OffspringServletRuntime] [main] 
[kafka-console-audit] [] Boot listener 
com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener failed
kafka.common.ConsumerRebalanceFailedException: 
kafka-audit_lva1-app0007.corp-1411494404908-4e620544 can't rebalance after 10 
retries
at 
kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:630)
at 
kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:897)
at 
kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.init(ZookeeperConsumerConnector.scala:931)
at 
kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:159)
at 
kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:101)
at 
com.linkedin.tracker.consumer.TrackingConsumerImpl.initWildcardIterators(TrackingConsumerImpl.java:88)
at 
com.linkedin.tracker.consumer.TrackingConsumerImpl.getWildcardIterators(TrackingConsumerImpl.java:116)
at 
com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.createAuditThreads(KafkaConsoleAudit.java:59)
at 
com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.initializeAudit(KafkaConsoleAudit.java:50)
at 
com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:125)
at 
com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:20)
at 
com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:20)
at 
com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:14)
at com.linkedin.util.factory.Generator.doGetBean(Generator.java:337)
at com.linkedin.util.factory.Generator.getBean(Generator.java:270)
at 
com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener.onBoot(KafkaConsoleAuditBootListener.java:16)
at 
com.linkedin.offspring.servlet.OffspringServletRuntime.startGenerator(OffspringServletRuntime.java:147)
at 
com.linkedin.offspring.servlet.OffspringServletRuntime.start(OffspringServletRuntime.java:73)
at 
com.linkedin.offspring.servlet.OffspringServletContextListener.contextInitialized(OffspringServletContextListener.java:28)
at 
org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:771)
at 
org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
at 
org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:763)
at 
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
at 
org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:706)
at 
org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:492)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 
com.linkedin.emweb.ContextBasedHandlerImpl.doStart(ContextBasedHandlerImpl.java:105)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 
com.linkedin.emweb.WebappDeployerImpl.start(WebappDeployerImpl.java:333)
at 
com.linkedin.emweb.WebappDeployerImpl.deploy(WebappDeployerImpl.java:187)
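
A defensive assignment loop that tolerates a wildcard subscription matching 
zero topics might look like this (hypothetical sketch, not the actual fix):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** Hypothetical round-robin assignment that guards against "no topics". */
public class RoundRobinSketch {
    static Map<String, List<String>> assign(List<String> consumers,
                                            Map<String, Integer> partitionsPerTopic) {
        Map<String, List<String>> assignment = new TreeMap<>();
        for (String c : consumers) assignment.put(c, new ArrayList<>());
        // The guard: no topics yet means an empty assignment, not an NPE.
        if (partitionsPerTopic == null || partitionsPerTopic.isEmpty()) {
            return assignment;
        }
        int i = 0;
        for (Map.Entry<String, Integer> t :
                new TreeMap<>(partitionsPerTopic).entrySet()) {
            for (int p = 0; p < t.getValue(); p++) {
                // Deal partitions to consumers in round-robin order.
                assignment.get(consumers.get(i++ % consumers.size()))
                          .add(t.getKey() + "-" + p);
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        System.out.println(assign(List.of("c1", "c2"), Map.of()));
        System.out.println(assign(List.of("c1", "c2"), Map.of("t", 3)));
    }
}
```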

[jira] [Assigned] (KAFKA-1648) Round robin consumer balance throws an NPE when there are no topics

2014-09-23 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat reassigned KAFKA-1648:
--

Assignee: Mayuresh Gharat  (was: Neha Narkhede)

 Round robin consumer balance throws an NPE when there are no topics
 ---

 Key: KAFKA-1648
 URL: https://issues.apache.org/jira/browse/KAFKA-1648
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Todd Palino
Assignee: Mayuresh Gharat
  Labels: newbie

 If you use the roundrobin rebalance method with a wildcard consumer, and 
 there are no topics in the cluster, rebalance throws a NullPointerException 
 in the consumer and fails. It retries the rebalance, but will continue to 
 throw the NPE.
 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
 [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared all relevant 
 queues for this fetcher
 2014/09/23 17:51:16.147 [ZookeeperConsumerConnector] 
 [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Cleared the data 
 chunks in all the consumer message iterators
 2014/09/23 17:51:16.148 [ZookeeperConsumerConnector] 
 [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], Committing all 
 offsets after clearing the fetcher queues
 2014/09/23 17:51:46.148 [ZookeeperConsumerConnector] 
 [kafka-audit_lva1-app0007.corp-1411494404908-4e620544], begin rebalancing 
 consumer kafka-audit_lva1-app0007.corp-1411494404908-4e620544 try #0
 2014/09/23 17:51:46.148 ERROR [OffspringServletRuntime] [main] 
 [kafka-console-audit] [] Boot listener 
 com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener failed
 kafka.common.ConsumerRebalanceFailedException: 
 kafka-audit_lva1-app0007.corp-1411494404908-4e620544 can't rebalance after 10 
 retries
   at 
 kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:630)
   at 
 kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:897)
   at 
 kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.init(ZookeeperConsumerConnector.scala:931)
   at 
 kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:159)
   at 
 kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:101)
   at 
 com.linkedin.tracker.consumer.TrackingConsumerImpl.initWildcardIterators(TrackingConsumerImpl.java:88)
   at 
 com.linkedin.tracker.consumer.TrackingConsumerImpl.getWildcardIterators(TrackingConsumerImpl.java:116)
   at 
 com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.createAuditThreads(KafkaConsoleAudit.java:59)
   at 
 com.linkedin.kafkaconsoleaudit.KafkaConsoleAudit.initializeAudit(KafkaConsoleAudit.java:50)
   at 
 com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:125)
   at 
 com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditFactory.createInstance(KafkaConsoleAuditFactory.java:20)
   at 
 com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:20)
   at 
 com.linkedin.util.factory.SimpleSingletonFactory.createInstance(SimpleSingletonFactory.java:14)
   at com.linkedin.util.factory.Generator.doGetBean(Generator.java:337)
   at com.linkedin.util.factory.Generator.getBean(Generator.java:270)
   at 
 com.linkedin.kafkaconsoleaudit.KafkaConsoleAuditBootListener.onBoot(KafkaConsoleAuditBootListener.java:16)
   at 
 com.linkedin.offspring.servlet.OffspringServletRuntime.startGenerator(OffspringServletRuntime.java:147)
   at 
 com.linkedin.offspring.servlet.OffspringServletRuntime.start(OffspringServletRuntime.java:73)
   at 
 com.linkedin.offspring.servlet.OffspringServletContextListener.contextInitialized(OffspringServletContextListener.java:28)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:771)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:424)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:763)
   at 
 org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:249)
   at 
 org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250)
   at 
 org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:706)
   at 
 org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:492)
   at 
 org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
   at 
 

[jira] [Updated] (KAFKA-1555) provide strong consistency with reasonable availability

2014-09-23 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1555:
-
Reviewer: Jun Rao

 provide strong consistency with reasonable availability
 ---

 Key: KAFKA-1555
 URL: https://issues.apache.org/jira/browse/KAFKA-1555
 Project: Kafka
  Issue Type: Improvement
  Components: controller
Affects Versions: 0.8.1.1
Reporter: Jiang Wu
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1555.0.patch, KAFKA-1555.1.patch, 
 KAFKA-1555.2.patch, KAFKA-1555.3.patch


 In a mission-critical application, we expect a Kafka cluster with 3 brokers 
 to satisfy two requirements:
 1. When 1 broker is down, no message loss or service blocking happens.
 2. In worse cases, such as when two brokers are down, service can be blocked, 
 but no message loss happens.
 We found that the current Kafka version (0.8.1.1) cannot achieve these 
 requirements due to three behaviors:
 1. when choosing a new leader from 2 followers in ISR, the one with fewer 
 messages may be chosen as the leader.
 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it 
 has fewer messages than the leader.
 3. the ISR can contain only 1 broker, therefore acknowledged messages may be 
 stored in only 1 broker.
 The following is an analytical proof. 
 We consider a cluster with 3 brokers and a topic with 3 replicas, and assume 
 that at the beginning, all 3 replicas, leader A, followers B and C, are in 
 sync, i.e., they have the same messages and are all in ISR.
 According to the value of request.required.acks (acks for short), there are 
 the following cases.
 1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement.
 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this 
 time, although C hasn't received m, C is still in ISR. If A is killed, C can 
 be elected as the new leader, and consumers will miss m.
 3. acks=-1. B and C restart and are removed from ISR. Producer sends a 
 message m to A, and receives an acknowledgement. Disk failure happens in A 
 before B and C replicate m. Message m is lost.
 In summary, any existing configuration cannot satisfy the requirements.





[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145234#comment-14145234
 ] 

Guozhang Wang commented on KAFKA-1490:
--

Hey Joe,

After pulling in this patch, the gradlew command no longer seems to work for 
us:

{code}
./gradlew jar
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/gradle/wrapper/GradleWrapperMain
Caused by: java.lang.ClassNotFoundException: 
org.gradle.wrapper.GradleWrapperMain
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.gradle.wrapper.GradleWrapperMain.  Program 
will exit.
{code}

Do you know why?

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contain lots of stuff in the gradle folder that 
 we do not need





[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145246#comment-14145246
 ] 

Joe Stein commented on KAFKA-1490:
--

1) You now need to have gradle installed http://www.gradle.org/installation
2) after that 

{code}
gradle
{code}

That will execute the default task, which downloads the wrapper; after that 
everything else works the same. :)

It is essentially a new bootstrap step that is now required to get things 
working.

I updated README to explain 
https://github.com/apache/kafka/blob/trunk/README.md#apache-kafka 



 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contain lots of stuff in the gradle folder that 
 we do not need





[jira] [Updated] (KAFKA-404) When using chroot path, create chroot on startup if it doesn't exist

2014-09-23 Thread Jonathan Creasy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Creasy updated KAFKA-404:
--
Attachment: KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v4.patch

 When using chroot path, create chroot on startup if it doesn't exist
 

 Key: KAFKA-404
 URL: https://issues.apache.org/jira/browse/KAFKA-404
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.1
 Environment: CentOS 5.5, Linux 2.6.18-194.32.1.el5 x86_64 GNU/Linux
Reporter: Jonathan Creasy
  Labels: newbie, patch
 Fix For: 0.8.2

 Attachments: KAFKA-404-0.7.1.patch, KAFKA-404-0.8.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-i.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v2.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v3.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v4.patch








[jira] [Commented] (KAFKA-404) When using chroot path, create chroot on startup if it doesn't exist

2014-09-23 Thread Jonathan Creasy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145291#comment-14145291
 ] 

Jonathan Creasy commented on KAFKA-404:
---

Version 4 of the patch is attached; let me know if it needs anything else. 
Hopefully I made it in time. Thanks [~marek.dolgos] for your work!

 When using chroot path, create chroot on startup if it doesn't exist
 

 Key: KAFKA-404
 URL: https://issues.apache.org/jira/browse/KAFKA-404
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.1
 Environment: CentOS 5.5, Linux 2.6.18-194.32.1.el5 x86_64 GNU/Linux
Reporter: Jonathan Creasy
  Labels: newbie, patch
 Fix For: 0.8.2

 Attachments: KAFKA-404-0.7.1.patch, KAFKA-404-0.8.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-i.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v2.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v3.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v4.patch








[jira] [Commented] (KAFKA-404) When using chroot path, create chroot on startup if it doesn't exist

2014-09-23 Thread Marek Dolgos (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145352#comment-14145352
 ] 

Marek Dolgos commented on KAFKA-404:


[~jcreasy]

Thanks for finishing it!

 When using chroot path, create chroot on startup if it doesn't exist
 

 Key: KAFKA-404
 URL: https://issues.apache.org/jira/browse/KAFKA-404
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.1
 Environment: CentOS 5.5, Linux 2.6.18-194.32.1.el5 x86_64 GNU/Linux
Reporter: Jonathan Creasy
  Labels: newbie, patch
 Fix For: 0.8.2

 Attachments: KAFKA-404-0.7.1.patch, KAFKA-404-0.8.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-i.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v2.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v3.patch, 
 KAFKA-404-auto-create-zookeeper-chroot-on-start-up-v4.patch








Re: Review Request 24704: Patch for KAFKA-1499

2014-09-23 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24704/#review54327
---



core/src/main/scala/kafka/log/Log.scala
https://reviews.apache.org/r/24704/#comment94437

minor nit: would prefer that we make the source codec the first argument 
(only because source appears on the left side; source -> target seems more 
natural)



core/src/main/scala/kafka/log/Log.scala
https://reviews.apache.org/r/24704/#comment94438

(just like this)



core/src/main/scala/kafka/log/Log.scala
https://reviews.apache.org/r/24704/#comment94439

Another minor comment. This can be written as:

val targetCodec = if (config.enableBrokerCompression)
  config.compressionType
else
  sourceCodec



core/src/main/scala/kafka/message/ByteBufferMessageSet.scala
https://reviews.apache.org/r/24704/#comment94446

Thanks for making this change - looks much clearer. Just the minor edit 
that I suggested above wrt ordering of the arguments.



core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala
https://reviews.apache.org/r/24704/#comment9

whitespace



core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala
https://reviews.apache.org/r/24704/#comment94442

Can you fix all the whitespace inconsistencies in this patch? We generally 
use two-space indentation in scala; however, there are a bunch of places in 
this patch with a mix of two-space/four-space indentation.


- Joel Koshy


On Sept. 23, 2014, 9:17 a.m., Manikumar Reddy O wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/24704/
 ---
 
 (Updated Sept. 23, 2014, 9:17 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1499
 https://issues.apache.org/jira/browse/KAFKA-1499
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing Joel's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/log/Log.scala 
 0ddf97bd30311b6039e19abade41d2fbbad2f59b 
   core/src/main/scala/kafka/log/LogConfig.scala 
 5746ad4767589594f904aa085131dd95e56d72bb 
   core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
 788c7864bc881b935975ab4a4e877b690e65f1f1 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 165c816a9f4c925f6e46560e7e2ff9cf7591946b 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 390fef500d7e0027e698c259d777454ba5a0f5e8 
   core/src/test/scala/unit/kafka/log/BrokerCompressionTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/message/ByteBufferMessageSetTest.scala 
 4e45d965bc423192ac704883ee75e9727006f89b 
   core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala 
 2377abe4933e065d037828a214c3a87e1773a8ef 
 
 Diff: https://reviews.apache.org/r/24704/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Manikumar Reddy O
 




[jira] [Updated] (KAFKA-1013) Modify existing tools as per the changes in KAFKA-1000

2014-09-23 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1013:
--
Reviewer: Joel Koshy

 Modify existing tools as per the changes in KAFKA-1000
 --

 Key: KAFKA-1013
 URL: https://issues.apache.org/jira/browse/KAFKA-1013
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Tejas Patil
Assignee: Mayuresh Gharat
Priority: Minor
 Attachments: KAFKA-1013.patch, KAFKA-1013.patch, 
 KAFKA-1013_2014-09-23_10:45:59.patch, KAFKA-1013_2014-09-23_10:48:07.patch


 Modify existing tools as per the changes in KAFKA-1000. AFAIK, the tools 
 below would be affected:
 - ConsumerOffsetChecker
 - ExportZkOffsets
 - ImportZkOffsets
 - UpdateOffsetsInZK





[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-09-23 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145523#comment-14145523
 ] 

Jun Rao commented on KAFKA-1490:


Joe, Chris,

It wasn't obvious that gradle needs to be run from the kafka source dir. Added 
that to README to make this clear.

Thanks,

 remove gradlew initial setup output from source distribution
 

 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2

 Attachments: KAFKA-1490-2.patch, KAFKA-1490.patch, rb25703.patch


 Our current source releases contain lots of stuff in the gradle folder that 
 we do not need





Build failed in Jenkins: Kafka-trunk #274

2014-09-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/274/changes

Changes:

[junrao] trivial change to README to make the gradle wrapper download clearer

--
Started by an SCM change
Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
https://builds.apache.org/job/Kafka-trunk/ws/
  git rev-parse --is-inside-work-tree
Fetching changes from the remote Git repository
  git config remote.origin.url 
  https://git-wip-us.apache.org/repos/asf/kafka.git
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
  git --version
  git fetch --tags --progress 
  https://git-wip-us.apache.org/repos/asf/kafka.git 
  +refs/heads/*:refs/remotes/origin/*
  git rev-parse origin/trunk^{commit}
Checking out Revision 27bc37289c316fb5a978423f4b2c88f49ae15e52 (origin/trunk)
  git config core.sparsecheckout
  git checkout -f 27bc37289c316fb5a978423f4b2c88f49ae15e52
  git rev-list db0b0b8509ce904a0eab021a0453a80cecf3efb7
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson316103313200194835.sh
+ ./gradlew -PscalaVersion=2.10.1 test
Exception in thread main java.lang.NoClassDefFoundError: 
org/gradle/wrapper/GradleWrapperMain
Caused by: java.lang.ClassNotFoundException: 
org.gradle.wrapper.GradleWrapperMain
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.gradle.wrapper.GradleWrapperMain.  Program 
will exit.
Build step 'Execute shell' marked build as failure


[jira] [Created] (KAFKA-1649) Protocol documentation does not indicate that ReplicaNotAvailable can be ignored

2014-09-23 Thread Hernan Rivas Inaka (JIRA)
Hernan Rivas Inaka created KAFKA-1649:
-

 Summary: Protocol documentation does not indicate that 
ReplicaNotAvailable can be ignored
 Key: KAFKA-1649
 URL: https://issues.apache.org/jira/browse/KAFKA-1649
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.8.1.1
Reporter: Hernan Rivas Inaka
Priority: Minor


The protocol documentation here 
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-ErrorCodes
 should indicate that error 9 (ReplicaNotAvailable) can be safely ignored on 
producers.
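
A hedged sketch (hypothetical client code, not from any real Kafka client) 
of what "safely ignored" would mean when a producer validates partition 
metadata; the error code numbers follow the protocol guide linked above:

```python
# Sketch of client-side metadata error handling (hypothetical names).
# Per the protocol guide, ReplicaNotAvailable (9) means some replica is
# down; a producer only needs the partition leader, so it can ignore it.

REPLICA_NOT_AVAILABLE = 9
LEADER_NOT_AVAILABLE = 5   # by contrast, this one must be retried

def partition_usable_for_produce(error_code):
    """Return True if a producer can still use this partition metadata."""
    if error_code == 0:
        return True
    if error_code == REPLICA_NOT_AVAILABLE:
        # Replica info is incomplete, but the leader is known: safe to ignore.
        return True
    return False

print(partition_usable_for_produce(9))   # True
print(partition_usable_for_produce(5))   # False
```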





[jira] [Commented] (KAFKA-1649) Protocol documentation does not indicate that ReplicaNotAvailable can be ignored

2014-09-23 Thread Hernan Rivas Inaka (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145569#comment-14145569
 ] 

Hernan Rivas Inaka commented on KAFKA-1649:
---

I would love to submit the patch, but I don't know how to change that 
documentation.

 Protocol documentation does not indicate that ReplicaNotAvailable can be 
 ignored
 

 Key: KAFKA-1649
 URL: https://issues.apache.org/jira/browse/KAFKA-1649
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.8.1.1
Reporter: Hernan Rivas Inaka
Priority: Minor
  Labels: protocol-documentation
   Original Estimate: 10m
  Remaining Estimate: 10m

 The protocol documentation here 
 https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-ErrorCodes
  should indicate that error 9 (ReplicaNotAvailable) can be safely ignored on 
 producers.





[jira] [Resolved] (KAFKA-1649) Protocol documentation does not indicate that ReplicaNotAvailable can be ignored

2014-09-23 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein resolved KAFKA-1649.
--
Resolution: Not a Problem

The wiki is editable; I just need to know your confluence username and I can 
add permissions for you

 Protocol documentation does not indicate that ReplicaNotAvailable can be 
 ignored
 

 Key: KAFKA-1649
 URL: https://issues.apache.org/jira/browse/KAFKA-1649
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.8.1.1
Reporter: Hernan Rivas Inaka
Priority: Minor
  Labels: protocol-documentation
   Original Estimate: 10m
  Remaining Estimate: 10m

 The protocol documentation here 
 https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-ErrorCodes
  should indicate that error 9 (ReplicaNotAvailable) can be safely ignored on 
 producers.





[jira] [Created] (KAFKA-1650) Mirror Maker could lose data on unclean shutdown.

2014-09-23 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-1650:
---

 Summary: Mirror Maker could lose data on unclean shutdown.
 Key: KAFKA-1650
 URL: https://issues.apache.org/jira/browse/KAFKA-1650
 Project: Kafka
  Issue Type: Improvement
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


Currently, if the mirror maker is shut down uncleanly, the data in the data 
channel and buffer can be lost. With the new producer's callback, this 
issue could be solved.
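
One way the callback could be used, sketched as a toy model with 
hypothetical names (not the actual mirror maker code): only commit the 
contiguous prefix of consumed offsets whose downstream sends have been 
acknowledged, so an unclean shutdown can re-deliver but never lose data.

```python
# Toy sketch of callback-driven offset commits (hypothetical names).

class OffsetTracker:
    def __init__(self):
        self.acked = set()    # offsets confirmed by producer callbacks
        self.committed = -1   # highest offset safe to commit

    def on_send_complete(self, offset, error=None):
        """Producer callback: mark the offset acked on success."""
        if error is None:
            self.acked.add(offset)
            # Advance the commit point over the contiguous acked prefix;
            # anything past a gap stays uncommitted and is re-mirrored
            # after a restart.
            while self.committed + 1 in self.acked:
                self.committed += 1

tracker = OffsetTracker()
for offset in (0, 2, 1):      # acks may arrive out of order
    tracker.on_send_complete(offset)
print(tracker.committed)      # 2: offsets 0..2 are all acked
```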





[jira] [Assigned] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani reassigned KAFKA-1019:
-

Assignee: Sriharsha Chintalapani

 kafka-preferred-replica-election.sh will fail without clear error message if 
 /brokers/topics/[topic]/partitions does not exist
 --

 Key: KAFKA-1019
 URL: https://issues.apache.org/jira/browse/KAFKA-1019
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Guozhang Wang
Assignee: Sriharsha Chintalapani
  Labels: newbie
 Fix For: 0.9.0


 From Libo Yu:
 I tried to run kafka-preferred-replica-election.sh on our kafka cluster.
 But I got this exception:
 Failed to start preferred replica election
 org.I0Itec.zkclient.exception.ZkNoNodeException: 
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for /brokers/topics/uattoqaaa.default/partitions
 I checked zookeeper and there is no 
 /brokers/topics/uattoqaaa.default/partitions. All I found is
 /brokers/topics/uattoqaaa.default.





[jira] [Commented] (KAFKA-1019) kafka-preferred-replica-election.sh will fail without clear error message if /brokers/topics/[topic]/partitions does not exist

2014-09-23 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145744#comment-14145744
 ] 

Sriharsha Chintalapani commented on KAFKA-1019:
---

[~guozhang] [~nehanarkhede]  I don't think this issue exists in trunk.
I ran the above steps specified by [~draiwn] with zookeeper 3.4.6:

 bin/kafka-topics.sh --describe --topic testid  --zookeeper 
zookeeper1:2181,zookeeper2:2181,zookeeper3:2181   
Topic: testid   PartitionCount: 3   ReplicationFactor: 3   Configs:
    Topic: testid   Partition: 0   Leader: 3   Replicas: 3,2,1   Isr: 3,2,1
    Topic: testid   Partition: 1   Leader: 1   Replicas: 1,3,2   Isr: 1,3,2
    Topic: testid   Partition: 2   Leader: 2   Replicas: 2,1,3   Isr: 2,1,3
[kafka@zookeeper1 kafka]$ bin/kafka-topics.sh --delete --topic testid  
--zookeeper zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
Topic testid is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[kafka@zookeeper1 kafka]$ bin/kafka-topics.sh --describe --topic testid  
--zookeeper zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
[kafka@zookeeper1 kafka]$ bin/kafka-topics.sh --create --topic testid 
--replication-factor 3 --partition 3 --zookeeper 
zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
Created topic testid.
[kafka@zookeeper1 kafka]$ bin/kafka-topics.sh --describe --topic testid  
--zookeeper zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
Topic: testid   PartitionCount: 3   ReplicationFactor: 3   Configs:
    Topic: testid   Partition: 0   Leader: 3   Replicas: 3,1,2   Isr: 3,1,2
    Topic: testid   Partition: 1   Leader: 1   Replicas: 1,2,3   Isr: 1,2,3
    Topic: testid   Partition: 2   Leader: 2   Replicas: 2,3,1   Isr: 2,3,1


 kafka-preferred-replica-election.sh will fail without clear error message if 
 /brokers/topics/[topic]/partitions does not exist
 --

 Key: KAFKA-1019
 URL: https://issues.apache.org/jira/browse/KAFKA-1019
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.1
Reporter: Guozhang Wang
Assignee: Sriharsha Chintalapani
  Labels: newbie
 Fix For: 0.9.0


 From Libo Yu:
 I tried to run kafka-preferred-replica-election.sh on our kafka cluster.
 But I got this exception:
 Failed to start preferred replica election
 org.I0Itec.zkclient.exception.ZkNoNodeException: 
 org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
 NoNode for /brokers/topics/uattoqaaa.default/partitions
 I checked zookeeper and there is no 
 /brokers/topics/uattoqaaa.default/partitions. All I found is
 /brokers/topics/uattoqaaa.default.





Re: jenkins changes required

2014-09-23 Thread Jun Rao
Joe,

You can configure the kafka-trunk job at
https://builds.apache.org/job/Kafka-trunk/. You may need to ask infra to
grant you the permission. One thing that I am not sure about is whether
those ubuntu jenkins machines (where the kafka-trunk job runs) have gradle
installed. You can probably follow up on that infra ticket.

Thanks,

Jun

On Tue, Sep 23, 2014 at 10:21 AM, Joe Stein joe.st...@stealth.ly wrote:

 I created an infrastructure ticket
 https://issues.apache.org/jira/browse/INFRA-8395

 On Tue, Sep 23, 2014 at 12:12 PM, Joe Stein joe.st...@stealth.ly wrote:

   Hey, so it looks like the gradlew change I just made (removing the jar)
   is going to require something to be done on the jenkins side, since
   gradle has to be installed and run first to download the wrapper... then
   everything else works.
  
   I haven't updated jenkins @ apache before; I'm not sure how to do that
   (I probably have permission).
  
   Is anyone else familiar with this who can either fix it or point me in
   the right direction, please?
  
   This is what folks will see if they don't do the two steps I added to
   the README: 1) install gradle 2) run gradle (the default task downloads
   the wrapper to bootstrap)
 
  [joe.stein] KAFKA-1490 remove gradlew initial setup output from source
  distribution patch by Ivan Lyutov reviewed by Joe Stein
 
  --
  Started by an SCM change
  Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
  https://builds.apache.org/job/Kafka-trunk/ws/
git rev-parse --is-inside-work-tree
  Fetching changes from the remote Git repository
git config remote.origin.url
  https://git-wip-us.apache.org/repos/asf/kafka.git
  Fetching upstream changes from
  https://git-wip-us.apache.org/repos/asf/kafka.git
git --version
git fetch --tags --progress
  https://git-wip-us.apache.org/repos/asf/kafka.git
   +refs/heads/*:refs/remotes/origin/*
git rev-parse origin/trunk^{commit}
  Checking out Revision d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
  (origin/trunk)
git config core.sparsecheckout
git checkout -f d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
git rev-list 6d7057566f0c3a872f625957aa086b993e76071f
  [Kafka-trunk] $ /bin/bash -xe /tmp/hudson229233895233879711.sh
  + ./gradlew -PscalaVersion=2.10.1 test
  Exception in thread main java.lang.NoClassDefFoundError:
  org/gradle/wrapper/GradleWrapperMain
  Caused by: java.lang.ClassNotFoundException: org.gradle.wrapper.
  GradleWrapperMain
  at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
  Could not find the main class: org.gradle.wrapper.GradleWrapperMain.
  Program will exit.
  Build step 'Execute shell' marked build as failure
 
  /***
   Joe Stein
   Founder, Principal Consultant
   Big Data Open Source Security LLC
   http://www.stealth.ly
   Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
  /
 



Build failed in Jenkins: Kafka-trunk #275

2014-09-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/275/

--
Started by user junrao
Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
https://builds.apache.org/job/Kafka-trunk/ws/
  git rev-parse --is-inside-work-tree
Fetching changes from the remote Git repository
  git config remote.origin.url 
  https://git-wip-us.apache.org/repos/asf/kafka.git
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
  git --version
  git fetch --tags --progress 
  https://git-wip-us.apache.org/repos/asf/kafka.git 
  +refs/heads/*:refs/remotes/origin/*
  git rev-parse origin/trunk^{commit}
Checking out Revision 27bc37289c316fb5a978423f4b2c88f49ae15e52 (origin/trunk)
  git config core.sparsecheckout
  git checkout -f 27bc37289c316fb5a978423f4b2c88f49ae15e52
  git rev-list 27bc37289c316fb5a978423f4b2c88f49ae15e52
[Kafka-trunk] $ /bin/bash -xe /tmp/hudson77096854886494881.sh
+ gradle
/tmp/hudson77096854886494881.sh: line 2: gradle: command not found
Build step 'Execute shell' marked build as failure


Re: jenkins changes required

2014-09-23 Thread Jun Rao
Joe,

I added the gradle step in kafka-trunk job. However, it doesn't seem that
gradle is installed there.

Thanks,

Jun

On Tue, Sep 23, 2014 at 6:33 PM, Jun Rao jun...@gmail.com wrote:

 Joe,

 You can configure the kafka-trunk job at
 https://builds.apache.org/job/Kafka-trunk/. You may need to ask infra to
 grant you the permission. One thing that I am not sure is whether
 those ubuntu (where kafka-trunk job runs) jenkins machines have gradle
 installed. You can probably follow up on that infra ticket.

 Thanks,

 Jun

 On Tue, Sep 23, 2014 at 10:21 AM, Joe Stein joe.st...@stealth.ly wrote:

 I created an infrastructure ticket
 https://issues.apache.org/jira/browse/INFRA-8395

 On Tue, Sep 23, 2014 at 12:12 PM, Joe Stein joe.st...@stealth.ly wrote:

  Hey, so it looks like the gradlew changes I just removed the jar is
 going
  to require something to be done on the jenkins side since gradle has to
 be
  installed and run gradle first to download the wrapper... then
 everything
  else works.
 
  I haven't updated jenkins @ apache not sure (probably have permission)
 how
  to-do that?
 
  Anyone else familiar with this and can either fix or point me in the
 right
  direction please.
 
  This is what folks will see if they don't do the two steps I added to
 the
  README (1) install gradle 2) gradle //default task is to download
 wrapper
  to bootstrap)
 
  [joe.stein] KAFKA-1490 remove gradlew initial setup output from source
  distribution patch by Ivan Lyutov reviewed by Joe Stein
 
  --
  Started by an SCM change
  Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
  https://builds.apache.org/job/Kafka-trunk/ws/
git rev-parse --is-inside-work-tree
  Fetching changes from the remote Git repository
git config remote.origin.url
  https://git-wip-us.apache.org/repos/asf/kafka.git
  Fetching upstream changes from
  https://git-wip-us.apache.org/repos/asf/kafka.git
git --version
git fetch --tags --progress
  https://git-wip-us.apache.org/repos/asf/kafka.git
   +refs/heads/*:refs/remotes/origin/*
git rev-parse origin/trunk^{commit}
  Checking out Revision d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
  (origin/trunk)
git config core.sparsecheckout
git checkout -f d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
git rev-list 6d7057566f0c3a872f625957aa086b993e76071f
  [Kafka-trunk] $ /bin/bash -xe /tmp/hudson229233895233879711.sh
  + ./gradlew -PscalaVersion=2.10.1 test
  Exception in thread main java.lang.NoClassDefFoundError:
  org/gradle/wrapper/GradleWrapperMain
  Caused by: java.lang.ClassNotFoundException: org.gradle.wrapper.
  GradleWrapperMain
  at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
  Could not find the main class: org.gradle.wrapper.GradleWrapperMain.
  Program will exit.
  Build step 'Execute shell' marked build as failure
 
  /***
   Joe Stein
   Founder, Principal Consultant
   Big Data Open Source Security LLC
   http://www.stealth.ly
   Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
  /
 





Re: jenkins changes required

2014-09-23 Thread Joe Stein
Jun,

I don't have permission.  I updated the INFRA ticket.

Thanks!

Joe Stein

On Tue, Sep 23, 2014 at 9:33 PM, Jun Rao jun...@gmail.com wrote:

 Joe,

 You can configure the kafka-trunk job at
 https://builds.apache.org/job/Kafka-trunk/. You may need to ask infra to
 grant you the permission. One thing that I am not sure is whether
 those ubuntu (where kafka-trunk job runs) jenkins machines have gradle
 installed. You can probably follow up on that infra ticket.

 Thanks,

 Jun

 On Tue, Sep 23, 2014 at 10:21 AM, Joe Stein joe.st...@stealth.ly wrote:

  I created an infrastructure ticket
  https://issues.apache.org/jira/browse/INFRA-8395
 
  On Tue, Sep 23, 2014 at 12:12 PM, Joe Stein joe.st...@stealth.ly
 wrote:
 
   Hey, so it looks like the gradlew changes I just removed the jar is
 going
   to require something to be done on the jenkins side since gradle has to
  be
   installed and run gradle first to download the wrapper... then
 everything
   else works.
  
   I haven't updated jenkins @ apache not sure (probably have permission)
  how
   to-do that?
  
   Anyone else familiar with this and can either fix or point me in the
  right
   direction please.
  
   This is what folks will see if they don't do the two steps I added to
 the
   README (1) install gradle 2) gradle //default task is to download
 wrapper
   to bootstrap)
  
   [joe.stein] KAFKA-1490 remove gradlew initial setup output from source
   distribution patch by Ivan Lyutov reviewed by Joe Stein
  
   --
   Started by an SCM change
   Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace 
   https://builds.apache.org/job/Kafka-trunk/ws/
 git rev-parse --is-inside-work-tree
   Fetching changes from the remote Git repository
 git config remote.origin.url
   https://git-wip-us.apache.org/repos/asf/kafka.git
   Fetching upstream changes from
   https://git-wip-us.apache.org/repos/asf/kafka.git
 git --version
 git fetch --tags --progress
   https://git-wip-us.apache.org/repos/asf/kafka.git
+refs/heads/*:refs/remotes/origin/*
 git rev-parse origin/trunk^{commit}
   Checking out Revision d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
   (origin/trunk)
 git config core.sparsecheckout
 git checkout -f d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
 git rev-list 6d7057566f0c3a872f625957aa086b993e76071f
   [Kafka-trunk] $ /bin/bash -xe /tmp/hudson229233895233879711.sh
   + ./gradlew -PscalaVersion=2.10.1 test
   Exception in thread main java.lang.NoClassDefFoundError:
   org/gradle/wrapper/GradleWrapperMain
   Caused by: java.lang.ClassNotFoundException: org.gradle.wrapper.
   GradleWrapperMain
   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
   at
 sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
   Could not find the main class: org.gradle.wrapper.GradleWrapperMain.
   Program will exit.
   Build step 'Execute shell' marked build as failure
  
   /***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
   /
  
 



Re: jenkins changes required

2014-09-23 Thread Joe Stein
Thanks Jun, I will update the INFRA ticket.

On Tue, Sep 23, 2014 at 11:30 PM, Jun Rao jun...@gmail.com wrote:

 Joe,

 I added the gradle step in kafka-trunk job. However, it doesn't seem that
 gradle is installed there.

 Thanks,

 Jun

 On Tue, Sep 23, 2014 at 6:33 PM, Jun Rao jun...@gmail.com wrote:

  Joe,
 
  You can configure the kafka-trunk job at
  https://builds.apache.org/job/Kafka-trunk/. You may need to ask infra to
  grant you the permission. One thing that I am not sure is whether
  those ubuntu (where kafka-trunk job runs) jenkins machines have gradle
  installed. You can probably follow up on that infra ticket.
 
  Thanks,
 
  Jun
 
  On Tue, Sep 23, 2014 at 10:21 AM, Joe Stein joe.st...@stealth.ly
 wrote:
 
  I created an infrastructure ticket
  https://issues.apache.org/jira/browse/INFRA-8395
 
  On Tue, Sep 23, 2014 at 12:12 PM, Joe Stein joe.st...@stealth.ly
 wrote:
 
    Hey, so it looks like the gradlew change I just made (removing the jar)
    is going to require something to be done on the jenkins side, since
    gradle has to be installed and run once first to download the wrapper...
    then everything else works.
  
    I haven't updated jenkins @ apache before; I'm not sure how to do that
    (though I probably have permission).
  
    If anyone else is familiar with this, please either fix it or point me
    in the right direction.
  
    This is what folks will see if they don't do the two steps I added to
    the README: 1) install gradle, 2) run gradle (the default task downloads
    the wrapper to bootstrap):
  
   [joe.stein] KAFKA-1490 remove gradlew initial setup output from source
   distribution patch by Ivan Lyutov reviewed by Joe Stein
  
   --
   Started by an SCM change
    Building remotely on ubuntu-1 (Ubuntu ubuntu) in workspace
    https://builds.apache.org/job/Kafka-trunk/ws/
  > git rev-parse --is-inside-work-tree
    Fetching changes from the remote Git repository
  > git config remote.origin.url
    https://git-wip-us.apache.org/repos/asf/kafka.git
    Fetching upstream changes from
    https://git-wip-us.apache.org/repos/asf/kafka.git
  > git --version
  > git fetch --tags --progress
    https://git-wip-us.apache.org/repos/asf/kafka.git
    +refs/heads/*:refs/remotes/origin/*
  > git rev-parse origin/trunk^{commit}
    Checking out Revision d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
    (origin/trunk)
  > git config core.sparsecheckout
  > git checkout -f d2d1ef357b3b07e83f880ee2a6e02fb3c18ae011
  > git rev-list 6d7057566f0c3a872f625957aa086b993e76071f
   [Kafka-trunk] $ /bin/bash -xe /tmp/hudson229233895233879711.sh
   + ./gradlew -PscalaVersion=2.10.1 test
    Exception in thread "main" java.lang.NoClassDefFoundError:
    org/gradle/wrapper/GradleWrapperMain
   Caused by: java.lang.ClassNotFoundException: org.gradle.wrapper.
   GradleWrapperMain
   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
   at
 sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
   Could not find the main class: org.gradle.wrapper.GradleWrapperMain.
   Program will exit.
   Build step 'Execute shell' marked build as failure
  
   /***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
   /
  
 
 
 



Re: Review Request 25886: KAFKA-1555: provide strong consistency with reasonable availability

2014-09-23 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25886/#review54386
---


Thanks for the patch. Looks good. A few comments below.


clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasException.java
https://reviews.apache.org/r/25886/#comment94521

Arguably, this is a retriable exception since the ISR size changes over 
time.



core/src/main/scala/kafka/cluster/Partition.scala
https://reviews.apache.org/r/25886/#comment94522

Perhaps we should just do leaderReplica.log.get.config.minInSyncReplicas.



core/src/main/scala/kafka/log/LogConfig.scala
https://reviews.apache.org/r/25886/#comment94523

Lower case Topic?



core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
https://reviews.apache.org/r/25886/#comment94524

Could we try/catch the exception and check the cause is NotEnoughReplicas?



core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala
https://reviews.apache.org/r/25886/#comment94525

Ditto as the above.


- Jun Rao


On Sept. 23, 2014, 12:28 a.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/25886/
 ---
 
 (Updated Sept. 23, 2014, 12:28 a.m.)
 
 
 Review request for kafka.
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1555: provide strong consistency with reasonable availability
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/common/errors/NotEnoughReplicasException.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java d434f42 
   core/src/main/scala/kafka/cluster/Partition.scala ff106b4 
   core/src/main/scala/kafka/common/ErrorMapping.scala 3fae791 
   core/src/main/scala/kafka/common/NotEnoughReplicasException.scala 
 PRE-CREATION 
   core/src/main/scala/kafka/log/LogConfig.scala 5746ad4 
   core/src/test/scala/integration/kafka/api/ProducerFailureHandlingTest.scala 
 39f777b 
   core/src/test/scala/unit/kafka/producer/SyncProducerTest.scala 24deea0 
   core/src/test/scala/unit/kafka/utils/TestUtils.scala 2dbdd3c 
 
 Diff: https://reviews.apache.org/r/25886/diff/
 
 
 Testing
 ---
 
  With a 3-broker cluster, created 3 topics, each with 1 partition and 3 replicas, 
  with min.insync.replicas set to 1, 3, and 4 respectively.
 * min.insync.replicas=1 behaved normally (all writes succeeded as long as a 
 broker was up)
 * min.insync.replicas=3 returned NotEnoughReplicas when required.acks=-1 and 
 one broker was down
 * min.insync.replicas=4 returned NotEnoughReplicas when required.acks=-1
 
 See notes about retry behavior in the JIRA.
 
 
 Thanks,
 
 Gwen Shapira
 




[jira] [Commented] (KAFKA-1555) provide strong consistency with reasonable availability

2014-09-23 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145894#comment-14145894
 ] 

Jun Rao commented on KAFKA-1555:


Sriram,

In my mind, min.isr can be 1 to n-1 where n is the replication factor (except 
when n = 1). The reason that one wants to have more than 1 replica is to 
tolerate failures. Setting min.isr to n means that one can't tolerate any 
failure, which defeats the purpose of replication. I am not sure how widely 
this min.isr feature will be used. Given that, the current approach is probably 
the least intrusive. If this is indeed a feature that many people want to use 
and find the configuration confusing, we can revisit the issue in the future.
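Jun's constraint above (min.isr is only meaningful from 1 to n-1, and is enforced only for acks=-1 producers) can be sketched as a toy model of the broker-side check. This is purely illustrative; the function and exception names below are invented, not Kafka's actual code:

```python
class NotEnoughReplicasError(Exception):
    """Illustrative stand-in for Kafka's NotEnoughReplicasException."""

def check_append(acks: int, isr_size: int, min_isr: int) -> None:
    # Only acks=-1 (wait for all in-sync replicas) enforces
    # min.insync.replicas; acks=0 and acks=1 bypass the check entirely.
    if acks == -1 and isr_size < min_isr:
        raise NotEnoughReplicasError(
            "ISR size %d is below min.insync.replicas %d" % (isr_size, min_isr))

# 3 replicas with min.isr=2: one broker can fail and writes still succeed,
# but a second failure shrinks the ISR below the minimum and writes fail.
check_append(acks=-1, isr_size=2, min_isr=2)      # accepted
try:
    check_append(acks=-1, isr_size=1, min_isr=2)  # rejected
except NotEnoughReplicasError as err:
    print("produce rejected:", err)
```

Setting min.isr = n (here, 3) would make this check fail as soon as any single replica drops out of the ISR, which is Jun's point about defeating the purpose of replication.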

 provide strong consistency with reasonable availability
 ---

 Key: KAFKA-1555
 URL: https://issues.apache.org/jira/browse/KAFKA-1555
 Project: Kafka
  Issue Type: Improvement
  Components: controller
Affects Versions: 0.8.1.1
Reporter: Jiang Wu
Assignee: Gwen Shapira
 Fix For: 0.8.2

 Attachments: KAFKA-1555.0.patch, KAFKA-1555.1.patch, 
 KAFKA-1555.2.patch, KAFKA-1555.3.patch


 In a mission critical application, we expect a kafka cluster with 3 brokers 
 can satisfy two requirements:
 1. When 1 broker is down, no message loss or service blocking happens.
 2. In worse cases, such as when two brokers are down, service can be blocked, but 
 no message loss happens.
 We found that the current kafka version (0.8.1.1) cannot achieve the requirements 
 due to its three behaviors:
 1. when choosing a new leader from 2 followers in ISR, the one with less 
 messages may be chosen as the leader.
 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it 
 has less messages than the leader.
 3. ISR can contain only 1 broker, therefore acknowledged messages may be 
 stored in only 1 broker.
 The following is an analytical proof. 
 We consider a cluster with 3 brokers and a topic with 3 replicas, and assume 
 that at the beginning, all 3 replicas, leader A, followers B and C, are in 
 sync, i.e., they have the same messages and are all in ISR.
 According to the value of request.required.acks (acks for short), there are 
 the following cases.
 1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement.
 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this 
 time, although C hasn't received m, C is still in ISR. If A is killed, C can 
 be elected as the new leader, and consumers will miss m.
 3. acks=-1. B and C restart and are removed from ISR. Producer sends a 
 message m to A, and receives an acknowledgement. Disk failure happens in A 
 before B and C replicate m. Message m is lost.
 In summary, any existing configuration cannot satisfy the requirements.
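The acks=2 case (scenario 2 above) can be traced with a toy simulation. The replica names, the logs dict, and the produce helper here are all invented for illustration and are not Kafka internals:

```python
# Three replicas, all in sync at the start; A is the leader.
logs = {"A": [], "B": [], "C": []}
isr = {"A", "B", "C"}

def produce(msg: str, acks: int) -> bool:
    # Leader A appends and follower B replicates promptly; C lags behind
    # but is not removed from the ISR. acks=2 is satisfied by A and B
    # alone, so the producer receives an acknowledgement anyway.
    logs["A"].append(msg)
    logs["B"].append(msg)
    return acks <= 2

acked = produce("m", acks=2)

# A is killed. C is a legal leader choice because it is still in the ISR,
# even though it never replicated m.
isr.discard("A")
new_leader = "C"
lost = acked and "m" not in logs[new_leader]
print("acknowledged but lost:", lost)
```

This is exactly the loss the description argues for: an acknowledged message disappears once a lagging-but-still-in-ISR follower becomes leader.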



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1645) some more jars in our src release

2014-09-23 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145919#comment-14145919
 ] 

Jun Rao commented on KAFKA-1645:


The jars under the migration tool are all used for system tests. Now that 
0.8 has been out for a year, I feel there is not a strong need to maintain the 
migration tool for much longer. We can probably just remove the migration tool 
test suite. For the piggybank jar, it's used only in hadoop-producer. 
piggybank-0.12.0.jar is available in maven and seems to work with the hadoop 
producer code. We can just remove the jar and add the dependency to piggybank 
in the build.gradle file.
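Jun's suggestion for the piggybank jar would amount to a small change in build.gradle along these lines (a sketch only: the Maven coordinates are assumed from the piggybank 0.12.0 artifact on Maven Central, and the module/configuration names in Kafka's actual build may differ):

```groovy
// Hypothetical sketch: replace the checked-in
// contrib/hadoop-producer/lib/piggybank.jar with a declared dependency.
project(':contrib:hadoop-producer') {
  dependencies {
    compile 'org.apache.pig:piggybank:0.12.0'  // assumed coordinates
  }
}
```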

 some more jars in our src release
 -

 Key: KAFKA-1645
 URL: https://issues.apache.org/jira/browse/KAFKA-1645
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Priority: Blocker
 Fix For: 0.8.2


 The first one is being taken care of in KAFKA-1490 
 the rest... can we just delete them? Do we need/want them anymore? 
 {code}
 root@precise64:~/kafka-0.8.1.1-src# find ./ -name *jar
 ./gradle/wrapper/gradle-wrapper.jar
 ./lib/apache-rat-0.8.jar
 ./system_test/migration_tool_testsuite/0.7/lib/kafka-0.7.0.jar
 ./system_test/migration_tool_testsuite/0.7/lib/kafka-perf-0.7.0.jar
 ./system_test/migration_tool_testsuite/0.7/lib/zkclient-0.1.jar
 ./contrib/hadoop-consumer/lib/piggybank.jar
 ./contrib/hadoop-producer/lib/piggybank.jar
 {code}
 rat is not required in the project; I can speak for that file, +1 to remove it.
 I don't see why we have to keep the other ones nor what code changes we have 
 to make for getting rid of them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)