[jira] [Commented] (KAFKA-2345) Attempt to delete a topic already marked for deletion throws ZkNodeExistsException

2015-07-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631645#comment-14631645
 ] 

Gwen Shapira commented on KAFKA-2345:
-

Thanks for the patch [~singhashish] and for the reviews [~harsha_ch] and 
[~ijuma].
Pushed to trunk.

 Attempt to delete a topic already marked for deletion throws 
 ZkNodeExistsException
 --

 Key: KAFKA-2345
 URL: https://issues.apache.org/jira/browse/KAFKA-2345
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Fix For: 0.8.3

 Attachments: KAFKA-2345.patch, KAFKA-2345_2015-07-17_10:20:55.patch


 Throwing a TopicAlreadyMarkedForDeletionException would make much more sense. 
 A user does not necessarily have to know about the involvement of ZooKeeper in 
 the process.
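
A minimal sketch of the proposed behavior, assuming zkclient's ZkClient API; the helper name markTopicForDeletion and the exception class shown here are illustrative, not the actual Kafka admin code path:
{code}
object TopicDeletionSketch {
  import org.I0Itec.zkclient.ZkClient
  import org.I0Itec.zkclient.exception.ZkNodeExistsException

  class TopicAlreadyMarkedForDeletionException(msg: String)
    extends RuntimeException(msg)

  def markTopicForDeletion(zkClient: ZkClient, topic: String): Unit = {
    val deletePath = s"/admin/delete_topics/$topic" // delete marker znode
    try {
      zkClient.createPersistent(deletePath, true) // true: create parent paths
    } catch {
      case _: ZkNodeExistsException =>
        // Surface a domain-level error instead of leaking the ZK detail
        throw new TopicAlreadyMarkedForDeletionException(
          s"Topic $topic is already marked for deletion")
    }
  }
}
{code}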



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2344) kafka-merge-pr should support reviewers in commit message

2015-07-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631614#comment-14631614
 ] 

Gwen Shapira commented on KAFKA-2344:
-

Good with me [~ijuma].

The JIRA issue was my oversight, I'm not sure we need to change it at this 
point. I'll open a new JIRA if we decide to keep both tools.

 kafka-merge-pr should support reviewers in commit message
 -

 Key: KAFKA-2344
 URL: https://issues.apache.org/jira/browse/KAFKA-2344
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ismael Juma
Priority: Minor

 Two suggestions for the new pr-merge tool:
 * The tool doesn't allow crediting reviewers while committing. I thought the 
 review credits were a nice habit of the Kafka community and I'd hate to lose 
 them. OTOH, I don't want to force-push to trunk just to add credits. Perhaps the 
 tool can ask about committers?
 * Looks like the tool doesn't automatically resolve the JIRA. Would be nice 
 if it did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2345) Attempt to delete a topic already marked for deletion throws ZkNodeExistsException

2015-07-17 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2345:

   Resolution: Fixed
 Reviewer: Gwen Shapira
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

 Attempt to delete a topic already marked for deletion throws 
 ZkNodeExistsException
 --

 Key: KAFKA-2345
 URL: https://issues.apache.org/jira/browse/KAFKA-2345
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Fix For: 0.8.3

 Attachments: KAFKA-2345.patch, KAFKA-2345_2015-07-17_10:20:55.patch


 Throwing a TopicAlreadyMarkedForDeletionException would make much more sense. 
 A user does not necessarily have to know about the involvement of ZooKeeper in 
 the process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1595) Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount

2015-07-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631700#comment-14631700
 ] 

Gwen Shapira commented on KAFKA-1595:
-

[~ijuma] - I'll be happy to review, but can't commit on when I'll get around to 
it - so I hope it's not blocking anything you are working on.

 Remove deprecated and slower scala JSON parser from kafka.consumer.TopicCount
 -

 Key: KAFKA-1595
 URL: https://issues.apache.org/jira/browse/KAFKA-1595
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.8.1.1
Reporter: Jagbir
Assignee: Ismael Juma
  Labels: newbie
 Fix For: 0.8.3


 The following issue is created as a follow-up suggested by Jun Rao
 in a kafka news group message with the subject
 "Blocking Recursive parsing from 
 kafka.consumer.TopicCount$.constructTopicCount"
 SUMMARY:
 An issue was detected in a typical cluster of 3 kafka instances backed
 by 3 zookeeper instances (kafka version 0.8.1.1, scala version 2.10.3,
 java version 1.7.0_65). On the consumer end, when consumers get recycled,
 there is a troubling JSON parsing recursion which takes a busy lock and
 blocks the consumer thread pool.
 In the 0.8.1.1 scala client library, ZookeeperConsumerConnector.scala:355 takes
 a global lock (0xd3a7e1d0) during the rebalance and fires an
 expensive JSON parse, while keeping the other consumers from shutting
 down; see, e.g.,
 at 
 kafka.consumer.ZookeeperConsumerConnector.shutdown(ZookeeperConsumerConnector.scala:161)
 The deep recursive JSON parsing should be deprecated in favor
 of a better JSON parser; see, e.g.,
 http://engineering.ooyala.com/blog/comparing-scala-json-libraries?
 DETAILS:
 The first dump is for a recursive blocking thread holding the lock for 
 0xd3a7e1d0
 and the subsequent dump is for a waiting thread.
 (Please grep for 0xd3a7e1d0 to see the locked object.)
 -8-
 Sa863f22b1e5hjh6788991800900b34545c_profile-a-prod1-s-140789080845312-c397945e8_watcher_executor
 prio=10 tid=0x7f24dc285800 nid=0xda9 runnable [0x7f249e40b000]
 java.lang.Thread.State: RUNNABLE
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.p$7(Parsers.scala:722)
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.continue$1(Parsers.scala:726)
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:737)
 at 
 scala.util.parsing.combinator.Parsers$$anonfun$rep1$1.apply(Parsers.scala:721)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Success.flatMapWithNext(Parsers.scala:142)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$flatMap$1.apply(Parsers.scala:239)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at 
 scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
 at 

[jira] [Commented] (KAFKA-2328) merge-kafka-pr.py script should not leave user in a detached branch

2015-07-16 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14630033#comment-14630033
 ] 

Gwen Shapira commented on KAFKA-2328:
-

2 or 3 both seem reasonable to me

 merge-kafka-pr.py script should not leave user in a detached branch
 ---

 Key: KAFKA-2328
 URL: https://issues.apache.org/jira/browse/KAFKA-2328
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Priority: Minor

 [~gwenshap] asked:
 If I start a merge and cancel (say, by choosing 'n' when asked if I want to 
 proceed), I'm left on a detached branch. Any chance the script can put me 
 back in the original branch? or in trunk?
 Reference 
https://issues.apache.org/jira/browse/KAFKA-2187?focusedCommentId=14621243&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14621243



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2344) Improvements for the pr-merge tool

2015-07-16 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2344:
---

 Summary: Improvements for the pr-merge tool
 Key: KAFKA-2344
 URL: https://issues.apache.org/jira/browse/KAFKA-2344
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Priority: Minor


Two suggestions for the new pr-merge tool:

* The tool doesn't allow crediting reviewers while committing. I thought the 
review credits were a nice habit of the Kafka community and I'd hate to lose 
them. OTOH, I don't want to force-push to trunk just to add credits. Perhaps the tool 
can ask about committers?

* Looks like the tool doesn't automatically resolve the JIRA. Would be nice if 
it did.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2324) Update to Scala 2.11.7

2015-07-16 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14630770#comment-14630770
 ] 

Gwen Shapira commented on KAFKA-2324:
-

+1 and pushed to trunk. Thanks for the patch [~ijuma] and for the review 
[~harsha_ch].

 Update to Scala 2.11.7
 --

 Key: KAFKA-2324
 URL: https://issues.apache.org/jira/browse/KAFKA-2324
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor

 There are a number of fixes and improvements in the Scala 2.11.7 release, 
 which is backwards and forwards compatible with 2.11.6:
 http://www.scala-lang.org/news/2.11.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2324) Update to Scala 2.11.7

2015-07-16 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2324:

   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

 Update to Scala 2.11.7
 --

 Key: KAFKA-2324
 URL: https://issues.apache.org/jira/browse/KAFKA-2324
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Fix For: 0.8.3


 There are a number of fixes and improvements in the Scala 2.11.7 release, 
 which is backwards and forwards compatible with 2.11.6:
 http://www.scala-lang.org/news/2.11.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2337) Verify that metric names will not collide when creating new topics

2015-07-15 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2337:
---

 Summary: Verify that metric names will not collide when creating 
new topics
 Key: KAFKA-2337
 URL: https://issues.apache.org/jira/browse/KAFKA-2337
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


When creating a new topic, convert the proposed topic name to the name that 
will be used in metrics and validate that there are no collisions with existing 
names.

See this discussion for context: http://s.apache.org/snW
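
A rough sketch of what the validation could look like, assuming the metric name is derived by replacing '.' with '_' (the exact mapping may differ) and that the set of existing topic names is available; all names here are illustrative:
{code}
object MetricNameValidation {
  def toMetricName(topic: String): String = topic.replace('.', '_')

  def validateNoCollision(proposed: String, existing: Set[String]): Unit = {
    // Topics whose metric-name form matches the proposed topic's form
    val collisions = existing.filter(t =>
      t != proposed && toMetricName(t) == toMetricName(proposed))
    if (collisions.nonEmpty)
      throw new IllegalArgumentException(
        s"Topic '$proposed' would collide on metric names with: " +
          collisions.mkString(", "))
  }
}
{code}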







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2333) Add rename topic support

2015-07-15 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14628897#comment-14628897
 ] 

Gwen Shapira commented on KAFKA-2333:
-

In pretty much every database in existence, writing to a renamed table will 
cause errors, so that's expected. 
I'm more concerned about in-flight replications; there's a reason delete took 3 
attempts to get right. I hope we can reuse some of that code / tests.

Does it make sense to say that, going forward, anything with a user-visible name 
should also have an internal name that never changes?
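
A rough sketch of that idea, with hypothetical names: the internal id stays fixed while the user-visible name is just a remappable alias.
{code}
import java.util.UUID

object TopicIdentitySketch {
  final case class TopicId(value: UUID) // immutable internal name

  class TopicRegistry {
    private var nameToId = Map.empty[String, TopicId]

    def create(name: String): TopicId = {
      val id = TopicId(UUID.randomUUID())
      nameToId += (name -> id)
      id
    }

    // Rename only remaps the user-visible alias; the internal id is stable,
    // so replicas, logs, and metrics keyed by id are unaffected.
    def rename(oldName: String, newName: String): Unit =
      nameToId.get(oldName).foreach { id =>
        nameToId = nameToId - oldName + (newName -> id)
      }
  }
}
{code}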


 Add rename topic support
 

 Key: KAFKA-2333
 URL: https://issues.apache.org/jira/browse/KAFKA-2333
 Project: Kafka
  Issue Type: New Feature
Reporter: Grant Henke
Assignee: Grant Henke

 Add the ability to change the name of existing topics. 
 This likely needs an associated KIP. This Jira will be updated when one is 
 created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2337) Verify that metric names will not collide when creating new topics

2015-07-15 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2337:

Reviewer: Gwen Shapira

 Verify that metric names will not collide when creating new topics
 --

 Key: KAFKA-2337
 URL: https://issues.apache.org/jira/browse/KAFKA-2337
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Grant Henke

 When creating a new topic, convert the proposed topic name to the name that 
 will be used in metrics and validate that there are no collisions with 
 existing names.
 See this discussion for context: http://s.apache.org/snW



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2337) Verify that metric names will not collide when creating new topics

2015-07-15 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2337:

Assignee: Grant Henke

 Verify that metric names will not collide when creating new topics
 --

 Key: KAFKA-2337
 URL: https://issues.apache.org/jira/browse/KAFKA-2337
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Grant Henke

 When creating a new topic, convert the proposed topic name to the name that 
 will be used in metrics and validate that there are no collisions with 
 existing names.
 See this discussion for context: http://s.apache.org/snW



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2288) Follow-up to KAFKA-2249 - reduce logging and testing

2015-07-15 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14628995#comment-14628995
 ] 

Gwen Shapira commented on KAFKA-2288:
-

Will do. I missed the RB comment.

 Follow-up to KAFKA-2249 - reduce logging and testing
 

 Key: KAFKA-2288
 URL: https://issues.apache.org/jira/browse/KAFKA-2288
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2288.patch, KAFKA-2288_2015-06-22_10:02:27.patch


 As [~junrao] commented on KAFKA-2249, we have a needless test and we are 
 logging configuration for every single partition now. 
 Let's reduce the noise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2198) kafka-topics.sh exits with 0 status on failures

2015-07-13 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625827#comment-14625827
 ] 

Gwen Shapira commented on KAFKA-2198:
-

Thanks for the patch! Pushed to trunk.

 kafka-topics.sh exits with 0 status on failures
 ---

 Key: KAFKA-2198
 URL: https://issues.apache.org/jira/browse/KAFKA-2198
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 0.8.2.1
Reporter: Bob Halley
Assignee: Manikumar Reddy
 Attachments: KAFKA-2198.patch, KAFKA-2198_2015-05-19_18:27:01.patch, 
 KAFKA-2198_2015-05-19_18:41:25.patch, KAFKA-2198_2015-07-10_22:02:02.patch, 
 KAFKA-2198_2015-07-10_23:11:23.patch, KAFKA-2198_2015-07-13_19:24:46.patch


 In the two failure cases below, kafka-topics.sh exits with status 0.  You 
 shouldn't need to parse output from the command to know if it failed or not.
 Case 1: Forgetting to add Kafka zookeeper chroot path to zookeeper spec
 $ kafka-topics.sh --alter --topic foo --config min.insync.replicas=2 
 --zookeeper 10.0.0.1 && echo succeeded
 succeeded
 Case 2: Bad config option.  (Also, do we really need the java backtrace?  
 It's a lot of noise most of the time.)
 $ kafka-topics.sh --alter --topic foo --config min.insync.replicasTYPO=2 
 --zookeeper 10.0.0.1/kafka && echo succeeded
 Error while executing topic command requirement failed: Unknown configuration 
 min.insync.replicasTYPO.
 java.lang.IllegalArgumentException: requirement failed: Unknown configuration 
 min.insync.replicasTYPO.
 at scala.Predef$.require(Predef.scala:233)
 at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:183)
 at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:182)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at kafka.log.LogConfig$.validateNames(LogConfig.scala:182)
 at kafka.log.LogConfig$.validate(LogConfig.scala:190)
 at 
 kafka.admin.TopicCommand$.parseTopicConfigsToBeAdded(TopicCommand.scala:205)
 at 
 kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:103)
 at 
 kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:100)
 at 
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:100)
 at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
 at kafka.admin.TopicCommand.main(TopicCommand.scala)
 succeeded
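
A minimal sketch of the fix direction, with hypothetical names (runCommand stands in for the real dispatch inside TopicCommand): catch failures and exit non-zero instead of falling through.
{code}
object TopicCommandExitSketch {
  def main(args: Array[String]): Unit = {
    var exitCode = 0
    try {
      runCommand(args) // hypothetical dispatch to --alter/--create/...
    } catch {
      case e: Throwable =>
        System.err.println("Error while executing topic command: " + e.getMessage)
        exitCode = 1 // signal failure to callers and scripts
    }
    System.exit(exitCode)
  }

  private def runCommand(args: Array[String]): Unit = () // placeholder
}
{code}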



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2214) kafka-reassign-partitions.sh --verify should return non-zero exit codes when reassignment is not completed yet

2015-07-13 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14625844#comment-14625844
 ] 

Gwen Shapira commented on KAFKA-2214:
-

Thank you!

Can you also address [~miguno]'s comment? 
I think your suggestion to return different error codes for failed and 
in-progress is reasonable.

 kafka-reassign-partitions.sh --verify should return non-zero exit codes when 
 reassignment is not completed yet
 --

 Key: KAFKA-2214
 URL: https://issues.apache.org/jira/browse/KAFKA-2214
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Affects Versions: 0.8.1.1, 0.8.2.0
Reporter: Michael Noll
Assignee: Manikumar Reddy
Priority: Minor
 Attachments: KAFKA-2214.patch, KAFKA-2214_2015-07-10_21:56:04.patch, 
 KAFKA-2214_2015-07-13_21:10:58.patch


 h4. Background
 The admin script {{kafka-reassign-partitions.sh}} should integrate better 
 with automation tools such as Ansible, which rely on scripts adhering to Unix 
 best practices such as appropriate exit codes on success/failure.
 h4. Current behavior (incorrect)
 When reassignments are still in progress {{kafka-reassign-partitions.sh}} 
 prints {{ERROR}} messages but returns an exit code of zero, which indicates 
 success.  This behavior makes it a bit cumbersome to integrate the script 
 into automation tools such as Ansible.
 {code}
 $ kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 
 --reassignment-json-file partitions-to-move.json --verify
 Status of partition reassignment:
 ERROR: Assigned replicas (316,324,311) don't match the list of replicas for 
 reassignment (316,324) for partition [mytopic,2]
 Reassignment of partition [mytopic,0] completed successfully
 Reassignment of partition [myothertopic,1] completed successfully
 Reassignment of partition [myothertopic,3] completed successfully
 ...
 $ echo $?
 0
 # But preferably the exit code in the presence of ERRORs should be, say, 1.
 {code}
 h3. How to improve
 I'd suggest that, using the above as the running example, if there are any 
 {{ERROR}} entries in the output (i.e. if there are any assignments remaining 
 that don't match the desired assignments), then the 
 {{kafka-reassign-partitions.sh}}  should return a non-zero exit code.
 h3. Notes
 In Kafka 0.8.2 the output is a bit different: The ERROR messages are now 
 phrased differently.
 Before:
 {code}
 ERROR: Assigned replicas (316,324,311) don't match the list of replicas for 
 reassignment (316,324) for partition [mytopic,2]
 {code}
 Now:
 {code}
 Reassignment of partition [mytopic,2] is still in progress
 {code}
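
A minimal sketch of the exit-code scheme suggested in the comments (0 = done, 1 = failed, 2 = still in progress); Status and statuses are stand-ins for whatever the --verify pass actually computes:
{code}
object ReassignExitCodes {
  sealed trait Status
  case object Completed extends Status
  case object InProgress extends Status
  case object Failed extends Status

  // Failure outranks in-progress; only all-completed returns success.
  def exitCodeFor(statuses: Seq[Status]): Int =
    if (statuses.contains(Failed)) 1
    else if (statuses.contains(InProgress)) 2
    else 0
}
{code}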



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2162) Kafka Auditing functionality

2015-07-12 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14624158#comment-14624158
 ] 

Gwen Shapira commented on KAFKA-2162:
-

I like starting with requirements.
So I asked a few security folks I work with what they would want to see in their 
audit log (after I explained that logging every single message is unfeasible...)

Here's what I got from them:
* Authorization failures / denies
* Ticket renewals for Kerberos (is it also a thing for SSL?)
* Session starts for SSL (and in other places where sessions apply)

The first item can be done with the authorizer, but I can't see how the 
authorizer will log ticket renewals and session expiration. Any thoughts?

 Kafka Auditing functionality
 

 Key: KAFKA-2162
 URL: https://issues.apache.org/jira/browse/KAFKA-2162
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Parth Brahmbhatt

 During the Kafka authorization discussion thread, concerns were raised about 
 not having auditing. Auditing is important functionality, but it's not part of 
 the authorizer. This jira will track adding audit functionality to kafka.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1695) Authenticate connection to Zookeeper

2015-07-10 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14623072#comment-14623072
 ] 

Gwen Shapira commented on KAFKA-1695:
-

Sure, go ahead.

 Authenticate connection to Zookeeper
 

 Key: KAFKA-1695
 URL: https://issues.apache.org/jira/browse/KAFKA-1695
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira

 We need to make it possible to secure the Zookeeper cluster Kafka is using. 
 This would make use of the normal authentication ZooKeeper provides. 
 ZooKeeper supports a variety of authentication mechanisms so we will need to 
 figure out what has to be passed in to the zookeeper client.
 The intention is that when the current round of client work is done it should 
 be possible to run without clients needing access to Zookeeper so all we need 
 here is to make it so that only the Kafka cluster is able to read and write 
 to the Kafka znodes  (we shouldn't need to set any kind of acl on a per-znode 
 basis).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2198) kafka-topics.sh exits with 0 status on failures

2015-07-10 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622574#comment-14622574
 ] 

Gwen Shapira commented on KAFKA-2198:
-

Hope it's ok if I review this, [~omkreddy].

 kafka-topics.sh exits with 0 status on failures
 ---

 Key: KAFKA-2198
 URL: https://issues.apache.org/jira/browse/KAFKA-2198
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 0.8.2.1
Reporter: Bob Halley
Assignee: Manikumar Reddy
 Attachments: KAFKA-2198.patch, KAFKA-2198_2015-05-19_18:27:01.patch, 
 KAFKA-2198_2015-05-19_18:41:25.patch, KAFKA-2198_2015-07-10_22:02:02.patch


 In the two failure cases below, kafka-topics.sh exits with status 0.  You 
 shouldn't need to parse output from the command to know if it failed or not.
 Case 1: Forgetting to add Kafka zookeeper chroot path to zookeeper spec
 $ kafka-topics.sh --alter --topic foo --config min.insync.replicas=2 
 --zookeeper 10.0.0.1 && echo succeeded
 succeeded
 Case 2: Bad config option.  (Also, do we really need the java backtrace?  
 It's a lot of noise most of the time.)
 $ kafka-topics.sh --alter --topic foo --config min.insync.replicasTYPO=2 
 --zookeeper 10.0.0.1/kafka && echo succeeded
 Error while executing topic command requirement failed: Unknown configuration 
 min.insync.replicasTYPO.
 java.lang.IllegalArgumentException: requirement failed: Unknown configuration 
 min.insync.replicasTYPO.
 at scala.Predef$.require(Predef.scala:233)
 at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:183)
 at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:182)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at kafka.log.LogConfig$.validateNames(LogConfig.scala:182)
 at kafka.log.LogConfig$.validate(LogConfig.scala:190)
 at 
 kafka.admin.TopicCommand$.parseTopicConfigsToBeAdded(TopicCommand.scala:205)
 at 
 kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:103)
 at 
 kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:100)
 at 
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:100)
 at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
 at kafka.admin.TopicCommand.main(TopicCommand.scala)
 succeeded



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2214) kafka-reassign-partitions.sh --verify should return non-zero exit codes when reassignment is not completed yet

2015-07-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2214:

Reviewer: Gwen Shapira  (was: Neha Narkhede)

 kafka-reassign-partitions.sh --verify should return non-zero exit codes when 
 reassignment is not completed yet
 --

 Key: KAFKA-2214
 URL: https://issues.apache.org/jira/browse/KAFKA-2214
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Affects Versions: 0.8.1.1, 0.8.2.0
Reporter: Michael Noll
Assignee: Manikumar Reddy
Priority: Minor
 Attachments: KAFKA-2214.patch, KAFKA-2214_2015-07-10_21:56:04.patch


 h4. Background
 The admin script {{kafka-reassign-partitions.sh}} should integrate better 
 with automation tools such as Ansible, which rely on scripts adhering to Unix 
 best practices such as appropriate exit codes on success/failure.
 h4. Current behavior (incorrect)
 When reassignments are still in progress {{kafka-reassign-partitions.sh}} 
 prints {{ERROR}} messages but returns an exit code of zero, which indicates 
 success.  This behavior makes it a bit cumbersome to integrate the script 
 into automation tools such as Ansible.
 {code}
 $ kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 
 --reassignment-json-file partitions-to-move.json --verify
 Status of partition reassignment:
 ERROR: Assigned replicas (316,324,311) don't match the list of replicas for 
 reassignment (316,324) for partition [mytopic,2]
 Reassignment of partition [mytopic,0] completed successfully
 Reassignment of partition [myothertopic,1] completed successfully
 Reassignment of partition [myothertopic,3] completed successfully
 ...
 $ echo $?
 0
 # But preferably the exit code in the presence of ERRORs should be, say, 1.
 {code}
 h3. How to improve
 I'd suggest that, using the above as the running example, if there are any 
 {{ERROR}} entries in the output (i.e. if there are any assignments remaining 
 that don't match the desired assignments), then the 
 {{kafka-reassign-partitions.sh}}  should return a non-zero exit code.
 h3. Notes
 In Kafka 0.8.2 the output is a bit different: The ERROR messages are now 
 phrased differently.
 Before:
 {code}
 ERROR: Assigned replicas (316,324,311) don't match the list of replicas for 
 reassignment (316,324) for partition [mytopic,2]
 {code}
 Now:
 {code}
 Reassignment of partition [mytopic,2] is still in progress
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2327) broker doesn't start if config defines advertised.host but not advertised.port

2015-07-09 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621045#comment-14621045
 ] 

Gwen Shapira commented on KAFKA-2327:
-

Oops. My bug :)

Do you have a patch, or shall I fix this?

 broker doesn't start if config defines advertised.host but not advertised.port
 --

 Key: KAFKA-2327
 URL: https://issues.apache.org/jira/browse/KAFKA-2327
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Geoffrey Anderson
Assignee: Geoffrey Anderson
Priority: Minor

 To reproduce locally, in server.properties, define "advertised.host" and 
 "port", but not "advertised.port":
 port=9092
 advertised.host.name=localhost
 Then start zookeeper and try to start kafka. The result is an error like so:
 [2015-07-09 11:29:20,760] FATAL  (kafka.Kafka$)
 kafka.common.KafkaException: Unable to parse PLAINTEXT://localhost:null to a 
 broker endpoint
   at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:49)
   at 
 kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
   at 
 kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
   at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at 
 scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
   at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:309)
   at 
 kafka.server.KafkaConfig.getAdvertisedListeners(KafkaConfig.scala:728)
   at kafka.server.KafkaConfig.init(KafkaConfig.scala:668)
   at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:541)
   at kafka.Kafka$.main(Kafka.scala:58)
   at kafka.Kafka.main(Kafka.scala)
 Looks like this was changed in 5c9040745466945a04ea0315de583ccdab0614ac.
 The cause seems to be in KafkaConfig.scala in the getAdvertisedListeners 
 method, and I believe the fix is (starting at line 727)
 {code}
 ...
 } else if (getString(KafkaConfig.AdvertisedHostNameProp) != null || 
 getInt(KafkaConfig.AdvertisedPortProp) != null) {
   CoreUtils.listenerListToEndPoints("PLAINTEXT://" +
 getString(KafkaConfig.AdvertisedHostNameProp) + ":" + 
 getInt(KafkaConfig.AdvertisedPortProp))
 ...
 {code}
 -
 {code}
 } else if (getString(KafkaConfig.AdvertisedHostNameProp) != null || 
 getInt(KafkaConfig.AdvertisedPortProp) != null) {
   CoreUtils.listenerListToEndPoints("PLAINTEXT://" +
 advertisedHostName + ":" + advertisedPort)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2187) Introduce merge-kafka-pr.py script

2015-07-09 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621217#comment-14621217
 ] 

Gwen Shapira commented on KAFKA-2187:
-

I think that per Apache, we need to acknowledge Spark somewhere since we 
essentially lifted their code. LICENSE file, maybe?

 Introduce merge-kafka-pr.py script
 --

 Key: KAFKA-2187
 URL: https://issues.apache.org/jira/browse/KAFKA-2187
 Project: Kafka
  Issue Type: New Feature
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2187.patch, KAFKA-2187_2015-05-20_23:14:05.patch, 
 KAFKA-2187_2015-06-02_20:05:50.patch


 This script will be used to merge GitHub pull requests and it will pull from 
 the Apache Git repo to the current branch, squash and merge the PR, push the 
 commit to trunk, close the PR (via commit message) and close the relevant 
 JIRA issue (via JIRA API).
 Spark has a script that does most (if not all) of this and that will be used 
 as the starting point:
 https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2187) Introduce merge-kafka-pr.py script

2015-07-09 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621239#comment-14621239
 ] 

Gwen Shapira commented on KAFKA-2187:
-

Sorry, I missed that. I think it's good and I don't see any legal verbiage about 
having it in LICENSE. 
(http://www.apache.org/dev/licensing-howto.html)

 Introduce merge-kafka-pr.py script
 --

 Key: KAFKA-2187
 URL: https://issues.apache.org/jira/browse/KAFKA-2187
 Project: Kafka
  Issue Type: New Feature
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2187.patch, KAFKA-2187_2015-05-20_23:14:05.patch, 
 KAFKA-2187_2015-06-02_20:05:50.patch


 This script will be used to merge GitHub pull requests and it will pull from 
 the Apache Git repo to the current branch, squash and merge the PR, push the 
 commit to trunk, close the PR (via commit message) and close the relevant 
 JIRA issue (via JIRA API).
 Spark has a script that does most (if not all) of this and that will be used 
 as the starting point:
 https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2187) Introduce merge-kafka-pr.py script

2015-07-09 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621243#comment-14621243
 ] 

Gwen Shapira commented on KAFKA-2187:
-

One more thing:
If I start a merge and cancel (say, by choosing 'n' when asked if I want to 
proceed), I'm left on a detached branch. Any chance the script can put me back 
in the original branch? or in trunk?

 Introduce merge-kafka-pr.py script
 --

 Key: KAFKA-2187
 URL: https://issues.apache.org/jira/browse/KAFKA-2187
 Project: Kafka
  Issue Type: New Feature
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Attachments: KAFKA-2187.patch, KAFKA-2187_2015-05-20_23:14:05.patch, 
 KAFKA-2187_2015-06-02_20:05:50.patch


 This script will be used to merge GitHub pull requests and it will pull from 
 the Apache Git repo to the current branch, squash and merge the PR, push the 
 commit to trunk, close the PR (via commit message) and close the relevant 
 JIRA issue (via JIRA API).
 Spark has a script that does most (if not all) of this and that will be used 
 as the starting point:
 https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2327) broker doesn't start if config defines advertised.host but not advertised.port

2015-07-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2327.
-
   Resolution: Fixed
Fix Version/s: 0.8.3

Merged PR.

 broker doesn't start if config defines advertised.host but not advertised.port
 --

 Key: KAFKA-2327
 URL: https://issues.apache.org/jira/browse/KAFKA-2327
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.3
Reporter: Geoffrey Anderson
Assignee: Geoffrey Anderson
Priority: Minor
 Fix For: 0.8.3


 To reproduce locally, in server.properties, define "advertised.host" and 
 "port", but not "advertised.port":
 port=9092
 advertised.host.name=localhost
 Then start zookeeper and try to start kafka. The result is an error like so:
 [2015-07-09 11:29:20,760] FATAL  (kafka.Kafka$)
 kafka.common.KafkaException: Unable to parse PLAINTEXT://localhost:null to a 
 broker endpoint
   at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:49)
   at 
 kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
   at 
 kafka.utils.CoreUtils$$anonfun$listenerListToEndPoints$1.apply(CoreUtils.scala:309)
   at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
   at 
 scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
   at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:309)
   at 
 kafka.server.KafkaConfig.getAdvertisedListeners(KafkaConfig.scala:728)
   at kafka.server.KafkaConfig.init(KafkaConfig.scala:668)
   at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:541)
   at kafka.Kafka$.main(Kafka.scala:58)
   at kafka.Kafka.main(Kafka.scala)
 Looks like this was changed in 5c9040745466945a04ea0315de583ccdab0614ac.
 The cause seems to be in KafkaConfig.scala in the getAdvertisedListeners 
 method, and I believe the fix is (starting at line 727)
 {code}
 ...
 } else if (getString(KafkaConfig.AdvertisedHostNameProp) != null || 
 getInt(KafkaConfig.AdvertisedPortProp) != null) {
   CoreUtils.listenerListToEndPoints("PLAINTEXT://" +
 getString(KafkaConfig.AdvertisedHostNameProp) + ":" + 
 getInt(KafkaConfig.AdvertisedPortProp))
 ...
 {code}
 -
 {code}
 } else if (getString(KafkaConfig.AdvertisedHostNameProp) != null || 
 getInt(KafkaConfig.AdvertisedPortProp) != null) {
   CoreUtils.listenerListToEndPoints("PLAINTEXT://" +
 advertisedHostName + ":" + advertisedPort)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2187) Introduce merge-kafka-pr.py script

2015-07-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2187:

   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

 Introduce merge-kafka-pr.py script
 --

 Key: KAFKA-2187
 URL: https://issues.apache.org/jira/browse/KAFKA-2187
 Project: Kafka
  Issue Type: New Feature
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2187.patch, KAFKA-2187_2015-05-20_23:14:05.patch, 
 KAFKA-2187_2015-06-02_20:05:50.patch


 This script will be used to merge GitHub pull requests and it will pull from 
 the Apache Git repo to the current branch, squash and merge the PR, push the 
 commit to trunk, close the PR (via commit message) and close the relevant 
 JIRA issue (via JIRA API).
 Spark has a script that does most (if not all) of this and that will be used 
 as the starting point:
 https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2187) Introduce merge-kafka-pr.py script

2015-07-09 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14621384#comment-14621384
 ] 

Gwen Shapira commented on KAFKA-2187:
-

I accidentally committed this script as part of PR 73 (I did a git add, 
intending to commit this separately, and then ended up committing PR 73 from the 
same branch).

I think the script is in good condition, so I'm not taking it out. Just file a 
new JIRA to fix the branch thing when you have a chance.

 Introduce merge-kafka-pr.py script
 --

 Key: KAFKA-2187
 URL: https://issues.apache.org/jira/browse/KAFKA-2187
 Project: Kafka
  Issue Type: New Feature
Reporter: Ismael Juma
Assignee: Ismael Juma
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2187.patch, KAFKA-2187_2015-05-20_23:14:05.patch, 
 KAFKA-2187_2015-06-02_20:05:50.patch


 This script will be used to merge GitHub pull requests and it will pull from 
 the Apache Git repo to the current branch, squash and merge the PR, push the 
 commit to trunk, close the PR (via commit message) and close the relevant 
 JIRA issue (via JIRA API).
 Spark has a script that does most (if not all) of this and that will be used 
 as the starting point:
 https://github.com/apache/spark/blob/master/dev/merge_spark_pr.py



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618873#comment-14618873
 ] 

Gwen Shapira commented on KAFKA-2308:
-

yes, I saw the Snappy test case too :)

Since it's a confirmed Snappy bug, I don't think we need a Kafka test case. We 
can just protect that call, right?

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart, is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give you time. I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at 
 

[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619093#comment-14619093
 ] 

Gwen Shapira commented on KAFKA-2316:
-

ouch, yeah. I guess Jenkins is still running with Java 6.

I don't have access to modify our build job; let's see if someone on the PMC can 
help:
[~joestein] [~junrao] [~guozhang] [~jjkoshy] ?

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14619143#comment-14619143
 ] 

Gwen Shapira commented on KAFKA-2316:
-

Apache Member [~jarcec] volunteered to help out! 
Please go ahead and upgrade our Java, Jarcec. 

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618901#comment-14618901
 ] 

Gwen Shapira commented on KAFKA-2308:
-

Was that a ship it, [~guozhang]? :)

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart, is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give you time. I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at 
 kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:286)
 at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:270)
 at 

[jira] [Updated] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2316:

   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2316) Drop java 1.6 support

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618976#comment-14618976
 ] 

Gwen Shapira commented on KAFKA-2316:
-

+1. Committed and pushed to trunk.

 Drop java 1.6 support
 -

 Key: KAFKA-2316
 URL: https://issues.apache.org/jira/browse/KAFKA-2316
 Project: Kafka
  Issue Type: Bug
Reporter: Sriharsha Chintalapani
Assignee: Sriharsha Chintalapani
 Fix For: 0.8.3

 Attachments: KAFKA-2316.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618981#comment-14618981
 ] 

Gwen Shapira commented on KAFKA-2308:
-

Thanks :)

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart, is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give you time. I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at 
 kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:286)
 at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:270)
 at 

[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-07 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14617096#comment-14617096
 ] 

Gwen Shapira commented on KAFKA-2308:
-

I kinda know why it happens; a patch will take a bit of time since I can't 
reproduce the error in a unit test (although it reproduces nicely in a test 
environment). I'll put a preliminary patch without unit tests up in a sec, so 
people suffering from this issue can validate. Here's what I found:

When we get a retriable error in the producer (NETWORK_EXCEPTION, for instance), 
completeBatch() puts the current record batch back at the front of its 
topic-partition message batch queue.
Next time Sender runs, it drains the queue, and one of the things it does is 
take the first batch from the queue and close() it. But if a batch was 
re-queued, it was already closed. Calling close() twice should be safe, and for 
un-compressed messages it is. However, for compressed messages the logic in 
close() is rather complex, and I believe closing a batch twice corrupts the 
record. I can't tell exactly where the close() logic becomes unsafe, but 
there's really no need to close a batch twice: MemoryRecords.close() can check 
whether it is still writable and only finalize the records if so. This 
guarantees closing happens just once. 

Fixing this resolved the problem on my system.
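
To illustrate the fix, here's a minimal sketch of the guard (illustrative names 
only, not the actual MemoryRecords internals):

{code}
// Sketch of an idempotent close(): only the first call finalizes the
// compressed stream; a later call (e.g. after completeBatch() re-queued the
// batch and Sender drained it again) becomes a no-op instead of re-running
// the compressor's end-of-stream logic.
public class RecordBatchSketch {
    private boolean writable = true;

    public void close() {
        if (writable) {
            writable = false;
            // ... flush the compressor and write the final block here ...
        }
    }
}
{code}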



 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira

 Looks like the new producer, when used with Snappy, following a broker 
 restart is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give yourself time; I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 

[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-07 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14617878#comment-14617878
 ] 

Gwen Shapira commented on KAFKA-2308:
-

Thanks for your comments, [~guozhang].

Closing multiple times on Snappy is not an issue. Actually, even draining, 
requeueing and draining again is not an issue by itself.
That is, I'm still unable to create a test that replicates this error, even 
though it reproduces nicely in a real cluster with the performance producer.

I have a patch that I'm fairly certain fixes the problem (although I cannot say 
why). I'll attach it here, because someone may need it, and continue digging 
into when and why double-close corrupts messages.

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira

 Looks like the new producer, when used with Snappy, following a broker 
 restart is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give yourself time; I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at 

[jira] [Updated] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-07 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2308:

Status: Patch Available  (was: Open)

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give yourself time; I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at 
 kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:286)
 at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:270)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
 at 

[jira] [Updated] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-07 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2308:

Attachment: KAFKA-2308.patch

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give yourself time; I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at 
 kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:286)
 at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:270)
 at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
 at 

[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-07 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14617887#comment-14617887
 ] 

Gwen Shapira commented on KAFKA-2308:
-

Created reviewboard https://reviews.apache.org/r/36290/diff/
 against branch trunk

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give yourself time; I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
 at 
 kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:286)
 at 

[jira] [Comment Edited] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-07 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14617878#comment-14617878
 ] 

Gwen Shapira edited comment on KAFKA-2308 at 7/8/15 2:54 AM:
-

Thanks for your comments, [~guozhang].

Closing multiple times on Snappy is not an issue. Actually, even draining, 
requeueing and draining again is not an issue by itself.
That is, I'm still unable to create a test that replicates this error, even 
though it reproduces nicely in a real cluster with the performance producer.

I have a patch that I'm fairly certain fixes the problem (although I cannot say 
why). I'll attach it here, because someone may need it, and continue digging 
into when and why double-close corrupts messages.

Here's a link to a test that should have caused an issue, but doesn't:
https://gist.github.com/gwenshap/1ec9cb55d704a82477d8


was (Author: gwenshap):
Thanks for your comments, [~guozhang].

Closing multiple times on Snappy is not an issue. Actually, even draining, 
requeueing and draining again is not an issue by itself.
That is, I'm still unable to create a test that replicates this error, even 
though it reproduces nicely in a real cluster with the performance producer.

I have a patch that I'm fairly certain fixes the problem (although I cannot say 
why). I'll attach it here, because someone may need it, and continue digging 
into when and why double-close corrupts messages.

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give yourself time; I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at 

[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-07 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14617949#comment-14617949
 ] 

Gwen Shapira commented on KAFKA-2308:
-

Actually, it looks likely that it is Snappy (even though I can't reproduce it):
https://github.com/xerial/snappy-java/pull/108

Note that this fix is not in 1.1.1.7 (which we are using).

I suggest pushing our work-around (it's simple, and nothing bad can happen from 
closing only once).

 New producer + Snappy face un-compression errors after broker restart
 -

 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2308.patch


 Looks like the new producer, when used with Snappy, following a broker 
 restart is sending messages the brokers can't decompress. This issue was 
 discussed in a few mailing list threads, but I don't think we ever resolved it.
 I can reproduce with trunk and Snappy 1.1.1.7. 
 To reproduce:
 1. Start 3 brokers
 2. Create a topic with 3 partitions and 3 replicas each.
 3. Start the performance producer with --new-producer --compression-codec 2 (and 
 set the number of messages fairly high, to give yourself time; I went with 10M)
 4. Bounce one of the brokers
 5. The log of one of the surviving nodes should contain errors like:
 {code}
 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
 on Broker 66]: Error processing append operation on partition [t3,0]
 kafka.common.KafkaException:
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
 at 
 kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
 at 
 kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
 at 
 kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
 at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
 at scala.collection.Iterator$class.foreach(Iterator.scala:727)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
 at 
 scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
 at 
 scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
 at 
 scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
 at scala.collection.AbstractIterator.to(Iterator.scala:1157)
 at 
 scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
 at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
 at 
 kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
 at kafka.log.Log.liftedTree1$1(Log.scala:327)
 at kafka.log.Log.append(Log.scala:326)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
 at 
 kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
 at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
 at 
 kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
 at 
 scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
 at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
 at 
 scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
 at scala.collection.AbstractTraversable.map(Traversable.scala:105)
 at 
 

[jira] [Created] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-07-02 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2308:
---

 Summary: New producer + Snappy face un-compression errors after 
broker restart
 Key: KAFKA-2308
 URL: https://issues.apache.org/jira/browse/KAFKA-2308
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira


Looks like the new producer, when used with Snappy, following a broker restart 
is sending messages the brokers can't decompress. This issue was discussed in 
a few mailing list threads, but I don't think we ever resolved it.

I can reproduce with trunk and Snappy 1.1.1.7. 

To reproduce:
1. Start 3 brokers
2. Create a topic with 3 partitions and 3 replicas each.
3. Start the performance producer with --new-producer --compression-codec 2 (and 
set the number of messages fairly high, to give yourself time; I went with 10M)
4. Bounce one of the brokers
5. The log of one of the surviving nodes should contain errors like:

{code}
2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager on 
Broker 66]: Error processing append operation on partition [t3,0]
kafka.common.KafkaException:
at 
kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
at 
kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
at 
kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
at 
kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
at 
kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
at 
kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
at 
kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at 
scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at 
scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at 
scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at 
kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
at kafka.log.Log.liftedTree1$1(Log.scala:327)
at kafka.log.Log.append(Log.scala:326)
at 
kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
at 
kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at 
scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at 
kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:350)
at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:286)
at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:270)
at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: PARSING_ERROR(2)
at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:84)
at org.xerial.snappy.SnappyNative.uncompressedLength(Native Method)
at 

[jira] [Resolved] (KAFKA-2165) ReplicaFetcherThread: data loss on unknown exception

2015-07-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2165.
-
Resolution: Not A Problem

 ReplicaFetcherThread: data loss on unknown exception
 

 Key: KAFKA-2165
 URL: https://issues.apache.org/jira/browse/KAFKA-2165
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Alexey Ozeritskiy
 Attachments: KAFKA-2165.patch


 Sometimes in our cluster a replica falls out of the ISR. Then the broker 
 re-downloads the partition from the beginning. We got the following messages 
 in the logs:
 {code}
 # The leader:
 [2015-03-25 11:11:07,796] ERROR [Replica Manager on Broker 21]: Error when 
 processing fetch request for partition [topic,11] offset 54369274 from 
 follower with correlation id 2634499. Possible cause: Request for offset 
 54369274 but we only have log segments in the range 49322124 to 54369273. 
 (kafka.server.ReplicaManager)
 {code}
 {code}
 # The follower:
 [2015-03-25 11:11:08,816] WARN [ReplicaFetcherThread-0-21], Replica 31 for 
 partition [topic,11] reset its fetch offset from 49322124 to current leader 
 21's start offset 49322124 (kafka.server.ReplicaFetcherThread)
 [2015-03-25 11:11:08,816] ERROR [ReplicaFetcherThread-0-21], Current offset 
 54369274 for partition [topic,11] out of range; reset offset to 49322124 
 (kafka.server.ReplicaFetcherThread)
 {code}
 This occurs because we update fetchOffset 
 [here|https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/server/AbstractFetcherThread.scala#L124]
  and then try to process the message. 
 If any exception except OffsetOutOfRangeCode occurs, fetchOffset and 
 replica.logEndOffset become unsynchronized.
 On the next fetch iteration we can get 
 fetchOffset > replica.logEndOffset == leaderEndOffset and OffsetOutOfRangeCode.
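
 A minimal sketch of the safer ordering (a Java stand-in for the Scala code in 
 AbstractFetcherThread; the helper names are hypothetical):
 {code}
 // Advance the in-memory fetch offset only after the append succeeds, so an
 // unexpected exception cannot leave fetchOffset ahead of replica.logEndOffset.
 final class FetchOffsetSketch {
     static long advance(long fetchOffset, long nextOffset, Runnable appendToLog) {
         try {
             appendToLog.run();  // process the fetched message set
             return nextOffset;  // advance only on success
         } catch (RuntimeException e) {
             return fetchOffset; // stay in sync; retry the same range next time
         }
     }
 }
 {code}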



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2292) failed fetch request logging doesn't indicate source of request

2015-06-26 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2292:

Assignee: Ted Malaska

 failed fetch request logging doesn't indicate source of request
 ---

 Key: KAFKA-2292
 URL: https://issues.apache.org/jira/browse/KAFKA-2292
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg
Assignee: Ted Malaska

 I am trying to figure out the source of a consumer client that is issuing 
 out-of-range offset requests for a topic on one of our brokers (we are running 
 0.8.2.1).
 I see log lines like this:
 {code}
 2015-06-20 06:17:24,718 ERROR [kafka-request-handler-4] server.ReplicaManager 
 - [Replica Manager on Broker 123]: Error when processing fetch request for 
 partition [mytopic,0] offset 82754176 from consumer with correlation id 596. 
 Possible cause: Request for offset 82754176 but we only have log segments in 
 the range 82814171 to 83259786.
 {code}
 Successive log lines are similar, but with the correlation id incremented, 
 etc.
 Unfortunately, the correlation id is not particularly useful here in the 
 logging, because I have nothing to trace it back to in order to understand 
 which connected consumer is issuing this request. It would be useful if the 
 logging included an IP address or a clientId.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2203) Get gradle build to work with Java 8

2015-06-24 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599681#comment-14599681
 ] 

Gwen Shapira commented on KAFKA-2203:
-

This should be done pre-commit; nearly every Apache project does it.

Just finish this:
https://issues.apache.org/jira/browse/KAFKA-1856

Every time we attach a patch (or, with a small modification, open a pull 
request), Apache Jenkins will run all the tests and builds we need and post the 
results on the JIRA. Our review cycle is long enough that Apache infra latency 
shouldn't be a major concern, and committers will be able to know when a patch 
passes.

 Get gradle build to work with Java 8
 

 Key: KAFKA-2203
 URL: https://issues.apache.org/jira/browse/KAFKA-2203
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.1.1
Reporter: Gaju Bhat
Priority: Minor
 Fix For: 0.8.1.2

 Attachments: 0001-Special-case-java-8-and-javadoc-handling.patch


 The gradle build halts because javadoc in java 8 is a lot stricter about 
 valid html.
 It might be worthwhile to special case java 8 as described 
 [here|http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2203) Get gradle build to work with Java 8

2015-06-24 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599823#comment-14599823
 ] 

Gwen Shapira commented on KAFKA-2203:
-

Having pre-commit comments on a JIRA will be significantly better than what we 
have at the moment.

I agree that there are additional concerns:
* Speed / parallelism (I think Hive and Sqoop solved it by running tests 
themselves in parallel)
* We need a good way to test with multiple jdk versions - possibly with 
multiple Jenkins jobs
* Integration of Travis with JIRA (IMO that's the biggest concern at the moment)

However, none of those seem unsolvable, and even a partial solution will be 
very useful. I really miss the pre-commit comments I get with other projects.  
It just seems to me that we can get cool improvements with very little effort. 
KAFKA-1856 is waiting for someone to create a Jenkins job and set up the 
pre-commit trigger.

 Get gradle build to work with Java 8
 

 Key: KAFKA-2203
 URL: https://issues.apache.org/jira/browse/KAFKA-2203
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.8.1.1
Reporter: Gaju Bhat
Priority: Minor
 Fix For: 0.8.1.2

 Attachments: 0001-Special-case-java-8-and-javadoc-handling.patch


 The gradle build halts because javadoc in java 8 is a lot stricter about 
 valid html.
 It might be worthwhile to special case java 8 as described 
 [here|http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2297) VerifiableProperties fails when constructed with Properties with numeric values

2015-06-23 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14597942#comment-14597942
 ] 

Gwen Shapira commented on KAFKA-2297:
-

Yeah, I think it's a known issue - VerifiableProperties only takes String 
values.

Considering that VerifiableProperties will be deprecated in the next release in 
favor of AbstractConfig and ConfigDef, I'm not sure it's worth fixing.
But if you already have a patch, feel free to submit :)

 VerifiableProperties fails when constructed with Properties with numeric 
 values
 ---

 Key: KAFKA-2297
 URL: https://issues.apache.org/jira/browse/KAFKA-2297
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: Ubuntu 12.04, OSX 10.9, OpenJDK 7, Scala 2.10
Reporter: Grant Overby
Priority: Minor

 VerifiableProperties produces java.lang.NumberFormatException: null when a 
 get method for a numeric type is called and VerifiableProperties was 
 constructed with a Properties containing a numeric type.
 private static void works() {
 Properties props = new Properties();
 props.put("request.required.acks", "0");
 new VerifiableProperties(props).getShort("request.required.acks", 
 (short) 0);
 }
 private static void doesntWork() {
 Properties props = new Properties();
 props.put("request.required.acks", (short) 0);
 new VerifiableProperties(props).getShort("request.required.acks", 
 (short) 0);
 }
 Exception in thread "main" java.lang.NumberFormatException: null
   at java.lang.Integer.parseInt(Integer.java:454)
   at java.lang.Short.parseShort(Short.java:117)
   at java.lang.Short.parseShort(Short.java:143)
   at 
 scala.collection.immutable.StringLike$class.toShort(StringLike.scala:228)
   at scala.collection.immutable.StringOps.toShort(StringOps.scala:31)
   at 
 kafka.utils.VerifiableProperties.getShortInRange(VerifiableProperties.scala:85)
   at 
 kafka.utils.VerifiableProperties.getShort(VerifiableProperties.scala:61)
   at Main.doesntWork(Main.java:22)
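
 A minimal workaround sketch (assuming you control the Properties): store the 
 value as a String, which is what VerifiableProperties ultimately parses:
 {code}
 import java.util.Properties;

 public class AcksConfigSketch {
     public static void main(String[] args) {
         Properties props = new Properties();
         // setProperty only accepts Strings, so getShort() later parses "0"
         // instead of choking on a boxed Short.
         props.setProperty("request.required.acks", "0");
         new kafka.utils.VerifiableProperties(props)
             .getShort("request.required.acks", (short) 0);
     }
 }
 {code}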



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2288) Follow-up to KAFKA-2249 - reduce logging and testing

2015-06-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2288:

Attachment: KAFKA-2288_2015-06-22_10:02:27.patch

 Follow-up to KAFKA-2249 - reduce logging and testing
 

 Key: KAFKA-2288
 URL: https://issues.apache.org/jira/browse/KAFKA-2288
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2288.patch, KAFKA-2288_2015-06-22_10:02:27.patch


 As [~junrao] commented on KAFKA-2249, we have a needless test and we are 
 logging configuration for every single partition now. 
 Lets reduce the noise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2288) Follow-up to KAFKA-2249 - reduce logging and testing

2015-06-19 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2288:
---

 Summary: Follow-up to KAFKA-2249 - reduce logging and testing
 Key: KAFKA-2288
 URL: https://issues.apache.org/jira/browse/KAFKA-2288
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


As [~junrao] commented on KAFKA-2249, we have a needless test and we are 
logging configuration for every single partition now. 

Lets reduce the noise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2288) Follow-up to KAFKA-2249 - reduce logging and testing

2015-06-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira reassigned KAFKA-2288:
---

Assignee: Gwen Shapira

 Follow-up to KAFKA-2249 - reduce logging and testing
 

 Key: KAFKA-2288
 URL: https://issues.apache.org/jira/browse/KAFKA-2288
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira

 As [~junrao] commented on KAFKA-2249, we have a needless test and we are 
 logging configuration for every single partition now. 
 Lets reduce the noise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2288) Follow-up to KAFKA-2249 - reduce logging and testing

2015-06-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2288:

Attachment: KAFKA-2288.patch

 Follow-up to KAFKA-2249 - reduce logging and testing
 

 Key: KAFKA-2288
 URL: https://issues.apache.org/jira/browse/KAFKA-2288
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2288.patch


 As [~junrao] commented on KAFKA-2249, we have a needless test and we are 
 logging configuration for every single partition now. 
 Lets reduce the noise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2288) Follow-up to KAFKA-2249 - reduce logging and testing

2015-06-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2288:

Status: Patch Available  (was: Open)

 Follow-up to KAFKA-2249 - reduce logging and testing
 

 Key: KAFKA-2288
 URL: https://issues.apache.org/jira/browse/KAFKA-2288
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2288.patch


 As [~junrao] commented on KAFKA-2249, we have a needless test and we are 
 logging configuration for every single partition now. 
 Lets reduce the noise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2288) Follow-up to KAFKA-2249 - reduce logging and testing

2015-06-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14594222#comment-14594222
 ] 

Gwen Shapira commented on KAFKA-2288:
-

Created reviewboard https://reviews.apache.org/r/35677/diff/
 against branch trunk

 Follow-up to KAFKA-2249 - reduce logging and testing
 

 Key: KAFKA-2288
 URL: https://issues.apache.org/jira/browse/KAFKA-2288
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2288.patch


 As [~junrao] commented on KAFKA-2249, we have a needless test and we are 
 logging configuration for every single partition now. 
 Lets reduce the noise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593555#comment-14593555
 ] 

Gwen Shapira commented on KAFKA-2249:
-

Agree that the test is kinda silly now :)

And yes, the printouts were useful when debugging test cases with 1 topic and 1 
partition, but a PITA in production.

OTOH, a printout of the non-default config per topic at a debug level (INFO or 
DEBUG) makes sense IMO (even on large systems) - it can be super useful when 
people send logs and ask for help. I'll see if we can do it per topic without 
making too much of a mess.

Mind if I take those fixes in a separate JIRA? 

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Fix For: 0.8.3

 Attachments: KAFKA-2249.patch, KAFKA-2249_2015-06-17_17:35:35.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.
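
 A sketch of the idea (not the committed patch): keep the raw entries around, 
 in the spirit of AbstractConfig.originals(), so pluggable components such as 
 metric reporters can read properties that were never declared in a ConfigDef:
 {code}
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Properties;

 public class OriginalsPreservingConfig {
     private final Map<String, Object> originals;

     public OriginalsPreservingConfig(Properties props) {
         Map<String, Object> copy = new HashMap<>();
         for (String name : props.stringPropertyNames())
             copy.put(name, props.getProperty(name));
         this.originals = Collections.unmodifiableMap(copy);
     }

     // Everything the user passed in, including undeclared properties.
     public Map<String, Object> originals() {
         return originals;
     }
 }
 {code}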



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-17 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2249:

Status: Patch Available  (was: In Progress)

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2249.patch, KAFKA-2249_2015-06-17_17:35:35.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-17 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2249:

Attachment: KAFKA-2249_2015-06-17_17:35:35.patch

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2249.patch, KAFKA-2249_2015-06-17_17:35:35.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-17 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590932#comment-14590932
 ] 

Gwen Shapira commented on KAFKA-2249:
-

Updated reviewboard https://reviews.apache.org/r/35347/diff/
 against branch trunk

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2249.patch, KAFKA-2249_2015-06-17_17:35:35.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2252) Socket connection closing is logged, but not corresponding opening of socket

2015-06-15 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14586445#comment-14586445
 ] 

Gwen Shapira commented on KAFKA-2252:
-

Ick! I lost your fix while working on KAFKA-1928. So sorry!

I'll move it back to DEBUG, add another DEBUG for creating connections (for 
symmetry), and I think we are good.
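
For illustration, a minimal sketch of that symmetry (method names are made up 
for the example; this is not the actual patch):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: log both ends of a connection's lifecycle at DEBUG.
public class ConnectionLifecycleLogging {
    private static final Logger log =
        LoggerFactory.getLogger("kafka.network.Processor");

    void onConnectionEstablished(String remoteAddress) {
        log.debug("Accepted socket connection from {}", remoteAddress);
    }

    void onConnectionClosed(String remoteAddress) {
        log.debug("Closing socket connection to {}", remoteAddress);
    }
}
{code}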

 Socket connection closing is logged, but not corresponding opening of socket
 

 Key: KAFKA-2252
 URL: https://issues.apache.org/jira/browse/KAFKA-2252
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg

 (using 0.8.2.1)
 We see a large number of "Closing socket connection" messages logged to the 
 broker logs, e.g.:
 {code}
 2015-06-04 16:49:30,262  INFO [kafka-network-thread-27330-2] 
 network.Processor - Closing socket connection to /1.2.3.4.
 2015-06-04 16:49:30,262  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /5.6.7.8.
 2015-06-04 16:49:30,695  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /9.10.11.12.
 2015-06-04 16:49:31,465  INFO [kafka-network-thread-27330-1] 
 network.Processor - Closing socket connection to /13.14.15.16.
 2015-06-04 16:49:31,806  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /17.18.19.20.
 2015-06-04 16:49:31,842  INFO [kafka-network-thread-27330-2] 
 network.Processor - Closing socket connection to /21.22.23.24.
 {code}
 However, we have no corresponding logging for when these connections are 
 established.  Consequently, it's not very useful to see a flood of closed 
 connections, etc.  I'd think we'd want to see the corresponding 'connection 
 established' messages, also logged as INFO.
 Occasionally, we see a flood of the above messages, and have no idea as to 
 whether it indicates a problem, etc.  (Sometimes it might be due to an 
 ongoing rolling restart, or a change in the Zookeeper cluster).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2251) Connection reset by peer IOExceptions should not be logged as ERROR

2015-06-15 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14586383#comment-14586383
 ] 

Gwen Shapira commented on KAFKA-2251:
-

Haha, I accidentally fixed Kafka's most annoying issue :)

 Connection reset by peer IOExceptions should not be logged as ERROR
 -

 Key: KAFKA-2251
 URL: https://issues.apache.org/jira/browse/KAFKA-2251
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg
 Fix For: 0.8.3


 It's normal to see lots of these exceptions logged in the broker logs:
 {code}
 2015-06-04 16:49:30,146 ERROR [kafka-network-thread-27330-1] 
 network.Processor - Closing socket for /1.2.3.4 because of error
 java.io.IOException: Connection reset by peer
 at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
 at sun.nio.ch.IOUtil.read(IOUtil.java:197)
 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
 at kafka.utils.Utils$.read(Utils.scala:380)
 at 
 kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
 at kafka.network.Processor.read(SocketServer.scala:444)
 at kafka.network.Processor.run(SocketServer.scala:340)
 at java.lang.Thread.run(Thread.java:745)
 {code}
 These are routine exceptions that occur regularly in response to clients 
 going away, etc.  The server should not log these at 'ERROR' level; instead 
 they should probably be just 'WARN', and should not log the full stack trace 
 (maybe just the exception message).
 The problem is that if we want to alert on actual errors, innocuous errors 
 such as this make it difficult to alert properly, etc.
 We are using 0.8.2.1, fwiw
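 For illustration, a minimal sketch of the requested behavior (names are 
 hypothetical, not Kafka's actual network code): a routine disconnect is 
 logged at WARN with the exception message only, no stack trace.
 {code}
 import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.nio.channels.SocketChannel;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 // Hypothetical sketch, not the actual Processor code.
 public class QuietRead {
     private static final Logger log = LoggerFactory.getLogger(QuietRead.class);

     // Returns bytes read, or -1 after closing a connection that went away.
     static int readQuietly(SocketChannel channel, ByteBuffer buffer) throws IOException {
         try {
             return channel.read(buffer);
         } catch (IOException e) {
             // Clients going away is normal; log the message, skip the stack trace.
             log.warn("Closing socket for {} because of: {}",
                      channel.socket().getRemoteSocketAddress(), e.getMessage());
             channel.close();
             return -1;
         }
     }
 }
 {code}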



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2252) Socket connection closing is logged, but not corresponding opening of socket

2015-06-15 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14586390#comment-14586390
 ] 

Gwen Shapira commented on KAFKA-2252:
-

We still have this exact issue. Only now it's in Selector and not SocketServer.


 Socket connection closing is logged, but not corresponding opening of socket
 

 Key: KAFKA-2252
 URL: https://issues.apache.org/jira/browse/KAFKA-2252
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg

 (using 0.8.2.1)
 We see a large number of "Closing socket connection" messages logged to the 
 broker logs, e.g.:
 {code}
 2015-06-04 16:49:30,262  INFO [kafka-network-thread-27330-2] 
 network.Processor - Closing socket connection to /1.2.3.4.
 2015-06-04 16:49:30,262  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /5.6.7.8.
 2015-06-04 16:49:30,695  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /9.10.11.12.
 2015-06-04 16:49:31,465  INFO [kafka-network-thread-27330-1] 
 network.Processor - Closing socket connection to /13.14.15.16.
 2015-06-04 16:49:31,806  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /17.18.19.20.
 2015-06-04 16:49:31,842  INFO [kafka-network-thread-27330-2] 
 network.Processor - Closing socket connection to /21.22.23.24.
 {code}
 However, we have no corresponding logging for when these connections are 
 established.  Consequently, it's not very useful to see a flood of closed 
 connections, etc.  I'd think we'd want to see the corresponding 'connection 
 established' messages, also logged as INFO.
 Occasionally, we see a flood of the above messages, and have no idea as to 
 whether it indicates a problem, etc.  (Sometimes it might be due to an 
 ongoing rolling restart, or a change in the Zookeeper cluster).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2252) Socket connection closing is logged, but not corresponding opening of socket

2015-06-15 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14586744#comment-14586744
 ] 

Gwen Shapira commented on KAFKA-2252:
-

Created reviewboard https://reviews.apache.org/r/35474/diff/
 against branch trunk

 Socket connection closing is logged, but not corresponding opening of socket
 

 Key: KAFKA-2252
 URL: https://issues.apache.org/jira/browse/KAFKA-2252
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg
 Attachments: KAFKA-2252.patch


 (using 0.8.2.1)
 We see a large number of "Closing socket connection" messages logged to the 
 broker logs, e.g.:
 {code}
 2015-06-04 16:49:30,262  INFO [kafka-network-thread-27330-2] 
 network.Processor - Closing socket connection to /1.2.3.4.
 2015-06-04 16:49:30,262  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /5.6.7.8.
 2015-06-04 16:49:30,695  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /9.10.11.12.
 2015-06-04 16:49:31,465  INFO [kafka-network-thread-27330-1] 
 network.Processor - Closing socket connection to /13.14.15.16.
 2015-06-04 16:49:31,806  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /17.18.19.20.
 2015-06-04 16:49:31,842  INFO [kafka-network-thread-27330-2] 
 network.Processor - Closing socket connection to /21.22.23.24.
 {code}
 However, we have no corresponding logging for when these connections are 
 established.  Consequently, it's not very useful to see a flood of closed 
 connections, etc.  I'd think we'd want to see the corresponding 'connection 
 established' messages, also logged as INFO.
 Occasionally, we see a flood of the above messages, and have no idea as to 
 whether it indicates a problem, etc.  (Sometimes it might be due to an 
 ongoing rolling restart, or a change in the Zookeeper cluster).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2252) Socket connection closing is logged, but not corresponding opening of socket

2015-06-15 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2252:

Attachment: KAFKA-2252.patch

 Socket connection closing is logged, but not corresponding opening of socket
 

 Key: KAFKA-2252
 URL: https://issues.apache.org/jira/browse/KAFKA-2252
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Rosenberg
 Attachments: KAFKA-2252.patch


 (using 0.8.2.1)
 We see a large number of "Closing socket connection" messages logged to the 
 broker logs, e.g.:
 {code}
 2015-06-04 16:49:30,262  INFO [kafka-network-thread-27330-2] 
 network.Processor - Closing socket connection to /1.2.3.4.
 2015-06-04 16:49:30,262  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /5.6.7.8.
 2015-06-04 16:49:30,695  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /9.10.11.12.
 2015-06-04 16:49:31,465  INFO [kafka-network-thread-27330-1] 
 network.Processor - Closing socket connection to /13.14.15.16.
 2015-06-04 16:49:31,806  INFO [kafka-network-thread-27330-0] 
 network.Processor - Closing socket connection to /17.18.19.20.
 2015-06-04 16:49:31,842  INFO [kafka-network-thread-27330-2] 
 network.Processor - Closing socket connection to /21.22.23.24.
 {code}
 However, we have no corresponding logging for when these connections are 
 established.  Consequently, it's not very useful to see a flood of closed 
 connections, etc.  I'd think we'd want to see the corresponding 'connection 
 established' messages, also logged as INFO.
 Occasionally, we see a flood of the above messages, and have no idea as to 
 whether it indicates a problem, etc.  (Sometimes it might be due to an 
 ongoing rolling restart, or a change in the Zookeeper cluster).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2132) Move Log4J appender to a separate module

2015-06-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2132:

Summary: Move Log4J appender to a separate module  (was: Move Log4J 
appender to clients module)

 Move Log4J appender to a separate module
 

 Key: KAFKA-2132
 URL: https://issues.apache.org/jira/browse/KAFKA-2132
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ashish K Singh
 Attachments: KAFKA-2132.patch, KAFKA-2132_2015-04-27_19:59:46.patch, 
 KAFKA-2132_2015-04-30_12:22:02.patch, KAFKA-2132_2015-04-30_15:53:17.patch


 The Log4j appender is just a producer.
 Since we have a new producer in the clients module, there is no need to keep 
 the Log4J appender in core and force people to package all of Kafka with 
 their apps.
 Let's move the Log4jAppender to the clients module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2238) KafkaMetricsConfig cannot be configured in broker (KafkaConfig)

2015-06-11 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582698#comment-14582698
 ] 

Gwen Shapira commented on KAFKA-2238:
-

[~aauradkar] I made significant changes to KafkaConfig in KAFKA-2249, which 
should resolve (or at least help with) some of the Metric issues.

Will KAFKA-2249 resolve your issues as well?

 KafkaMetricsConfig cannot be configured in broker (KafkaConfig)
 ---

 Key: KAFKA-2238
 URL: https://issues.apache.org/jira/browse/KAFKA-2238
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2238.patch


 None of the metrics config values are included in KafkaConfig, and 
 consequently they cannot be configured on the brokers. This is because 
 KafkaMetricsReporter is passed a properties object generated by calling 
 toProps on KafkaConfig:
 KafkaMetricsReporter.startReporters(new 
 VerifiableProperties(serverConfig.toProps))
 However, KafkaConfig never writes these values into the properties object, 
 and hence they aren't configurable. The defaults always apply.
 Add the following metrics properties to KafkaConfig:
 kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
 kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir
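 For illustration, a self-contained sketch of the failure mode described above 
 (toProps here is a stand-in, not the real KafkaConfig method): a round-trip 
 that only copies known keys silently drops everything else, so the reporters 
 fall back to their defaults.
 {code}
 import java.util.Properties;

 // Hypothetical sketch of the reported plumbing problem.
 public class ToPropsSketch {
     static Properties toProps(Properties original, String... knownKeys) {
         Properties out = new Properties();
         for (String key : knownKeys)
             if (original.containsKey(key))
                 out.put(key, original.get(key));
         return out;
     }

     public static void main(String[] args) {
         Properties userConfig = new Properties();
         userConfig.put("port", "9092");                             // defined in KafkaConfig
         userConfig.put("kafka.metrics.polling.interval.secs", "5"); // dropped on the way out
         Properties seenByReporters = toProps(userConfig, "port");
         System.out.println(seenByReporters.getProperty(
             "kafka.metrics.polling.interval.secs", "10 (default)")); // prints the default
     }
 }
 {code}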



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-11 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14581505#comment-14581505
 ] 

Gwen Shapira commented on KAFKA-2249:
-

Created reviewboard https://reviews.apache.org/r/35347/diff/
 against branch trunk

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
 Attachments: KAFKA-2249.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2249:

Assignee: Gwen Shapira
  Status: Patch Available  (was: Open)

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2249.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2249:

Attachment: KAFKA-2249.patch

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
 Attachments: KAFKA-2249.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1882) Create extendable channel interface and default implementations

2015-06-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1882:

Fix Version/s: (was: 0.8.3)

 Create extendable channel interface and default implementations
 ---

 Key: KAFKA-1882
 URL: https://issues.apache.org/jira/browse/KAFKA-1882
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Gwen Shapira
Assignee: Gwen Shapira
Priority: Blocker

 For the security infrastructure, we need an extensible interface to replace 
 SocketChannel.
 KAFKA-1684 suggests extending SocketChannel itself, but since SocketChannel 
 is part of Java's standard library, the interface changes between different 
 Java versions, so extending it directly can become a compatibility issue.
 Instead, we can implement a KafkaChannel interface, which will implement 
 connect(), read(), write() and possibly other methods we use. 
 We will replace direct use of SocketChannel in our code with use of 
 KafkaChannel.
 Different implementations of KafkaChannel will be instantiated based on the 
 port/SecurityProtocol configuration. 
 This patch will provide at least the PLAINTEXT implementation for 
 KafkaChannel.
 I will validate that the SSL implementation in KAFKA-1684 can be refactored 
 to use a KafkaChannel interface rather than extend SocketChannel directly. 
 However, the patch will not include the SSL channel itself.
 The interface should also include setters/getters for principal and remote 
 IP, which will be used for the authentication code.
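 For illustration, a minimal sketch of the interface described above (method 
 names follow the description; this is not an actual patch):
 {code}
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.nio.ByteBuffer;
 import java.security.Principal;

 // Hypothetical sketch of the proposed KafkaChannel abstraction.
 public interface KafkaChannel {
     void connect(InetSocketAddress address) throws IOException;
     int read(ByteBuffer dst) throws IOException;
     int write(ByteBuffer src) throws IOException;

     // Identity plumbing for the authentication code:
     Principal principal();
     void setPrincipal(Principal principal);
     InetSocketAddress remoteAddress();
 }
 {code}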



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1882) Create extendable channel interface and default implementations

2015-06-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-1882.
-
Resolution: Duplicate

I believe this is already handled in the SSL patches

 Create extendable channel interface and default implementations
 ---

 Key: KAFKA-1882
 URL: https://issues.apache.org/jira/browse/KAFKA-1882
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Gwen Shapira
Assignee: Gwen Shapira
Priority: Blocker
 Fix For: 0.8.3


 For the security infrastructure, we need an extensible interface to replace 
 SocketChannel.
 KAFKA-1684 suggests extending SocketChannel itself, but since SocketChannel 
 is part of Java's standard library, the interface changes between different 
 Java versions, so extending it directly can become a compatibility issue.
 Instead, we can implement a KafkaChannel interface, which will implement 
 connect(), read(), write() and possibly other methods we use. 
 We will replace direct use of SocketChannel in our code with use of 
 KafkaChannel.
 Different implementations of KafkaChannel will be instantiated based on the 
 port/SecurityProtocol configuration. 
 This patch will provide at least the PLAINTEXT implementation for 
 KafkaChannel.
 I will validate that the SSL implementation in KAFKA-1684 can be refactored 
 to use a KafkaChannel interface rather than extend SocketChannel directly. 
 However, the patch will not include the SSL channel itself.
 The interface should also include setters/getters for principal and remote 
 IP, which will be used for the authentication code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-06-08 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577728#comment-14577728
 ] 

Gwen Shapira commented on KAFKA-1367:
-

By zones, do we mean rack-awareness? Or a more general locality notion?
Sounds like something that may need its own JIRA and design.

 Broker topic metadata not kept in sync with ZooKeeper
 -

 Key: KAFKA-1367
 URL: https://issues.apache.org/jira/browse/KAFKA-1367
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1
Reporter: Ryan Berdeen
Assignee: Ashish K Singh
  Labels: newbie++
 Fix For: 0.8.3

 Attachments: KAFKA-1367.txt


 When a broker is restarted, the topic metadata responses from the brokers 
 will be incorrect (different from ZooKeeper) until a preferred replica leader 
 election.
 In the metadata, it looks like leaders are correctly removed from the ISR 
 when a broker disappears, but followers are not. Then, when a broker 
 reappears, the ISR is never updated.
 I used a variation of the Vagrant setup created by Joe Stein to reproduce 
 this with the latest from the 0.8.1 branch: 
 https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2247) Merge kafka.utils.Time and kafka.common.utils.Time

2015-06-04 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573803#comment-14573803
 ] 

Gwen Shapira commented on KAFKA-2247:
-

It seems fairly easy except that kafka.utils.Time has milliseconds() exposed as 
static, while kafka.common.utils.Time is just an interface and therefore can't 
do that.
To avoid having our code instantiate new objects every time we need to check 
the time (see what I did there?), I suggest turning o.a.k.common.utils.Time 
into an abstract class and adding some static methods there.

Does that make sense?
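
For illustration, a minimal sketch of what I mean (names hypothetical, not an 
actual patch): the abstract class can hold a single static system-clock 
instance, so call sites don't allocate a new object per check.
{code}
// Hypothetical sketch of an abstract Time with static convenience methods.
public abstract class Time {
    private static final Time SYSTEM = new Time() {
        @Override public long milliseconds() { return System.currentTimeMillis(); }
        @Override public long nanoseconds() { return System.nanoTime(); }
    };

    public abstract long milliseconds();
    public abstract long nanoseconds();

    // Static entry point, so kafka.utils.Time-style call sites keep working.
    public static long systemMilliseconds() { return SYSTEM.milliseconds(); }
}
{code}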

 Merge kafka.utils.Time and kafka.common.utils.Time
 --

 Key: KAFKA-2247
 URL: https://issues.apache.org/jira/browse/KAFKA-2247
 Project: Kafka
  Issue Type: Improvement
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
Priority: Minor

 We currently have 2 different versions of Time in clients and core. These 
 need to be merged



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-04 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2249:
---

 Summary: KafkaConfig does not preserve original Properties
 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


We typically generate configuration from properties objects (or maps).
The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
the original Properties object, which means that if the user specified 
properties that are not part of ConfigDef definitions, they are still 
accessible.

This is important especially for MetricReporters where we want to allow users 
to pass arbitrary properties for the reporter.

One way to support this is by having KafkaConfig implement AbstractConfig, 
which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-06-03 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14571796#comment-14571796
 ] 

Gwen Shapira commented on KAFKA-1928:
-

Updated reviewboard https://reviews.apache.org/r/33065/diff/
 against branch trunk

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch, 
 KAFKA-1928_2015-05-20_13:41:37.patch, KAFKA-1928_2015-05-24_19:53:04.patch, 
 KAFKA-1928_2015-06-01_00:48:08.patch, KAFKA-1928_2015-06-03_15:59:34.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-06-03 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-06-03_15:59:34.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch, 
 KAFKA-1928_2015-05-20_13:41:37.patch, KAFKA-1928_2015-05-24_19:53:04.patch, 
 KAFKA-1928_2015-06-01_00:48:08.patch, KAFKA-1928_2015-06-03_15:59:34.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-31 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14566764#comment-14566764
 ] 

Gwen Shapira commented on KAFKA-1928:
-

Updated reviewboard https://reviews.apache.org/r/33065/diff/
 against branch trunk

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch, 
 KAFKA-1928_2015-05-20_13:41:37.patch, KAFKA-1928_2015-05-24_19:53:04.patch, 
 KAFKA-1928_2015-06-01_00:48:08.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-31 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-06-01_00:48:08.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch, 
 KAFKA-1928_2015-05-20_13:41:37.patch, KAFKA-1928_2015-05-24_19:53:04.patch, 
 KAFKA-1928_2015-06-01_00:48:08.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-24 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14557790#comment-14557790
 ] 

Gwen Shapira commented on KAFKA-1928:
-

Updated reviewboard https://reviews.apache.org/r/33065/diff/
 against branch trunk

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch, 
 KAFKA-1928_2015-05-20_13:41:37.patch, KAFKA-1928_2015-05-24_19:53:04.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-24 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-24_19:53:04.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch, 
 KAFKA-1928_2015-05-20_13:41:37.patch, KAFKA-1928_2015-05-24_19:53:04.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1814) FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)

2015-05-23 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14557175#comment-14557175
 ] 

Gwen Shapira commented on KAFKA-1814:
-

[~sanjubaba]: please send your problem to us...@kafka.apache.org, and include 
error messages you find in the log.

 FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown 
 (kafka.server.KafkaServerStartable)
 --

 Key: KAFKA-1814
 URL: https://issues.apache.org/jira/browse/KAFKA-1814
 Project: Kafka
  Issue Type: Bug
 Environment: OpenSuse 13.2, with installed jdk_1.7_u51, scala-2.11.4 
 and gradle-2.2.1
Reporter: Stefan 
   Original Estimate: 0h
  Remaining Estimate: 0h

 I have built Kafka on my system by executing the commands in the order they 
 were written in the readme file in the downloaded Kafka folder. Every 
 ./gradlew command completed successfully, except tests (93% PASSED, 19 tests 
 FAIL (tests in ProducerTest.class, SyncProducerTest.class, 
 LeaderElectionTest.class, LogOffsetTest.class); they fail saying that they 
 cannot access a port, so I thought OK, something is using those ports, I'll 
 continue building). I built Kafka, got the .tar.gz, and ran 
 ./bin/zookeeper-server-start ./config/zookeeper.properties, but when I run 
 ./bin/kafka-server-start.sh I get errors and Kafka immediately shuts down. I 
 have posted my problems on 
 http://stackoverflow.com/questions/27381802/kafka-shutting-down-kafka-server-kafkaserver-problems-with-starting-kafka-s,
  could anyone help me?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-20 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552115#comment-14552115
 ] 

Gwen Shapira commented on KAFKA-1928:
-

Updated reviewboard https://reviews.apache.org/r/33065/diff/
 against branch trunk

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch, 
 KAFKA-1928_2015-05-20_13:41:37.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-20_13:41:37.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jun Rao
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch, 
 KAFKA-1928_2015-05-20_13:41:37.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-19 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14550030#comment-14550030
 ] 

Gwen Shapira commented on KAFKA-1928:
-

Updated reviewboard https://reviews.apache.org/r/33065/diff/
 against branch trunk

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-19 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-19_11:26:18.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch, KAFKA-1928_2015-05-19_11:26:18.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-18_17:57:39.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-18 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14548121#comment-14548121
 ] 

Gwen Shapira commented on KAFKA-1928:
-

Updated reviewboard https://reviews.apache.org/r/33065/diff/
 against branch trunk

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-18 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14548197#comment-14548197
 ] 

Gwen Shapira commented on KAFKA-1928:
-

Updated reviewboard https://reviews.apache.org/r/33065/diff/
 against branch trunk

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-18 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-18_18:55:38.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch, KAFKA-1928_2015-05-18_17:57:39.patch, 
 KAFKA-1928_2015-05-18_18:55:38.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-15 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14545084#comment-14545084
 ] 

Gwen Shapira commented on KAFKA-1928:
-

Updated reviewboard https://reviews.apache.org/r/33065/diff/
 against branch trunk

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-15 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-15_10:30:31.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch, 
 KAFKA-1928_2015-05-15_10:30:31.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-14 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-15.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-14 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: (was: KAFKA-1928_2015-05-15.patch)

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-14 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-15.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-14 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-15_03%3A05%3A37.patch.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch


 As part of the common package we introduced a bunch of network related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-14 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: (was: KAFKA-1928_2015-05-15_03%3A05%3A37.patch.patch)

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch


 As part of the common package we introduced a bunch of network-related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector, which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-14 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-15_03.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch, 
 KAFKA-1928_2015-05-15.patch, KAFKA-1928_2015-05-15_03.patch


 As part of the common package we introduced a bunch of network-related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector, which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-12 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14539320#comment-14539320
 ] 

Gwen Shapira commented on KAFKA-2176:
-

Do we want to fix this?
It is not an issue in the new Producer, which MirrorMaker can use.

 DefaultPartitioner doesn't perform consistent hashing based on 
 ---

 Key: KAFKA-2176
 URL: https://issues.apache.org/jira/browse/KAFKA-2176
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.1
Reporter: Igor Maravić
  Labels: easyfix, newbie
 Fix For: 0.8.1

 Attachments: KAFKA-2176.patch


 While deploying MirrorMakers in production, we configured them to use 
 kafka.producer.DefaultPartitioner. Since we had the same number of 
 partitions for the topic in the local and aggregation clusters, we expected 
 the messages to be partitioned the same way.
 This wasn't the case. Messages were properly partitioned with 
 DefaultPartitioner on our local cluster, since the key was of type String.
 On the MirrorMaker side, the messages were not properly partitioned.
 The problem is that Array[Byte] doesn't have a content-based hashCode, since 
 it is a mutable collection.
 The fix is to calculate the deep hash code if the key is of an Array type.
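
Below is a minimal Scala sketch of the inconsistency described above, assuming
byte-array keys; the partition helper is a hypothetical illustration of
modulo-based partition selection, not the actual DefaultPartitioner code.

import java.util.Arrays

object ArrayKeyHashingSketch {
  def main(args: Array[String]): Unit = {
    val a = "some-key".getBytes("UTF-8")
    val b = "some-key".getBytes("UTF-8")

    // Array[Byte] inherits identity-based hashCode from Object, so two
    // arrays with identical contents almost always hash differently:
    println(a.hashCode == b.hashCode)                  // false in practice

    // Content-based hashing along the lines of the proposed fix
    // (java.util.Arrays.hashCode, or deepHashCode for nested arrays):
    println(Arrays.hashCode(a) == Arrays.hashCode(b))  // true

    // Hypothetical partition selection to show the consequence; masking
    // the sign bit keeps the result non-negative even for Int.MinValue.
    val numPartitions = 6
    def partition(key: Array[Byte]): Int =
      (Arrays.hashCode(key) & 0x7fffffff) % numPartitions
    println(partition(a) == partition(b))              // always true
  }
}

This is also why the new producer is not affected: it hashes the serialized
key bytes (the content) rather than relying on the key object's hashCode.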



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-12 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14539320#comment-14539320
 ] 

Gwen Shapira edited comment on KAFKA-2176 at 5/12/15 6:11 AM:
--

Do we want to fix this?
It is not an issue in the new Producer, which MirrorMaker can use.

I'm somewhat against continuing to maintain the old clients.


was (Author: gwenshap):
Do we want to fix this?
It is not an issue in the new Producer, which MirrorMaker can use.

 DefaultPartitioner doesn't perform consistent hashing based on 
 ---

 Key: KAFKA-2176
 URL: https://issues.apache.org/jira/browse/KAFKA-2176
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.1
Reporter: Igor Maravić
  Labels: easyfix, newbie
 Fix For: 0.8.1

 Attachments: KAFKA-2176.patch


 While deploying MirrorMakers in production, we configured them to use 
 kafka.producer.DefaultPartitioner. Since we had the same number of 
 partitions for the topic in the local and aggregation clusters, we expected 
 the messages to be partitioned the same way.
 This wasn't the case. Messages were properly partitioned with 
 DefaultPartitioner on our local cluster, since the key was of type String.
 On the MirrorMaker side, the messages were not properly partitioned.
 The problem is that Array[Byte] doesn't have a content-based hashCode, since 
 it is a mutable collection.
 The fix is to calculate the deep hash code if the key is of an Array type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-12 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14539409#comment-14539409
 ] 

Gwen Shapira commented on KAFKA-2176:
-

0.8.2 has a new.producer parameter that can be set to true.

 DefaultPartitioner doesn't perform consistent hashing based on 
 ---

 Key: KAFKA-2176
 URL: https://issues.apache.org/jira/browse/KAFKA-2176
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.1
Reporter: Igor Maravić
  Labels: easyfix, newbie
 Fix For: 0.8.1

 Attachments: KAFKA-2176.patch


 While deploying MirrorMakers in production, we configured them to use 
 kafka.producer.DefaultPartitioner. Since we had the same number of 
 partitions for the topic in the local and aggregation clusters, we expected 
 the messages to be partitioned the same way.
 This wasn't the case. Messages were properly partitioned with 
 DefaultPartitioner on our local cluster, since the key was of type String.
 On the MirrorMaker side, the messages were not properly partitioned.
 The problem is that Array[Byte] doesn't have a content-based hashCode, since 
 it is a mutable collection.
 The fix is to calculate the deep hash code if the key is of an Array type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2176) DefaultPartitioner doesn't perform consistent hashing based on

2015-05-12 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14539451#comment-14539451
 ] 

Gwen Shapira commented on KAFKA-2176:
-

1. Regarding battle testing, LinkedIn is the biggest MirrorMaker user AFAIK, so 
I'll let [~guozhang], your original reviewer, reply to that.

2. As I said, I'd rather put effort into improving the new clients and declare 
the existing ones deprecated than keep maintaining them. 
I know the old producer is still used, but the way features are typically 
deprecated is that when someone runs into an issue like this, we recommend 
migrating to the newer code.

Anyway, since you already patched the old one, I'll let [~guozhang] make that 
call. Just giving my 2c and letting you know you can test the new producer (the 
one we actually put a lot of effort into improving).

 DefaultPartitioner doesn't perform consistent hashing based on 
 ---

 Key: KAFKA-2176
 URL: https://issues.apache.org/jira/browse/KAFKA-2176
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.1
Reporter: Igor Maravić
  Labels: easyfix, newbie
 Fix For: 0.8.1

 Attachments: KAFKA-2176.patch


 While deploying MirrorMakers in production, we configured them to use 
 kafka.producer.DefaultPartitioner. Since we had the same number of 
 partitions for the topic in the local and aggregation clusters, we expected 
 the messages to be partitioned the same way.
 This wasn't the case. Messages were properly partitioned with 
 DefaultPartitioner on our local cluster, since the key was of type String.
 On the MirrorMaker side, the messages were not properly partitioned.
 The problem is that Array[Byte] doesn't have a content-based hashCode, since 
 it is a mutable collection.
 The fix is to calculate the deep hash code if the key is of an Array type.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-12 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-12_12:00:37.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch


 As part of the common package we introduced a bunch of network-related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector, which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1928) Move kafka.network over to using the network classes in org.apache.kafka.common.network

2015-05-12 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-1928:

Attachment: KAFKA-1928_2015-05-12_12:57:57.patch

 Move kafka.network over to using the network classes in 
 org.apache.kafka.common.network
 ---

 Key: KAFKA-1928
 URL: https://issues.apache.org/jira/browse/KAFKA-1928
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Jay Kreps
Assignee: Gwen Shapira
 Attachments: KAFKA-1928.patch, KAFKA-1928_2015-04-28_00:09:40.patch, 
 KAFKA-1928_2015-04-30_17:48:33.patch, KAFKA-1928_2015-05-01_15:45:24.patch, 
 KAFKA-1928_2015-05-12_12:00:37.patch, KAFKA-1928_2015-05-12_12:57:57.patch


 As part of the common package we introduced a bunch of network-related code 
 and abstractions.
 We should look into replacing a lot of what is in kafka.network with this 
 code. Duplicate classes include things like Receive, Send, etc. It is likely 
 possible to also refactor the SocketServer to make use of Selector, which 
 should significantly simplify its code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

