Re: Review Request 36030: Patch for KAFKA-972

2015-07-01 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36030/
---

(Updated July 1, 2015, 8:37 a.m.)


Review request for kafka.


Bugs: KAFKA-972
https://issues.apache.org/jira/browse/KAFKA-972


Repository: kafka


Description
---

KAFKA-972: MetadataRequest returns stale list of brokers


Diffs (updated)
-

  core/src/main/scala/kafka/controller/KafkaController.scala 
36350579b16027359d237b64699003358704ac6f 
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
995b05901491bb0dbf0df210d44bd1d7f66fdc82 

Diff: https://reviews.apache.org/r/36030/diff/


Testing
---


Thanks,

Ashish Singh



Re: Review Request 36030: Patch for KAFKA-972

2015-07-01 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36030/
---

(Updated July 1, 2015, 8:43 a.m.)


Review request for kafka.


Bugs: KAFKA-972
https://issues.apache.org/jira/browse/KAFKA-972


Repository: kafka


Description
---

KAFKA-972: MetadataRequest returns stale list of brokers


Diffs (updated)
-

  core/src/main/scala/kafka/controller/KafkaController.scala 
36350579b16027359d237b64699003358704ac6f 
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
995b05901491bb0dbf0df210d44bd1d7f66fdc82 

Diff: https://reviews.apache.org/r/36030/diff/


Testing
---


Thanks,

Ashish Singh



Re: Review Request 36030: Patch for KAFKA-972

2015-07-01 Thread Ashish Singh


 On July 1, 2015, 4:37 a.m., Jun Rao wrote:
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala, line 69
  https://reviews.apache.org/r/36030/diff/2/?file=996282#file996282line69
 
  Do we need this? In tearDown(), ZookeeperTestHarness will delete all ZK 
  data.

Ah, I thought it was called only after all tests in the class were done. Thanks for 
pointing this out. Removed.


 On July 1, 2015, 4:37 a.m., Jun Rao wrote:
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala, lines 
  157-161
  https://reviews.apache.org/r/36030/diff/2/?file=996282#file996282line157
 
  Could we issue TopicMetadataRequest to every broker and make sure that 
  the correct metadata is propagated to every broker? Ditto in other places 
  as well.

Done!


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36030/#review90005
---


On July 1, 2015, 8:37 a.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36030/
 ---
 
 (Updated July 1, 2015, 8:37 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-972
 https://issues.apache.org/jira/browse/KAFKA-972
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-972: MetadataRequest returns stale list of brokers
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 36350579b16027359d237b64699003358704ac6f 
   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
 995b05901491bb0dbf0df210d44bd1d7f66fdc82 
 
 Diff: https://reviews.apache.org/r/36030/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Updated] (KAFKA-972) MetadataRequest returns stale list of brokers

2015-07-01 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-972:
-
Attachment: KAFKA-972_2015-07-01_01:36:56.patch

 MetadataRequest returns stale list of brokers
 -

 Key: KAFKA-972
 URL: https://issues.apache.org/jira/browse/KAFKA-972
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.0
Reporter: Vinicius Carvalho
Assignee: Ashish K Singh
 Attachments: BrokerMetadataTest.scala, KAFKA-972.patch, 
 KAFKA-972_2015-06-30_18:42:13.patch, KAFKA-972_2015-07-01_01:36:56.patch


 When we issue a MetadataRequest to the cluster, the list of brokers is 
 stale: even when a broker is down, it is still returned to the client. 
 The following are two example invocations, one with both brokers online 
 and one with a broker down:
 {
   "brokers": [
     { "nodeId": 0, "host": "10.139.245.106", "port": 9092, "byteLength": 24 },
     { "nodeId": 1, "host": "localhost", "port": 9093, "byteLength": 19 }
   ],
   "topicMetadata": [
     {
       "topicErrorCode": 0,
       "topicName": "foozbar",
       "partitions": [
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 0, "leader": 0, "byteLength": 26 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 1, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 2, "leader": 0, "byteLength": 26 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 3, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [0], "partitionErrorCode": 0, "partitionId": 4, "leader": 0, "byteLength": 26 }
       ],
       "byteLength": 145
     }
   ],
   "responseSize": 200,
   "correlationId": -1000
 }
 {
   "brokers": [
     { "nodeId": 0, "host": "10.139.245.106", "port": 9092, "byteLength": 24 },
     { "nodeId": 1, "host": "localhost", "port": 9093, "byteLength": 19 }
   ],
   "topicMetadata": [
     {
       "topicErrorCode": 0,
       "topicName": "foozbar",
       "partitions": [
         { "replicas": [0], "isr": [], "partitionErrorCode": 5, "partitionId": 0, "leader": -1, "byteLength": 22 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 1, "leader": 1, "byteLength": 26 },
         { "replicas": [0], "isr": [], "partitionErrorCode": 5, "partitionId": 2, "leader": -1, "byteLength": 22 },
         { "replicas": [1], "isr": [1], "partitionErrorCode": 0, "partitionId": 3, "leader": 1,

[jira] [Updated] (KAFKA-972) MetadataRequest returns stale list of brokers

2015-07-01 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-972:
-
Attachment: KAFKA-972_2015-07-01_01:42:42.patch

 MetadataRequest returns stale list of brokers
 -

 Key: KAFKA-972
 URL: https://issues.apache.org/jira/browse/KAFKA-972
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.0
Reporter: Vinicius Carvalho
Assignee: Ashish K Singh
 Attachments: BrokerMetadataTest.scala, KAFKA-972.patch, 
 KAFKA-972_2015-06-30_18:42:13.patch, KAFKA-972_2015-07-01_01:36:56.patch, 
 KAFKA-972_2015-07-01_01:42:42.patch



[jira] [Commented] (KAFKA-972) MetadataRequest returns stale list of brokers

2015-07-01 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609749#comment-14609749
 ] 

Ashish K Singh commented on KAFKA-972:
--

Updated reviewboard https://reviews.apache.org/r/36030/
 against branch trunk

 MetadataRequest returns stale list of brokers
 -

 Key: KAFKA-972
 URL: https://issues.apache.org/jira/browse/KAFKA-972
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.0
Reporter: Vinicius Carvalho
Assignee: Ashish K Singh
 Attachments: BrokerMetadataTest.scala, KAFKA-972.patch, 
 KAFKA-972_2015-06-30_18:42:13.patch, KAFKA-972_2015-07-01_01:36:56.patch, 
 KAFKA-972_2015-07-01_01:42:42.patch



[jira] [Commented] (KAFKA-2092) New partitioning for better load balancing

2015-07-01 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610622#comment-14610622
 ] 

Jason Gustafson commented on KAFKA-2092:


[~azaroth], since the partitioner is exposed to the user on the client side, I 
don't think it's necessarily a problem that keys are written to two partitions 
(clients can partition however they want). However, wouldn't this make the 
consumer's partition assignment strategy a bit trickier? Right now the 
assignment of partition to consumer is static; each consumer in a consumer 
group consumes from a disjoint subset of the overall partitions. But it doesn't 
seem like that can work here since the keys might be paired across any 2 
partitions. If the consumer knows what key they want, then they can find the 
partitions to consume from, but generally to consume all keys from a given 
partition, wouldn't a consumer have to consume all other partitions as well? If 
we include this patch in kafka core, we might need to solve this problem.
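
For concreteness, a minimal sketch of the producer-side scheme under discussion (Partial Key Grouping: each key hashes to two candidate partitions, and the producer routes each message to whichever of the two it has loaded less). All class and method names below are illustrative, not Kafka's actual partitioner API:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Sketch of Partial Key Grouping (PKG): each key may land on one of TWO
// candidate partitions; the producer picks the less-loaded one locally.
public class PartialKeyGrouper {
    private final long[] load; // messages sent per partition (local view)

    public PartialKeyGrouper(int numPartitions) {
        this.load = new long[numPartitions];
    }

    // Seeded hash so the two candidates are deterministic per key.
    private int hash(String key, int seed) {
        CRC32 crc = new CRC32();
        crc.update(seed);
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % load.length);
    }

    // Returns the chosen partition and records the send.
    public int partition(String key) {
        int p1 = hash(key, 1);
        int p2 = hash(key, 2);
        int chosen = load[p1] <= load[p2] ? p1 : p2;
        load[chosen]++;
        return chosen;
    }
}
```

This is exactly why Jason's concern holds: a consumer wanting all messages for one key must read both candidate partitions.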

 New partitioning for better load balancing
 --

 Key: KAFKA-2092
 URL: https://issues.apache.org/jira/browse/KAFKA-2092
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Reporter: Gianmarco De Francisci Morales
Assignee: Jun Rao
 Attachments: KAFKA-2092-v1.patch


 We have recently studied the problem of load balancing in distributed stream 
 processing systems such as Samza [1].
 In particular, we focused on what happens when the key distribution of the 
 stream is skewed when using key grouping.
 We developed a new stream partitioning scheme (which we call Partial Key 
 Grouping). It achieves better load balancing than hashing while being more 
 scalable than round robin in terms of memory.
 In the paper we show a number of mining algorithms that are easy to implement 
 with partial key grouping, and whose performance can benefit from it. We 
 think that it might also be useful for a larger class of algorithms.
 PKG has already been integrated in Storm [2], and I would like to be able to 
 use it in Samza as well. As far as I understand, Kafka producers are the ones 
 that decide how to partition the stream (or Kafka topic).
 I do not have experience with Kafka, however partial key grouping is very 
 easy to implement: it requires just a few lines of code in Java when 
 implemented as a custom grouping in Storm [3].
 I believe it should be very easy to integrate.
 For all these reasons, I believe it will be a nice addition to Kafka/Samza. 
 If the community thinks it's a good idea, I will be happy to offer support in 
 the porting.
 References:
 [1] 
 https://melmeric.files.wordpress.com/2014/11/the-power-of-both-choices-practical-load-balancing-for-distributed-stream-processing-engines.pdf
 [2] https://issues.apache.org/jira/browse/STORM-632
 [3] https://github.com/gdfm/partial-key-grouping



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2165) ReplicaFetcherThread: data loss on unknown exception

2015-07-01 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2165.
-
Resolution: Not A Problem

 ReplicaFetcherThread: data loss on unknown exception
 

 Key: KAFKA-2165
 URL: https://issues.apache.org/jira/browse/KAFKA-2165
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Alexey Ozeritskiy
 Attachments: KAFKA-2165.patch


 Sometimes in our cluster a replica falls out of the ISR. The broker then 
 re-downloads the partition from the beginning. We got the following messages 
 in the logs:
 {code}
 # The leader:
 [2015-03-25 11:11:07,796] ERROR [Replica Manager on Broker 21]: Error when 
 processing fetch request for partition [topic,11] offset 54369274 from 
 follower with correlation id 2634499. Possible cause: Request for offset 
 54369274 but we only have log segments in the range 49322124 to 54369273. 
 (kafka.server.ReplicaManager)
 {code}
 {code}
 # The follower:
 [2015-03-25 11:11:08,816] WARN [ReplicaFetcherThread-0-21], Replica 31 for 
 partition [topic,11] reset its fetch offset from 49322124 to current leader 
 21's start offset 49322124 (kafka.server.ReplicaFetcherThread)
 [2015-03-25 11:11:08,816] ERROR [ReplicaFetcherThread-0-21], Current offset 
 54369274 for partition [topic,11] out of range; reset offset to 49322124 
 (kafka.server.ReplicaFetcherThread)
 {code}
 This occurs because we update fetchOffset 
 [here|https://github.com/apache/kafka/blob/0.8.2/core/src/main/scala/kafka/server/AbstractFetcherThread.scala#L124]
  and then try to process the message. 
 If any exception other than OffsetOutOfRangeCode occurs, fetchOffset and 
 replica.logEndOffset become unsynchronized. 
 On the next fetch iteration we can get 
 fetchOffset > replica.logEndOffset == leaderEndOffset and an OffsetOutOfRangeCode.
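
A minimal sketch of the hazard (hypothetical names, not Kafka's actual fetcher classes): if the in-memory fetch offset is advanced before the fetched data is applied to the log, any failure in between leaves the two offsets out of sync, while advancing only after a successful append keeps them consistent:

```java
// Toy model of a replica fetcher's bookkeeping. The safe ordering is:
// append first, then advance the fetch offset.
public class FetcherSketch {
    long fetchOffset = 0;   // offset we will request from the leader next
    long logEndOffset = 0;  // offset we have actually appended locally

    // Returns true if the batch was applied and the offset advanced.
    public boolean processBatch(int messages, boolean appendFails) {
        if (appendFails) {
            // Offsets untouched: fetchOffset still equals logEndOffset,
            // so the next fetch simply retries the same range.
            return false;
        }
        logEndOffset += messages;
        fetchOffset = logEndOffset; // advance only after a successful append
        return true;
    }
}
```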



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36030: Patch for KAFKA-972

2015-07-01 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36030/#review90045
---


Thanks for the patch. Just one more comment.


core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala (lines 154 - 
164)
https://reviews.apache.org/r/36030/#comment142985

The propagation of the metadata to different brokers is independent. So, we 
will need to wrap the test on each broker with waitUntilTrue.
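
The pattern being asked for is a poll-until-true helper; Kafka's Scala test utilities provide one (TestUtils.waitUntilTrue), and a standalone Java sketch of the same idea might look like this:

```java
import java.util.function.BooleanSupplier;

// Poll a condition until it holds or a timeout elapses, in the spirit of
// Kafka's TestUtils.waitUntilTrue (this is a sketch, not the real helper).
public class WaitUtil {
    public static boolean waitUntilTrue(BooleanSupplier condition,
                                        long timeoutMs, long pauseMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) return true;
            try {
                Thread.sleep(pauseMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }
}
```

In the test, the metadata check against each broker would then be wrapped individually, e.g. `waitUntilTrue(() -> brokerHasExpectedMetadata(b), 5000, 100)` per broker.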


- Jun Rao


On July 1, 2015, 8:43 a.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36030/
 ---
 
 (Updated July 1, 2015, 8:43 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-972
 https://issues.apache.org/jira/browse/KAFKA-972
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-972: MetadataRequest returns stale list of brokers
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 36350579b16027359d237b64699003358704ac6f 
   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
 995b05901491bb0dbf0df210d44bd1d7f66fdc82 
 
 Diff: https://reviews.apache.org/r/36030/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Ashish Singh
 




Re: Review Request 34494: Patch for KAFKA-2212

2015-07-01 Thread Tom Graves

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34494/#review90044
---


When running the ACL command without specifying the config, I get a 
NullPointerException. When I add in --config server.properties it works fine. 
It might be nice to have a better error message:

$ ./kafka-acl.sh --cluster --list --operations CREATE --allowprincipals user:foo
Exception in thread "main" java.lang.NullPointerException
at java.io.File.<init>(File.java:277)
at kafka.admin.AclCommand$.main(AclCommand.scala:43)
at kafka.admin.AclCommand.main(AclCommand.scala)
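
A sketch of the kind of argument check that would replace the NPE with a clear message (option name and method are illustrative, not AclCommand's actual parser):

```java
import java.io.File;

// Validate the --config argument before constructing a File from it,
// returning a human-readable error instead of letting new File(null) throw.
public class ConfigCheck {
    public static String validateConfigPath(String configPath) {
        if (configPath == null) {
            return "Missing required argument: --config <server.properties>";
        }
        File f = new File(configPath);
        if (!f.exists()) {
            return "Config file not found: " + configPath;
        }
        return null; // no error
    }
}
```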

- Tom Graves


On May 20, 2015, 8:03 p.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34494/
 ---
 
 (Updated May 20, 2015, 8:03 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2212
 https://issues.apache.org/jira/browse/KAFKA-2212
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2212: Add CLI for acl management of authorizer.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/admin/AclCommand.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/admin/AclCommandTest.scala PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/34494/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




[jira] [Updated] (KAFKA-972) MetadataRequest returns stale list of brokers

2015-07-01 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-972:
-
Attachment: KAFKA-972_2015-07-01_08:06:03.patch

 MetadataRequest returns stale list of brokers
 -

 Key: KAFKA-972
 URL: https://issues.apache.org/jira/browse/KAFKA-972
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.0
Reporter: Vinicius Carvalho
Assignee: Ashish K Singh
 Attachments: BrokerMetadataTest.scala, KAFKA-972.patch, 
 KAFKA-972_2015-06-30_18:42:13.patch, KAFKA-972_2015-07-01_01:36:56.patch, 
 KAFKA-972_2015-07-01_01:42:42.patch, KAFKA-972_2015-07-01_08:06:03.patch



[jira] [Commented] (KAFKA-972) MetadataRequest returns stale list of brokers

2015-07-01 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610423#comment-14610423
 ] 

Ashish K Singh commented on KAFKA-972:
--

Updated reviewboard https://reviews.apache.org/r/36030/
 against branch trunk

 MetadataRequest returns stale list of brokers
 -

 Key: KAFKA-972
 URL: https://issues.apache.org/jira/browse/KAFKA-972
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.0
Reporter: Vinicius Carvalho
Assignee: Ashish K Singh
 Attachments: BrokerMetadataTest.scala, KAFKA-972.patch, 
 KAFKA-972_2015-06-30_18:42:13.patch, KAFKA-972_2015-07-01_01:36:56.patch, 
 KAFKA-972_2015-07-01_01:42:42.patch, KAFKA-972_2015-07-01_08:06:03.patch



Re: Review Request 36030: Patch for KAFKA-972

2015-07-01 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36030/
---

(Updated July 1, 2015, 3:06 p.m.)


Review request for kafka.


Bugs: KAFKA-972
https://issues.apache.org/jira/browse/KAFKA-972


Repository: kafka


Description
---

KAFKA-972: MetadataRequest returns stale list of brokers


Diffs (updated)
-

  core/src/main/scala/kafka/controller/KafkaController.scala 
36350579b16027359d237b64699003358704ac6f 
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
995b05901491bb0dbf0df210d44bd1d7f66fdc82 

Diff: https://reviews.apache.org/r/36030/diff/


Testing
---


Thanks,

Ashish Singh



Re: Review Request 36030: Patch for KAFKA-972

2015-07-01 Thread Ashish Singh


 On July 1, 2015, 2:18 p.m., Jun Rao wrote:
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala, lines 
  162-172
  https://reviews.apache.org/r/36030/diff/4/?file=996501#file996501line162
 
  The propagation of the metadata to different brokers is independent. 
  So, we will need to wrap the test on each broker with waitUntilTrue.

True, made the change.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36030/#review90045
---


On July 1, 2015, 3:06 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36030/
 ---
 
 (Updated July 1, 2015, 3:06 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-972
 https://issues.apache.org/jira/browse/KAFKA-972
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-972: MetadataRequest returns stale list of brokers
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 36350579b16027359d237b64699003358704ac6f 
   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
 995b05901491bb0dbf0df210d44bd1d7f66fdc82 
 
 Diff: https://reviews.apache.org/r/36030/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Ashish Singh
 




EOL JDK 1.6 for Kafka

2015-07-01 Thread Harsha
Hi, 
During our SSL patch, KAFKA-1690, some of the reviewers/users
asked for support for this config:
https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#setEndpointIdentificationAlgorithm-java.lang.String-
It allows clients to verify the server and prevent a potential MITM attack. This
API doesn't exist in Java 1.6. 
Do any users still want 1.6 support, or can we stop supporting 1.6
from the next release onwards?
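
For reference, a minimal illustration of the Java 7+ API in question (standard javax.net.ssl usage, not Kafka code):

```java
import javax.net.ssl.SSLParameters;

// Enable hostname verification during the TLS handshake via SSLParameters.
public class EndpointIdDemo {
    public static SSLParameters httpsParams() {
        SSLParameters params = new SSLParameters();
        // "HTTPS" makes the handshake verify that the server certificate
        // matches the host being connected to, mitigating MITM attacks.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        return params;
    }
}
```

The resulting parameters would typically be applied to an SSLEngine or SSLSocket via `setSSLParameters`; none of this compiles on Java 1.6, which is the motivation for dropping it.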

Thanks,
Harsha


Re: EOL JDK 1.6 for Kafka

2015-07-01 Thread Joel Koshy
+1

On Wednesday, July 1, 2015, Harsha ka...@harsha.io wrote:

 Hi,
 During our SSL Patch KAFKA-1690. Some of the reviewers/users
 asked for support this config

 https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#setEndpointIdentificationAlgorithm-java.lang.String-
 It allows clients to verify the server and prevent potential MITM. This
 api doesn't exist in Java 1.6.
 Are there any users still want 1.6 support or can we stop supporting 1.6
 from next release on wards.

 Thanks,
 Harsha



-- 
Sent from Gmail Mobile


[jira] [Commented] (KAFKA-2092) New partitioning for better load balancing

2015-07-01 Thread Gianmarco De Francisci Morales (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609701#comment-14609701
 ] 

Gianmarco De Francisci Morales commented on KAFKA-2092:
---

Is splitting a key across two partitions something that is possible to consider in 
Kafka's model?
The key-to-partition mapping is still deterministic, but it is no longer a 1:1 mapping.
When creating views from a derived partition, the developer knows that the 
state is in two places (which can be queried or aggregated deterministically).

I'd appreciate some feedback, so that I can iterate on the design if needed.

 New partitioning for better load balancing
 --

 Key: KAFKA-2092
 URL: https://issues.apache.org/jira/browse/KAFKA-2092
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Reporter: Gianmarco De Francisci Morales
Assignee: Jun Rao
 Attachments: KAFKA-2092-v1.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: EOL JDK 1.6 for Kafka

2015-07-01 Thread Marina
+1 for deprecating JDK1.6

  From: Harsha ka...@harsha.io
 To: us...@kafka.apache.org; dev@kafka.apache.org 
 Sent: Wednesday, July 1, 2015 11:05 AM
 Subject: EOL JDK 1.6 for Kafka
   
Hi, 
        During our SSL Patch KAFKA-1690. Some of the reviewers/users
        asked for support this config
https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#setEndpointIdentificationAlgorithm-java.lang.String-
It allows clients to verify the server and prevent potential MITM. This
api doesn't exist in Java 1.6. 
Are there any users still want 1.6 support or can we stop supporting 1.6
from next release on wards. 

Thanks,
Harsha


  

dev subscription request

2015-07-01 Thread Sethuram, Anup
Hi,
Could you please add me to the dev-subscriber list.

Regards,
anup


The information contained in this message may be confidential and legally 
protected under applicable law. The message is intended solely for the 
addressee(s). If you are not the intended recipient, you are hereby notified 
that any use, forwarding, dissemination, or reproduction of this message is 
strictly prohibited and may be unlawful. If you are not the intended recipient, 
please contact the sender by return e-mail and destroy all copies of the 
original message.


[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-01 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611311#comment-14611311
 ] 

Aditya Auradkar commented on KAFKA-2205:


[~junrao] - Addressed your comments. Can you take another look?

 Generalize TopicConfigManager to handle multiple entity configs
 ---

 Key: KAFKA-2205
 URL: https://issues.apache.org/jira/browse/KAFKA-2205
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch


 Acceptance Criteria:
 - TopicConfigManager should be generalized to handle Topic and Client configs 
 (and any type of config in the future). As described in KIP-21
 - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: EOL JDK 1.6 for Kafka

2015-07-01 Thread Jiangjie Qin
+1

On 7/1/15, 1:00 PM, Gwen Shapira gshap...@cloudera.com wrote:

Huge +1.

I don't think there is any other project that still supports 1.6.

On Wed, Jul 1, 2015 at 8:05 AM, Harsha ka...@harsha.io wrote:
 Hi,
 During our SSL Patch KAFKA-1690. Some of the reviewers/users
 asked for support this config
 
https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.htm
l#setEndpointIdentificationAlgorithm-java.lang.String-
 It allows clients to verify the server and prevent potential MITM. This
 api doesn't exist in Java 1.6.
 Are there any users still want 1.6 support or can we stop supporting 1.6
 from next release on wards.

 Thanks,
 Harsha



Re: EOL JDK 1.6 for Kafka

2015-07-01 Thread Guozhang Wang
+1.

On Wed, Jul 1, 2015 at 1:00 PM, Gwen Shapira gshap...@cloudera.com wrote:

 Huge +1.

 I don't think there is any other project that still supports 1.6.

 On Wed, Jul 1, 2015 at 8:05 AM, Harsha ka...@harsha.io wrote:
  Hi,
  During our SSL Patch KAFKA-1690. Some of the reviewers/users
  asked for support this config
 
 https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#setEndpointIdentificationAlgorithm-java.lang.String-
  It allows clients to verify the server and prevent potential MITM. This
  api doesn't exist in Java 1.6.
  Are there any users still want 1.6 support or can we stop supporting 1.6
  from next release on wards.
 
  Thanks,
  Harsha




-- 
-- Guozhang


Re: Review Request 35231: Address Onur and Jason's comments

2015-07-01 Thread Jason Gustafson

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35231/#review90155
---

Ship it!


Ship It!

- Jason Gustafson


On June 30, 2015, 1:44 a.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35231/
 ---
 
 (Updated June 30, 2015, 1:44 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1740
 https://issues.apache.org/jira/browse/KAFKA-1740
 
 
 Repository: kafka
 
 
 Description
 ---
 
 v2
 
 
 minor
 
 
 coordinator response test
 
 
 comments
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
  6c26667182d7fa8153469a634881a7c34d8a0c91 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitResponse.java
  70844d65369f6ff300cbeb513dbb6650050c7eec 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java
  b5e8a0ff0aaf9df382b91f93e7ad51b1ed5f6209 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchResponse.java
  512a0ef7e619d54e74122c38119209f5cf9590e3 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java
  613b192ba84b66f79b45f3cd70418c3f503bee9e 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 dacbdd089f96c67b7bbc1c0cd36490d4fe094c1b 
   core/src/main/scala/kafka/cluster/Partition.scala 
 0990938b33ba7f3bccf373325dbbaee5e45ba8bb 
   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
 6b4242c7cd1df9b3465db0fec35a25102c76cd60 
   core/src/main/scala/kafka/common/Topic.scala 
 ad759786d1c22f67c47808c0b8f227eb2b1a9aa8 
   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
 a385adbd7cb6ed693957df571d175ec36b8eaf94 
   core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala 
 0cd5605bcca78dd8a80e8586e58f8b2529da97d7 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 ad6f05807c61c971e5e60d24bc0c87e989115961 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 52dc728bb1ab4b05e94dc528da1006040e2f28c9 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 5cca85cf727975f6d3acb2223fd186753ad761dc 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
   core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
 17b17b9b1520c7cc2e2ba96cdb1f9ff06e47bcad 
   core/src/test/scala/integration/kafka/api/IntegrationTestHarness.scala 
 07b1ff47bfc3cd3f948c9533c8dc977fa36d996f 
   core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
 c7136f20972614ac47aa57ab13e3c94ef775a4b7 
   core/src/test/scala/unit/kafka/consumer/TopicFilterTest.scala 
 4f124af5c3e946045a78ad1519c37372a72c8985 
   
 core/src/test/scala/unit/kafka/coordinator/ConsumerCoordinatorResponseTest.scala
  a44fbd653b53649368db2656c3be3e14e3455163 
   core/src/test/scala/unit/kafka/coordinator/CoordinatorMetadataTest.scala 
 08854c5e6ec249368206298b2ac2623df18f266a 
   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
 528525b719ec916e16f8b3ae3715bec4b5dcc47d 
 
 Diff: https://reviews.apache.org/r/35231/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Guozhang Wang
 




[jira] [Updated] (KAFKA-1735) MemoryRecords.Iterator needs to handle partial reads from compressed stream

2015-07-01 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1735:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 MemoryRecords.Iterator needs to handle partial reads from compressed stream
 ---

 Key: KAFKA-1735
 URL: https://issues.apache.org/jira/browse/KAFKA-1735
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Guozhang Wang
 Fix For: 0.9.0

 Attachments: KAFKA-1735.patch


 Found a bug in the MemoryRecords.Iterator implementation, where 
 {code}
 stream.read(recordBuffer, 0, size)
 {code}
 can read fewer than size bytes, and the rest of the recordBuffer would be set 
 to \0.

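For context, `InputStream.read(byte[], int, int)` may return fewer bytes than
requested; the usual fix is a read-fully loop along these lines (an
illustrative sketch, not the committed patch):

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyExample {
    // Keep reading until 'size' bytes have been consumed or the stream ends,
    // instead of assuming a single read() call fills the buffer.
    static void readFully(InputStream stream, byte[] buffer, int size) throws IOException {
        int offset = 0;
        while (offset < size) {
            int read = stream.read(buffer, offset, size - offset);
            if (read < 0) {
                throw new EOFException("Stream ended after " + offset + " of " + size + " bytes");
            }
            offset += read;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 2, 3, 4, 5};
        byte[] recordBuffer = new byte[5];
        readFully(new ByteArrayInputStream(data), recordBuffer, 5);
        System.out.println(recordBuffer[4]); // last byte was actually read
    }
}
```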


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Kafka-trunk #523

2015-07-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Kafka-trunk/523/changes



[jira] [Updated] (KAFKA-2168) New consumer poll() can block other calls like position(), commit(), and close() indefinitely

2015-07-01 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2168:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 New consumer poll() can block other calls like position(), commit(), and 
 close() indefinitely
 -

 Key: KAFKA-2168
 URL: https://issues.apache.org/jira/browse/KAFKA-2168
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Jason Gustafson
Priority: Critical
 Fix For: 0.8.3

 Attachments: KAFKA-2168.patch, KAFKA-2168_2015-06-01_16:03:38.patch, 
 KAFKA-2168_2015-06-02_17:09:37.patch, KAFKA-2168_2015-06-03_18:20:23.patch, 
 KAFKA-2168_2015-06-03_21:06:42.patch, KAFKA-2168_2015-06-04_14:36:04.patch, 
 KAFKA-2168_2015-06-05_12:01:28.patch, KAFKA-2168_2015-06-05_12:44:48.patch, 
 KAFKA-2168_2015-06-11_14:09:59.patch, KAFKA-2168_2015-06-18_14:39:36.patch, 
 KAFKA-2168_2015-06-19_09:19:02.patch, KAFKA-2168_2015-06-22_16:34:37.patch, 
 KAFKA-2168_2015-06-23_09:39:07.patch, KAFKA-2168_2015-06-30_10:54:22.patch


 The new consumer is currently using very coarse-grained synchronization. For 
 most methods this isn't a problem since they finish quickly once the lock is 
 acquired, but poll() might run for a long time (and commonly will since 
 polling with long timeouts is a normal use case). This means any operations 
 invoked from another thread may block until the poll() call completes.
 Some example use cases where this can be a problem:
 * A shutdown hook is registered to trigger shutdown and invokes close(). It 
 gets invoked from another thread and blocks indefinitely.
 * A user wants to manage offset commits themselves in a background thread. If 
 the commit policy is not purely time based, it's not currently possible to 
 make sure the call to commit() will be processed promptly.
 Two possible solutions to this:
 1. Make sure a lock is not held during the actual select call. Since we have 
 multiple layers (KafkaConsumer - NetworkClient - Selector - nio Selector) 
 this is probably hard to make work cleanly since locking is currently only 
 performed at the KafkaConsumer level and we'd want it unlocked around a 
 single line of code in Selector.
 2. Wake up the selector before synchronizing for certain operations. This 
 would require some additional coordination to make sure the caller of 
 wakeup() is able to acquire the lock promptly (instead of, e.g., the poll() 
 thread being woken up and then promptly reacquiring the lock with a 
 subsequent long poll() call).

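The second option above (wake up the selector before synchronizing) can be
sketched with a plain nio Selector and a lock standing in for the consumer's
coarse-grained synchronization (a minimal illustration, not the actual
KafkaConsumer code):

```java
import java.io.IOException;
import java.nio.channels.Selector;

public class SelectorWakeupSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        final Selector selector = Selector.open();
        final Object lock = new Object();

        Thread poller = new Thread(() -> {
            synchronized (lock) {
                try {
                    // This is the long poll() that would otherwise hold the
                    // lock indefinitely while blocked in select().
                    selector.select();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        poller.start();

        // Another thread (e.g. commit() or close()) first wakes the selector,
        // so the poller exits select(), releases the lock, and we can proceed.
        // wakeup() is sticky, so this works even if select() hasn't started yet.
        Thread.sleep(100); // demo only: give the poller a chance to block
        selector.wakeup();
        synchronized (lock) {
            System.out.println("lock acquired after wakeup");
        }
        poller.join();
        selector.close();
    }
}
```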


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2168) New consumer poll() can block other calls like position(), commit(), and close() indefinitely

2015-07-01 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611104#comment-14611104
 ] 

Guozhang Wang commented on KAFKA-2168:
--

Committed the follow-up patch to trunk. Closing.

 New consumer poll() can block other calls like position(), commit(), and 
 close() indefinitely
 -

 Key: KAFKA-2168
 URL: https://issues.apache.org/jira/browse/KAFKA-2168
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Jason Gustafson
Priority: Critical
 Fix For: 0.8.3

 Attachments: KAFKA-2168.patch, KAFKA-2168_2015-06-01_16:03:38.patch, 
 KAFKA-2168_2015-06-02_17:09:37.patch, KAFKA-2168_2015-06-03_18:20:23.patch, 
 KAFKA-2168_2015-06-03_21:06:42.patch, KAFKA-2168_2015-06-04_14:36:04.patch, 
 KAFKA-2168_2015-06-05_12:01:28.patch, KAFKA-2168_2015-06-05_12:44:48.patch, 
 KAFKA-2168_2015-06-11_14:09:59.patch, KAFKA-2168_2015-06-18_14:39:36.patch, 
 KAFKA-2168_2015-06-19_09:19:02.patch, KAFKA-2168_2015-06-22_16:34:37.patch, 
 KAFKA-2168_2015-06-23_09:39:07.patch, KAFKA-2168_2015-06-30_10:54:22.patch


 The new consumer is currently using very coarse-grained synchronization. For 
 most methods this isn't a problem since they finish quickly once the lock is 
 acquired, but poll() might run for a long time (and commonly will since 
 polling with long timeouts is a normal use case). This means any operations 
 invoked from another thread may block until the poll() call completes.
 Some example use cases where this can be a problem:
 * A shutdown hook is registered to trigger shutdown and invokes close(). It 
 gets invoked from another thread and blocks indefinitely.
 * A user wants to manage offset commits themselves in a background thread. If 
 the commit policy is not purely time based, it's not currently possible to 
 make sure the call to commit() will be processed promptly.
 Two possible solutions to this:
 1. Make sure a lock is not held during the actual select call. Since we have 
 multiple layers (KafkaConsumer - NetworkClient - Selector - nio Selector) 
 this is probably hard to make work cleanly since locking is currently only 
 performed at the KafkaConsumer level and we'd want it unlocked around a 
 single line of code in Selector.
 2. Wake up the selector before synchronizing for certain operations. This 
 would require some additional coordination to make sure the caller of 
 wakeup() is able to acquire the lock promptly (instead of, e.g., the poll() 
 thread being woken up and then promptly reacquiring the lock with a 
 subsequent long poll() call).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35231: Address Onur and Jason's comments

2015-07-01 Thread Guozhang Wang


 On June 18, 2015, 12:50 a.m., Jason Gustafson wrote:
  core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala, lines 
  229-236
  https://reviews.apache.org/r/35231/diff/1/?file=980918#file980918line229
 
  In manual group management, would we expect consumerId and generationId 
  to be null?
 
 Guozhang Wang wrote:
 In that case, the consumerId would be UNKNOWN_CONSUMER_ID (the empty 
 string) and the generationId would be -1. These two values are only used 
 inside OffsetManager.storeOffsets for logging.
 
 Jason Gustafson wrote:
 I wonder if it is worthwhile checking that those values are set 
 accordingly? If the generationId were 5, for example, would we want to just 
 commit the offsets blindly? Or would we throw an error?
 
 Guozhang Wang wrote:
 As Onur mentioned, when group == null it is also possible that the group 
 has not been created on the coordinator (when coordinator migrated, for 
 example), and in this case the consumerId / generationId would not be the 
 empty string / -1.
 
 Jason Gustafson wrote:
 That makes sense. I was just thinking this might open the door to having 
 commits from old or invalid generations go through. Unless we store group 
 metadata in zookeeper though, perhaps there is no way to prevent it.
 
 Onur Karaman wrote:
 So I've been meaning to ask something similar.
 
 Guozhang: offline we talked about all offset logic validating generation 
 id before attempting to perform the action. To adjust for this proposed 
 check, at one point we talked about making ConsumerCoordinator more strictly 
 follow the wiki and have the generation id bump happen at the end of 
 rebalance instead of at the beginning so that consumers would be able to 
 commit offsets prior to rebalancing. Given that this rb is about merging in 
 the OffsetManager, should those checks be added here or in a later rb?
 
 Onur Karaman wrote:
 My bad. I missed your generation id check in handleCommitOffsets. But I'm 
 still curious about the generation id bump placement with respect to 
 committing offsets before providing a JoinGroupRequest.

That is a good point. I think we should postpone the generation id bump from 
prepareRebalance() to rebalance(), before the line of 
group.transitionTo(Rebalancing). Does that sound right to you?
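
To make the check being discussed concrete, a commit-side generation
validation could look roughly like this (a purely hypothetical sketch — the
names, error code, and manual-management convention here are illustrative,
not the coordinator's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class GenerationCheckSketch {
    // Hypothetical: current generation per group, as the coordinator sees it.
    private final Map<String, Integer> groupGeneration = new HashMap<>();

    static final short NONE = 0;
    static final short ILLEGAL_GENERATION = 22; // illustrative error code

    short handleCommit(String group, int generationId) {
        Integer current = groupGeneration.get(group);
        if (current == null) {
            // Manual group management (generationId == -1) bypasses the check;
            // otherwise an unknown group with a real generation is rejected.
            return generationId == -1 ? NONE : ILLEGAL_GENERATION;
        }
        // Reject commits from stale generations (e.g. from before a rebalance).
        return generationId == current ? NONE : ILLEGAL_GENERATION;
    }

    public static void main(String[] args) {
        GenerationCheckSketch c = new GenerationCheckSketch();
        c.groupGeneration.put("g1", 5);
        System.out.println(c.handleCommit("g1", 5));  // current generation: accepted
        System.out.println(c.handleCommit("g1", 4));  // stale generation: rejected
        System.out.println(c.handleCommit("g2", -1)); // manual management: accepted
    }
}
```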


- Guozhang


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35231/#review88301
---


On June 30, 2015, 1:44 a.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35231/
 ---
 
 (Updated June 30, 2015, 1:44 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1740
 https://issues.apache.org/jira/browse/KAFKA-1740
 
 
 Repository: kafka
 
 
 Description
 ---
 
 v2
 
 
 minor
 
 
 coordinator response test
 
 
 comments
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
  6c26667182d7fa8153469a634881a7c34d8a0c91 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitResponse.java
  70844d65369f6ff300cbeb513dbb6650050c7eec 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java
  b5e8a0ff0aaf9df382b91f93e7ad51b1ed5f6209 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchResponse.java
  512a0ef7e619d54e74122c38119209f5cf9590e3 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java
  613b192ba84b66f79b45f3cd70418c3f503bee9e 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 dacbdd089f96c67b7bbc1c0cd36490d4fe094c1b 
   core/src/main/scala/kafka/cluster/Partition.scala 
 0990938b33ba7f3bccf373325dbbaee5e45ba8bb 
   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
 6b4242c7cd1df9b3465db0fec35a25102c76cd60 
   core/src/main/scala/kafka/common/Topic.scala 
 ad759786d1c22f67c47808c0b8f227eb2b1a9aa8 
   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
 a385adbd7cb6ed693957df571d175ec36b8eaf94 
   core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala 
 0cd5605bcca78dd8a80e8586e58f8b2529da97d7 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 ad6f05807c61c971e5e60d24bc0c87e989115961 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 52dc728bb1ab4b05e94dc528da1006040e2f28c9 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 5cca85cf727975f6d3acb2223fd186753ad761dc 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
   core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
 17b17b9b1520c7cc2e2ba96cdb1f9ff06e47bcad 
   

Re: EOL JDK 1.6 for Kafka

2015-07-01 Thread Jay Kreps
+1

-Jay

On Wed, Jul 1, 2015 at 3:32 PM, Guozhang Wang wangg...@gmail.com wrote:

 +1.

 On Wed, Jul 1, 2015 at 1:00 PM, Gwen Shapira gshap...@cloudera.com
 wrote:

  Huge +1.
 
  I don't think there is any other project that still supports 1.6.
 
  On Wed, Jul 1, 2015 at 8:05 AM, Harsha ka...@harsha.io wrote:
   Hi,
   During our SSL Patch KAFKA-1690. Some of the reviewers/users
   asked for support this config
  
 
 https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#setEndpointIdentificationAlgorithm-java.lang.String-
   It allows clients to verify the server and prevent potential MITM. This
   api doesn't exist in Java 1.6.
   Are there any users still want 1.6 support or can we stop supporting
 1.6
   from next release on wards.
  
   Thanks,
   Harsha
 



 --
 -- Guozhang



Re: EOL JDK 1.6 for Kafka

2015-07-01 Thread Neha Narkhede
+1. It's about time.

On Wed, Jul 1, 2015 at 3:52 PM, Jay Kreps j...@confluent.io wrote:

 +1

 -Jay

 On Wed, Jul 1, 2015 at 3:32 PM, Guozhang Wang wangg...@gmail.com wrote:

  +1.
 
  On Wed, Jul 1, 2015 at 1:00 PM, Gwen Shapira gshap...@cloudera.com
  wrote:
 
   Huge +1.
  
   I don't think there is any other project that still supports 1.6.
  
   On Wed, Jul 1, 2015 at 8:05 AM, Harsha ka...@harsha.io wrote:
Hi,
During our SSL Patch KAFKA-1690. Some of the reviewers/users
asked for support this config
   
  
 
 https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#setEndpointIdentificationAlgorithm-java.lang.String-
It allows clients to verify the server and prevent potential MITM.
 This
api doesn't exist in Java 1.6.
Are there any users still want 1.6 support or can we stop supporting
  1.6
from next release on wards.
   
Thanks,
Harsha
  
 
 
 
  --
  -- Guozhang
 




-- 
Thanks,
Neha


[jira] [Commented] (KAFKA-1735) MemoryRecords.Iterator needs to handle partial reads from compressed stream

2015-07-01 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611164#comment-14611164
 ] 

Guozhang Wang commented on KAFKA-1735:
--

This bug has been resolved in another ticket, closing.

 MemoryRecords.Iterator needs to handle partial reads from compressed stream
 ---

 Key: KAFKA-1735
 URL: https://issues.apache.org/jira/browse/KAFKA-1735
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
Assignee: Guozhang Wang
 Fix For: 0.9.0

 Attachments: KAFKA-1735.patch


 Found a bug in the MemoryRecords.Iterator implementation, where 
 {code}
 stream.read(recordBuffer, 0, size)
 {code}
 can read fewer than size bytes, and the rest of the recordBuffer would be set 
 to \0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-996) Capitalize first letter for log entries

2015-07-01 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-996:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

 Capitalize first letter for log entries
 ---

 Key: KAFKA-996
 URL: https://issues.apache.org/jira/browse/KAFKA-996
 Project: Kafka
  Issue Type: Improvement
Reporter: Guozhang Wang
Assignee: Guozhang Wang
 Attachments: KAFKA-996.v1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2301) Deprecate ConsumerOffsetChecker

2015-07-01 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2301:
--
Attachment: KAFKA-2301_2015-07-01_17:46:34.patch

 Deprecate ConsumerOffsetChecker
 ---

 Key: KAFKA-2301
 URL: https://issues.apache.org/jira/browse/KAFKA-2301
 Project: Kafka
  Issue Type: Sub-task
  Components: tools
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Fix For: 0.8.3

 Attachments: KAFKA-2301.patch, KAFKA-2301_2015-07-01_17:46:34.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2301) Deprecate ConsumerOffsetChecker

2015-07-01 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611253#comment-14611253
 ] 

Ashish K Singh commented on KAFKA-2301:
---

Updated reviewboard https://reviews.apache.org/r/35850/
 against branch trunk

 Deprecate ConsumerOffsetChecker
 ---

 Key: KAFKA-2301
 URL: https://issues.apache.org/jira/browse/KAFKA-2301
 Project: Kafka
  Issue Type: Sub-task
  Components: tools
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Fix For: 0.8.3

 Attachments: KAFKA-2301.patch, KAFKA-2301_2015-07-01_17:46:34.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2307) Drop ConsumerOffsetChecker completely

2015-07-01 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2307:
-

 Summary: Drop ConsumerOffsetChecker completely
 Key: KAFKA-2307
 URL: https://issues.apache.org/jira/browse/KAFKA-2307
 Project: Kafka
  Issue Type: Sub-task
  Components: tools
Reporter: Ashish K Singh
Assignee: Ashish K Singh


ConsumerOffsetChecker has been replaced by ConsumerGroupCommand and is 
deprecated in 0.9.0. It should be dropped in 0.9.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Deprecation of ConsumerOffsetChecker

2015-07-01 Thread Ashish Singh
Hey Guys,

In last KIP hangout, we decided on following path for deprecating
ConsumerOffsetChecker.

1. Add a deprecation warning to the tool for one release. In this case, the
warning will be added in 0.9.0.
2. Drop it completely in the next release, 0.9.1.

I have updated KIP-23
(https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=56852556)
accordingly. KAFKA-2307 (https://issues.apache.org/jira/browse/KAFKA-2307)
is to remind us that we need to drop the tool in 0.9.1.

Let me know if I am missing out on any step that we decided on for the
deprecation.

-- 

Regards,
Ashish


Re: Review Request 34554: Patch for KAFKA-2205

2015-07-01 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/
---

(Updated July 2, 2015, 1:38 a.m.)


Review request for kafka and Joel Koshy.


Bugs: KAFKA-2205
https://issues.apache.org/jira/browse/KAFKA-2205


Repository: kafka


Description (updated)
---

Some fixes


KAFKA-2205


KAFKA-2205


Addressing Jun's comments


Diffs (updated)
-

  core/src/main/scala/kafka/admin/AdminUtils.scala 
f06edf41c732a7b794e496d0048b0ce6f897e72b 
  core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
8e6f18633b25bf1beee3f813b28ef7aa7d779d7b 
  core/src/main/scala/kafka/cluster/Partition.scala 
730a232482fdf77be5704cdf5941cfab3828db88 
  core/src/main/scala/kafka/controller/KafkaController.scala 
69bba243a9a511cc5292b43da0cc48e421a428b0 
  core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
3b15ab4eef22c6f50a7483e99a6af40fb55aca9f 
  core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
64ecb499f24bc801d48f86e1612d927cc08e006d 
  core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaServer.scala 
ea6d165d8e5c3146d2c65e8ad1a513308334bf6f 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
  core/src/main/scala/kafka/server/TopicConfigManager.scala 
b675a7e45ea4f4179f8b15fe221fd988aff13aa0 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
2618dd39b925b979ad6e4c0abd5c6eaafb3db5d5 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
efb2f8e79b3faef78722774b951fea828cd50374 
  core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
c7136f20972614ac47aa57ab13e3c94ef775a4b7 
  core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
7877f6ca1845c2edbf96d4a9783a07a552db8f07 

Diff: https://reviews.apache.org/r/34554/diff/


Testing
---

1. Added new testcases for new code.
2. Verified that both topic and client configs can be changed dynamically by 
starting a local cluster


Thanks,

Aditya Auradkar



[jira] [Updated] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-07-01 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-1367:
--
Attachment: KAFKA-1367_2015-07-01_17:23:14.patch

 Broker topic metadata not kept in sync with ZooKeeper
 -

 Key: KAFKA-1367
 URL: https://issues.apache.org/jira/browse/KAFKA-1367
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1
Reporter: Ryan Berdeen
Assignee: Ashish K Singh
  Labels: newbie++
 Fix For: 0.8.3

 Attachments: KAFKA-1367.patch, KAFKA-1367.txt, 
 KAFKA-1367_2015-07-01_17:23:14.patch


 When a broker is restarted, the topic metadata responses from the brokers 
 will be incorrect (different from ZooKeeper) until a preferred replica leader 
 election.
 In the metadata, it looks like leaders are correctly removed from the ISR 
 when a broker disappears, but followers are not. Then, when a broker 
 reappears, the ISR is never updated.
 I used a variation of the Vagrant setup created by Joe Stein to reproduce 
 this with latest from the 0.8.1 branch: 
 https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-07-01 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611233#comment-14611233
 ] 

Ashish K Singh commented on KAFKA-1367:
---

Updated reviewboard https://reviews.apache.org/r/35820/
 against branch trunk

 Broker topic metadata not kept in sync with ZooKeeper
 -

 Key: KAFKA-1367
 URL: https://issues.apache.org/jira/browse/KAFKA-1367
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1
Reporter: Ryan Berdeen
Assignee: Ashish K Singh
  Labels: newbie++
 Fix For: 0.8.3

 Attachments: KAFKA-1367.patch, KAFKA-1367.txt, 
 KAFKA-1367_2015-07-01_17:23:14.patch


 When a broker is restarted, the topic metadata responses from the brokers 
 will be incorrect (different from ZooKeeper) until a preferred replica leader 
 election.
 In the metadata, it looks like leaders are correctly removed from the ISR 
 when a broker disappears, but followers are not. Then, when a broker 
 reappears, the ISR is never updated.
 I used a variation of the Vagrant setup created by Joe Stein to reproduce 
 this with latest from the 0.8.1 branch: 
 https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35820: Patch for KAFKA-1367

2015-07-01 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35820/
---

(Updated July 2, 2015, 12:23 a.m.)


Review request for kafka.


Bugs: KAFKA-1367
https://issues.apache.org/jira/browse/KAFKA-1367


Repository: kafka


Description
---

KAFKA-1367: Broker topic metadata not kept in sync with ZooKeeper


Diffs (updated)
-

  core/src/main/scala/kafka/common/TopicAndPartition.scala 
df3db912f5daef6a25b4b2dd2220d2cc3795bce6 
  core/src/main/scala/kafka/controller/KafkaController.scala 
36350579b16027359d237b64699003358704ac6f 
  core/src/main/scala/kafka/utils/ReplicationUtils.scala 
60687332b4c9bee4d4c0851314cfb4b02d5d3489 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
78475e3d5ec477cef00caeaa34ff2d196466be96 
  core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
995b05901491bb0dbf0df210d44bd1d7f66fdc82 
  core/src/test/scala/unit/kafka/utils/ReplicationUtilsTest.scala 
c96c0ffd958d63c09880d436b2e5ae96f51ead36 

Diff: https://reviews.apache.org/r/35820/diff/


Testing
---

Tested on a test cluster with 3 Kafka brokers


Thanks,

Ashish Singh



Re: Review Request 35820: Patch for KAFKA-1367

2015-07-01 Thread Ashish Singh


 On June 30, 2015, 4:42 p.m., Jun Rao wrote:
  Thanks for the patch. A few comments below.
  
  Also, could we add a unit test for this?

Thanks for the review Jun! Addressed your concerns and added a test that 
re-produces the issue and verifies the fix.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35820/#review89911
---


On July 2, 2015, 12:23 a.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35820/
 ---
 
 (Updated July 2, 2015, 12:23 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1367
 https://issues.apache.org/jira/browse/KAFKA-1367
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-1367: Broker topic metadata not kept in sync with ZooKeeper
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/common/TopicAndPartition.scala 
 df3db912f5daef6a25b4b2dd2220d2cc3795bce6 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 36350579b16027359d237b64699003358704ac6f 
   core/src/main/scala/kafka/utils/ReplicationUtils.scala 
 60687332b4c9bee4d4c0851314cfb4b02d5d3489 
   core/src/main/scala/kafka/utils/ZkUtils.scala 
 78475e3d5ec477cef00caeaa34ff2d196466be96 
   core/src/test/scala/unit/kafka/integration/TopicMetadataTest.scala 
 995b05901491bb0dbf0df210d44bd1d7f66fdc82 
   core/src/test/scala/unit/kafka/utils/ReplicationUtilsTest.scala 
 c96c0ffd958d63c09880d436b2e5ae96f51ead36 
 
 Diff: https://reviews.apache.org/r/35820/diff/
 
 
 Testing
 ---
 
 Tested on a test cluster with 3 Kafka brokers
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Updated] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-07-01 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-1367:
--
Status: Patch Available  (was: In Progress)

 Broker topic metadata not kept in sync with ZooKeeper
 -

 Key: KAFKA-1367
 URL: https://issues.apache.org/jira/browse/KAFKA-1367
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1
Reporter: Ryan Berdeen
Assignee: Ashish K Singh
  Labels: newbie++
 Fix For: 0.8.3

 Attachments: KAFKA-1367.patch, KAFKA-1367.txt, 
 KAFKA-1367_2015-07-01_17:23:14.patch


 When a broker is restarted, the topic metadata responses from the brokers 
 will be incorrect (different from ZooKeeper) until a preferred replica leader 
 election.
 In the metadata, it looks like leaders are correctly removed from the ISR 
 when a broker disappears, but followers are not. Then, when a broker 
 reappears, the ISR is never updated.
 I used a variation of the Vagrant setup created by Joe Stein to reproduce 
 this with latest from the 0.8.1 branch: 
 https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35850: Patch for KAFKA-2301

2015-07-01 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35850/
---

(Updated July 2, 2015, 12:46 a.m.)


Review request for kafka.


Bugs: KAFKA-2301
https://issues.apache.org/jira/browse/KAFKA-2301


Repository: kafka


Description
---

KAFKA-2301: Deprecate ConsumerOffsetChecker


Diffs (updated)
-

  core/src/main/scala/kafka/tools/ConsumerOffsetChecker.scala 
3d52f62c88a509a655cf1df6232b738c25fa9b69 

Diff: https://reviews.apache.org/r/35850/diff/


Testing
---


Thanks,

Ashish Singh



[jira] [Updated] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-01 Thread Aditya A Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya A Auradkar updated KAFKA-2205:
-
Attachment: KAFKA-2205_2015-07-01_18:38:18.patch

 Generalize TopicConfigManager to handle multiple entity configs
 ---

 Key: KAFKA-2205
 URL: https://issues.apache.org/jira/browse/KAFKA-2205
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch


 Acceptance Criteria:
 - TopicConfigManager should be generalized to handle Topic and Client configs 
 (and any type of config in the future), as described in KIP-21
 - Add a ConfigCommand tool to change topic and client configuration
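Under KIP-21, per-entity overrides are keyed by entity type and entity name in ZooKeeper. A small sketch of the path layout this generalization implies — the /config/topics and /config/clients prefixes follow the KIP-21 proposal, but treat this as an illustration rather than the patch's actual code:

```python
def entity_config_path(entity_type, entity_name):
    """Build the ZooKeeper path holding overrides for one entity.

    KIP-21 proposes /config/topics/<topic> for topic overrides and
    /config/clients/<client-id> for client (quota) overrides.
    """
    if entity_type not in ("topics", "clients"):
        raise ValueError("unknown entity type: %s" % entity_type)
    return "/config/%s/%s" % (entity_type, entity_name)
```

A generalized config manager can then watch one change-notification path and dispatch to the right handler based on the entity type embedded in each notification.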





[jira] [Commented] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-07-01 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611309#comment-14611309
 ] 

Aditya A Auradkar commented on KAFKA-2205:
--

Updated reviewboard https://reviews.apache.org/r/34554/diff/
 against branch trunk

 Generalize TopicConfigManager to handle multiple entity configs
 ---

 Key: KAFKA-2205
 URL: https://issues.apache.org/jira/browse/KAFKA-2205
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2205.patch, KAFKA-2205_2015-07-01_18:38:18.patch


 Acceptance Criteria:
 - TopicConfigManager should be generalized to handle Topic and Client configs 
 (and any type of config in the future), as described in KIP-21
 - Add a ConfigCommand tool to change topic and client configuration





Re: Review Request 34554: Patch for KAFKA-2205

2015-07-01 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34554/
---

(Updated July 2, 2015, 1:39 a.m.)


Review request for kafka and Joel Koshy.


Bugs: KAFKA-2205
https://issues.apache.org/jira/browse/KAFKA-2205


Repository: kafka


Description (updated)
---

KAFKA-2205. Summary of changes:

1. Generalized TopicConfigManager to DynamicConfigManager. It is now able to 
handle multiple types of entities.
2. Changed format of the notification znode as described in KIP-21
3. Replaced TopicConfigManager with DynamicConfigManager.
4. Added new test cases; existing test cases all pass.
5. Added ConfigCommand to handle all config changes. Eventually this will make 
calls to the broker once the new APIs are built; for now it speaks to ZooKeeper 
directly.
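The notification mechanism in change 2 writes sequential znodes under a change path, each recording which entity was modified so every broker can re-read just that entity's config. A sketch of decoding one such notification, assuming the KIP-21-style JSON fields (version, entity_type, entity_name) — illustrative only, not the patch's code:

```python
import json

def parse_config_change_notification(payload):
    """Decode one sequential notification znode (KIP-21-style format).

    Returns (entity_type, entity_name) identifying which entity's
    config the watcher should re-read from ZooKeeper.
    """
    note = json.loads(payload)
    return note["entity_type"], note["entity_name"]

# Example notification in the proposed format:
payload = b'{"version":1,"entity_type":"clients","entity_name":"clientA"}'
entity_type, entity_name = parse_config_change_notification(payload)
```

Embedding the entity type in the notification is what lets a single DynamicConfigManager replace the topic-only TopicConfigManager: dispatch goes to a per-type ConfigHandler instead of being hard-wired to topics.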


Diffs
-

  core/src/main/scala/kafka/admin/AdminUtils.scala 
f06edf41c732a7b794e496d0048b0ce6f897e72b 
  core/src/main/scala/kafka/admin/ConfigCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/admin/TopicCommand.scala 
8e6f18633b25bf1beee3f813b28ef7aa7d779d7b 
  core/src/main/scala/kafka/cluster/Partition.scala 
730a232482fdf77be5704cdf5941cfab3828db88 
  core/src/main/scala/kafka/controller/KafkaController.scala 
69bba243a9a511cc5292b43da0cc48e421a428b0 
  core/src/main/scala/kafka/controller/PartitionLeaderSelector.scala 
3b15ab4eef22c6f50a7483e99a6af40fb55aca9f 
  core/src/main/scala/kafka/controller/TopicDeletionManager.scala 
64ecb499f24bc801d48f86e1612d927cc08e006d 
  core/src/main/scala/kafka/server/ConfigHandler.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaServer.scala 
ea6d165d8e5c3146d2c65e8ad1a513308334bf6f 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
  core/src/main/scala/kafka/server/TopicConfigManager.scala 
b675a7e45ea4f4179f8b15fe221fd988aff13aa0 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
2618dd39b925b979ad6e4c0abd5c6eaafb3db5d5 
  core/src/test/scala/unit/kafka/admin/AdminTest.scala 
efb2f8e79b3faef78722774b951fea828cd50374 
  core/src/test/scala/unit/kafka/admin/ConfigCommandTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
c7136f20972614ac47aa57ab13e3c94ef775a4b7 
  core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala 
7877f6ca1845c2edbf96d4a9783a07a552db8f07 

Diff: https://reviews.apache.org/r/34554/diff/


Testing
---

1. Added new test cases for new code.
2. Verified that both topic and client configs can be changed dynamically by 
starting a local cluster.


Thanks,

Aditya Auradkar