[jira] [Updated] (KAFKA-1664) Kafka does not properly parse multiple ZK nodes with non-root chroot

2014-12-10 Thread Ashish Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Kumar Singh updated KAFKA-1664:
--
Attachment: (was: KAFKA-1664.1.patch)

> Kafka does not properly parse multiple ZK nodes with non-root chroot
> 
>
> Key: KAFKA-1664
> URL: https://issues.apache.org/jira/browse/KAFKA-1664
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Ricky Saltzer
>Assignee: Ashish Kumar Singh
>Priority: Minor
>  Labels: newbie
> Attachments: KAFKA-1664.1.patch, KAFKA-1664.2.patch, KAFKA-1664.patch
>
>
> When using a non-root ZK directory for Kafka, if you specify multiple ZK 
> servers, Kafka does not seem to properly parse the connection string. 
> *Error*
> {code}
> [root@hodor-001 bin]# ./kafka-console-consumer.sh --zookeeper 
> baelish-001.edh.cloudera.com:2181/kafka,baelish-002.edh.cloudera.com:2181/kafka,baelish-003.edh.cloudera.com:2181/kafka
>  --topic test-topic
> [2014-10-01 15:31:04,629] ERROR Error processing message, stopping consumer:  
> (kafka.consumer.ConsoleConsumer$)
> java.lang.IllegalArgumentException: Path length must be > 0
>   at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:48)
>   at org.apache.zookeeper.common.PathUtils.validatePath(PathUtils.java:35)
>   at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:766)
>   at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:87)
>   at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:308)
>   at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
>   at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
>   at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:304)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:213)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:223)
>   at kafka.utils.ZkUtils$.createParentPath(ZkUtils.scala:245)
>   at kafka.utils.ZkUtils$.createEphemeralPath(ZkUtils.scala:256)
>   at 
> kafka.utils.ZkUtils$.createEphemeralPathExpectConflict(ZkUtils.scala:268)
>   at 
> kafka.utils.ZkUtils$.createEphemeralPathExpectConflictHandleZKBug(ZkUtils.scala:306)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$registerConsumerInZK(ZookeeperConsumerConnector.scala:226)
>   at 
> kafka.consumer.ZookeeperConsumerConnector$WildcardStreamsHandler.<init>(ZookeeperConsumerConnector.scala:755)
>   at 
> kafka.consumer.ZookeeperConsumerConnector.createMessageStreamsByFilter(ZookeeperConsumerConnector.scala:145)
>   at kafka.consumer.ConsoleConsumer$.main(ConsoleConsumer.scala:196)
>   at kafka.consumer.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}
> *Working*
> {code}
> [root@hodor-001 bin]# ./kafka-console-consumer.sh --zookeeper 
> baelish-001.edh.cloudera.com:2181/kafka --topic test-topic
> {code}
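
For reference, the ZooKeeper connect-string convention the consumer inherits: a single optional chroot suffix comes after the last host and applies to the whole ensemble, so "host1:2181,host2:2181,host3:2181/kafka" is the supported form, while a per-host chroot as in the failing command above is not. A minimal Scala sketch of that convention (hypothetical helper, not the attached patch):

{code}
// Hypothetical helper, not the KAFKA-1664 patch itself. ZooKeeper treats
// everything from the first '/' onward as one chroot path, so a per-host
// chroot ("host1:2181/kafka,host2:2181/kafka") yields a garbage chroot
// that fails path validation further down the stack.
object ZkConnectString {
  def parse(connect: String): (Seq[String], Option[String]) = {
    val slash = connect.indexOf('/')
    if (slash < 0)
      (connect.split(",").toSeq, None)            // no chroot at all
    else
      (connect.take(slash).split(",").toSeq,      // hosts precede the chroot
       Some(connect.drop(slash)))                 // chroot keeps its leading '/'
  }
}

// parse("h1:2181,h2:2181,h3:2181/kafka")
//   => (Seq("h1:2181", "h2:2181", "h3:2181"), Some("/kafka"))
{code}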



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1664) Kafka does not properly parse multiple ZK nodes with non-root chroot

2014-12-10 Thread Ashish Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Kumar Singh updated KAFKA-1664:
--
Attachment: KAFKA-1664.1.patch



[jira] [Updated] (KAFKA-1664) Kafka does not properly parse multiple ZK nodes with non-root chroot

2014-12-10 Thread Ashish Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Kumar Singh updated KAFKA-1664:
--
Attachment: KAFKA-1664.2.patch



Re: Review Request 28108: KAFKA-1664: Kafka does not properly parse multiple ZK nodes with non-root chroot

2014-12-10 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28108/
---

(Updated Dec. 11, 2014, 6:16 a.m.)


Review request for kafka.


Changes
---

Link JIRA.


Bugs: KAFKA-1664
https://issues.apache.org/jira/browse/KAFKA-1664


Repository: kafka


Description
---

KAFKA-1664: Kafka does not properly parse multiple ZK nodes with non-root chroot


Diffs
-

  core/src/main/scala/kafka/utils/ZkUtils.scala 
56e3e88e0cc6d917b0ffd1254e173295c1c4aabd 
  core/src/test/scala/unit/kafka/zk/ZKPathTest.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/28108/diff/


Testing
---

Tested with and without the fix.


Thanks,

Ashish Singh



Re: Review Request 28108: KAFKA-1664: Kafka does not properly parse multiple ZK nodes with non-root chroot

2014-12-10 Thread Ashish Singh


> On Nov. 26, 2014, 1:05 a.m., Neha Narkhede wrote:
> > Can you add a few test cases?

Done.


- Ashish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28108/#review63083
---


On Dec. 11, 2014, 6:11 a.m., Ashish Singh wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28108/
> ---
> 
> (Updated Dec. 11, 2014, 6:11 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1664: Kafka does not properly parse multiple ZK nodes with non-root 
> chroot
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/utils/ZkUtils.scala 
> 56e3e88e0cc6d917b0ffd1254e173295c1c4aabd 
>   core/src/test/scala/unit/kafka/zk/ZKPathTest.scala PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/28108/diff/
> 
> 
> Testing
> ---
> 
> Tested with and without the fix.
> 
> 
> Thanks,
> 
> Ashish Singh
> 
>



Re: Review Request 28108: KAFKA-1664: Kafka does not properly parse multiple ZK nodes with non-root chroot

2014-12-10 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28108/
---

(Updated Dec. 11, 2014, 6:11 a.m.)


Review request for kafka.


Changes
---

Add unit tests for ZkPath.


Repository: kafka


Description
---

KAFKA-1664: Kafka does not properly parse multiple ZK nodes with non-root chroot


Diffs (updated)
-

  core/src/main/scala/kafka/utils/ZkUtils.scala 
56e3e88e0cc6d917b0ffd1254e173295c1c4aabd 
  core/src/test/scala/unit/kafka/zk/ZKPathTest.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/28108/diff/


Testing
---

Tested with and without the fix.


Thanks,

Ashish Singh



Re: Review Request 28859: Patch for KAFKA-1812

2014-12-10 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28859/#review64689
---

Ship it!


Non-binding +1

- Gwen Shapira


On Dec. 11, 2014, 2:39 a.m., Jeff Holoman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28859/
> ---
> 
> (Updated Dec. 11, 2014, 2:39 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1812
> https://issues.apache.org/jira/browse/KAFKA-1812
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1812 Updated based on rb comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/utils/Utils.scala 
> 58685cc47b4c43e4ee68b73f1ee34eb99a5aa547 
>   core/src/test/scala/unit/kafka/utils/UtilsTest.scala 
> 0d0f0e2fba367180eeb718a259e8d680a73c3a73 
> 
> Diff: https://reviews.apache.org/r/28859/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jeff Holoman
> 
>



[jira] [Commented] (KAFKA-1812) Allow IpV6 in configuration with parseCsvMap

2014-12-10 Thread Jeff Holoman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14242036#comment-14242036
 ] 

Jeff Holoman commented on KAFKA-1812:
-

Updated reviewboard https://reviews.apache.org/r/28859/diff/
 against branch origin/trunk

>  Allow IpV6 in configuration with parseCsvMap
> -
>
> Key: KAFKA-1812
> URL: https://issues.apache.org/jira/browse/KAFKA-1812
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
> Attachments: KAFKA-1812_2014-12-10_21:38:59.patch
>
>
> The current implementation of parseCsvMap in Utils expects k:v,k:v. This 
> modifies that function to accept a string with multiple ":" characters, 
> splitting on the last occurrence per pair. 
> This limitation is noted in the Reviewboard comments for KAFKA-1512
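
A minimal Scala sketch of the change described above (assumed behavior, not the attached patch): split each comma-separated pair on its last ':' so keys that themselves contain colons, such as IPv6 addresses, survive intact.

{code}
// Hypothetical sketch of the described parseCsvMap behavior; splitting on
// the LAST ':' keeps IPv6 keys like "::1" in one piece.
def parseCsvMap(str: String): Map[String, String] =
  str.split(",").filter(_.nonEmpty).map { pair =>
    val i = pair.lastIndexOf(':')                // last ':' separates key from value
    (pair.take(i).trim, pair.drop(i + 1).trim)   // key may contain ':' itself
  }.toMap

// parseCsvMap("::1:100,127.0.0.1:200")
//   => Map("::1" -> "100", "127.0.0.1" -> "200")
{code}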





[jira] [Updated] (KAFKA-1812) Allow IpV6 in configuration with parseCsvMap

2014-12-10 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1812:

Attachment: KAFKA-1812_2014-12-10_21:38:59.patch



[jira] [Updated] (KAFKA-1812) Allow IpV6 in configuration with parseCsvMap

2014-12-10 Thread Jeff Holoman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Holoman updated KAFKA-1812:

Status: Patch Available  (was: Open)



Re: Review Request 28859: Patch for KAFKA-1812

2014-12-10 Thread Jeff Holoman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28859/
---

(Updated Dec. 11, 2014, 2:39 a.m.)


Review request for kafka.


Bugs: KAFKA-1812
https://issues.apache.org/jira/browse/KAFKA-1812


Repository: kafka


Description (updated)
---

KAFKA-1812 Updated based on rb comments


Diffs (updated)
-

  core/src/main/scala/kafka/utils/Utils.scala 
58685cc47b4c43e4ee68b73f1ee34eb99a5aa547 
  core/src/test/scala/unit/kafka/utils/UtilsTest.scala 
0d0f0e2fba367180eeb718a259e8d680a73c3a73 

Diff: https://reviews.apache.org/r/28859/diff/


Testing
---


Thanks,

Jeff Holoman



Re: Review Request 28423: Patch for kafka-1797

2014-12-10 Thread Neha Narkhede

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28423/#review64639
---


Build failed for Scala 2.9.1


clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


Can you also add @param entries for the serializers?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


ditto here



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


Can you add param entries here?



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


and here



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


why are these imports required?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


All the examples also need to be fixed with the type parameter changes



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


params for the serializers?



clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java


same here



clients/src/main/java/org/apache/kafka/clients/producer/ByteArraySerializer.java


minor nit: How about 'do nothing'?



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


params



clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java


ditto here



clients/src/main/java/org/apache/kafka/common/errors/DeserializationException.java


Should this be consumer?


- Neha Narkhede
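
To make the asks above concrete, a hedged Scala illustration (names and signatures assumed, not taken from the patch) of a constructor overload that accepts the serializers directly, carrying the requested @param entries:

{code}
import java.util.Properties

// Assumed stand-in for the client Serializer interface.
trait Serializer[T] { def serialize(topic: String, data: T): Array[Byte] }

/**
 * Hypothetical overload illustrating the requested doc entries.
 *
 * @param config          the producer configuration
 * @param keySerializer   serializer for record keys; takes precedence over
 *                        any serializer class named in the config
 * @param valueSerializer serializer for record values, likewise
 */
class SketchProducer[K, V](config: Properties,
                           keySerializer: Serializer[K],
                           valueSerializer: Serializer[V])
{code}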


On Dec. 10, 2014, 2:48 a.m., Jun Rao wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28423/
> ---
> 
> (Updated Dec. 10, 2014, 2:48 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: kafka-1797
> https://issues.apache.org/jira/browse/kafka-1797
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
add the constructor that takes serializer and deserializer
> 
> 
> fix java doc
> 
> 
> Diffs
> -
> 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ByteArrayDeserializer.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
> 227f5646ee708af1b861c15237eda2140cfd4900 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java 
> 46efc0c8483acacf42b2984ac3f3b9e0a4566187 
>   clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecord.java 
> 436d8a479166eda29f2672b50fc99f288bbe3fa9 
>   
> clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
> 2ecfc8aaea90a7353bd0dabc4c0ebcc6fd9535ec 
>   clients/src/main/java/org/apache/kafka/clients/consumer/Deserializer.java 
> PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> fe93afa24fc20b03830f1d190a276041d15bd3b9 
>   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
> c3aad3b4d6b677f759583f309061193f2f109250 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/ByteArraySerializer.java
>  PRE-CREATION 
>   clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
> 32f444ebbd27892275af7a0947b86a6b8317a374 
>   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
> c0f1d57e0feb894d9f246058cd0396461afe3225 
>   clients/src/main/java/org/apache/kafka/clients/producer/Producer.java 
> 36e8398416036cab84faad1f07159e5adefd8086 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
> 72d3ddd0c29bf6c08f9e122c8232bc07612cd448 
>   clients/src/main/java/org/apache/kafka/clients/producer/ProducerRecord.java 
> c3181b368b6cf15e7134b04e8ff5655a9321ee0b 
>   clients/src/main/java/org/apache/kafka/clients/producer/Serializer.java 
> PRE-CREATION 
>   
> clients/src/main/java/org/apache/kafka/clients/producer/internals/Partitioner.java
>  40e8234f8771098b097bf757a86d5ac98604df5e 
>   
> clients/src/main/java/org/apache/kafka/clients/tools/ProducerPerformance.java 
> 28175fb71edbe7f090119683b328d6dc4271d9fa 
>   
> clients/src/main/java/org/apache/kafka/common/errors/DeserializationException.java
>  PRE-CREATION 
>   
> clients/src/main/java/org/apach

Re: Review Request 28793: Patch for KAFKA-1784

2014-12-10 Thread Mayuresh Gharat


> On Dec. 10, 2014, 12:37 a.m., Neha Narkhede wrote:
> > core/src/main/scala/kafka/tools/OffsetClient.scala, line 142
> > 
> >
> > This and a lot of the rest of the code exists in ClientUtils. Until the 
> > refactoring is complete, your admin tool can just utilize the existing 
> > APIs to expose commit/fetch through the tool. This class may be 
> > unnecessary.

I can pull out the code and use ClientUtils for now, and once we get the 
other patch with the refactored code we can use that. Does that sound OK? I 
will upload a new patch accordingly.


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28793/#review64477
---


On Dec. 7, 2014, 7:43 p.m., Mayuresh Gharat wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28793/
> ---
> 
> (Updated Dec. 7, 2014, 7:43 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1784
> https://issues.apache.org/jira/browse/KAFKA-1784
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
Offset Client for fetching and committing offsets to Kafka
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/tools/OffsetClient.scala PRE-CREATION 
>   core/src/main/scala/kafka/tools/OffsetClientConfig.scala PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/28793/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Mayuresh Gharat
> 
>



[jira] [Commented] (KAFKA-1806) broker can still expose uncommitted data to a consumer

2014-12-10 Thread lokesh Birla (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241868#comment-14241868
 ] 

lokesh Birla commented on KAFKA-1806:
-

OK. I'll create the steps to reproduce this. Basically I am using the Sarama 
client for Kafka, which is a Go client.

> broker can still expose uncommitted data to a consumer
> --
>
> Key: KAFKA-1806
> URL: https://issues.apache.org/jira/browse/KAFKA-1806
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: lokesh Birla
>Assignee: Neha Narkhede
>
> Although the following issue, https://issues.apache.org/jira/browse/KAFKA-727,
> is marked fixed, I still see it in 0.8.1.1. I am able to 
> reproduce the issue consistently. 
> [2014-08-18 06:43:58,356] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [mmetopic4,2] offset 1940029 from consumer with 
> correlation id 21 (kafka.server.Kaf
> kaApis)
> java.lang.IllegalArgumentException: Attempt to read with a maximum offset 
> (1818353) less than the start offset (1940029).
> at kafka.log.LogSegment.read(LogSegment.scala:136)
> at kafka.log.Log.read(Log.scala:386)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:530)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:476)
> at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:471)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:119)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
> at scala.collection.immutable.Map$Map1.map(Map.scala:107)
> at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:471)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:783)
> at 
> kafka.server.KafkaApis$FetchRequestPurgatory.expire(KafkaApis.scala:765)
> at 
> kafka.server.RequestPurgatory$ExpiredRequestReaper.run(RequestPurgatory.scala:216)
> at java.lang.Thread.run(Thread.java:745)
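
A hedged reading of the guard being tripped above (paraphrased, not the broker source): the fetch path caps reads at a maximum offset, which for ordinary consumers is the partition's high watermark, so a start offset that has run past that cap fails the precondition seen at the top of the trace.

{code}
// Sketch of the invariant only; names are illustrative.
def read(startOffset: Long, maxOffset: Long): Unit =
  require(maxOffset >= startOffset,
    s"Attempt to read with a maximum offset ($maxOffset) less than the " +
    s"start offset ($startOffset).")
{code}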





[jira] [Commented] (KAFKA-1689) automatic migration of log dirs to new locations

2014-12-10 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241540#comment-14241540
 ] 

Chen He commented on KAFKA-1689:


I'm just a newbie to the Kafka community; maybe this one is a toy that I can 
play with.

> automatic migration of log dirs to new locations
> 
>
> Key: KAFKA-1689
> URL: https://issues.apache.org/jira/browse/KAFKA-1689
> Project: Kafka
>  Issue Type: New Feature
>  Components: config, core
>Affects Versions: 0.8.1.1
>Reporter: Javier Alba
>Priority: Minor
>  Labels: newbie++
>
> There is no automated way in Kafka 0.8.1.1 to migrate log data if 
> we want to reconfigure our cluster nodes to use several data directories 
> (where we have mounted new disks) instead of our original data directory.
> For example, say we have our brokers configured with:
>   log.dirs = /tmp/kafka-logs
> And we added 3 new disks and now want our brokers to use them as log.dirs:
>   log.dirs = /srv/data/1,/srv/data/2,/srv/data/3
> It would be great to have an automated way of doing such a migration, of 
> course without losing current data in the cluster.
> It would be ideal to be able to do this migration without losing service.





[jira] [Commented] (KAFKA-1815) ServerShutdownTest fails in trunk.

2014-12-10 Thread Chris Cope (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241460#comment-14241460
 ] 

Chris Cope commented on KAFKA-1815:
---

Dear Committers, I have verified that shutdown_test_fix.patch does indeed fix 
the test failures in trunk. Please apply.

Cheers!

> ServerShutdownTest fails in trunk.
> --
>
> Key: KAFKA-1815
> URL: https://issues.apache.org/jira/browse/KAFKA-1815
> Project: Kafka
>  Issue Type: Bug
>Reporter: Anatoly Fayngelerin
>Priority: Minor
> Attachments: shutdown_test_fix.patch
>
>
> I ran into these failures consistently when trying to build Kafka locally:
> kafka.server.ServerShutdownTest > testCleanShutdown FAILED
> java.lang.NullPointerException
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:105)
> at 
> kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest.testCleanShutdown(ServerShutdownTest.scala:101)
> kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled 
> FAILED
> java.lang.NullPointerException
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:105)
> at 
> kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest.testCleanShutdownWithDeleteTopicEnabled(ServerShutdownTest.scala:114)
> kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup FAILED
> java.lang.NullPointerException
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest$$anonfun$verifyNonDaemonThreadsStatus$2.apply(ServerShutdownTest.scala:147)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:114)
> at 
> scala.collection.TraversableOnce$$anonfun$count$1.apply(TraversableOnce.scala:113)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:105)
> at 
> scala.collection.TraversableOnce$class.count(TraversableOnce.scala:113)
> at scala.collection.mutable.ArrayOps$ofRef.count(ArrayOps.scala:105)
> at 
> kafka.server.ServerShutdownTest.verifyNonDaemonThreadsStatus(ServerShutdownTest.scala:147)
> at 
> kafka.server.ServerShutdownTest.testCleanShutdownAfterFailedStartup(ServerShutdownTest.scala:141)
> It looks like Jenkins also had issues with these tests:
> https://builds.apache.org/job/Kafka-trunk/351/console
> I would like to provide a patch that fixes this.
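
For context, a hedged guess at the kind of guard such a patch needs: Class.getCanonicalName returns null for anonymous and local classes, so a thread filter that calls a String method on it unchecked can throw exactly this NullPointerException. A Scala sketch (illustrative names, not the attached shutdown_test_fix.patch):

{code}
// Hedged sketch, not the attached patch: wrap getCanonicalName in Option
// because it is null for anonymous/local classes.
def countKafkaNonDaemonThreads(): Int =
  Thread.getAllStackTraces.keySet.toArray
    .map(_.asInstanceOf[Thread])
    .count { t =>
      !t.isDaemon && t.isAlive &&
        Option(t.getClass.getCanonicalName)          // may be null
          .exists(_.toLowerCase.startsWith("kafka"))
    }
{code}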





Re: Review Request 28859: Patch for KAFKA-1812

2014-12-10 Thread Gwen Shapira


> On Dec. 10, 2014, 3:21 p.m., Gwen Shapira wrote:
> > Non-binding LGTM from me.
> > 
> > Any chance of cleaning up the whitespace? It makes the change look way bigger 
> > than it really is.
> > 
> > A few suggested improvements below:

My mistake - the whitespace was in the code originally and this patch cleans 
it up.
Sorry, and thanks for the cleanup :)


- Gwen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28859/#review64546
---


On Dec. 9, 2014, 6:11 p.m., Jeff Holoman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28859/
> ---
> 
> (Updated Dec. 9, 2014, 6:11 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1812
> https://issues.apache.org/jira/browse/KAFKA-1812
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1812 initial
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/utils/Utils.scala 
> 58685cc47b4c43e4ee68b73f1ee34eb99a5aa547 
>   core/src/test/scala/unit/kafka/utils/UtilsTest.scala 
> 0d0f0e2fba367180eeb718a259e8d680a73c3a73 
> 
> Diff: https://reviews.apache.org/r/28859/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jeff Holoman
> 
>



[jira] [Resolved] (KAFKA-1814) FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)

2014-12-10 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-1814.
-
Resolution: Won't Fix

Replied on Stack Overflow. 

Closing the JIRA, as this is a known configuration issue and not a bug.

> FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown 
> (kafka.server.KafkaServerStartable)
> --
>
> Key: KAFKA-1814
> URL: https://issues.apache.org/jira/browse/KAFKA-1814
> Project: Kafka
>  Issue Type: Bug
> Environment: OpenSuse 13.2, with installed jdk_1.7_u51, scala-2.11.4 
> and gradle-2.2.1
>Reporter: Stefan 
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> I built Kafka on my system by executing the commands in the order they were 
> written in the README file of the downloaded Kafka folder. Every ./gradlew 
> command completed successfully except the tests (93% PASSED, 19 tests FAILED; 
> the failures are in ProducerTest.class, SyncProducerTest.class, 
> LeaderElectionTest.class, and LogOffsetTest.class, and they fail saying they 
> cannot access a port, so I thought something was using those ports and 
> continued building). I built Kafka, got the .tar.gz, and ran 
> ./bin/zookeeper-server-start.sh ./config/zookeeper.properties, but when I run 
> ./bin/kafka-server-start.sh I get errors and Kafka immediately shuts down. I 
> have posted my problem on 
> http://stackoverflow.com/questions/27381802/kafka-shutting-down-kafka-server-kafkaserver-problems-with-starting-kafka-s
> Could anyone help me?





Re: Review Request 28859: Patch for KAFKA-1812

2014-12-10 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/28859/#review64546
---


Non-binding LGTM from me.

Any chance of cleaning up the whitespace? It makes the change look way bigger 
than it really is.

A few suggested improvements below:


core/src/main/scala/kafka/utils/Utils.scala


Scala has a Pair type. Since we always expect two elements, it would be more 
descriptive and safer to use it instead of the internal Array.



core/src/test/scala/unit/kafka/utils/UtilsTest.scala


Since this is a generic function, I'd add tests for:

1. a list with a single host:port pair (i.e. no commas)
2. IPv4 addresses


- Gwen Shapira


On Dec. 9, 2014, 6:11 p.m., Jeff Holoman wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/28859/
> ---
> 
> (Updated Dec. 9, 2014, 6:11 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1812
> https://issues.apache.org/jira/browse/KAFKA-1812
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> KAFKA-1812 initial
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/utils/Utils.scala 
> 58685cc47b4c43e4ee68b73f1ee34eb99a5aa547 
>   core/src/test/scala/unit/kafka/utils/UtilsTest.scala 
> 0d0f0e2fba367180eeb718a259e8d680a73c3a73 
> 
> Diff: https://reviews.apache.org/r/28859/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Jeff Holoman
> 
>



[jira] [Commented] (KAFKA-1816) Topic configs reset after partition increase

2014-12-10 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241150#comment-14241150
 ] 

Gwen Shapira commented on KAFKA-1816:
-

And the relevant jira:
KAFKA-1562; kafka-topics.sh alter add partitions resets cleanup.policy; patched 
by Jonathan Natkins; reviewed by Jun Rao

> Topic configs reset after partition increase
> 
>
> Key: KAFKA-1816
> URL: https://issues.apache.org/jira/browse/KAFKA-1816
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Andrew Jorgensen
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.3
>
>
> If you alter a topic to increase the number of partitions, the existing 
> configs for that topic are erased. This can be 
> reproduced by doing the following:
> {code:none}
> $ bin/kafka-topics.sh --create --zookeeper localhost --topic test_topic 
> --partitions 5 --config retention.ms=3600
> $ bin/kafka-topics.sh --describe --zookeeper localhost --topic test_topic
> > Topic:test_topic  PartitionCount:5  ReplicationFactor:1  Configs:retention.ms=3600
> $ bin/kafka-topics.sh --alter --zookeeper localhost --topic test_topic 
> --partitions 10
> $ bin/kafka-topics.sh --describe --zookeeper localhost --topic test_topic
> > Topic:test_topic  PartitionCount:10  ReplicationFactor:1  Configs:
> {code}





[jira] [Commented] (KAFKA-1816) Topic configs reset after partition increase

2014-12-10 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241145#comment-14241145
 ] 

Gwen Shapira commented on KAFKA-1816:
-

Looks fixed in 0.8.2:

```
[root@kafkaf-2 ~]# /usr/bin/kafka-topics --create --zookeeper 
kafkaf-1:2181/kafka --topic test_topic --partitions 5 --replication-factor 1 
--config retention.ms=3600

[root@kafkaf-2 ~]# /usr/bin/kafka-topics --describe --zookeeper 
kafkaf-1:2181/kafka --topic test_topic
Topic:test_topic  PartitionCount:5  ReplicationFactor:1  Configs:retention.ms=3600

[root@kafkaf-2 ~]# /usr/bin/kafka-topics --alter --zookeeper 
kafkaf-1:2181/kafka --topic test_topic --partitions 10

[root@kafkaf-2 ~]# /usr/bin/kafka-topics --describe --zookeeper 
kafkaf-1:2181/kafka --topic test_topic
Topic:test_topic  PartitionCount:10  ReplicationFactor:1  Configs:retention.ms=3600
```



[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2014-12-10 Thread Joe Stein (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241143#comment-14241143
 ] 

Joe Stein commented on KAFKA-1207:
--

Hey [~jayson.minard], we have gone back and forth over the last year between 
"build a scheduler" just for Kafka and "build an executor layer that works in 
Marathon/Aurora". What we did first was give Aurora a shot, since it already 
has an executor (Thermos), and see about getting Kafka to run there. The 
script for doing what we did is here: 
https://github.com/stealthly/borealis/blob/master/scripts/kafka.aurora. It 
relied on an undocumented feature in Aurora, which Bill Farner talked about 
when I spoke with him on a podcast: 
http://allthingshadoop.com/2014/10/26/resource-scheduling-and-task-launching-with-apache-mesos-and-apache-aurora-at-twitter/

Anyways, there were/are issues with that implementation, so we decided to give 
Marathon https://mesosphere.github.io/marathon/docs/ a try. We started off 
with this code as a pattern: 
https://github.com/brndnmtthws/kafka-on-marathon, and so far it is working out 
great. It definitely added more work on our side, but it is running and doing 
exactly what we expect.

We have been speaking with others about this too and think we could come up 
with a standalone scheduler that would work out of the box. I don't know if it 
makes sense, though, for that to be a JVM process; we were thinking of writing 
it in Go. One *VERY* important reason to have another shell launching Kafka is 
that you want to be able to change scripts and bounce brokers (you kind of 
have to do this), and if you do a rolling restart or something, Mesos will 
schedule your tasks wherever it wants. Some Kafka improvements are coming that 
mitigate that somewhat 
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements
 but I don't think it would ever be 100% (Kafka is not like Storm or Spark in 
how it runs). On the Mesos side you can manage this with roles and 
constraints, but at the end of the day you are dealing with a *persistent* 
server. The way we have gotten around this is using the shell script as an 
agent that can fetch updated configs, restart the process, etc. There is a new 
feature coming out in Mesos, https://issues.apache.org/jira/browse/MESOS-1554, 
that will make this better; however, I still like the supervisor shell script 
strategy. We could absolutely morph the supervisor shell script strategy into 
a custom scheduler/executor (framework) for Kafka, but I am not sure whether 
the project would accept Go code for this feature. I would be +1 on it going 
in and have a few engineers available to work on it over the next 1-2 months. 
We could also write the whole thing in Java or Scala, though I still don't 
know if that would make it any easier/better to support in the community vs Go.

Would love more thoughts and discussion on this here.

> Launch Kafka from within Apache Mesos
> -
>
> Key: KAFKA-1207
> URL: https://issues.apache.org/jira/browse/KAFKA-1207
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>  Labels: mesos
> Fix For: 0.9.0
>
> Attachments: KAFKA-1207.patch, KAFKA-1207_2014-01-19_00:04:58.patch, 
> KAFKA-1207_2014-01-19_00:48:49.patch
>
>
> There are a few components to this.
> 1) The Framework: This is going to be responsible for starting up and 
> managing the failover of brokers within the Mesos cluster. This will have 
> to get some Kafka-focused parameters for launching new replica brokers and 
> moving topics and partitions around based on what is happening in the grid 
> through time.
> 2) The Scheduler: This is what is going to ask for resources for Kafka 
> brokers (new ones, replacement ones, commissioned ones) and other operations 
> such as stopping tasks (decommissioning brokers). I think this should also 
> expose a user interface (or at least a REST API) for producers and consumers 
> so we can have producers and consumers run inside of the Mesos cluster if 
> folks want (just add the jar).
> 3) The Executor: This is the task launcher. It launches tasks and kills them 
> off.
> 4) Sharing data between Scheduler and Executor: I looked at a few 
> implementations of this. I like parts of the Storm implementation but think 
> using the environment variable 
> ExectorInfo.CommandInfo.Enviornment.Variables[] is the best shot. We can 
> have a command line bin/kafka-mesos-scheduler-start.sh that would build the 
> contrib project if not already built and support conf/server.properties to 
> start.
> The Framework and operating Scheduler would run on an administrative node. 
> I am probably going to hook Apache Curator into it so it can do its own 
> failover to another follower. Running more than 2 should be sufficient as 
> long as it can bring back its state (e.g. from zk). I think we can add this 
> in later once everything is working.
> Additional detail can be found on the Wiki page 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38570672

[jira] [Commented] (KAFKA-1207) Launch Kafka from within Apache Mesos

2014-12-10 Thread Jayson Minard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14240939#comment-14240939
 ] 

Jayson Minard commented on KAFKA-1207:
--

Hi John, what is the current status of this patch? It looks like it has been 
pending review since January.

How does it stand in terms of "working"? :-) In case someone wants to use it 
now.
