Re: [kafka-clients] Re: [VOTE] 0.8.2.0 Candidate 3

2015-01-30 Thread Joe Stein
+1 (binding)

Verified signatures, ran the quick start; all tests passed.

- Joe Stein

On Fri, Jan 30, 2015 at 12:04 PM, Jun Rao  wrote:

> This is a reminder that the vote will close tomorrow night. Please test
> RC3 out and vote before the deadline.
>
> Thanks,
>
> Jun
>
> On Wed, Jan 28, 2015 at 11:22 PM, Jun Rao  wrote:
>
>> This is the third candidate for release of Apache Kafka 0.8.2.0.
>>
>> Release Notes for the 0.8.2.0 release
>>
>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Saturday, Jan 31, 11:30pm PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS in addition to the md5, sha1 and sha2
>> (SHA256) checksum.
>>
>> * Release artifacts to be voted upon (source and binary):
>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/
>>
>> * Maven artifacts to be voted upon prior to release:
>> https://repository.apache.org/content/groups/staging/
>>
>> * scala-doc
>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/scaladoc/
>>
>> * java-doc
>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/javadoc/
>>
>> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
>>
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=223ac42a7a2a0dab378cc411f4938a9cea1eb7ea
>> (commit 7130da90a9ee9e6fb4beb2a2a6ab05c06c9bfac4)
>>
>> /***
>>
>> Thanks,
>>
>> Jun
>>
>>


[jira] [Updated] (KAFKA-1840) Add a simple message handler in Mirror Maker

2015-01-30 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-1840:

Attachment: KAFKA-1840_2015-01-30_18:25:00.patch

> Add a simple message handler in Mirror Maker
> 
>
> Key: KAFKA-1840
> URL: https://issues.apache.org/jira/browse/KAFKA-1840
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1840.patch, KAFKA-1840_2015-01-20_11:36:14.patch, 
> KAFKA-1840_2015-01-30_18:25:00.patch
>
>
> Currently mirror maker simply mirrors all the messages it consumes from the 
> source cluster to the target cluster. It would be useful to allow users to do 
> some simple processing, such as filtering or reformatting, in mirror maker. We 
> can allow users to wire in a message handler to handle messages. The default 
> handler could just do nothing.
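
For illustration, a minimal sketch of what such a pluggable handler could look
like, assuming the handler maps each consumed record to zero or more records
for the new producer (the trait and class names here are hypothetical, not the
committed MirrorMaker API):

import java.util.{Collections, List => JList}
import org.apache.kafka.clients.producer.ProducerRecord

// Hypothetical handler shape: take one record consumed from the source
// cluster, return the records (possibly none) to produce to the target cluster.
trait MirrorMakerMessageHandler {
  def handle(topic: String, key: Array[Byte], value: Array[Byte]): JList[ProducerRecord[Array[Byte], Array[Byte]]]
}

// Default handler: pass every message through unchanged.
class IdentityMessageHandler extends MirrorMakerMessageHandler {
  override def handle(topic: String, key: Array[Byte], value: Array[Byte]) =
    Collections.singletonList(new ProducerRecord[Array[Byte], Array[Byte]](topic, key, value))
}

// Example of "simple processing": drop messages from topics with an
// (illustrative) "internal." prefix, mirror everything else as-is.
class FilteringMessageHandler extends MirrorMakerMessageHandler {
  override def handle(topic: String, key: Array[Byte], value: Array[Byte]) =
    if (topic.startsWith("internal."))
      Collections.emptyList[ProducerRecord[Array[Byte], Array[Byte]]]()
    else
      Collections.singletonList(new ProducerRecord[Array[Byte], Array[Byte]](topic, key, value))
}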



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1840) Add a simple message handler in Mirror Maker

2015-01-30 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299579#comment-14299579
 ] 

Jiangjie Qin commented on KAFKA-1840:
-

Updated reviewboard https://reviews.apache.org/r/30063/diff/
 against branch origin/trunk

> Add a simple message handler in Mirror Maker
> 
>
> Key: KAFKA-1840
> URL: https://issues.apache.org/jira/browse/KAFKA-1840
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Attachments: KAFKA-1840.patch, KAFKA-1840_2015-01-20_11:36:14.patch, 
> KAFKA-1840_2015-01-30_18:25:00.patch
>
>
> Currently mirror maker simply mirrors all the messages it consumes from the 
> source cluster to the target cluster. It would be useful to allow users to do 
> some simple processing, such as filtering or reformatting, in mirror maker. We 
> can allow users to wire in a message handler to handle messages. The default 
> handler could just do nothing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30063: Patch for KAFKA-1840

2015-01-30 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30063/
---

(Updated Jan. 31, 2015, 2:25 a.m.)


Review request for kafka.


Bugs: KAFKA-1840
https://issues.apache.org/jira/browse/KAFKA-1840


Repository: kafka


Description (updated)
---

Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into 
KAFKA-1840


Addressed Joel's comments


Allow message handler to specify partitions for produce
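
As a rough illustration of that last point, a handler could pin produced
records to an explicit partition via the ProducerRecord constructor that takes
one; the handler shape below is the same hypothetical sketch as under
KAFKA-1840, not the committed interface, and the topic name is an example:

import java.util.{Collections, List => JList}
import org.apache.kafka.clients.producer.ProducerRecord

trait MirrorMakerMessageHandler {
  def handle(topic: String, key: Array[Byte], value: Array[Byte]): JList[ProducerRecord[Array[Byte], Array[Byte]]]
}

// Route everything for one (example) topic to partition 0 of the target
// cluster; all other topics keep the producer's default partitioning.
class PinPartitionHandler extends MirrorMakerMessageHandler {
  override def handle(topic: String, key: Array[Byte], value: Array[Byte]) = {
    val record =
      if (topic == "metrics")
        new ProducerRecord[Array[Byte], Array[Byte]](topic, 0, key, value) // explicit partition
      else
        new ProducerRecord[Array[Byte], Array[Byte]](topic, key, value)
    Collections.singletonList(record)
  }
}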


Diffs (updated)
-

  core/src/main/scala/kafka/tools/MirrorMaker.scala 
81ae205ef7b2050d0152f29f8da7dd91b17b8b00 

Diff: https://reviews.apache.org/r/30063/diff/


Testing
---


Thanks,

Jiangjie Qin



Re: [KIP-DISCUSSION] Mirror Maker Enhancement

2015-01-30 Thread Jiangjie Qin
Hi Bhavesh,

Please see inline comments.

Jiangjie (Becket) Qin

On 1/29/15, 7:00 PM, "Bhavesh Mistry"  wrote:

>Hi Jiangjie,
>
>Thanks for the input.
>
>a) Will the MM producer ack be attached to the producer instance or set per
>topic? The use case is that one instance of MM needs to handle both strong
>acks and acks=0 for some topics. Or would it be better to set up another
>instance of MM?
The acks setting is a producer-level setting, not a topic-level setting.
In this case you probably need to set up another instance.
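
A small sketch of why, using the new producer (the broker address is a
placeholder): acks is fixed when the producer is constructed, so one mirror
maker instance can only carry one setting.

import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer

object MirrorProducerAcksExample {
  // acks is a per-producer config, so each setting needs its own producer
  // (and hence its own mirror maker) instance.
  def newMirrorProducer(acks: String): KafkaProducer[Array[Byte], Array[Byte]] = {
    val props = new Properties()
    props.put("bootstrap.servers", "target-broker:9092") // placeholder address
    props.put("acks", acks)                               // e.g. "all" for strong acks, "0" for fire-and-forget
    props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
    new KafkaProducer[Array[Byte], Array[Byte]](props)
  }

  val strongAcksProducer = newMirrorProducer("all")
  val fireAndForgetProducer = newMirrorProducer("0")
}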
>
>b) Regarding TCP connections, why is each producer instance attached to its
>own TCP connections? Is it possible to use a broker-connection TCP pool, so a
>producer just checks out a TCP connection to a broker? Then the number of
>producer instances would not correlate with the number of broker connections.
>Is this possible?
In the new producer, each producer instance maintains its own connection to
each broker. Making producer instances share TCP connections would be a very
big change to the current design, so I suppose we won’t be able to do that.
>
>
>Thanks,
>
>Bhavesh
>
>On Thu, Jan 29, 2015 at 11:50 AM, Jiangjie Qin 
>wrote:
>
>> Hi Bhavesh,
>>
>> I think it is the right discussion to have when we are talking about the
>> new design for MM.
>> Please see the inline comments.
>>
>> Jiangjie (Becket) Qin
>>
>> On 1/28/15, 10:48 PM, "Bhavesh Mistry" 
>>wrote:
>>
>> >Hi Jiangjie,
>> >
>> >I just wanted to let you know about our use case and stress the point that
>> >the local data center broker cluster has fewer partitions than the
>> >destination offline broker cluster, because we do batch pulls from CAMUS in
>> >order to drain data faster than the injection rate (from four DCs for the
>> >same topic).
>> Keeping the same partition number in the source and target clusters will be
>> an option but will not be enforced by default.
>> >
>> >We are facing the following issues (probably due to configuration):
>> >
>> >1)  We occasionally lose data because the message batch size is too large
>> >(2MB) on the target side (we are using the old producer, but I think the
>> >new producer will solve this problem to some extent).
>> We do see this issue at LinkedIn as well. The new producer might also have
>> this issue. There are some proposed solutions, but no real work has started
>> yet. For now, as a workaround, setting a more aggressive batch size on the
>> producer side should work.
>> >2)  Since only one instance is set up to mirror the data, we are not able
>> >to set up acks per topic; instead, the ack setting is attached to the
>> >producer instance.
>> I don’t quite get the question here.
>> >3)  How are you going to address the two-phase commit problem if acks is
>> >set to the strongest setting but auto commit is on for the consumer
>> >(meaning the producer does not get an ack, but the consumer has
>> >auto-committed the offset for that message)? Is there a transactional ack
>> >and offset commit (Kafka transactions being a work in progress)?
>> Auto offset commit should be turned off in this case. The offset will only
>> be committed once, by the offset commit thread, so there is no two-phase
>> commit.
>> >4)  How are you planning to avoid duplicated messages? (Is the broker
>> >going to keep a moving window of collected messages and de-dupe?) Possibly
>> >we get this from retries set to 5…?
>> We are not trying to completely avoid duplicates. The duplicates will
>> still be there if:
>> 1. Producer retries on failure.
>> 2. Mirror maker is hard killed.
>> Currently, dedup is expected to be done by the user if necessary.
>> >5)  Lastly, is there any warning or insight the MM component can provide
>> >when the data injection rate into destination partitions is NOT evenly
>> >distributed, regardless of keyed or non-keyed messages? (There is a ripple
>> >effect: data arrives late or out of order in terms of timestamps, and
>> >sometimes early, and CAMUS creates a huge number of files on HDFS due to
>> >the uneven injection rate. The Camus job is configured to run every 3
>> >minutes.)
>> I think uneven data distribution is typically caused by server-side
>> imbalance rather than something mirror maker could control. In the new
>> mirror maker, however, there is a customizable message handler that might
>> be able to help a little bit. In the message handler, you can explicitly
>> set the partition that you want to produce the message to. So if you know
>> about the uneven data distribution in the target cluster, you may offset it
>> here. But that probably only works for non-keyed messages.
>> >
>> >I am not sure if this is the right forum to bring these to your/the Kafka
>> >dev team's attention. This might be off track.
>> >
>> >
>> >Thanks,
>> >
>> >Bhavesh
>> >
>> >On Wed, Jan 28, 2015 at 11:07 AM, Jiangjie Qin
>>> >
>> >wrote:
>> >
>> >> I’ve updated the KIP page. Feedback is welcome.
>> >>
>> >> Regarding the simple mirror maker design, I thought it over and have
>> >> some worries:
>> >> There are two things that might

Re: [VOTE] 0.8.2.0 Candidate 3

2015-01-30 Thread Jun Rao
This is a reminder that the vote will close tomorrow night. Please test RC3
out and vote before the deadline.

Thanks,

Jun

On Wed, Jan 28, 2015 at 11:22 PM, Jun Rao  wrote:

> This is the third candidate for release of Apache Kafka 0.8.2.0.
>
> Release Notes for the 0.8.2.0 release
>
> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/RELEASE_NOTES.html
>
> *** Please download, test and vote by Saturday, Jan 31, 11:30pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1 and sha2
> (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/
>
> * scala-doc
> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/scaladoc/
>
> * java-doc
> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate3/javadoc/
>
> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=223ac42a7a2a0dab378cc411f4938a9cea1eb7ea
> (commit 7130da90a9ee9e6fb4beb2a2a6ab05c06c9bfac4)
>
> /***
>
> Thanks,
>
> Jun
>
>


Re: Review Request 29831: Patch for KAFKA-1476

2015-01-30 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/29831/
---

(Updated Jan. 30, 2015, 7:10 p.m.)


Review request for kafka.


Bugs: KAFKA-1476
https://issues.apache.org/jira/browse/KAFKA-1476


Repository: kafka


Description
---

Merged in work for KAFKA-1476 and sub-task KAFKA-1826


Diffs (updated)
-

  bin/kafka-consumer-groups.sh PRE-CREATION 
  core/src/main/scala/kafka/admin/AdminUtils.scala 
28b12c7b89a56c113b665fbde1b95f873f8624a3 
  core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
c14bd455b6642f5e6eb254670bef9f57ae41d6cb 
  core/src/test/scala/unit/kafka/admin/DeleteConsumerGroupTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/utils/TestUtils.scala 
54755e8dd3f23ced313067566cd4ea867f8a496e 

Diff: https://reviews.apache.org/r/29831/diff/


Testing
---


Thanks,

Onur Karaman



[jira] [Comment Edited] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-30 Thread Tong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299202#comment-14299202
 ] 

Tong Li edited comment on KAFKA-1810 at 1/30/15 9:55 PM:
-

Rather than adding specific security measures, can we add some kind of plugin 
point so that plugins can be configured to do that type of work? Whether it is 
an IP filter, a certificate filter, or a basic authentication filter, we could 
simply enable these plugins according to our own needs. This way, Kafka only 
provides the plugin point, nothing else; how the plugin is developed and 
performs is not really the concern of the Kafka community, and we get a clear 
separation of concerns. This has been done in many other successful projects. 
I'm new to Kafka, just saying we can do something like middleware (in Python 
terms) or a servlet filter in the Java world. The point of doing this is to 
make the security measure a configuration matter: one can choose any available 
plugins appropriate for their own purposes by changing configuration. 
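
A minimal sketch of the kind of plugin point being described, assuming a
filter interface loaded from configuration (the trait, class, and property
names here are hypothetical, not an existing Kafka API):

import java.net.InetAddress

// Hypothetical extension point: Kafka would only define this trait and call
// it for each incoming connection; the actual policy lives in the plugin.
trait ConnectionFilter {
  def accept(remoteAddress: InetAddress): Boolean
}

// Example plugin an operator might enable, e.g. via a (hypothetical) property
// such as connection.filter.class=ExampleIpWhitelistFilter.
class ExampleIpWhitelistFilter extends ConnectionFilter {
  private val allowed = Set("10.0.0.5", "10.0.0.6")
  override def accept(remoteAddress: InetAddress): Boolean =
    allowed.contains(remoteAddress.getHostAddress)
}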


was (Author: tongli):
Rather than adding specific security measures, can we add some kind of plugin 
point so that plugins can be configured to do that type of work? Whether it is 
an IP filter, a certificate filter, or a basic authentication filter, we could 
simply enable these plugins according to our own needs. This way, Kafka only 
provides the plugin point, nothing else; how the plugin is developed and 
performs is not really the concern of the Kafka community, and we get a clear 
separation of concerns. This has been done in many other successful projects. 
I'm new to Kafka, just saying we can do something like middleware (in Python 
terms) or a servlet filter in the Java world. 

> Add IP Filtering / Whitelists-Blacklists 
> -
>
> Key: KAFKA-1810
> URL: https://issues.apache.org/jira/browse/KAFKA-1810
> Project: Kafka
>  Issue Type: New Feature
>  Components: core, network, security
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch
>
>
> While longer-term security goals for Kafka are on the roadmap, there is some 
> value in being able to restrict connections to Kafka brokers based on IP 
> address. This is not intended as a replacement for security but more of a 
> precaution against misconfiguration, and it gives Kafka administrators some 
> level of control over who is reading from and writing to their cluster.
> 1) In some organizations, software administration, O/S systems 
> administration, and network administration are disjointed and not well 
> choreographed. Giving software administrators the ability to configure their 
> platform relatively independently of systems administrators (after initial 
> configuration) is desirable.
> 2) Configuration and deployment are sometimes error-prone, and there are 
> situations in which test environments could erroneously read from or write 
> to production environments.
> 3) An additional precaution against reading sensitive data is typically 
> welcomed in most large enterprise deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1810) Add IP Filtering / Whitelists-Blacklists

2015-01-30 Thread Tong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299202#comment-14299202
 ] 

Tong Li commented on KAFKA-1810:


Rather than adding specific security measures, can we add some kind of plugin 
point so that plugins can be configured to do that type of work? Whether it is 
an IP filter, a certificate filter, or a basic authentication filter, we could 
simply enable these plugins according to our own needs. This way, Kafka only 
provides the plugin point, nothing else; how the plugin is developed and 
performs is not really the concern of the Kafka community, and we get a clear 
separation of concerns. This has been done in many other successful projects. 
I'm new to Kafka, just saying we can do something like middleware (in Python 
terms) or a servlet filter in the Java world. 

> Add IP Filtering / Whitelists-Blacklists 
> -
>
> Key: KAFKA-1810
> URL: https://issues.apache.org/jira/browse/KAFKA-1810
> Project: Kafka
>  Issue Type: New Feature
>  Components: core, network, security
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>Priority: Minor
> Fix For: 0.8.3
>
> Attachments: KAFKA-1810.patch, KAFKA-1810_2015-01-15_19:47:14.patch
>
>
> While longer-term security goals for Kafka are on the roadmap, there is some 
> value in being able to restrict connections to Kafka brokers based on IP 
> address. This is not intended as a replacement for security but more of a 
> precaution against misconfiguration, and it gives Kafka administrators some 
> level of control over who is reading from and writing to their cluster.
> 1) In some organizations, software administration, O/S systems 
> administration, and network administration are disjointed and not well 
> choreographed. Giving software administrators the ability to configure their 
> platform relatively independently of systems administrators (after initial 
> configuration) is desirable.
> 2) Configuration and deployment are sometimes error-prone, and there are 
> situations in which test environments could erroneously read from or write 
> to production environments.
> 3) An additional precaution against reading sensitive data is typically 
> welcomed in most large enterprise deployments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1862) Pass in the Time object into OffsetManager

2015-01-30 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299119#comment-14299119
 ] 

Aditya A Auradkar commented on KAFKA-1862:
--

Waiting for this to get committed.
https://issues.apache.org/jira/browse/KAFKA-1634

> Pass in the Time object into OffsetManager
> --
>
> Key: KAFKA-1862
> URL: https://issues.apache.org/jira/browse/KAFKA-1862
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Aditya Auradkar
>  Labels: newbie++
> Fix For: 0.9.0
>
>
> We should improve OffsetManager to take in a Time instance as we do for 
> LogManager and ReplicaManager. That way we can advance time with MockTime in 
> test cases. 
> Then we can move the testOffsetExpiration case from OffsetCommitTest to 
> OffsetManagerTest.
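
A small sketch of the pattern, with illustrative names rather than the actual
OffsetManager/Time signatures, showing why injecting Time makes expiration
testable with a mock clock:

// Illustrative only: names and fields are assumptions, not Kafka's actual classes.
trait Time {
  def milliseconds: Long
}

class MockTime(private var nowMs: Long = 0L) extends Time {
  def milliseconds: Long = nowMs
  def advance(ms: Long): Unit = nowMs += ms // advance virtual time instead of sleeping
}

// Because the manager reads the clock through Time, a test can fast-forward it.
class OffsetManagerLike(time: Time, retentionMs: Long) {
  private var lastCommitMs: Long = time.milliseconds
  def commitOffset(): Unit = lastCommitMs = time.milliseconds
  def offsetExpired: Boolean = time.milliseconds - lastCommitMs > retentionMs
}

object OffsetExpirationTestSketch extends App {
  val time = new MockTime()
  val manager = new OffsetManagerLike(time, retentionMs = 1000L)
  manager.commitOffset()
  time.advance(2000L)
  assert(manager.offsetExpired) // passes without any real waiting
}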



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1476) Get a list of consumer groups

2015-01-30 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-1476:

Attachment: KAFKA-1476_2015-01-30_11:09:59.patch

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: Balaji Seshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476.patch, 
> KAFKA-1476_2014-11-10_11:58:26.patch, KAFKA-1476_2014-11-10_12:04:01.patch, 
> KAFKA-1476_2014-11-10_12:06:35.patch, KAFKA-1476_2014-12-05_12:00:12.patch, 
> KAFKA-1476_2015-01-12_16:22:26.patch, KAFKA-1476_2015-01-12_16:31:20.patch, 
> KAFKA-1476_2015-01-13_10:36:18.patch, KAFKA-1476_2015-01-15_14:30:04.patch, 
> KAFKA-1476_2015-01-22_02:32:52.patch, KAFKA-1476_2015-01-30_11:09:59.patch, 
> sample-kafka-consumer-groups-sh-output-1-23-2015.txt, 
> sample-kafka-consumer-groups-sh-output.txt
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with Kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option.  If the 
> consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e
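
A rough sketch of the kind of listing such a tool would provide: with the
ZooKeeper-based consumer, each active group registers itself under /consumers,
so that path's children are the group names (the zkclient usage below is an
illustration, not the ConsumerGroupCommand implementation):

import org.I0Itec.zkclient.ZkClient
import scala.collection.JavaConverters._

object ListConsumerGroupsSketch {
  def main(args: Array[String]): Unit = {
    val zkConnect = if (args.nonEmpty) args(0) else "localhost:2181"
    val zk = new ZkClient(zkConnect, 30000, 30000)
    try {
      // Each ZooKeeper-based consumer group appears as a child of /consumers.
      zk.getChildren("/consumers").asScala.sorted.foreach(println)
    } finally {
      zk.close()
    }
  }
}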



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1476) Get a list of consumer groups

2015-01-30 Thread Onur Karaman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299071#comment-14299071
 ] 

Onur Karaman commented on KAFKA-1476:
-

Updated reviewboard https://reviews.apache.org/r/29831/diff/
 against branch origin/trunk

> Get a list of consumer groups
> -
>
> Key: KAFKA-1476
> URL: https://issues.apache.org/jira/browse/KAFKA-1476
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 0.8.1.1
>Reporter: Ryan Williams
>Assignee: Balaji Seshadri
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: ConsumerCommand.scala, KAFKA-1476-LIST-GROUPS.patch, 
> KAFKA-1476-RENAME.patch, KAFKA-1476-REVIEW-COMMENTS.patch, KAFKA-1476.patch, 
> KAFKA-1476.patch, KAFKA-1476.patch, KAFKA-1476.patch, 
> KAFKA-1476_2014-11-10_11:58:26.patch, KAFKA-1476_2014-11-10_12:04:01.patch, 
> KAFKA-1476_2014-11-10_12:06:35.patch, KAFKA-1476_2014-12-05_12:00:12.patch, 
> KAFKA-1476_2015-01-12_16:22:26.patch, KAFKA-1476_2015-01-12_16:31:20.patch, 
> KAFKA-1476_2015-01-13_10:36:18.patch, KAFKA-1476_2015-01-15_14:30:04.patch, 
> KAFKA-1476_2015-01-22_02:32:52.patch, KAFKA-1476_2015-01-30_11:09:59.patch, 
> sample-kafka-consumer-groups-sh-output-1-23-2015.txt, 
> sample-kafka-consumer-groups-sh-output.txt
>
>
> It would be useful to have a way to get a list of consumer groups currently 
> active via some tool/script that ships with kafka. This would be helpful so 
> that the system tools can be explored more easily.
> For example, when running the ConsumerOffsetChecker, it requires a group 
> option
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --topic test --group 
> ?
> But, when just getting started with Kafka, using the console producer and 
> consumer, it is not clear what value to use for the group option.  If the 
> consumer groups could be listed, then it would be clear what value to use.
> Background:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201405.mbox/%3cCAOq_b1w=slze5jrnakxvak0gu9ctdkpazak1g4dygvqzbsg...@mail.gmail.com%3e



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1909) VerifiableProperties does not "see" default properties of the wrapped Properties instance

2015-01-30 Thread Tommy Becker (JIRA)
Tommy Becker created KAFKA-1909:
---

 Summary: VerifiableProperties does not "see" default properties of 
the wrapped Properties instance
 Key: KAFKA-1909
 URL: https://issues.apache.org/jira/browse/KAFKA-1909
 Project: Kafka
  Issue Type: Bug
  Components: config
Affects Versions: 0.8.1.1
Reporter: Tommy Becker


The VerifiableProperties class wraps a java.util.Properties instance.  The 
various getXXX methods in VerifiableProperties that do not take a default value 
issue a containsKey() call to the underlying Properties instance to determine 
if the property exists.  Unfortunately, the containsKey method is merely 
(mis-)inherited from Hashtable; it doesn't consult the Properties instance's own 
defaults.  The net effect is that only key-value pairs defined directly in the 
Properties instance are usable by Kafka.  We have a base config that is used 
throughout our application, but one particular consumer needs different 
settings.  Trying to achieve this by using new Properties(baseProperties) and 
setting the consumer-specific values in it doesn't work :(

VerifiableProperties already provides its own containsKey method that should 
simply be changed to return getProperty() != null to avoid this issue.
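
A standalone illustration of the underlying java.util.Properties behavior
described above (plain JDK calls, not Kafka code):

object PropertiesDefaultsDemo {
  def main(args: Array[String]): Unit = {
    val base = new java.util.Properties()
    base.setProperty("group.id", "base-group")

    // base supplies the defaults for the derived instance
    val derived = new java.util.Properties(base)
    derived.setProperty("auto.offset.reset", "smallest")

    println(derived.getProperty("group.id")) // "base-group": getProperty consults defaults
    println(derived.containsKey("group.id")) // false: containsKey (from Hashtable) ignores defaults
  }
}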



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1760) Implement new consumer client

2015-01-30 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299027#comment-14299027
 ] 

Guozhang Wang commented on KAFKA-1760:
--

Committed to trunk; closing this ticket.

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.3
>
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
> KAFKA-1760_2015-01-18_19:10:13.patch, KAFKA-1760_2015-01-21_08:42:20.patch, 
> KAFKA-1760_2015-01-22_10:03:26.patch, KAFKA-1760_2015-01-22_20:21:56.patch, 
> KAFKA-1760_2015-01-23_13:13:00.patch, KAFKA-1760_2015-01-29_03:20:20.patch
>
>
> Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1760) Implement new consumer client

2015-01-30 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1760:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Implement new consumer client
> -
>
> Key: KAFKA-1760
> URL: https://issues.apache.org/jira/browse/KAFKA-1760
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Fix For: 0.8.3
>
> Attachments: KAFKA-1760.patch, KAFKA-1760_2015-01-11_16:57:15.patch, 
> KAFKA-1760_2015-01-18_19:10:13.patch, KAFKA-1760_2015-01-21_08:42:20.patch, 
> KAFKA-1760_2015-01-22_10:03:26.patch, KAFKA-1760_2015-01-22_20:21:56.patch, 
> KAFKA-1760_2015-01-23_13:13:00.patch, KAFKA-1760_2015-01-29_03:20:20.patch
>
>
> Implement a consumer client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1862) Pass in the Time object into OffsetManager

2015-01-30 Thread Aditya A Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14298961#comment-14298961
 ] 

Aditya A Auradkar commented on KAFKA-1862:
--

[~guozhang] I cannot find the testOffsetExpiration case in OffsetCommitTest. 
Does this already exist?

> Pass in the Time object into OffsetManager
> --
>
> Key: KAFKA-1862
> URL: https://issues.apache.org/jira/browse/KAFKA-1862
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Aditya Auradkar
>  Labels: newbie++
> Fix For: 0.9.0
>
>
> We should improve OffsetManager to take in a Time instance as we do for 
> LogManager and ReplicaManager. That way we can advance time with MockTime in 
> test cases. 
> Then we can move the testOffsetExpiration case from OffsetCommitTest to 
> OffsetManagerTest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)