Re: [ANNOUNCE] New committer: Damian Guy

2017-06-12 Thread BigData dev
congrats Damian!!


Regards,
Bharat


On Mon, Jun 12, 2017 at 12:52 PM, Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Congrats!
>
> On Mon, Jun 12, 2017 at 12:48 PM, Michael Noll 
> wrote:
>
> > Congratulations, Damian!
> >
> > On Mon, Jun 12, 2017 at 9:07 AM, Molnár Bálint 
> > wrote:
> >
> > > Congrats, Damian!
> > >
> > > 2017-06-12 8:44 GMT+02:00 Rajini Sivaram :
> > >
> > > > Congratulations, Damian!
> > > >
> > > > On Sat, Jun 10, 2017 at 12:15 PM, Mickael Maison <
> > > mickael.mai...@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > Congrats Damian!
> > > > >
> > > > > On Sat, Jun 10, 2017 at 8:46 AM, Damian Guy 
> > > > wrote:
> > > > > > Thanks everyone. Looking forward to making many more
> contributions
> > > > > > On Sat, 10 Jun 2017 at 02:46, Joe Stein 
> > wrote:
> > > > > >
> > > > > >> Congrats!
> > > > > >>
> > > > > >>
> > > > > >> ~ Joe Stein
> > > > > >>
> > > > > >> On Fri, Jun 9, 2017 at 6:49 PM, Neha Narkhede <
> n...@confluent.io>
> > > > > wrote:
> > > > > >>
> > > > > >> > Well deserved. Congratulations Damian!
> > > > > >> >
> > > > > >> > On Fri, Jun 9, 2017 at 1:34 PM Guozhang Wang <
> > wangg...@gmail.com>
> > > > > wrote:
> > > > > >> >
> > > > > >> > > Hello all,
> > > > > >> > >
> > > > > >> > >
> > > > > >> > > The PMC of Apache Kafka is pleased to announce that we have
> > > > invited
> > > > > >> > Damian
> > > > > >> > > Guy as a committer to the project.
> > > > > >> > >
> > > > > >> > > Damian has made tremendous contributions to Kafka. He has
> > > > > >> > > not only contributed a lot to the Streams API, but has also
> > > > > >> > > been involved in many other areas, like the producer and
> > > > > >> > > consumer clients and the broker-side coordinators (the group
> > > > > >> > > coordinator and the ongoing transaction coordinator). He has
> > > > > >> > > contributed more than 100 patches so far and has been driving
> > > > > >> > > 6 KIP contributions.
> > > > > >> > >
> > > > > >> > > More importantly, Damian has been a very prolific reviewer
> > > > > >> > > on open PRs and has been actively participating in community
> > > > > >> > > activities such as the mailing lists and Stack Overflow
> > > > > >> > > questions. Through his code contributions and reviews, Damian
> > > > > >> > > has demonstrated good judgment on system design and code
> > > > > >> > > quality, especially thorough unit test coverage. We believe
> > > > > >> > > he will make a great addition to the committers of the
> > > > > >> > > community.
> > > > > >> > >
> > > > > >> > >
> > > > > >> > > Thank you for your contributions, Damian!
> > > > > >> > >
> > > > > >> > >
> > > > > >> > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > > >> > >
> > > > > >> > --
> > > > > >> > Thanks,
> > > > > >> > Neha
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
>


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-06-11 Thread BigData dev
Thanks everyone for voting.

KIP-157 has passed with +3 binding votes (Ewen, Jason and Guozhang Wang) and
+3 non-binding (Eno Thereska, Matthias J. Sax and Bill Bejeck).

Thanks,

Bharat Viswanadham

On Sat, Jun 10, 2017 at 4:49 PM, Guozhang Wang <wangg...@gmail.com> wrote:

> Bharat,
>
> I think we already have 3 committers voted on this KIP, could you conclude
> the thread?
>
>
> Guozhang
>
>
> On Fri, Jun 2, 2017 at 3:44 PM, Jason Gustafson <ja...@confluent.io>
> wrote:
>
> > Thanks. +1
> >
> > On Thu, Jun 1, 2017 at 9:40 PM, Matthias J. Sax <matth...@confluent.io>
> > wrote:
> >
> > > +1
> > >
> > > Thanks for updating the KIP!
> > >
> > > -Matthias
> > >
> > > On 6/1/17 6:18 PM, Bill Bejeck wrote:
> > > > +1
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > > On Thu, Jun 1, 2017 at 7:45 PM, Guozhang Wang <wangg...@gmail.com>
> > > wrote:
> > > >
> > > >> +1 again. Thanks.
> > > >>
> > > >> On Tue, May 30, 2017 at 1:46 PM, BigData dev <
> bigdatadev...@gmail.com
> > >
> > > >> wrote:
> > > >>
> > > >>> Hi All,
> > > >>> Updated the KIP, as the consumer configurations are required for
> both
> > > >> Admin
> > > >>> Client and Consumer in Stream reset tool. Updated the KIP to use
> > > >>> command-config option, similar to other tools like
> > > >> kafka-consumer-groups.sh
> > > >>>
> > > >>>
> > > >>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> > > >>> 157+-+Add+consumer+config+options+to+streams+reset+tool
> > > >>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> > > >>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
> > > >>>
> > > >>>
> > > >>> So, starting the voting process again for further inputs.
> > > >>>
> > > >>> This vote will run for a minimum of 72 hours.
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> Bharat
> > > >>>
> > > >>>
> > > >>>
> > > >>> On Tue, May 30, 2017 at 1:18 PM, Guozhang Wang <wangg...@gmail.com
> >
> > > >> wrote:
> > > >>>
> > > >>>> +1. Thanks!
> > > >>>>
> > > >>>> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska <
> > eno.there...@gmail.com
> > > >
> > > >>>> wrote:
> > > >>>>
> > > >>>>> +1 thanks.
> > > >>>>>
> > > >>>>> Eno
> > > >>>>>> On 16 May 2017, at 04:20, BigData dev <bigdatadev...@gmail.com>
> > > >>> wrote:
> > > >>>>>>
> > > >>>>>> Hi All,
> > > >>>>>> Given the simple and non-controversial nature of the KIP, I
> would
> > > >>> like
> > > >>>> to
> > > >>>>>> start the voting process for KIP-157: Add consumer config
> options
> > > >> to
> > > >>>>>> streams reset tool
> > > >>>>>>
> > > >>>>>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-
> > > >>>>> +Add+consumer+config+options+to+streams+reset+tool
> > > >>>>>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-
> > > >>>>> +Add+consumer+config+options+to+streams+reset+tool>*
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> The vote will run for a minimum of 72 hours.
> > > >>>>>>
> > > >>>>>> Thanks,
> > > >>>>>>
> > > >>>>>> Bharat
> > > >>>>>
> > > >>>>>
> > > >>>>
> > > >>>>
> > > >>>> --
> > > >>>> -- Guozhang
> > > >>>>
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> -- Guozhang
> > > >>
> > > >
> > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


Info regarding kafka topic

2017-06-08 Thread BigData dev
Hi,
I have a 3-node Kafka broker cluster.
I created a topic whose leader is broker 1 (1001), and then that broker
died. But when I look at the topic's state in ZooKeeper, the leader is
still set to broker 1 (1001) and the ISR is still [1001]. Is this a bug in
Kafka? Since the leader is dead, shouldn't the leader have been set to none?

*[zk: localhost:2181(CONNECTED) 7] get
/brokers/topics/t3/partitions/0/state*

*{"controller_epoch":1,"leader":1001,"version":1,"leader_epoch":1,"isr":[1001]}*

*cZxid = 0x10078*

*ctime = Thu Jun 08 14:50:07 PDT 2017*

*mZxid = 0x1008c*

*mtime = Thu Jun 08 14:51:09 PDT 2017*

*pZxid = 0x10078*

*cversion = 0*

*dataVersion = 1*

*aclVersion = 0*

*ephemeralOwner = 0x0*

*dataLength = 78*

*numChildren = 0*

*[zk: localhost:2181(CONNECTED) 8] *


And when I use describe command the output is

*[root@meets2 kafka-broker]# bin/kafka-topics.sh --describe --topic t3
--zookeeper localhost:2181*

*Topic:t3 PartitionCount:1 ReplicationFactor:2 Configs:*

*Topic: t3 Partition: 0 Leader: 1001 Replicas: 1001,1003 Isr: 1001*


When I use the --unavailable-partitions option, it correctly reports the
partition as unavailable.

*[root@meets2 kafka-broker]# bin/kafka-topics.sh --describe --topic t3
--zookeeper localhost:2181 --unavailable-partitions*

* Topic: t3 Partition: 0 Leader: 1001 Replicas: 1001,1003 Isr: 1001*


But in the ZooKeeper topic state, the leader should have been set to none
rather than left as the dead broker. Is this by design, or is it a bug in
Kafka? Could you please provide any information on this?
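One way to reason about the state shown above: the partition-state znode reflects the controller's last write, not current broker liveness, so a dead broker can linger as "leader" there. A minimal sketch (a hypothetical helper, not Kafka source code) that cross-checks the znode payload against the set of live broker ids (the ephemeral children of /brokers/ids):

```python
import json

def effective_leader(state_json, live_broker_ids):
    # Parse the partition-state znode payload and report the leader
    # only if that broker id is currently registered as alive.
    state = json.loads(state_json)
    leader = state["leader"]
    return leader if leader in live_broker_ids else None

state = '{"controller_epoch":1,"leader":1001,"version":1,"leader_epoch":1,"isr":[1001]}'
print(effective_leader(state, {1002, 1003}))  # broker 1001 down -> None
print(effective_leader(state, {1001, 1002}))  # broker 1001 alive -> 1001
```

This mirrors what the --unavailable-partitions option does: it compares the recorded leader against the live brokers rather than trusting the znode alone.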


*Thanks,*

*Bharat*


Re: [VOTE] KIP-162: Enable topic deletion by default

2017-06-06 Thread BigData dev
+1 (non-binding)

Thanks,
Bharat

On Tue, Jun 6, 2017 at 9:21 AM, Ashwin Sinha 
wrote:

> +1
>
> On Tue, Jun 6, 2017 at 11:20 PM, Mickael Maison 
> wrote:
>
> > +1 (non binding), thanks
> >
> > On Tue, Jun 6, 2017 at 2:16 PM, Bill Bejeck  wrote:
> > > +1
> > >
> > > -Bill
> > >
> > > On Tue, Jun 6, 2017 at 9:08 AM, Ismael Juma  wrote:
> > >
> > >> Thanks for the KIP, Gwen. +1 (binding).
> > >>
> > >> Ismael
> > >>
> > >> On Tue, Jun 6, 2017 at 5:37 AM, Gwen Shapira 
> wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > The discussion has been quite positive, so I posted a JIRA, a PR and
> > >> > updated the KIP with the latest decisions.
> > >> >
> > >> > Lets officially vote on the KIP:
> > >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> > 162+-+Enable+topic+deletion+by+default
> > >> >
> > >> > JIRA is here: https://issues.apache.org/jira/browse/KAFKA-5384
> > >> >
> > >> > Gwen
> > >> >
> > >>
> >
>
>
>
> --
> Thanks and Regards,
> Ashwin
>


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-30 Thread BigData dev
Hi All,
Updated the KIP: the consumer configurations are required by both the
AdminClient and the consumer in the streams reset tool, so the KIP now uses
a command-config option, similar to other tools like kafka-consumer-groups.sh.


*https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-+Add+consumer+config+options+to+streams+reset+tool
<https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-+Add+consumer+config+options+to+streams+reset+tool>*


So, starting the voting process again for further inputs.

This vote will run for a minimum of 72 hours.

Thanks,

Bharat



On Tue, May 30, 2017 at 1:18 PM, Guozhang Wang <wangg...@gmail.com> wrote:

> +1. Thanks!
>
> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska <eno.there...@gmail.com>
> wrote:
>
> > +1 thanks.
> >
> > Eno
> > > On 16 May 2017, at 04:20, BigData dev <bigdatadev...@gmail.com> wrote:
> > >
> > > Hi All,
> > > Given the simple and non-controversial nature of the KIP, I would like
> to
> > > start the voting process for KIP-157: Add consumer config options to
> > > streams reset tool
> > >
> > > *https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-
> > +Add+consumer+config+options+to+streams+reset+tool
> > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-
> > +Add+consumer+config+options+to+streams+reset+tool>*
> > >
> > >
> > > The vote will run for a minimum of 72 hours.
> > >
> > > Thanks,
> > >
> > > Bharat
> >
> >
>
>
> --
> -- Guozhang
>


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-22 Thread BigData dev
Hi Matthias,
For the AdminClient, client configuration is needed, and for ZooKeeper no
properties are required. Other tools, like ConsumerGroupCommand, use the
command-config option for this. I think consumer-config and
consumer-property options are not required here; we will use the
configuration passed through command-config for both the AdminClient and
the embedded consumer.
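For illustration, a hypothetical invocation on a secured cluster might look like the following. The properties are standard Kafka client security settings; the --command-config flag name follows the discussion in this thread and should be checked against the released tool:

```shell
# Write a client config file for a SASL_SSL-secured cluster
# (standard Kafka client properties; values are placeholders).
cat > /tmp/reset-tool.properties <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/etc/kafka/truststore.jks
EOF

# Hypothetical invocation per the KIP discussion (flag name may differ
# in the released tool):
#   kafka-streams-application-reset.sh --application-id my-app \
#     --bootstrap-servers broker1:9093 --input-topics my-input \
#     --command-config /tmp/reset-tool.properties

cat /tmp/reset-tool.properties
```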

Thanks,
Bharat


On Fri, May 19, 2017 at 4:26 PM, Matthias J. Sax <matth...@confluent.io>
wrote:

> Couple of follow ups.
>
> The reset tool uses AdminClient, ZkUtils, and a KafkaConsumer
> internally. Thus, I am wondering if we need the possibility to specify
> configs for all of them?
>
> The original JIRA reported that the reset tool does not work for a
> secured cluster, and thus I doubt that consumer properties are
> sufficient to resolve this.
>
> Maybe some people that are more familiar with Kafka security can help
> out here. I personally have only limited knowledge about this topic.
>
>
> -Matthias
>
>
>
> On 5/19/17 11:09 AM, BigData dev wrote:
> > Thanks for the info, Matthias.
> >
> > Regards,
> > Bharat
> >
> >
> > On Fri, May 19, 2017 at 10:25 AM, Matthias J. Sax <matth...@confluent.io
> >
> > wrote:
> >
> >> KIP-157 cannot be included in 0.11.0.0 anymore. KIP freeze date deadline
> >> is strict.
> >>
> >> -Matthias
> >>
> >> On 5/19/17 10:15 AM, BigData dev wrote:
> >>> Hi Matthias,
> >>> I will start a new KIP for Kafka tools options to be a standard across
> >> all
> >>> tools shortly. But I think the KIP 157 for Kafka Streams, should be
> >> needed
> >>> for 0.11.0.0 release, (KIP freeze date is already over, but I think
> this
> >> is
> >>> minor code change in tools to add option to streams reset tool) as
> >> without
> >>> this consumer config options, it will not be possible to use the tool
> in
> >> a
> >>> secured environment. Please let me know your thoughts on this. If it
> >> needs
> >>> to be moved to next release, I will work on this as part of KIP 14.
> >>>
> >>> Thanks,
> >>> Bharat
> >>>
> >>>
> >>> On Fri, May 19, 2017 at 10:10 AM, Matthias J. Sax <
> matth...@confluent.io
> >>>
> >>> wrote:
> >>>
> >>>> I double checked with Matthew Warhaftig (the original author of
> KIP-14)
> >>>> and he has not interest to continue the KIP atm.
> >>>>
> >>>> Thus, Bharat can continue the work on KIP-14. I think it would be
> best,
> >>>> to start a new DISCUSS thread after you update KIP-14.
> >>>>
> >>>> Thanks for your contributions!
> >>>>
> >>>>
> >>>> -Matthias
> >>>>
> >>>>
> >>>> On 5/17/17 12:56 PM, BigData dev wrote:
> >>>>> Hi,
> >>>>> When I was trying to find more info, there is already a proposed KIP
> >> for
> >>>>> this
> >>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>>> 14+-+Tools+Standardization
> >>>>>
> >>>>>
> >>>>> Thanks,
> >>>>> Bharat
> >>>>>
> >>>>> On Wed, May 17, 2017 at 12:38 PM, BigData dev <
> bigdatadev...@gmail.com
> >>>
> >>>>> wrote:
> >>>>>
> >>>>>> Hi Ewen, Matthias,
> >>>>>> For common configuration across all the tools, I will work on that
> as
> >>>> part
> >>>>>> of other KIP by looking into all Kafka tools.
> >>>>>>
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Bharat
> >>>>>>
> >>>>>>
> >>>>>> On Wed, May 17, 2017 at 9:40 AM, Matthias J. Sax <
> >> matth...@confluent.io
> >>>>>
> >>>>>> wrote:
> >>>>>>
> >>>>>>> +1
> >>>>>>>
> >>>>>>> I also second Ewen comment -- standardizing the common supported
> >>>>>>> parameters over all tools would be great!
> >>>>>>>
> >>>>>>>
> >>>>>>> -Matthias
> >>>>>>>
> >>>>>>> On 5/17/17 12:57 AM, Damian Guy wr

Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-19 Thread BigData dev
Thanks for the info, Matthias.

Regards,
Bharat


On Fri, May 19, 2017 at 10:25 AM, Matthias J. Sax <matth...@confluent.io>
wrote:

> KIP-157 cannot be included in 0.11.0.0 anymore. KIP freeze date deadline
> is strict.
>
> -Matthias
>
> On 5/19/17 10:15 AM, BigData dev wrote:
> > Hi Matthias,
> > I will start a new KIP for Kafka tools options to be a standard across
> all
> > tools shortly. But I think the KIP 157 for Kafka Streams, should be
> needed
> > for 0.11.0.0 release, (KIP freeze date is already over, but I think this
> is
> > minor code change in tools to add option to streams reset tool) as
> without
> > this consumer config options, it will not be possible to use the tool in
> a
> > secured environment. Please let me know your thoughts on this. If it
> needs
> > to be moved to next release, I will work on this as part of KIP 14.
> >
> > Thanks,
> > Bharat
> >
> >
> > On Fri, May 19, 2017 at 10:10 AM, Matthias J. Sax <matth...@confluent.io
> >
> > wrote:
> >
> >> I double checked with Matthew Warhaftig (the original author of KIP-14)
> >> and he has not interest to continue the KIP atm.
> >>
> >> Thus, Bharat can continue the work on KIP-14. I think it would be best,
> >> to start a new DISCUSS thread after you update KIP-14.
> >>
> >> Thanks for your contributions!
> >>
> >>
> >> -Matthias
> >>
> >>
> >> On 5/17/17 12:56 PM, BigData dev wrote:
> >>> Hi,
> >>> When I was trying to find more info, there is already a proposed KIP
> for
> >>> this
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 14+-+Tools+Standardization
> >>>
> >>>
> >>> Thanks,
> >>> Bharat
> >>>
> >>> On Wed, May 17, 2017 at 12:38 PM, BigData dev <bigdatadev...@gmail.com
> >
> >>> wrote:
> >>>
> >>>> Hi Ewen, Matthias,
> >>>> For common configuration across all the tools, I will work on that as
> >> part
> >>>> of other KIP by looking into all Kafka tools.
> >>>>
> >>>>
> >>>> Thanks,
> >>>> Bharat
> >>>>
> >>>>
> >>>> On Wed, May 17, 2017 at 9:40 AM, Matthias J. Sax <
> matth...@confluent.io
> >>>
> >>>> wrote:
> >>>>
> >>>>> +1
> >>>>>
> >>>>> I also second Ewen comment -- standardizing the common supported
> >>>>> parameters over all tools would be great!
> >>>>>
> >>>>>
> >>>>> -Matthias
> >>>>>
> >>>>> On 5/17/17 12:57 AM, Damian Guy wrote:
> >>>>>> +1
> >>>>>>
> >>>>>> On Wed, 17 May 2017 at 05:40 Ewen Cheslack-Postava <
> e...@confluent.io
> >>>
> >>>>>> wrote:
> >>>>>>
> >>>>>>> +1 (binding)
> >>>>>>>
> >>>>>>> I mentioned this in the PR that triggered this:
> >>>>>>>
> >>>>>>>> KIP is accurate, though this is one of those things that we should
> >>>>>>> probably get a KIP for a standard set of config options across all
> >>>>> tools so
> >>>>>>> additions like this can just fall under the umbrella of that KIP...
> >>>>>>>
> >>>>>>> I think it would be great if someone wrote up a small KIP providing
> >>>>> some
> >>>>>>> standardized settings that we could get future additions
> >> automatically
> >>>>>>> umbrella'd under, e.g. no need to do a KIP if just adding a
> >>>>> consumer.config
> >>>>>>> or consumer-property config conforming to existing expectations for
> >>>>> other
> >>>>>>> tools. We could also standardize on a few other settings names that
> >> are
> >>>>>>> inconsistent across different tools and set out a clear path
> forward
> >>>>> for
> >>>>>>> future tools.
> >>>>>>>
> >>>>>>> I think I still have at least one open PR from when I first started
> >> on
> >>>>> the
> >>>>>>> project where I was trying to clean up some command line stuff to
> be
> >>>>> more
> >>>>>>> consistent. This has been an issue for many years now...
> >>>>>>>
> >>>>>>> -Ewen
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska <
> >> eno.there...@gmail.com>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> +1 thanks.
> >>>>>>>>
> >>>>>>>> Eno
> >>>>>>>>> On 16 May 2017, at 04:20, BigData dev <bigdatadev...@gmail.com>
> >>>>> wrote:
> >>>>>>>>>
> >>>>>>>>> Hi All,
> >>>>>>>>> Given the simple and non-controversial nature of the KIP, I would
> >>>>> like
> >>>>>>> to
> >>>>>>>>> start the voting process for KIP-157: Add consumer config options
> >> to
> >>>>>>>>> streams reset tool
> >>>>>>>>>
> >>>>>>>>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >>>>>>>> 157+-+Add+consumer+config+options+to+streams+reset+tool
> >>>>>>>>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >>>>>>>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> The vote will run for a minimum of 72 hours.
> >>>>>>>>>
> >>>>>>>>> Thanks,
> >>>>>>>>>
> >>>>>>>>> Bharat
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>
> >>>
> >>
> >>
> >
>
>


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-19 Thread BigData dev
Hi Matthias,
I will shortly start a new KIP to standardize options across all Kafka
tools. But I think KIP-157 for Kafka Streams should be included in the
0.11.0.0 release (the KIP freeze date is already past, but this is a minor
code change that adds an option to the streams reset tool), as without
these consumer config options it is not possible to use the tool in a
secured environment. Please let me know your thoughts on this. If it needs
to be moved to the next release, I will work on it as part of KIP-14.

Thanks,
Bharat


On Fri, May 19, 2017 at 10:10 AM, Matthias J. Sax <matth...@confluent.io>
wrote:

> I double checked with Matthew Warhaftig (the original author of KIP-14)
> and he has not interest to continue the KIP atm.
>
> Thus, Bharat can continue the work on KIP-14. I think it would be best,
> to start a new DISCUSS thread after you update KIP-14.
>
> Thanks for your contributions!
>
>
> -Matthias
>
>
> On 5/17/17 12:56 PM, BigData dev wrote:
> > Hi,
> > When I was trying to find more info, there is already a proposed KIP for
> > this
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 14+-+Tools+Standardization
> >
> >
> > Thanks,
> > Bharat
> >
> > On Wed, May 17, 2017 at 12:38 PM, BigData dev <bigdatadev...@gmail.com>
> > wrote:
> >
> >> Hi Ewen, Matthias,
> >> For common configuration across all the tools, I will work on that as
> part
> >> of other KIP by looking into all Kafka tools.
> >>
> >>
> >> Thanks,
> >> Bharat
> >>
> >>
> >> On Wed, May 17, 2017 at 9:40 AM, Matthias J. Sax <matth...@confluent.io
> >
> >> wrote:
> >>
> >>> +1
> >>>
> >>> I also second Ewen comment -- standardizing the common supported
> >>> parameters over all tools would be great!
> >>>
> >>>
> >>> -Matthias
> >>>
> >>> On 5/17/17 12:57 AM, Damian Guy wrote:
> >>>> +1
> >>>>
> >>>> On Wed, 17 May 2017 at 05:40 Ewen Cheslack-Postava <e...@confluent.io
> >
> >>>> wrote:
> >>>>
> >>>>> +1 (binding)
> >>>>>
> >>>>> I mentioned this in the PR that triggered this:
> >>>>>
> >>>>>> KIP is accurate, though this is one of those things that we should
> >>>>> probably get a KIP for a standard set of config options across all
> >>> tools so
> >>>>> additions like this can just fall under the umbrella of that KIP...
> >>>>>
> >>>>> I think it would be great if someone wrote up a small KIP providing
> >>> some
> >>>>> standardized settings that we could get future additions
> automatically
> >>>>> umbrella'd under, e.g. no need to do a KIP if just adding a
> >>> consumer.config
> >>>>> or consumer-property config conforming to existing expectations for
> >>> other
> >>>>> tools. We could also standardize on a few other settings names that
> are
> >>>>> inconsistent across different tools and set out a clear path forward
> >>> for
> >>>>> future tools.
> >>>>>
> >>>>> I think I still have at least one open PR from when I first started
> on
> >>> the
> >>>>> project where I was trying to clean up some command line stuff to be
> >>> more
> >>>>> consistent. This has been an issue for many years now...
> >>>>>
> >>>>> -Ewen
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska <
> eno.there...@gmail.com>
> >>>>> wrote:
> >>>>>
> >>>>>> +1 thanks.
> >>>>>>
> >>>>>> Eno
> >>>>>>> On 16 May 2017, at 04:20, BigData dev <bigdatadev...@gmail.com>
> >>> wrote:
> >>>>>>>
> >>>>>>> Hi All,
> >>>>>>> Given the simple and non-controversial nature of the KIP, I would
> >>> like
> >>>>> to
> >>>>>>> start the voting process for KIP-157: Add consumer config options
> to
> >>>>>>> streams reset tool
> >>>>>>>
> >>>>>>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >>>>>> 157+-+Add+consumer+config+options+to+streams+reset+tool
> >>>>>>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >>>>>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
> >>>>>>>
> >>>>>>>
> >>>>>>> The vote will run for a minimum of 72 hours.
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>>
> >>>>>>> Bharat
> >>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>
> >
>
>


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-17 Thread BigData dev
Hi,
While trying to find more info, I found that there is already a proposed
KIP for this:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-14+-+Tools+Standardization


Thanks,
Bharat

On Wed, May 17, 2017 at 12:38 PM, BigData dev <bigdatadev...@gmail.com>
wrote:

> Hi Ewen, Matthias,
> For common configuration across all the tools, I will work on that as part
> of other KIP by looking into all Kafka tools.
>
>
> Thanks,
> Bharat
>
>
> On Wed, May 17, 2017 at 9:40 AM, Matthias J. Sax <matth...@confluent.io>
> wrote:
>
>> +1
>>
>> I also second Ewen comment -- standardizing the common supported
>> parameters over all tools would be great!
>>
>>
>> -Matthias
>>
>> On 5/17/17 12:57 AM, Damian Guy wrote:
>> > +1
>> >
>> > On Wed, 17 May 2017 at 05:40 Ewen Cheslack-Postava <e...@confluent.io>
>> > wrote:
>> >
>> >> +1 (binding)
>> >>
>> >> I mentioned this in the PR that triggered this:
>> >>
>> >>> KIP is accurate, though this is one of those things that we should
>> >> probably get a KIP for a standard set of config options across all
>> tools so
>> >> additions like this can just fall under the umbrella of that KIP...
>> >>
>> >> I think it would be great if someone wrote up a small KIP providing
>> some
>> >> standardized settings that we could get future additions automatically
>> >> umbrella'd under, e.g. no need to do a KIP if just adding a
>> consumer.config
>> >> or consumer-property config conforming to existing expectations for
>> other
>> >> tools. We could also standardize on a few other settings names that are
>> >> inconsistent across different tools and set out a clear path forward
>> for
>> >> future tools.
>> >>
>> >> I think I still have at least one open PR from when I first started on
>> the
>> >> project where I was trying to clean up some command line stuff to be
>> more
>> >> consistent. This has been an issue for many years now...
>> >>
>> >> -Ewen
>> >>
>> >>
>> >>
>> >> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska <eno.there...@gmail.com>
>> >> wrote:
>> >>
>> >>> +1 thanks.
>> >>>
>> >>> Eno
>> >>>> On 16 May 2017, at 04:20, BigData dev <bigdatadev...@gmail.com>
>> wrote:
>> >>>>
>> >>>> Hi All,
>> >>>> Given the simple and non-controversial nature of the KIP, I would
>> like
>> >> to
>> >>>> start the voting process for KIP-157: Add consumer config options to
>> >>>> streams reset tool
>> >>>>
>> >>>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
>> >>> 157+-+Add+consumer+config+options+to+streams+reset+tool
>> >>>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP+
>> >>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
>> >>>>
>> >>>>
>> >>>> The vote will run for a minimum of 72 hours.
>> >>>>
>> >>>> Thanks,
>> >>>>
>> >>>> Bharat
>> >>>
>> >>>
>> >>
>> >
>>
>>
>


Re: Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-17 Thread BigData dev
Hi Ewen, Matthias,
For common configuration across all the tools, I will work on that as part
of another KIP, after looking into all the Kafka tools.


Thanks,
Bharat


On Wed, May 17, 2017 at 9:40 AM, Matthias J. Sax <matth...@confluent.io>
wrote:

> +1
>
> I also second Ewen comment -- standardizing the common supported
> parameters over all tools would be great!
>
>
> -Matthias
>
> On 5/17/17 12:57 AM, Damian Guy wrote:
> > +1
> >
> > On Wed, 17 May 2017 at 05:40 Ewen Cheslack-Postava <e...@confluent.io>
> > wrote:
> >
> >> +1 (binding)
> >>
> >> I mentioned this in the PR that triggered this:
> >>
> >>> KIP is accurate, though this is one of those things that we should
> >> probably get a KIP for a standard set of config options across all
> tools so
> >> additions like this can just fall under the umbrella of that KIP...
> >>
> >> I think it would be great if someone wrote up a small KIP providing some
> >> standardized settings that we could get future additions automatically
> >> umbrella'd under, e.g. no need to do a KIP if just adding a
> consumer.config
> >> or consumer-property config conforming to existing expectations for
> other
> >> tools. We could also standardize on a few other settings names that are
> >> inconsistent across different tools and set out a clear path forward for
> >> future tools.
> >>
> >> I think I still have at least one open PR from when I first started on
> the
> >> project where I was trying to clean up some command line stuff to be
> more
> >> consistent. This has been an issue for many years now...
> >>
> >> -Ewen
> >>
> >>
> >>
> >> On Tue, May 16, 2017 at 1:12 AM, Eno Thereska <eno.there...@gmail.com>
> >> wrote:
> >>
> >>> +1 thanks.
> >>>
> >>> Eno
> >>>> On 16 May 2017, at 04:20, BigData dev <bigdatadev...@gmail.com>
> wrote:
> >>>>
> >>>> Hi All,
> >>>> Given the simple and non-controversial nature of the KIP, I would like
> >> to
> >>>> start the voting process for KIP-157: Add consumer config options to
> >>>> streams reset tool
> >>>>
> >>>> *https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >>> 157+-+Add+consumer+config+options+to+streams+reset+tool
> >>>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP+
> >>> 157+-+Add+consumer+config+options+to+streams+reset+tool>*
> >>>>
> >>>>
> >>>> The vote will run for a minimum of 72 hours.
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Bharat
> >>>
> >>>
> >>
> >
>
>


Reg: [VOTE] KIP 157 - Add consumer config options to streams reset tool

2017-05-15 Thread BigData dev
Hi All,
Given the simple and non-controversial nature of the KIP, I would like to
start the voting process for KIP-157: Add consumer config options to
streams reset tool

*https://cwiki.apache.org/confluence/display/KAFKA/KIP+157+-+Add+consumer+config+options+to+streams+reset+tool
*


The vote will run for a minimum of 72 hours.

Thanks,

Bharat


Re: [VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-10 Thread BigData dev
Thanks everyone for voting.

KIP-156 has passed with +4 binding (Neha, Jay Kreps, Sriram Subramanian and
Gwen Shapira) and +3 non-binding (Eno Thereska, Matthias J. Sax and Bill
Bejeck)

Thanks,

Bharat Viswanadham

On Wed, May 10, 2017 at 9:46 AM, Sriram Subramanian <r...@confluent.io>
wrote:

> +1
>
> On Wed, May 10, 2017 at 9:45 AM, Neha Narkhede <n...@confluent.io> wrote:
>
> > +1
> >
> > On Wed, May 10, 2017 at 12:32 PM Gwen Shapira <g...@confluent.io> wrote:
> >
> > > +1. Also not sure that adding a parameter to a CLI requires a KIP. It
> > seems
> > > excessive.
> > >
> > >
> > > On Tue, May 9, 2017 at 7:57 PM Jay Kreps <j...@confluent.io> wrote:
> > >
> > > > +1
> > > > On Tue, May 9, 2017 at 3:41 PM BigData dev <bigdatadev...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi, Everyone,
> > > > >
> > > > > Since this is a relatively simple change, I would like to start the
> > > > voting
> > > > > process for KIP-156: Add option "dry run" to Streams application
> > reset
> > > > tool
> > > > >
> > > > >
> > > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=69410150
> > > > >
> > > > >
> > > > > The vote will run for a minimum of 72 hours.
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Bharat
> > > > >
> > > >
> > >
> > --
> > Thanks,
> > Neha
> >
>


Re: [VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-09 Thread BigData dev
Eno,
I got confirmation from the JIRA that all tools and their parameters are
public API, so I have started the vote for this KIP.

Thanks,
Bharat

On Tue, May 9, 2017 at 1:09 PM, Eno Thereska <eno.there...@gmail.com> wrote:

> +1 for me. I’m not sure we even need a KIP for this but it’s better to be
> safe I guess.
>
> Eno
>
> > On May 9, 2017, at 8:41 PM, BigData dev <bigdatadev...@gmail.com> wrote:
> >
> > Hi, Everyone,
> >
> > Since this is a relatively simple change, I would like to start the
> voting
> > process for KIP-156: Add option "dry run" to Streams application reset
> tool
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=69410150
> >
> >
> > The vote will run for a minimum of 72 hours.
> >
> >
> > Thanks,
> >
> > Bharat
>
>


[VOTE] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-09 Thread BigData dev
Hi, Everyone,

Since this is a relatively simple change, I would like to start the voting
process for KIP-156: Add option "dry run" to Streams application reset tool

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=69410150


The vote will run for a minimum of 72 hours.
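The flag's semantics follow the usual dry-run pattern: compute the actions, and only apply them when not in dry-run mode. A generic sketch of that pattern (illustrative only, not the tool's actual code):

```python
def plan_reset(topics):
    # Compute, but do not apply, the reset actions.
    return [f"reset offsets for {t} to beginning" for t in topics]

def reset_offsets(topics, dry_run=False):
    plan = plan_reset(topics)
    for action in plan:
        if dry_run:
            print("DRY-RUN:", action)   # report only, no side effects
        else:
            print("APPLYING:", action)  # the real tool would mutate state here
    return plan

reset_offsets(["input-1", "input-2"], dry_run=True)
```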


Thanks,

Bharat


Re: [VOTE] KIP-154 Add Kafka Connect configuration properties for creating internal topics

2017-05-08 Thread BigData dev
+1 (non-binding)

On Mon, May 8, 2017 at 3:25 PM, Dongjin Lee  wrote:

> +1
>
> On Tue, May 9, 2017 at 7:24 AM, Sriram Subramanian 
> wrote:
>
> > +1
> >
> > On Mon, May 8, 2017 at 2:14 PM, Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > +1 (non binding)
> > >
> > > On Mon, May 8, 2017 at 1:33 PM, Stephane Maarek <
> > > steph...@simplemachines.com.au> wrote:
> > >
> > > > +1 (non binding)
> > > >
> > > >
> > > >
> > > > On 9/5/17, 5:51 am, "Randall Hauch"  wrote:
> > > >
> > > > Hi, everyone.
> > > >
> > > > Given the simple and non-controversial nature of the KIP, I would
> > > like
> > > > to
> > > > start the voting process for KIP-154: Add Kafka Connect
> > configuration
> > > > properties for creating internal topics:
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 154+Add+Kafka+Connect+configuration+properties+for+
> > > > creating+internal+topics
> > > >
> > > > The vote will run for a minimum of 72 hours.
> > > >
> > > > Thanks,
> > > >
> > > > Randall
> > > >
> > > >
> > > >
> > > >
> > >
> >
>
>
>
> --
> *Dongjin Lee*
>
>
>
> *A hitchhiker in the mathematical world.*
> facebook: www.facebook.com/dongjin.lee.kr
> linkedin: kr.linkedin.com/in/dongjinleekr
> github: github.com/dongjinleekr
> twitter: www.twitter.com/dongjinleekr
>


Re: [DISCUSS] KIP-154 Add Kafka Connect configuration properties for creating internal topics

2017-05-08 Thread BigData dev
Thank you, got it.


On Mon, May 8, 2017 at 8:34 PM, Randall Hauch <rha...@gmail.com> wrote:

> Yes, that's the approach I'm suggesting and that is mentioned in the KIP. I
> also propose that the distributed configuration provided in the examples
> set the replication factor to one but include a relevant comment.
>
> On Mon, May 8, 2017 at 11:14 PM, BigData dev <bigdatadev...@gmail.com>
> wrote:
>
> > So, when Kafka broker is less than 3, and the user has not set the
> > replication configuration it will throw an error to the user, to correct
> > the configuration according to his setup? Is this the approach you are
> > suggesting here?
> >
> >
> >
> > On Mon, May 8, 2017 at 7:13 PM, Randall Hauch <rha...@gmail.com> wrote:
> >
> > > One of the "Rejected Alternatives" was to do something "smarter" by
> > > automatically reducing the replication factor when the cluster size is
> > > smaller than the replication factor. However, this is extremely
> > > unintuitive, and in rare cases (e.g., during a partial outage) might
> even
> > > result in internal topics being created with too small of a replication
> > > factor. And defaulting to 1 is certainly bad for production use cases,
> so
> > > that's not an option, either.
> > >
> > > While defaulting to 3 and failing if the cluster doesn't have 3 nodes
> is
> > a
> > > bit harsher than I'd like, it does appear to be the safer option: an
> > error
> > > message (with instructions on how to correct) is better than
> > inadvertently
> > > setting the replication factor too small and not knowing about it until
> > it
> > > is too late.
> > >
> > > On Mon, May 8, 2017 at 6:12 PM, BigData dev <bigdatadev...@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > > I liked the KIP, as it will avoid so many errors which user can make
> > > during
> > > > setup.
> > > > I have 1 questions here.
> > > > 1. As default replication factor is set to 3, but if Kafka cluster is
> > > setup
> > > > for one node, then the user needs to override the default
> configuraion,
> > > > till then topics will not be created.
> > > > So, this is the behavior we want to give?
> > > >
> > > > On Mon, May 8, 2017 at 2:25 PM, Konstantine Karantasis <
> > > > konstant...@confluent.io> wrote:
> > > >
> > > > > Thanks a lot for the KIP Randall. This improvement should simplify
> > both
> > > > > regular deployments and testing!
> > > > >
> > > > > A minor comment. Maybe it would be nice to add a note about why
> > there's
> > > > no
> > > > > need for the property: config.storage.partitions
> > > > > I'm mentioning this for the sake of completeness, in case someone
> > > notices
> > > > > this slight asymmetry with respect to the newly introduced config
> > > > > properties.
> > > > >
> > > > > This is by no means a blocking comment.
> > > > >
> > > > > Thanks,
> > > > > Konstantine
> > > > >
> > > > > On Fri, May 5, 2017 at 7:18 PM, Randall Hauch <rha...@gmail.com>
> > > wrote:
> > > > >
> > > > > > Thanks, Gwen.
> > > > > >
> > > > > > Switching to low-priority is a great idea.
> > > > > >
> > > > > > The default value for the replication factor configuration is 3,
> > > since
> > > > > > that makes sense and is safe for production. Using the default
> > values
> > > > in
> > > > > > the example would mean it could only be run against a Kafka
> cluster
> > > > with
> > > > > a
> > > > > > minimum of 3 nodes. I propose overriding the example's
> replication
> > > > factor
> > > > > > configurations to be 1 so that the examples could be run on any
> > sized
> > > > > > cluster.
> > > > > >
> > > > > > The rejected alternatives mentions why the implementation doesn't
> > try
> > > > to
> > > > > > be too smart by calculating the replication factor.
> > > > > >
> > > > > > Best regards,
> > > > > >
> > > > > > Randall
> > > > > >

Re: [DISCUSS] KIP-154 Add Kafka Connect configuration properties for creating internal topics

2017-05-08 Thread BigData dev
So, when the Kafka broker count is less than 3 and the user has not set the
replication configuration, will it throw an error telling the user to
correct the configuration according to their setup? Is this the approach you
are suggesting here?



On Mon, May 8, 2017 at 7:13 PM, Randall Hauch <rha...@gmail.com> wrote:

> One of the "Rejected Alternatives" was to do something "smarter" by
> automatically reducing the replication factor when the cluster size is
> smaller than the replication factor. However, this is extremely
> unintuitive, and in rare cases (e.g., during a partial outage) might even
> result in internal topics being created with too small of a replication
> factor. And defaulting to 1 is certainly bad for production use cases, so
> that's not an option, either.
>
> While defaulting to 3 and failing if the cluster doesn't have 3 nodes is a
> bit harsher than I'd like, it does appear to be the safer option: an error
> message (with instructions on how to correct) is better than inadvertently
> setting the replication factor too small and not knowing about it until it
> is too late.
>
> On Mon, May 8, 2017 at 6:12 PM, BigData dev <bigdatadev...@gmail.com>
> wrote:
>
> > Hi,
> > I liked the KIP, as it will avoid so many errors which user can make
> during
> > setup.
> > I have 1 questions here.
> > 1. As default replication factor is set to 3, but if Kafka cluster is
> setup
> > for one node, then the user needs to override the default configuraion,
> > till then topics will not be created.
> > So, this is the behavior we want to give?
> >
> > On Mon, May 8, 2017 at 2:25 PM, Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > Thanks a lot for the KIP Randall. This improvement should simplify both
> > > regular deployments and testing!
> > >
> > > A minor comment. Maybe it would be nice to add a note about why there's
> > no
> > > need for the property: config.storage.partitions
> > > I'm mentioning this for the sake of completeness, in case someone
> notices
> > > this slight asymmetry with respect to the newly introduced config
> > > properties.
> > >
> > > This is by no means a blocking comment.
> > >
> > > Thanks,
> > > Konstantine
> > >
> > > On Fri, May 5, 2017 at 7:18 PM, Randall Hauch <rha...@gmail.com>
> wrote:
> > >
> > > > Thanks, Gwen.
> > > >
> > > > Switching to low-priority is a great idea.
> > > >
> > > > The default value for the replication factor configuration is 3,
> since
> > > > that makes sense and is safe for production. Using the default values
> > in
> > > > the example would mean it could only be run against a Kafka cluster
> > with
> > > a
> > > > minimum of 3 nodes. I propose overriding the example's replication
> > factor
> > > > configurations to be 1 so that the examples could be run on any sized
> > > > cluster.
> > > >
> > > > The rejected alternatives mentions why the implementation doesn't try
> > to
> > > > be too smart by calculating the replication factor.
> > > >
> > > > Best regards,
> > > >
> > > > Randall
> > > >
> > > > > On May 5, 2017, at 8:02 PM, Gwen Shapira <g...@confluent.io>
> wrote:
> > > > >
> > > > > Looks great to me :)
> > > > >
> > > > > Just one note - configurations have levels (which reflect in the
> > docs)
> > > -
> > > > I
> > > > > suggest putting the whole thing as LOW. Most users will never need
> to
> > > > worry
> > > > > about these. For same reason I recommend leaving them out of the
> > > example
> > > > > config files - we already have issues with users playing with
> configs
> > > > > without understanding what they are doing and not liking the
> results.
> > > > >
> > > > >> On Fri, May 5, 2017 at 3:42 PM, Randall Hauch <rha...@gmail.com>
> > > wrote:
> > > > >>
> > > > >> Hi, all.
> > > > >>
> > > > >> I've been working on KAFKA-4667 to change the distributed worker
> of
> > > > Kafka
> > > > >> Connect to look for the topics used to store connector and task
> > > > >> configurations, offsets, and status, and if those tasks do not
> exist
> > > to
> > > > 

Re: [VOTE] KIP-151: Expose Connector type in REST API (first attempt :)

2017-05-08 Thread BigData dev
+1 (non-binding)

Thanks,
Bharat


On Mon, May 8, 2017 at 4:39 PM, Konstantine Karantasis <
konstant...@confluent.io> wrote:

> +1 (non-binding)
>
> On Mon, May 8, 2017 at 3:39 PM, dan  wrote:
>
> > i'd like to begin voting on
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 151+Expose+Connector+type+in+REST+API
> >
> > discussion should remain on
> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201705.
> > mbox/%3CCAFJy-U-pF7YxSRadx_zAQYCX2+SswmVPSBcA4tDMPP5834s6Kg@mail.
> > gmail.com%3E
> >
> > This voting thread will stay active for a minimum of 72 hours.
> >
> > thanks
> > dan
> >
>


Re: [DISCUSS] KIP-154 Add Kafka Connect configuration properties for creating internal topics

2017-05-08 Thread BigData dev
Hi,
I like the KIP, as it will avoid many errors that users can make during
setup.
I have one question:
1. Since the default replication factor is set to 3, if the Kafka cluster is
set up with one node, the user needs to override the default configuration;
until then, the topics will not be created.
Is this the behavior we want to give?

On Mon, May 8, 2017 at 2:25 PM, Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Thanks a lot for the KIP Randall. This improvement should simplify both
> regular deployments and testing!
>
> A minor comment. Maybe it would be nice to add a note about why there's no
> need for the property: config.storage.partitions
> I'm mentioning this for the sake of completeness, in case someone notices
> this slight asymmetry with respect to the newly introduced config
> properties.
>
> This is by no means a blocking comment.
>
> Thanks,
> Konstantine
>
> On Fri, May 5, 2017 at 7:18 PM, Randall Hauch  wrote:
>
> > Thanks, Gwen.
> >
> > Switching to low-priority is a great idea.
> >
> > The default value for the replication factor configuration is 3, since
> > that makes sense and is safe for production. Using the default values in
> > the example would mean it could only be run against a Kafka cluster with
> a
> > minimum of 3 nodes. I propose overriding the example's replication factor
> > configurations to be 1 so that the examples could be run on any sized
> > cluster.
> >
> > The rejected alternatives mentions why the implementation doesn't try to
> > be too smart by calculating the replication factor.
> >
> > Best regards,
> >
> > Randall
> >
> > > On May 5, 2017, at 8:02 PM, Gwen Shapira  wrote:
> > >
> > > Looks great to me :)
> > >
> > > Just one note - configurations have levels (which reflect in the docs)
> -
> > I
> > > suggest putting the whole thing as LOW. Most users will never need to
> > worry
> > > about these. For same reason I recommend leaving them out of the
> example
> > > config files - we already have issues with users playing with configs
> > > without understanding what they are doing and not liking the results.
> > >
> > >> On Fri, May 5, 2017 at 3:42 PM, Randall Hauch 
> wrote:
> > >>
> > >> Hi, all.
> > >>
> > >> I've been working on KAFKA-4667 to change the distributed worker of
> > Kafka
> > >> Connect to look for the topics used to store connector and task
> > >> configurations, offsets, and status, and if those tasks do not exist
> to
> > >> create them using the new AdminClient. To make this as useful as
> > possible
> > >> and to minimize the need to still manually create the topics, I
> propose
> > >> adding several new distributed worker configurations to specify the
> > >> partitions and replication factor for these topics, and have outlined
> > them
> > >> in "KIP-154 Add Kafka Connect configuration properties for creating
> > >> internal topics".
> > >>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> 154+Add+Kafka+Connect+configuration+properties+for+
> > >> creating+internal+topics
> > >>
> > >> Please take a look and provide feedback. Thanks!
> > >>
> > >> Best regards,
> > >>
> > >> Randall
> > >>
> > >
> > >
> > >
> > > --
> > > *Gwen Shapira*
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter  | blog
> > > 
> >
>
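To make the discussion above concrete, here is a hedged sketch of a distributed-worker configuration using the KIP-154 property names, with the replication factors overridden to 1 for a single-broker development cluster (per Randall's suggestion for the examples); the values are illustrative assumptions, not production advice:

```properties
# connect-distributed.properties (excerpt) -- illustrative values only.
# Replication factor 1 is suitable only for a single-broker dev cluster.
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
offset.storage.partitions=25

config.storage.topic=connect-configs
config.storage.replication.factor=1
# Note: no config.storage.partitions -- the config topic always has a
# single partition (the asymmetry Konstantine points out above).

status.storage.topic=connect-status
status.storage.replication.factor=1
status.storage.partitions=5
```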


[DISCUSS] KIP-156 Add option "dry run" to Streams application reset tool

2017-05-08 Thread BigData dev
Hi All,
I want to start a discussion on this simple KIP for Kafka Streams reset
tool (kafka-streams-application-reset.sh).
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=69410150

Thank you, Matthias J. Sax, for providing me the JIRA and info to work on.


Thanks,
Bharat


Need Permissions for creating KIP in Kafka

2017-05-05 Thread BigData dev
Hi,
Could you please provide permission for creating KIPs in Kafka?

username: bharatv
email: bigdatadev...@gmail.com


Reg: Kafka HDFS Connector with (HDFS SSL enabled)

2017-02-15 Thread BigData dev
Hi,

Does the Kafka HDFS Connector work with HDFS (SSL)? The only
security-related properties I see are hdfs.authentication.kerberos,
connect.hdfs.keytab, and hdfs.namenode.principal, and these are all related
to HDFS Kerberos.

From the configuration and code I see that we pass only Kerberos parameters
and no SSL configuration, so I want to confirm: will the Kafka HDFS
Connector work with HDFS (SSL enabled)?

Could you please provide any information on this.


Thanks


Re: [VOTE] KIP-118: Drop Support for Java 7 in Kafka 0.11

2017-02-09 Thread BigData dev
+1

Thanks,
Bharat


On Thu, Feb 9, 2017 at 9:27 AM, Jason Gustafson  wrote:

> +1
>
> On Thu, Feb 9, 2017 at 9:00 AM, Grant Henke  wrote:
>
> > +1
> >
> > On Thu, Feb 9, 2017 at 10:51 AM, Mickael Maison <
> mickael.mai...@gmail.com>
> > wrote:
> >
> > > +1 too.
> > >
> > > On Thu, Feb 9, 2017 at 4:30 PM, Edoardo Comar 
> wrote:
> > > > +1 (non-binding)
> > > > --
> > > > Edoardo Comar
> > > > IBM MessageHub
> > > > eco...@uk.ibm.com
> > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > >
> > > > IBM United Kingdom Limited Registered in England and Wales with
> number
> > > > 741598 Registered office: PO Box 41, North Harbour, Portsmouth,
> Hants.
> > > PO6
> > > > 3AU
> > > >
> > > >
> > > >
> > > > From:   Ismael Juma 
> > > > To: dev@kafka.apache.org
> > > > Date:   09/02/2017 15:33
> > > > Subject:[VOTE] KIP-118: Drop Support for Java 7 in Kafka 0.11
> > > > Sent by:isma...@gmail.com
> > > >
> > > >
> > > >
> > > > Hi everyone,
> > > >
> > > > Since everyone in the discuss thread was in favour (10 people
> > responded),
> > > > I
> > > > would like to initiate the voting process for KIP-118: Drop Support
> for
> > > > Java 7 in Kafka 0.11:
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 118%3A+Drop+Support+for+Java+7+in+Kafka+0.11
> > > >
> > > >
> > > > The vote will run for a minimum of 72 hours.
> > > >
> > > > Thanks,
> > > > Ismael
> > > >
> > > >
> > > >
> > > > Unless stated otherwise above:
> > > > IBM United Kingdom Limited - Registered in England and Wales with
> > number
> > > > 741598.
> > > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> PO6
> > > 3AU
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>


Reg: Kafka Kerberos

2017-02-07 Thread BigData dev
Hi,
I am using Kafka 0.10.1.0 on a Kerberized cluster.

Kafka_jaas.conf file:

Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="zookeeper"
   principal="kafka/h...@example.com";
};

If I change the keytab to a user keytab (e.g., kafkatest), the topic will be
created (creating the topic using the Kafka console command), but it does
not have any metadata information or a leader assigned to it (as the kafka
service user does not have access; when I check under the ZooKeeper nodes,
the topic node has the permissions below):

getAcl /brokers/topics/user-topic-test1
'world,'anyone
: r
'sasl,'kafkatest
: cdrwa


So, if I do setAcl /brokers/topics/user-topic-test1
world:anyone:r,sasl:kafkatest:cdrwa,sasl:kafka:cdrwa and then restart
Kafka, the topic has a leader assigned to it.

So, is it mandatory for the Client section to use the kafka service keytab,
or to add the principal specified in keyTab as a super user to make it work?


Could anyone please provide information on this.


Thanks


Reg: Kafka ACLS

2017-01-25 Thread BigData dev
Hi,
I have a question: can we use Kafka ACLs with only the SASL/PLAIN mechanism?
Because after I enabled it, I am still able to produce/consume from topics.

And one more observation: in kafka_jaas.conf there is no Client section, so
we get a WARN as below, since we don't have this kind of mechanism with
ZooKeeper. Just want to confirm: is this expected?

*WARN SASL configuration failed: javax.security.auth.login.LoginException:
No JAAS configuration section named 'Client' was found in specified JAAS
configuration file: '/usr/iop/current/kafka-broker/conf/kafka_jaas.conf'.
Will continue connection to Zookeeper server without SASL authentication,
if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)*

KafkaClient {

org.apache.kafka.common.security.plain.PlainLoginModule required

username="alice"

password="alice-secret";

};


KafkaServer {

org.apache.kafka.common.security.plain.PlainLoginModule required

username="admin"

password="admin-secret"

user_admin="admin-secret"

user_alice="alice-secret";

};


I see the recommendation is SASL/PLAIN with SSL; can we use only the
SASL/PLAIN mechanism with ACLs?

Thanks


Re: Reg: ACLS

2016-12-10 Thread BigData dev
Hi,

bin/kafka-acls.sh --topic kafka-testtopic --add -allow-host 9.30.15.19
--operation Write --authorizer-properties
zookeeper.connect=hostname.abc.com:2181

Below is the message I am getting:
You must specify one of: --allow-principal, --deny-principal when
trying to add ACLs.

So, as Kerberos is not enabled, what will the allow-principal value be?

Any information on this would be greatly helpful.



Thanks

On Sat, Dec 10, 2016 at 11:02 AM, BigData dev <bigdatadev...@gmail.com>
wrote:

> Hi Ashish, Ismael
> Thanks for Info.
> So on Kafka Cluster (With out any security enabled) I can add ACLS with IP
> address.
> Is that correct?
>
>
> Thanks,
> Bharat
>
>
> On Fri, Dec 9, 2016 at 11:14 AM, Ashish Singh <asi...@cloudera.com> wrote:
>
>> Ismael, thanks for the correction. I assumed the question was targeted for
>> without any security enabled, but yea even then IP based auth is possible.
>>
>> On Fri, Dec 9, 2016 at 11:01 AM, Ismael Juma <ism...@juma.me.uk> wrote:
>>
>> > It is possible to use ACLs with IPs or other SASL mechanisms (PLAIN for
>> > example). So Kerberos and SSL are not required (although commonly used).
>> >
>> > Ismael
>> >
>> > On Fri, Dec 9, 2016 at 6:59 PM, Ashish Singh <asi...@cloudera.com>
>> wrote:
>> >
>> > > Hey,
>> > >
>> > > No it does not. Without kerberos or ssl, all requests will appear to
>> come
>> > > from anonymous user, and as long as a user is not identified it is not
>> > > possible to do authorization on.
>> > >
>> > > On Fri, Dec 9, 2016 at 10:40 AM, BigData dev <bigdatadev...@gmail.com
>> >
>> > > wrote:
>> > >
>> > > > Hi All,
>> > > > I have a question here, Does Kafka support ACL's with out
>> kerberos/SSL?
>> > > >
>> > > > Any info on this would be greatly helpful.
>> > > >
>> > > >
>> > > > Thanks
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > >
>> > > Regards,
>> > > Ashish
>> > >
>> >
>>
>>
>>
>> --
>>
>> Regards,
>> Ashish
>>
>
>


Re: Reg: ACLS

2016-12-10 Thread BigData dev
Hi Ashish, Ismael
Thanks for the info.
So on a Kafka cluster (without any security enabled) I can add ACLs with an
IP address.
Is that correct?


Thanks,
Bharat


On Fri, Dec 9, 2016 at 11:14 AM, Ashish Singh <asi...@cloudera.com> wrote:

> Ismael, thanks for the correction. I assumed the question was targeted for
> without any security enabled, but yea even then IP based auth is possible.
>
> On Fri, Dec 9, 2016 at 11:01 AM, Ismael Juma <ism...@juma.me.uk> wrote:
>
> > It is possible to use ACLs with IPs or other SASL mechanisms (PLAIN for
> > example). So Kerberos and SSL are not required (although commonly used).
> >
> > Ismael
> >
> > On Fri, Dec 9, 2016 at 6:59 PM, Ashish Singh <asi...@cloudera.com>
> wrote:
> >
> > > Hey,
> > >
> > > No it does not. Without kerberos or ssl, all requests will appear to
> come
> > > from anonymous user, and as long as a user is not identified it is not
> > > possible to do authorization on.
> > >
> > > On Fri, Dec 9, 2016 at 10:40 AM, BigData dev <bigdatadev...@gmail.com>
> > > wrote:
> > >
> > > > Hi All,
> > > > I have a question here, Does Kafka support ACL's with out
> kerberos/SSL?
> > > >
> > > > Any info on this would be greatly helpful.
> > > >
> > > >
> > > > Thanks
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Regards,
> > > Ashish
> > >
> >
>
>
>
> --
>
> Regards,
> Ashish
>


Reg: ACLS

2016-12-09 Thread BigData dev
Hi All,
I have a question here: does Kafka support ACLs without Kerberos/SSL?

Any info on this would be greatly helpful.


Thanks


Re: [ANNOUNCE] New committer: Jiangjie (Becket) Qin

2016-10-31 Thread BigData dev
Congratulations Becket!!



Thanks,
Bharat

On Mon, Oct 31, 2016 at 11:26 AM, Jun Rao  wrote:

> Congratulations, Jiangjie. Thanks for all your contributions to Kafka.
>
> Jun
>
> On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy  wrote:
>
> > The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to join as a
> > committer and we are pleased to announce that he has accepted!
> >
> > Becket has made significant contributions to Kafka over the last two
> years.
> > He has been deeply involved in a broad range of KIP discussions and has
> > contributed several major features to the project. He recently completed
> > the implementation of a series of improvements (KIP-31, KIP-32, KIP-33)
> to
> > Kafka’s message format that address a number of long-standing issues such
> > as avoiding server-side re-compression, better accuracy for time-based
> log
> > retention, log roll and time-based indexing of messages.
> >
> > Congratulations Becket! Thank you for your many contributions. We are
> > excited to have you on board as a committer and look forward to your
> > continued participation!
> >
> > Joel
> >
>


Reg: Kafka Security features

2016-10-12 Thread BigData dev
Hi All,
Could you please provide the information below.

1. Are the Kafka security features (Kerberos, ACLs) beta-quality code, or
can they be used in production? The Kafka documentation shows they are of
beta code quality.

From the Apache Kafka documentation: "In release 0.9.0.0, the Kafka community
added a number of features that, used either separately or together,
increases security in a Kafka cluster. These features are considered to be
of beta quality."

2. Only the new Kafka consumer/producer supports the security features.
From the Apache Kafka documentation: "The code is considered beta quality.
Below is the configuration for the new consumer"

So, can we use the Kafka security features on a production cluster?
Could anyone help with this.


Reg: DefaultParititioner in Kafka

2016-08-29 Thread BigData dev
Hi All,
In the DefaultPartitioner implementation, when the key is null, we get the
partition number by modulo of the available partitions. Below is the code
snippet:

if (availablePartitions.size() > 0) {
    int part = Utils.toPositive(nextValue) % availablePartitions.size();
    return availablePartitions.get(part).partition();
}

Whereas when the key is not null, we get the partition number by modulo of
the total number of partitions:

return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;

So if some partitions are not available, the producer will not be able to
publish messages to those partitions.

Shouldn't we do the same here, considering only the available partitions?

https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L67

Could anyone help clarify this issue?
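A minimal sketch (in Python, purely illustrative — the real DefaultPartitioner is Java) of the asymmetry described above: the null-key path mods over only the available partitions, while the keyed path mods over the total partition count, so a keyed record can land on an unavailable partition:

```python
def partition_for(key_hash, counter, all_partitions, available):
    """Mimic DefaultPartitioner's two code paths (illustrative only)."""
    if key_hash is None:
        # Null key: round-robin over *available* partitions only.
        return available[counter % len(available)]
    # Non-null key: hash modulo *total* partition count, so the chosen
    # partition may currently have no leader (i.e., be unavailable).
    return all_partitions[key_hash % len(all_partitions)]

all_parts = [0, 1, 2, 3]
available = [0, 2, 3]          # partition 1 has no leader right now

print(partition_for(None, 5, all_parts, available))   # -> 3, from the available set
print(partition_for(9, 0, all_parts, available))      # -> 1, i.e. the unavailable one
```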


Thanks,
Bharat


Re: [DISCUSS] KIP-76: Enable getting password from executable rather than passing as plaintext in config files

2016-08-24 Thread BigData dev
+1 (non-binding)


Thanks,
Bharat

On Wed, Aug 24, 2016 at 12:03 PM, Ashish Singh  wrote:

> Hey Guys,
>
> I’ve just posted KIP-76: Enable getting password from executable rather
> than passing as plaintext in config files
> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-76+Enable+getting+password+from+executable+rather+than+passing+as+plaintext+in+config+files>.
>
> The proposal is to enable getting passwords from executable. This is an ask
> from very security conscious users.
>
> Full details are here:
>
> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 76+Enable+getting+password+from+executable+rather+than+
> passing+as+plaintext+in+config+files
> JIRA: https://issues.apache.org/jira/browse/KAFKA-2629
> POC: https://github.com/apache/kafka/pull/1770
>
> Thanks
>
> --
>
> Regards,
> Ashish
>


Reg: SSL setup

2016-08-03 Thread BigData dev
Hi,
Can you please provide information on setting up a self-signed certificate
in Kafka? The Kafka documentation covers only the CA-signed setup.

http://kafka.apache.org/documentation.html#security_ssl


This is because we need to provide the truststore and keystore parameters
during configuration.

Or, to work with self-signed certificates, do we need to import all nodes'
certificates into the truststore on all machines?

Can you please provide information on this if you have worked on it.
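On the self-signed question above, one common approach — a sketch under the assumption of no CA, which is not the flow the Kafka docs describe — is to export each node's certificate from its keystore and import it into every peer's truststore. The commands are echoed rather than run here; aliases, file names, and passwords are all placeholders:

```shell
# Illustrative keytool flow for self-signed certs (no CA signing).
# All names and passwords are placeholders; commands are only printed.
for step in \
  "keytool -genkeypair -alias node1 -keyalg RSA -validity 365 -keystore node1.keystore.jks -storepass changeit -dname CN=node1" \
  "keytool -exportcert -alias node1 -file node1.crt -keystore node1.keystore.jks -storepass changeit" \
  "keytool -importcert -alias node1 -file node1.crt -keystore node2.truststore.jks -storepass changeit -noprompt"
do
  echo "$step"
done
```

Repeating the export/import pair for every node pair is what makes each broker and client trust the others without a CA.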


Thanks,
Bharat


Reg: Build Validation Jenkins

2016-06-16 Thread BigData dev
Hi,
I have opened a pull request for one of the JIRAs.
My build has been marked as failed due to a timeout exception. I think the
configuration is set up to mark a build as failed after 2 hours.

I have seen other builds on Jenkins that also have the same issue.

Have the Jenkins executor machines been slow, or what else could be the
reason for this?

Can you please let me know how I can resolve this issue and validate my
build.



Thanks,
Bharat


Re: Reg: Kafka-Acls

2016-05-05 Thread BigData dev
Hi,
Thanks for the info.
It worked.
The ACLs are correctly set, but when I run the producer it throws an error,
even though the ACLs are correct.

bin/kafka-console-producer.sh --broker-list bdavm1222.svl.ibm.com:6667
--topic permissiontopic --producer.config producer.properties
jj
[2016-05-05 16:02:23,308] WARN Error while fetching metadata with
correlation id 0 : {permissiontopic=TOPIC_AUTHORIZATION_FAILED}
(org.apache.kafka.clients.NetworkClient)
[2016-05-05 16:02:23,309] ERROR Error when sending message to topic
permissiontopic with key: null, value: 2 bytes with error: Not authorized
to access topics: [permissiontopic]
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
^C[2016-05-05 16:02:45,754] WARN TGT renewal thread has been interrupted
and will exit. (org.apache.kafka.common.security.kerberos.Login)
The producer is throwing this error.

Any thoughts on this?


Regards,
Bharat


On Thu, May 5, 2016 at 4:19 PM, parth brahmbhatt <brahmbhatt.pa...@gmail.com
> wrote:

> Acls will be written in zookeeper but you are using getAcl , what you need
>  is get  /kafka-acl/Topic/permissiontopic
>
> Thanks
> Parth
>
> On Thu, May 5, 2016 at 3:28 PM, BigData dev <bigdatadev...@gmail.com>
> wrote:
>
> > Hi,
> > When I run the command
> >  /bin/kafka-acls.sh --topic permissiontopic --add --allow-host {host}
> > --allow-principal User:dev --operation Write --authorizer-properties
> > zookeeper.connect={host:port}
> >
> > I am getting output as acls are set.
> >
> > But when i check under zookeeper using below command, it is not showing
> the
> > acls which I have set for user dev.
> >
> > [zk: (CONNECTED) 13] getAcl /kafka-acl/Topic/permissiontopic
> > 'world,'anyone
> > : r
> > 'sasl,'kafka
> > : cdrwa
> >
> > Is my understanding correct kafka-acls will be written to zookeeper node.
> >
> >
> > This is causing when i run producer, it is failing as topic authorization
> > failed.
> >
> > If any one has used this, can you please provide the inputs
> >
> > Regards,
> > Bharat
> >
>


Reg: Kafka-Acls

2016-05-05 Thread BigData dev
Hi,
When I run the command
 /bin/kafka-acls.sh --topic permissiontopic --add --allow-host {host}
--allow-principal User:dev --operation Write --authorizer-properties
zookeeper.connect={host:port}

I get output saying the ACLs are set.

But when I check under ZooKeeper using the command below, it is not showing
the ACLs which I have set for user dev.

[zk: (CONNECTED) 13] getAcl /kafka-acl/Topic/permissiontopic
'world,'anyone
: r
'sasl,'kafka
: cdrwa

Is my understanding correct that the Kafka ACLs will be written to a
ZooKeeper node?

Because of this, when I run the producer, it fails with a topic
authorization error.

If anyone has used this, can you please provide input?

Regards,
Bharat


Re: [ANNOUNCE] New committer: Ismael Juma

2016-04-26 Thread BigData dev
Congrats Ismael!!


Regards,
Bharat


On Tue, Apr 26, 2016 at 4:17 PM, Edward Ribeiro 
wrote:

> Congratulations, Ismael!
>
> E. Ribeiro
> Em 26/04/2016 15:33, "Jason Gustafson"  escreveu:
>
> > Great work, Ismael!
> >
> > On Tue, Apr 26, 2016 at 9:00 AM, Grant Henke 
> wrote:
> >
> > > Congratulations Ismael!
> > > On Apr 26, 2016 8:55 AM, "Harsha"  wrote:
> > >
> > > > Congrats, Ismael
> > > >
> > > > -Harsha
> > > >
> > > > On Tue, Apr 26, 2016, at 08:01 AM, Jun Rao wrote:
> > > > > Congratulations, Ismael!
> > > > >
> > > > > Jun
> > > > >
> > > > > On Mon, Apr 25, 2016 at 10:52 PM, Neha Narkhede  >
> > > > > wrote:
> > > > >
> > > > > > The PMC for Apache Kafka has invited Ismael Juma to join as a
> > > > committer and
> > > > > > we are pleased to announce that he has accepted!
> > > > > >
> > > > > > Ismael has contributed 121 commits
> > > > > >  to a wide
> > > range
> > > > of
> > > > > > areas, notably within the security and the network layer. His
> > > > involvement
> > > > > > has been phenomenal across the board from mailing lists, JIRA,
> code
> > > > reviews
> > > > > > and helping us move to GitHub pull requests to contributing
> > features,
> > > > bug
> > > > > > fixes and code and documentation improvements.
> > > > > >
> > > > > > Thank you for your contribution and welcome to Apache Kafka,
> > Ismael!
> > > > > >
> > > > > > --
> > > > > > Thanks,
> > > > > > Neha
> > > > > >
> > > >
> > >
> >
>


Re: Kafka missing from ASF Jira?

2016-04-25 Thread BigData dev
Hi Wang,
I am facing the same issue too.
Can you provide access to me as well?


Regards,
Bharat


On Mon, Apr 25, 2016 at 11:21 AM, Guozhang Wang  wrote:

> Greg,
>
> Could you try again now? I think even non-contributor should be able to
> create Kafka JIRAs, but they may not be able to assign JIRAs to themselves.
>
> Anyways, have just added you to the contributor list.
>
>
> Guozhang
>
> On Mon, Apr 25, 2016 at 9:14 AM, Greg Fodor  wrote:
>
> > If I go there and hit "Create", the Project List at the top has lots
> > of Apache projects but does not contain any items containing 'kafka' :(
> >
> > On Mon, Apr 25, 2016 at 12:35 AM, Liquan Pei 
> wrote:
> > > Hi Greg,
> > >
> > > Can you try this link?
> > >
> >
> https://issues.apache.org/jira/browse/KAFKA/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
> > >
> > > Thanks,
> > > Liquan
> > >
> > > On Mon, Apr 25, 2016 at 12:04 AM, Greg Fodor  wrote:
> > >
> > >> I am trying to file a bug, but when I go to create a ticket on the ASF
> > >> Jira, Kafka is not visible in the list of projects in the first field
> > >> of the ticket. I see the hundred+ other Apache projects, but no Kafka
> > >> :(
> > >>
> > >
> > >
> > >
> > > --
> > > Liquan Pei
> > > Software Engineer, Confluent Inc
> >
>
>
>
> --
> -- Guozhang
>


Reg: Kafka With Kerberos/SSL [Enhancement to add option, Need suggestions on this]

2016-04-14 Thread BigData dev
Hi All,
When Kafka is running on a Kerberized cluster or with SSL, can we add a
security.protocol option so that the user can specify PLAINTEXT, SSL,
SASL_PLAINTEXT, or SASL_SSL? This would be helpful when running the console
producer and console consumer.

./bin/kafka-console-producer.sh --broker-list  --topic  --security-protocol SASL_PLAINTEXT


./bin/kafka-console-consumer.sh --zookeeper c6401.ambari.apache.org:2181 \
--topic test_topic --from-beginning --security-protocol SASL_PLAINTEXT



*Currently, this property can be configured in producer.properties when
running the console producer, and in consumer.properties when running the
console consumer.*
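For reference, the current workaround can be sketched as the following
fragment (the property value here is just an illustration; any of the four
protocols could be used):

```properties
# producer.properties — illustrative fragment for the current workaround.
# security.protocol selects the protocol used to talk to the broker:
# PLAINTEXT, SSL, SASL_PLAINTEXT, or SASL_SSL.
security.protocol=SASL_PLAINTEXT
```

The console producer then picks this file up via its producer-config
option, rather than taking the protocol directly on the command line as
proposed above.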

Can we add it as a command-line option alongside the other options such as
topic and broker-list?

Please share your thoughts on whether this is useful enough to go in as a
feature. If so, I will create a JIRA and work on it.


*Note: This is how the HDP stack works with Kafka 0.9.0; the link is
provided below.*

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_secure-kafka-ambari/content/ch_secure-kafka-produce-events.html




Regards,
Bharat


Reg: Issue with Kafka Kerberos (Kafka Version 0.9.0.1)

2016-04-12 Thread BigData dev
Hi All,
I am facing an issue with a Kerberized Kafka cluster.

I followed the steps to enable SASL on Kafka from the link below:
http://docs.confluent.io/2.0.0/kafka/sasl.html
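For anyone reproducing this, the broker-side JAAS configuration I used
follows that doc; a minimal sketch (the keytab path, principal, and realm
are placeholders for my environment):

```
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/hostname.com@EXAMPLE.COM";
};
```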



After this, when I start the Kafka server, I get the error below.
[2016-04-12 16:59:26,201] ERROR [KafkaApi-1001] error when handling request
Name:LeaderAndIsrRequest;Version:0;Controller:1001;ControllerEpoch:3;CorrelationId:3;ClientId:1001;Leaders:BrokerEndPoint(1001,
hostname.com,6667);PartitionState:(t1,0) ->
(LeaderAndIsrInfo:(Leader:1001,ISR:1001,LeaderEpoch:1,ControllerEpoch:3),ReplicationFactor:1),AllReplicas:1001),(ambari_kafka_service_check,0)
->
(LeaderAndIsrInfo:(Leader:1001,ISR:1001,LeaderEpoch:2,ControllerEpoch:3),ReplicationFactor:1),AllReplicas:1001)
(kafka.server.KafkaApis)
kafka.common.ClusterAuthorizationException: Request
Request(0,9.30.150.20:6667-9.30.150.20:37550,Session(User:kafka,/9.30.150.20),null,1460505566200,SASL_PLAINTEXT)
is not authorized.
at kafka.server.KafkaApis.authorizeClusterAction(KafkaApis.scala:910)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:113)
at kafka.server.KafkaApis.handle(KafkaApis.scala:72)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
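Since this is a ClusterAuthorizationException, one thing I am checking is
the authorizer configuration in server.properties; a sketch of the relevant
fragment as described in the 0.9 security docs (values illustrative):

```properties
# server.properties — illustrative fragment.
# With an authorizer enabled, inter-broker requests such as LeaderAndIsr
# require ClusterAction permission, so the broker's own principal must be
# authorized, e.g. by listing it as a super user.
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka
```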


Regards,
Bharat


Reg: Contribution to Kafka

2016-03-24 Thread BigData dev
Hi Kafka Contributors,
I am interested in contributing to the Kafka open-source project.
Could you please share some suggestions for understanding the Kafka codebase,
or describe the approach you followed to understand the code?



Regards,
BigDataDev