Re: [VOTE] KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Navinder Brar
Oh sorry, my bad. Will wait for another 12 hours. 

On Thursday, 23 January, 2020, 12:09:57 pm IST, Matthias J. Sax 
 wrote:  
 
 Navinder,

a KIP vote must be open for 72h and cannot be closed earlier.

-Matthias

On 1/22/20 10:27 PM, Navinder Brar wrote:
> Thanks, everyone, for voting. KIP-562 has been accepted with binding votes 
> from John, Matthias, and Guozhang. 
> 
>    On Thursday, 23 January, 2020, 09:40:07 am IST, Guozhang Wang 
> wrote:  
>  
>  +1 (binding) from me as well.
> 
> On Wed, Jan 22, 2020 at 5:59 PM Matthias J. Sax 
> wrote:
> 
>> I have a few minor comments (compare the DISCUSS thread), but overall
>> the KIP looks good.
>>
>> +1 (binding)
>>
>>
>> -Matthias
>>
>> On 1/22/20 10:09 AM, John Roesler wrote:
>>> Thanks for updating the KIP, Navinder.
>>>
>>> I'm +1 (binding) on the current proposal
>>>
>>> Thanks,
>>> -John
>>>
>>> On Tue, Jan 21, 2020, at 12:50, Navinder Brar wrote:
 Thanks, Guozhang. I agree it makes total sense. I will make the
 edits.~Navinder

     On Tuesday, 21 January, 2020, 11:00:32 pm IST, Guozhang Wang
  wrote:

   Hello Navinder,

 Thanks for bringing up this proposal. I made a quick pass on that and
 overall I think I agree with your ideas. Just a few thoughts about the
 public APIs:

 1) As we are adding a new overload to `KafkaStreams#store`, could we
>> just
 add the storeName and queryableStoreType as part of StoreQueryParam, and
 leave that as the only parameter of the function?

 2) Along with 1), for the static constructors, instead of iterating over
 all possible combos I'd suggest we make constructors with only, say,
 storeName, and then add `withXXX()` setters to set the other fields.
>> This is
 in case we want to add more param fields into the object, so that we do not
 need to exponentially add and deprecate the static constructors.


 Guozhang


 On Mon, Jan 20, 2020 at 10:42 AM Navinder Brar
  wrote:

> Hello all,
>
> I'd like to propose a vote to serve keys from a specific
>> partition-store
> instead of iterating over all the local stores of an instance to
>> locate the
> key, as happens currently.
> The full KIP is provided here:
>
>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-562%3A+Allow+fetching+a+key+from+a+single+partition+rather+than+iterating+over+all+the+stores+on+an+instance
>
>
> Thanks,
> Navinder
>


 --
 -- Guozhang

>>
>>
> 
  

Re: [VOTE] KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Matthias J. Sax
Navinder,

a KIP vote must be open for 72h and cannot be closed earlier.

-Matthias

On 1/22/20 10:27 PM, Navinder Brar wrote:
> Thanks, everyone, for voting. KIP-562 has been accepted with binding votes 
> from John, Matthias, and Guozhang. 
> 
> On Thursday, 23 January, 2020, 09:40:07 am IST, Guozhang Wang 
>  wrote:  
>  
>  +1 (binding) from me as well.
> 
> On Wed, Jan 22, 2020 at 5:59 PM Matthias J. Sax 
> wrote:
> 
>> I have a few minor comments (compare the DISCUSS thread), but overall
>> the KIP looks good.
>>
>> +1 (binding)
>>
>>
>> -Matthias
>>
>> On 1/22/20 10:09 AM, John Roesler wrote:
>>> Thanks for updating the KIP, Navinder.
>>>
>>> I'm +1 (binding) on the current proposal
>>>
>>> Thanks,
>>> -John
>>>
>>> On Tue, Jan 21, 2020, at 12:50, Navinder Brar wrote:
 Thanks, Guozhang. I agree it makes total sense. I will make the
 edits.~Navinder

     On Tuesday, 21 January, 2020, 11:00:32 pm IST, Guozhang Wang
  wrote:

   Hello Navinder,

 Thanks for bringing up this proposal. I made a quick pass on that and
 overall I think I agree with your ideas. Just a few thoughts about the
 public APIs:

 1) As we are adding a new overload to `KafkaStreams#store`, could we
>> just
 add the storeName and queryableStoreType as part of StoreQueryParam, and
 leave that as the only parameter of the function?

 2) Along with 1), for the static constructors, instead of iterating over
 all possible combos I'd suggest we make constructors with only, say,
 storeName, and then add `withXXX()` setters to set the other fields.
>> This is
 in case we want to add more param fields into the object, so that we do not
 need to exponentially add and deprecate the static constructors.


 Guozhang


 On Mon, Jan 20, 2020 at 10:42 AM Navinder Brar
  wrote:

> Hello all,
>
> I'd like to propose a vote to serve keys from a specific
>> partition-store
> instead of iterating over all the local stores of an instance to
>> locate the
> key, as happens currently.
> The full KIP is provided here:
>
>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-562%3A+Allow+fetching+a+key+from+a+single+partition+rather+than+iterating+over+all+the+stores+on+an+instance
>
>
> Thanks,
> Navinder
>


 --
 -- Guozhang

>>
>>
> 





Re: [VOTE] KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Navinder Brar
Thanks, everyone, for voting. KIP-562 has been accepted with binding votes from 
John, Matthias, and Guozhang. 

On Thursday, 23 January, 2020, 09:40:07 am IST, Guozhang Wang 
 wrote:  
 
 +1 (binding) from me as well.

On Wed, Jan 22, 2020 at 5:59 PM Matthias J. Sax 
wrote:

> I have a few minor comments (compare the DISCUSS thread), but overall
> the KIP looks good.
>
> +1 (binding)
>
>
> -Matthias
>
> On 1/22/20 10:09 AM, John Roesler wrote:
> > Thanks for updating the KIP, Navinder.
> >
> > I'm +1 (binding) on the current proposal
> >
> > Thanks,
> > -John
> >
> > On Tue, Jan 21, 2020, at 12:50, Navinder Brar wrote:
> >> Thanks, Guozhang. I agree it makes total sense. I will make the
> >> edits.~Navinder
> >>
> >>    On Tuesday, 21 January, 2020, 11:00:32 pm IST, Guozhang Wang
> >>  wrote:
> >>
> >>  Hello Navinder,
> >>
> >> Thanks for bringing up this proposal. I made a quick pass on that and
> >> overall I think I agree with your ideas. Just a few thoughts about the
> >> public APIs:
> >>
> >> 1) As we are adding a new overload to `KafkaStreams#store`, could we
> just
> >> add the storeName and queryableStoreType as part of StoreQueryParam, and
> >> leave that as the only parameter of the function?
> >>
> >> 2) Along with 1), for the static constructors, instead of iterating over
> >> all possible combos I'd suggest we make constructors with only, say,
> >> storeName, and then add `withXXX()` setters to set the other fields.
> This is
> >> in case we want to add more param fields into the object, so that we do not
> >> need to exponentially add and deprecate the static constructors.
> >>
> >>
> >> Guozhang
> >>
> >>
> >> On Mon, Jan 20, 2020 at 10:42 AM Navinder Brar
> >>  wrote:
> >>
> >>> Hello all,
> >>>
> >>> I'd like to propose a vote to serve keys from a specific
> partition-store
> >>> instead of iterating over all the local stores of an instance to
> locate the
> >>> key, as happens currently.
> >>> The full KIP is provided here:
> >>>
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-562%3A+Allow+fetching+a+key+from+a+single+partition+rather+than+iterating+over+all+the+stores+on+an+instance
> >>>
> >>>
> >>> Thanks,
> >>> Navinder
> >>>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
>
>

-- 
-- Guozhang
  

Re: [VOTE] KIP-216: IQ should throw different exceptions for different errors

2020-01-22 Thread Vito Jeng
Thanks to everyone for voting.

KIP-216 has been accepted, with 3 binding votes from
Bill, John, and Matthias, and 1 non-binding vote.


---
Vito


On Thu, Jan 23, 2020 at 1:46 PM Matthias J. Sax 
wrote:

> +1 (binding)
>
> -Matthias
>
> On 1/17/20 9:35 PM, John Roesler wrote:
> > Thanks for the KIP!
> >
> > I'm +1 (binding)
> >
> > Thanks,
> > -John
> >
> > On Thu, Jan 16, 2020, at 08:46, Bill Bejeck wrote:
> >> Thanks for the KIP.
> >>
> >> +1 (binding)
> >>
> >> -Bill
> >>
> >> On Tue, Jan 14, 2020 at 9:41 AM Navinder Brar
> >>  wrote:
> >>
> >>> +1 (non-binding), with a small comment which was mentioned by Vinoth as
> >>> well: did we settle on the flag for StreamsRebalancingException? I don't
> see
> >>> it in the KIP.
> >>> -Navinder
> >>>
> >>>
> >>> On Tuesday, 14 January, 2020, 08:00:11 pm IST, Vito Jeng <
> >>> v...@is-land.com.tw> wrote:
> >>>
> >>>  Hi, all,
> >>>
> >>> I would like to start the vote for KIP-216.
> >>>
> >>> Currently, IQ throws InvalidStateStoreException for any type of error.
> >>> With this KIP, users can distinguish between different types of errors.
> >>>
> >>> KIP is here:
> >>>
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-216%3A+IQ+should+throw+different+exceptions+for+different+errors
> >>>
> >>> Thanks
> >>>
> >>> ---
> >>> Vito
> >>> --
> >>>
> >>>
> >>> ---
> >>> Vito
> >>>
> >>
>
>
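As a hedged illustration of what distinguishing the error types buys callers, here is a sketch under the assumption that the subclass names discussed in this thread (StreamsRebalancingException, StateStoreNotAvailableException) survive into the final KIP; the authoritative hierarchy is defined in KIP-216 itself, and the `reactTo` helper is purely illustrative:

```java
// Illustrative only: exception names are taken from this thread; the
// authoritative hierarchy is in KIP-216. All subclasses extend
// InvalidStateStoreException, so existing catch blocks keep working.
class InvalidStateStoreException extends RuntimeException {
    InvalidStateStoreException(final String message) { super(message); }
}

// Transient: Streams is rebalancing; the caller can retry the query later.
class StreamsRebalancingException extends InvalidStateStoreException {
    StreamsRebalancingException(final String message) { super(message); }
}

// Fatal: Streams is PENDING_SHUTDOWN / NOT_RUNNING / ERROR; retrying won't help.
class StateStoreNotAvailableException extends InvalidStateStoreException {
    StateStoreNotAvailableException(final String message) { super(message); }
}

class IqErrorHandling {
    // What a caller might do per subtype, instead of treating every
    // InvalidStateStoreException the same way.
    static String reactTo(final InvalidStateStoreException e) {
        if (e instanceof StreamsRebalancingException) {
            return "retry";
        }
        if (e instanceof StateStoreNotAvailableException) {
            return "fail";
        }
        return "unknown";
    }
}
```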


Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2020-01-22 Thread Vito Jeng
Got it, thanks Matthias.

---
Vito


On Thu, Jan 23, 2020 at 1:31 PM Matthias J. Sax 
wrote:

> Thanks Vito.
>
> I am also ok with either name. Just a slight personal preference, but
> not important.
>
>
> -Matthias
>
> On 1/21/20 6:52 PM, Vito Jeng wrote:
> > Thanks Matthias.
> >
> > The KIP is about InvalidStateStoreException.
> > I picked `StateStoreNotAvailableException` because it may be more intuitive
> > than `StreamsNotRunningException`.
> >
> > No matter which one is picked, it's fine with me.
> >
> > ---
> > Vito
> >
> >
> > On Wed, Jan 22, 2020 at 7:44 AM Matthias J. Sax 
> > wrote:
> >
> >> Thanks for updating the KIP!
> >>
> >> One last comment/question: you kept `StateStoreNotAvailableException` in
> >> favor of `StreamsNotRunningException` (to merge both as suggested).
> >>
> >> I am wondering, if it might be better to keep
> >> `StreamsNotRunningException` instead of
> >> `StateStoreNotAvailableException`, because this exception is thrown if
> >> Streams is in state PENDING_SHUTDOWN / NOT_RUNNING / ERROR ?
> >>
> >>
> >>
> >> -Matthias
> >>
> >> On 1/17/20 9:56 PM, John Roesler wrote:
> >>> Thanks, Vito. I've just cast my vote.
> >>> -John
> >>>
> >>> On Fri, Jan 17, 2020, at 21:32, Vito Jeng wrote:
>  Hi, folks,
> 
>  Just update the KIP, please take a look.
> 
>  Thanks!
> 
>  ---
>  Vito
> 
> 
>  On Fri, Jan 17, 2020 at 9:12 AM Vito Jeng 
> wrote:
> 
> > Thanks Bill, John and Matthias. Glad you guys joined this discussion.
> > I got a lot out of the discussion.
> >
> > I would like to update KIP-216 base on John's suggestion to remove
> the
> > category.
> >
> >
> > ---
> > Vito
> >
> >
> > On Fri, Jan 17, 2020 at 2:30 AM Matthias J. Sax <
> matth...@confluent.io
> >>>
> > wrote:
> >
> >>> Nevertheless, if we omit the categorization, it’s moot.
> >>
> >> Ack.
> >>
> >> I am fine to remove the middle tier. As John pointed out, it might
> be
> >> weird to have only one concrete exception type per category. We can
> >> also
> >> explain in detail how to handle each exception in their JavaDocs.
> >>
> >>
> >> -Matthias
> >>
> >> On 1/16/20 6:38 AM, Bill Bejeck wrote:
> >>> Vito,
> >>>
> >>> Thanks for the updates, the KIP LGTM.
> >>>
> >>> -Bill
> >>>
> >>> On Wed, Jan 15, 2020 at 11:31 PM John Roesler  >
> >> wrote:
> >>>
>  Hi Vito,
> 
>  Haha, your archive game is on point!
> 
>  What Matthias said in that email is essentially what I figured was
> >> the
>  rationale. It makes sense, but the point I was making is that this
> >> really
>  doesn’t seem like a good way to structure a production app. On the
> >> other
>  hand, considering the exception fatal has a good chance of
> avoiding
> >> a
>  frustrating debug session if you just forgot to call start.
> 
>  Nevertheless, if we omit the categorization, it’s moot.
> 
>  It would be easy to add a categorization layer later if we want
> it,
> >> but
>  not very easy to change it if we get it wrong.
> 
>  Thanks for your consideration!
>  -John
> 
>  On Wed, Jan 15, 2020, at 21:14, Vito Jeng wrote:
> > Hi John,
> >
> > About `StreamsNotStartedException is strange` --
> > The original idea came from Matthias, two years ago. :)
> > You can reference here:
> >
> 
> >>
> >>
> https://mail-archives.apache.org/mod_mbox/kafka-dev/201806.mbox/%3c6c32083e-b63c-435b-521d-032d45cc5...@confluent.io%3e
> >
> > About omitting the categorization --
> > It looks reasonable. I'm fine with omitting the categorization
> but
> >> not
>  very
> > sure it is a good choice.
> > Could other folks provide an opinion?
> >
> >
> > Hi, folks,
> >
> > Just update the KIP-216, please take a look.
> >
> > ---
> > Vito
> >
> >
> > On Thu, Jan 16, 2020 at 6:35 AM Vito Jeng 
> >> wrote:
> >
> >>
> >> Hi, folks,
> >>
> >> Thank you for the suggestion, really appreciate it. :)
> >> I understand your concern. I'll merge StreamsNotRunningException
> >> and
> >> StateStoreNotAvailableException.
> >>
> >>
> >> ---
> >> Vito
> >>
> >>
> >> On Thu, Jan 16, 2020 at 6:22 AM John Roesler <
> vvcep...@apache.org
> >>>
>  wrote:
> >>
> >>> Hey Vito,
> >>>
> >>> Yes, thanks for the KIP. Sorry the discussion has been so long.
> >>> Hopefully, we can close it out soon.
> >>>
> >>> I agree we can drop StreamsNotRunningException in favor of
> >>> just StateStoreNotAvailableException.
> 

Re: [VOTE] KIP-558: Add Connect REST API endpoints to view the topics used by connectors in Kafka Connect

2020-01-22 Thread Konstantine Karantasis
I agree Matthias. I also see the metadata stored in the value of the topic
status record being extended in the near future to include additional
useful information.

I'm happy to announce that we met our first deadline and that this KIP has
been approved with:

3 binding +1 votes from Randall Hauch, Bill Bejeck and Matthias Sax, as
well as
2 non-binding +1 votes from Tom Bentley and Almog Gavra.

KIP-558 is now adopted in its latest version.
Thank you all for the helpful discussion!

- Konstantine

On Wed, Jan 22, 2020 at 5:29 PM Matthias J. Sax 
wrote:

> Thanks for the KIP.
>
> I am not sure how useful the timestamp and taskId information will be in
> practice, but I don't have any concern with regard to
> overhead/performance. Hence, as you think it might be useful, I trust
> your judgement.
>
> For the timestamp though, I would like to emphasize that I personally
> think, that the idea (that was rejected) to track the "latest timestamp"
> when a topic was used might be quite useful information. Of course, as
> already mentioned, there could be a follow up to add this feature.
>
>
> +1 (binding)
>
>
> -Matthias
>
> On 1/21/20 12:32 PM, Bill Bejeck wrote:
> > Thanks for the KIP Konstantine.  This will be very useful for Connect.
> >
> > +1(binding)
> >
> > -Bill
> >
> > On Tue, Jan 21, 2020 at 2:12 PM Almog Gavra  wrote:
> >
> >> Another thanks from me! +1 (non-binding)
> >>
> >> On Tue, Jan 21, 2020 at 11:04 AM Randall Hauch 
> wrote:
> >>
> >>> Thanks again for the KIP and this improvement for Connect.
> >>>
> >>> +1 (binding)
> >>>
> >>> Randall
> >>>
> >>> On Tue, Jan 21, 2020 at 10:45 AM Tom Bentley 
> >> wrote:
> >>>
>  +1 (non-binding). Thanks for the KIP Konstantine.
> 
>  On Sat, Jan 18, 2020 at 2:18 AM Konstantine Karantasis <
>  konstant...@confluent.io> wrote:
> 
> > Hi all,
> >
> > I'd like to open the vote on KIP-558 that had a constructive flurry
> >> of
> > discussions in the past few days, in order to give this KIP the
>  opportunity
> > to be voted on by the current KIP deadline (Wed, Jan 22, 2020), if -
> >> of
> > course - there's agreement upon its final form.
> >
> > KIP link here:
> >
> >
> 
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-558%3A+Track+the+set+of+actively+used+topics+by+connectors+in+Kafka+Connect
> >
> > Best regards,
> > Konstantine
> >
> 
> >>>
> >>
> >
>
>


Re: [VOTE] KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Guozhang Wang
+1 (binding) from me as well.

On Wed, Jan 22, 2020 at 5:59 PM Matthias J. Sax 
wrote:

> I have a few minor comments (compare the DISCUSS thread), but overall
> the KIP looks good.
>
> +1 (binding)
>
>
> -Matthias
>
> On 1/22/20 10:09 AM, John Roesler wrote:
> > Thanks for updating the KIP, Navinder.
> >
> > I'm +1 (binding) on the current proposal
> >
> > Thanks,
> > -John
> >
> > On Tue, Jan 21, 2020, at 12:50, Navinder Brar wrote:
> >> Thanks, Guozhang. I agree it makes total sense. I will make the
> >> edits.~Navinder
> >>
> >> On Tuesday, 21 January, 2020, 11:00:32 pm IST, Guozhang Wang
> >>  wrote:
> >>
> >>  Hello Navinder,
> >>
> >> Thanks for bringing up this proposal. I made a quick pass on that and
> >> overall I think I agree with your ideas. Just a few thoughts about the
> >> public APIs:
> >>
> >> 1) As we are adding a new overload to `KafkaStreams#store`, could we
> just
> >> add the storeName and queryableStoreType as part of StoreQueryParam, and
> >> leave that as the only parameter of the function?
> >>
> >> 2) Along with 1), for the static constructors, instead of iterating over
> >> all possible combos I'd suggest we make constructors with only, say,
> >> storeName, and then add `withXXX()` setters to set the other fields.
> This is
> >> in case we want to add more param fields into the object, so that we do not
> >> need to exponentially add and deprecate the static constructors.
> >>
> >>
> >> Guozhang
> >>
> >>
> >> On Mon, Jan 20, 2020 at 10:42 AM Navinder Brar
> >>  wrote:
> >>
> >>> Hello all,
> >>>
> >>> I'd like to propose a vote to serve keys from a specific
> partition-store
> >>> instead of iterating over all the local stores of an instance to
> locate the
> >>> key, as happens currently.
> >>> The full KIP is provided here:
> >>>
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-562%3A+Allow+fetching+a+key+from+a+single+partition+rather+than+iterating+over+all+the+stores+on+an+instance
> >>>
> >>>
> >>> Thanks,
> >>> Navinder
> >>>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
>
>

-- 
-- Guozhang


Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Navinder Brar
+1 on changing to storeName() and includeStaleStores(). We can add this to the 
grammar wiki as Matthias suggested.

I have edited the KIP to remove "Deprecating" in favor of "Changing" and I 
agree we can deprecate store(final String storeName, final 
QueryableStoreType queryableStoreType). 

Thanks
Navinder
On Thursday, 23 January, 2020, 07:28:38 am IST, Matthias J. Sax 
 wrote:  
 
 Thanks for the clarifications about the getters. I agree that it makes
sense to move to the new pattern incrementally. Might be useful to
create a Jira (or multiple?) to track this. It's a straightforward change.

A nit about the KIP: it should only list the signatures but not the full
code of the implementation (i.e., only the package name and the class + method
names; we can omit toString(), equals(), and hashCode(), too -- also, no
license header please ;))


nit: `isIncludeStaleStores` -> `includeStaleStores` (the "is"-prefix
reads clumsily, and it's common in the Kafka code base to omit the "get"-prefix
for getters -- we should adopt this)

@John: might be worth to include this in the Grammar wiki page?

nit (similar as above):

 - `getStoreName` -> `storeName`
 - `getQueryableStoreType` -> `queryableStoreType`


The KIP says

> Deprecating the KafkaStreams#store(final String storeName, final 
> QueryableStoreType queryableStoreType, final boolean includeStaleStores) 
> in favour of the function mentioned below.

We don't need to deprecate this method but we can remove it directly,
because it was never released.


What is the plan for

> store(final String storeName, final QueryableStoreType queryableStoreType) 
> {

Given that the new `StoreQueryParams` allows to specify `storeName` and
`queryableStoreType`, should we deprecate this method in favor of the
new `store(StoreQueryParams)` overload?


-Matthias



On 1/22/20 10:06 AM, John Roesler wrote:
> Thanks Navinder! I've also updated the motivation.
> 
> Thanks,
> -John
> 
> On Wed, Jan 22, 2020, at 11:12, Navinder Brar wrote:
>> I went through the grammar wiki page, and since it is already agreed in 
>> principle, I will change from the constructor to the method below and add the 
>> getters back.
>> public static <T> StoreQueryParams<T> fromNameAndType(
>>   final String storeName,
>>   final QueryableStoreType<T> queryableStoreType
>> )
>>
>>
>> Thanks,
>> Navinder
>>
>>    On Wednesday, 22 January, 2020, 09:32:07 pm IST, John Roesler 
>>  wrote:  
>>  
>>  22) I'm specifically proposing to establish a new convention.
>> The existing convention is fundamentally broken and has
>> been costly both for users and maintainers. That is the purpose
>> of the grammar I proposed. The plan is to implement  new APIs
>> following the grammar and gradually to port old APIs to it.
>>
>> The grammar wiki page has plenty of justification, so I won't 
>> recapitulate it here.
>>
>> Thanks,
>> -John
>>
>> On Wed, Jan 22, 2020, at 09:39, Navinder Brar wrote:
>>> 10) Sure John, please go ahead.
>>>
>>> 21) I have no strong opinion on constructor vs static factory. If 
>>> everyone's okay with it, I can make the change.
>>>
>>> 22) I looked at classes suggested by Matthias and I see there are no 
>>> getters there. Are we ok with breaking the convention?
>>>
>>> Thanks, Navinder Pal Singh Brar
>>>
>>>   
>>>
>>>     On Wednesday, 22 January, 2020, 08:40:27 pm IST, John Roesler 
>>>  wrote:  
>>>   
>>>   Hi all,
>>>
>>> 10) For the motivation, I have some thoughts for why this KIP is
>>> absolutely essential as designed. If it's ok with you, Navinder,
>>> I'd just edit the motivation section of the wiki? If you're unhappy
>>> with my wording, you're of course welcome to revert or revise it; 
>>> it just seems more efficient than discussing it over email.
>>>
>>> 20) The getters were my fault :) 
>>> I proposed to design this KIP following the grammar proposal:
>>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+DSL+Grammar
>>> At the risk of delaying the vote on this KIP, I'd humbly suggest we 
>>> keep the getters,
>>> for all the reasons laid out on that grammar.
>>>
>>> I realize this introduces an inconsistency, but my hope is that we 
>>> would close that
>>> gap soon. I can even create tickets for migrating each API, if that 
>>> helps make 
>>> this idea more palatable. IMO, this proposed API is likely to be a bit 
>>> "out of
>>> the way", in that it's not likely to be heavily used by a broad 
>>> audience in 2.5, 
>>> so the API inconsistency wouldn't be too apparent. Plus, it will save 
>>> us from 
>>> implementing a config object in the current style, along with an 
>>> "internal" 
>>> counterpart, which we would immediately have plans to deprecate.
>>>
>>> Just to clarify (I know this has been a bit thrashy):
>>> 21. there should be no public constructor, instead (since there are 
>>> required arguments),
>>> there should be just one factory method:
>>> public static <T> StoreQueryParams<T> fromNameAndType(
>>>   final String storeName,
>>>   final QueryableStoreType<T> queryableStoreType
>>> )
>
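Putting the points agreed in this thread together, a minimal standalone sketch of the StoreQueryParams shape could look as follows. This is a sketch under assumptions: QueryableStoreType here is a local stand-in for Kafka Streams' org.apache.kafka.streams.state.QueryableStoreType, and the optional-field setters withPartition()/enableStaleStores() are illustrative names; the authoritative signatures are in KIP-562.

```java
// Stand-in for org.apache.kafka.streams.state.QueryableStoreType.
interface QueryableStoreType<T> { }

class StoreQueryParams<T> {
    private final String storeName;
    private final QueryableStoreType<T> queryableStoreType;
    private Integer partition;            // null means "all local partitions"
    private boolean includeStaleStores;   // false by default

    private StoreQueryParams(final String storeName,
                             final QueryableStoreType<T> queryableStoreType) {
        this.storeName = storeName;
        this.queryableStoreType = queryableStoreType;
    }

    // A single factory with only the required fields...
    public static <T> StoreQueryParams<T> fromNameAndType(
            final String storeName,
            final QueryableStoreType<T> queryableStoreType) {
        return new StoreQueryParams<>(storeName, queryableStoreType);
    }

    // ...plus withXXX()-style setters for optional fields, so adding a new
    // field later does not multiply the static constructors.
    public StoreQueryParams<T> withPartition(final int partition) {
        this.partition = partition;
        return this;
    }

    public StoreQueryParams<T> enableStaleStores() {
        this.includeStaleStores = true;
        return this;
    }

    // Getters without the "get" prefix, per the naming nit above.
    public String storeName() { return storeName; }
    public QueryableStoreType<T> queryableStoreType() { return queryableStoreType; }
    public Integer partition() { return partition; }
    public boolean includeStaleStores() { return includeStaleStores; }
}
```

KafkaStreams#store would then need only one overload taking this object, e.g. something like streams.store(StoreQueryParams.fromNameAndType("counts", keyValueStore()).withPartition(3)).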

[jira] [Created] (KAFKA-9468) config.storage.topic partition count issue is hard to debug

2020-01-22 Thread Evelyn Bayes (Jira)
Evelyn Bayes created KAFKA-9468:
---

 Summary: config.storage.topic partition count issue is hard to 
debug
 Key: KAFKA-9468
 URL: https://issues.apache.org/jira/browse/KAFKA-9468
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.3.1, 2.4.0, 2.2.2, 2.1.1, 2.0.1, 1.1.1, 1.0.2
Reporter: Evelyn Bayes


When you run Connect in distributed mode with 2 or more workers and 
config.storage.topic has more than 1 partition, you can end up with one of the 
workers rebalancing endlessly:

[2020-01-13 12:53:23,535] INFO [Worker clientId=connect-1, 
groupId=connect-cluster] Current config state offset 37 is behind group 
assignment 63, reading to end of config log 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2020-01-13 12:53:23,584] INFO [Worker clientId=connect-1, 
groupId=connect-cluster] Finished reading to end of log and updated config 
snapshot, new config log offset: 37 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2020-01-13 12:53:23,584] INFO [Worker clientId=connect-1, 
groupId=connect-cluster] Current config state offset 37 does not match group 
assignment 63. Forcing rebalance. 
(org.apache.kafka.connect.runtime.distributed.DistributedHerder)

 

*Suggested Solution*

Make the Connect worker check the partition count when it starts; if the 
partition count is > 1, Kafka Connect stops and logs the reason why.

I think this is reasonable, as it would stop users just starting out from 
building it incorrectly and would be easy to fix early. For those upgrading, 
this would easily be caught in a pre-production environment. And even if they 
upgraded directly in production, they would only be impacted if they upgraded 
all Connect workers at the same time.
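The suggested check could be sketched roughly as below. This is hypothetical, not actual Connect code: the class and method names are made up for illustration, and in a real worker the partition count would be obtained from the admin client (or partitionsFor) at startup.

```java
// Hypothetical fail-fast validation: refuse to start instead of
// rebalancing endlessly on a misconfigured config.storage.topic.
class ConfigTopicValidator {
    static void validatePartitionCount(final String topic, final int partitionCount) {
        if (partitionCount != 1) {
            throw new IllegalStateException(
                "Topic '" + topic + "' supplied via config.storage.topic has "
                + partitionCount + " partitions, but config topics must have "
                + "exactly one partition so all workers read a single, totally "
                + "ordered config log.");
        }
    }
}
```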



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9467) Multiple wallclock punctuators may be scheduled after a rebalance

2020-01-22 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9467:
--

 Summary: Multiple wallclock punctuators may be scheduled after a 
rebalance
 Key: KAFKA-9467
 URL: https://issues.apache.org/jira/browse/KAFKA-9467
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman


In the eager rebalancing protocol*, Streams will suspend all tasks at the 
beginning of a rebalance and then resume those which have been reassigned to 
the same StreamThread. Part of suspending and resuming a task involves closing 
and reinitializing the topology, specifically calling Processor#close followed 
by Processor#init. If a wallclock punctuator is scheduled as part of init, it 
will be rescheduled again after every rebalance. Streams does not cancel 
existing punctuators during suspension, and does not tell users they must 
cancel punctuations themselves during Processor#close.

This can cause multiple punctuators to build up over time, which has the 
apparent effect of increasing the net punctuation rate for wallclock 
punctuators. (The same technically occurs with event-time punctuators, but the 
punctuation times are anchored relative to a fixed point and only one will be 
triggered at a time, so there is no increased punctuation rate).

There are several options at this point:

A) Clear/cancel any existing punctuators during task suspension

B) Push it to the user to cancel their punctuators in Processor#close, and 
update the documentation and examples to clarify this.

C) Leave existing punctuators alone during suspension, and instead block new 
ones from being scheduled on top during re-initialization.

One drawback of options A and B is that cancelling/rescheduling punctuators can 
mean a punctuation is never triggered if rebalances are more frequent than the 
punctuation interval. Even if they are still triggered, the effective 
punctuation interval will actually increase as each rebalance delays the 
punctuation.

Of course, if the task _does_ get migrated to another thread/instance the 
punctuation would be reset anyways with option C, since we do not currently 
store/persist the punctuation information anywhere. The wallclock semantics are 
somewhat loosely defined, but I think most users would not consider any of 
these a proper fix on their own as it just pushes the issue in the other 
direction.

Of course, if we were to anchor the wallclock punctuations to a fixed time then 
this would not be a problem. At that point it seems reasonable to just leave it 
up to the user to cancel the punctuation during Processor#close, similar to any 
other kind of resource that must be cleaned up. Even if users forgot to do so, 
it wouldn't affect the actual behavior; it would just cause unused punctuators to build 
up. See https://issues.apache.org/jira/browse/KAFKA-7699.
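What option B asks of users could be sketched as follows. This uses stand-ins rather than actual Streams code: Cancellable mirrors the handle (org.apache.kafka.streams.processor.Cancellable) that ProcessorContext#schedule returns, and the processor class is illustrative. The point is simply that the processor keeps the handle from init and cancels it in close(), so a suspend/resume cycle cannot stack punctuators.

```java
// Stand-in for org.apache.kafka.streams.processor.Cancellable.
interface Cancellable {
    void cancel();
}

class PunctuatingProcessor {
    private Cancellable punctuator;

    // In a real Processor#init the handle would come from something like
    // context.schedule(interval, PunctuationType.WALL_CLOCK_TIME, ...).
    void init(final Cancellable scheduledPunctuator) {
        this.punctuator = scheduledPunctuator;
    }

    void close() {
        // Cancel on close, like any other resource cleanup.
        if (punctuator != null) {
            punctuator.cancel();
            punctuator = null;
        }
    }
}
```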

Given this, I think the options for a complete solution are:

1) Implement KAFKA-7699 and then do A or B

2) Persist the current punctuation schedule while migrating a task (presumably 
in the Subscription userdata) and then do C

Choosing the best course of action here is probably blocked on a decision on 
whether or not we want to anchor wallclock punctuations (KAFKA-7699). If we 
can't get consensus on that, we could always

3) Introduce a third type of punctuation, then do both 1 and 2 (for the new 
"anchored-wall-clock" type and the existing "wall-clock" type, respectively).

 

-*Another naive workaround for this issue is to turn on/upgrade to cooperative 
rebalancing, which will not suspend and resume all active tasks during a 
rebalance, and only suspend tasks that will be immediately closed and migrated 
to another instance or StreamThread. Of course, this will still cause the 
punctuation to be reset for tasks that _are_ actually closed/migrated, so 
practically speaking it's identical to option C alone





Re: [VOTE] KIP-526: Reduce Producer Metadata Lookups for Large Number of Topics

2020-01-22 Thread Jason Gustafson
Thanks for the proposal. Looks good overall. I wanted to suggest a possible
name change. I was considering something like `idle.metadata.expiration.ms`
or maybe `metadata.max.idle.ms`. Thoughts?

-Jason


On Tue, Jan 21, 2020 at 11:38 AM Guozhang Wang  wrote:

> Got it.
>
> I was proposing that we do the "delayed async batch" but I think your
> argument for complexity and pushing it out of scope is convincing, so
> instead I propose we do the synchronous mini batching still but obviously
> it is already there :)  I'm +1 on the current proposal scope.
>
> Guozhang
>
> On Tue, Jan 21, 2020 at 10:16 AM Brian Byrne  wrote:
>
> > Hi Guozhang,
> >
> > Ah, sorry, I misunderstood. Actually, this is solved for us today. How
> the
> > producer works is that it maintains at most one inflight metadata fetch
> > request at any time, where each request is tagged with the current
> > (monotonically increasing) request version. This version is bumped
> whenever
> > a new topic is encountered, and metadata fetching will continue to
> process
> > while the latest metadata response's version is below the current
> version.
> >
> > So if a metadata request is in flight, and a number of threads produce to
> > new topics, they'll be added to the working set but the next metadata
> > request won't take place until the outstanding one returns. So their
> > updates will be batched together. As you suggest, we can have a simple
> list
> > that tracks unknown topics to isolate new vs. old topics.
> >
> > Thanks,
> > Brian
> >
> >
> >
> > On Tue, Jan 21, 2020 at 10:04 AM Guozhang Wang 
> wrote:
> >
> > > Hi Brian,
> > >
> > > I think I buy the complexity and extra end-to-end-latency argument :)
> I'm
> > > fine with delaying the asynchronous metadata fetching to future work and
> > keeping
> > > the current KIP's scope as-is for now. In that case, can we consider
> > just
> > > a minor implementation detail (since it is not affecting public APIs we
> > > probably do not even need to list it, but just thinking out loud here):
> > >
> > > In your proposal, when we request a topic with unknown metadata, we
> are
> > > going to directly set the topic name as that singleton in the request.
> > I'm
> > > wondering, for the scenario that KAFKA-8904 described, whether the
> > producer#send
> > > for thousands of new topics is triggered sequentially by a single
> thread
> > > or concurrent threads? If it's the latter, and we expect in such
> > scenarios
> > > we may have multiple topics being requested within a very short time,
> then
> > > we can probably do something like this internally in a synchronized manner:
> > >
> > > 1) put the topic name into a list, as "unknown topics", then
> > > 2) exhaust the list, and put all topics from that list into the request;
> > > if the list is empty, it means it has been emptied by another thread, so
> > > we skip sending a new request and just wait for the returned metadata
> > > refresh.
> > >
> > > In most cases the list would just be a singleton with the one that the
> > > thread has just enqueued, but under extreme scenarios it can help batch
> > > a few topic names (of course, I'm thinking about very extreme cases
> > > here, assuming that's what we've seen in 8904). Since these two steps
> > > are very lightweight, doing that in a synchronized block would not hurt
> > > the concurrency too much.
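
The two numbered steps above amount to a small synchronized drain pattern. The sketch below is illustrative only (toy class, not the actual producer code): producing threads enqueue the topics they are missing metadata for, and whichever thread drains a non-empty batch sends the request; a thread that drains an empty list skips sending and just waits for the in-flight refresh.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative drain pattern for batching unknown topic names.
public class UnknownTopicBatcher {
    private final List<String> unknownTopics = new ArrayList<>();

    // Step 1: a producing thread records the topic it lacks metadata for.
    public synchronized void enqueue(String topic) {
        unknownTopics.add(topic);
    }

    // Step 2: exhaust the list into the next request. An empty result means
    // another thread already drained it, so the caller skips sending a new
    // request and simply waits for the returned metadata refresh.
    public synchronized List<String> drain() {
        List<String> batch = new ArrayList<>(unknownTopics);
        unknownTopics.clear();
        return batch;
    }

    public static void main(String[] args) {
        UnknownTopicBatcher b = new UnknownTopicBatcher();
        b.enqueue("t1");
        b.enqueue("t2");
        List<String> batch = b.drain();
        if (batch.size() != 2) throw new AssertionError();
        if (!b.drain().isEmpty()) throw new AssertionError(); // already drained
        System.out.println("batched " + batch);
    }
}
```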
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Tue, Jan 21, 2020 at 9:39 AM Brian Byrne 
> wrote:
> > >
> > > > Hi Guozhang,
> > > >
> > > > Your understanding of the rationale is accurate, and what you suggest
> > > > is completely plausible; however, I have a slightly different take on
> > > > the situation.
> > > >
> > > > When the KIP was originally drafted, making KafkaProducer#send
> > > > asynchronous was one element of the changes (this is a little more
> > > > general than (a), but has similar implications). As you're aware,
> > > > doing so would allow new topics to aggregate since the producer could
> > > > continue to push new records, whereas today the producer thread is
> > > > blocked waiting for resolution.
> > > >
> > > > However, there were concerns about changing client behavior
> > > > unexpectedly in this manner, and the change isn't as trivial as one
> > > > would hope. For example, we'd have to introduce an intermediate queue
> > > > of records for topics without metadata, and have that play well with
> > > > the buffer pool which ensures the memory limit isn't exceeded. A side
> > > > effect is that a producer could hit 'memory full' conditions more
> > > > easily, which could have unintended consequences if, say, the model
> > > > was set up such that different producer threads produced to a
> > > > disjoint set of topics. Where one producer thread was blocked waiting
> > > > for new metadata, it could now push enough data to block all producer
> > > > threads due to memory limits, so we'd need to be careful here.
> > > >

Jenkins build is back to normal : kafka-trunk-jdk11 #1103

2020-01-22 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Matthias J. Sax
I have a few minor comments (compare the DISCUSS thread), but overall
the KIP looks good.

+1 (binding)


-Matthias

On 1/22/20 10:09 AM, John Roesler wrote:
> Thanks for updating the KIP, Navinder.
> 
> I'm +1 (binding) on the current proposal
> 
> Thanks,
> -John
> 
> On Tue, Jan 21, 2020, at 12:50, Navinder Brar wrote:
>> Thanks, Guozhang. I agree it makes total sense. I will make the 
>> edits. ~Navinder
>>
>> On Tuesday, 21 January, 2020, 11:00:32 pm IST, Guozhang Wang 
>>  wrote:  
>>  
>>  Hello Navinder,
>>
>> Thanks for bringing up this proposal. I made a quick pass on that and
>> overall I think I agree with your ideas. Just a few thoughts about the
>> public APIs:
>>
>> 1) As we are adding a new overload to `KafkaStreams#store`, could we just
>> add the storeName and queryableStoreType as part of StoreQueryParam, and
>> leaving that the only parameter of the function?
>>
>> 2) along with 1), for the static constructors, instead of iterating over
>> all possible combos I'd suggest we make constructors with only, say,
>> storeName, and then adding `withXXX()` setters to set other fields. This is
>> so that if we want to add more param fields to the object, we do not
>> need to exponentially add and deprecate the static constructors.
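
For illustration, Guozhang's suggestion (a factory taking only the required field, plus withXXX() setters) could look like the sketch below. This is an assumption-laden toy, not the real Streams class: the optional fields (partition, includeStaleStores) are taken from later discussion in the thread, and the factory is simplified to the store name only.

```java
// Hedged sketch of the suggested pattern: one factory with only the required
// field, plus withXXX() setters, so adding future params doesn't multiply
// static constructors. Not the actual KafkaStreams API.
public class StoreQueryParams<T> {
    private final String storeName;
    private Integer partition;          // null means "all partitions"
    private boolean includeStaleStores;

    private StoreQueryParams(String storeName) {
        this.storeName = storeName;
    }

    public static <T> StoreQueryParams<T> fromName(String storeName) {
        return new StoreQueryParams<>(storeName);
    }

    public StoreQueryParams<T> withPartition(int partition) {
        this.partition = partition;
        return this;
    }

    public StoreQueryParams<T> withIncludeStaleStores() {
        this.includeStaleStores = true;
        return this;
    }

    public String storeName() { return storeName; }
    public Integer partition() { return partition; }
    public boolean includeStaleStores() { return includeStaleStores; }

    public static void main(String[] args) {
        StoreQueryParams<Object> p = StoreQueryParams.<Object>fromName("counts")
            .withPartition(3)
            .withIncludeStaleStores();
        if (!"counts".equals(p.storeName())) throw new AssertionError();
        if (p.partition() != 3) throw new AssertionError();
        if (!p.includeStaleStores()) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The point of the design is visible in main(): a new optional field later only adds one withXXX() method instead of doubling the set of constructors.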
>>
>>
>> Guozhang
>>
>>
>> On Mon, Jan 20, 2020 at 10:42 AM Navinder Brar
>>  wrote:
>>
>>> Hello all,
>>>
>>> I'd like to propose a vote to serve keys from a specific partition-store
>>> instead of iterating over all the local stores of an instance to locate the
>>> key, as happens currently.
>>> The full KIP is provided here:
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-562%3A+Allow+fetching+a+key+from+a+single+partition+rather+than+iterating+over+all+the+stores+on+an+instance
>>>
>>>
>>> Thanks,
>>> Navinder
>>>
>>
>>
>> -- 
>> -- Guozhang
>>



signature.asc
Description: OpenPGP digital signature


Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Matthias J. Sax
Thanks for the clarifications about the getters. I agree that it makes
sense to move to the new pattern incrementally. Might be useful to
create a Jira (or multiple?) to track this. It's a straightforward change.

A nit about the KIP: it should only list the signature but not the full
code of the implementation (i.e., only package name and the class + method
names; we can omit toString(), equals(), and hashCode(), too -- also, no
license header please ;))


nit: `isIncludeStaleStores` -> `includeStaleStores` (the "is"-prefix
reads clumsily, and it's common in the Kafka code base to omit the
"get"-prefix for getters -- we should adopt this)

@John: might be worth to include this in the Grammar wiki page?

nit (similar as above):

 - `getStoreName` -> `storeName`
 - `getQueryableStoreType` -> `queryableStoreType`


The KIP says

> Deprecating the KafkaStreams#store(final String storeName, final 
> QueryableStoreType<T> queryableStoreType, final boolean includeStaleStores) 
> in favour of the function mentioned below.

We don't need to deprecate this method but we can remove it directly,
because it was never released.


What is the plan for

> store(final String storeName, final QueryableStoreType<T> queryableStoreType) 
> {

Given that the new `StoreQueryParams` allows specifying `storeName` and
`queryableStoreType`, should we deprecate this method in favor of the
new `store(StoreQueryParams)` overload?
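
If the two-argument overload were deprecated as suggested, it could become a thin delegate to the new single-parameter overload. The sketch below uses local stand-in types rather than the real Streams classes, so all names and signatures are illustrative only.

```java
// Toy sketch of the deprecation path discussed above. QueryableStoreType,
// StoreQueryParams, and store() are stand-ins, not the real Kafka Streams API.
public class StoreApiSketch {
    interface QueryableStoreType<T> {}

    static class StoreQueryParams<T> {
        final String storeName;
        final QueryableStoreType<T> type;
        StoreQueryParams(String n, QueryableStoreType<T> t) { storeName = n; type = t; }
        static <T> StoreQueryParams<T> fromNameAndType(String n, QueryableStoreType<T> t) {
            return new StoreQueryParams<>(n, t);
        }
    }

    // The new single-parameter overload everything funnels into.
    <T> String store(StoreQueryParams<T> params) {
        return "resolved store: " + params.storeName;
    }

    // The old overload, kept only as a thin deprecated delegate.
    @Deprecated
    <T> String store(String storeName, QueryableStoreType<T> type) {
        return store(StoreQueryParams.fromNameAndType(storeName, type));
    }

    public static void main(String[] args) {
        StoreApiSketch ks = new StoreApiSketch();
        String r = ks.store("counts", new QueryableStoreType<Object>() {});
        if (!r.equals("resolved store: counts")) throw new AssertionError();
        System.out.println(r);
    }
}
```

Delegating like this keeps exactly one code path for store resolution while the deprecated signature lives out its grace period.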


-Matthias



On 1/22/20 10:06 AM, John Roesler wrote:
> Thanks Navinder! I've also updated the motivation.
> 
> Thanks,
> -John
> 
> On Wed, Jan 22, 2020, at 11:12, Navinder Brar wrote:
>> I went through the grammar wiki page, and since it is already agreed in 
>> principle, I will change from the constructor to the method below and add 
>> the getters back.
>> public static <T> StoreQueryParams<T> fromNameAndType(
>>   final String storeName,
>>   final QueryableStoreType<T> queryableStoreType
>> )
>>
>>
>> Thanks,
>> Navinder
>>
>> On Wednesday, 22 January, 2020, 09:32:07 pm IST, John Roesler 
>>  wrote:  
>>  
>>  22) I'm specifically proposing to establish a new convention.
>> The existing convention is fundamentally broken and has
>> been costly both for users and maintainers. That is the purpose
>> of the grammar I proposed. The plan is to implement  new APIs
>> following the grammar and gradually to port old APIs to it.
>>
>> The grammar wiki page has plenty of justification, so I won't 
>> recapitulate it here.
>>
>> Thanks,
>> -John
>>
>> On Wed, Jan 22, 2020, at 09:39, Navinder Brar wrote:
>>> 10) Sure John, please go ahead.
>>>
>>> 21) I have no strong opinion on constructor vs static factory. If 
>>> everyone's okay with it, I can make the change.
>>>
>>> 22) I looked at classes suggested by Matthias and I see there are no 
>>> getters there. We are ok with breaking the convention?
>>>
>>> Thanks,Navinder Pal Singh Brar
>>>
>>>   
>>>
>>>     On Wednesday, 22 January, 2020, 08:40:27 pm IST, John Roesler 
>>>  wrote:  
>>>   
>>>   Hi all,
>>>
>>> 10) For the motivation, I have some thoughts for why this KIP is
>>> absolutely essential as designed. If it's ok with you, Navinder,
>>> I'd just edit the motivation section of the wiki? If you're unhappy
>>> with my wording, you're of course welcome to revert or revise it; 
>>> it just seems more efficient than discussing it over email.
>>>
>>> 20) The getters were my fault :) 
>>> I proposed to design this KIP following the grammar proposal:
>>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+DSL+Grammar
>>> At the risk of delaying the vote on this KIP, I'd humbly suggest we 
>>> keep the getters,
>>> for all the reasons laid out on that grammar.
>>>
>>> I realize this introduces an inconsistency, but my hope is that we 
>>> would close that
>>> gap soon. I can even create tickets for migrating each API, if that 
>>> helps make 
>>> this idea more palatable. IMO, this proposed API is likely to be a bit 
>>> "out of
>>> the way", in that it's not likely to be heavily used by a broad 
>>> audience in 2.5, 
>>> so the API inconsistency wouldn't be too apparent. Plus, it will save 
>>> us from 
>>> implementing a config object in the current style, along with an 
>>> "internal" 
>>> counterpart, which we would immediately have plans to deprecate.
>>>
>>> Just to clarify (I know this has been a bit thrashy):
>>> 21. there should be no public constructor, instead (since there are 
>>> required arguments),
>>> there should be just one factory method:
>>> public static <T> StoreQueryParams<T> fromNameAndType(
>>>   final String storeName, 
>>>   final QueryableStoreType<T> queryableStoreType
>>> )
>>>
>>> 22. there should be getters corresponding to each argument (required 
>>> and optional):
>>> Integer getPartition()
>>> boolean getIncludeStaleStores()
>>>
>>> Instead of adding the extra getAllPartitions() pseudo-getter, let's 
>>> follow Ted's advice and
>>> just document that getPartition() would return `null`, and that it 
>>> means that a
>>> specific partition hasn't been 

Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2020-01-22 Thread Matthias J. Sax
Thanks Vito.

I am also ok with either name. It's just a slight personal preference, not
an important one.


-Matthias

On 1/21/20 6:52 PM, Vito Jeng wrote:
> Thanks Matthias.
> 
> The KIP is about InvalidStateStoreException.
> I picked `StateStoreNotAvailableException` because it may be more intuitive
> than `StreamsNotRunningException`.
> 
> No matter which one is picked, it's fine with me.
> 
> ---
> Vito
> 
> 
> On Wed, Jan 22, 2020 at 7:44 AM Matthias J. Sax 
> wrote:
> 
>> Thanks for updating the KIP!
>>
>> One last comment/question: you kept `StateStoreNotAvailableException` in
>> favor of `StreamsNotRunningException` (to merge both as suggested).
>>
>> I am wondering, if it might be better to keep
>> `StreamsNotRunningException` instead of
>> `StateStoreNotAvailableException`, because this exception is thrown if
>> Streams is in state PENDING_SHUTDOWN / NOT_RUNNING / ERROR ?
>>
>>
>>
>> -Matthias
>>
>> On 1/17/20 9:56 PM, John Roesler wrote:
>>> Thanks, Vito. I've just cast my vote.
>>> -John
>>>
>>> On Fri, Jan 17, 2020, at 21:32, Vito Jeng wrote:
 Hi, folks,

 Just update the KIP, please take a look.

 Thanks!

 ---
 Vito


 On Fri, Jan 17, 2020 at 9:12 AM Vito Jeng  wrote:

> Thanks Bill, John and Matthias. Glad you guys joined this discussion.
> I got a lot out of the discussion.
>
> I would like to update KIP-216 base on John's suggestion to remove the
> category.
>
>
> ---
> Vito
>
>
> On Fri, Jan 17, 2020 at 2:30 AM Matthias J. Sax >>
> wrote:
>
>>> Nevertheless, if we omit the categorization, it’s moot.
>>
>> Ack.
>>
>> I am fine with removing the middle tier. As John pointed out, it might be
>> weird to have only one concrete exception type per category. We can also
>> explain in detail how to handle each exception in their JavaDocs.
>>
>>
>> -Matthias
>>
>> On 1/16/20 6:38 AM, Bill Bejeck wrote:
>>> Vito,
>>>
>>> Thanks for the updates, the KIP LGTM.
>>>
>>> -Bill
>>>
>>> On Wed, Jan 15, 2020 at 11:31 PM John Roesler 
>> wrote:
>>>
 Hi Vito,

 Haha, your archive game is on point!

 What Matthias said in that email is essentially what I figured was the
 rationale. It makes sense, but the point I was making is that this really
 doesn't seem like a good way to structure a production app. On the other
 hand, considering the exception fatal has a good chance of avoiding a
 frustrating debug session if you just forgot to call start.

 Nevertheless, if we omit the categorization, it’s moot.

 It would be easy to add a categorization layer later if we want it, but
 not very easy to change it if we get it wrong.

 Thanks for your consideration!
 -John

 On Wed, Jan 15, 2020, at 21:14, Vito Jeng wrote:
> Hi John,
>
> About `StreamsNotStartedException is strange` --
> The original idea came from Matthias, two years ago. :)
> You can reference here:
>

>>
>> https://mail-archives.apache.org/mod_mbox/kafka-dev/201806.mbox/%3c6c32083e-b63c-435b-521d-032d45cc5...@confluent.io%3e
>
> About omitting the categorization --
> It looks reasonable. I'm fine with omitting the categorization, but not
> very sure it is a good choice.
> Do any other folks want to provide an opinion?
>
>
> Hi, folks,
>
> Just update the KIP-216, please take a look.
>
> ---
> Vito
>
>
> On Thu, Jan 16, 2020 at 6:35 AM Vito Jeng 
>> wrote:
>
>>
>> Hi, folks,
>>
>> Thank you for the suggestion, really appreciate it. :)
>> I understand your concern. I'll merge StreamsNotRunningException
>> and
>> StateStoreNotAvailableException.
>>
>>
>> ---
>> Vito
>>
>>
>> On Thu, Jan 16, 2020 at 6:22 AM John Roesler >>
 wrote:
>>
>>> Hey Vito,
>>>
>>> Yes, thanks for the KIP. Sorry the discussion has been so long.
>>> Hopefully, we can close it out soon.
>>>
>>> I agree we can drop StreamsNotRunningException in favor of
>>> just StateStoreNotAvailableException.
>>>
>>> Unfortunately, I have some higher-level concerns. The value
>>> of these exceptions is that they tell you how to handle the
>>> various situations that can arise while querying a distributed
>>> data store.
>>>
>>> Ideally, as a caller, I should be able to just catch "retriable" or
>>> "fatal" and handle them appropriately. Otherwise, there's no
>>> point in having categories, and we should just have all t

Re: [VOTE] KIP-216: IQ should throw different exceptions for different errors

2020-01-22 Thread Matthias J. Sax
+1 (binding)

-Matthias

On 1/17/20 9:35 PM, John Roesler wrote:
> Thanks for the KIP!
> 
> I'm +1 (binding)
> 
> Thanks,
> -John
> 
> On Thu, Jan 16, 2020, at 08:46, Bill Bejeck wrote:
>> Thanks for the KIP.
>>
>> +1 (binding)
>>
>> -Bill
>>
>> On Tue, Jan 14, 2020 at 9:41 AM Navinder Brar
>>  wrote:
>>
>>> +1 (non-binding), with a small comment which was mentioned by Vinoth as
>>> well: did we settle on the flag for StreamsRebalancingException? I don't
>>> see it in the KIP.
>>> -Navinder
>>>
>>>
>>> On Tuesday, 14 January, 2020, 08:00:11 pm IST, Vito Jeng <
>>> v...@is-land.com.tw> wrote:
>>>
>>>  Hi, all,
>>>
>>> I would like to start the vote for KIP-216.
>>>
>>> Currently, IQ throws InvalidStateStoreException for any types of error.
>>> With this KIP, user can distinguish different types of error.
>>>
>>> KIP is here:
>>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-216%3A+IQ+should+throw+different+exceptions+for+different+errors
>>>
>>> Thanks
>>>
>>> ---
>>> Vito
>>> --
>>>
>>>
>>> ---
>>> Vito
>>>
>>





Re: [VOTE] KIP-558: Add Connect REST API endpoints to view the topics used by connectors in Kafka Connect

2020-01-22 Thread Matthias J. Sax
Thanks for the KIP.

I am not sure how useful the timestamp and taskId information will be in
practice, but I don't have any concern with regard to
overhead/performance. Hence, as you think it might be useful, I trust
your judgement.

For the timestamp though, I would like to emphasize that I personally
think, that the idea (that was rejected) to track the "latest timestamp"
when a topic was used might be quite useful information. Of course, as
already mentioned, there could be a follow up to add this feature.


+1 (binding)


-Matthias

On 1/21/20 12:32 PM, Bill Bejeck wrote:
> Thanks for the KIP Konstantine.  This will be very useful for Connect.
> 
> +1(binding)
> 
> -Bill
> 
> On Tue, Jan 21, 2020 at 2:12 PM Almog Gavra  wrote:
> 
>> Another thanks from me! +1 (non-binding)
>>
>> On Tue, Jan 21, 2020 at 11:04 AM Randall Hauch  wrote:
>>
>>> Thanks again for the KIP and this improvement for Connect.
>>>
>>> +1 (binding)
>>>
>>> Randall
>>>
>>> On Tue, Jan 21, 2020 at 10:45 AM Tom Bentley 
>> wrote:
>>>
 +1 (non-binding). Thanks for the KIP Konstantine.

 On Sat, Jan 18, 2020 at 2:18 AM Konstantine Karantasis <
 konstant...@confluent.io> wrote:

> Hi all,
>
> I'd like to open the vote on KIP-558, which had a constructive flurry of
> discussions in the past few days, in order to give this KIP the
> opportunity to be voted on by the current KIP deadline (Wed, Jan 22,
> 2020), if - of course - there's agreement upon its final form.
>
> KIP link here:
>
>

>>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-558%3A+Track+the+set+of+actively+used+topics+by+connectors+in+Kafka+Connect
>
> Best regards,
> Konstantine
>

>>>
>>
> 





[jira] [Resolved] (KAFKA-9082) Move ConfigCommand to use KafkaAdminClient APIs

2020-01-22 Thread Brian Byrne (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Byrne resolved KAFKA-9082.

Resolution: Duplicate

The outstanding work to be completed is now identical to KAFKA-7740. Marking as 
duplicate.

> Move ConfigCommand to use KafkaAdminClient APIs
> ---
>
> Key: KAFKA-9082
> URL: https://issues.apache.org/jira/browse/KAFKA-9082
> Project: Kafka
>  Issue Type: Sub-task
>  Components: admin
>Reporter: Brian Byrne
>Assignee: Brian Byrne
>Priority: Critical
> Fix For: 2.5.0
>
>
> The ConfigCommand currently only supports a subset of commands when 
> interacting with the KafkaAdminClient (as opposed to ZooKeeper directly). It 
> needs to be brought up to parity for KIP-500 work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4179

2020-01-22 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9418; Add new sendOffsetsToTransaction API to KafkaProducer


--
[...truncated 2.83 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Ta

[jira] [Created] (KAFKA-9466) Add documentation for new stream EOS change

2020-01-22 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9466:
--

 Summary: Add documentation for new stream EOS change
 Key: KAFKA-9466
 URL: https://issues.apache.org/jira/browse/KAFKA-9466
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen
Assignee: Matthias J. Sax






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-513: Distinguish between Key and Value serdes in scala wrapper library for kafka streams

2020-01-22 Thread Михаил Ерёменко
Hi, John! 

Sorry for the late reply. I am not really familiar with these mailing list 
discussions, so I had not seen your mails.

Regarding your question:
>   I guess what I'm struggling with is why you actually want to have
>   different key and value serdes for the same type

I think a good example is (and it is actually what we do in our project) 
using the Confluent schema registry in conjunction with Kafka Streams. Some 
models can be used as keys as well as values. When we define a schema 
registry compatible serde, we have to specify whether it is for a key or 
not. We can of course create two serdes for the same model, but in this 
case implicit resolution will not work, because Scala doesn't know which 
implicit to pick. And things become even more complicated if you try to 
derive your serdes (we derive serdes in our project).
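
To make the key-vs-value distinction concrete: schema-registry style serdes are configured with an isKey flag (Kafka's Serde interface exposes configure(Map, boolean isKey)), so the same model type ends up needing two distinct, differently configured serde instances. The sketch below uses a toy stand-in class, not the real Kafka or Confluent serdes; the "-key"/"-value" subject suffix follows the schema registry's default subject naming convention.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a schema-registry style serde: the same model
// type needs two differently configured instances, one for keys and one for
// values -- which is exactly what breaks single-implicit resolution in Scala.
public class KeyValueSerdeSketch {
    static class ModelSerde {
        private Boolean isKey; // unset until configure() is called

        void configure(Map<String, ?> configs, boolean isKey) {
            this.isKey = isKey;
        }

        // Default schema registry subject naming: "<topic>-key" / "<topic>-value".
        String subjectFor(String topic) {
            return topic + (isKey ? "-key" : "-value");
        }
    }

    public static void main(String[] args) {
        Map<String, Object> configs = new HashMap<>();
        ModelSerde keySerde = new ModelSerde();
        ModelSerde valueSerde = new ModelSerde();
        keySerde.configure(configs, true);
        valueSerde.configure(configs, false);
        if (!keySerde.subjectFor("orders").equals("orders-key")) throw new AssertionError();
        if (!valueSerde.subjectFor("orders").equals("orders-value")) throw new AssertionError();
        System.out.println("ok");
    }
}
```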

One more thing:
> every method in the streams-scala DSL.

So far we've just changed org.apache.kafka.streams.scala.ImplicitConversions 
and org.apache.kafka.streams.scala.kstream.Materialized and it works for us. 
We also introduced default serdes for primitive types. 

Regards,
Mykhailo

[jira] [Created] (KAFKA-9465) Enclose consumer call with catching InvalidOffsetException

2020-01-22 Thread Ted Yu (Jira)
Ted Yu created KAFKA-9465:
-

 Summary: Enclose consumer call with catching InvalidOffsetException
 Key: KAFKA-9465
 URL: https://issues.apache.org/jira/browse/KAFKA-9465
 Project: Kafka
  Issue Type: Improvement
Reporter: Ted Yu


In maybeUpdateStandbyTasks, the try block encloses restoreConsumer.poll and 
record handling.
Since InvalidOffsetException is thrown by restoreConsumer.poll, we should 
enclose this call in the try block.
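
A minimal illustration of the suggested fix: the poll() call itself can throw, so it must sit inside the same try block as the record handling. The types below are stand-ins for the example, not the real Kafka consumer API.

```java
// Toy illustration of moving the throwing call inside the try block.
public class PollCatchSketch {
    static class InvalidOffsetException extends RuntimeException {}

    interface RestoreConsumer { String poll(); }

    static String maybeUpdateStandbyTasks(RestoreConsumer restoreConsumer) {
        try {
            // poll() may throw InvalidOffsetException, so it belongs in the
            // try block along with the record handling below.
            String records = restoreConsumer.poll();
            return "processed " + records;
        } catch (InvalidOffsetException e) {
            return "recovered";
        }
    }

    public static void main(String[] args) {
        String ok = maybeUpdateStandbyTasks(() -> "r1");
        String failed = maybeUpdateStandbyTasks(() -> { throw new InvalidOffsetException(); });
        if (!ok.equals("processed r1")) throw new AssertionError();
        if (!failed.equals("recovered")) throw new AssertionError();
        System.out.println("ok");
    }
}
```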



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9418) Add new sendOffsets API to include consumer group metadata

2020-01-22 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9418.

Fix Version/s: 2.5.0
   Resolution: Fixed

> Add new sendOffsets API to include consumer group metadata
> --
>
> Key: KAFKA-9418
> URL: https://issues.apache.org/jira/browse/KAFKA-9418
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.5.0
>
>
> Add the consumer group metadata as part of producer sendTransactions API to 
> enable proper fencing under 447



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9464) Close the producer in completeShutdown

2020-01-22 Thread Ted Yu (Jira)
Ted Yu created KAFKA-9464:
-

 Summary: Close the producer in completeShutdown
 Key: KAFKA-9464
 URL: https://issues.apache.org/jira/browse/KAFKA-9464
 Project: Kafka
  Issue Type: Bug
Reporter: Ted Yu


In StreamThread#completeShutdown, the producer (if not null) should be closed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4178

2020-01-22 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7273 Clarification on mutability of headers passed to


--
[...truncated 5.72 MB...]

Re: [DISCUSS] KIP-546: Add quota-specific APIs to the Admin Client, redux

2020-01-22 Thread Brian Byrne
Hi Jason,

I agree on (1). It was Colin's original suggestion, too, but he had changed
his mind in preference for enums. Strings are the more generic way for now,
so hopefully Colin can share his thinking when he's back. The QuotaFilter
usage was an error; this has been corrected.

For (2), the config-centric mode is what we have today in CommandConfig:
reading/altering the configuration as it's described. The
DescribeEffectiveClientQuotas would be resolving the various config entries
to see what actually applies to a particular entity. The examples are a
little trivial, but the resolution can become much more complicated as the
number of config entries grows.

List/describe aren't perfect either. Perhaps describe/resolve are a better
pair, with DescribeEffectiveClientQuotas -> ResolveClientQuotas?

I appreciate the feedback!

Thanks,
Brian
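The distinction Brian draws between describing configured entries and resolving the quota that actually applies can be made concrete with a small sketch. Everything below is illustrative: the class and method names are invented for this example, and the precedence list mirrors Kafka's documented user/client-id quota hierarchy rather than any API proposed in the KIP.

```java
import java.util.*;

// Illustrative only: given a set of configured quota entries, work out which
// one is effective for a (user, client-id) pair and which are shadowed.
// The precedence order follows Kafka's documented quota hierarchy, from most
// to least specific.
public class QuotaResolution {
    // Configured entries keyed by an entity description, e.g. "user=alice".
    private final Map<String, Double> configured = new LinkedHashMap<>();

    public void configure(String entity, double limit) {
        configured.put(entity, limit);
    }

    // Returns matching entries from most to least specific; the first element
    // is the effective quota, the remainder are shadowed.
    public List<String> resolve(String user, String clientId) {
        String[] precedence = {
            "user=" + user + ",client-id=" + clientId,
            "user=" + user + ",client-id=<default>",
            "user=" + user,
            "user=<default>,client-id=" + clientId,
            "user=<default>,client-id=<default>",
            "user=<default>",
            "client-id=" + clientId,
            "client-id=<default>",
        };
        List<String> matches = new ArrayList<>();
        for (String entity : precedence) {
            if (configured.containsKey(entity)) {
                matches.add(entity);
            }
        }
        return matches;
    }
}
```

Even with only two configured entries, the answer to "what applies to alice using app-1?" requires walking the whole hierarchy, which is the resolution DescribeEffectiveClientQuotas would perform.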



On Tue, Jan 21, 2020 at 12:09 PM Jason Gustafson  wrote:

> Hi Brian,
>
> Thanks for the proposal! I have a couple comments/questions:
>
> 1. I'm having a hard time understanding the point of `QuotaEntity.Type`. It
> sounds like this might be just for convenience since the APIs are using
> string types. If so, I think it's a bit misleading to represent it as an
> enum. In particular, I cannot see how the UNKNOWN type would be used. The
> `PrincipalBuilder` plugin allows users to provide their own principal type,
> so I think the API should be usable even for unknown entity types. Note
> also that we appear to be relying on this enum in `QuotaFilter` class. I
> think that should be changed to just a string?
>
> 2. It's a little annoying that we have two separate APIs to describe client
> quotas. The names do not really make it clear which API someone should use.
> It might just be a naming problem. In the command utility, it looks like
> you are using --list and --describe to distinguish the two. Perhaps the
> APIs can be named similarly: e.g. ListClientQuotas and
> DescribeClientQuotas. However, looking at the examples, it's still not very
> clear to me why we need both options. Basically I'm finding the
> "config-centric" mode not very intuitive.
>
> Thanks,
> Jason
>
>
> On Fri, Jan 17, 2020 at 2:14 PM Brian Byrne  wrote:
>
> > Thanks Colin, I've updated the KIP with the relevant changes.
> >
> > On Fri, Jan 17, 2020 at 10:17 AM Colin McCabe 
> wrote:
> >
> > > I thought about this a little bit more, and maybe we can leave in the
> > > enums rather than going with strings.  But we need to have an "UNKNOWN"
> > > value for all the enums, so that if a value that the client doesn't
> > > understand is returned, it can get translated to that.  This is what we
> > > did with the ACLs API, and it worked out well.
> > >
> >
> > Done. One thing I omitted here was that the API still accepts/returns
> > Strings, since there may be plugins that specify their own types/units. If
> > we'd like to keep it this way, then the UNKNOWN may be unnecessary. Let me
> > know how you feel this is best resolved.
> >
> >
> > > On balance, I think we should leave in "units."  It could be useful for
> > > future-proofing.
> > >
> >
> > Done. Also added a comment in the ClientQuotaCommand to default to
> > RATE_BPS if no unit is supplied to ease adoption.
> >
> >
> > > Also, since there are other kinds of quotas not covered by this API, we
> > > should rename DescribeQuotas -> DescribeClientQuotas, AlterQuotas ->
> > > AlterClientQuotas, etc. etc.
> > >
> >
> > Done. Updated command and script name, too.
> >
> >
> > > Maybe QuotaFilter doesn't need a "rule" argument to its constructor right
> > > now.  We can just do literal matching for everything.  Like I said earlier,
> > > I don't think people do a lot of prefixing of principal names.  When we
> > > added the "prefix matching" stuff for ACLs, it was mostly to let people do
> > > it for topics.  Then we made it more generic because it was easy to do so.
> > > In this case, the API is probably easier to understand if we just do a
> > > literal match.  We can always have a follow-on KIP to add fancier filtering
> > > if needed.
> > >
> >
> > Done.
> >
> >
> > > For DescribeEffectiveQuotasResult, if you request all relevant quotas, it
> > > would be nice to see which ones apply and which ones don't.  Right now, you
> > > just get a map, but you don't know which quotas are actually in force, and
> > > which are not relevant but might be in the future if a different quota gets
> > > deleted.  One way to do this would be to have two maps -- one for applicable
> > > quotas and one for shadowed quotas.
> > >
> >
> > So the way it's specified is that it maps QuotaKey -> Value; however, Value
> > is actually defined to have two parts: the entry, and a list of overridden
> > entries (where an entry is the value, along with the source). Perhaps
> > Value is poorly named, or maybe there's a simpler structure to be had?
> >
> > Thanks,
> > Brian
> >
> >
> >
> > > On Tue, Jan 14, 2020, at 13:32, Brian

Build failed in Jenkins: kafka-trunk-jdk11 #1102

2020-01-22 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7273 Clarification on mutability of headers passed to


--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.To

Re: [VOTE] KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread John Roesler
Thanks for updating the KIP, Navinder.

I'm +1 (binding) on the current proposal

Thanks,
-John

On Tue, Jan 21, 2020, at 12:50, Navinder Brar wrote:
> Thanks, Guozhang. I agree it makes total sense. I will make the edits.
> ~Navinder
> 
> On Tuesday, 21 January, 2020, 11:00:32 pm IST, Guozhang Wang 
>  wrote:  
>  
>  Hello Navinder,
> 
> Thanks for bringing up this proposal. I made a quick pass on it and
> overall I think I agree with your ideas. Just a few thoughts about the
> public APIs:
> 
> 1) As we are adding a new overload to `KafkaStreams#store`, could we just
> add the storeName and queryableStoreType as part of StoreQueryParam, and
> leave that as the only parameter of the function?
> 
> 2) Along with 1), for the static constructors, instead of iterating over
> all possible combos I'd suggest we make constructors with only, say,
> storeName, and then add `withXXX()` setters to set the other fields. This
> way, if we want to add more param fields to the object, we do not need to
> exponentially add and deprecate static constructors.
> 
> 
> Guozhang
> 
> 
> On Mon, Jan 20, 2020 at 10:42 AM Navinder Brar
>  wrote:
> 
> > Hello all,
> >
> > I'd like to propose a vote to serve keys from a specific partition-store
> > instead of iterating over all the local stores of an instance to locate the
> > key, as happens currently.
> > The full KIP is provided here:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-562%3A+Allow+fetching+a+key+from+a+single+partition+rather+than+iterating+over+all+the+stores+on+an+instance
> >
> >
> > Thanks,
> > Navinder
> >
> 
> 
> -- 
> -- Guozhang
>


Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread John Roesler
Thanks Navinder! I've also updated the motivation.

Thanks,
-John

On Wed, Jan 22, 2020, at 11:12, Navinder Brar wrote:
> I went through the grammar wiki page and since it is already agreed in 
> principle I will change from constructor to below method and add the 
> getters back.
> public static <T> StoreQueryParams<T> fromNameAndType(
>   final String storeName,
>   final QueryableStoreType<T> queryableStoreType
> )
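Filled out with the optional fields discussed in this thread, the agreed shape could look like the following self-contained sketch. It is illustrative only: the QueryableStoreType argument is elided so the example compiles on its own, and the setter names follow the emails but are otherwise assumptions, not the final KIP API.

```java
// Illustrative sketch of the StoreQueryParams shape agreed in this thread:
// one static factory for the required argument(s), withXXX() setters for the
// optional fields, and plain getters. getPartition() returning null means
// "no specific partition requested", i.e. the store wraps all local partitions.
public class StoreQueryParams<T> {
    private final String storeName;
    private final Integer partition;           // null => all local partitions
    private final boolean includeStaleStores;

    private StoreQueryParams(final String storeName,
                             final Integer partition,
                             final boolean includeStaleStores) {
        this.storeName = storeName;
        this.partition = partition;
        this.includeStaleStores = includeStaleStores;
    }

    // The single public entry point; no public constructor.
    public static <T> StoreQueryParams<T> fromNameAndType(final String storeName) {
        return new StoreQueryParams<>(storeName, null, false);
    }

    // Optional fields are set via withXXX() so that adding a parameter later
    // does not mean adding and deprecating a combinatorial set of constructors.
    public StoreQueryParams<T> withPartition(final int partition) {
        return new StoreQueryParams<>(storeName, partition, includeStaleStores);
    }

    public StoreQueryParams<T> withIncludeStaleStores() {
        return new StoreQueryParams<>(storeName, partition, true);
    }

    public String getStoreName() { return storeName; }
    public Integer getPartition() { return partition; }
    public boolean getIncludeStaleStores() { return includeStaleStores; }
}
```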
> 
> 
> Thanks,
> Navinder
> 
> On Wednesday, 22 January, 2020, 09:32:07 pm IST, John Roesler 
>  wrote:  
>  
>  22) I'm specifically proposing to establish a new convention.
> The existing convention is fundamentally broken and has
> been costly both for users and maintainers. That is the purpose
> of the grammar I proposed. The plan is to implement  new APIs
> following the grammar and gradually to port old APIs to it.
> 
> The grammar wiki page has plenty of justification, so I won't 
> recapitulate it here.
> 
> Thanks,
> -John
> 
> On Wed, Jan 22, 2020, at 09:39, Navinder Brar wrote:
> > 10) Sure John, please go ahead.
> > 
> > 21) I have no strong opinion on constructor vs static factory. If 
> > everyone's okay with it, I can make the change.
> > 
> > 22) I looked at classes suggested by Matthias and I see there are no 
> > getters there. We are ok with breaking the convention?
> > 
> > Thanks,
> > Navinder Pal Singh Brar
> > 
> >  
> > 
> >    On Wednesday, 22 January, 2020, 08:40:27 pm IST, John Roesler 
> >  wrote:  
> >  
> >  Hi all,
> > 
> > 10) For the motivation, I have some thoughts for why this KIP is
> > absolutely essential as designed. If it's ok with you, Navinder,
> > I'd just edit the motivation section of the wiki? If you're unhappy
> > with my wording, you're of course welcome to revert or revise it; 
> > it just seems more efficient than discussing it over email.
> > 
> > 20) The getters were my fault :) 
> > I proposed to design this KIP following the grammar proposal:
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+DSL+Grammar
> > At the risk of delaying the vote on this KIP, I'd humbly suggest we 
> > keep the getters,
> > for all the reasons laid out on that grammar.
> > 
> > I realize this introduces an inconsistency, but my hope is that we 
> > would close that
> > gap soon. I can even create tickets for migrating each API, if that 
> > helps make 
> > this idea more palatable. IMO, this proposed API is likely to be a bit 
> > "out of
> > the way", in that it's not likely to be heavily used by a broad 
> > audience in 2.5, 
> > so the API inconsistency wouldn't be too apparent. Plus, it will save 
> > us from 
> > implementing a config object in the current style, along with an 
> > "internal" 
> > counterpart, which we would immediately have plans to deprecate.
> > 
> > Just to clarify (I know this has been a bit thrashy):
> > 21. there should be no public constructor, instead (since there are 
> > required arguments),
> > there should be just one factory method:
> > public static <T> StoreQueryParams<T> fromNameAndType(
> >   final String storeName,
> >   final QueryableStoreType<T> queryableStoreType
> > )
> > 
> > 22. there should be getters corresponding to each argument (required 
> > and optional):
> > Integer getPartition()
> > boolean getIncludeStaleStores()
> > 
> > Instead of adding the extra getAllPartitions() pseudo-getter, let's 
> > follow Ted's advice and
> > just document that getPartition() would return `null`, and that it 
> > means that a
> > specific partition hasn't been requested, so the store would wrap all 
> > local partitions.
> > 
> > With those two changes, this proposal would be 100% in line with the 
> > grammar,
> > and IMO ready to go.
> > 
> > Thanks,
> > -John
> > 
> > 
> > On Wed, Jan 22, 2020, at 03:56, Navinder Brar wrote:
> > > Thanks Matthias for the feedback.
> > > 
> > > 10) As Guozhang suggested above, we thought of adding storeName and 
> > > queryableStoreType as well in the StoreQueryParams, which is another 
> > > motivation for this KIP as it overloads KafkaStreams#store(). I have 
> > > updated the motivation in the KIP as well.
> > > 
> > > 20) I agree we can probably remove getPartition() and 
> > > getIncludeStaleStores() but we would definitely need getStoreName and 
> > > getQueryableStoreType() as they would be used in internal classes 
> > > QueryableStoreProvider.java and StreamThreadStateStoreProvider.java.
> > > 
> > >  30) I have edited the KIP to include only the changed 
> > >KafkaStreams#store().
> > > 
> > > 40) Removed the internal classes from the KIP.
> > > 
> > > I have incorporated feedback from Guozhang as well in the KIP. If 
> > > nothing else is pending, vote is ongoing.
> > > 
> > > ~Navinder
> > > On Wednesday, 22 January, 2020, 12:49:51 pm IST, Matthias
> > > J. Sax  wrote:  
> > >  
> > >  Thanks for the KIP. Overall it makes sense.
> > > 
> > > Couple of minor comments/questions:
> > > 
> > > 10) To me, it was initially quite unclear why we need this KIP. The

Re: [DISCUSS] KIP-553: Disable all SSL protocols except TLSV1.2 by default.

2020-01-22 Thread Николай Ижиков
Hello, Rajini.

PR - https://github.com/apache/kafka/pull/7998
Please, review.
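For readers following along, the default change under discussion boils down to two settings. The config names below are Kafka's existing SSL configs; the values are what the KIP proposes as the new defaults (this is a sketch of the outcome, not an excerpt from the PR):

```properties
# Proposed KIP-553 defaults: only TLSv1.2 enabled; older TLS/SSL
# protocol versions disabled out of the box.
ssl.enabled.protocols=TLSv1.2
ssl.protocol=TLSv1.2
```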

> On 22 Jan 2020, at 14:28, Николай Ижиков wrote:
> 
> Yes, I will do it in the next few hours.
> 
>> On 22 Jan 2020, at 14:24, Rajini Sivaram wrote:
>> 
>> Hi Nikolay,
>> 
>> Do you have time to submit a PR for this before 2.5.0 feature freeze on Jan
>> 29th?
>> 
>> On Tue, Jan 21, 2020 at 1:09 PM Ron Dagostino  wrote:
>> 
>>> Sure, go for it.
>>> 
 On Jan 21, 2020, at 8:05 AM, Николай Ижиков  wrote:
 
 Hello, Ron.
 
 Let's start the vote right now.
 What do you think?
 
> On 21 Jan 2020, at 15:48, Ron Dagostino wrote:
> 
> LGTM.  The KIP freeze for 2.5 is officially upon us tomorrow, but
> hopefully this is such a simple and straightforward change with obvious
> security benefits that it can be added anyway.  I would put it up for a
> vote very quickly — tomorrow at the latest.
> 
> Ron
> 
>> On Jan 21, 2020, at 7:38 AM, Николай Ижиков 
>>> wrote:
>> 
>> Hello.
>> 
>> KIP [1] updated.
>> Only TLSv1.2 will be enabled by default, as Rajini suggested.
>> 
>> Any objections to it?
>> 
>> 
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
>> 
>> 
>>> On 17 Jan 2020, at 14:56, Николай Ижиков wrote:
>>> 
>>> Thanks, Rajini.
>>> 
>>> Will do it, shortly.
>>> 
>>> On 17 Jan 2020, at 14:50, Rajini Sivaram wrote:
 
 Hi Nikolay,
 
 1) You can update KIP-553 to disable old protocols. This would mean:
 1a) SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS would be just TLSv1.2
 1b) SslConfigs.DEFAULT_SSL_PROTOCOL would become TLSv1.2
 
 2) When the testing for TLSv1.3 has been done, open a new KIP to enable
 TLSv1.3 by default. This would mean adding TLSv1.3 to
 SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS.
 
 
> On Fri, Jan 17, 2020 at 11:40 AM Николай Ижиков <
>>> nizhi...@apache.org> wrote:
> 
> Hello, Rajini.
> 
> Yes, we can!
> 
> I have to write another KIP that goal will be keep only TLSv1.2 and
> TLSv1.3 in SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS
> Is it correct?
> 
> 
>> On 17 Jan 2020, at 14:13, Rajini Sivaram wrote:
>> 
>> Hi Nikolay,
>> 
>> Can we split this KIP into two:
>> 1) Remove insecure TLS protocols from the default values
>> 2) Enable TLSv1.3
>> 
>> Since we are coming up to the KIP freeze for the 2.5.0 release, it will be
>> good if we can get at least the first one into 2.5.0. It would be a much
>> smaller change and won't get blocked behind TLSv1.3 testing.
>> 
>> Thank you,
>> 
>> Rajini
>> 
>> On Tue, Jan 7, 2020 at 11:49 AM Rajini Sivaram <
>>> rajinisiva...@gmail.com>
>> wrote:
>> 
>>> Hi Nikolay,
>>> 
>>> There are a couple of things you could do:
>>> 
>>> 1) Run all system tests that use SSL with TLSv1.3. I had run a subset, but
>>> it will be good to run all of them. You can do this locally using docker
>>> with JDK 11 by updating the files in tests/docker. You will need to update
>>> tests/kafkatest/services/security/security_config.py to enable only
>>> TLSv1.3. Instructions for running system tests using docker are in
>>> https://github.com/apache/kafka/blob/trunk/tests/README.md.
>>> 2) For integration tests, we run a small number of tests using TLSv1.3 if
>>> the tests are run using JDK 11 and above. We need to do this for system
>>> tests as well. There is an open JIRA:
>>> https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to assign
>>> this to yourself if you have time to do this.
>>> 
>>> Regards,
>>> 
>>> Rajini
>>> 
>>> 
>>> On Tue, Jan 7, 2020 at 5:15 AM Николай Ижиков <
>>> nizhi...@apache.org>
> wrote:
>>> 
 Hello, Rajini.
 
 Can you, please, clarify, what should be done?
 I can try to do tests by myself.
 
> On 6 Jan 2020, at 21:29, Rajini Sivaram <rajinisiva...@gmail.com> wrote:
> 
> Hi Brajesh.
> 
> No one is working on this yet, but will follow up with the Confluent
> tools team to see when this can be done.
> 
> On Mon, Jan 6, 2020 at 3:29 PM Brajesh Kumar <
>>> kbrajesh...@gmail.com>
 wrote:
> 
>> Hello Rajini,
>> 
>> What is the plan to run syst

Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Navinder Brar
I went through the grammar wiki page and since it is already agreed in 
principle I will change from constructor to below method and add the getters 
back.
public static <T> StoreQueryParams<T> fromNameAndType(
  final String storeName,
  final QueryableStoreType<T> queryableStoreType
)


Thanks,
Navinder

On Wednesday, 22 January, 2020, 09:32:07 pm IST, John Roesler 
 wrote:  
 
 22) I'm specifically proposing to establish a new convention.
The existing convention is fundamentally broken and has
been costly both for users and maintainers. That is the purpose
of the grammar I proposed. The plan is to implement  new APIs
following the grammar and gradually to port old APIs to it.

The grammar wiki page has plenty of justification, so I won't 
recapitulate it here.

Thanks,
-John

On Wed, Jan 22, 2020, at 09:39, Navinder Brar wrote:
> 10) Sure John, please go ahead.
> 
> 21) I have no strong opinion on constructor vs static factory. If 
> everyone's okay with it, I can make the change.
> 
> 22) I looked at classes suggested by Matthias and I see there are no 
> getters there. We are ok with breaking the convention?
> 
> Thanks,
> Navinder Pal Singh Brar
> 
>  
> 
>    On Wednesday, 22 January, 2020, 08:40:27 pm IST, John Roesler 
>  wrote:  
>  
>  Hi all,
> 
> 10) For the motivation, I have some thoughts for why this KIP is
> absolutely essential as designed. If it's ok with you, Navinder,
> I'd just edit the motivation section of the wiki? If you're unhappy
> with my wording, you're of course welcome to revert or revise it; 
> it just seems more efficient than discussing it over email.
> 
> 20) The getters were my fault :) 
> I proposed to design this KIP following the grammar proposal:
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+DSL+Grammar
> At the risk of delaying the vote on this KIP, I'd humbly suggest we 
> keep the getters,
> for all the reasons laid out on that grammar.
> 
> I realize this introduces an inconsistency, but my hope is that we 
> would close that
> gap soon. I can even create tickets for migrating each API, if that 
> helps make 
> this idea more palatable. IMO, this proposed API is likely to be a bit 
> "out of
> the way", in that it's not likely to be heavily used by a broad 
> audience in 2.5, 
> so the API inconsistency wouldn't be too apparent. Plus, it will save 
> us from 
> implementing a config object in the current style, along with an 
> "internal" 
> counterpart, which we would immediately have plans to deprecate.
> 
> Just to clarify (I know this has been a bit thrashy):
> 21. there should be no public constructor, instead (since there are 
> required arguments),
> there should be just one factory method:
> public static <T> StoreQueryParams<T> fromNameAndType(
>   final String storeName,
>   final QueryableStoreType<T> queryableStoreType
> )
> 
> 22. there should be getters corresponding to each argument (required 
> and optional):
> Integer getPartition()
> boolean getIncludeStaleStores()
> 
> Instead of adding the extra getAllPartitions() pseudo-getter, let's 
> follow Ted's advice and
> just document that getPartition() would return `null`, and that it 
> means that a
> specific partition hasn't been requested, so the store would wrap all 
> local partitions.
> 
> With those two changes, this proposal would be 100% in line with the grammar,
> and IMO ready to go.
> 
> Thanks,
> -John
> 
> 
> On Wed, Jan 22, 2020, at 03:56, Navinder Brar wrote:
> > Thanks Matthias for the feedback.
> > 
> > 10) As Guozhang suggested above, we thought of adding storeName and 
> > queryableStoreType as well in the StoreQueryParams, which is another 
> > motivation for this KIP as it overloads KafkaStreams#store(). I have 
> > updated the motivation in the KIP as well.
> > 
> > 20) I agree we can probably remove getPartition() and 
> > getIncludeStaleStores() but we would definitely need getStoreName and 
> > getQueryableStoreType() as they would be used in internal classes 
> > QueryableStoreProvider.java and StreamThreadStateStoreProvider.java.
> > 
> >  30) I have edited the KIP to include only the changed KafkaStreams#store().
> > 
> > 40) Removed the internal classes from the KIP.
> > 
> > I have incorporated feedback from Guozhang as well in the KIP. If 
> > nothing else is pending, vote is ongoing.
> > 
> > ~Navinder
> > On Wednesday, 22 January, 2020, 12:49:51 pm IST, Matthias
> > J. Sax  wrote:  
> >  
> >  Thanks for the KIP. Overall it makes sense.
> > 
> > Couple of minor comments/questions:
> > 
> > 10) To me, it was initially quite unclear why we need this KIP. The
> > motivation section only talks about some performance issues (that
> > are motivated by single-key look-ups) -- however, all issues mentioned
> > in the KIP could be fixed without any public API change. The important
> > cases that make the public API changes (and thus this KIP) useful are
> > actually missing in the motivation section. It would be helpful to add
> > more details.
> >

Re: [VOTE] KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies

2020-01-22 Thread Eno Thereska
This is awesome! +1 (non binding)
Eno

On Tue, Jan 21, 2020 at 10:00 PM Gwen Shapira  wrote:
>
> Thank you for the KIP. Awesomely cloud-native improvement :)
>
> +1 (binding)
>
>
> On Tue, Jan 21, 2020, 9:35 AM David Jacot  wrote:
>
> > Hi all,
> >
> > I would like to start a vote on KIP-559: Make the Kafka Protocol Friendlier
> > with L7 Proxies.
> >
> > The KIP is here:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-559%3A+Make+the+Kafka+Protocol+Friendlier+with+L7+Proxies
> >
> > Thanks,
> > David
> >


Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread John Roesler
22) I'm specifically proposing to establish a new convention.
The existing convention is fundamentally broken and has
been costly both for users and maintainers. That is the purpose
of the grammar I proposed. The plan is to implement  new APIs
following the grammar and gradually to port old APIs to it.

The grammar wiki page has plenty of justification, so I won't 
recapitulate it here.

Thanks,
-John

On Wed, Jan 22, 2020, at 09:39, Navinder Brar wrote:
> 10) Sure John, please go ahead.
> 
> 21) I have no strong opinion on constructor vs static factory. If 
> everyone's okay with it, I can make the change.
> 
> 22) I looked at classes suggested by Matthias and I see there are no 
> getters there. We are ok with breaking the convention?
> 
> Thanks,
> Navinder Pal Singh Brar
> 
>  
> 
> On Wednesday, 22 January, 2020, 08:40:27 pm IST, John Roesler 
>  wrote:  
>  
>  Hi all,
> 
> 10) For the motivation, I have some thoughts for why this KIP is
> absolutely essential as designed. If it's ok with you, Navinder,
> I'd just edit the motivation section of the wiki? If you're unhappy
> with my wording, you're of course welcome to revert or revise it; 
> it just seems more efficient than discussing it over email.
> 
> 20) The getters were my fault :) 
> I proposed to design this KIP following the grammar proposal:
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+DSL+Grammar
> At the risk of delaying the vote on this KIP, I'd humbly suggest we 
> keep the getters,
> for all the reasons laid out on that grammar.
> 
> I realize this introduces an inconsistency, but my hope is that we 
> would close that
> gap soon. I can even create tickets for migrating each API, if that 
> helps make 
> this idea more palatable. IMO, this proposed API is likely to be a bit 
> "out of
> the way", in that it's not likely to be heavily used by a broad 
> audience in 2.5, 
> so the API inconsistency wouldn't be too apparent. Plus, it will save 
> us from 
> implementing a config object in the current style, along with an 
> "internal" 
> counterpart, which we would immediately have plans to deprecate.
> 
> Just to clarify (I know this has been a bit thrashy):
> 21. there should be no public constructor, instead (since there are 
> required arguments),
> there should be just one factory method:
> public static <T> StoreQueryParams<T> fromNameAndType(
>   final String storeName,
>   final QueryableStoreType<T> queryableStoreType
> )
> 
> 22. there should be getters corresponding to each argument (required 
> and optional):
> Integer getPartition()
> boolean getIncludeStaleStores()
> 
> Instead of adding the extra getAllPartitions() pseudo-getter, let's 
> follow Ted's advice and
> just document that getPartition() would return `null`, and that it 
> means that a
> specific partition hasn't been requested, so the store would wrap all 
> local partitions.
> 
> With those two changes, this proposal would be 100% in line with the grammar,
> and IMO ready to go.
> 
> Thanks,
> -John
> 
> 
> On Wed, Jan 22, 2020, at 03:56, Navinder Brar wrote:
> > Thanks Matthias for the feedback.
> > 
> > 10) As Guozhang suggested above, we thought of adding storeName and 
> > queryableStoreType as well in the StoreQueryParams, which is another 
> > motivation for this KIP as it overloads KafkaStreams#store(). I have 
> > updated the motivation in the KIP as well.
> > 
> > 20) I agree we can probably remove getPartition() and 
> > getIncludeStaleStores() but we would definitely need getStoreName and 
> > getQueryableStoreType() as they would be used in internal classes 
> > QueryableStoreProvider.java and StreamThreadStateStoreProvider.java.
> > 
> >  30) I have edited the KIP to include only the changed KafkaStreams#store().
> > 
> > 40) Removed the internal classes from the KIP.
> > 
> > I have incorporated feedback from Guozhang as well in the KIP. If 
> > nothing else is pending, vote is ongoing.
> > 
> > ~Navinder
> > On Wednesday, 22 January, 2020, 12:49:51 pm IST, Matthias
> > J. Sax  wrote:  
> >  
> >  Thanks for the KIP. Overall it makes sense.
> > 
> > Couple of minor comments/questions:
> > 
> > 10) To me, it was initially quite unclear why we need this KIP. The
> > motivation section only talks about some performance issues (that
> > are motivated by single-key look-ups) -- however, all issues mentioned
> > in the KIP could be fixed without any public API change. The important
> > cases that make the public API changes (and thus this KIP) useful are
> > actually missing in the motivation section. It would be helpful to add
> > more details.
> > 
> > 20) `StoreQueryParams` has a lot of getter methods that we usually don't
> > have for config objects (compare `Consumed`, `Produced`, `Materialized`,
> > etc). Is there any reason why we need to add those getters to the public
> > API?
> > 
> > 30) The change to remove `KafkaStreams#store(...)` as introduced in
> > KIP-535 should be listed in sections Public API changes.

Re: [EXTERNAL] Re: Enable both SASL & SSL authentication...

2020-01-22 Thread Ron Dagostino
Hi Senthil.  Yes, you should read KIP-368: Allow SASL Connections to
Periodically Re-Authenticate
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate).
This KIP was added in AK 2.2 and addresses your question about
re-authentication.

Ron
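Concretely, KIP-368 introduced a single broker config that bounds how long a SASL session may live before the client must re-authenticate. A sketch of enabling it follows; the one-hour value is just an example, not a recommendation:

```properties
# Broker-side setting from KIP-368: a positive value caps the SASL session
# lifetime at min(remaining credential/token lifetime, this value), after
# which the client must re-authenticate over the same connection.
# The default of 0 disables re-authentication.
connections.max.reauth.ms=3600000
```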

On Wed, Jan 22, 2020 at 10:31 AM Senthilnathan Muthusamy
 wrote:
>
> Hi Ron,
>
> Thanks for the details and this answers my question (i.e. we can have 2 
> listeners - 1 with SASL_SSL and another with SSL to achieve this).
>
> Another question related to the oAuth token revoke scenario. Say the broker 
> authenticated the presented oAuth token and it is valid for 24 hours.
>    1. Will the broker automatically invalidate the token after expiry?
>    2. Also, if the oAuth server revoked the token (say in the 6th hour), is 
> there a way on the broker side to invalidate the token through any configuration 
> (something like revalidating the token at a configured interval to make sure 
> the token is still valid)?
>
> Regards,
> Senthil
>
> -Original Message-
> From: Ron Dagostino 
> Sent: Wednesday, January 22, 2020 5:15 AM
> To: dev@kafka.apache.org
> Subject: [EXTERNAL] Re: Enable both SASL & SSL authentication...
>
> <<< some of our clients use oAuth and some use cert-based auth
>
> Hi Senthil.  Brokers support different clients using different types of authentication, so 
> there is no problem here.  The way it works is via the broker's listener -- 
> each one listens on a separate port and is either a SSL listener (mutual cert 
> authentication), a SASL listener (of which there are two styles, with and 
> without encryption -- more on that below), or a PLAINTEXT listener (no 
> authentication).  One thing to clarify is that any particular client cannot 
> authenticate with multiple identities -- Kafka does not support multiple 
> identities on a single session -- so if the client connects on the port 
> associated with SASL then the broker will ignore any client-side certificate. 
>  As mentioned, there are two types of listeners associated with SASL: one 
> called SASL_PLAINTEXT where the communication happens in the clear and 
> another called SASL_SSL where the communication is TLS-encrypted.  It is this 
> second case -- SASL_SSL -- where the client could potentially present a 
> certificate, but the broker ignores it in this case even if the broker's 
> config says it is required.  This is done because of the constraint mentioned 
> above -- a particular client can authenticate with at most 1 identity over 
> any single connection.
>
> I hope this helps.  You may find the blog post at
> https://www.confluent.io/blog/kafka-listeners-explained
>  to be interesting and helpful, too.
>
> Ron
>
> On Wed, Jan 22, 2020 at 2:07 AM Senthilnathan Muthusamy 
>  wrote:
> >
> > Hi,
> >
> > We want both SASL (oAuthBearer) & SSL authentication to be enabled. However 
> > based on the below doc, the SSL auth will be disabled if SASL is enabled.
> >
> > https://docs.confluent.io/current/kafka/authentication_ssl.html#brokers
> >
> >
> > If any SASL authentication mechanisms are enabled for a given listener, 
> > then SSL client authentication is disabled, even if you have specified 
> > ssl.client.auth=required and the broker authenticates clients only using 
> > SASL on that listener.
> >
> > How can we have both SASL & SSL authentication enabled as some of our 
> > clients use oAuth and some use cert-based auth?
> >
> > Appreciate any pointers.
> >
> > Thanks,
> > Senthil


Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Navinder Brar
10) Sure John, please go ahead.

21) I have no strong opinion on constructor vs static factory. If everyone's 
okay with it, I can make the change.

22) I looked at the classes suggested by Matthias and I see there are no getters 
there. Are we ok with breaking the convention?

Thanks,
Navinder Pal Singh Brar

 

On Wednesday, 22 January, 2020, 08:40:27 pm IST, John Roesler 
 wrote:  
 
 Hi all,

10) For the motivation, I have some thoughts for why this KIP is
absolutely essential as designed. If it's ok with you, Navinder,
I'd just edit the motivation section of the wiki? If you're unhappy
with my wording, you're of course welcome to revert or revise it; 
it just seems more efficient than discussing it over email.

20) The getters were my fault :) 
I proposed to design this KIP following the grammar proposal:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+DSL+Grammar
At the risk of delaying the vote on this KIP, I'd humbly suggest we keep the 
getters,
for all the reasons laid out on that grammar.

I realize this introduces an inconsistency, but my hope is that we would close 
that
gap soon. I can even create tickets for migrating each API, if that helps make 
this idea more palatable. IMO, this proposed API is likely to be a bit "out of
the way", in that it's not likely to be heavily used by a broad audience in 
2.5, 
so the API inconsistency wouldn't be too apparent. Plus, it will save us from 
implementing a config object in the current style, along with an "internal" 
counterpart, which we would immediately have plans to deprecate.

Just to clarify (I know this has been a bit thrashy):
21. there should be no public constructor, instead (since there are required 
arguments),
there should be just one factory method:
public static <T> StoreQueryParams<T> fromNameAndType(
  final String storeName, 
  final QueryableStoreType<T> queryableStoreType
)

22. there should be getters corresponding to each argument (required and 
optional):
Integer getPartition()
boolean getIncludeStaleStores()

Instead of adding the extra getAllPartitions() pseudo-getter, let's follow 
Ted's advice and
just document that getPartition() would return `null`, and that it means that a
specific partition hasn't been requested, so the store would wrap all local 
partitions.

With those two changes, this proposal would be 100% in line with the grammar,
and IMO ready to go.
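
To illustrate the shape being agreed on here, a hypothetical sketch of the config object (not the final KIP code: the `QueryableStoreType` argument is omitted to keep it self-contained, and the `withXXX()` setter names are assumptions based on this thread):

```java
import java.util.Objects;

// Sketch of a factory-method-plus-getters config object in the style
// discussed above: one static factory for the required argument, withXXX()
// setters for optional fields, and plain getters for each field.
public class StoreQueryParamsSketch {

    public static final class StoreQueryParams<T> {
        private final String storeName;
        private final Integer partition;          // null means "all local partitions"
        private final boolean includeStaleStores;

        private StoreQueryParams(String storeName, Integer partition, boolean includeStaleStores) {
            this.storeName = Objects.requireNonNull(storeName);
            this.partition = partition;
            this.includeStaleStores = includeStaleStores;
        }

        // No public constructor; a single factory takes the required argument(s).
        public static <T> StoreQueryParams<T> fromNameAndType(final String storeName) {
            return new StoreQueryParams<>(storeName, null, false);
        }

        // withXXX() setters return new instances, so optional fields can be
        // added later without deprecating static constructors.
        public StoreQueryParams<T> withPartition(final int partition) {
            return new StoreQueryParams<>(storeName, partition, includeStaleStores);
        }

        public StoreQueryParams<T> withIncludeStaleStores() {
            return new StoreQueryParams<>(storeName, partition, true);
        }

        public String getStoreName() { return storeName; }
        public Integer getPartition() { return partition; }   // null => wrap all local partitions
        public boolean getIncludeStaleStores() { return includeStaleStores; }
    }

    public static void main(String[] args) {
        StoreQueryParams<Object> params =
            StoreQueryParams.<Object>fromNameAndType("word-counts").withPartition(3);
        System.out.println(params.getStoreName() + " partition=" + params.getPartition());
        // word-counts partition=3
    }
}
```

A `null` return from `getPartition()` is the documented signal that no specific partition was requested, per Ted's suggestion above.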

Thanks,
-John

On Wed, Jan 22, 2020, at 03:56, Navinder Brar wrote:
> Thanks Matthias for the feedback.
> 
> 10) As Guozhang suggested above, we thought of adding storeName and 
> queryableStoreType as well in the StoreQueryParams, which is another 
> motivation for this KIP as it overloads KafkaStreams#store(). I have 
> updated the motivation in the KIP as well.
> 
> 20) I agree we can probably remove getPartition() and 
> getIncludeStaleStores() but we would definitely need getStoreName and 
> getQueryableStoreType() as they would be used in internal classes 
> QueryableStoreProvider.java and StreamThreadStateStoreProvider.java.
> 
>  30) I have edited the KIP to include only the changed KafkaStreams#store().
> 
> 40) Removed the internal classes from the KIP.
> 
> I have incorporated feedback from Guozhang as well in the KIP. If 
> nothing else is pending, vote is ongoing.
> 
> ~Navinder
>
> On Wednesday, 22 January, 2020, 12:49:51 pm IST, Matthias J. Sax  wrote:
>  Thanks for the KIP. Overall it makes sense.
> 
> Couple of minor comments/questions:
> 
> 10) To me, it was initially quite unclear why we need this KIP. The
> motivation section only talks about some performance issues (that
> are motivated by single-key look-ups) -- however, all issues mentioned
> in the KIP could be fixed without any public API change. The important
> cases for which the public API changes (and thus this KIP) are useful are
> actually missing in the motivation section. It would be helpful to add
> more details.
> 
> 20) `StoreQueryParams` has a lot of getter methods that we usually don't
> have for config objects (compare `Consumed`, `Produced`, `Materialized`,
> etc). Is there any reason why we need to add those getters to the public
> API?
> 
> 30) The change to remove `KafkaStreams#store(...)` as introduced in
> KIP-535 should be listed in sections Public API changes. Also, existing
> methods should not be listed -- only changes. Hence, in
> `KafkaStreams.java` only one new method and the `store()` method as
> added via KIP-535 should be listed.
> 
> 40) `QueryableStoreProvider` and `StreamThreadStateStoreProvider` are
> internal classes and thus we can remove all changes to it from the KIP.
> 
> 
> Thanks!
> 
> 
> -Matthias
> 
> 
> 
> On 1/21/20 11:46 AM, Vinoth Chandar wrote:
> > Chiming in a bit late here..
> > 
> > +1 This is a very valid improvement. Avoiding doing gets on irrelevant
> > partitions will improve performance and efficiency for IQs.
> > 
> > As an incremental improvement to the current APIs,  adding an option to
> > filter out based on partitions makes sense

RE: [EXTERNAL] Re: Enable both SASL & SSL authentication...

2020-01-22 Thread Senthilnathan Muthusamy
Hi Ron,

Thanks for the details and this answers my question (i.e. we can have 2 
listeners - 1 with SASL_SSL and another with SSL to achieve this).

Another question related to the oAuth token revoke scenario. Say the broker 
authenticated the presented oAuth token and it is valid for 24 hours. 
   1. Will the broker automatically invalidate the token after expiry? 
   2. Also, if the oAuth server revoked the token (say in the 6th hour), is 
there a way on the broker side to invalidate the token through any configuration 
(something like revalidating the token at a configured interval to make sure 
the token is still valid)?

Regards,
Senthil

-Original Message-
From: Ron Dagostino  
Sent: Wednesday, January 22, 2020 5:15 AM
To: dev@kafka.apache.org
Subject: [EXTERNAL] Re: Enable both SASL & SSL authentication...

<<< some of our clients use oAuth and some use cert-based auth

Hi Senthil.  Brokers support different clients using different types of authentication, so 
there is no problem here.  The way it works is via the broker's listener -- 
each one listens on a separate port and is either a SSL listener (mutual cert 
authentication), a SASL listener (of which there are two styles, with and 
without encryption -- more on that below), or a PLAINTEXT listener (no 
authentication).  One thing to clarify is that any particular client cannot 
authenticate with multiple identities -- Kafka does not support multiple 
identities on a single session -- so if the client connects on the port 
associated with SASL then the broker will ignore any client-side certificate.  
As mentioned, there are two types of listeners associated with SASL: one called 
SASL_PLAINTEXT where the communication happens in the clear and another called 
SASL_SSL where the communication is TLS-encrypted.  It is this second case -- 
SASL_SSL -- where the client could potentially present a certificate, but the 
broker ignores it in this case even if the broker's config says it is required. 
 This is done because of the constraint mentioned above -- a particular client 
can authenticate with at most 1 identity over any single connection.

I hope this helps.  You may find the blog post at
https://www.confluent.io/blog/kafka-listeners-explained
 to be interesting and helpful, too.

Ron

On Wed, Jan 22, 2020 at 2:07 AM Senthilnathan Muthusamy 
 wrote:
>
> Hi,
>
> We want both SASL (oAuthBearer) & SSL authentication to be enabled. However 
> based on the below doc, the SSL auth will be disabled if SASL is enabled.
>
> https://docs.confluent.io/current/kafka/authentication_ssl.html#brokers
>
>
> If any SASL authentication mechanisms are enabled for a given listener, then 
> SSL client authentication is disabled, even if you have specified 
> ssl.client.auth=required and the broker authenticates clients only using SASL 
> on that listener.
>
> How can we have both SASL & SSL authentication enabled as some of our clients 
> use oAuth and some use cert-based auth?
>
> Appreciate any pointers.
>
> Thanks,
> Senthil


Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread John Roesler
Hi all,

10) For the motivation, I have some thoughts for why this KIP is
absolutely essential as designed. If it's ok with you, Navinder,
I'd just edit the motivation section of the wiki? If you're unhappy
with my wording, you're of course welcome to revert or revise it; 
it just seems more efficient than discussing it over email.

20) The getters were my fault :) 
I proposed to design this KIP following the grammar proposal:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+DSL+Grammar
At the risk of delaying the vote on this KIP, I'd humbly suggest we keep the 
getters,
for all the reasons laid out on that grammar.

I realize this introduces an inconsistency, but my hope is that we would close 
that
gap soon. I can even create tickets for migrating each API, if that helps make 
this idea more palatable. IMO, this proposed API is likely to be a bit "out of
the way", in that it's not likely to be heavily used by a broad audience in 
2.5, 
so the API inconsistency wouldn't be too apparent. Plus, it will save us from 
implementing a config object in the current style, along with an "internal" 
counterpart, which we would immediately have plans to deprecate.

Just to clarify (I know this has been a bit thrashy):
21. there should be no public constructor, instead (since there are required 
arguments),
there should be just one factory method:
public static <T> StoreQueryParams<T> fromNameAndType(
  final String storeName, 
  final QueryableStoreType<T> queryableStoreType
)

22. there should be getters corresponding to each argument (required and 
optional):
Integer getPartition()
boolean getIncludeStaleStores()

Instead of adding the extra getAllPartitions() pseudo-getter, let's follow 
Ted's advice and
just document that getPartition() would return `null`, and that it means that a
specific partition hasn't been requested, so the store would wrap all local 
partitions.

With those two changes, this proposal would be 100% in line with the grammar,
and IMO ready to go.

Thanks,
-John

On Wed, Jan 22, 2020, at 03:56, Navinder Brar wrote:
> Thanks Matthias for the feedback.
> 
> 10) As Guozhang suggested above, we thought of adding storeName and 
> queryableStoreType as well in the StoreQueryParams, which is another 
> motivation for this KIP as it overloads KafkaStreams#store(). I have 
> updated the motivation in the KIP as well.
> 
> 20) I agree we can probably remove getPartition() and 
> getIncludeStaleStores() but we would definitely need getStoreName and 
> getQueryableStoreType() as they would be used in internal classes 
> QueryableStoreProvider.java and StreamThreadStateStoreProvider.java.
> 
>  30) I have edited the KIP to include only the changed KafkaStreams#store().
> 
> 40) Removed the internal classes from the KIP.
> 
> I have incorporated feedback from Guozhang as well in the KIP. If 
> nothing else is pending, vote is ongoing.
> 
> ~Navinder
>
> On Wednesday, 22 January, 2020, 12:49:51 pm IST, Matthias J. Sax  wrote:
>  Thanks for the KIP. Overall it makes sense.
> 
> Couple of minor comments/questions:
> 
> 10) To me, it was initially quite unclear why we need this KIP. The
> motivation section only talks about some performance issues (that
> are motivated by single-key look-ups) -- however, all issues mentioned
> in the KIP could be fixed without any public API change. The important
> cases for which the public API changes (and thus this KIP) are useful are
> actually missing in the motivation section. It would be helpful to add
> more details.
> 
> 20) `StoreQueryParams` has a lot of getter methods that we usually don't
> have for config objects (compare `Consumed`, `Produced`, `Materialized`,
> etc). Is there any reason why we need to add those getters to the public
> API?
> 
> 30) The change to remove `KafkaStreams#store(...)` as introduced in
> KIP-535 should be listed in sections Public API changes. Also, existing
> methods should not be listed -- only changes. Hence, in
> `KafkaStreams.java` only one new method and the `store()` method as
> added via KIP-535 should be listed.
> 
> 40) `QueryableStoreProvider` and `StreamThreadStateStoreProvider` are
> internal classes and thus we can remove all changes to it from the KIP.
> 
> 
> Thanks!
> 
> 
> -Matthias
> 
> 
> 
> On 1/21/20 11:46 AM, Vinoth Chandar wrote:
> > Chiming in a bit late here..
> > 
> > +1 This is a very valid improvement. Avoiding doing gets on irrelevant
> > partitions will improve performance and efficiency for IQs.
> > 
> > As an incremental improvement to the current APIs,  adding an option to
> > filter out based on partitions makes sense
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > On Mon, Jan 20, 2020 at 3:13 AM Navinder Brar
> >  wrote:
> > 
> >> Thanks John. If there are no other comments to be addressed, I will start
> >> a vote today so that we are on track for this release.~Navinder
> >>
> >>
> >> On Monday, January 20, 2020, 8:32 AM, John Roesler 
> >> wrote:
> >>
> >> Thanks, Navinder,
> >>

Re: [VOTE] KIP-515: Hardened TLS Configs to ZooKeeper

2020-01-22 Thread Ron Dagostino
Hi everyone.  While finishing the PR for this KIP I realized that the
inheritance of TLS ZooKeeper configs that happens in the *authorizer*
does not reflect he spirit of our discussion.  In particular, based on
our inheritance discussion in the DISCUSS thread, the inheritance of
authorizer configs needn't be as constrained as it is currently
documented to be.  I am going to update the KIP as described below and
will assume there are no objections if nobody comments as such on this
VOTE thread.

The KIP currently states that there is a limited inheritance for
authorizer ZooKeeper TLS configs as follows: "Every config can be
prefixed with "authorizer." for the case when
kafka.security.authorizer.AclAuthorizer connects via TLS to a
ZooKeeper quorum separate from the one that Kafka is using – this
specific use case will be identified in the configuration by
explicitly setting authorizer.zookeeper.ssl.client.enable=true."

In other words, the authorizer inherits the broker's ZK TLS configs
*unless* it explicitly indicates via
authorizer.zookeeper.ssl.client.enable=true that it is going to use
its own configs, in which case inheritance does not occur -- i.e.
there is no overriding or merging going on where the broker's
ZooKeeper TLS configs act as a base upon which any "authorizer."
prefixed configs act as an overlay/override; instead, if you point to
another ZooKeeper quorum and want to change anything related to TLS
then you must restate everything.

We had a discussion related to potentially inheriting a broker's
*non-ZooKeeper* TLS configs.  Inheritance was desirable, and I came
around to that way of thinking, but it turned out to be impossible to
do given that the broker's non-ZooKeeper TLS configs are potentially
stored in ZooKeeper.  Still, inheritance was desirable as a concept,
so we should do it for the authorizer since the broker's *ZooKeeper*
TLS configs are available in the config file.

The KIP will now state that the broker's ZooKeeper TLS configs will
act as a base config upon which any "authorizer." ZooKeeper TLS
configs act as an overlay -- the configs are merged.  This is
consistent with how the other "authorizer." configs for ZooKeeper work
(connection/session timeouts and max inflight requests, for example).
This means that the order of evaluation for any particular authorizer
ZooKeeper TLS configuration will be:

(1) system property
(2) broker non-prefixed ZooKeeper TLS config
(3) "authorizer." prefixed ZooKeeper TLS config

Note that (1) + (2) simply yields the ZooKeeper TLS configs that the
broker is using -- with (2) overlaying (1) -- so any "authorizer."
prefixed ZooKeeper TLS configs are a true additional level of overlay
(again, consistent with the behavior of the ZooKeeper configs for
connection/session timeouts and max inflight requests).

Ron
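
Illustrating the merged evaluation order with config names from the KIP (a sketch; the file paths are placeholders):

```properties
# Base: the broker's own ZooKeeper TLS configs (these in turn overlay any
# zookeeper.ssl.* system properties).
zookeeper.ssl.client.enable=true
zookeeper.ssl.truststore.location=/etc/kafka/broker-zk.truststore.jks

# Overlay: "authorizer."-prefixed values override the base only where set,
# e.g. when AclAuthorizer talks to a separate ZooKeeper quorum.
authorizer.zookeeper.ssl.truststore.location=/etc/kafka/authorizer-zk.truststore.jks
```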

On Mon, Jan 20, 2020 at 11:14 AM Manikumar  wrote:
>
> +1 (binding).
>
> Thanks for the KIP.
>
> On Mon, Jan 20, 2020 at 9:21 PM Rajini Sivaram 
> wrote:
>
> > +1 (binding)
> >
> > Thanks for the KIP, Ron!
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Mon, Jan 20, 2020 at 3:36 PM Gwen Shapira  wrote:
> >
> > > +1 (binding), this has been an on-going concern. Thank you for
> > > addressing this, Ron.
> > >
> > > On Mon, Jan 20, 2020 at 5:22 AM Ron Dagostino  wrote:
> > > >
> > > > Hi everyone.  I would like to start a vote on KIP-515: Enable ZK
> > > > client to use the new TLS supported authentication.
> > > >
> > > > The KIP is here:
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-515%3A+Enable+ZK+client+to+use+the+new+TLS+supported+authentication
> > > >
> > > > The discussion thread is here:
> > > >
> > >
> > https://lists.apache.org/thread.html/519d07e607cf6598a8126139c964d31fa46f2c028b88b1d44086b8a9%40%3Cdev.kafka.apache.org%3E
> > > >
> > > > Thanks,
> > > >
> > > > Ron
> > >
> >


Re: Enable both SASL & SSL authentication...

2020-01-22 Thread Ron Dagostino
<<< some of our clients use oAuth and some use cert-based auth
Hi Senthil.  Brokers support different clients using different types
of authentication, so there is no problem here.  The way it works is
via the broker's listener -- each one listens on a separate port and
is either a SSL listener (mutual cert authentication), a SASL listener
(of which there are two styles, with and without encryption -- more on
that below), or a PLAINTEXT listener (no authentication).  One thing
to clarify is that any particular client cannot authenticate with
multiple identities -- Kafka does not support multiple identities on a
single session -- so if the client connects on the port associated
with SASL then the broker will ignore any client-side certificate.  As
mentioned, there are two types of listeners associated with SASL: one
called SASL_PLAINTEXT where the communication happens in the clear and
another called SASL_SSL where the communication is TLS-encrypted.  It
is this second case -- SASL_SSL -- where the client could potentially
present a certificate, but the broker ignores it in this case even if
the broker's config says it is required.  This is done because of the
constraint mentioned above -- a particular client can authenticate
with at most 1 identity over any single connection.

I hope this helps.  You may find the blog post at
https://www.confluent.io/blog/kafka-listeners-explained to be
interesting and helpful, too.

Ron
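
Concretely, the two-listener setup described above might look like this in a broker's server.properties (a hedged sketch; host names, ports, and file paths are placeholders):

```properties
# Two listeners: one for mTLS clients, one for SASL/OAUTHBEARER clients over TLS.
listeners=SSL://broker1.example.com:9093,SASL_SSL://broker1.example.com:9094
advertised.listeners=SSL://broker1.example.com:9093,SASL_SSL://broker1.example.com:9094

# Mutual cert authentication on the SSL listener.
ssl.client.auth=required
ssl.keystore.location=/etc/kafka/broker.keystore.jks
ssl.truststore.location=/etc/kafka/broker.truststore.jks

# OAUTHBEARER on the SASL_SSL listener; client certs are ignored there even
# though ssl.client.auth=required is set (at most one identity per connection).
sasl.enabled.mechanisms=OAUTHBEARER
listener.name.sasl_ssl.oauthbearer.sasl.jaas.config=\
  org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
```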

On Wed, Jan 22, 2020 at 2:07 AM Senthilnathan Muthusamy
 wrote:
>
> Hi,
>
> We want both SASL (oAuthBearer) & SSL authentication to be enabled. However 
> based on the below doc, the SSL auth will be disabled if SASL is enabled.
>
> https://docs.confluent.io/current/kafka/authentication_ssl.html#brokers
>
>
> If any SASL authentication mechanisms are enabled for a given listener, then 
> SSL client authentication is disabled, even if you have specified 
> ssl.client.auth=required and the broker authenticates clients only using SASL 
> on that listener.
>
> How can we have both SASL & SSL authentication enabled as some of our clients 
> use oAuth and some use cert-based auth?
>
> Appreciate any pointers.
>
> Thanks,
> Senthil


Re: [VOTE] KIP-553: Disable all SSL protocols except TLSV1.2 by default.

2020-01-22 Thread M. Manna
+1 (binding). A simple, and yet powerful enforcement of TLS version.

Thanks for this KIP :)
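
Per the KIP, the new defaults amount to the following (shown explicitly for illustration; from 2.5 onward these are the defaults, so no change is needed to get them):

```properties
ssl.enabled.protocols=TLSv1.2
ssl.protocol=TLSv1.2
```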

On Tue, 21 Jan 2020 at 20:39, Mickael Maison 
wrote:

> +1 (binding)
> Thanks
>
> On Tue, Jan 21, 2020 at 7:58 PM Ron Dagostino  wrote:
> >
> > +1 (non-binding)
> >
> > Ron
> >
> > On Tue, Jan 21, 2020 at 11:29 AM Manikumar 
> wrote:
> > >
> > > +1 (binding).
> > >
> > > Thanks for the KIP.
> > >
> > >
> > > On Tue, Jan 21, 2020 at 9:56 PM Ted Yu  wrote:
> > >
> > > > +1
> > > >
> > > > On Tue, Jan 21, 2020 at 8:24 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > Thanks for the KIP!
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > > >
> > > > > On Tue, Jan 21, 2020 at 3:43 PM Николай Ижиков <
> nizhi...@apache.org>
> > > > > wrote:
> > > > >
> > > > > > Hello.
> > > > > >
> > > > > > I would like to start vote for KIP-553: Disable all SSL protocols
> > > > except
> > > > > > TLSV1.2 by default.
> > > > > >
> > > > > > KIP -
> > > > > >
> > > > >
> > > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
> > > > > > Discussion thread -
> > > > > >
> > > > >
> > > >
> https://lists.apache.org/thread.html/9c6201fe403a24f84fc3aa27f47dd06b718c1d80de0ee3412b9b877c%40%3Cdev.kafka.apache.org%3E
> > > > >
> > > >
>


Re: [DISCUSS] KIP-553: Disable all SSL protocols except TLSV1.2 by default.

2020-01-22 Thread Николай Ижиков
Yes, I will do it the next few hours.

> 22 янв. 2020 г., в 14:24, Rajini Sivaram  написал(а):
> 
> Hi Nikolay,
> 
> Do you have time to submit a PR for this before 2.5.0 feature freeze on Jan
> 29th?
> 
> On Tue, Jan 21, 2020 at 1:09 PM Ron Dagostino  wrote:
> 
>> Sure, go for it.
>> 
>>> On Jan 21, 2020, at 8:05 AM, Николай Ижиков  wrote:
>>> 
>>> Hello, Ron.
>>> 
>>> Let’s start vote right now.
>>> What do you think?
>>> 
 21 янв. 2020 г., в 15:48, Ron Dagostino  написал(а):
 
 LGTM.  The KIP freeze for 2.5 is officially upon us tomorrow, but
>> hopefully this is such a simple and straightforward change with obvious
>> security benefits that it can be added anyway.  I would put it up for a
>> vote very quickly — tomorrow at the latest.
 
 Ron
 
> On Jan 21, 2020, at 7:38 AM, Николай Ижиков 
>> wrote:
> 
> Hello.
> 
> KIP [1] updated.
> Only TLSv1.2 will be enabled by default, as Rajini suggested.
> 
> Any objections to it?
> 
> 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
> 
> 
>> 17 янв. 2020 г., в 14:56, Николай Ижиков 
>> написал(а):
>> 
>> Thanks, Rajini.
>> 
>> Will do it, shortly.
>> 
>>> 17 янв. 2020 г., в 14:50, Rajini Sivaram 
>> написал(а):
>>> 
>>> Hi Nikolay,
>>> 
>>> 1) You can update KIP-553 to disable old protocols. This would mean:
>>> 1a) SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS would be just TLSv1.2
>>> 1b) SslConfigs.DEFAULT_SSL_PROTOCOL would become TLSv1.2
>>> 
>>> 2) When the testing for TLSv1.3 has been done, open a new KIP to
>> enable
>>> TLSv1.3 by default. This would mean adding TLSv1.3 to
>>> SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS.
>>> 
>>> 
 On Fri, Jan 17, 2020 at 11:40 AM Николай Ижиков <
>> nizhi...@apache.org> wrote:
 
 Hello, Rajini.
 
 Yes, we can!
 
 I have to write another KIP whose goal will be to keep only TLSv1.2 and
 TLSv1.3 in SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS.
 Is it correct?
 
 
> 17 янв. 2020 г., в 14:13, Rajini Sivaram 
 написал(а):
> 
> Hi Nikolay,
> 
> Can we split this KIP into two:
> 1) Remove insecure TLS protocols from the default values
> 2) Enable TLSv1.3
> 
> Since we are coming up to KIP freeze for 2.5.0 release, it will be
>> good
 if
> we can get at least the first one into 2.5.0. It would be a much
>> smaller
> change and won't get blocked behind TLSv1.3 testing.
> 
> Thank you,
> 
> Rajini
> 
> On Tue, Jan 7, 2020 at 11:49 AM Rajini Sivaram <
>> rajinisiva...@gmail.com>
> wrote:
> 
>> Hi Nikolay,
>> 
>> There are a couple of things you could do:
>> 
>> 1) Run all system tests that use SSL with TLSv1.3. I had run a
>> subset,
 but
>> it will be good to run all of them. You can do this locally using
>> docker
>> with JDK 11 by updating the files in tests/docker. You will need
>> to
 update
>> tests/kafkatest/services/security/security_config.py to enable
>> only
>> TLSv1.3. Instructions for running system tests using docker are in
>> https://github.com/apache/kafka/blob/trunk/tests/README.md.
>> 2) For integration tests, we run a small number of tests using
>> TLSv1.3
 if
>> the tests are run using JDK 11 and above. We need to do this for
>> system
>> tests as well. There is an open JIRA:
>> https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to
>> assign
>> this to yourself if you have time to do this.
>> 
>> Regards,
>> 
>> Rajini
>> 
>> 
>> On Tue, Jan 7, 2020 at 5:15 AM Николай Ижиков <
>> nizhi...@apache.org>
 wrote:
>> 
>>> Hello, Rajini.
>>> 
>>> Can you, please, clarify, what should be done?
>>> I can try to do tests by myself.
>>> 
 6 янв. 2020 г., в 21:29, Rajini Sivaram <
>> rajinisiva...@gmail.com>
>>> написал(а):
 
 Hi Brajesh.
 
 No one is working on this yet, but will follow up with the
>> Confluent
>>> tools
 team to see when this can be done.
 
 On Mon, Jan 6, 2020 at 3:29 PM Brajesh Kumar <
>> kbrajesh...@gmail.com>
>>> wrote:
 
> Hello Rajini,
> 
> What is the plan to run system tests using JDK 11? Is someone
>> working
>>> on
> this?
> 
> On Mon, Jan 6, 2020 at 3:00 PM Rajini Sivaram <
 rajinisiva...@gmail.com
 
> wrote:
> 
>> Hi Nikolay,
>> 
>> We can

Re: [DISCUSS] KIP-553: Disable all SSL protocols except TLSV1.2 by default.

2020-01-22 Thread Rajini Sivaram
Hi Nikolay,

Do you have time to submit a PR for this before 2.5.0 feature freeze on Jan
29th?

On Tue, Jan 21, 2020 at 1:09 PM Ron Dagostino  wrote:

> Sure, go for it.
>
> > On Jan 21, 2020, at 8:05 AM, Николай Ижиков  wrote:
> >
> > Hello, Ron.
> >
> > Let’s start vote right now.
> > What do you think?
> >
> >> 21 янв. 2020 г., в 15:48, Ron Dagostino  написал(а):
> >>
> >> LGTM.  The KIP freeze for 2.5 is officially upon us tomorrow, but
> hopefully this is such a simple and straightforward change with obvious
> security benefits that it can be added anyway.  I would put it up for a
> vote very quickly — tomorrow at the latest.
> >>
> >> Ron
> >>
> >>> On Jan 21, 2020, at 7:38 AM, Николай Ижиков 
> wrote:
> >>>
> >>> Hello.
> >>>
> >>> KIP [1] updated.
> >>> Only TLSv1.2 will be enabled by default, as Rajini suggested.
> >>>
> >>> Any objections to it?
> >>>
> >>>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
> >>>
> >>>
>  17 янв. 2020 г., в 14:56, Николай Ижиков 
> написал(а):
> 
>  Thanks, Rajini.
> 
>  Will do it, shortly.
> 
> > 17 янв. 2020 г., в 14:50, Rajini Sivaram 
> написал(а):
> >
> > Hi Nikolay,
> >
> > 1) You can update KIP-553 to disable old protocols. This would mean:
> > 1a) SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS would be just TLSv1.2
> > 1b) SslConfigs.DEFAULT_SSL_PROTOCOL would become TLSv1.2
> >
> > 2) When the testing for TLSv1.3 has been done, open a new KIP to
> enable
> > TLSv1.3 by default. This would mean adding TLSv1.3 to
> > SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS.
> >
> >
> >> On Fri, Jan 17, 2020 at 11:40 AM Николай Ижиков <
> nizhi...@apache.org> wrote:
> >>
> >> Hello, Rajini.
> >>
> >> Yes, we can!
> >>
> >> I have to write another KIP whose goal will be to keep only TLSv1.2 and
> >> TLSv1.3 in SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS.
> >> Is it correct?
> >>
> >>
> >>> 17 янв. 2020 г., в 14:13, Rajini Sivaram 
> >> написал(а):
> >>>
> >>> Hi Nikolay,
> >>>
> >>> Can we split this KIP into two:
> >>> 1) Remove insecure TLS protocols from the default values
> >>> 2) Enable TLSv1.3
> >>>
> >>> Since we are coming up to KIP freeze for 2.5.0 release, it will be
> good
> >> if
> >>> we can get at least the first one into 2.5.0. It would be a much
> smaller
> >>> change and won't get blocked behind TLSv1.3 testing.
> >>>
> >>> Thank you,
> >>>
> >>> Rajini
> >>>
> >>> On Tue, Jan 7, 2020 at 11:49 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> >>> wrote:
> >>>
>  Hi Nikolay,
> 
>  There are a couple of things you could do:
> 
>  1) Run all system tests that use SSL with TLSv1.3. I had run a
> subset,
> >> but
>  it will be good to run all of them. You can do this locally using
> docker
>  with JDK 11 by updating the files in tests/docker. You will need
> to
> >> update
>  tests/kafkatest/services/security/security_config.py to enable
> only
>  TLSv1.3. Instructions for running system tests using docker are in
>  https://github.com/apache/kafka/blob/trunk/tests/README.md.
>  2) For integration tests, we run a small number of tests using
> TLSv1.3
> >> if
>  the tests are run using JDK 11 and above. We need to do this for
> system
>  tests as well. There is an open JIRA:
>  https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to
> assign
>  this to yourself if you have time to do this.
> 
>  Regards,
> 
>  Rajini
> 
> 
>  On Tue, Jan 7, 2020 at 5:15 AM Николай Ижиков <
> nizhi...@apache.org>
> >> wrote:
> 
> > Hello, Rajini.
> >
> > Can you, please, clarify, what should be done?
> > I can try to do tests by myself.
> >
> >> On 6 Jan 2020, at 21:29, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > wrote:
> >>
> >> Hi Brajesh.
> >>
> >> No one is working on this yet, but will follow up with the
> Confluent
> > tools
> >> team to see when this can be done.
> >>
> >> On Mon, Jan 6, 2020 at 3:29 PM Brajesh Kumar <
> kbrajesh...@gmail.com>
> > wrote:
> >>
> >>> Hello Rajini,
> >>>
> >>> What is the plan to run system tests using JDK 11? Is someone
> working
> > on
> >>> this?
> >>>
> >>> On Mon, Jan 6, 2020 at 3:00 PM Rajini Sivaram <
> >> rajinisiva...@gmail.com
> >>
> >>> wrote:
> >>>
>  Hi Nikolay,
> 
>  We can leave the KIP open and restart the discussion once
> system
> >> tests
> >>> are
>  running.
> 
>  Thanks,
> 
>  Rajini

Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-01-22 Thread Navinder Brar
Thanks Matthias for the feedback.

10) As Guozhang suggested above, we thought of adding storeName and 
queryableStoreType as well in the StoreQueryParams, which is another motivation 
for this KIP as it overloads KafkaStreams#store(). I have updated the 
motivation in the KIP as well.

20) I agree we can probably remove getPartition() and getIncludeStaleStores() 
but we would definitely need getStoreName() and getQueryableStoreType() as they 
would be used in the internal classes QueryableStoreProvider.java and 
StreamThreadStateStoreProvider.java.

30) I have edited the KIP to include only the changed KafkaStreams#store().

40) Removed the internal classes from the KIP.

I have incorporated feedback from Guozhang as well in the KIP. If nothing else 
is pending, vote is ongoing.
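Under Guozhang's suggestion, the params object might look roughly like the sketch below; the names fromNameAndType, withPartition, and withIncludeStaleStores are assumptions for illustration, not the final API:

```java
import java.util.Objects;

// Hypothetical sketch of StoreQueryParams: a single static factory taking only
// the store name and store type, withXXX() setters so future fields do not
// require new factory overloads, and getters for the internal store providers.
class StoreQueryParams<T> {
    private final String storeName;
    private final String queryableStoreType; // stands in for QueryableStoreType<T>
    private Integer partition;               // null means "query all local partitions"
    private boolean includeStaleStores;

    private StoreQueryParams(final String storeName, final String queryableStoreType) {
        this.storeName = Objects.requireNonNull(storeName);
        this.queryableStoreType = Objects.requireNonNull(queryableStoreType);
    }

    static <T> StoreQueryParams<T> fromNameAndType(final String storeName,
                                                   final String queryableStoreType) {
        return new StoreQueryParams<>(storeName, queryableStoreType);
    }

    StoreQueryParams<T> withPartition(final int partition) {
        this.partition = partition;
        return this;
    }

    StoreQueryParams<T> withIncludeStaleStores() {
        this.includeStaleStores = true;
        return this;
    }

    // Getters used by QueryableStoreProvider / StreamThreadStateStoreProvider:
    String storeName() { return storeName; }
    String queryableStoreType() { return queryableStoreType; }
    Integer partition() { return partition; }
    boolean includeStaleStores() { return includeStaleStores; }
}
```

A caller would then write something like
StoreQueryParams.fromNameAndType("counts", "keyValueStore").withPartition(2)
and pass the result to the single KafkaStreams#store() overload.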

~Navinder

On Wednesday, 22 January, 2020, 12:49:51 pm IST, Matthias J. Sax 
 wrote:  
 
 Thanks for the KIP. Overall it makes sense.

Couple of minor comments/questions:

10) To me, it was initially quite unclear why we need this KIP. The
motivation section only talks about some performance issues (that
are motivated by single-key look-ups) -- however, all issues mentioned
in the KIP could be fixed without any public API change. The important
cases that make the public API changes (and thus this KIP) useful are
actually missing from the motivation section. It would be helpful to add
more details.

20) `StoreQueryParams` has a lot of getter methods that we usually don't
have for config objects (compare `Consumed`, `Produced`, `Materialized`,
etc). Is there any reason why we need to add those getters to the public
API?

30) The change to remove `KafkaStreams#store(...)` as introduced in
KIP-535 should be listed in sections Public API changes. Also, existing
methods should not be listed -- only changes. Hence, in
`KafkaStreams.java` only one new method and the `store()` method as
added via KIP-535 should be listed.

40) `QueryableStoreProvider` and `StreamThreadStateStoreProvider` are
internal classes and thus we can remove all changes to it from the KIP.


Thanks!


-Matthias



On 1/21/20 11:46 AM, Vinoth Chandar wrote:
> Chiming in a bit late here..
> 
> +1 This is a very valid improvement. Avoiding doing gets on irrelevant
> partitions will improve performance and efficiency for IQs.
> 
> As an incremental improvement to the current APIs, adding an option to
> filter stores based on partitions makes sense.
> 
> 
> 
> 
> 
> 
> 
> On Mon, Jan 20, 2020 at 3:13 AM Navinder Brar
>  wrote:
> 
>> Thanks John. If there are no other comments to be addressed, I will start
>> a vote today so that we are on track for this release.
>> ~Navinder
>>
>>
>> On Monday, January 20, 2020, 8:32 AM, John Roesler 
>> wrote:
>>
>> Thanks, Navinder,
>>
>> The Param object looks a bit different than I would have done, but it
>> certainly is explicit. We might have to deprecate those particular factory
>> methods and move to a builder pattern if we need to add any more options in
>> the future, but I’m fine with that possibility.
>>
>> The KIP also discusses some implementation details that aren’t necessary
>> here. We really only need to see the public interfaces. We can discuss the
>> implementation in the PR.
>>
>> That said, the public API part of the current proposal looks good to me! I
>> would be a +1 if you called for a vote.
>>
>> Thanks,
>> John
>>
>> On Sun, Jan 19, 2020, at 20:50, Navinder Brar wrote:
>>> I have made some edits in the KIP, please take another look. It would
>>> be great if we can push it in 2.5.0.
>>> ~Navinder
>>>
>>>
>>> On Sunday, January 19, 2020, 12:59 AM, Navinder Brar
>>>  wrote:
>>>
>>> Sure John, I will update the StoreQueryParams with static factory
>>> methods.
>>> @Ted, we would need to create taskId only in case a user provides one
>>> single partition. In case user wants to query all partitions of an
>>> instance the current code is good enough where we iterate over all
>>> stream threads and go over all taskIds to match the store. But in case
>>> a user requests for a single partition-based store, we need to create a
>>> taskId out of that partition and store name(using
>>> internalTopologyBuilder class) and match with the taskIds belonging to
>>> that instance. I will add the code in the KIP.
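>>>
>>> A rough sketch of that matching step, with simplified types (the real
>>> TaskId and topology lookup live in Kafka's internals):
>>>
>>> ```java
>>> import java.util.HashSet;
>>> import java.util.Objects;
>>> import java.util.Set;
>>>
>>> // Simplified sketch: build a TaskId from the subtopology that owns the
>>> // store plus the requested partition, then check whether it is among the
>>> // tasks assigned to this instance -- one lookup instead of iterating
>>> // every stream thread's tasks.
>>> final class TaskId {
>>>     final int topicGroupId; // id of the subtopology containing the store
>>>     final int partition;
>>>
>>>     TaskId(final int topicGroupId, final int partition) {
>>>         this.topicGroupId = topicGroupId;
>>>         this.partition = partition;
>>>     }
>>>
>>>     @Override
>>>     public boolean equals(final Object o) {
>>>         if (!(o instanceof TaskId)) return false;
>>>         final TaskId other = (TaskId) o;
>>>         return other.topicGroupId == topicGroupId && other.partition == partition;
>>>     }
>>>
>>>     @Override
>>>     public int hashCode() {
>>>         return Objects.hash(topicGroupId, partition);
>>>     }
>>>
>>>     static boolean hostsStorePartition(final Set<TaskId> localTasks,
>>>                                        final int storeSubtopology,
>>>                                        final int partition) {
>>>         return localTasks.contains(new TaskId(storeSubtopology, partition));
>>>     }
>>> }
>>> ```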
>>>
>>>    On Sunday, 19 January, 2020, 12:47:08 am IST, Ted Yu
>>>  wrote:
>>>
>>>  Looking at the current KIP-562:
>>>
>>> bq. Create a taskId from the combination of store name and partition
>>> provided by the user
>>>
>>> I wonder if a single taskId would be used for the “all partitions” case.
>>> If so, we need to choose a numerical value for the partition portion of
>> the
>>> taskId.
>>>
>>> On Sat, Jan 18, 2020 at 10:27 AM John Roesler 
>> wrote:
>>>
 Thanks, Ted!

 This makes sense, but it seems like we should lean towards explicit
 semantics in the public API. ‘-1’ meaning “all partitions” is
>> reasonable,
 but not explicit. That’s why I suggested the Boolean for “all
>> partitions”.
 I guess this

Build failed in Jenkins: kafka-2.4-jdk8 #131

2020-01-22 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9143: Log task reconfiguration error only when it happened 
(#7648)


--
[...truncated 5.76 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache

Re: [DISCUSS] KIP-519: Make SSL context/engine configuration extensible

2020-01-22 Thread Maulin Vasavada
Hi all,

Finally I squeezed time and I've a suggested code changes shown at
https://github.com/maulin-vasavada/kafka/pull/4/files for discussing this
further. I'll update the KIP document soon. Meanwhile, can you please take
a look and continue the discussion?

One challenge is at:
https://github.com/maulin-vasavada/kafka/pull/4/files#diff-1e3432211fdbb7b2e2b44b5d8838a40bR89

Thanks
Maulin


On Tue, Oct 22, 2019 at 11:13 PM Maulin Vasavada 
wrote:

> bump! Clement/Rajini? Any responses based on the latest posts?
>
> On Wed, Oct 16, 2019 at 10:58 PM Maulin Vasavada <
> maulin.vasav...@gmail.com> wrote:
>
>> bump!
>>
>> On Sun, Oct 13, 2019 at 11:16 PM Maulin Vasavada <
>> maulin.vasav...@gmail.com> wrote:
>>
>>> Hi Clement
>>>
>>> 1) existing validation code will remain in SslFactory
>>> 2) the createEngine() method in SslEngineBuilder will move to SslFactory
>>> and the client/server mode setting will go there (I documented this in the
>>> latest KIP update)
>>>
>>> In the current KIP I am proposing (as per the latest updates) to make
>>> SSLContext loading/configuration/creation pluggable. I am not suggesting we
>>> do/repeat anything that is already addressed by the existing Providers for
>>> SSLContext implementation. The createEngine() method (which will move to
>>> SslFactory) will call SslContextFactory.create() to get references to the
>>> SSLContext and then call SSLContext#createEngine(peer, host) and set
>>> client/server mode as it does today. I'll try to put that in a sequence
>>> diagram and update the KIP to make it clearer.
>>>
>>> So to your question about SslFactory returning SSLContext - I am saying
>>> register SslContextFactory interface to provide the SSLContext object
>>> instead and keep SslFactory more-or-less as it is today with some
>>> additional responsibility of createEngine() method.
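>>>
>>> The split could be sketched as follows; the interface name
>>> SslContextFactory and the method signatures are assumptions taken from
>>> this thread, not the final KIP-519 API:
>>>
>>> ```java
>>> import java.util.Map;
>>>
>>> import javax.net.ssl.SSLContext;
>>> import javax.net.ssl.SSLEngine;
>>>
>>> // The pluggable piece only builds the SSLContext; SslFactory keeps
>>> // createEngine() and the client/server-mode setting.
>>> interface SslContextFactory {
>>>     void configure(Map<String, ?> configs); // receives the ssl.* configs
>>>     SSLContext create();                    // loads key/trust material, builds the context
>>> }
>>>
>>> final class SslFactory {
>>>     private final SslContextFactory contextFactory;
>>>
>>>     SslFactory(final SslContextFactory contextFactory) {
>>>         this.contextFactory = contextFactory;
>>>     }
>>>
>>>     // createEngine() stays here so validation and mode-setting logic
>>>     // remain in one place regardless of which factory is plugged in.
>>>     SSLEngine createEngine(final String peerHost, final int peerPort,
>>>                            final boolean isClient) {
>>>         final SSLEngine engine =
>>>             contextFactory.create().createSSLEngine(peerHost, peerPort);
>>>         engine.setUseClientMode(isClient);
>>>         return engine;
>>>     }
>>> }
>>> ```
>>>
>>> Clement's use case of returning a previously created SSLContext then
>>> becomes an SslContextFactory whose create() simply returns that context.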
>>>
>>> Thanks
>>> Maulin
>>>
>>> Thanks
>>> Maulin
>>>
>>>
>>>
>>>
>>> On Fri, Oct 11, 2019 at 6:17 AM Pellerin, Clement <
>>> clement_pelle...@ibi.com> wrote:
>>>
 Can you clarify a few points for me?

 The two stumbling blocks we have are:
 1) reuse of the validation code in the existing SslFactory
 2) the client/server mode on the SSLEngine

 How do you deal with those issues in your new proposal?

 My use case is to register a custom SslFactory that returns an
 SSLContext previously created elsewhere in the application. Can your new
 proposal handle this use case?

 -Original Message-
 From: Maulin Vasavada [mailto:maulin.vasav...@gmail.com]
 Sent: Friday, October 11, 2019 2:13 AM
 To: dev@kafka.apache.org
 Subject: Re: [DISCUSS] KIP-519: Make SSL context/engine configuration
 extensible

 Check this out-

 https://github.com/apache/httpcomponents-core/blob/master/httpcore5/src/main/java/org/apache/hc/core5/ssl/SSLContextBuilder.java#L349

 This is exactly what I mean by using existing provider's SSLContext
 implementation and customizing it with our data points. Kafka's
 SslEngineBuilder is doing a similar thing right now.

 On Thu, Oct 10, 2019 at 11:06 PM Maulin Vasavada <
 maulin.vasav...@gmail.com>
 wrote:

 > You meant JSSE not JCE right? We are not talking about cryptographic
 > providers we are talking about ssl providers hence JSSE.
 >
 > I do understand how JSSE Providers work and also the impact of
 multiple
 > JSSE providers with same algorithms in same JVM along with sequencing
 > challenges for the same.
 >
 > Like you said- we need to allow customizing the configuration for
 > SSLContext, so how many ways do we have?
 >
 > Option-1: Write a custom JSSE Provider with our SSLContext
 >
 > Option-2: Use whichever SSLContext impl that you get from existing
 JSSE
 > Provider for SSLContext AND customize data for key material, trust
 material
 > AND secure random.
 >
 > Which one you prefer for this context?
 >
 > I feel we are making it complicated for no reason. It is very simple -
 > When we need to have SSL we need data points like 1) keys, 2) trust
 > certs, and 3) secure random, which are fed to the SSLContext, and we are
 > done. So we can
 > keep existing Kafka implementation as is by just making those data
 points
 > pluggable. Now SecureRandom is already pluggable via
 > 'ssl.secure.random.implementation' so that leaves us with keys and
 trusted
 > certs. For that purpose I raised KIP-486 BUT everybody feels we still
 need
 > higher level of pluggability hence this KIP.
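 >
 > For reference, those three data points map directly onto the standard
 > JSSE API; a minimal sketch using the JVM's default trust material and no
 > key material (a real deployment would plug in its own managers) might
 > look like:
 >
 > ```java
 > import java.security.KeyStore;
 > import java.security.SecureRandom;
 >
 > import javax.net.ssl.KeyManager;
 > import javax.net.ssl.SSLContext;
 > import javax.net.ssl.TrustManager;
 > import javax.net.ssl.TrustManagerFactory;
 >
 > // Take whatever SSLContext implementation the installed provider
 > // supplies and customize it with the three data points.
 > final class SslContextDemo {
 >
 >     static SSLContext build() {
 >         try {
 >             // Trust material: default trust managers backed by the JVM cacerts.
 >             final TrustManagerFactory tmf =
 >                 TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
 >             tmf.init((KeyStore) null);
 >             final TrustManager[] trustManagers = tmf.getTrustManagers();
 >
 >             // Key material: none (no client certificate in this sketch).
 >             final KeyManager[] keyManagers = null;
 >
 >             // Secure random: pluggable in Kafka via ssl.secure.random.implementation.
 >             final SecureRandom random = new SecureRandom();
 >
 >             final SSLContext ctx = SSLContext.getInstance("TLSv1.2");
 >             ctx.init(keyManagers, trustManagers, random);
 >             return ctx;
 >         } catch (final Exception e) {
 >             throw new IllegalStateException("Failed to build SSLContext", e);
 >         }
 >     }
 > }
 > ```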
 >
 > I've been taking advice from the domain experts and application security
 > teams, and to them it is very straightforward - make SSLContext
 > configuration/building pluggable and that's it!
 >
 > Thanks
 > Maulin
 >
 >
 >
 >
 >
 >
 >
 >
 >
 >
 > On Mon, Oct 7, 2019