Build failed in Jenkins: kafka-2.1-jdk8 #25

2018-10-15 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: updates docs for KIP-358 (#5796)

[wangguoz] MINOR: fix non-deterministic streams-scala tests (#5792)

[matthias] MINOR: Doc changes for KIP-324 (#5788)

[matthias] MINOR: doc changes for KIP-372 (#5790)

--
[...truncated 896.22 KB...]

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED


Re: Kafka 7382 - Guarantee atleast one replica of partition be alive during create topic.

2018-10-15 Thread Suman B N
Any update on this? It has been reviewed by one reviewer, but we need more
input on this one. Please take a look. Sorry to bother again and again.
Just nagging as documented.

On Monday, October 1, 2018, Suman B N  wrote:

> Any update on this? Should this be considered with KIP?
>
> On Wednesday, September 26, 2018, Suman B N  wrote:
>
>> Team,
>>
>> Issue KAFKA-7382  has
>> been fixed. Please review and merge. We need this fix in one of our
>> production clusters.
>>
>> Pull request is here .
>>
>> --
>> *Suman*
>> *OlaCabs*
>>
>
>
> --
> *Suman*
> *OlaCabs*
>
>

-- 
*Suman*
*OlaCabs*


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-15 Thread Abhimanyu Nagrath
Congratulations Manikumar

On Tue, Oct 16, 2018 at 10:09 AM Satish Duggana 
wrote:

> Congratulations Mani!
>
>
> On Fri, Oct 12, 2018 at 9:41 PM Colin McCabe  wrote:
> >
> > Congratulations, Manikumar!  Well done.
> >
> > best,
> > Colin
> >
> >
> > On Fri, Oct 12, 2018, at 01:25, Edoardo Comar wrote:
> > > Well done Manikumar !
> > > --
> > >
> > > Edoardo Comar
> > >
> > > IBM Event Streams
> > > IBM UK Ltd, Hursley Park, SO21 2JN
> > >
> > >
> > >
> > >
> > > From:   "Matthias J. Sax" 
> > > To: dev 
> > > Cc: users 
> > > Date:   11/10/2018 23:41
> > > Subject:Re: [ANNOUNCE] New Committer: Manikumar Reddy
> > >
> > >
> > >
> > > Congrats!
> > >
> > >
> > > On 10/11/18 2:31 PM, Yishun Guan wrote:
> > > > Congrats Manikumar!
> > > > On Thu, Oct 11, 2018 at 1:20 PM Sönke Liebau
> > > >  wrote:
> > > >>
> > > >> Great news, congratulations Manikumar!!
> > > >>
> > > >> On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian
> > > 
> > > >> wrote:
> > > >>
> > > >>> Congrats Manikumar!
> > > >>>
> > > >>> On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan <
> ryannedo...@gmail.com>
> > > >>> wrote:
> > > >>>
> > >  Bravo!
> > > 
> > >  On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma 
> > > wrote:
> > > 
> > > > Congratulations Manikumar! Thanks for your continued
> contributions.
> > > >
> > > > Ismael
> > > >
> > > > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson
> > > 
> > > > wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> The PMC for Apache Kafka has invited Manikumar Reddy as a
> committer
> > > >>> and
> > > > we
> > > >> are
> > > >> pleased to announce that he has accepted!
> > > >>
> > > >> Manikumar has contributed 134 commits including significant
> work to
> > > >>> add
> > > >> support for delegation tokens in Kafka:
> > > >>
> > > >> KIP-48:
> > > >>
> > > >>
> > > >
> > > 
> > > >>>
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > >
> > > >> KIP-249
> > > >> <
> > > >
> > > 
> > > >>>
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> > >
> > > >>
> > > >> :
> > > >>
> > > >>
> > > >
> > > 
> > > >>>
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> > >
> > > >>
> > > >> He has broad experience working with many of the core
> components in
> > >  Kafka
> > > >> and he has reviewed over 80 PRs. He has also made huge progress
> > > > addressing
> > > >> some of our technical debt.
> > > >>
> > > >> We appreciate the contributions and we are looking forward to
> more.
> > > >> Congrats Manikumar!
> > > >>
> > > >> Jason, on behalf of the Apache Kafka PMC
> > > >>
> > > >
> > > 
> > > >>>
> > > >>
> > > >>
> > > >> --
> > > >> Sönke Liebau
> > > >> Partner
> > > >> Tel. +49 179 7940878
> > > >> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel -
> Germany
> > >
> > > [attachment "signature.asc" deleted by Edoardo Comar/UK/IBM]
> > >
> > >
> > > Unless stated otherwise above:
> > > IBM United Kingdom Limited - Registered in England and Wales with
> number
> > > 741598.
> > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-15 Thread Satish Duggana
Congratulations Mani!


On Fri, Oct 12, 2018 at 9:41 PM Colin McCabe  wrote:
>
> Congratulations, Manikumar!  Well done.
>
> best,
> Colin
>
>
> On Fri, Oct 12, 2018, at 01:25, Edoardo Comar wrote:
> > Well done Manikumar !
> > --
> >
> > Edoardo Comar
> >
> > IBM Event Streams
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> >
> >
> >
> > From:   "Matthias J. Sax" 
> > To: dev 
> > Cc: users 
> > Date:   11/10/2018 23:41
> > Subject:Re: [ANNOUNCE] New Committer: Manikumar Reddy
> >
> >
> >
> > Congrats!
> >
> >
> > On 10/11/18 2:31 PM, Yishun Guan wrote:
> > > Congrats Manikumar!
> > > On Thu, Oct 11, 2018 at 1:20 PM Sönke Liebau
> > >  wrote:
> > >>
> > >> Great news, congratulations Manikumar!!
> > >>
> > >> On Thu, Oct 11, 2018 at 9:08 PM Vahid Hashemian
> > 
> > >> wrote:
> > >>
> > >>> Congrats Manikumar!
> > >>>
> > >>> On Thu, Oct 11, 2018 at 11:49 AM Ryanne Dolan 
> > >>> wrote:
> > >>>
> >  Bravo!
> > 
> >  On Thu, Oct 11, 2018 at 1:48 PM Ismael Juma 
> > wrote:
> > 
> > > Congratulations Manikumar! Thanks for your continued contributions.
> > >
> > > Ismael
> > >
> > > On Thu, Oct 11, 2018 at 10:39 AM Jason Gustafson
> > 
> > > wrote:
> > >
> > >> Hi all,
> > >>
> > >> The PMC for Apache Kafka has invited Manikumar Reddy as a committer
> > >>> and
> > > we
> > >> are
> > >> pleased to announce that he has accepted!
> > >>
> > >> Manikumar has contributed 134 commits including significant work to
> > >>> add
> > >> support for delegation tokens in Kafka:
> > >>
> > >> KIP-48:
> > >>
> > >>
> > >
> > 
> > >>>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> >
> > >> KIP-249
> > >> <
> > >
> > 
> > >>>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+KafkaKIP-249
> >
> > >>
> > >> :
> > >>
> > >>
> > >
> > 
> > >>>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> >
> > >>
> > >> He has broad experience working with many of the core components in
> >  Kafka
> > >> and he has reviewed over 80 PRs. He has also made huge progress
> > > addressing
> > >> some of our technical debt.
> > >>
> > >> We appreciate the contributions and we are looking forward to more.
> > >> Congrats Manikumar!
> > >>
> > >> Jason, on behalf of the Apache Kafka PMC
> > >>
> > >
> > 
> > >>>
> > >>
> > >>
> > >> --
> > >> Sönke Liebau
> > >> Partner
> > >> Tel. +49 179 7940878
> > >> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
> >
> > [attachment "signature.asc" deleted by Edoardo Comar/UK/IBM]
> >
> >
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Re: [DISCUSS] KIP-381 Connect: Tell about records that had their offsets flushed in callback

2018-10-15 Thread Konstantine Karantasis
This is a significant improvement to the semantics of source connectors.
I'm expecting that it will facilitate source connector implementations and
even enrich the application use cases that we see. I only have a few minor
suggestions at the moment.

I believe that 'Acked' is a common abbreviation for 'Acknowledged' and that we
could use it in this context, and this suggestion is coming from a big
proponent of complete words in variables and method names. Thus, feel free
to consider 'offsetsFlushedAndAcked' as well as 'recordSentAndAcked'. Since
this is a public interface, I'd also make the implementation-specific
comment that a Collection might be more versatile than a List as the
argument of offsetsFlushedAndAcknowledged.
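
To make the shape of this discussion concrete, the callbacks being debated
might look roughly like the sketch below. This is an illustration only: the
interface name and the SourceRecord element type are assumptions, not
something the KIP specifies.

import java.util.Collection;

import org.apache.kafka.connect.source.SourceRecord;

// Illustration only: the interface name is hypothetical and the element type
// (SourceRecord) is an assumption; the KIP defines the real signatures.
public interface AckAwareCallbacks {

    // Called once a single record has been sent to Kafka and acknowledged.
    void recordSentAndAcked(SourceRecord record);

    // Called once the offsets for a batch of records have been flushed and
    // acknowledged; Collection keeps the parameter open to any collection type.
    void offsetsFlushedAndAcked(Collection<SourceRecord> records);
}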

The rejected-alternatives section could use some of the material in the
original JIRA ticket, which is pretty insightful for understanding how
we arrived at this KIP. For example, I think it'd be useful to state
briefly why the 'commit' method is not being removed completely but is
instead replaced by 'offsetsFlushedAndAcked'. It would also be useful to
include in the motivation section some information on why and how a
source system could use these method calls to safely recycle data that has
definitely been imported into Kafka. I see this type of use case gaining
importance as Kafka is used more and more as the source of truth
and persistence layer for an increasing number of applications.

These suggestions, although not strictly required in order to move
forward with this improvement, can I believe help a lot in understanding
the context of this proposed change, without having to read the complete
history in the JIRA ticket.

Thanks for the KIP Per!

-Konstantine


On Wed, Oct 10, 2018 at 6:50 AM Per Steffensen  wrote:

> Please help make the proposed changes in KIP-381 become reality. Please
> comment.
>
> KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-381%3A+Connect%3A+Tell+about+records+that+had+their+offsets+flushed+in+callback
>
> JIRA: https://issues.apache.org/jira/browse/KAFKA-5716
>
> PR: https://github.com/apache/kafka/pull/3872
>
> Thanks!
>
>
>


Re: KAFKA-6654 custom SSLContext

2018-10-15 Thread Colin McCabe
Hi Clement,

Thanks for the clarification.  Perhaps a pluggable interface makes sense here.  
Maybe someone more familiar with the SSL code can comment.

best,
Colin


On Mon, Oct 15, 2018, at 19:53, Pellerin, Clement wrote:
> OK, I can see why passing an instance is not language neutral.
> All the libraries I can think of accept the SSLSocketFactory, but they 
> most likely don't support C or Python.
> 
> My exact use case is to reuse the SSLContext configured in my 
> application outside Kafka.
> I'm afraid no amount of extra configuration properties can achieve that.
> It appears the creator of KAFKA-6654 agrees with me.
> 
> I could solve my problem if I could convince SslChannelBuilder to create 
> my own SslFactory implementation.
> The Kafka config already contains properties that hold class names.
> Like I suggested before, we could have a property for the class name 
> that implements an SslFactory interface.
> I would also need to pass custom config parameters to my SslFactory 
> implementation without causing warnings.
> By default, the SslFactory implementation would be the one built into 
> Kafka which uses all the Kafka ssl properties.
> 
> Is that acceptable to resolve KAFKA-6654?
> Can you think of a better solution?
> 
> 
> -Original Message-
> From: Colin McCabe [mailto:cmcc...@apache.org] 
> Sent: Monday, October 15, 2018 7:58 PM
> To: dev@kafka.apache.org
> Subject: Re: KAFKA-6654 custom SSLContext
> 
> In general Kafka makes an effort to be language-neutral so that Kafka 
> clients can be implemented on platforms other than Java.  For example, 
> we have things like librdkafka which allow people to access Kafka from C 
> or Python.  Unless I'm misunderstanding something, giving direct access 
> to the SSLContext and SSLSocketFactory seems like it would make that 
> kind of compatibility harder, if it were even still possible.  I'm 
> curious if there's a way to do this by adding configuration entries for 
> what you need?
> 
> best,
> Colin
> 
> 
> On Mon, Oct 15, 2018, at 13:20, Pellerin, Clement wrote:
> > I am new to this mailing list. I am not sure what I should do next. 
> > Should I create a KIP to discuss this?
> > 
> > -Original Message-
> > From: Pellerin, Clement 
> > Sent: Wednesday, October 10, 2018 4:38 PM
> > To: dev@kafka.apache.org
> > Subject: KAFKA-6654 custom SSLContext
> > 
> > KAFKA-6654 correctly states that there will never be enough 
> > configuration parameters to fully configure the SSLContext/
> > SSLSocketFactory created by Kafka.
> > For example, in our case, we need an alias to choose the key in the 
> > keystore, and we need an implementation of OCSP.
> KAFKA-6654 suggests making the creation of the SSLContext a pluggable 
> implementation.
> > Maybe by declaring an interface and passing the name of an 
> > implementation class in a new parameter.
> > 
> > Many libraries solve this problem by accepting the SSLContextFactory 
> > instance from the application.
> > How about passing the instance as the value of a runtime configuration 
> > parameter?
> > If that parameter is set, all other ssl.* parameters would be ignored.
> > Obviously, this parameter could only be set programmatically.
> > 
> > I would like to hear the proposed solution by the Kafka maintainers.
> > 
> I can help implement a patch if there is an agreement on the desired 
> solution.


RE: KAFKA-6654 custom SSLContext

2018-10-15 Thread Pellerin, Clement
OK, I can see why passing an instance is not language neutral.
All the libraries I can think of accept the SSLSocketFactory, but they most 
likely don't support C or Python.

My exact use case is to reuse the SSLContext configured in my application 
outside Kafka.
I'm afraid no amount of extra configuration properties can achieve that.
It appears the creator of KAFKA-6654 agrees with me.

I could solve my problem if I could convince SslChannelBuilder to create my own 
SslFactory implementation.
The Kafka config already contains properties that hold class names.
Like I suggested before, we could have a property for the class name that 
implements an SslFactory interface.
I would also need to pass custom config parameters to my SslFactory 
implementation without causing warnings.
By default, the SslFactory implementation would be the one built into Kafka 
which uses all the Kafka ssl properties.

Is that acceptable to resolve KAFKA-6654?
Can you think of a better solution?
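
For illustration, a minimal sketch of such a pluggable factory could look
like the following. Every name here is hypothetical; nothing like this
exists in Kafka today, and the config key mentioned in the comment is just
an example.

import java.util.Map;

import javax.net.ssl.SSLContext;

// Hypothetical sketch only: neither this interface nor a config key such as
// "ssl.context.factory.class" exists in Kafka; it illustrates the idea of
// letting the channel builder instantiate a user-supplied factory by class name.
public interface CustomSslContextFactory {

    // Receives all ssl.* settings plus any extra keys the implementation needs.
    void configure(Map<String, ?> configs);

    // Returns the fully configured SSLContext used to create engines/sockets.
    SSLContext createSslContext();
}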


-Original Message-
From: Colin McCabe [mailto:cmcc...@apache.org] 
Sent: Monday, October 15, 2018 7:58 PM
To: dev@kafka.apache.org
Subject: Re: KAFKA-6654 custom SSLContext

In general Kafka makes an effort to be language-neutral so that Kafka clients 
can be implemented on platforms other than Java.  For example, we have things 
like librdkafka which allow people to access Kafka from C or Python.  Unless 
I'm misunderstanding something, giving direct access to the SSLContext and 
SSLSocketFactory seems like it would make that kind of compatibility harder, if 
it were even still possible.  I'm curious if there's a way to do this by adding 
configuration entries for what you need?

best,
Colin


On Mon, Oct 15, 2018, at 13:20, Pellerin, Clement wrote:
> I am new to this mailing list. I am not sure what I should do next. 
> Should I create a KIP to discuss this?
> 
> -Original Message-
> From: Pellerin, Clement 
> Sent: Wednesday, October 10, 2018 4:38 PM
> To: dev@kafka.apache.org
> Subject: KAFKA-6654 custom SSLContext
> 
> KAFKA-6654 correctly states that there will never be enough 
> configuration parameters to fully configure the SSLContext/
> SSLSocketFactory created by Kafka.
> For example, in our case, we need an alias to choose the key in the 
> keystore, and we need an implementation of OCSP.
> KAFKA-6654 suggests making the creation of the SSLContext a pluggable 
> implementation.
> Maybe by declaring an interface and passing the name of an 
> implementation class in a new parameter.
> 
> Many libraries solve this problem by accepting the SSLContextFactory 
> instance from the application.
> How about passing the instance as the value of a runtime configuration 
> parameter?
> If that parameter is set, all other ssl.* parameters would be ignored.
> Obviously, this parameter could only be set programmatically.
> 
> I would like to hear the proposed solution by the Kafka maintainers.
> 
> I can help implement a patch if there is an agreement on the desired 
> solution.


Re: [DISCUSS] KIP-354 Time-based log compaction policy

2018-10-15 Thread Mayuresh Gharat
Hi Wesley,

Thanks for the KIP and sorry for being late to the party.
 I wanted to understand the scenario you mentioned in Proposed Changes:

-
>
> Estimate the earliest message timestamp of an un-compacted log segment. We
> only need to estimate the earliest message timestamp for un-compacted log
> segments to ensure timely compaction because the deletion requests that
> belong to compacted segments have already been processed.
>
>1.
>
>for the first (earliest) log segment:  The estimated earliest
>timestamp is set to the timestamp of the first message if a timestamp is
>present in the message. Otherwise, the estimated earliest timestamp is set
>to "segment.largestTimestamp - maxSegmentMs”
> (segment.largestTimestamp is lastModified time of the log segment or max
>timestamp we see for the log segment). In the latter case, the actual
>timestamp of the first message might be later than the estimation, but it
>is safe to pick up the log for compaction earlier.
>
> When we say "actual timestamp of the first message might be later than the
estimation, but it is safe to pick up the log for compaction earlier.",
doesn't that violate the assumption that we will consider a segment for
compaction only if the time of creation the segment has crossed the "now -
maxCompactionLagMs" ?
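
For reference, the estimation quoted above could be sketched roughly as
below. This is only an illustration of the quoted text, not the KIP's actual
code; the LogSegment type and method names are stand-ins.

// Rough illustration of the earliest-timestamp estimate described in the
// quoted KIP text; LogSegment here is a stand-in, not Kafka's real class.
public class EarliestTimestampEstimate {

    interface LogSegment {
        long firstMessageTimestamp(); // -1 when the first message has no timestamp
        long largestTimestamp();      // max timestamp seen, or last-modified time
    }

    static long estimate(LogSegment firstSegment, long maxSegmentMs) {
        long first = firstSegment.firstMessageTimestamp();
        if (first >= 0) {
            return first;
        }
        // No timestamp available: use a conservative lower bound, which may be
        // earlier than the real first timestamp but is safe for compaction.
        return firstSegment.largestTimestamp() - maxSegmentMs;
    }
}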

Thanks,

Mayuresh

On Mon, Sep 3, 2018 at 7:28 PM Brett Rann  wrote:

> Might also be worth moving to a vote thread? Discussion seems to have gone
> as far as it can.
>
> > On 4 Sep 2018, at 12:08, xiongqi wu  wrote:
> >
> > Brett,
> >
> > Yes, I will post PR tomorrow.
> >
> > Xiongqi (Wesley) Wu
> >
> >
> > On Sun, Sep 2, 2018 at 6:28 PM Brett Rann 
> wrote:
> >
> > > +1 (non-binding) from me on the interface. I'd like to see someone
> familiar
> > > with
> > > the code comment on the approach, and note there's a couple of
> different
> > > approaches: what's documented in the KIP, and what Xiaohe Dong was
> working
> > > on
> > > here:
> > >
> > >
> https://github.com/dongxiaohe/kafka/tree/dongxiaohe/log-cleaner-compaction-max-lifetime-2.0
> > >
> > > If you have code working already Xiongqi Wu could you share a PR? I'd
> be
> > > happy
> > > to start testing.
> > >
> > > On Tue, Aug 28, 2018 at 5:57 AM xiongqi wu 
> wrote:
> > >
> > > > Hi All,
> > > >
> > > > Do you have any additional comments on this KIP?
> > > >
> > > >
> > > > On Thu, Aug 16, 2018 at 9:17 PM, xiongqi wu 
> wrote:
> > > >
> > > > > on 2)
> > > > > The offsetmap is built starting from dirty segment.
> > > > > The compaction starts from the beginning of the log partition.
> That's
> > > how
> > > > > it ensure the deletion of tomb keys.
> > > > > I will double check tomorrow.
> > > > >
> > > > > Xiongqi (Wesley) Wu
> > > > >
> > > > >
> > > > > On Thu, Aug 16, 2018 at 6:46 PM Brett Rann
> 
> > > > > wrote:
> > > > >
> > > > >> To just clarify a bit on 1. whether there's an external storage/DB
> > > isn't
> > > > >> relevant here.
> > > > >> Compacted topics allow a tombstone record to be sent (a null value
> > > for a
> > > > >> key) which
> > > > >> currently will result in old values for that key being deleted if
> some
> > > > >> conditions are met.
> > > > >> There are existing controls to make sure the old values will stay
> > > around
> > > > >> for a minimum
> > > > >> time at least, but no dedicated control to ensure the tombstone
> will
> > > > >> delete
> > > > >> within a
> > > > >> maximum time.
> > > > >>
> > > > >> One popular reason that maximum time for deletion is desirable
> right
> > > now
> > > > >> is
> > > > >> GDPR with
> > > > >> PII. But we're not proposing any GDPR awareness in kafka, just
> being
> > > > able
> > > > >> to guarantee
> > > > >> a max time where a tombstoned key will be removed from the
> compacted
> > > > >> topic.
> > > > >>
> > > > >> on 2)
> > > > >> huh, i thought it kept track of the first dirty segment and didn't
> > > > >> recompact older "clean" ones.
> > > > >> But I didn't look at code or test for that.
> > > > >>
> > > > >> On Fri, Aug 17, 2018 at 10:57 AM xiongqi wu 
> > > > wrote:
> > > > >>
> > > > >> > 1, Owner of data (in this sense, kafka is not the owner of data)
> data)
> > > > >> > should keep track of lifecycle of the data in some external
> > > > storage/DB.
> > > > >> > The owner determines when to delete the data and send the delete
> > > > >> request to
> > > > >> > kafka. Kafka doesn't know about the content of data but to
> provide a
> > > > >> mean
> > > > >> > for deletion.
> > > > >> >
> > > > >> > 2 , each time compaction runs, it will start from first
> segments (no
> > > > >> > matter if it is compacted or not). The time estimation here is
> only
> > > > used
> > > > >> > to determine whether we should run compaction on this log
> partition.
> > > > So
> > > > >> we
> > > > >> > only need to estimate uncompacted segments.
> > > > >> >
> > > > >> > On Thu, Aug 16, 2018 at 5:35 PM, Dong Lin 
> > > > wrote:
> > > > >> >
> > > > >> > > Hey Xiongqi,
> > > > >> > >

[jira] [Created] (KAFKA-7510) KStreams RecordCollectorImpl leaks data to logs on error

2018-10-15 Thread Mr Kafka (JIRA)
Mr Kafka created KAFKA-7510:
---

 Summary: KStreams RecordCollectorImpl leaks data to logs on error
 Key: KAFKA-7510
 URL: https://issues.apache.org/jira/browse/KAFKA-7510
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Mr Kafka


org.apache.kafka.streams.processor.internals.RecordCollectorImpl leaks data on 
error as it dumps the *value* / message payload to the logs.

This is problematic as it may write personally identifiable information (PII) 
or other secret information to plain-text log files, which can then be 
propagated to other log systems, e.g. Splunk.

I suggest the *key* and *value* fields be moved to debug level, as they are useful 
for some people, while the error level keeps the *errorMessage, timestamp, topic* 
and *stackTrace*.
{code:java}
private <K, V> void recordSendError(
    final K key,
    final V value,
    final Long timestamp,
    final String topic,
    final Exception exception
) {
    String errorLogMessage = LOG_MESSAGE;
    String errorMessage = EXCEPTION_MESSAGE;
    if (exception instanceof RetriableException) {
        errorLogMessage += PARAMETER_HINT;
        errorMessage += PARAMETER_HINT;
    }
    log.error(errorLogMessage, key, value, timestamp, topic, exception.toString());
    sendException = new StreamsException(
        String.format(
            errorMessage,
            logPrefix,
            "an error caught",
            key,
            value,
            timestamp,
            topic,
            exception.toString()
        ),
        exception);
}{code}
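
One possible shape for the suggested change, as a sketch only (it mirrors the
parameters of the method above; the log levels and message text are merely
illustrative, not an actual patch):

{code:java}
// Sketch of the suggestion: keep key/value out of the error-level line and
// log the payload at debug level only, so plain-text logs do not leak data.
private <K, V> void recordSendErrorWithoutPayload(
    final K key,
    final V value,
    final Long timestamp,
    final String topic,
    final Exception exception
) {
    log.error("Error sending record to topic {} at timestamp {}: {}. "
            + "Enable DEBUG logging to see the record key and value.",
            topic, timestamp, exception.toString());
    log.debug("Record that failed to send: key {}, value {}, topic {}.",
            key, value, topic);
}
{code}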



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: KAFKA-6654 custom SSLContext

2018-10-15 Thread Colin McCabe
In general Kafka makes an effort to be language-neutral so that Kafka clients 
can be implemented on platforms other than Java.  For example, we have things 
like librdkafka which allow people to access Kafka from C or Python.  Unless 
I'm misunderstanding something, giving direct access to the SSLContext and 
SSLSocketFactory seems like it would make that kind of compatibility harder, if 
it were even still possible.  I'm curious if there's a way to do this by adding 
configuration entries for what you need?

best,
Colin


On Mon, Oct 15, 2018, at 13:20, Pellerin, Clement wrote:
> I am new to this mailing list. I am not sure what I should do next. 
> Should I create a KIP to discuss this?
> 
> -Original Message-
> From: Pellerin, Clement 
> Sent: Wednesday, October 10, 2018 4:38 PM
> To: dev@kafka.apache.org
> Subject: KAFKA-6654 custom SSLContext
> 
> KAFKA-6654 correctly states that there will never be enough 
> configuration parameters to fully configure the SSLContext/
> SSLSocketFactory created by Kafka.
> For example, in our case, we need an alias to choose the key in the 
> keystore, and we need an implementation of OCSP.
> KAFKA-6654 suggests making the creation of the SSLContext a pluggable 
> implementation.
> Maybe by declaring an interface and passing the name of an 
> implementation class in a new parameter.
> 
> Many libraries solve this problem by accepting the SSLContextFactory 
> instance from the application.
> How about passing the instance as the value of a runtime configuration 
> parameter?
> If that parameter is set, all other ssl.* parameters would be ignored.
> Obviously, this parameter could only be set programmatically.
> 
> I would like to hear the proposed solution by the Kafka maintainers.
> 
> I can help implement a patch if there is an agreement on the desired 
> solution.


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-15 Thread Ryanne Dolan
Rhys, thanks for your enthusiasm!

In your example, us-west.us-east.us-central.us-west.topic is an invalid
"remote topic" name because us-west appears twice. MM2 will not replicate
us-east.us-central.us-west.topic into us-west a second time, because the
source topic already has us-west in the prefix. This is what I mean by
"cycle detection" -- cyclical replication does not result in infinite
recursion.

It's important to note that MM2 does NOT disallow these sorts of cycles; it
just knows how to deal with them properly.

Also notice this is done at the topic level, not per record. The records
don't need any special header or anything for this cycle detection
mechanism to work.
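
To make that concrete, the check is essentially a prefix test on the topic
name, roughly like the sketch below. This is an illustration only, not MM2's
actual code; the "." separator and all names here are assumptions.

import java.util.Arrays;
import java.util.List;

// Illustration of prefix-based cycle detection on "remote topic" names.
// "us-east.us-central.us-west.topic" is not replicated into us-west again
// because "us-west" already appears in the topic's prefix.
public class RemoteTopicCycleCheck {

    // True if replicating sourceTopic into targetCluster would re-introduce a
    // cluster alias that already appears in the topic's prefix.
    public static boolean wouldCreateCycle(String sourceTopic, String targetCluster) {
        String[] parts = sourceTopic.split("\\.");
        // Every element except the last (the original topic name) is a cluster alias.
        List<String> prefixClusters = Arrays.asList(parts).subList(0, Math.max(parts.length - 1, 0));
        return prefixClusters.contains(targetCluster);
    }

    public static void main(String[] args) {
        System.out.println(wouldCreateCycle("us-east.us-central.us-west.topic", "us-west")); // true
        System.out.println(wouldCreateCycle("us-central.us-west.topic", "us-east"));         // false
    }
}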

Thanks!
Ryanne

On Mon, Oct 15, 2018 at 3:40 PM McCaig, Rhys 
wrote:

> Hi Ryanne,
>
> This KIP is fantastic. It provides a great vision for how MirrorMaker
> should evolve in the Kafka project.
>
> I have a question on cycle detection - In a scenario where I have 3
> clusters replicating between each other, it seems it may be easy to
> misconfigure the connectors if auto topic creation is turned on so that
> records become replicated to increasingly longer topic names (until the
> topic name limit is reached). Consider clusters us-west, us-central,
> us-east:
>
> us-west: topic
> us-central: us-west.topic
> us-east: us-central.us-west.topic
> us-west: us-east.us-central.us-west.topic
> us-central: us-west.us-east.us-central.us-west.topic
>
> I’m not sure whether this scenario would actually justify implementing
> additional measures to avoid such a configuration, rather than ensuring
> that the documentation is clear on how to avoid such scenarios - would be
> good to hear what others think on this.
>
> Excited to see the discussion on this one.
>
> Rhys
>
> > On Oct 15, 2018, at 9:16 AM, Ryanne Dolan  wrote:
> >
> > Hey y'all!
> >
> > Please take a look at KIP-382:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
> >
> > Thanks for your feedback and support.
> >
> > Ryanne
>
>


[jira] [Created] (KAFKA-7509) Kafka Connect logs unnecessary warnings about unused configurations

2018-10-15 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-7509:


 Summary: Kafka Connect logs unnecessary warnings about unused 
configurations
 Key: KAFKA-7509
 URL: https://issues.apache.org/jira/browse/KAFKA-7509
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 0.10.2.0
Reporter: Randall Hauch
Assignee: Randall Hauch


When running Connect, the logs contain quite a few warnings about "The 
configuration '{}' was supplied but isn't a known config." This occurs when 
Connect creates producers, consumers, and admin clients, because the 
AbstractConfig is logging unused configuration properties upon construction. 
It's complicated by the fact that the Producer, Consumer, and AdminClient all 
create their own AbstractConfig instances within the constructor, so we can't 
even call its {{ignore(String key)}} method.

See also KAFKA-6793 for a similar issue with Streams.

There are no arguments in the Producer, Consumer, or AdminClient constructors 
to control whether the configs log these warnings, so a simpler workaround is 
to pass the Producer, Consumer, and AdminClient only those configuration 
properties that the ProducerConfig, ConsumerConfig, and AdminClientConfig 
ConfigDefs know about.
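
A sketch of that workaround (illustrative only; it relies on the public 
ProducerConfig.configNames() accessor, and the equivalent would apply for 
ConsumerConfig and AdminClientConfig):

{code:java}
// Illustrative sketch: keep only the keys ProducerConfig defines, so the
// producer's AbstractConfig has no unknown config left to warn about.
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;

public class KnownConfigFilter {

    public static Map<String, Object> retainKnownProducerConfigs(Map<String, Object> workerConfigs) {
        Map<String, Object> filtered = new HashMap<>();
        for (Map.Entry<String, Object> entry : workerConfigs.entrySet()) {
            if (ProducerConfig.configNames().contains(entry.getKey())) {
                filtered.put(entry.getKey(), entry.getValue());
            }
        }
        return filtered;
    }
}
{code}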





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-15 Thread McCaig, Rhys
Hi Ryanne,

This KIP is fantastic. It provides a great vision for how MirrorMaker should 
evolve in the Kafka project.

I have a question on cycle detection - In a scenario where I have 3 clusters 
replicating between each other, it seems it may be easy to misconfigure the 
connectors if auto topic creation is turned on so that records become 
replicated to increasingly longer topic names (until the topic name limit is 
reached). Consider clusters us-west, us-central, us-east:

us-west: topic
us-central: us-west.topic
us-east: us-central.us-west.topic
us-west: us-east.us-central.us-west.topic
us-central: us-west.us-east.us-central.us-west.topic

I’m not sure whether this scenario would actually justify implementing 
additional measures to avoid such a configuration, rather than ensuring that 
the documentation is clear on how to avoid such scenarios - would be good to 
hear what others think on this.

Excited to see the discussion on this one.

Rhys

> On Oct 15, 2018, at 9:16 AM, Ryanne Dolan  wrote:
> 
> Hey y'all!
> 
> Please take a look at KIP-382:
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0
> 
> Thanks for your feedback and support.
> 
> Ryanne



RE: KAFKA-6654 custom SSLContext

2018-10-15 Thread Pellerin, Clement
I am new to this mailing list. I am not sure what I should do next. Should I 
create a KIP to discuss this?

-Original Message-
From: Pellerin, Clement 
Sent: Wednesday, October 10, 2018 4:38 PM
To: dev@kafka.apache.org
Subject: KAFKA-6654 custom SSLContext

KAFKA-6654 correctly states that there will never be enough configuration 
parameters to fully configure the SSLContext/SSLSocketFactory created by Kafka.
For example, in our case, we need an alias to choose the key in the keystore, 
and we need an implementation of OCSP.
KAFKA-6654 suggests making the creation of the SSLContext a pluggable 
implementation.
Maybe by declaring an interface and passing the name of an implementation class 
in a new parameter.

Many libraries solve this problem by accepting the SSLContextFactory instance 
from the application.
How about passing the instance as the value of a runtime configuration 
parameter?
If that parameter is set, all other ssl.* parameters would be ignored.
Obviously, this parameter could only be set programmatically.

I would like to hear the proposed solution by the Kafka maintainers.

I can help implement a patch if there is an agreement on the desired 
solution.


[jira] [Created] (KAFKA-7508) Kafka broker anonymous disconnected from Zookeeper

2018-10-15 Thread Sathish Yanamala (JIRA)
Sathish Yanamala created KAFKA-7508:
---

 Summary: Kafka broker anonymous disconnected from Zookeeper
 Key: KAFKA-7508
 URL: https://issues.apache.org/jira/browse/KAFKA-7508
 Project: Kafka
  Issue Type: Task
  Components: config
Reporter: Sathish Yanamala


Hello Team,

 

We are facing the error below; the Kafka broker is unable to connect to 
Zookeeper. Please check and suggest whether any configuration changes are 
required on the Kafka broker.

 

 ERROR:

2018-10-15 12:24:07,502 WARN kafka.network.Processor: Attempting to send 
response via channel for which there is no open connection, connection id 
- -:9093-- -:47542-25929

2018-10-15 12:24:09,428 INFO kafka.coordinator.group.GroupCoordinator: 
[GroupCoordinator 3]: Group KMOffsetCache-xxx  with generation 9 is now 
empty (__consumer_offsets-22)

2018-10-15 12:24:09,428 INFO kafka.server.epoch.LeaderEpochFileCache: Updated 
PartitionLeaderEpoch. New: \{epoch:1262, offset:151}, Current: \{epoch:1261, 
offset144} for Partition: __consumer_offsets-22. Cache now contains 15 entries.

2018-10-15 12:24:10,905 ERROR kafka.utils.KafkaScheduler: 
Uncaught exception in scheduled task 'highwatermark-checkpoint'

java.lang.OutOfMemoryError: Java heap space

    at 
scala.collection.convert.DecorateAsScala$$Lambda$214/x.get$Lambda(Unknown 
Source)

    at 
java.lang.invoke.LambdaForm$DMH/xxx.invokeStatic_LL_L(LambdaForm$DMH)

    at 
java.lang.invoke.LambdaForm$MH/xx.linkToTargetMethod(LambdaForm$MH)

    at 
scala.collection.convert.DecorateAsScala.collectionAsScalaIterableConverter(DecorateAsScala.scala:45)

    at 
scala.collection.convert.DecorateAsScala.collectionAsScalaIterableConverter$(DecorateAsScala.scala:44)

    at 
scala.collection.JavaConverters$.collectionAsScalaIterableConverter(JavaConverters.scala:73)

    at kafka.utils.Pool.values(Pool.scala:85)

    at 
kafka.server.ReplicaManager.nonOfflinePartitionsIterator(ReplicaManager.scala:397)

    at 
kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1340)

    at 
kafka.server.ReplicaManager.$anonfun$startHighWaterMarksCheckPointThread$1(ReplicaManager.scala:253)

    at kafka.server.ReplicaManager$$Lambda$608/xx.apply$mcV$sp(Unknown 
Source)

    at 
kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)

    at kafka.utils.KafkaScheduler$$Lambda$269/.apply$mcV$sp(Unknown 
Source)

    at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)

    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)

    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)

    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)

    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

 

Thank you,

Sathish Yanamala

M:832-382-4487



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3140

2018-10-15 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7498: Remove references from `common.requests` to `clients`

--
[...truncated 2.83 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED


[DISCUSS] KIP-382: MirrorMaker 2.0

2018-10-15 Thread Ryanne Dolan
Hey y'all!

Please take a look at KIP-382:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0

Thanks for your feedback and support.

Ryanne


[jira] [Created] (KAFKA-7507) Kafka Topics Error : "NotLeaderForPartitionException: This server is not the leader for that topic-partition"

2018-10-15 Thread Sathish Yanamala (JIRA)
Sathish Yanamala created KAFKA-7507:
---

 Summary: Kafka Topics Error : "NotLeaderForPartitionException: 
This server is not the leader for that topic-partition"
 Key: KAFKA-7507
 URL: https://issues.apache.org/jira/browse/KAFKA-7507
 Project: Kafka
  Issue Type: Bug
Reporter: Sathish Yanamala


Hello Team,

We are facing the error below on an existing application; this is the first 
time we have seen this error in our application.

Please suggest whether any configuration changes are required for the issue 
below. I just reviewed some JIRA stories related to it.

We are running our environment with 5 brokers; every topic uses a replication 
factor of 3 and 6 partitions, and we have a Zookeeper environment.

 

 +*Error Log :*+

2018-10-15 03:42:16,596 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
__consumer_offsets-41 to broker 
1:org.apache.kafka.common.errors.NotLeaderForPartitionException: 
This server is not the leader for that topic-partition.

2018-10-15 03:42:16,606 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=4, fetcherId=0] Error for partition 
CSL_SUR_DL_LXNX-3 to broker 
4:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,608 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
CSL_SUR_DL_SOC_EAS-4 to broker 
1:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,609 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Error for partition 
__consumer_offsets-26 to broker 
3:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,613 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=5, fetcherId=0] Error for partition 
EDL_Datashare_Genesys_Request-3 to broker 
5:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,613 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=5, fetcherId=0] Error for partition 
CSL_TRANSMIT_Data-3 to broker 
5:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,613 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=4, fetcherId=0] Error for partition 
CHR_ResOrch_03-0 to broker 
4:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,616 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
CSL_SUR_LXNX_REQUEST-4 to broker 
1:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,616 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=4, fetcherId=0] Error for partition 
EDL_Datashare_JCP_GPAs-4 to broker 
4:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,617 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=5, fetcherId=0] Error for partition 
CHR_ResOrch_03-5 to broker 
5:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,619 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Error for partition 
CHR_ResOrch-0 to broker 
3:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,621 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
__consumer_offsets-21 to broker 
1:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,623 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=3, fetcherId=0] Error for partition 
CHR_ResOrch_05-3 to broker 
3:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition.

2018-10-15 03:42:16,624 ERROR kafka.server.ReplicaFetcherThread: 
[ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition 
CHR_DataCompOrch-5 to broker 
1:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that 

[jira] [Resolved] (KAFKA-7498) common.requests.CreatePartitionsRequest uses clients.admin.NewPartitions

2018-10-15 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-7498.
---
Resolution: Fixed
  Reviewer: Ismael Juma

> common.requests.CreatePartitionsRequest uses clients.admin.NewPartitions
> 
>
> Key: KAFKA-7498
> URL: https://issues.apache.org/jira/browse/KAFKA-7498
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.1.0
>
>
> `org.apache.kafka.common.requests.CreatePartitionsRequest` currently uses 
> `org.apache.kafka.clients.admin.NewPartitions`. We shouldn't have references 
> from `common` to `clients`. Since `org.apache.kafka.clients.admin` is a 
> public package, we cannot use a common class for Admin API and requests. So 
> we should do something similar to CreateTopicsRequest for which we have 
> `org.apache.kafka.clients.admin.NewTopic` class used for the admin API and an 
> equivalent 
> `org.apache.kafka.common.requests.CreateTopicsRequest.TopicDetails` class 
> that doesn't refer to `clients.admin`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3139

2018-10-15 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update reverse lookup test to work when ipv6 not enabled (#5797)

--
[...truncated 2.84 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED


[jira] [Created] (KAFKA-7506) KafkaStreams repartition topic settings not suitable for processing old records

2018-10-15 Thread JIRA
Niklas Lönn created KAFKA-7506:
--

 Summary: KafkaStreams repartition topic settings not suitable for 
processing old records
 Key: KAFKA-7506
 URL: https://issues.apache.org/jira/browse/KAFKA-7506
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.1.0
Reporter: Niklas Lönn


Hi, we are using Kafka Streams to process a compacted store. When resetting the 
application and reprocessing from scratch, the default topic configuration for 
repartition topics is 50MB and 10min segment sizes.

 

As retention.ms is undefined, the default retention.ms applies and the log 
cleaner starts competing with the application, effectively causing the streams 
app to skip records.

The application logs the following:

Fetch offset 213792 is out of range for partition 
app-id-KTABLE-AGGREGATE-STATE-STORE-15-repartition-7, resetting offset
Fetch offset 110227 is out of range for partition 
app-id-KTABLE-AGGREGATE-STATE-STORE-15-repartition-2, resetting offset
Resetting offset for partition 
app-id-KTABLE-AGGREGATE-STATE-STORE-15-repartition-7 to offset 233302.
Resetting offset for partition 
app-id-KTABLE-AGGREGATE-STATE-STORE-15-repartition-2 to offset 119914.

By adding the following configuration to RepartitionTopicConfig.java the issue 
is solved:

tempTopicDefaultOverrides.put(TopicConfig.RETENTION_MS_CONFIG, "-1"); // Infinite

 
 My understanding is that this should be safe as KafkaStreams uses the admin 
API to delete segments.
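
Until the default changes, a possible application-side workaround would be the 
sketch below. This assumes the topic-level overrides passed via 
StreamsConfig.topicPrefix() are applied to the repartition topics the 
application creates; it is an illustration, not a verified fix.

{code:java}
// Possible workaround sketch: request unlimited retention for the internal
// topics this application creates, via the "topic." config prefix.
import java.util.Properties;

import org.apache.kafka.common.config.TopicConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsRetentionOverride {

    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "app-id");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Topic-level override forwarded to internal topic creation.
        props.put(StreamsConfig.topicPrefix(TopicConfig.RETENTION_MS_CONFIG), "-1");
        return props;
    }
}
{code}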
  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)