Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-03-27 Thread Guozhang Wang
Hello Colin, Ismael,

Thanks for the feedback; it is quite helpful. Just to provide some
context here about OffsetFetch:

1) When building the offset-fetch request, we auto-"downgrade" the
request by resetting the requireStable flag when the broker-supported
version is < 7.

https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java#L82

2) The newly introduced config value "eos-beta" in KIP-447 requires brokers
to be on version 2.5 or later, and if the broker is not on a newer version, we
need to let the client fail fast. To do so, an internal consumer config is
exposed to Streams, which allows the build function to throw
UnsupportedVersionException instead of auto-downgrading when building an
offset-fetch request with the requireStable flag. This is only turned on
by Streams, so that if the broker is not on a newer version when "eos-beta" is
enabled on the client, we fail fast.
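
To make 1) and 2) concrete, here is a minimal, hypothetical sketch of that
build-time decision; the surrounding class is stripped down for illustration
and the flag name is approximate, but it follows the trunk code linked above:

    import org.apache.kafka.common.errors.UnsupportedVersionException;

    // Simplified sketch of what happens when the offset-fetch request is built
    // against a broker whose highest supported OffsetFetch version is < 7.
    final class OffsetFetchBuildSketch {
        // only set to true by Streams when "eos-beta" (exactly_once_beta) is enabled
        private final boolean throwOnFetchStableOffsetsUnsupported;

        OffsetFetchBuildSketch(boolean throwOnFetchStableOffsetsUnsupported) {
            this.throwOnFetchStableOffsetsUnsupported = throwOnFetchStableOffsetsUnsupported;
        }

        boolean effectiveRequireStable(boolean requireStable, short brokerMaxVersion) {
            if (requireStable && brokerMaxVersion < 7) {
                if (throwOnFetchStableOffsetsUnsupported) {
                    // eos-beta enabled: fail fast before any record gets processed
                    throw new UnsupportedVersionException(
                        "Broker does not support the requireStable flag (OffsetFetch v7+ needed)");
                }
                // default consumer behavior: silently drop the flag, i.e. auto-downgrade
                return false;
            }
            return requireStable;
        }
    }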

In that sense, a new client would not use the old RPC when talking to the
brokers -- the key point is that, upon starting a task, the offset fetch is
sent first, before we start processing records, so that's where we should stop
the client right away instead of silently reading unstable offsets, which would
already break the fencing guarantees if producers were not fenced on the txn
coordinator. In other words, we think that letting the overloaded
producer#sendOffsetsToTxn throw UnsupportedVersionException is already
too late, and hence we should not rely on that API to let clients detect
whether brokers are on older versions.

For KIP-98, however, letting producer#initTxn throw UnsupportedVersionException
makes total sense, since that's the first API triggered before producers try
to send any records.

3) With that in mind, I think letting the overloaded
producer#sendOffsetsToTxn not throw UnsupportedVersionException is appropriate,
since KIP-447 only applies to a consume -> process -> produce scenario and
does not apply to a producer-only transactional messaging scenario.
Instead, the consumer API calls that are triggered earlier would detect older
brokers. A side benefit of this is that, on the caller side (a.k.a. Streams),
we do not need to first detect the broker version and then decide which
overloaded `producer#sendOffsetsToTxn` to call based on that, which also
means that we may eventually deprecate the old overloaded function one day.
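
As an illustration of that single call path on the caller side, here is a
minimal, hypothetical helper (error handling and the actual Streams plumbing
are omitted):

    import java.util.Map;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.TopicPartition;

    final class CommitSketch {
        // The caller always uses the new overload; older brokers are detected earlier,
        // when the consumer builds its offset-fetch request (see point 2 above).
        static void commitOffsetsInTransaction(KafkaProducer<byte[], byte[]> producer,
                                               KafkaConsumer<byte[], byte[]> consumer,
                                               Map<TopicPartition, OffsetAndMetadata> offsets) {
            producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
            producer.commitTransaction();
        }
    }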


Please let me know if you have any questions regarding this rationale, and
we can continue the discussion from there.


Guozhang



On Fri, Mar 27, 2020 at 9:21 PM Ismael Juma  wrote:

> I'm a bit puzzled. We added this feature because we thought it was useful.
> And now we are saying that you don't know if you can rely on it since the
> downgrade happens silently. Can you provide more context on the OffsetFetch
> downgrade? What is the implication of that?
>
> Ismael
>
> On Fri, Mar 27, 2020 at 7:30 PM Boyang Chen 
> wrote:
>
> > Thanks Colin, I think the point of this change is to make the new client
> > experience better while working with old brokers, when the upgrade is
> > happening on client side only. For Streams, it is very common to have
> more
> > advanced client version. On code level, the path to call transaction
> commit
> > is better to be unified instead of diverged with try-catch. As a matter
> of
> > fact, we have already implemented the downgrade of the OffsetFetch
> protocol
> > to adapt this broker version incompatibility. New transaction commit
> > protocol thus behaves inconsistent with what we did with other protocols.
> >
> > In terms of the impact for this last minute change, silent downgrade has
> > one downside which is the customized EOS applications could not alarm the
> > user of semantic violation anymore by simply crashing. To be honest, I'm
> > not sure how severe this would affect customized community usages, as we
> > don't have user stats for that yet :)
> >
> > Boyang
> >
> > On Fri, Mar 27, 2020 at 6:32 PM Colin McCabe  wrote:
> >
> > > On Fri, Mar 27, 2020, at 18:29, Colin McCabe wrote:
> > > > On Fri, Mar 27, 2020, at 16:06, Boyang Chen wrote:
> > > > > Hey there,
> > > > >
> > > > > we would like to address an improvement on the
> > > > > *Producer#sendOffsetsToTransaction(offsets,
> > > > > groupMetadata) *API. Previously we assume the calling of this
> > function
> > > > > would crash the app if broker is not on version 2.5.0 or higher. As
> > > Streams
> > > > > side change was rolling out, the disadvantage of this behavior
> > becomes
> > > > > obvious, since on the application level it is hard to detect broker
> > > version
> > > > > and do separate call paths for old and new transaction commit
> > > protocols.
> > > >
> > > > Hi Boyang,
> > > >
> > > > Applications are never supposed to detect broker versions, but they
> may
> > > > be required to detect the presence or absence of a feature.  That
> would
> > > > happen if a new feature was introduced that couldn't be emulated with
> > > > 

Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-03-27 Thread Ismael Juma
I'm a bit puzzled. We added this feature because we thought it was useful.
And now we are saying that you don't know if you can rely on it since the
downgrade happens silently. Can you provide more context on the OffsetFetch
downgrade? What is the implication of that?

Ismael

On Fri, Mar 27, 2020 at 7:30 PM Boyang Chen 
wrote:

> Thanks Colin, I think the point of this change is to make the new client
> experience better while working with old brokers, when the upgrade is
> happening on client side only. For Streams, it is very common to have more
> advanced client version. On code level, the path to call transaction commit
> is better to be unified instead of diverged with try-catch. As a matter of
> fact, we have already implemented the downgrade of the OffsetFetch protocol
> to adapt this broker version incompatibility. New transaction commit
> protocol thus behaves inconsistent with what we did with other protocols.
>
> In terms of the impact for this last minute change, silent downgrade has
> one downside which is the customized EOS applications could not alarm the
> user of semantic violation anymore by simply crashing. To be honest, I'm
> not sure how severe this would affect customized community usages, as we
> don't have user stats for that yet :)
>
> Boyang
>
> On Fri, Mar 27, 2020 at 6:32 PM Colin McCabe  wrote:
>
> > On Fri, Mar 27, 2020, at 18:29, Colin McCabe wrote:
> > > On Fri, Mar 27, 2020, at 16:06, Boyang Chen wrote:
> > > > Hey there,
> > > >
> > > > we would like to address an improvement on the
> > > > *Producer#sendOffsetsToTransaction(offsets,
> > > > groupMetadata) *API. Previously we assume the calling of this
> function
> > > > would crash the app if broker is not on version 2.5.0 or higher. As
> > Streams
> > > > side change was rolling out, the disadvantage of this behavior
> becomes
> > > > obvious, since on the application level it is hard to detect broker
> > version
> > > > and do separate call paths for old and new transaction commit
> > protocols.
> > >
> > > Hi Boyang,
> > >
> > > Applications are never supposed to detect broker versions, but they may
> > > be required to detect the presence or absence of a feature.  That would
> > > happen if a new feature was introduced that couldn't be emulated with
> > > some combination of the features that previously existed.
> > >
> > > That was the case for the original EOS work.  For example,
> > > initTransactions will throw UnsupportedVersionException if the producer
> > > tries to use it, but the broker doesn't support it.
> > >
> > > >
> > > > Thus, we are planning to change the behavior from crashing to
> silently
> > > > downgrade to the older transactional offset commit protocol by
> > resetting
> > > > the consumer group metadata fields. Details in this PR
> > > > .
> > > >
> > >
> > > The big question is will using the old client code with the old RPC be
> > > less safe than using the old client code with the old RPC?  If it will,
> > > there's a strong case to be made that streams should catch the
> > > UnsupportedVersionException and fall back to its old behavior.
> >
> > Sorry, that should read "new client code with old RPC".  Basically, are
> we
> > making things worse for users with older brokers and newer clients?
> >
> > Colin
> >
> > >
> > > best,
> > > Colin
> > >
> > > >
> > > > Let me know if you have any questions, thanks!
> > > >
> > > > On Thu, Mar 26, 2020 at 12:45 PM Matthias J. Sax 
> > wrote:
> > > >
> > > > > One more change for KIP-447.
> > > > >
> > > > > Currently, Kafka Streams collects task-level metrics called
> > > > > "commit-latency-[max|avg]". However, with KIP-447 tasks don't get
> > > > > committed individually any longer, and thus this metrics do not
> make
> > > > > sense going forward.
> > > > >
> > > > > Therefore, we propose to remove those metrics in 2.6 release.
> > > > > Deprecation does not make much sense, as there is just nothing to
> be
> > > > > recorded in a useful way any longer.
> > > > >
> > > > > I also updated the upgrade path section, as the upgrade path is
> > actually
> > > > > simpler as described originally.
> > > > >
> > > > >
> > > > > If there are any concerns, please let us know!
> > > > >
> > > > >
> > > > > -Matthias
> > > > >
> > > > >
> > > > > On 3/5/20 12:56 PM, Matthias J. Sax wrote:
> > > > > > There is one more change to this KIP for the upgrade path of
> Kafka
> > > > > > Streams applications:
> > > > > >
> > > > > > We cannot detect broker versions reliable, and thus, we need
> users
> > to
> > > > > > manually opt-in to the feature. Thus, we need to add a third
> value
> > for
> > > > > > configuration parameter `processing.guarantee` that we call
> > > > > > `exactly_once_beta` -- specifying this config will enable the
> > producer
> > > > > > per thread design.
> > > > > >
> > > > > > I updated the KIP accordingly. Please let us know if there are
> any
> > > > > > concerns.
> > > > > >
> > > > > >
> > > 

Build failed in Jenkins: kafka-trunk-jdk8 #4377

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9770: Close underlying state store also when flush throws (#8368)


--
[...truncated 2.95 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task 

Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-03-27 Thread Boyang Chen
Thanks Colin, I think the point of this change is to make the new client
experience better when working with old brokers, i.e., when the upgrade is
happening on the client side only. For Streams, it is very common to have a more
advanced client version than the brokers. At the code level, the transaction
commit path is better unified instead of diverged with a try-catch. As a matter
of fact, we have already implemented the downgrade of the OffsetFetch protocol
to adapt to this broker version incompatibility; the new transaction commit
protocol would thus behave inconsistently with what we did for other protocols.

In terms of the impact of this last-minute change, silent downgrade has
one downside: customized EOS applications can no longer alert the user of a
semantic violation by simply crashing. To be honest, I'm not sure how severely
this would affect customized community usages, as we don't have user stats for
that yet :)

Boyang

On Fri, Mar 27, 2020 at 6:32 PM Colin McCabe  wrote:

> On Fri, Mar 27, 2020, at 18:29, Colin McCabe wrote:
> > On Fri, Mar 27, 2020, at 16:06, Boyang Chen wrote:
> > > Hey there,
> > >
> > > we would like to address an improvement on the
> > > *Producer#sendOffsetsToTransaction(offsets,
> > > groupMetadata) *API. Previously we assume the calling of this function
> > > would crash the app if broker is not on version 2.5.0 or higher. As
> Streams
> > > side change was rolling out, the disadvantage of this behavior becomes
> > > obvious, since on the application level it is hard to detect broker
> version
> > > and do separate call paths for old and new transaction commit
> protocols.
> >
> > Hi Boyang,
> >
> > Applications are never supposed to detect broker versions, but they may
> > be required to detect the presence or absence of a feature.  That would
> > happen if a new feature was introduced that couldn't be emulated with
> > some combination of the features that previously existed.
> >
> > That was the case for the original EOS work.  For example,
> > initTransactions will throw UnsupportedVersionException if the producer
> > tries to use it, but the broker doesn't support it.
> >
> > >
> > > Thus, we are planning to change the behavior from crashing to silently
> > > downgrade to the older transactional offset commit protocol by
> resetting
> > > the consumer group metadata fields. Details in this PR
> > > .
> > >
> >
> > The big question is will using the old client code with the old RPC be
> > less safe than using the old client code with the old RPC?  If it will,
> > there's a strong case to be made that streams should catch the
> > UnsupportedVersionException and fall back to its old behavior.
>
> Sorry, that should read "new client code with old RPC".  Basically, are we
> making things worse for users with older brokers and newer clients?
>
> Colin
>
> >
> > best,
> > Colin
> >
> > >
> > > Let me know if you have any questions, thanks!
> > >
> > > On Thu, Mar 26, 2020 at 12:45 PM Matthias J. Sax 
> wrote:
> > >
> > > > One more change for KIP-447.
> > > >
> > > > Currently, Kafka Streams collects task-level metrics called
> > > > "commit-latency-[max|avg]". However, with KIP-447 tasks don't get
> > > > committed individually any longer, and thus this metrics do not make
> > > > sense going forward.
> > > >
> > > > Therefore, we propose to remove those metrics in 2.6 release.
> > > > Deprecation does not make much sense, as there is just nothing to be
> > > > recorded in a useful way any longer.
> > > >
> > > > I also updated the upgrade path section, as the upgrade path is
> actually
> > > > simpler as described originally.
> > > >
> > > >
> > > > If there are any concerns, please let us know!
> > > >
> > > >
> > > > -Matthias
> > > >
> > > >
> > > > On 3/5/20 12:56 PM, Matthias J. Sax wrote:
> > > > > There is one more change to this KIP for the upgrade path of Kafka
> > > > > Streams applications:
> > > > >
> > > > > We cannot detect broker versions reliable, and thus, we need users
> to
> > > > > manually opt-in to the feature. Thus, we need to add a third value
> for
> > > > > configuration parameter `processing.guarantee` that we call
> > > > > `exactly_once_beta` -- specifying this config will enable the
> producer
> > > > > per thread design.
> > > > >
> > > > > I updated the KIP accordingly. Please let us know if there are any
> > > > > concerns.
> > > > >
> > > > >
> > > > > -Matthias
> > > > >
> > > > > On 2/11/20 4:44 PM, Guozhang Wang wrote:
> > > > >> Boyang,
> > > > >
> > > > >> Thanks for the update. This change makes sense to me.
> > > > >
> > > > >> Guozhang
> > > > >
> > > > >> On Tue, Feb 11, 2020 at 11:37 AM Boyang Chen
> > > > >>  wrote:
> > > > >
> > > > >>> Hey there,
> > > > >>>
> > > > >>> we are adding a small change to the KIP-447 public API. The
> > > > >>> default value of
> > > > >>> `transaction.abort.timed.out.transaction.cleanup.interval.ms`
> > > > >>> shall be changed from 1 minute to 10 seconds. The goal 

Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-03-27 Thread Colin McCabe
On Fri, Mar 27, 2020, at 18:29, Colin McCabe wrote:
> On Fri, Mar 27, 2020, at 16:06, Boyang Chen wrote:
> > Hey there,
> > 
> > we would like to address an improvement on the
> > *Producer#sendOffsetsToTransaction(offsets,
> > groupMetadata) *API. Previously we assume the calling of this function
> > would crash the app if broker is not on version 2.5.0 or higher. As Streams
> > side change was rolling out, the disadvantage of this behavior becomes
> > obvious, since on the application level it is hard to detect broker version
> > and do separate call paths for old and new transaction commit protocols.
> 
> Hi Boyang,
> 
> Applications are never supposed to detect broker versions, but they may 
> be required to detect the presence or absence of a feature.  That would 
> happen if a new feature was introduced that couldn't be emulated with 
> some combination of the features that previously existed.
> 
> That was the case for the original EOS work.  For example, 
> initTransactions will throw UnsupportedVersionException if the producer 
> tries to use it, but the broker doesn't support it.
> 
> > 
> > Thus, we are planning to change the behavior from crashing to silently
> > downgrade to the older transactional offset commit protocol by resetting
> > the consumer group metadata fields. Details in this PR
> > .
> > 
> 
> The big question is will using the old client code with the old RPC be 
> less safe than using the old client code with the old RPC?  If it will, 
> there's a strong case to be made that streams should catch the 
> UnsupportedVersionException and fall back to its old behavior.

Sorry, that should read "new client code with old RPC".  Basically, are we 
making things worse for users with older brokers and newer clients?

Colin

> 
> best,
> Colin
> 
> >
> > Let me know if you have any questions, thanks!
> > 
> > On Thu, Mar 26, 2020 at 12:45 PM Matthias J. Sax  wrote:
> > 
> > > One more change for KIP-447.
> > >
> > > Currently, Kafka Streams collects task-level metrics called
> > > "commit-latency-[max|avg]". However, with KIP-447 tasks don't get
> > > committed individually any longer, and thus this metrics do not make
> > > sense going forward.
> > >
> > > Therefore, we propose to remove those metrics in 2.6 release.
> > > Deprecation does not make much sense, as there is just nothing to be
> > > recorded in a useful way any longer.
> > >
> > > I also updated the upgrade path section, as the upgrade path is actually
> > > simpler as described originally.
> > >
> > >
> > > If there are any concerns, please let us know!
> > >
> > >
> > > -Matthias
> > >
> > >
> > > On 3/5/20 12:56 PM, Matthias J. Sax wrote:
> > > > There is one more change to this KIP for the upgrade path of Kafka
> > > > Streams applications:
> > > >
> > > > We cannot detect broker versions reliable, and thus, we need users to
> > > > manually opt-in to the feature. Thus, we need to add a third value for
> > > > configuration parameter `processing.guarantee` that we call
> > > > `exactly_once_beta` -- specifying this config will enable the producer
> > > > per thread design.
> > > >
> > > > I updated the KIP accordingly. Please let us know if there are any
> > > > concerns.
> > > >
> > > >
> > > > -Matthias
> > > >
> > > > On 2/11/20 4:44 PM, Guozhang Wang wrote:
> > > >> Boyang,
> > > >
> > > >> Thanks for the update. This change makes sense to me.
> > > >
> > > >> Guozhang
> > > >
> > > >> On Tue, Feb 11, 2020 at 11:37 AM Boyang Chen
> > > >>  wrote:
> > > >
> > > >>> Hey there,
> > > >>>
> > > >>> we are adding a small change to the KIP-447 public API. The
> > > >>> default value of
> > > >>> `transaction.abort.timed.out.transaction.cleanup.interval.ms`
> > > >>> shall be changed from 1 minute to 10 seconds. The goal here is to
> > > >>> trigger the expired transaction more frequently in order to
> > > >>> reduce the consumer pending offset fetch wait time.
> > > >>>
> > > >>> Let me know if you have further questions, thanks!
> > > >>>
> > > >>>
> > > >>> On Wed, Jan 8, 2020 at 3:44 PM Boyang Chen
> > > >>>  wrote:
> > > >>>
> > >  Thanks Guozhang for another review! I have addressed all the
> > >  javadoc changes for PendingTransactionException in the KIP.
> > >  For
> > > >>> FENCED_INSTANCE_ID
> > >  the only thrown place would be on the new send offsets API,
> > >  which is also addressed.
> > > 
> > >  Thanks Matthias for the vote! As we have 3 binding votes
> > >  (Guozhang,
> > > >>> Jason,
> > >  and Matthias), the KIP is officially accepted and prepared to
> > >  ship in
> > > >>> 2.5.
> > > 
> > >  Still feel free to put more thoughts on either discussion or
> > >  voting
> > > >>> thread
> > >  to refine the KIP!
> > > 
> > > 
> > >  On Wed, Jan 8, 2020 at 3:15 PM Matthias J. Sax
> > >   wrote:
> > > 
> > > > I just re-read the KIP. Overall I am +1 as well.
> > > >
> > > 
> > > 

Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-03-27 Thread Colin McCabe
On Fri, Mar 27, 2020, at 16:06, Boyang Chen wrote:
> Hey there,
> 
> we would like to address an improvement on the
> *Producer#sendOffsetsToTransaction(offsets,
> groupMetadata) *API. Previously we assume the calling of this function
> would crash the app if broker is not on version 2.5.0 or higher. As Streams
> side change was rolling out, the disadvantage of this behavior becomes
> obvious, since on the application level it is hard to detect broker version
> and do separate call paths for old and new transaction commit protocols.

Hi Boyang,

Applications are never supposed to detect broker versions, but they may be 
required to detect the presence or absence of a feature.  That would happen if 
a new feature was introduced that couldn't be emulated with some combination of 
the features that previously existed.

That was the case for the original EOS work.  For example, initTransactions 
will throw UnsupportedVersionException if the producer tries to use it, but the 
broker doesn't support it.

> 
> Thus, we are planning to change the behavior from crashing to silently
> downgrade to the older transactional offset commit protocol by resetting
> the consumer group metadata fields. Details in this PR
> .
> 

The big question is will using the old client code with the old RPC be less 
safe than using the old client code with the old RPC?  If it will, there's a 
strong case to be made that streams should catch the 
UnsupportedVersionException and fall back to its old behavior.
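
For illustration, a minimal hypothetical sketch of such a fallback on the
caller side (whether this is preferable to a silent client-side downgrade is
exactly the question under discussion):

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.errors.UnsupportedVersionException;

    final class FallbackSketch {
        // Try the new group-metadata overload first; if the broker rejects it,
        // fall back to the pre-2.5 overload that only carries the group id.
        static void sendOffsetsWithFallback(KafkaProducer<byte[], byte[]> producer,
                                            Map<TopicPartition, OffsetAndMetadata> offsets,
                                            ConsumerGroupMetadata groupMetadata) {
            try {
                producer.sendOffsetsToTransaction(offsets, groupMetadata);
            } catch (UnsupportedVersionException e) {
                producer.sendOffsetsToTransaction(offsets, groupMetadata.groupId());
            }
        }
    }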

best,
Colin

>
> Let me know if you have any questions, thanks!
> 
> On Thu, Mar 26, 2020 at 12:45 PM Matthias J. Sax  wrote:
> 
> > One more change for KIP-447.
> >
> > Currently, Kafka Streams collects task-level metrics called
> > "commit-latency-[max|avg]". However, with KIP-447 tasks don't get
> > committed individually any longer, and thus this metrics do not make
> > sense going forward.
> >
> > Therefore, we propose to remove those metrics in 2.6 release.
> > Deprecation does not make much sense, as there is just nothing to be
> > recorded in a useful way any longer.
> >
> > I also updated the upgrade path section, as the upgrade path is actually
> > simpler as described originally.
> >
> >
> > If there are any concerns, please let us know!
> >
> >
> > -Matthias
> >
> >
> > On 3/5/20 12:56 PM, Matthias J. Sax wrote:
> > > There is one more change to this KIP for the upgrade path of Kafka
> > > Streams applications:
> > >
> > > We cannot detect broker versions reliable, and thus, we need users to
> > > manually opt-in to the feature. Thus, we need to add a third value for
> > > configuration parameter `processing.guarantee` that we call
> > > `exactly_once_beta` -- specifying this config will enable the producer
> > > per thread design.
> > >
> > > I updated the KIP accordingly. Please let us know if there are any
> > > concerns.
> > >
> > >
> > > -Matthias
> > >
> > > On 2/11/20 4:44 PM, Guozhang Wang wrote:
> > >> Boyang,
> > >
> > >> Thanks for the update. This change makes sense to me.
> > >
> > >> Guozhang
> > >
> > >> On Tue, Feb 11, 2020 at 11:37 AM Boyang Chen
> > >>  wrote:
> > >
> > >>> Hey there,
> > >>>
> > >>> we are adding a small change to the KIP-447 public API. The
> > >>> default value of
> > >>> `transaction.abort.timed.out.transaction.cleanup.interval.ms`
> > >>> shall be changed from 1 minute to 10 seconds. The goal here is to
> > >>> trigger the expired transaction more frequently in order to
> > >>> reduce the consumer pending offset fetch wait time.
> > >>>
> > >>> Let me know if you have further questions, thanks!
> > >>>
> > >>>
> > >>> On Wed, Jan 8, 2020 at 3:44 PM Boyang Chen
> > >>>  wrote:
> > >>>
> >  Thanks Guozhang for another review! I have addressed all the
> >  javadoc changes for PendingTransactionException in the KIP.
> >  For
> > >>> FENCED_INSTANCE_ID
> >  the only thrown place would be on the new send offsets API,
> >  which is also addressed.
> > 
> >  Thanks Matthias for the vote! As we have 3 binding votes
> >  (Guozhang,
> > >>> Jason,
> >  and Matthias), the KIP is officially accepted and prepared to
> >  ship in
> > >>> 2.5.
> > 
> >  Still feel free to put more thoughts on either discussion or
> >  voting
> > >>> thread
> >  to refine the KIP!
> > 
> > 
> >  On Wed, Jan 8, 2020 at 3:15 PM Matthias J. Sax
> >   wrote:
> > 
> > > I just re-read the KIP. Overall I am +1 as well.
> > >
> > 
> > > Some minor comments (also apply to the Google design doc):
> > >
> > > 1) As 2.4 was release, references should be updated to 2.5.
> > >
> > > Addressed
> > 
> > >
> > >
> > >> 2) About the upgrade path, the KIP says:
> > >
> > > 2a)
> > >
> > >> Broker must be upgraded to 2.4 first. This means the
> > > `inter.broker.protocol.version` (IBP) has to be set to the
> > > latest. Any produce request with 

Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-03-27 Thread feyman2009
Bump, can anyone kindly take a look at the updated KIP-571? Thanks!


--
From: feyman2009 
Sent: Monday, March 23, 2020 08:51
To: dev 
Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

Hi, team
I have updated KIP-571 according to our latest discussion results; 
would you mind taking a look? Thanks!

Feyman


--
From: Boyang Chen 
Sent: Thursday, March 19, 2020 13:41
To: dev ; feyman2009 
Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

Thanks for the insight, Feyman. I personally feel adding another admin client 
command is redundant, so I'd pick option 1). The memberInfos struct is internal 
and just used for result reference purposes, so I think it could still work even 
if we overload it with a `removeAll` option, if that makes sense.
Boyang
On Wed, Mar 18, 2020 at 8:51 PM feyman2009  
wrote:
Hi, team
 Before going too far on the KIP update, I would like to hear your opinions 
about how we would change the AdminClient interface; the two alternatives I 
could think of are:
 1) Extend adminClient.removeMembersFromConsumerGroup to support "remove all".
 As Guozhang suggested, we could add some flag param in 
RemoveMembersFromConsumerGroupOptions to indicate the "remove all" logic.
 2) Add a new API like 
adminClient.removeAllMembersFromConsumerGroup(groupId, options)

 I think 1) will be more compact from the API perspective, but looking at 
the code, I found that if we are going to remove all, then 
RemoveMembersFromConsumerGroupResult#memberInfos/memberResult()/all() should be 
changed accordingly, and they do not seem that meaningful under the "remove all" 
scenario.
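
 To make alternative 1) concrete, here is a hypothetical sketch of the
"describe then remove" flow it would encapsulate inside the admin client.
Note that today's MemberToRemove only identifies static members via
group.instance.id, which is exactly what this KIP extends to dynamic members,
so the sketch covers the static-member case only:

    import java.util.Collection;
    import java.util.Collections;
    import java.util.concurrent.ExecutionException;
    import java.util.stream.Collectors;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.ConsumerGroupDescription;
    import org.apache.kafka.clients.admin.MemberToRemove;
    import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

    final class RemoveAllMembersSketch {
        // Hypothetical helper: describe the group, collect its (static) members,
        // then remove them all in a single call.
        static void removeAll(Admin admin, String groupId)
                throws ExecutionException, InterruptedException {
            ConsumerGroupDescription description = admin
                .describeConsumerGroups(Collections.singletonList(groupId))
                .describedGroups()
                .get(groupId)
                .get();

            Collection<MemberToRemove> members = description.members().stream()
                .filter(member -> member.groupInstanceId().isPresent())
                .map(member -> new MemberToRemove(member.groupInstanceId().get()))
                .collect(Collectors.toList());

            admin.removeMembersFromConsumerGroup(
                groupId, new RemoveMembersFromConsumerGroupOptions(members)).all().get();
        }
    }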

 A minor thought about the adminClient.removeMembersFromConsumerGroup API: 
looking at some other deleteXX APIs, like deleteTopics and deleteRecords, their 
results contain only a Map>, which I think is enough to describe the related 
results. Does it make sense to remove memberInfos from 
RemoveMembersFromConsumerGroupResult? This KIP has no dependency on this if we 
choose alternative 2).

 Could you advise? Thanks!

 Feyman


 Sent: Sunday, March 15, 2020 10:11
 To: dev 
 Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

 Hi, all
 Thanks a lot for your feedback!
 According to the discussion, it seems we don't have valid use cases 
for removing specific dynamic members, so I think it makes sense to encapsulate 
the "get and delete" logic in the adminClient. I will update the KIP shortly!

 Thanks!

 Feyman


 --
 From: Boyang Chen 
 Sent: Saturday, March 14, 2020 00:39
 To: dev 
 Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

 Thanks Matthias and Guozhang for the feedback. I'm not worried too much
 about the member.id exposure, as we have done so in a couple of other areas. As
 for the recommended admin client change, I think it makes sense from an
 encapsulation perspective. I'm still a bit hesitant, as we would potentially
 lose the flexibility of removing only a subset of `dynamic members`, but we
 could always come back and address that if some user finds it necessary.

 My short answer would be, LGTM :)

 Boyang

 On Thu, Mar 12, 2020 at 5:26 PM Guozhang Wang  wrote:

 > Hi Matthias,
 >
 > About the AdminClient param API: that's a great point here. I think overall
 > if users want to just "remove all members" they should not need to first
 > get all the member.ids themselves, but instead internally the admin client
 > can first issue a describe-group request to get all the member.ids, and
 > then use them in the next issued leave-group request, all abstracted away
 > from the users. With that in mind, maybe in
 > RemoveMembersFromConsumerGroupOptions we can just introduce an overloaded
 > flag param besides "members" that indicate "remove all"?
 >
 > Guozhang
 >
 > On Thu, Mar 12, 2020 at 2:59 PM Matthias J. Sax  wrote:
 >
 > > Feyman,
 > >
 > > some more comments/questions:
 > >
 > > The description of `LeaveGroupRequest` is clear but it's unclear how
 > > `MemberToRemove` should behave. Which parameter is required? Which is
 > > optional? What is the relationship between both.
 > >
 > > The `LeaveGroupRequest` description clearly states that specifying a
 > > `memberId` is optional if the `groupInstanceId` is provided. If
 > > `MemberToRemove` applies the same pattern, it must be explicitly defined
 > > in the KIP (and explained in the JavaDocs of `MemberToRemove`) because
 > > we cannot expect that an admin-client users knows that internally a
 > > `LeaveGroupRequest` is used nor what the semantics of a
 > > `LeaveGroupRequest` are.
 > >
 > >
 > > About Admin API:
 > >
 > > In general, I am also confused that we allow so specify a `memberId` at
 > > all, because the 

[jira] [Created] (KAFKA-9780) Deprecate commit records without record metadata

2020-03-27 Thread Mario Molina (Jira)
Mario Molina created KAFKA-9780:
---

 Summary: Deprecate commit records without record metadata
 Key: KAFKA-9780
 URL: https://issues.apache.org/jira/browse/KAFKA-9780
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.4.1
Reporter: Mario Molina
Assignee: Mario Molina
 Fix For: 2.5.0, 2.6.0


Since KIP-382 (MirrorMaker 2.0), a new {{commitRecord}} method has been included in 
the {{SourceTask}} class to be called by the worker, adding a new parameter with the 
record metadata. The old {{commitRecord}} method is called from the new one and is 
preserved just for backwards compatibility.

The idea is to deprecate the old method so that we can remove it in a future 
release.
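
For reference, a simplified sketch of the relationship between the two overloads
(signatures follow the current {{SourceTask}}, bodies trimmed for illustration):

    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.connect.source.SourceRecord;

    public abstract class SourceTaskSketch {
        // Old hook, kept for backwards compatibility; candidate for deprecation.
        public void commitRecord(SourceRecord record) throws InterruptedException {
            // no-op by default
        }

        // New hook added for MirrorMaker 2.0 (KIP-382): also passes the metadata of
        // the record as written to Kafka, and delegates to the old hook by default.
        public void commitRecord(SourceRecord record, RecordMetadata metadata)
                throws InterruptedException {
            commitRecord(record);
        }
    }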



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-03-27 Thread Boyang Chen
Hey there,

we would like to address an improvement on the
*Producer#sendOffsetsToTransaction(offsets,
groupMetadata) *API. Previously we assumed that calling this function
would crash the app if the broker is not on version 2.5.0 or higher. As the
Streams-side change was rolling out, the disadvantage of this behavior became
obvious, since on the application level it is hard to detect the broker version
and maintain separate call paths for the old and new transaction commit
protocols.

Thus, we are planning to change the behavior from crashing to silently
downgrading to the older transactional offset commit protocol by resetting
the consumer group metadata fields. Details in this PR
.
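
As a rough illustration of the idea (not the exact mechanics of the PR), the
fallback boils down to sending only the group id when the broker cannot handle
the new fields:

    import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;

    final class DowngradeSketch {
        // If the broker does not support the new TxnOffsetCommit fields, reset the
        // generation id, member id and group instance id by keeping only the group
        // id, which is all the older protocol understands.
        static ConsumerGroupMetadata downgradeIfUnsupported(ConsumerGroupMetadata metadata,
                                                            boolean brokerSupportsNewFields) {
            return brokerSupportsNewFields
                ? metadata
                : new ConsumerGroupMetadata(metadata.groupId());
        }
    }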

Let me know if you have any questions, thanks!

On Thu, Mar 26, 2020 at 12:45 PM Matthias J. Sax  wrote:

> One more change for KIP-447.
>
> Currently, Kafka Streams collects task-level metrics called
> "commit-latency-[max|avg]". However, with KIP-447 tasks don't get
> committed individually any longer, and thus this metrics do not make
> sense going forward.
>
> Therefore, we propose to remove those metrics in 2.6 release.
> Deprecation does not make much sense, as there is just nothing to be
> recorded in a useful way any longer.
>
> I also updated the upgrade path section, as the upgrade path is actually
> simpler as described originally.
>
>
> If there are any concerns, please let us know!
>
>
> -Matthias
>
>
> On 3/5/20 12:56 PM, Matthias J. Sax wrote:
> > There is one more change to this KIP for the upgrade path of Kafka
> > Streams applications:
> >
> > We cannot detect broker versions reliable, and thus, we need users to
> > manually opt-in to the feature. Thus, we need to add a third value for
> > configuration parameter `processing.guarantee` that we call
> > `exactly_once_beta` -- specifying this config will enable the producer
> > per thread design.
> >
> > I updated the KIP accordingly. Please let us know if there are any
> > concerns.
> >
> >
> > -Matthias
> >
> > On 2/11/20 4:44 PM, Guozhang Wang wrote:
> >> Boyang,
> >
> >> Thanks for the update. This change makes sense to me.
> >
> >> Guozhang
> >
> >> On Tue, Feb 11, 2020 at 11:37 AM Boyang Chen
> >>  wrote:
> >
> >>> Hey there,
> >>>
> >>> we are adding a small change to the KIP-447 public API. The
> >>> default value of
> >>> `transaction.abort.timed.out.transaction.cleanup.interval.ms`
> >>> shall be changed from 1 minute to 10 seconds. The goal here is to
> >>> trigger the expired transaction more frequently in order to
> >>> reduce the consumer pending offset fetch wait time.
> >>>
> >>> Let me know if you have further questions, thanks!
> >>>
> >>>
> >>> On Wed, Jan 8, 2020 at 3:44 PM Boyang Chen
> >>>  wrote:
> >>>
>  Thanks Guozhang for another review! I have addressed all the
>  javadoc changes for PendingTransactionException in the KIP.
>  For
> >>> FENCED_INSTANCE_ID
>  the only thrown place would be on the new send offsets API,
>  which is also addressed.
> 
>  Thanks Matthias for the vote! As we have 3 binding votes
>  (Guozhang,
> >>> Jason,
>  and Matthias), the KIP is officially accepted and prepared to
>  ship in
> >>> 2.5.
> 
>  Still feel free to put more thoughts on either discussion or
>  voting
> >>> thread
>  to refine the KIP!
> 
> 
>  On Wed, Jan 8, 2020 at 3:15 PM Matthias J. Sax
>   wrote:
> 
> > I just re-read the KIP. Overall I am +1 as well.
> >
> 
> > Some minor comments (also apply to the Google design doc):
> >
> > 1) As 2.4 was release, references should be updated to 2.5.
> >
> > Addressed
> 
> >
> >
> >> 2) About the upgrade path, the KIP says:
> >
> > 2a)
> >
> >> Broker must be upgraded to 2.4 first. This means the
> > `inter.broker.protocol.version` (IBP) has to be set to the
> > latest. Any produce request with higher version will
> > automatically get fenced
> >>> because
> > of no support.
> >
> > From my understanding, this is not correct? After a broker is
> > updated to the new binaries, it should accept new requests,
> > even if IBP was not bumped yet?
> >
> > Your understanding was correct, after some offline discussion
> > we should
>  not worry about IBP in this case.
> 
> > 2b)
> >
> > About the two rolling bounces for KS apps and the statement
> >
> >> one should never allow task producer and thread producer
> >> under the
> >>> same
> > application group
> >
> > In the second rolling bounce, we might actually mix both (ie,
> > per-task and per-thread producers) but this is fine as
> > explained in the KIP. The only case we cannot allow is, old
> > per-task producers (without consumer generation fencing) to
> > be mixed with per-thread producers (that rely solely on
> > consumer generation fencing).
> >
> > Does this sound correct?
> 

Build failed in Jenkins: kafka-trunk-jdk11 #1297

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9771: Port patch for inter-worker Connect SSL from Jetty 9.4.25


--
[...truncated 2.96 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

[jira] [Created] (KAFKA-9779) Add version 2.5 to streams system tests

2020-03-27 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9779:
--

 Summary: Add version 2.5 to streams system tests
 Key: KAFKA-9779
 URL: https://issues.apache.org/jira/browse/KAFKA-9779
 Project: Kafka
  Issue Type: Improvement
Reporter: Boyang Chen
Assignee: Boyang Chen






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9778) Add validateConnector functionality to the EmbeddedConnectCluster

2020-03-27 Thread Daniel Osvath (Jira)
Daniel Osvath created KAFKA-9778:


 Summary: Add validateConnector functionality to the 
EmbeddedConnectCluster
 Key: KAFKA-9778
 URL: https://issues.apache.org/jira/browse/KAFKA-9778
 Project: Kafka
  Issue Type: Improvement
Reporter: Daniel Osvath


A validate endpoint should be added to enable integration testing of validation 
functionality, including validation success and assertion of specific error 
messages.

This PR adds a method {{validateConnectorConfig}} to the 
{{EmbeddedConnectCluster}} that pings the {{/config/validate}} endpoint with 
the given configurations. [More about the endpoint 
here.|https://docs.confluent.io/current/connect/references/restapi.html#put--connector-plugins-(string-name)-config-validate]

With this addition, the validations for a connector can be tested in a way 
similar to how integration tests currently use the {{configureConnector}} method, 
e.g. {{connect.configureConnector(CONNECTOR_NAME, props);}}. The validation 
call would look like: {{ConfigInfos validateResponse = 
connect.validateConnectorConfig(CONNECTOR_CLASS_NAME, props);}}.
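
As an illustration, a hypothetical sketch of what such a helper would do under
the hood; the helper name and plumbing are assumptions, while the REST path is
the existing Connect validate endpoint:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    final class ValidateConfigSketch {
        // PUT the connector config (as JSON) to the worker's validate endpoint and
        // return the raw response body, which deserializes into ConfigInfos.
        static String validateConnectorConfig(String workerUrl, String connectorClass,
                                              String configJson) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(workerUrl + "/connector-plugins/" + connectorClass + "/config/validate"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(configJson))
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }
    }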



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.5-jdk8 #80

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9771: Port patch for inter-worker Connect SSL from Jetty 
9.4.25


--
[...truncated 2.90 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1296

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: optimize integration test shutdown (#8366)


--
[...truncated 5.99 MB...]

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task 

Build failed in Jenkins: kafka-trunk-jdk8 #4376

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9771: Port patch for inter-worker Connect SSL from Jetty 9.4.25


--
[...truncated 2.94 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain 

[jira] [Created] (KAFKA-9777) Purgatory locking bug can lead to hanging transaction

2020-03-27 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9777:
--

 Summary: Purgatory locking bug can lead to hanging transaction
 Key: KAFKA-9777
 URL: https://issues.apache.org/jira/browse/KAFKA-9777
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.4.1, 2.3.1, 2.2.2, 2.1.1, 2.0.1, 1.1.1
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Once a transaction reaches the `PrepareCommit` or `PrepareAbort` state, the 
transaction coordinator must send markers to all partitions included in the 
transaction. After all markers have been sent, then the transaction transitions 
to the corresponding completed state. Until this transition occurs, no 
additional progress can be made by the producer.

The transaction coordinator uses a purgatory to track completion of the markers 
that need to be sent. Once all markers have been written, then the 
`DelayedTxnMarker` task becomes completable. We depend on its completion in 
order to transition to the completed state.

Related to KAFKA-8334, there is a bug in the locking protocol which is used to 
check completion of the `DelayedTxnMarker` task. The purgatory attempts to 
provide a "happens before" contract for task completion with 
`checkAndComplete`: if a task becomes completable before `checkAndComplete` is 
called, it should be given an opportunity to complete as long as there is 
sufficient time remaining before expiration.

The bug is that the locking protocol expects the operation lock to be exclusive 
to the operation. See here: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/DelayedOperation.scala#L114.
The logic assumes that if the lock cannot be acquired, the other holder of the 
lock must be attempting completion of the same delayed operation. If that is 
not the case, the "happens before" contract is broken and a task may not get 
completed until expiration even if it has been satisfied.
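
To make the hazard concrete, below is a minimal, self-contained sketch of the 
tryLock-based completion pattern described above (illustrative only, not 
Kafka's actual `DelayedOperation` code): if the shared lock happens to be held 
by an unrelated task at the moment of the check, the check is silently skipped 
and the operation can stay uncompleted even though it is already satisfied.

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the locking pattern, not Kafka code.
public class TryLockCompletionSketch {
    // Shared lock that is NOT exclusive to this operation (e.g. also used for loading).
    private final ReentrantLock sharedLock = new ReentrantLock();
    private final AtomicBoolean satisfied = new AtomicBoolean(false);
    private volatile boolean completed = false;

    public void markSatisfied() {
        satisfied.set(true);
    }

    // Analogous to a checkAndComplete call: only inspects completion if the lock is free.
    public boolean maybeTryComplete() {
        if (!sharedLock.tryLock()) {
            // Assumes the current lock holder will re-check completion. If the holder
            // is an unrelated task (e.g. partition loading), nobody ever does, and
            // the operation lingers until expiration.
            return false;
        }
        try {
            if (satisfied.get()) {
                completed = true;
            }
            return completed;
        } finally {
            sharedLock.unlock();
        }
    }
}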

In the case of `DelayedTxnMarker`, the lock in use is the read side of a 
read-write lock which is used for partition loading: 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/coordinator/transaction/TransactionMarkerChannelManager.scala#L264.
If the lock cannot be acquired, it is being held in order to complete some 
loading operation, which will certainly not attempt completion of the delayed 
operation. If this happens to occur on the last call to `checkAndComplete` 
after all markers have been written, the transition to the completed state 
will never occur.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4375

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: optimize integration test shutdown (#8366)


--
[...truncated 2.95 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord 

[jira] [Created] (KAFKA-9776) Producer should automatically downgrade CommitTxRequest

2020-03-27 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-9776:
--

 Summary: Producer should automatically downgrade CommitTxRequest
 Key: KAFKA-9776
 URL: https://issues.apache.org/jira/browse/KAFKA-9776
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 2.5.0
Reporter: Matthias J. Sax


When using transactions with a 2.5 producer against 2.4 (or older) brokers, it 
is not possible to call `producer.commitTransaction(..., 
ConsumerGroupMetadata)`; only the old API `producer.commitTransaction(..., 
String applicationId)` is supported.

This implies that a developer either needs to know the broker version when 
writing an application, or has to write additional code that calls one or the 
other API depending on the broker version (plus code to figure out the broker 
version in the first place).

We should change the producer to automatically downgrade to the older 
CommitTxRequest if `commitTransaction(..., ConsumerGroupMetadata)` is used 
against older brokers to avoid an `UnsupportedVersionException`.
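
To illustrate the branching that the current behavior pushes onto applications, 
here is a minimal sketch of the fallback an application (or Streams) would 
otherwise have to carry itself. It uses the `sendOffsetsToTransaction` overloads 
of the 2.5 producer as the new/old API pair; the helper class and method names 
are illustrative, not Kafka code, and it glosses over whether the exception 
surfaces on this call or later and whether the transaction must be aborted 
first - which is exactly the kind of detail an automatic downgrade inside the 
producer would hide.

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.UnsupportedVersionException;

// Hypothetical application-side helper: try the new (KIP-447) overload first and
// fall back to the old group-id based overload if the broker is too old.
public class OffsetCommitFallback {
    public static void sendOffsets(KafkaProducer<byte[], byte[]> producer,
                                   Map<TopicPartition, OffsetAndMetadata> offsets,
                                   ConsumerGroupMetadata groupMetadata) {
        try {
            producer.sendOffsetsToTransaction(offsets, groupMetadata);
        } catch (UnsupportedVersionException e) {
            // Older broker: retry with the pre-2.5 overload keyed only by the group id.
            producer.sendOffsetsToTransaction(offsets, groupMetadata.groupId());
        }
    }
}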



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-574: CLI Dynamic Configuration with file input

2020-03-27 Thread Kamal Chandraprakash
+1 (non-binding). Thanks for the KIP!

On Fri, Mar 27, 2020 at 9:16 PM Gwen Shapira  wrote:

> Vote is still good. We can always add things later if needed.
>
> On Fri, Mar 27, 2020, 8:15 AM Aneel Nazareth  wrote:
>
> > Hi Colin and Gwen,
> >
> > I wanted to double-check that your +1 votes were still good, since the
> > proposal has been simplified (removing the --delete-config-file option
> > and taking out the support for reading from STDIN).
> >
> > Thanks!
> >
> > On Fri, Mar 27, 2020 at 10:01 AM Rajini Sivaram  >
> > wrote:
> > >
> > > +1 (binding)
> > >
> > > Thanks for the KIP, Aneel!
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Wed, Mar 25, 2020 at 5:06 AM Gwen Shapira 
> wrote:
> > >
> > > > +1 (binding), thank you
> > > >
> > > > On Wed, Mar 18, 2020, 2:26 PM Colin McCabe 
> wrote:
> > > >
> > > > > +1 (binding).
> > > > >
> > > > > Thanks, Aneel.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > On Wed, Mar 18, 2020, at 00:04, David Jacot wrote:
> > > > > > +1 (non-binding), thanks for the KIP!
> > > > > >
> > > > > > David
> > > > > >
> > > > > > On Mon, Mar 16, 2020 at 4:06 PM Aneel Nazareth <
> an...@confluent.io
> > >
> > > > > wrote:
> > > > > >
> > > > > > > Hello all,
> > > > > > >
> > > > > > > Thanks to the folks who have given feedback. I've incorporated
> > the
> > > > > > > suggestions, and think that this is now ready for a vote:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > > > >
> > > > > > > Thank you,
> > > > > > > Aneel
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
>


Re: [VOTE] KIP-580: Exponential Backoff for Kafka Clients

2020-03-27 Thread Sanjana Kaundinya
Thank you to all that voted. I have 4 binding votes and 4 non-binding votes. I 
will begin work on this KIP soon.

Thanks,
Sanjana
On Mar 26, 2020, 3:03 PM -0700, Konstantine Karantasis 
, wrote:
> Thank you Sanjana!
>
> +1 (binding) from me too.
>
> Konstantine
>
>
> On Wed, Mar 25, 2020 at 1:57 PM Sanjana Kaundinya 
> wrote:
>
> > Hi Konstantine,
> >
> > Thanks for the feedback, I have addressed it on the [DISCUSS] thread and
> > will update the KIP shortly.
> >
> > Thanks,
> > Sanjana
> > On Mar 25, 2020, 10:52 AM -0700, Konstantine Karantasis <
> > konstant...@confluent.io>, wrote:
> > > Hi Sanjana.
> > > Thanks for the KIP! Seems quite useful not to overwhelm the brokers with
> > > the described requests from clients.
> > >
> > > You have the votes already, and I'm also in favor overall, but I've raised
> > > a couple of questions (sorry for the delay) regarding Connect, which also
> > > uses retry.backoff.ms but is currently not mentioned in the KIP, as well
> > > as a question around how we expect the new setting to work with rebalances
> > > in clients that inherit from the AbstractCoordinator (Consumer and Connect
> > > at a minimum).
> > >
> > > Maybe it's worth clarifying these points in the KIP, or the mailing list
> > > thread in case I missed something w/r/t the intent of the changes.
> > >
> > > Best,
> > > Konstantine
> > >
> > >
> > > On Tue, Mar 24, 2020 at 9:42 PM David Jacot  wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > Thanks for the KIP, great improvement!
> > > >
> > > > Le mer. 25 mars 2020 à 04:44, Gwen Shapira  a
> > écrit :
> > > >
> > > > > +1 (binding) - thank you
> > > > >
> > > > > On Mon, Mar 23, 2020, 10:50 AM Sanjana Kaundinya <
> > skaundi...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi Everyone,
> > > > > >
> > > > > > I’d like to start a vote for KIP-580: Exponential Backoff for Kafka
> > > > > > Clients. The link to the KIP can be found here:
> > > > > >
> > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-580%3A+Exponential+Backoff+for+Kafka+Clients
> > > > > > .
> > > > > >
> > > > > > Thanks,
> > > > > > Sanjana
> > > > > >
> > > > > >
> > > > >
> > > >
> >


[jira] [Resolved] (KAFKA-9771) Inter-worker SSL is broken for keystores with multiple certificates

2020-03-27 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9771.
---
Resolution: Fixed

The fix was merged to `trunk` and the `2.5` release branch in time for the 
release of `2.5.0`.

> Inter-worker SSL is broken for keystores with multiple certificates
> ---
>
> Key: KAFKA-9771
> URL: https://issues.apache.org/jira/browse/KAFKA-9771
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Blocker
>
> The recent bump in Jetty version causes inter-worker communication to fail in 
> Connect when SSL is enabled and the keystore for the worker contains multiple 
> certificates (which it might, in the case that SNI is enabled and the 
> worker's REST interface is bound to multiple domain names). This is caused by 
> [changes introduced in Jetty 
> 9.4.23|https://github.com/eclipse/jetty.project/pull/4085], which are later 
> [fixed in Jetty 9.4.25|https://github.com/eclipse/jetty.project/pull/4404].
> We recently tried and failed to [upgrade to Jetty 
> 9.4.25|https://github.com/apache/kafka/pull/8183], so upgrading the Jetty 
> version to fix this issue isn't a viable option. Additionally, the [earliest 
> clean version of Jetty|https://www.eclipse.org/jetty/security-reports.html] 
> (at the time of writing) with regards to CVEs is 9.4.24, so reverting to a 
> pre-9.4.23 version is also not a viable option.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-574: CLI Dynamic Configuration with file input

2020-03-27 Thread Gwen Shapira
Vote is still good. We can always add things later if needed.

On Fri, Mar 27, 2020, 8:15 AM Aneel Nazareth  wrote:

> Hi Colin and Gwen,
>
> I wanted to double-check that your +1 votes were still good, since the
> proposal has been simplified (removing the --delete-config-file option
> and taking out the support for reading from STDIN).
>
> Thanks!
>
> On Fri, Mar 27, 2020 at 10:01 AM Rajini Sivaram 
> wrote:
> >
> > +1 (binding)
> >
> > Thanks for the KIP, Aneel!
> >
> > Regards,
> >
> > Rajini
> >
> > On Wed, Mar 25, 2020 at 5:06 AM Gwen Shapira  wrote:
> >
> > > +1 (binding), thank you
> > >
> > > On Wed, Mar 18, 2020, 2:26 PM Colin McCabe  wrote:
> > >
> > > > +1 (binding).
> > > >
> > > > Thanks, Aneel.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > > On Wed, Mar 18, 2020, at 00:04, David Jacot wrote:
> > > > > +1 (non-binding), thanks for the KIP!
> > > > >
> > > > > David
> > > > >
> > > > > On Mon, Mar 16, 2020 at 4:06 PM Aneel Nazareth  >
> > > > wrote:
> > > > >
> > > > > > Hello all,
> > > > > >
> > > > > > Thanks to the folks who have given feedback. I've incorporated
> the
> > > > > > suggestions, and think that this is now ready for a vote:
> > > > > >
> > > > > >
> > > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > > >
> > > > > > Thank you,
> > > > > > Aneel
> > > > > >
> > > > >
> > > >
> > >
>


Re: [VOTE] KIP-574: CLI Dynamic Configuration with file input

2020-03-27 Thread Aneel Nazareth
Hi Colin and Gwen,

I wanted to double-check that your +1 votes were still good, since the
proposal has been simplified (removing the --delete-config-file option
and taking out the support for reading from STDIN).

Thanks!

On Fri, Mar 27, 2020 at 10:01 AM Rajini Sivaram  wrote:
>
> +1 (binding)
>
> Thanks for the KIP, Aneel!
>
> Regards,
>
> Rajini
>
> On Wed, Mar 25, 2020 at 5:06 AM Gwen Shapira  wrote:
>
> > +1 (binding), thank you
> >
> > On Wed, Mar 18, 2020, 2:26 PM Colin McCabe  wrote:
> >
> > > +1 (binding).
> > >
> > > Thanks, Aneel.
> > >
> > > best,
> > > Colin
> > >
> > > On Wed, Mar 18, 2020, at 00:04, David Jacot wrote:
> > > > +1 (non-binding), thanks for the KIP!
> > > >
> > > > David
> > > >
> > > > On Mon, Mar 16, 2020 at 4:06 PM Aneel Nazareth 
> > > wrote:
> > > >
> > > > > Hello all,
> > > > >
> > > > > Thanks to the folks who have given feedback. I've incorporated the
> > > > > suggestions, and think that this is now ready for a vote:
> > > > >
> > > > >
> > > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > >
> > > > > Thank you,
> > > > > Aneel
> > > > >
> > > >
> > >
> >


Re: [VOTE] KIP-574: CLI Dynamic Configuration with file input

2020-03-27 Thread Rajini Sivaram
+1 (binding)

Thanks for the KIP, Aneel!

Regards,

Rajini

On Wed, Mar 25, 2020 at 5:06 AM Gwen Shapira  wrote:

> +1 (binding), thank you
>
> On Wed, Mar 18, 2020, 2:26 PM Colin McCabe  wrote:
>
> > +1 (binding).
> >
> > Thanks, Aneel.
> >
> > best,
> > Colin
> >
> > On Wed, Mar 18, 2020, at 00:04, David Jacot wrote:
> > > +1 (non-binding), thanks for the KIP!
> > >
> > > David
> > >
> > > On Mon, Mar 16, 2020 at 4:06 PM Aneel Nazareth 
> > wrote:
> > >
> > > > Hello all,
> > > >
> > > > Thanks to the folks who have given feedback. I've incorporated the
> > > > suggestions, and think that this is now ready for a vote:
> > > >
> > > >
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > >
> > > > Thank you,
> > > > Aneel
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-27 Thread Aneel Nazareth
Update: I have simplified the KIP down to just adding the single new
--add-config-file option. Thanks for your input, everyone!

On Thu, Mar 26, 2020 at 10:13 AM Aneel Nazareth  wrote:
>
> Hi Kamal,
>
> Thanks for taking a look at this KIP.
>
> Unfortunately the user actually can't pass the arguments on the
> command line using the existing --add-config option if the values are
> complex structures that contain commas. --add-config assumes that
> commas separate distinct configuration properties. There's a
> workaround using square brackets ("[a,b,c]") for simple lists, but it
> doesn't work for things like nested lists or JSON values.
>
> The motivation for allowing STDIN as well as files is to enable
> grep/pipe workflows in scripts without creating a temporary file. I
> don't know if such workflows will end up being common, and hopefully
> someone with a complex enough use case to require it would also be
> familiar with techniques for securely creating and cleaning up
> temporary files.
>
> I'm okay with excluding the option to allow STDIN in the name of
> consistency, if the consensus thinks that's wise. Anyone else have
> opinions on this?
>
> On Thu, Mar 26, 2020 at 9:02 AM Kamal Chandraprakash
>  wrote:
> >
> > Hi Colin,
> >
> > We should not support STDIN to maintain uniformity across scripts. If the
> > user wants to pass the arguments on the command line,
> > they can always use the existing --add-config option.
> >
> >
> >
> >
> > On Thu, Mar 26, 2020 at 7:20 PM David Jacot  wrote:
> >
> > > Rajini has made a good point. I don't feel strongly either way, but if
> > > people are confused by this, it is probably better without it.
> > >
> > > Best,
> > > David
> > >
> > > On Thu, Mar 26, 2020 at 7:23 AM Colin McCabe  wrote:
> > >
> > > > Hi Kamal,
> > > >
> > > > Are you suggesting that we not support STDIN here?  I have mixed
> > > feelings.
> > > >
> > > > I think the ideal solution would be to support "-" in these tools
> > > whenever
> > > > a file argument was expected.  But that would be a bigger change than
> > > what
> > > > we're talking about here.  Maybe you are right and we should keep it
> > > simple
> > > > for now.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > > On Wed, Mar 25, 2020, at 01:24, Kamal Chandraprakash wrote:
> > > > > STDIN wasn't standard practice in other scripts like
> > > > > kafka-console-consumer.sh, kafka-console-producer.sh and kafka-acls.sh
> > > > > in which the props file is accepted via consumer.config /
> > > > producer.config /
> > > > > command-config parameter.
> > > > >
> > > > > Shouldn't we maintain uniformity across scripts?
> > > > >
> > > > > On Mon, Mar 16, 2020 at 4:13 PM David Jacot 
> > > wrote:
> > > > >
> > > > > > Hi Aneel,
> > > > > >
> > > > > > Thanks for the updated KIP. I have made a second pass over it and 
> > > > > > the
> > > > > > KIP looks good to me.
> > > > > >
> > > > > > Best,
> > > > > > David
> > > > > >
> > > > > > On Tue, Mar 10, 2020 at 9:39 PM Aneel Nazareth 
> > > > wrote:
> > > > > >
> > > > > > > After reading a bit more about it in the Kubernetes case, I think
> > > > it's
> > > > > > > reasonable to do this and be explicit that we're ignoring the
> > > value,
> > > > > > > just deleting all keys that appear in the file.
> > > > > > >
> > > > > > > I've updated the KIP wiki page to reflect that:
> > > > > > >
> > > > > > >
> > > > > >
> > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input
> > > > > > >
> > > > > > > And updated my sample PR:
> > > > > > > https://github.com/apache/kafka/pull/8184
> > > > > > >
> > > > > > > If there are no further comments, I'll request a vote in a few
> > > days.
> > > > > > >
> > > > > > > Thanks for the feedback!
> > > > > > >
> > > > > > > On Mon, Mar 9, 2020 at 1:24 PM Aneel Nazareth 
> > > > > > wrote:
> > > > > > > >
> > > > > > > > Hi David,
> > > > > > > >
> > > > > > > > Is the expected behavior that the keys are deleted without
> > > > checking the
> > > > > > > values?
> > > > > > > >
> > > > > > > > Let's say I had this file new.properties:
> > > > > > > > a=1
> > > > > > > > b=2
> > > > > > > >
> > > > > > > > And ran:
> > > > > > > >
> > > > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > > > > >   --entity-type brokers --entity-default \
> > > > > > > >   --alter --add-config-file new.properties
> > > > > > > >
> > > > > > > > It seems clear what should happen if I run this immediately:
> > > > > > > >
> > > > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > > > > >   --entity-type brokers --entity-default \
> > > > > > > >   --alter --delete-config-file new.properties
> > > > > > > >
> > > > > > > > (Namely that both a and b would now have no values in the 
> > > > > > > > config)
> > > > > > > >
> > > > > > > > But what if this were run in-between:
> > > > > > > >
> > > > > > > > bin/kafka-configs --bootstrap-server localhost:9092 \
> > > > 

TLS failures and map I/O faults

2020-03-27 Thread Alexandre Dupriez
Dear community,

I recently faced an unexpected type of failure in the middle of an
incident related to the exhaustion of memory-map handles on a Kafka
broker.

The use case is as follows - a broker, not overloaded, manages enough
indexes to reach the per-process limit on mmap count. This leads to
file memory-mapping failures at broker start-up. It was eventually
mitigated by raising that limit or reducing the number of files to mmap.

But before I could mitigate the problem, I kept trying to restart the
broker and faced the same failure every time - except once, when the map
I/O failures disappeared and instead every TLS connection attempt
started to fail with the following exception:

INFO Failed to create channel due to
(org.apache.kafka.common.network.SslChannelBuilder)
java.lang.IllegalArgumentException: Cannot support
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 with currently installed
providers
at sun.security.ssl.CipherSuiteList.<init>(CipherSuiteList.java:81)
at 
sun.security.ssl.SSLEngineImpl.setEnabledCipherSuites(SSLEngineImpl.java:2027)
at 
org.apache.kafka.common.security.ssl.SslFactory.createSslEngine(SslFactory.java:278)
...
at java.lang.Thread.run(Thread.java:748)

However, there was absolutely no change to the certificates, truststore,
or keystore files on the host, and neither the application binaries nor
the JRE used to run Kafka were changed. At the subsequent restart, this
particular type of failure disappeared and the map I/O failures resumed.
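
(As a side note: one quick way to sanity-check whether the running JRE's
installed providers actually offer the cipher suite from the error message is a
small standalone snippet like the following - a diagnostic sketch only, not
related to Kafka's own SSL code path.)

import java.util.Arrays;
import javax.net.ssl.SSLContext;

// Diagnostic sketch: does the default SSLContext know the cipher suite at all?
public class CipherSuiteCheck {
    public static void main(String[] args) throws Exception {
        String suite = "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384";
        String[] supported =
            SSLContext.getDefault().getSupportedSSLParameters().getCipherSuites();
        System.out.println(suite + " supported: "
            + Arrays.asList(supported).contains(suite));
    }
}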

I cannot work out the origin of these failures, nor whether they are
rooted in the same kind of (map or regular) I/O faults as the
surrounding failures.

Has anyone encountered this scenario in the past?
How strongly would you estimate the map I/O failures and this one are
correlated?

Many thanks,

Alexandre


[jira] [Created] (KAFKA-9775) IllegalFormatConversionException from kafka-consumer-perf-test.sh

2020-03-27 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9775:
--

 Summary: IllegalFormatConversionException from 
kafka-consumer-perf-test.sh
 Key: KAFKA-9775
 URL: https://issues.apache.org/jira/browse/KAFKA-9775
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Tom Bentley
Assignee: Tom Bentley


Exception in thread "main" java.util.IllegalFormatConversionException: f != 
java.lang.Integer
at 
java.base/java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4426)
at 
java.base/java.util.Formatter$FormatSpecifier.printFloat(Formatter.java:2951)
at 
java.base/java.util.Formatter$FormatSpecifier.print(Formatter.java:2898)
at java.base/java.util.Formatter.format(Formatter.java:2673)
at java.base/java.util.Formatter.format(Formatter.java:2609)
at java.base/java.lang.String.format(String.java:2897)
at scala.collection.immutable.StringLike.format(StringLike.scala:354)
at scala.collection.immutable.StringLike.format$(StringLike.scala:353)
at scala.collection.immutable.StringOps.format(StringOps.scala:33)
at kafka.utils.ToolsUtils$.$anonfun$printMetrics$3(ToolsUtils.scala:60)
at 
kafka.utils.ToolsUtils$.$anonfun$printMetrics$3$adapted(ToolsUtils.scala:58)
at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.utils.ToolsUtils$.printMetrics(ToolsUtils.scala:58)
at kafka.tools.ConsumerPerformance$.main(ConsumerPerformance.scala:82)
at kafka.tools.ConsumerPerformance.main(ConsumerPerformance.scala)
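
For reference, this exception class is easy to reproduce outside the tool: it 
occurs whenever a %f conversion is applied to an Integer rather than a 
floating-point value. A minimal repro sketch (illustrative only, not the tool's 
actual code):

// Throws java.util.IllegalFormatConversionException: f != java.lang.Integer
public class FormatRepro {
    public static void main(String[] args) {
        Object metricValue = Integer.valueOf(42); // e.g. a metric that happens to be integral
        System.out.println(String.format("%20.3f", metricValue));
    }
}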




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9774) Create official Docker image for Kafka Connect

2020-03-27 Thread Jordan Moore (Jira)
Jordan Moore created KAFKA-9774:
---

 Summary: Create official Docker image for Kafka Connect
 Key: KAFKA-9774
 URL: https://issues.apache.org/jira/browse/KAFKA-9774
 Project: Kafka
  Issue Type: Task
  Components: build, KafkaConnect, packaging
Affects Versions: 2.4.1
Reporter: Jordan Moore
 Attachments: image-2020-03-27-05-04-46-792.png, 
image-2020-03-27-05-05-59-024.png

This is a ticket for creating an *official* apache/kafka-connect Docker image. 

Does this need a KIP? I don't think so; this would be a new feature, not an 
API change.

Why is this needed?
 # Kafka Connect is stateless. (I believe this is why no official Kafka image 
has been created?)
 # It scales much more easily with Docker and orchestrators, and it operates 
much like any other serverless / "microservice" web application.
 # People struggle with deploying it because it is packaged _with Kafka_, which 
leads some to believe it needs to _*run* with Kafka_ on the same machine.

I think there is a separate ticket for creating an official Docker image for 
Kafka itself, but clearly none exists. I reached out to Confluent about this, 
but have heard nothing yet.

!image-2020-03-27-05-05-59-024.png|width=740,height=196!

 

ZooKeeper already has one, btw:
!image-2020-03-27-05-04-46-792.png|width=739,height=288!

*References*: 

[Docs for Official Images|https://docs.docker.com/docker-hub/official_images/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9773) Option combination "[bootstrap-server],[config]" can't be used with option "[alter]"

2020-03-27 Thread startjava (Jira)
startjava created KAFKA-9773:


 Summary: Option combination "[bootstrap-server],[config]" can't be 
used with option "[alter]"
 Key: KAFKA-9773
 URL: https://issues.apache.org/jira/browse/KAFKA-9773
 Project: Kafka
  Issue Type: Task
Reporter: startjava


ghy@ghy-VirtualBox:~/T/k/bin$ ./kafka-topics.sh --topic my3 --bootstrap-server 
localhost:9081,localhost:9082,localhost:9083 --alter --config 
max.message.bytes=20480
Option combination "[bootstrap-server],[config]" can't be used with option 
"[alter]"

 

I am using Kafka 2.4.1 and the command above produces this error.

How can I avoid this error?

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.4-jdk8 #178

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9707: Fix InsertField.Key should apply to keys of tombstone


--
[...truncated 7.71 MB...]
org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldNotAddStatisticsToInjectedMetricsRecorderWhenUserProvidesStatistics 
STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldNotAddStatisticsToInjectedMetricsRecorderWhenUserProvidesStatistics PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > shouldPutAll STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > shouldPutAll PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldRemoveStatisticsFromInjectedMetricsRecorderOnCloseWhenRecordingLevelIsDebug
 STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldRemoveStatisticsFromInjectedMetricsRecorderOnCloseWhenRecordingLevelIsDebug
 PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldNotAddStatisticsToInjectedMetricsRecorderWhenRecordingLevelIsInfo STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldNotAddStatisticsToInjectedMetricsRecorderWhenRecordingLevelIsInfo PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > shouldRestoreAll 
STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > shouldRestoreAll 
PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldAddStatisticsToInjectedMetricsRecorderWhenRecordingLevelIsDebug STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldAddStatisticsToInjectedMetricsRecorderWhenRecordingLevelIsDebug PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldThrowProcessorStateExceptionOnPutDeletedDir STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldThrowProcessorStateExceptionOnPutDeletedDir PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldPutOnlyIfAbsentValue STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldPutOnlyIfAbsentValue PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldNotRemoveStatisticsFromInjectedMetricsRecorderOnCloseWhenRecordingLevelIsInfo
 STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldNotRemoveStatisticsFromInjectedMetricsRecorderOnCloseWhenRecordingLevelIsInfo
 PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldThrowNullPointerExceptionOnDelete STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldThrowNullPointerExceptionOnDelete PASSED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldThrowNullPointerExceptionOnNullGet STARTED

org.apache.kafka.streams.state.internals.RocksDBStoreTest > 
shouldThrowNullPointerExceptionOnNullGet PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldNotSetFlushListenerOnWrappedNoneCachingStore STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldNotSetFlushListenerOnWrappedNoneCachingStore PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldNotThrowNullPointerExceptionIfFetchReturnsNull STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldNotThrowNullPointerExceptionIfFetchReturnsNull PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordFetchLatency STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordFetchLatency PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordFetchRangeLatency STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordFetchRangeLatency PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldCloseUnderlyingStore STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldCloseUnderlyingStore PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordFlushLatency STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordFlushLatency PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordPutLatency STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordPutLatency PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > testMetrics 
STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > testMetrics 
PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordRestoreLatencyOnInit STARTED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldRecordRestoreLatencyOnInit PASSED

org.apache.kafka.streams.state.internals.MeteredWindowStoreTest > 
shouldSetFlushListenerOnWrappedCachingStore STARTED


Build failed in Jenkins: kafka-trunk-jdk11 #1295

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9756: Process more than one record of one task at a time (#8358)


--
[...truncated 2.96 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #4374

2020-03-27 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9756: Process more than one record of one task at a time (#8358)


--
[...truncated 2.95 MB...]
org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
>